Hi all,
I am a researcher in the field of acoustic (ultrasonic) signal processing. Normally we use dedicated hardware where the transmit and receive processes are synchronized. Now I want to use a multi-channel sound card (in my case an RME Multiface card) to perform, for example, simple beamforming. Typically, one transmits pulses on an array of transducers (speakers), where each pulse is delayed by a certain amount to create focusing or beamsteering. Knowing the sound speed in the propagation medium, one can calculate the two-way time-of-flight to various points and then, by beamforming the received signals, obtain an image of the objects in front of the acoustic array.
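To make the delay calculation concrete, here is a small sketch of the per-element transmit delays for steering a linear array (the element count, pitch, angle, and sound speed below are just example values, not my actual setup):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    const int    n_elements = 8;              /* number of array elements (example) */
    const double pitch = 0.01;                /* element spacing [m] (example) */
    const double theta = 20.0 * M_PI / 180.0; /* steering angle [rad] (example) */
    const double c = 343.0;                   /* sound speed in air [m/s] */

    /* Linear delay profile across the array: delay_n = n * pitch * sin(theta) / c */
    for (int n = 0; n < n_elements; n++) {
        double delay = n * pitch * sin(theta) / c;
        printf("element %d: delay = %.1f us\n", n, delay * 1e6);
    }
    return 0;
}

These delays are only meaningful if the offset between the transmit and receive streams is constant, which brings me to my question.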
Now to my problem: in order to do the processing, one needs to know the delay in the system from transmit to receive. I have looked at the ALSA library API docs and some code examples (pcm.c, aplay, etc.), and it looks like the playback and capture processes are independent in ALSA (i.e., there is no write-and-read-at-the-same-time function). The latency inherent in the sound card isn't much of a problem if it is known. It is, however, a problem if the latency from transmit to receive is not constant from measurement to measurement. I have written some code (for Octave) that uses MMAP and polling, but I don't see how to synchronize the write and read processes (they use two different poll fds). What is the best way to do this? Is it possible at all?
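For reference, the skeleton of what I am doing looks roughly like this (shown in plain C rather than my Octave wrapper; device name and parameters are placeholders, and error handling is omitted for brevity):

#include <alsa/asoundlib.h>
#include <poll.h>

int main(void)
{
    snd_pcm_t *play, *cap;

    /* Playback and capture are two independent PCM handles -- nothing
       here couples their start times. */
    snd_pcm_open(&play, "hw:0", SND_PCM_STREAM_PLAYBACK, SND_PCM_NONBLOCK);
    snd_pcm_open(&cap,  "hw:0", SND_PCM_STREAM_CAPTURE,  SND_PCM_NONBLOCK);

    snd_pcm_set_params(play, SND_PCM_FORMAT_S32_LE,
                       SND_PCM_ACCESS_MMAP_INTERLEAVED, 2, 44100, 0, 100000);
    snd_pcm_set_params(cap,  SND_PCM_FORMAT_S32_LE,
                       SND_PCM_ACCESS_MMAP_INTERLEAVED, 2, 44100, 0, 100000);

    /* Each handle exposes its own poll descriptors -- this is where I
       lose the write/read synchronization between the two streams. */
    int nplay = snd_pcm_poll_descriptors_count(play);
    int ncap  = snd_pcm_poll_descriptors_count(cap);
    struct pollfd pfds[nplay + ncap];
    snd_pcm_poll_descriptors(play, pfds, nplay);
    snd_pcm_poll_descriptors(cap,  pfds + nplay, ncap);

    /* ... mmap transfer + poll(pfds, nplay + ncap, -1) loop goes here ... */

    snd_pcm_close(play);
    snd_pcm_close(cap);
    return 0;
}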
Regards
/Fredrik