[alsa-devel] Some questions related to ALSA based duplex audio processing

public-hk public-hk at ind.rwth-aachen.de
Sun Jan 29 20:19:45 CET 2012


I have just started to develop a duplex audio processing application 
based on ALSA.
My development goals are - of course - maximum stability as well as 
the lowest possible delay (latency).

So, here is what I did on my Ubuntu laptop:

1) I set up the device/soundcard "hw:0,0" for processing at a sample 
rate of 44.1 kHz with a buffer size of 128 frames.
2) I created one input PCM device handle and one output PCM device handle.
3) I use the function "snd_async_add_pcm_handler" to install a callback 
function (ASYNC mode), one for input and one for output.
4) I use "snd_pcm_link" to synchronize both PCM handles.
5) I use mmap'ed access to the audio buffers.
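To make the steps above concrete, here is a condensed sketch of my setup 
code (error handling abbreviated; the hw-params values are the ones from 
step 1, and the callback bodies are still empty stubs):

```c
#include <alsa/asoundlib.h>

/* Callback stubs: the actual mmap'ed read/write goes here later. */
static void capture_cb(snd_async_handler_t *h)  { (void)h; }
static void playback_cb(snd_async_handler_t *h) { (void)h; }

/* 44.1 kHz, 128-frame period, interleaved S16, mmap access */
static int configure(snd_pcm_t *pcm)
{
    snd_pcm_hw_params_t *hw;
    snd_pcm_uframes_t period = 128;
    unsigned int rate = 44100;
    int dir = 0, err;

    snd_pcm_hw_params_alloca(&hw);
    if ((err = snd_pcm_hw_params_any(pcm, hw)) < 0) return err;
    if ((err = snd_pcm_hw_params_set_access(pcm, hw,
                   SND_PCM_ACCESS_MMAP_INTERLEAVED)) < 0) return err;
    if ((err = snd_pcm_hw_params_set_format(pcm, hw,
                   SND_PCM_FORMAT_S16_LE)) < 0) return err;
    if ((err = snd_pcm_hw_params_set_channels(pcm, hw, 2)) < 0) return err;
    if ((err = snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, &dir)) < 0)
        return err;
    if ((err = snd_pcm_hw_params_set_period_size_near(pcm, hw, &period,
                   &dir)) < 0) return err;
    return snd_pcm_hw_params(pcm, hw);
}

int setup_duplex(snd_pcm_t **cap, snd_pcm_t **play)
{
    snd_async_handler_t *ch, *ph;
    int err;

    /* 2) one capture and one playback handle on the same card */
    if ((err = snd_pcm_open(cap, "hw:0,0",
                   SND_PCM_STREAM_CAPTURE, 0)) < 0) return err;
    if ((err = snd_pcm_open(play, "hw:0,0",
                   SND_PCM_STREAM_PLAYBACK, 0)) < 0) return err;

    /* 1) + 5) hw params including mmap access, on both handles */
    if ((err = configure(*cap)) < 0 || (err = configure(*play)) < 0)
        return err;

    /* 3) async callbacks, one per direction */
    if ((err = snd_async_add_pcm_handler(&ch, *cap, capture_cb,
                   NULL)) < 0) return err;
    if ((err = snd_async_add_pcm_handler(&ph, *play, playback_cb,
                   NULL)) < 0) return err;

    /* 4) link the handles so state changes apply to both */
    if ((err = snd_pcm_link(*cap, *play)) < 0) return err;

    return snd_pcm_prepare(*play); /* prepares both, since linked */
}
```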

My motivation for working this way is to maximize control over the 
processing behavior and to minimize the latency. For example, I have 
understood from the docs that the read/write functions do little more 
than access the memory-mapped areas; consequently, I implement this 
access myself to have more control over it.
And by using the async callbacks, I do not have to deal with blocking or 
non-blocking read functions or polling-related issues.
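For reference, the direct mmap access I implement follows the usual 
begin/commit cycle, roughly like this (a sketch assuming interleaved S16 
stereo; it writes silence where the real processing would go):

```c
#include <alsa/asoundlib.h>
#include <string.h>

/* Fill up to `frames` frames of the playback buffer via direct mmap
 * access; returns the number of frames committed, or a negative error. */
static snd_pcm_sframes_t fill_playback(snd_pcm_t *pcm,
                                       snd_pcm_uframes_t frames)
{
    const snd_pcm_channel_area_t *areas;
    snd_pcm_uframes_t offset;
    snd_pcm_sframes_t avail;
    int err;

    /* update the hardware pointer before mapping the next chunk */
    if ((avail = snd_pcm_avail_update(pcm)) < 0)
        return avail;

    if ((err = snd_pcm_mmap_begin(pcm, &areas, &offset, &frames)) < 0)
        return err;

    /* for interleaved access, areas[0] describes the whole frame;
     * `first` and `step` are in bits, so divide by 8 for bytes */
    char *dst = (char *)areas[0].addr
              + (areas[0].first + offset * areas[0].step) / 8;
    memset(dst, 0, frames * areas[0].step / 8); /* silence placeholder */

    return snd_pcm_mmap_commit(pcm, offset, frames);
}
```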

So far, I have implemented the playback part and noticed some aspects 
which are not clear to me from the ALSA documentation:

a) What is the most efficient way to realize duplex audio processing - 
the async approach with mmap'ed read/write that I am following?
b) At first, I created the PCM handles as follows: "snd_pcm_open (.., 
SND_PCM_ASYNC);". When starting to process audio (the call to 
"snd_pcm_start"), I realized that the registered callback functions did 
not get called. I had to change this to "snd_pcm_open (.., 0);". Is that 
really intended? It seems contradictory.
c) The function "snd_pcm_info_get_sync" is supposed to return a 
description of the synchronization behavior of a soundcard. In my case, 
I called this function for two soundcards (one USB and the laptop's 
integrated soundcard).
In both cases, the returned content is all zeros. Should this not differ 
between the two devices?
d) By default, it seems that the callbacks for audio frames run in a 
thread with normal priority.
On Windows, I am used to raising the thread priority whenever I start my 
threads, but it is commonly not a good idea to raise the process 
priority.
In the ALSA latency example, the PROCESS priority is raised (which can 
be done only with superuser privileges).
What is the recommended way on Linux to achieve lower latencies?

In the future I will have to deal with synchronizing input and output, 
since the samples arrive at / leave my application in two independent 
callback functions. So what I will do: once input samples are available, 
I will store them in a buffer, and these samples will be output the next 
time there is space in the output ring buffer. In order to minimize the 
latency, I would need to know more about the intended exact behavior of 
the ALSA library, which I did not find in the documentation:
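What I have in mind for this buffer is a simple single-producer / 
single-consumer ring buffer along these lines (a sketch with 
hypothetical names; the capture callback would push, the playback 
callback would pop):

```c
#include <stdint.h>
#include <string.h>

#define RING_FRAMES 1024 /* capacity in frames, must be a power of two */
#define CHANNELS    2

typedef struct {
    int16_t  data[RING_FRAMES * CHANNELS];
    unsigned head;  /* free-running, advanced by the capture callback  */
    unsigned tail;  /* free-running, advanced by the playback callback */
} ring_t;

static unsigned ring_used(const ring_t *r) { return r->head - r->tail; }
static unsigned ring_free(const ring_t *r) { return RING_FRAMES - ring_used(r); }

/* push up to `n` frames; returns the number of frames actually stored */
static unsigned ring_push(ring_t *r, const int16_t *src, unsigned n)
{
    unsigned i, can = ring_free(r);
    if (n > can) n = can;
    for (i = 0; i < n; i++) {
        unsigned pos = (r->head + i) % RING_FRAMES;
        memcpy(&r->data[pos * CHANNELS], &src[i * CHANNELS],
               CHANNELS * sizeof(int16_t));
    }
    r->head += n;
    return n;
}

/* pop up to `n` frames; returns the number of frames actually delivered */
static unsigned ring_pop(ring_t *r, int16_t *dst, unsigned n)
{
    unsigned i, can = ring_used(r);
    if (n > can) n = can;
    for (i = 0; i < n; i++) {
        unsigned pos = (r->tail + i) % RING_FRAMES;
        memcpy(&dst[i * CHANNELS], &r->data[pos * CHANNELS],
               CHANNELS * sizeof(int16_t));
    }
    r->tail += n;
    return n;
}
```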

I) If input and output are linked with "snd_pcm_link", I have understood 
that state changes for the input and output PCM handles occur 
synchronously. That is, when performing an operation such as 
"snd_pcm_prepare" on one of the handles, both handles are affected. 
However, what does this mean for audio processing at the frame level: if 
I install two callbacks for signal processing, one for input and one for 
output, via "snd_async_add_pcm_handler", will these callbacks occur 
simultaneously? On Windows (speaking of ASIO sound) there is one 
callback in which input and output are handled simultaneously. Can I 
somehow set up ALSA to get similar behavior?

Thank you for any assistance and best regards

