[alsa-devel] underruns and POLLERR when copying audio frames
Hi,
I have some code that basically captures audio frames from one device and then writes them for playback to another device.
Ideally mmapped IO could be used for such a task, but in my application this is not possible at the moment, as at least one of the audio devices does not support it.
In my application, copying audio frames works with a poll()-based approach:
If the capture device is ready for reading, a new audio frame is read. And if a captured frame is available AND the playback device is ready for writing, the frame is written to the playback device.
So far so good, the approach works. However, it doesn't take long until poll() on the playback device returns POLLERR because an underrun has occurred. If I just ignore POLLERR, the next write operation to the playback device fails and I need to call snd_pcm_recover().
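To illustrate, here is a stripped-down sketch of my loop (device names, sample rate and frame size are placeholders, and error handling is reduced to the xrun cases described above):

#include <alsa/asoundlib.h>
#include <poll.h>

#define FRAME 160  /* frames per transfer (placeholder) */

int main(void)
{
    snd_pcm_t *cap, *play;
    short buf[FRAME];  /* S16_LE, mono */
    int have_frame = 0;

    snd_pcm_open(&cap, "hw:0", SND_PCM_STREAM_CAPTURE, SND_PCM_NONBLOCK);
    snd_pcm_open(&play, "hw:1", SND_PCM_STREAM_PLAYBACK, SND_PCM_NONBLOCK);
    snd_pcm_set_params(cap, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED, 1, 8000, 1, 500000);
    snd_pcm_set_params(play, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED, 1, 8000, 1, 500000);
    snd_pcm_start(cap);  /* capture must be started explicitly */

    int ncap = snd_pcm_poll_descriptors_count(cap);
    int nplay = snd_pcm_poll_descriptors_count(play);
    struct pollfd pfds[ncap + nplay];

    for (;;) {
        snd_pcm_poll_descriptors(cap, pfds, ncap);
        snd_pcm_poll_descriptors(play, pfds + ncap, nplay);
        poll(pfds, ncap + nplay, -1);

        unsigned short rev;
        snd_pcm_poll_descriptors_revents(cap, pfds, ncap, &rev);
        if (!have_frame && (rev & POLLIN)) {
            snd_pcm_sframes_t n = snd_pcm_readi(cap, buf, FRAME);
            if (n < 0) {
                snd_pcm_recover(cap, n, 0);  /* overrun on the capture side */
                snd_pcm_start(cap);          /* restart capture after recover */
            } else if (n == FRAME) {
                have_frame = 1;
            }
        }

        snd_pcm_poll_descriptors_revents(play, pfds + ncap, nplay, &rev);
        if (have_frame && (rev & (POLLOUT | POLLERR))) {
            snd_pcm_sframes_t n = snd_pcm_writei(play, buf, FRAME);
            if (n < 0)
                snd_pcm_recover(play, n, 0);  /* the underrun case I hit */
            else
                have_frame = 0;
        }
    }
}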
My question is how to avoid these underruns. Basically I have two ideas:
1. The avail_min parameter specifies when the playback device should start playing back buffered frames. Hence if, for example, avail_min=160, the playback device would wait until at least 160 frames have been written to it (i.e. to the internal buffer) and then start playing them back. My idea would be to just increase avail_min, so that more frames are buffered before playback starts (see the sketch below). Would this work?
2. Another idea would be to do some buffering on my own in the application. Thus I would first capture a number of frames and write them to a FIFO buffer. Once the buffer has been filled with enough frames, I would start to write them to the playback device. This way, while the FIFO is drained on one side (playback), it is refilled on the other side (capture).
* So, which of these ideas do you think is better in terms of avoiding underruns?
To me it seems that both solutions would lead to the same results. Yet just setting avail_min to something higher seems to be the easier solution as it does not require me to implement a buffering solution on my own.
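For reference, this is roughly how I would set it up via the software parameters. As far as I can tell, the point where playback actually starts is controlled by start_threshold, while avail_min controls when poll()/blocking writes wake up, so a sketch would set both (values illustrative, error checks omitted):

#include <alsa/asoundlib.h>

/* Given an open playback handle, delay the automatic stream start
 * until `prebuffer` frames have been queued. */
static int set_playback_buffering(snd_pcm_t *play,
                                  snd_pcm_uframes_t avail_min,
                                  snd_pcm_uframes_t prebuffer)
{
    snd_pcm_sw_params_t *sw;
    snd_pcm_sw_params_alloca(&sw);
    snd_pcm_sw_params_current(play, sw);
    /* wake up poll()/blocking writes only when avail_min frames fit */
    snd_pcm_sw_params_set_avail_min(play, sw, avail_min);
    /* auto-start playback only once `prebuffer` frames are queued */
    snd_pcm_sw_params_set_start_threshold(play, sw, prebuffer);
    return snd_pcm_sw_params(play, sw);
}

/* e.g. set_playback_buffering(play, 160, 10 * 160); */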
cheers, stefan
Stefan Schoenleitner wrote:
I have some code that basically captures audio frames from one device and then writes them for playback to another device.
How do you synchronize the clocks of both devices?
Ideally mmapped IO could be used for such a task,
Why would mmap be more ideal?
My idea would be to just increase avail_min, so that more frames are buffered before playback starts. Would this work?
Yes; to avoid underruns, try to keep the buffer as full as possible, and try to use a bigger buffer. However, this will increase latency.
To avoid overruns in the capture device, use a bigger buffer, which will _not_ increase latency.
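For example, something like this (untested sketch; format, rate and channels would be configured in the same hw_params round as usual, and the driver may round the requested size):

#include <alsa/asoundlib.h>

static int set_buffer_size(snd_pcm_t *pcm, snd_pcm_uframes_t frames)
{
    snd_pcm_hw_params_t *hw;
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    /* ... set format/rate/channels here as usual ... */
    snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &frames);
    return snd_pcm_hw_params(pcm, hw);
}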
- Another idea would be to do some buffering on my own in the application.
This is basically the same as the first idea, except that you have to manage this buffer yourself. This might be worthwhile if the devices do not have big enough buffers.
Regards, Clemens
Clemens Ladisch wrote:
Stefan Schoenleitner wrote:
I have some code that basically captures audio frames from one device and then writes them for playback to another device.
How do you synchronize the clocks of both devices?
I don't. One device is a regular sound card, while the other one is actually a speech processing plugin. The output of the plugin is sent over a network to a remote host, where the frames are processed by the plugin once again and then played back on the sound card there. I guess if clock drift gets too high, I will get xruns as well?
Ideally mmapped IO could be used for such a task,
Why would mmap be more ideal?
I thought that it would be possible to use mmapping to "connect" sound devices together so that read/write operations in between are no longer necessary.
I.e. instead of
soundcard device (read/write) <-- application (read/write) --> plugin device (read/write)
it would be
soundcard device (read/write) <-- application (direct mapping) --> plugin (read/write)
Thus each time one device writes, it would actually *write through* to the destination device without the need for any additional code in between. But as mmapping only seems to work for files, I no longer think this is possible.
My idea would be to just increase avail_min, so that more frames are buffered before playback starts. Would this work?
Yes; to avoid underruns, try to keep the buffer as full as possible, and try to use a bigger buffer. However, this will increase latency.
To avoid overruns in the capture device, use a bigger buffer, which will _not_ increase latency.
Can you explain to me why a bigger capture buffer will not introduce latency? IMHO if there is a large capture buffer, then the sound card will need to capture until the buffer is filled. Once this is the case, the application can start to read from the capture device. Hence the latency introduced would depend on the size of the capture buffer, right?
- Another idea would be to do some buffering on my own in the application.
This is basically the same as the first idea, except that you have to manage this buffer yourself. This might be worthwhile if the devices do not have big enough buffers.
Hmm, ok. It seems that at least one of my devices only supports a small avail_min setting. So I could set avail_min to a high value on the device that allows it while keeping a low value on the other device that does not support large buffers, or I could just use FIFO buffering and do everything on my own.
E.g. when capturing, I would wait until, say, at least 10 periods are in my buffer. Once this is the case, I would start to drain the buffer (by playing back periods) while at the same time refilling the FIFO with newly captured ones.
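Something like this minimal single-threaded FIFO (sketch; the sizes are made up):

#include <string.h>

#define PERIOD        160  /* frames per period (made up) */
#define FIFO_PERIODS  32   /* FIFO capacity in periods */
#define START_PERIODS 10   /* prebuffer before draining starts */

static short fifo[FIFO_PERIODS][PERIOD];
static unsigned head, tail, count;  /* count = periods currently queued */
static int draining;                /* set once the prebuffer is reached */

/* called with each period obtained from snd_pcm_readi() */
static int fifo_push(const short *frames)
{
    if (count == FIFO_PERIODS)
        return -1;  /* FIFO full: the capture side would overrun */
    memcpy(fifo[head], frames, sizeof fifo[0]);
    head = (head + 1) % FIFO_PERIODS;
    if (++count >= START_PERIODS)
        draining = 1;
    return 0;
}

/* returns the next period to hand to snd_pcm_writei(), or NULL */
static const short *fifo_pop(void)
{
    if (!draining || count == 0)
        return NULL;  /* not enough buffered yet */
    const short *p = fifo[tail];
    tail = (tail + 1) % FIFO_PERIODS;
    count--;
    return p;
}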
Maybe the best solution will be to just try it out ;)
cheers, stefan
Stefan Schoenleitner wrote:
Clemens Ladisch wrote:
How do you synchronize the clocks of both devices?
I don't. [...] I guess if clock drift gets too high, I will get xruns as well?
Yes.
Why would mmap be more ideal?
I thought that it would be possible to use mmapping to "connect" sound devices together so that read/write operations in between are no longer necessary.
Indeed.
But as mmapping only seems to work for files, I no longer think this is possible.
Some optimization is possible even when only one device supports mmap: When you want to copy from the hardware device to the plugin, you could call the plugin's snd_pcm_writei with an address in the sound card's buffer as the source.
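Roughly like this (sketch for mono interleaved S16 data; error handling omitted):

#include <alsa/asoundlib.h>

/* Copy `want` frames from an mmap-capable capture device directly
 * into the plugin's snd_pcm_writei(), without an intermediate buffer. */
static int copy_mmap_to_writei(snd_pcm_t *cap, snd_pcm_t *plug,
                               snd_pcm_uframes_t want)
{
    const snd_pcm_channel_area_t *areas;
    snd_pcm_uframes_t offset, frames = want;

    snd_pcm_avail_update(cap);  /* sync the hardware pointer */
    snd_pcm_mmap_begin(cap, &areas, &offset, &frames);

    /* address of the captured samples inside the mapped ring buffer */
    const short *src = (const short *)areas[0].addr
                       + (areas[0].first + offset * areas[0].step) / 16;

    snd_pcm_sframes_t written = snd_pcm_writei(plug, src, frames);

    /* mark the copied frames as consumed on the capture side */
    snd_pcm_mmap_commit(cap, offset, written > 0 ? written : 0);
    return written < 0 ? (int)written : 0;
}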
To avoid overruns in the capture device, use a bigger buffer, which will _not_ increase latency.
Can you explain to me why a bigger capture buffer will not introduce latency? IMHO if there is a large capture buffer, then the sound card will need to capture until the buffer is filled. Once this is the case, the application can start to read from the capture device.
The application can read from the capture buffer at any time. When using a blocking snd_pcm_read* call, it gets woken up at the end of each period, if avail_min is not larger than the period size.
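For example, given an open capture handle cap (sketch; 160 frames standing in for one period):

snd_pcm_sw_params_t *sw;
snd_pcm_sw_params_alloca(&sw);
snd_pcm_sw_params_current(cap, sw);
snd_pcm_sw_params_set_avail_min(cap, sw, 160);  /* one period */
snd_pcm_sw_params(cap, sw);

short buf[160];
/* returns after ~one period, even with a much larger buffer */
snd_pcm_readi(cap, buf, 160);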
It seems that at least one of my devices only supports a small avail_min setting.
avail_min can be as large as the buffer size.
Regards, Clemens
Clemens Ladisch wrote:
Stefan Schoenleitner wrote:
Clemens Ladisch wrote:
How do you synchronize the clocks of both devices?
I don't. [...] I guess if clock drift gets too high, I will get xruns as well?
Yes.
Hmm, ok. For now I will just add buffering and hope that the buffer is big enough so that there are no xruns for reasonably long periods of time. In case I run into trouble anyway, is there some way to synchronize the clocks between the ALSA stack running on one system and the stack on another system?
But as mmapping just seems to work for files, I no longer think it is possible.
Some optimization is possible even when only one device supports mmap: When you want to copy from the hardware device to the plugin, you could call the plugin's snd_pcm_writei with an address in the sound card's buffer as the source.
Right.
Thanks for your input,
cheers, stefan
For now I will just add buffering and hope that the buffer is big enough so that there are no xruns for reasonably long periods of time. In case I run into trouble anyway, is there some way to synchronize the clocks between the ALSA stack running on one system and the stack on another system?
You cannot 'synchronize' clocks. The only option is to generate the relevant number of samples to compensate for the drift. This can be done by adding/dropping samples, time stretching, or fractional resampling (in order of complexity). You may want to try gstreamer for this type of use case (gst-launch alsasrc ! alsasink): the drift between incoming and outgoing samples is tracked, and some configurable resampling can be done. Or use module-loopback in PulseAudio, if it's installed on your system.
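A naive add/drop scheme might look like this (sketch; the watermarks are made up, and a real implementation would interpolate instead of duplicating samples):

/* Nudge the stream length one sample at a time, based on the fill
 * level of the application's FIFO (mono S16 assumed). */
#define LOW_WATER  (8 * 160)   /* frames; made-up thresholds */
#define HIGH_WATER (12 * 160)

/* `period` must have room for frames + 1 samples; returns the
 * number of frames to hand to the sink. */
static int adjust_for_drift(short *period, int frames, int fifo_fill)
{
    if (fifo_fill > HIGH_WATER)  /* sink clock is slow: drop a sample */
        return frames - 1;
    if (fifo_fill < LOW_WATER) { /* sink clock is fast: add a sample */
        period[frames] = period[frames - 1];
        return frames + 1;
    }
    return frames;               /* fill level within the target window */
}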