[alsa-devel] Why does snd_seq_drain_output() need lots of time to execute?

Rafał Cieślak rafalcieslak256 at gmail.com
Fri Jan 27 01:20:17 CET 2012


> A sequencer client has _two_ output buffers, one in alsa-lib, and one in
> the kernel.  Events that are scheduled for later stay in the kernel
> buffer until they are actually delivered; when this buffer would
> overflow, functions that drain the userspace buffer to the kernel buffer
> wait instead.
>
> To increase the kernel buffer's size, use the snd_seq_client_pool* and
> snd_seq_get/set_client_pool functions.  ("pool" is the buffer size, in
> events; "room" is the number of free events that causes a blocked
> function to wake up.)


Many thanks for your help.
That makes sense, and it is very likely what I've been looking for.
If I got it right, the alsa-lib buffer is the one I can resize using
snd_seq_set_output_buffer_size(), so it seems that I have to increase
the kernel buffer size, as you suggested.
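
If I understand the API correctly, the resize should look roughly like
this (a simplified sketch; 2000 is just an arbitrary test size I picked,
and "seq" is an already-opened sequencer handle):

    #include <alsa/asoundlib.h>

    /* Sketch: grow the kernel-side output pool of an already-open
     * sequencer client.  2000 is an arbitrary test value. */
    static int grow_output_pool(snd_seq_t *seq)
    {
        int err = snd_seq_set_client_pool_output(seq, 2000);
        if (err < 0)
            fprintf(stderr, "pool resize failed: %s\n", snd_strerror(err));
        return err;
    }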

However, it seems that somehow I cannot change it. Calling
snd_seq_set_client_pool_output() as above has no effect: the pool size
stays at 500 even though the function returns 0 (to check the pool size
I look at /proc/asound/seq/clients, and it always shows 500). If I use
the snd_seq_client_pool* accessors instead, the buffer does not change
either, and any call to snd_seq_get/set_client_pool results in
seemingly random segmentation faults a few moments later, with no
obvious origin.
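
For reference, this is roughly what that second attempt looks like
(simplified from my actual code; the alloca/get/set sequence is my
reading of the documentation, and the sizes are again arbitrary):

    /* Sketch of the pool-struct variant. */
    static int grow_output_pool2(snd_seq_t *seq)
    {
        snd_seq_client_pool_t *pool;
        int err;

        snd_seq_client_pool_alloca(&pool);               /* stack allocation, per the docs */
        err = snd_seq_get_client_pool(seq, pool);        /* read current settings */
        if (err < 0)
            return err;
        snd_seq_client_pool_set_output_pool(pool, 2000); /* pool size, in events */
        snd_seq_client_pool_set_output_room(pool, 1000); /* wake-up threshold ("room") */
        return snd_seq_set_client_pool(seq, pool);       /* write settings back */
    }
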
Can it be that I am setting the kernel buffer size improperly, or is
something else wrong?

Regards,
Rafał Cieślak

