[alsa-devel] Why does snd_seq_drain_output() need lots of time to execute?

Hello,
I have a serious problem using the ALSA sequencer interface. I am writing an application that makes use of its queues, and have run into trouble that I cannot resolve.
The main problem is that although the documentation states that snd_seq_drain_output() returns immediately, it turns out that when I call it, it occasionally takes a long time (even a second or two) and blocks my application. According to my research this looks like a possible bug in ALSA, but I'd rather ask you whether it is indeed unintended.
I won't post the code unless you ask for it, as it is quite a complex application and that would unnecessarily clutter this post. So let me describe what I've found out about the problem.
Most of the time snd_seq_drain_output() works as it should. I call it from time to time after putting a bunch of MIDI events on a queue, which I use to schedule lots of notes, MIDI controller change messages, and several custom events. It seems I use it more or less appropriately, for the output data is scheduled as I wish it to be. I learned to use the ALSA sequencer interface from tutorials and examples, since the documentation does not say a word about how one should actually use these calls; however, I have compared my snd_seq_* calls against many other open-source MIDI applications, and it seems I got everything right. And indeed, most of the time the app works perfectly.

Problems start when I schedule a large number of events on the queue. There is no exact number that triggers the problem, but it is usually between 900 and 1100 events. In that case snd_seq_drain_output() freezes for a very long time. The exact time depends on the tempo applied to the queue (which I find very weird): it is about 0.6 s at a tempo of 60 bpm, and greater for slower tempos.
I also tried scheduling notes using snd_seq_event_output_direct() to avoid calling snd_seq_drain_output(), but that only made snd_seq_event_output_direct() lag in the same way once such a large number of notes is scheduled on the queue. And before you ask, I've set the output buffer to some huge size, much larger than the number of events I am trying to schedule.
It may well be my fault, for I don't fully understand how these calls should be used. In that case, could anyone please explain to me in which cases snd_seq_drain_output() may block the calling app? I will then check whether my code triggers this behaviour. Otherwise, is there any workaround for this lag? Or have I missed something? I badly need help, because this problem completely blocks further development of my application.
Please let me know in case you need me to provide additional information.
Regards, Rafał Cieślak

Rafał Cieślak wrote:
The main problem is that although the documentation states that snd_seq_drain_output() returns immediately,
| "The function returns immediately after the events are sent to the queues ..."
it turns out that when I call it, it occasionally takes a long time (even a second or two) and blocks my application.
You could enable non-blocking mode to get an error instead of waiting.
Or you could increase the size of the output buffer.
And before you ask, I've set the output buffer to some huge amount,
A sequencer client has _two_ output buffers, one in alsa-lib, and one in the kernel. Events that are scheduled for later stay in the kernel buffer until they are actually delivered; when this buffer would overflow, functions that drain the userspace buffer to the kernel buffer wait instead.
To increase the kernel buffer's size, use the snd_seq_client_pool* and snd_seq_get/set_client_pool functions. ("pool" is the buffer size, in events; "room" is the number of free events that causes a blocked function to wake up.)
Regards, Clemens

A sequencer client has _two_ output buffers, one in alsa-lib, and one in the kernel. Events that are scheduled for later stay in the kernel buffer until they are actually delivered; when this buffer would overflow, functions that drain the userspace buffer to the kernel buffer wait instead.
To increase the kernel buffer's size, use the snd_seq_client_pool* and snd_seq_get/set_client_pool functions. ("pool" is the buffer size, in events; "room" is the number of free events that causes a blocked function to wake up.)
Many thanks for your help. That makes sense, and very likely this is what I've been looking for. If I got it right, the alsa-lib buffer is the one I can resize using snd_seq_set_output_buffer_size(), so it seems that I have to increase the kernel buffer size, as you have suggested.
However, it seems that somehow I cannot change it. If I use snd_seq_set_client_pool_output(), it has no effect at all: the pool size stays at 500 even though the function returns 0 (to check the pool size I look at /proc/asound/seq/clients, and it always shows 500). If I use the snd_seq_client_pool* functions instead, the buffer does not change either, and any call to snd_seq_get/set_client_pool results in random segmentation faults a few moments later, seemingly originating from nowhere. Could it be that I am setting the kernel buffer size improperly, or is something else wrong?
Regards, Rafał Cieślak

On Fri, Jan 27, 2012 at 01:20:17AM +0100, Rafał Cieślak wrote:
However, it seems that somehow I cannot change it. If I use snd_seq_set_client_pool_output(), it has no effect at all: the pool size stays at 500 even though the function returns 0 (to check the pool size I look at /proc/asound/seq/clients, and it always shows 500).
If you manage to fill a 500-event buffer, could that mean that you are sending events a long time ahead of their due time?
In that case the solution is to keep events in application buffers until, say, half a second before they are really needed. In other words, let the app do the rough timing and ALSA the fine timing.
It requires a bit more logic, but it's probably needed anyway - you can't expect kernel side ALSA to buffer e.g. a complete song.
Limiting the send-ahead time will also make the app more responsive in case you have to stop and/or reposition the stream.
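A sketch of that idea; everything here (the names, the 500 ms window, the millisecond clock) is hypothetical application-side glue, not ALSA API:

```c
#include <stdbool.h>

/* Keep events in an application buffer and hand them to ALSA only when
 * they fall within a fixed send-ahead window.  Times in milliseconds. */

#define SEND_AHEAD_MS 500   /* flush events due within the next 0.5 s */

typedef struct {
    long due_ms;            /* absolute time at which the event should play */
    /* ... a real implementation would carry a snd_seq_event_t here ... */
} app_event;

/* Return true if this event should be passed to the sequencer now,
 * given the current queue time.  Events further in the future stay in
 * the application buffer, so the kernel pool never fills up. */
static bool due_for_sending(const app_event *ev, long now_ms)
{
    return ev->due_ms <= now_ms + SEND_AHEAD_MS;
}
```

The application's flush routine would walk its buffer periodically, output every event for which `due_for_sending()` is true, and leave the rest for the next pass.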
Ciao,

If you manage to fill a 500-event buffer, could that mean that you are sending events a long time ahead of their due time?
No, I'm not buffering a whole song; events are sent at most a second or two ahead (for each bar). The reason there may be so many of them is that I emulate smooth movement of control parameters (just as if one were slowly moving a slider/knob on an external MIDI controller), which requires lots of events, and the problem starts when I try to do the same for many different controllers/channels independently.
Also, it turned out that snd_seq_set_client_pool_output() does actually work, unless I try to set it to anything greater than 2000. Is this a hard-coded limit, or can I somehow increase it even more?
Regards, Rafał Cieślak

On Fri, Jan 27, 2012 at 02:28:42PM +0100, Rafał Cieślak wrote:
If you manage to fill a 500-event buffer, could that mean that you are sending events a long time ahead of their due time?
No, I'm not buffering a whole song; events are sent at most a second or two ahead (for each bar). The reason there may be so many of them is that I emulate smooth movement of control parameters (just as if one were slowly moving a slider/knob on an external MIDI controller), which requires lots of events, and the problem starts when I try to do the same for many different controllers/channels independently.
I see. MIDI isn't really up to this sort of thing... The receiver can be expected to smooth the controller values, but since this is unspecified you don't know how many are needed.
Also, it turned out that snd_seq_set_client_pool_output() does actually work, unless I try to set it to anything greater than 2000. Is this a hard-coded limit, or can I somehow increase it even more?
Don't know...
2000 events/s would be close to what a hardware MIDI interface can handle anyway - that could be your next problem.
OTOH, you could reduce the update period to 1/4 second or so. That should be no problem if you use a RT thread to do it - we're using much shorter periods for audio.
Ciao,

Rafał Cieślak wrote:
Also, it turned out that snd_seq_set_client_pool_output() does actually work, unless I try to set it to anything greater than 2000. Is this a hard-coded limit,
Yes.
or can I somehow increase it even more?
Change SNDRV_SEQ_MAX_EVENTS in include/sound/seq_kernel.h and recompile the kernel ...
Regards, Clemens
participants (3)
- Clemens Ladisch
- Fons Adriaensen
- Rafał Cieślak