[alsa-devel] How to get correct midi timings from ALSA using the library only
Hi,
(1) I've written a command-line MIDI sequencer for lightweight systems and have it working using the ALSA queue API. However, one drawback of the API is its lack of callback functions. I wish to be able to track events as they are drained by the queue.
(2) I know of, and have successfully implemented, a workaround whereby the application itself subscribes to the output port so as to see events as they are played.
However, I wish to make the sequencer or player work without using either the ALSA queue or the workaround in (2).
Here's the pseudo-code of the relevant MIDI player routine:
for (i = 0; i < number_of_events; i++) {
    usleep(event[i].delta_time_in_microseconds);
    output_and_drain_event(event[i]);
}
This routine gives unbearable latency on 2.4 kernels, but much less so on 2.6 kernels.
How could I get the app to <u|nano>sleep() as accurately as possible in userspace, without using the ALSA queue or the extra subscription to an output port? Or is there a drain or output routine that supports callbacks? If so, I would be grateful if you could point it out; I can't seem to find any output callback routine in the docs.
Thank you very much.
Best Regards,
Carlo
On Tuesday 24 July 2007, Carlo Florendo wrote:
Hi,
(1) I've written a command-line MIDI sequencer for lightweight systems and have it working using the ALSA queue API. However, one drawback of the API is its lack of callback functions. I wish to be able to track events as they are drained by the queue.
(2) I know of, and have successfully implemented, a workaround whereby the application itself subscribes to the output port so as to see events as they are played.
However, I wish to make the sequencer or player work without using either the ALSA queue or the workaround in (2).
Here's the pseudo-code of the relevant MIDI player routine:
for (i = 0; i < number_of_events; i++) {
    usleep(event[i].delta_time_in_microseconds);
    output_and_drain_event(event[i]);
}
This routine gives unbearable latency on 2.4 kernels, but much less so on 2.6 kernels.
How could I get the app to <u|nano>sleep() as accurately as possible in userspace, without using the ALSA queue or the extra subscription to an output port? Or is there a drain or output routine that supports callbacks? If so, I would be grateful if you could point it out; I can't seem to find any output callback routine in the docs.
One way to improve timing is to make sure your process has the highest priority in the system, so that even when other tasks want to run at the moment of wakeup, your process gets the CPU.
Try running your process with SCHED_FIFO scheduling and a high priority, e.g. 99.
Also, using a kernel patched with Ingo Molnar's -rt patches might help quite a bit (in addition to running your process SCHED_FIFO).
Also, you want to check on every wakeup (after your sleep expires) how long you really slept, so you can adjust the next sleep time. clock_gettime(CLOCK_MONOTONIC) might be useful for this.
Another approach, which works very well in my experience, is not to sleep the total time until the next event, but rather to sleep repeatedly for very short amounts of time (< 1 ms), wake up, measure the current time, and if any event time now lies in the past, play it back immediately. This way the sleep time doesn't need to be accurate at all; it only needs to be small enough to give decent timing.
Regards, Flo
Florian Schmidt wrote:
Here's the pseudo-code of the relevant MIDI player routine:
for (i = 0; i < number_of_events; i++) {
    usleep(event[i].delta_time_in_microseconds);
    output_and_drain_event(event[i]);
}
This routine gives unbearable latency on 2.4 kernels, but much less so on 2.6 kernels.
How could I get the app to <u|nano>sleep() as accurately as possible in userspace, without using the ALSA queue or the extra subscription to an output port? Or is there a drain or output routine that supports callbacks? If so, I would be grateful if you could point it out; I can't seem to find any output callback routine in the docs.
One way to improve timing is to make sure your process has the highest priority in the system, so that even when other tasks want to run at the moment of wakeup, your process gets the CPU.
Try running your process with SCHED_FIFO scheduling and a high priority, e.g. 99.
I've tried that on a 2.4 kernel and I get the same latency results. Let me try tweaking it, though, by running the process at high priority. The reason I'd like to make it work on 2.4 kernels is so that existing systems running 2.4 could use the app without the need for a kernel patch.
Also, using a kernel patched with Ingo Molnar's -rt patches might help quite a bit (in addition to running your process SCHED_FIFO).
Yes, I've been contemplating doing that very soon.
Also, you want to check on every wakeup (after your sleep expires) how long you really slept, so you can adjust the next sleep time. clock_gettime(CLOCK_MONOTONIC) might be useful for this.
This is new to me. I've never tried it but will try it ASAP.
Another approach, which works very well in my experience, is not to sleep the total time until the next event, but rather to sleep repeatedly for very short amounts of time (< 1 ms), wake up, measure the current time, and if any event time now lies in the past, play it back immediately. This way the sleep time doesn't need to be accurate at all; it only needs to be small enough to give decent timing.
This seems reasonable enough. I'll try this out too.
Regards, Flo
Your ideas have been most helpful :)
Thank you very much!
Best Regards,
Carlo
On Wednesday 25 July 2007, Carlo Florendo wrote:
Try running your process with SCHED_FIFO scheduling and a high prio of e.g. 99.
I've tried that on a 2.4 kernel and I get the same latency results. Let me try tweaking it, though, by running the process at high priority. The reason I'd like to make it work on 2.4 kernels is so that existing systems running 2.4 could use the app without the need for a kernel patch.
2.4.x kernels are terribly unsuited for any serious realtime work, be it audio or MIDI.
From my experience, this is a ranking of increasing suitability for realtime work:
1] vanilla 2.4.x
2] patched 2.4.x [lowlatency patches]
3] vanilla 2.6.x
4] patched 2.6.x [Ingo Molnar's realtime preemption patches]
There really should be a hundred empty slots between 2] and 3], and another hundred between unpatched and patched 2.6.x, because 2.6.x really is vastly better than 2.4.x, and -rt patched 2.6.x actually is a realtime system which can be made to work down to microsecond resolution [not millisecond ;)].
Your ideas have been most helpful :)
No problem. BTW: even when you use ALSA queues, the kernel still plays a big role. It's then simply ALSA's responsibility to provide good timing, and it basically uses the same mechanisms a userspace program would.
So trash 2.4.x for all realtime purposes.
Flo
Florian Schmidt wrote:
Another approach, which works very well in my experience, is not to sleep the total time until the next event, but rather to sleep repeatedly for very short amounts of time (< 1 ms), wake up, measure the current time, and if any event time now lies in the past, play it back immediately. This way the sleep time doesn't need to be accurate at all; it only needs to be small enough to give decent timing.
I've now finished implementing this approach (sleeping around 10000 microseconds rather than less than 1 ms), and it works very well! Thanks!
I'm getting very decent timing even with complicated MIDI files that have numerous pitch bends at very short times.
I'm about to try your other suggestions later on.
Thank you very much.
Regards, Flo
Best Regards,
Carlo
Carlo Florendo wrote:
(1) I've written a command-line MIDI sequencer for lightweight systems and have it working using the ALSA queue API. However, one drawback of the API is its lack of callback functions. I wish to be able to track events as they are drained by the queue.
(2) I know of, and have successfully implemented, a workaround whereby the application itself subscribes to the output port so as to see events as they are played.
You could also send special user-defined events just to yourself, but the principle is the same.
However, I wish to make the sequencer or player work without using either the ALSA queue or the workaround in (2).
Why?
How could I get the app to <u|nano>sleep() as accurately as possible in userspace, without using the ALSA queue or the extra subscription to an output port?
On newer kernels, you could try POSIX interval timers.
Or, is there a drain or output routine that supports callbacks?
Why can't you simply call the callback when some event has been received?
Regards, Clemens
Clemens Ladisch wrote:
However, I wish to make the sequencer or player work without using either the ALSA queue or the workaround in (2).
Why?
Because the queue output and draining, AFAICS, are implemented in a blocking manner. While events are being played by the queue in the kernel, the application blocks, and I have no control over what is being played, except that I could raise a SIGTERM at any time and end the app.
AFAIK, using the queue is fine for a simple MIDI player, but not for a sequencer, where you need full control of everything that happens with the audio output.
How could I get the app to <u|nano>sleep() as accurately as possible in userspace, without using the ALSA queue or the extra subscription to an output port?
On newer kernels, you could try POSIX interval timers.
Which newer kernels? Interval timers, as far as I've tested, result in more latency than simple sleeps. And yes, I've tried select(), which seems to be more accurate than usleep().
Or, is there a drain or output routine that supports callbacks?
Why can't you simply call the callback when some event has been received?
Received by what? By the extra subscription to the output port? As I mentioned in the original post, I know that works, but for now I'd rather try simple sleeps than an extra output-port subscription.
The queue blocks while it plays events. Thus the supposed callback function which, as you suggest, would be called after an event is received, would be useless, since it would only execute once the queue unblocks.
Thank you very much.
Regards, Clemens
Best Regards,
Carlo
Carlo Florendo wrote:
Clemens Ladisch wrote:
However, I wish to make the sequencer or player work without using either the ALSA queue or the workaround in (2).
Why?
Because the queue output and draining, AFAICS, are implemented in a blocking manner.
When non-blocking mode is set (see snd_seq_nonblock()), snd_seq_drain_output() does not block but writes only as many events to the kernel buffer as fit inside (or returns -EAGAIN if the kernel buffer is completely full).
Regards, Clemens
Clemens Ladisch wrote:
Carlo Florendo wrote:
Clemens Ladisch wrote:
However, I wish to make the sequencer or player work without using either the ALSA queue or the workaround in (2).
Why?
Because the queue output and draining, AFAICS, are implemented in a blocking manner.
When non-blocking mode is set (see snd_seq_nonblock()), snd_seq_drain_output() does not block but writes only as many events to the kernel buffer as fit inside (or returns -EAGAIN if the kernel buffer is completely full).
Gee! I haven't read the ALSA lib API docs that thoroughly, but this sounds and looks like the solution to the problem :)
Thank you very much for the pointers and for all your patience.
Regards, Clemens
Best Regards,
Carlo
Clemens Ladisch wrote:
Carlo Florendo wrote:
Clemens Ladisch wrote:
However, I wish to make the sequencer or player work without using either the ALSA queue or the workaround in (2).
Why?
Because the queue output and draining, AFAICS, are implemented in a blocking manner.
When non-blocking mode is set (see snd_seq_nonblock()), snd_seq_drain_output() does not block but writes only as many events to the kernel buffer as fit inside (or returns -EAGAIN if the kernel buffer is completely full).
Bingo! My recent tests show that snd_seq_nonblock() is indeed a useful function; it lets me keep control of the sequencer every time it outputs an event :)
Gee, I've read the ALSA lib docs for months and never came across this function. To this day I haven't found the part of the docs that describes snd_seq_nonblock(), so I went straight to the alsa-lib source under src/seq/seq.c and grepped for snd_seq_nonblock().
The alsa-lib docs need improvement :)
Thank you for the pointers!
Regards, Clemens
Best Regards,
Carlo
participants (3)
- Carlo Florendo
- Clemens Ladisch
- Florian Schmidt