[alsa-devel] Disabling buffer fill level preprocessing by ALSA
Hi!
In PulseAudio I want to schedule on my own when I need to write audio data into the device and when not. To achieve that I want to be notified via poll() whenever a period boundary is passed (i.e. when an IRQ happens), but only then. That's different from the usual mode where you are notified via poll() whether there is space in the playback buffer that needs to be filled up.
On OSS the mmap() mode enables a mode like I described above. After enabling mmap() the application can decide by itself what it considers full and what empty in the DMA buffer, and use GETOPTR to query the playback position. poll() on the OSS fd will directly reflect the sound card IRQs and is not influenced by whether you ever wrote data to the device or not.
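For reference, this is roughly what I mean on OSS (a sketch from memory, untested; fd is the open /dev/dsp descriptor, error handling omitted):

    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    struct count_info info;
    if (ioctl(fd, SNDCTL_DSP_GETOPTR, &info) == 0) {
        /* info.ptr is the current DMA position inside the mmap()ed buffer,
         * info.blocks is the number of fragment interrupts since the last call */
    }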
I assume that I can enable a mode like that with one of the SW params. But quite frankly the docs for it are not enlightening at all.
Lennart
At Mon, 31 Dec 2007 18:12:27 +0100, Lennart Poettering wrote:
Hi!
In PulseAudio I want to schedule on my own when I need to write audio data into the device and when not. To achieve that I want to be notified via poll() whenever a period boundary is passed (i.e. when an IRQ happens), but only then. That's different from the usual mode where you are notified via poll() whether there is space in the playback buffer that needs to be filled up.
On OSS the mmap() mode enables a mode like I described above. After enabling mmap() the application can decide by itself what it considers full and what empty in the DMA buffer, and use GETOPTR to query the playback position. poll() on the OSS fd will directly reflect the sound card IRQs and is not influenced by whether you ever wrote data to the device or not.
I assume that I can enable a mode like that with one of the SW params. But quite frankly the docs for it are not enlightening at all.
Set the stop_threshold sw_params to the boundary size:

  snd_pcm_sw_params_get_boundary(sw_params, &boundary);
  snd_pcm_sw_params_set_stop_threshold(pcm, sw_params, boundary);
then the driver behaves in the "freewheel" mode. The dmix plugin uses this technique.
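The full sequence is roughly this (a sketch, error handling omitted; pcm is the already configured handle):

    snd_pcm_sw_params_t *sw_params;
    snd_pcm_uframes_t boundary;

    snd_pcm_sw_params_alloca(&sw_params);
    snd_pcm_sw_params_current(pcm, sw_params);
    snd_pcm_sw_params_get_boundary(sw_params, &boundary);
    snd_pcm_sw_params_set_stop_threshold(pcm, sw_params, boundary);
    snd_pcm_sw_params(pcm, sw_params);    /* apply */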
Takashi
On Mon, 07.01.08 12:07, Takashi Iwai (tiwai@suse.de) wrote:
In PulseAudio I want to schedule on my own when I need to write audio data into the device and when not. To achieve that I want to be notified via poll() whenever a period boundary is passed (i.e. when an IRQ happens), but only then. That's different from the usual mode where you are notified via poll() whether there is space in the playback buffer that needs to be filled up.
On OSS the mmap() mode enables a mode like I described above. After enabling mmap() the application can decide by itself what it considers full and what empty in the DMA buffer, and use GETOPTR to query the playback position. poll() on the OSS fd will directly reflect the sound card IRQs and is not influenced by whether you ever wrote data to the device or not.
I assume that I can enable a mode like that with one of the SW params. But quite frankly the docs for it are not enlightening at all.
Set the stop_threshold sw_params to the boundary size:

  snd_pcm_sw_params_get_boundary(sw_params, &boundary);
  snd_pcm_sw_params_set_stop_threshold(pcm, sw_params, boundary);
then the driver behaves in the "freewheel" mode. The dmix plugin uses this technique.
That's not what I was looking for. This will only disable automatic stopping on buffer underrun. I am using that already in PA (however I pass -1 as stop threshold, which should work, too, shouldn't it?)
What I am really looking for is a way to stop ALSA from reporting the buffer fill level via poll(), and instead have it report only whether an interrupt happened.
In default ALSA mode, if poll() tells us that the ALSA device is ready, and we don't subsequently write any data to it, the next poll() will return immediately, still telling us the device is ready. As long as we don't write anything to the audio device and keep calling poll(), we will be stuck in a busy loop. Basically, some kind of _write() acts as a reset of the ready state of the ALSA device.
In contrast to that, on OSS+mmap the poll() call itself will already reset the ready state. I.e. if you poll() and then poll() again, without any intermediate write to the device, it will wait for the next IRQ to happen, and we would not enter a busy loop if we did this repeatedly. I am looking for a way to do something like this on ALSA, too.
An example:
int main() {
    int fd = open("/dev/dsp", ...);

    /* add some code here to enter mmap mode */

    for (;;) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        poll(&pfd, 1, -1);
        printf("IRQ!\n");
    }
}
On OSS a program like this would print "IRQ" every time a sound card interrupt is triggered - but not more often.
A similar program in ALSA mmap mode would behave differently:
int main() {
    snd_pcm_t *pcm;
    unsigned short revents;

    snd_pcm_open(&pcm, ...);

    /* add some code here to enter mmap mode */

    for (;;) {
        struct pollfd pfds[...];
        snd_pcm_poll_descriptors(pcm, pfds, ...);
        poll(pfds, ..., -1);
        snd_pcm_poll_descriptors_revents(pcm, pfds, ..., &revents);
        printf("Eating CPU\n");
    }
}
This application would eat 100% CPU. What I am looking for is a way to make ALSA behave more like OSS in this case: some mode I can enable that disables the management in ALSA which decides whether the playback buffer is full or empty.
This is completely unrelated to start/stop thresholds.
The background why I want this is this: as mentioned, I am now scheduling audio in PA mostly based on system timers. To be able to do that I need to be able to translate timespans from the sound card clock to the system clock, which requires me to get the sample time from the sound card from time to time and filter it through some code that estimates how the sound card clock and the system clock deviate. I'd prefer to do that only once or maybe twice every time the playback buffer is fully played, and only shortly after an IRQ happened, under the assumption that this is the best time to get the most accurate timing information from the sound card.
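Just to illustrate what I mean by that filter, a toy sketch (this is not the actual PA code, all names are made up):

    /* Keep a smoothed ratio between sound card time and system time and use
     * it to translate timespans from one clock to the other. */
    typedef struct {
        double ratio;       /* card seconds per system second, smoothed */
        double sys_prev;    /* previous system time, in seconds */
        double card_prev;   /* previous card time, in seconds */
    } drift_est;

    static void drift_update(drift_est *d, double sys_now, double card_now) {
        double dsys = sys_now - d->sys_prev;
        if (dsys > 0)
            d->ratio = 0.9 * d->ratio + 0.1 * (card_now - d->card_prev) / dsys;
        d->sys_prev = sys_now;
        d->card_prev = card_now;
    }

    /* translate a system clock timespan into a card clock timespan */
    static double sys_to_card_span(const drift_est *d, double sys_span) {
        return sys_span * d->ratio;
    }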
Is there any way to enable a mode like that on ALSA?
Right now I chose to completely disable ALSA's ready-state notification via poll(). To achieve that I called snd_pcm_sw_params_set_avail_min() with INT_MAX. This seems to work, but the system still gets 2 interrupts per buffer loop, and my program cannot really make any use of them anymore.
Lennart
On Mon, 7 Jan 2008, Lennart Poettering wrote:
On Mon, 07.01.08 12:07, Takashi Iwai (tiwai@suse.de) wrote:
In PulseAudio I want to schedule on my own when I need to write audio data into the device and when not. To achieve that I want to be notified via poll() whenever a period boundary is passed (i.e. when an IRQ happens), but only then. That's different from the usual mode where you are notified via poll() whether there is space in the playback buffer that needs to be filled up.
On OSS the mmap() mode enables a mode like I described above. After enabling mmap() the application can decide by itself what it considers full and what empty in the DMA buffer, and use GETOPTR to query the playback position. poll() on the OSS fd will directly reflect the sound card IRQs and is not influenced by whether you ever wrote data to the device or not.
I assume that I can enable a mode like that with one of the SW params. But quite frankly the docs for it are not enlightening at all.
Set the stop_threshold sw_params to the boundary size:

  snd_pcm_sw_params_get_boundary(sw_params, &boundary);
  snd_pcm_sw_params_set_stop_threshold(pcm, sw_params, boundary);
then the driver behaves in the "freewheel" mode. The dmix plugin uses this technique.
That's not what I was looking for. This will only disable automatic stopping on buffer underrun. I am using that already in PA (however I pass -1 as stop threshold, which should work, too, shouldn't it?)
What I am really looking for is a way to stop ALSA from reporting the buffer fill level via poll(), and instead have it report only whether an interrupt happened.
Note that you can control the fill level using snd_pcm_forward/rewind without any R/W calls (of course, only if supported in the whole chain). ALSA supports only "controlled" I/O, not dumb I/O like the OSS driver does for mmap.
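E.g. something like this (a sketch; n is however many frames you want to drop or skip, error handling omitted):

    snd_pcm_sframes_t moved;

    moved = snd_pcm_rewind(pcm, n);    /* pull the application pointer back */
    /* ... or ... */
    moved = snd_pcm_forward(pcm, n);   /* push the application pointer forward */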
If you are looking for a timing source, use the timer API - you may try alsa-lib/test/timer.c:
./timer class=3 card=0 device=0 subdevice=0
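In C this corresponds to roughly the following (a sketch following test/timer.c, error handling omitted; note the PCM timer only ticks while the PCM substream is running):

    snd_timer_t *timer;
    snd_timer_params_t *params;
    struct pollfd pfds[4];
    snd_timer_read_t tr;
    int nfds;

    snd_timer_open(&timer, "hw:CLASS=3,SCLASS=0,CARD=0,DEV=0,SUBDEV=0",
                   SND_TIMER_OPEN_NONBLOCK);
    snd_timer_params_alloca(&params);
    snd_timer_params_set_auto_start(params, 1);
    snd_timer_params_set_ticks(params, 1);        /* wake up on every tick */
    snd_timer_params(timer, params);
    snd_timer_start(timer);

    for (;;) {
        nfds = snd_timer_poll_descriptors(timer, pfds, 4);
        poll(pfds, nfds, -1);
        while (snd_timer_read(timer, &tr, sizeof(tr)) == sizeof(tr))
            ;                                     /* one event per elapsed tick */
    }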
The background why I want this is this: as mentioned, I am now scheduling audio in PA mostly based on system timers. To be able to do that I need to be able to translate timespans from the sound card clock to the system clock, which requires me to get the sample time from the sound card from time to time and filter it through some code that estimates how the sound card clock and the system clock deviate. I'd prefer to do that only once or maybe twice every time the playback buffer is fully played, and only shortly after an IRQ happened, under the assumption that this is the best time to get the most accurate timing information from the sound card.
It's not really necessary. You can use only one timing source (system timer) and use position timestamps to do corrections.
But your example does not explain why you don't move the r/w pointer in the ring buffer (use mmap_commit), and thus why you don't fulfill the avail_min requirement for the poll wakeup. It seems to me that you're trying to do some crazy things with the ring buffer which are not allowed.
Jaroslav
-----
Jaroslav Kysela <perex@perex.cz>
Linux Kernel Sound Maintainer
ALSA Project, Red Hat, Inc.
On Mon, 07.01.08 19:33, Jaroslav Kysela (perex@perex.cz) wrote:
I assume that I can enable a mode like that with one of the SW params. But quite frankly the docs for it are not enlightening at all.
Set the stop_threshold sw_params to the boundary size:

  snd_pcm_sw_params_get_boundary(sw_params, &boundary);
  snd_pcm_sw_params_set_stop_threshold(pcm, sw_params, boundary);
then the driver behaves in the "freewheel" mode. The dmix plugin uses this technique.
That's not what I was looking for. This will only disable automatic stopping on buffer underrun. I am using that already in PA (however I pass -1 as stop threshold, which should work, too, shouldn't it?)
What I am really looking for is a way to stop ALSA from reporting the buffer fill level via poll(), and instead have it report only whether an interrupt happened.
Note that you can control the fill level using snd_pcm_forward/rewind without any R/W calls (of course, only if supported in the whole chain). ALSA supports only "controlled" I/O, not dumb I/O like the OSS driver does for mmap.
If you are looking for a timing source, use the timer API - you may try alsa-lib/test/timer.c:
./timer class=3 card=0 device=0 subdevice=0
How does this timer depend on the PCM clock? Is its wakeup granularity dependent on the period parameters of the matching PCM device? Or am I supposed to first initialize the PCM, choose some period parameters the hw likes, and then pass that on to the timer subsystem?
I assume I don't have any guarantee that all ALSA devices have such a timer attached? So I'd need some major, non-trivial fallback code if I make use of these timers?
The background why I want this is this: as mentioned, I am now scheduling audio in PA mostly based on system timers. To be able to do that I need to be able to translate timespans from the sound card clock to the system clock, which requires me to get the sample time from the sound card from time to time and filter it through some code that estimates how the sound card clock and the system clock deviate. I'd prefer to do that only once or maybe twice every time the playback buffer is fully played, and only shortly after an IRQ happened, under the assumption that this is the best time to get the most accurate timing information from the sound card.
It's not really necessary. You can use only one timing source (system timer) and use position timestamps to do corrections.
Position timestamps? You mean status->tstamp, right? I'd like to use that. But this still has two problems:
1) As mentioned, CLOCK_MONOTONIC support is still missing in ALSA(-lib)
2) I'd like to correct my estimations as quickly as possible, i.e. as soon as a new update is available, and not only when I ask for it. So basically, I want to be able to sleep in a poll() for timing updates.
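For reference, this is roughly how I would read those timestamps (a sketch, error handling mostly omitted; note that snd_pcm_status_get_tstamp() hands back a gettimeofday-style snd_timestamp_t, which is exactly problem 1):

    snd_pcm_status_t *status;
    snd_timestamp_t ts;
    snd_pcm_sframes_t delay;

    snd_pcm_status_alloca(&status);
    if (snd_pcm_status(pcm, status) == 0) {
        snd_pcm_status_get_tstamp(status, &ts);    /* system time of the snapshot */
        delay = snd_pcm_status_get_delay(status);  /* frames until that point plays */
        /* feed (ts, delay) into the clock deviation filter */
    }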
But your example does not explain why you don't move the r/w pointer in the ring buffer (use mmap_commit), and thus why you don't fulfill the avail_min requirement for the poll wakeup. It seems to me that you're trying to do some crazy things with the ring buffer which are not allowed.
As mentioned, when PA starts up it configures the audio hw buffer to 2s or so with the minimal number of periods (2 on my sound cards). Then, clients come and go. Depending on what the minimal latency constraints of the clients are, however, I will only fill up part of the buffer.
Scenario #1:
Only one simple MP3-playing music application is connected. It doesn't have any real latency constraints. We always fill up the whole 2s buffer, then sleep for 1990 ms, and then fill it up again, and so on. If the MP3 player pauses or seeks, we rewrite the audio buffer with _rewind(). Thus, although we buffer two full seconds, the user interface still reacts snappily.
Now, because the user starts and stops applications all the time, we dynamically change into scenario #2:
The MP3-playing application is still running. However, now a VoIP application is running too. It wants a worst-case latency of, let's say, 20 ms. When this application starts up we don't want to interrupt playback of the MP3 application. So from now on we only use 20 ms of the previously configured 2s hw buffer. And as soon as we have written 20 ms, we sleep for 10 ms, and then fill it up again, and so on.
Now, after a while the VoIP call is over, we enter scenario #3:
This is identical to #1, we again use the full 2s hw buffer, and sleep for 1990ms.
So, depending on what clients are connected, we dynamically change the wakeups. Now, on ALSA (and with a lot of sound hw, as I understood Takashi) you cannot reconfigure the period sizes dynamically without interruptions of audio output. That's why I want to disable the whole period/buffer fill level management of ALSA, and do it all myself with system timers, which I thankfully now can due to the advent of hrtimers (at least on Linux/x86). System timers nowadays are a lot more flexible than the PCM timer, because they can be reconfigured all the time without any drawbacks. They are not dependent on period sizes or other stuff which may only be reconfigured by resetting the audio devices. The only drawback is that we need to determine how the sound card clock and the system clock deviate.
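To make it concrete, the scheduling loop then looks roughly like this (a sketch, not the actual PA code; current_wakeup_interval_ms() is a made-up helper that returns the strictest latency requirement of all connected clients):

    #include <poll.h>

    extern int current_wakeup_interval_ms(void);  /* ~1990 in scenario #1, ~10 in #2 */

    static void timer_driven_loop(struct pollfd *pfds, int nfds) {
        for (;;) {
            int r = poll(pfds, nfds, current_wakeup_interval_ms());
            if (r < 0)
                break;
            /* On timeout: query the playback position and refill the hw buffer
             * up to the current target fill level. The next timeout may be
             * completely different; no PCM reconfiguration is needed for that. */
        }
    }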
Does that make sense to you?
Lennart
On Mon, 7 Jan 2008, Lennart Poettering wrote:
On Mon, 07.01.08 19:33, Jaroslav Kysela (perex@perex.cz) wrote:
I assume that I can enable a mode like that with one of the SW params. But quite frankly the docs for it are not enlightening at all.
Set the stop_threshold sw_params to the boundary size:

  snd_pcm_sw_params_get_boundary(sw_params, &boundary);
  snd_pcm_sw_params_set_stop_threshold(pcm, sw_params, boundary);
then the driver behaves in the "freewheel" mode. The dmix plugin uses this technique.
That's not what I was looking for. This will only disable automatic stopping on buffer underrun. I am using that already in PA (however I pass -1 as stop threshold, which should work, too, shouldn't it?)
What I am really looking for is a way to stop ALSA from reporting the buffer fill level via poll(), and instead have it report only whether an interrupt happened.
Note that you can control the fill level using snd_pcm_forward/rewind without any R/W calls (of course, only if supported in the whole chain). ALSA supports only "controlled" I/O, not dumb I/O like the OSS driver does for mmap.
If you are looking for a timing source, use the timer API - you may try alsa-lib/test/timer.c:
./timer class=3 card=0 device=0 subdevice=0
How does this timer depend on the PCM clock? Is its wakeup granularity dependent on the period parameters of the matching PCM device? Or am I supposed to first initialize the PCM, choose some period parameters the hw likes, and then pass that on to the timer subsystem?
I assume I don't have any guarantee that all ALSA devices have such a timer attached? So I'd need some major, non-trivial fallback code if I make use of these timers?
The background why I want this is this: as mentioned, I am now scheduling audio in PA mostly based on system timers. To be able to do that I need to be able to translate timespans from the sound card clock to the system clock, which requires me to get the sample time from the sound card from time to time and filter it through some code that estimates how the sound card clock and the system clock deviate. I'd prefer to do that only once or maybe twice every time the playback buffer is fully played, and only shortly after an IRQ happened, under the assumption that this is the best time to get the most accurate timing information from the sound card.
It's not really necessary. You can use only one timing source (system timer) and use position timestamps to do corrections.
Position timestamps? You mean status->tstamp, right? I'd like to use that. But this still has two problems:
- As mentioned, CLOCK_MONOTONIC support is still missing in ALSA(-lib)
Yes, but it will be added. Nobody else has had this requirement until now.
- I'd like to correct my estimations as quickly as possible, i.e. as soon as a new update is available, and not only when I ask for it. So basically, I want to be able to sleep in a poll() for timing updates.
It is not necessary. It's always good to keep defined behaviour (e.g. use system timers for the "decide" times) and use snd_pcm_delay() to get the actual ring buffer position. More task wakeups mean keeping the CPU busier.
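I.e. something like this (a sketch; update_clock_estimate() is just a placeholder for whatever filter you use):

    struct timespec now;
    snd_pcm_sframes_t delay;

    clock_gettime(CLOCK_MONOTONIC, &now);
    if (snd_pcm_delay(pcm, &delay) == 0) {
        /* 'delay' is the distance in frames between the application pointer and
         * what is currently being played; together with the frames written so
         * far it gives the card's playback time for this system timestamp */
        update_clock_estimate(&now, delay);
    }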
But your example does not explain why you don't move the r/w pointer in the ring buffer (use mmap_commit), and thus why you don't fulfill the avail_min requirement for the poll wakeup. It seems to me that you're trying to do some crazy things with the ring buffer which are not allowed.
As mentioned, when PA starts up it configures the audio hw buffer to 2s or so with the minimal number of periods (2 on my sound cards). Then, clients come and go. Depending on what the minimal latency constraints of the clients are, however, I will only fill up part of the buffer.
Scenario #1:
Only one simple MP3-playing music application is connected. It doesn't have any real latency constraints. We always fill up the whole 2s buffer, then sleep for 1990 ms, and then fill it up again, and so on. If the MP3 player pauses or seeks, we rewrite the audio buffer with _rewind(). Thus, although we buffer two full seconds, the user interface still reacts snappily.
Now, because the user starts and stops applications all the time, we dynamically change into scenario #2:
The MP3-playing application is still running. However, now a VoIP application is running too. It wants a worst-case latency of, let's say, 20 ms. When this application starts up we don't want to interrupt playback of the MP3 application. So from now on we only use 20 ms of the previously configured 2s hw buffer. And as soon as we have written 20 ms, we sleep for 10 ms, and then fill it up again, and so on.
Now, after a while the VoIP call is over, we enter scenario #3:
This is identical to #1, we again use the full 2s hw buffer, and sleep for 1990ms.
So, depending on what clients are connected, we dynamically change the wakeups. Now, on ALSA (and with a lot of sound hw, as I understood Takashi) you cannot reconfigure the period sizes dynamically without interruptions of audio output. That's why I want to disable the whole period/buffer fill level management of ALSA, and do it all myself with system timers, which I thankfully now can due to the advent of hrtimers (at least on Linux/x86). System timers nowadays are a lot more flexible than the PCM timer, because they can be reconfigured all the time without any drawbacks. They are not dependent on period sizes or other stuff which may only be reconfigured by resetting the audio devices. The only drawback is that we need to determine how the sound card clock and the system clock deviate.
Does that make sense to you?
Yes, but I don't see any problem with changing avail_min dynamically (sw_params can be changed at any time), so each interrupt can be caught via poll(). But I really think that one timing source (system timers) is enough.
I think that the unused sound card interrupts are not a big problem in this case (CPU usage etc.).
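E.g. (a sketch, error handling omitted; period_size stands for whatever wakeup granularity is currently needed):

    snd_pcm_sw_params_t *sw;
    snd_pcm_sw_params_alloca(&sw);

    /* wake up on every period while a low-latency client is connected */
    snd_pcm_sw_params_current(pcm, sw);
    snd_pcm_sw_params_set_avail_min(pcm, sw, period_size);
    snd_pcm_sw_params(pcm, sw);

    /* ... later, suppress poll() wakeups again (the INT_MAX trick) */
    snd_pcm_sw_params_current(pcm, sw);
    snd_pcm_sw_params_set_avail_min(pcm, sw, INT_MAX);
    snd_pcm_sw_params(pcm, sw);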
Jaroslav
-----
Jaroslav Kysela <perex@perex.cz>
Linux Kernel Sound Maintainer
ALSA Project, Red Hat, Inc.
On Tue, 08.01.08 09:00, Jaroslav Kysela (perex@perex.cz) wrote:
So, depending on what clients are connected, we dynamically change the wakeups. Now, on ALSA (and with a lot of sound hw, as I understood Takashi) you cannot reconfigure the period sizes dynamically without interruptions of audio output. That's why I want to disable the whole period/buffer fill level management of ALSA, and do it all myself with system timers, which I thankfully now can due to the advent of hrtimers (at least on Linux/x86). System timers nowadays are a lot more flexible than the PCM timer, because they can be reconfigured all the time without any drawbacks. They are not dependent on period sizes or other stuff which may only be reconfigured by resetting the audio devices. The only drawback is that we need to determine how the sound card clock and the system clock deviate.
Does that make sense to you?
Yes, but I don't see any problem with changing avail_min dynamically (sw_params can be changed at any time), so each interrupt can be caught via poll(). But I really think that one timing source (system timers) is enough.
OK. I wasn't really aware that you could change sw params dynamically. I will do this now. This should be sufficient for my needs.
Lennart
Lennart Poettering wrote:
Hi!
In PulseAudio I want to schedule on my own when I need to write audio data into the device and when not. To achieve that I want to be notified via poll() whenever a period boundary is passed (i.e. when an IRQ happens), but only then. That's different from the usual mode where you are notified via poll() whether there is space in the playback buffer that needs to be filled up.
On OSS the mmap() mode enables a mode like I described above. After enabling mmap() the application can decide by itself what it considers full and what empty in the DMA buffer, and use GETOPTR to query the playback position. poll() on the OSS fd will directly reflect the sound card IRQs and is not influenced by whether you ever wrote data to the device or not.
I assume that I can enable a mode like that with one of the SW params. But quite frankly the docs for it are not enlightening at all.
Lennart
What would you want to do that for? Surely you just want to be told "I need X samples now please", and that is what the current alsa poll/callback method does.
James
On Mon, 07.01.08 18:55, James Courtier-Dutton (James@superbug.co.uk) wrote:
Lennart Poettering wrote:
Hi!
In PulseAudio I want to schedule on my own when I need to write audio data into the device and when not. To achieve that I want to be notified via poll() whenever a period boundary is passed (i.e. when an IRQ happens), but only then. That's different from the usual mode where you are notified via poll() whether there is space in the playback buffer that needs to be filled up.
On OSS the mmap() mode enables a mode like I described above. After enabling mmap() the application can decide by itself what it considers full and what empty in the DMA buffer, and use GETOPTR to query the playback position. poll() on the OSS fd will directly reflect the sound card IRQs and is not influenced by whether you ever wrote data to the device or not.
I assume that I can enable a mode like that with one of the SW params. But quite frankly the docs for it are not enlightening at all.
Lennart
What would you want to do that for? Surely you just want to be told "I need X samples now please", and that is what the current alsa poll/callback method does.
Not so "surely". In PA I want to schedule the wakeup frequency dynamically, based on the strongest requirement of all connected clients. To achieve that I configure ALSA to use a large (2s) hw playback buffer, and then want to disable sound card interrupts (except for timekeeping) and schedule everything with system timers, because those I can reconfigure without having to fully reset the audio device, and thus without getting any drop-outs.
In effect, as long as the user just plays an MP3 or so, the system will wake up only every 2s or so and I will make use of the full hw playback buffer I previously configured, and thus save power. However, as soon as a VoIP application with stronger latency requirements connects, I set my wakeups (with system timers) more often and only use a smaller part of the large hw audio buffer.
Since ALSA doesn't allow me to reconfigure the audio interrupt frequency dynamically during playback without having to reset the device, I use those system timers. And because I do use those, I don't have much use for the buffer fill level management of ALSA, because it is almost always wrong: it doesn't know anything about my current latency constraints.
Now, by setting snd_pcm_sw_params_set_avail_min() to something perversely huge I can make sure that ALSA never wakes me up. However, that also has the effect that I can no longer use the sound card IRQ for getting the most accurate timing information from the sound card.
Which sucks.
I only want to use notification via poll() for keeping time, I don't want ALSA's buffer management.
Lennart
Lennart Poettering wrote:
On Mon, 07.01.08 18:55, James Courtier-Dutton (James@superbug.co.uk) wrote:
Lennart Poettering wrote:
Hi!
In PulseAudio I want to schedule on my own when I need to write audio data into the device and when not. To achieve that I want to be notified via poll() whenever a period boundary is passed (i.e. when an IRQ happens), but only then. That's different from the usual mode where you are notified via poll() whether there is space in the playback buffer that needs to be filled up.
On OSS the mmap() mode enables a mode like I described above. After enabling mmap() the application can decide by itself what it considers full and what empty in the DMA buffer, and use GETOPTR to query the playback position. poll() on the OSS fd will directly reflect the sound card IRQs and is not influenced by whether you ever wrote data to the device or not.
I assume that I can enable a mode like that with one of the SW params. But quite frankly the docs for it are not enlightening at all.
Lennart
What would you want to do that for? Surely you just want to be told "I need X samples now please", and that is what the current alsa poll/callback method does.
Not so "surely". In PA I want to schedule the wakeup frequency dynamically, based on the strongest requirement of all connected clients. To achieve that I configure ALSA to use a large (2s) hw playback buffer, and then want to disable sound card interrupts (except for timekeeping) and schedule everything with system timers, because those I can reconfigure without having to fully reset the audio device, and thus without getting any drop-outs.
In effect, as long as the user just plays an MP3 or so, the system will wake up only every 2s or so and I will make use of the full hw playback buffer I previously configured, and thus save power. However, as soon as a VoIP application with stronger latency requirements connects, I set my wakeups (with system timers) more often and only use a smaller part of the large hw audio buffer.
Since ALSA doesn't allow me to reconfigure the audio interrupt frequency dynamically during playback without having to reset the device, I use those system timers. And because I do use those, I don't have much use for the buffer fill level management of ALSA, because it is almost always wrong: it doesn't know anything about my current latency constraints.
Now, by setting snd_pcm_sw_params_set_avail_min() to something perversely huge I can make sure that ALSA never wakes me up. However, that also has the effect that I can no longer use the sound card IRQ for getting the most accurate timing information from the sound card.
Which sucks.
I only want to use notification via poll() for keeping time, I don't want ALSA's buffer management.
Lennart
Are you going to be at FOMS? It might be easier to explain to you there. You are making assumptions about sound cards that might be wrong, e.g. the 2s hw buffer.
On Mon, 07.01.08 22:34, James Courtier-Dutton (James@superbug.co.uk) wrote:
Are you going to be at FOMS? It might be easier to explain to you there. You are making assumptions about sound cards that might be wrong, e.g. the 2s hw buffer.
Yes, I will be at FOMS.
I ask for 2s of hw buffer, but I take whatever ALSA gives me. If ALSA cannot fulfil this request, I am happy to take whatever I get.
Also, if clock_getres() tells me that hrtimers are not available, or when I cannot enable mmap mode for a device, I don't do system-timer-based scheduling at all; instead I fall back to using a fixed period size, the way ALSA suggests.
I am not aware that I make any assumptions of the ALSA API that are not documented, besides:
- That I can fully (as in "what is in the hw buf and still unplayed") rewind a playback buffer from the front:0, surround40:, surround50:, surround41:, surround51: and surround71: devices when I managed to open them in mmap mode.
- That snd_pcm_sw_params_set_stop_threshold(.., (snd_pcm_uframes_t) -1) disables automatic stopping on underrun.
- That snd_pcm_sw_params_set_start_threshold(.., (snd_pcm_uframes_t) -1) disables automatic starting of the device.
- That snd_pcm_sw_params_set_avail_min(.. , INT_MAX) will never mark the PCM fd as ready in poll().
- That the mixer device for a device surround51:0 is called hw:0
Which all seem pretty reasonable to me, and are based on comments from Takashi or Jaroslav on the ML, or on looking into the source code.
What other assumptions do you believe I am making that might be invalid?
Lennart