Re: [alsa-devel] [alsa-cvslog] alsa-lib: pcm - Limit the avail_min minimum size
On Tue, 20 Nov 2007, Takashi Iwai wrote:
> changeset:   2352:39d34d6a4587
> tag:         tip
> user:        tiwai
> date:        Tue Nov 20 15:29:10 2007 +0100
> files:       src/pcm/pcm.c
> description:
> pcm - Limit the avail_min minimum size
>
> Fix avail_min if it's less than period_size.  A too-small avail_min is
> simply useless and causes CPU hogging with the rate plugin.
>
> diff -r b1d1733e52f8 -r 39d34d6a4587 src/pcm/pcm.c
> --- a/src/pcm/pcm.c	Mon Nov 19 08:07:19 2007 +0100
> +++ b/src/pcm/pcm.c	Tue Nov 20 15:29:10 2007 +0100
> @@ -5577,6 +5577,12 @@ int snd_pcm_sw_params_set_avail_min(snd_
>  #endif
>  {
>  	assert(pcm && params);
> +	/* Fix avail_min if it's below period size.  The period_size
> +	 * defines the minimal wake-up timing accuracy, so it doesn't
> +	 * make sense to set below that.
> +	 */
> +	if (val < pcm->period_size)
> +		val = pcm->period_size;
>  	params->avail_min = val;
>  	return 0;
>  }
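In application terms, the clamp above means that a request for a sub-period avail_min is silently raised to one period. A minimal sketch against the standard sw_params API (the device name "default", the rate, and the frame counts are illustrative, and most error checking is omitted; build with gcc -o sketch sketch.c -lasound):

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
	snd_pcm_t *pcm;
	snd_pcm_hw_params_t *hw;
	snd_pcm_sw_params_t *sw;
	snd_pcm_uframes_t period = 1024, avail_min;
	unsigned int rate = 48000;

	if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
		return 1;

	/* Ordinary hw_params setup, so that period_size is known
	 * before the sw_params call. */
	snd_pcm_hw_params_alloca(&hw);
	snd_pcm_hw_params_any(pcm, hw);
	snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
	snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
	snd_pcm_hw_params_set_channels(pcm, hw, 2);
	snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
	snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, 0);
	snd_pcm_hw_params(pcm, hw);

	snd_pcm_sw_params_alloca(&sw);
	snd_pcm_sw_params_current(pcm, sw);
	/* Ask for a wake-up granularity finer than one period ... */
	snd_pcm_sw_params_set_avail_min(pcm, sw, period / 4);
	/* ... and read back what the library actually stored. */
	snd_pcm_sw_params_get_avail_min(sw, &avail_min);
	printf("requested %lu, got %lu\n",
	       (unsigned long)(period / 4), (unsigned long)avail_min);
	snd_pcm_sw_params(pcm, sw);

	snd_pcm_close(pcm);
	return 0;
}

With the patch applied, the read-back value is the period size; before it, the smaller request was stored as-is.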
I think that this patch is wrong. We may use system timers to improve (fine-tune) the "interrupt" latencies for PCM streams - see the tick time handling in the driver and the library.
Jaroslav
-----
Jaroslav Kysela <perex@perex.cz>
Linux Kernel Sound Maintainer
ALSA Project
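For reference, the tick mechanism Jaroslav refers to was exposed through the sleep_min sw_param: with a non-zero tick count the driver arms the system timer, so wake-ups are not bound to the period interrupt. A fragment sketching that configuration, reusing pcm, sw, and period from the sketch above (sleep_min existed in alsa-lib of this era but was deprecated in later releases, so treat this as a historical sketch):

/* Timer-based fine wake-ups via sleep_min ticks.  With sleep_min > 0,
 * an avail_min below period_size could be honored between period
 * interrupts -- exactly the case the new clamp forbids.
 */
snd_pcm_sw_params_current(pcm, sw);
snd_pcm_sw_params_set_sleep_min(pcm, sw, 1);          /* wake on every system tick */
snd_pcm_sw_params_set_avail_min(pcm, sw, period / 4); /* finer than one period */
snd_pcm_sw_params(pcm, sw);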
At Tue, 20 Nov 2007 20:36:16 +0100 (CET), Jaroslav Kysela wrote:
> On Tue, 20 Nov 2007, Takashi Iwai wrote:
> > changeset: 2352:39d34d6a4587
> > description: pcm - Limit the avail_min minimum size
> >
> > Fix avail_min if it's less than period_size.  A too-small avail_min
> > is simply useless and causes CPU hogging with the rate plugin.
>
> I think that this patch is wrong. We may use system timers to improve
> (fine-tune) the "interrupt" latencies for PCM streams - see the tick
> time handling in the driver and the library.
The sleep_min is conceptually a misdesign. If we can have a finer irq source, then what is the purpose of the "period" at all?

For the apps, it doesn't matter what the damn timer or irq source is. The only question for an app is how it can get fine and accurate timing. If the "period" defines the minimum latency, then it's clear. This is the definition that people understand.

Seriously, let's stop adding more confusion. We already have too many double definitions. If the timer is useful for improving latency, then let's implement it on the driver side and not bother the apps.
(And, above all, sleep_min won't help in this case -- dmix + rate cannot use it properly.)
Takashi
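To illustrate the model Takashi argues for, no extra timer is needed on the application side: avail_min stays at one period and the app simply blocks until a period boundary. A minimal playback-loop sketch, assuming pcm and period are configured as in the first sketch and silence points to one period of interleaved frames (the function name and buffer handling are illustrative):

#include <alsa/asoundlib.h>

/* Period-driven wake-ups: with avail_min == period_size, snd_pcm_wait()
 * puts the app to sleep until roughly the next period interrupt, which
 * is the minimum wake-up accuracy the "period" defines.
 */
static int play_loop(snd_pcm_t *pcm, snd_pcm_uframes_t period,
                     const short *silence)
{
	for (;;) {
		int err = snd_pcm_wait(pcm, -1);   /* block until >= avail_min free */
		if (err < 0)
			return err;
		snd_pcm_sframes_t n = snd_pcm_writei(pcm, silence, period);
		if (n == -EPIPE) {                 /* underrun: recover and go on */
			snd_pcm_prepare(pcm);
			continue;
		}
		if (n < 0)
			return (int)n;
	}
}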