[alsa-devel] [PATCH] ASoC: PCM_RATE: Check for KNOT and CONTINUOUS flags
For ASoC, if either the CPU or CODEC driver has set the flag, the MACHINE driver should be given a chance to figure out whether the dai that set the flag can accommodate a rate it does not explicitly specify but which is specified by the dai at the other end of the link.
Signed-off-by: Jassi Brar jassi.brar@samsung.com
---
 sound/soc/soc-core.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
index 06c38d1..eb73aab 100644
--- a/sound/soc/soc-core.c
+++ b/sound/soc/soc-core.c
@@ -404,6 +404,12 @@ static int soc_pcm_open(struct snd_pcm_substream *substream)
 			codec_dai->playback.formats & cpu_dai->playback.formats;
 		runtime->hw.rates =
 			codec_dai->playback.rates & cpu_dai->playback.rates;
+		if (codec_dai->playback.rates
+			   & (SNDRV_PCM_RATE_KNOT | SNDRV_PCM_RATE_CONTINUOUS))
+			runtime->hw.rates |= cpu_dai->playback.rates;
+		if (cpu_dai->playback.rates
+			   & (SNDRV_PCM_RATE_KNOT | SNDRV_PCM_RATE_CONTINUOUS))
+			runtime->hw.rates |= codec_dai->playback.rates;
 	} else {
 		runtime->hw.rate_min =
 			max(codec_dai->capture.rate_min,
@@ -421,6 +427,12 @@ static int soc_pcm_open(struct snd_pcm_substream *substream)
 			codec_dai->capture.formats & cpu_dai->capture.formats;
 		runtime->hw.rates =
 			codec_dai->capture.rates & cpu_dai->capture.rates;
+		if (codec_dai->capture.rates
+			   & (SNDRV_PCM_RATE_KNOT | SNDRV_PCM_RATE_CONTINUOUS))
+			runtime->hw.rates |= cpu_dai->capture.rates;
+		if (cpu_dai->capture.rates
+			   & (SNDRV_PCM_RATE_KNOT | SNDRV_PCM_RATE_CONTINUOUS))
+			runtime->hw.rates |= codec_dai->capture.rates;
 	}
 
 	snd_pcm_limit_hw_rates(runtime);
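For illustration only (this is not part of the patch): a CPU or CODEC DAI opts into the new path above simply by setting one of the two flags in its advertised rate mask. A minimal sketch, assuming the later snd_soc_dai_driver layout (kernels of this era keep the same snd_soc_pcm_stream fields directly in struct snd_soc_dai) and a hypothetical "example-i2s" DAI:

#include <sound/pcm.h>
#include <sound/soc.h>

static struct snd_soc_dai_driver example_cpu_dai = {
	.name = "example-i2s",			/* hypothetical name */
	.playback = {
		.channels_min	= 2,
		.channels_max	= 2,
		/* the standard rates, plus "anything else derivable from MCLK" */
		.rates		= SNDRV_PCM_RATE_8000_96000 |
				  SNDRV_PCM_RATE_CONTINUOUS,
		.rate_min	= 8000,
		.rate_max	= 96000,
		.formats	= SNDRV_PCM_FMTBIT_S16_LE,
	},
};

With CONTINUOUS (or KNOT) set on either side of the link, soc_pcm_open() now ORs in the other DAI's rate bits instead of silently dropping them.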
On Fri, 2010-03-12 at 13:38 +0900, Jassi Brar wrote:
For ASoC, if either the CPU or CODEC driver has set the flag, the MACHINE driver should be given a chance to figure out whether the dai that set the flag can accommodate a rate it does not explicitly specify but which is specified by the dai at the other end of the link.
Signed-off-by: Jassi Brar jassi.brar@samsung.com
Acked-by: Liam Girdwood lrg@slimlogic.co.uk
On Fri, Mar 12, 2010 at 01:38:52PM +0900, Jassi Brar wrote:
For ASoC, if either the CPU or CODEC driver has set the flag, the MACHINE driver should be given a chance to figure out whether the dai that set the flag can accommodate a rate it does not explicitly specify but which is specified by the dai at the other end of the link.
Signed-off-by: Jassi Brar jassi.brar@samsung.com
Applied, thanks.
We'll have to work out how to manage the sharing of responsibility for the clock configuration between the component drivers and the machine drivers. My first thought is to have the component drivers provide functions machine drivers can use if they want to. It might also be desirable to allow machine drivers to suppress these flags if they just want to use standard rates, though IIRC the core does the right thing anyway.
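As a sketch of that "suppress these flags and stick to standard rates" option: a machine driver can already narrow the runtime to an explicit rate list from its .startup op. Everything below (names and the rate list) is hypothetical; only the standard ALSA constraint API is used, and struct details such as const-ness of the list vary between kernel versions:

#include <linux/kernel.h>
#include <sound/pcm.h>
#include <sound/soc.h>

static const unsigned int std_rates[] = {
	8000, 16000, 32000, 44100, 48000, 96000,
};

static const struct snd_pcm_hw_constraint_list std_rate_list = {
	.count	= ARRAY_SIZE(std_rates),
	.list	= std_rates,
};

static int example_machine_startup(struct snd_pcm_substream *substream)
{
	/* Keep only the listed rates, even if one of the DAIs
	 * advertised CONTINUOUS or KNOT. */
	return snd_pcm_hw_constraint_list(substream->runtime, 0,
					  SNDRV_PCM_HW_PARAM_RATE,
					  &std_rate_list);
}

static struct snd_soc_ops example_machine_ops = {
	.startup = example_machine_startup,
};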
On Fri, Mar 12, 2010 at 8:16 PM, Mark Brown broonie@opensource.wolfsonmicro.com wrote:
On Fri, Mar 12, 2010 at 01:38:52PM +0900, Jassi Brar wrote:
For ASoC, if either the CPU or CODEC driver has set the flag, the MACHINE driver should be given a chance to figure out whether the dai that set the flag can accommodate a rate it does not explicitly specify but which is specified by the dai at the other end of the link.
Signed-off-by: Jassi Brar jassi.brar@samsung.com
Applied, thanks.
We'll have to work out how to manage the sharing of responsibility for the clock configuration between the component drivers and the machine drivers. My first thought is to have the component drivers provide functions machine drivers can use if they want to. It might also be desirable to allow machine drivers to suppress these flags if they just want to use standard rates, though IIRC the core does the right thing anyway.
Usually even non-standard rates, those not explicitly mentioned in the chip's manual, can be generated given a suitable clock source, and that clock source usually depends closely on the board/machine. That suggests having the MACHINE driver decide which rates are supported (as only it knows the cpu/codec dais and the source of clocks). But functions seem overkill; the machine driver could already extract the supported rates from the cpu_dai and codec_dai members of the dai_link.
So, imho, the rates specified by dais should be used only by the machine driver, which, after considering the design and purpose of the device, provides a list of supported rates to ASoC. Of course, soc-core.c would need to be modified.
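To illustrate that last point, a rough sketch of a machine driver looking at both ends' advertised masks from its own .startup op. The names are hypothetical, and the field layout (rtd->dai->cpu_dai, dai->playback.rates) follows the 2.6.3x structures used by the patch above; later kernels move these fields around:

#include <linux/kernel.h>
#include <sound/soc.h>

static int example_machine_startup(struct snd_pcm_substream *substream)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	struct snd_soc_dai *cpu_dai = rtd->dai->cpu_dai;
	struct snd_soc_dai *codec_dai = rtd->dai->codec_dai;
	unsigned int rates;

	/* Both DAIs' advertised rate masks are visible here; the machine
	 * driver could combine them however the board's clocking allows.
	 * Here, simply their intersection. */
	rates = cpu_dai->playback.rates & codec_dai->playback.rates;
	pr_debug("link playback rate mask: %#x\n", rates);

	return 0;
}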
On Fri, Mar 12, 2010 at 09:32:14PM +0900, jassi brar wrote:
Usually even non-standard rates, those not explicitly mentioned in the chip's manual, can be generated given a suitable clock source, and that clock source usually depends closely on the board/machine.
Immediately catastrophic failure at non-standard sample rates is unusual, but that doesn't mean that devices are going to run well or reliably if driven out of spec. This is especially true with modern devices where you've got a reasonable amount of DSP going on in the device, since that tends to involve sample-rate-dependent coefficients which degrade performance (often substantially, sometimes in a signal-dependent fashion) when misconfigured.
That suggests having the MACHINE driver decide which rates are supported (as only it knows the cpu/codec dais and the source of clocks).
You've got to remember that most machine driver developers don't have any current understanding of audio clocking requirements and often find it's more trouble than it's worth to get up to speed with them, never mind that without specialised test equipment or an expert ear it is hard to assess the impact on performance of the configuration decisions that have been taken.
Where there are constraints it's much easier for machine drivers to specify the clocks that are being fed into the devices and have the drivers work out their own constraints as much as possible; that's something that's more directly visible and understandable to users than the resulting sample rates. Currently CODEC drivers could do more of this, though in practice the increased flexibility of modern designs is making the problem go away anyway.
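A concrete (but hypothetical) example of that division of labour: the machine driver's hw_params just tells the CODEC what clock the board feeds it and leaves the rate handling to the CODEC driver. The names, the clk_id value and the 12.288 MHz figure are assumptions; snd_soc_dai_set_sysclk() itself is standard ASoC API, and the rtd field layout again follows the kernels of this era:

#include <sound/soc.h>

#define EXAMPLE_CODEC_MCLK	0	/* hypothetical clk_id understood by the CODEC driver */

static int example_machine_hw_params(struct snd_pcm_substream *substream,
				     struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	struct snd_soc_dai *codec_dai = rtd->dai->codec_dai;

	/* Tell the CODEC driver what MCLK the board feeds it; the CODEC
	 * driver can then work out its own dividers and rate constraints. */
	return snd_soc_dai_set_sysclk(codec_dai, EXAMPLE_CODEC_MCLK,
				      12288000, SND_SOC_CLOCK_IN);
}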
But functions seem overkill; the machine driver could already extract the supported rates from the cpu_dai and codec_dai members of the dai_link.
There's currently no way to express anything except either a bitmask of the standard rates or a single continuous range, so _KNOT is potentially a bit tricky (though I think ALSA does the right thing with constraints provided by a function at startup, so we're probably OK with the existing API).
So, imho, the rates specified by dais should be used only by the machine driver, which, after considering the design and purpose of the device, provides a list of supported rates to ASoC. Of course, soc-core.c would need to be modified.
I wouldn't be completely opposed to allowing this as an option for anybody who really wants it, but I really don't think it'd be a positive move to pull the existing support out; it would at best make more work for machine drivers and I would expect it to increase the number of misconfigured Linux systems out there.
participants (4):
- Jassi Brar
- jassi brar
- Liam Girdwood
- Mark Brown