[alsa-devel] [PATCH 0/4] ASoC: Do not use core to handle msbits constraint for TI drivers
Hi,
The core implementation applies the msbits constraint to all sample sizes. These codec and CPU DAI drivers only have an msbits constraint when they are configured with a 32 bit sample size. Apply the msbits constraint in the drivers for 32 bit samples only.
Regards,
Peter

---

Peter Ujfalusi (4):
  Revert "ASoC: tlv320dac33: Use core to set the msbits constraint"
  Revert "ASoC: twl4030: Use core to set the msbits constraint"
  Revert "ASoC: omap-dmic: Use core to set the msbits constraint"
  ASoC: omap-mcpdm: Do not use core to set msbit constraint
 sound/soc/codecs/tlv320dac33.c |    3 ++-
 sound/soc/codecs/twl4030.c     |    7 +++----
 sound/soc/omap/omap-dmic.c     |    7 ++++---
 sound/soc/omap/omap-mcpdm.c    |    4 ++--
 4 files changed, 11 insertions(+), 10 deletions(-)
The core implementation applies the msbits constraint to samples where it should not. Do not ask the core to do this; the original code does the right thing.
This reverts commit 6bb4f827f52a81db10405a4363877140273d669f.
---
 sound/soc/codecs/tlv320dac33.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/sound/soc/codecs/tlv320dac33.c b/sound/soc/codecs/tlv320dac33.c
index 21ccf0a..f0aad26 100644
--- a/sound/soc/codecs/tlv320dac33.c
+++ b/sound/soc/codecs/tlv320dac33.c
@@ -806,6 +806,8 @@ static int dac33_startup(struct snd_pcm_substream *substream,
 	/* Stream started, save the substream pointer */
 	dac33->substream = substream;
 
+	snd_pcm_hw_constraint_msbits(substream->runtime, 0, 32, 24);
+
 	return 0;
 }
 
@@ -1514,7 +1516,6 @@ static struct snd_soc_dai_driver dac33_dai = {
 		.channels_max = 2,
 		.rates = DAC33_RATES,
 		.formats = DAC33_FORMATS,},
-	.sig_bits = 24,
 	.ops = &dac33_dai_ops,
 };
On Thu, Jan 19, 2012 at 07:24:13PM +0100, Peter Ujfalusi wrote:
The core implementation applies the msbits constraint to samples where it should not. Do not ask the core to do this; the original code does the right thing.
Peter, grow up. With my last patch (well, a typo-fixed version of it) userspace will do what you want.
On 01/19/2012 07:26 PM, Mark Brown wrote:
On Thu, Jan 19, 2012 at 07:24:13PM +0100, Peter Ujfalusi wrote:
The core implementation applies the msbits constraint to samples where it should not. Do not ask the core to do this; the original code does the right thing.
Peter, grow up. With my last patch (well, a typo-fixed version of it) userspace will do what you want.
If you actually read the thread, this was one of my problems. It might not bother you that we are wasting CPU cycles for nothing, but it annoys me. This is the reason we need to update the CPUs every 2 years.
Please take this series. I'm not going to allow the TI drivers to waste CPU cycles (for nothing).
On Thu, Jan 19, 2012 at 07:33:10PM +0100, Peter Ujfalusi wrote:
It might not bother you that we are wasting CPU cycles for nothing, but it annoys me. This is the reason we need to update the CPUs every 2 years.
Please take this series. I'm not going to allow the TI drivers to waste CPU cycles (for nothing).
Please take a step back (I know I'm rather annoyed and am trying to do so myself), none of this is getting us anywhere.
We've got real CPU consumption problems in ASoC - the DAPM algorithm is the biggest one, even with the work I did recently to mitigate against it it's still got the ability to explode dramatically to the point where it's user visible as the worst case is well over O(n^2). There's plenty of low hanging fruit that it'd be much better to spend time on here - as a quick example every time we start or stop a stream we do a linear search of all DAPM widgets looking for those that might be affected based on a string match in order to kick DAPM for them. That's O(n) in the number of widgets in the card plus the strcmp() costs.
We'll all get *much* more benefit from working on improvements of things like that (and on the memory consumption which isn't great either and probably manages to burn measurable CPU cycles due to cache misses) than we ever will from anything we do with this code.
Hi Mark,
Sorry for the delay, I'm traveling.
On 01/19/2012 09:07 PM, Mark Brown wrote:
Please take a step back (I know I'm rather annoyed and am trying to do so myself), none of this is getting us anywhere.
Agreed. This was not entertaining for me either. I'm sure we are just overlooking each other's point of view. I had some time to think this over with a 'clean' head. AFAIK we only place this constraint on 32 bit samples; I have not encountered any other HW limitation. What if we simply convert the msbits core support to apply the constraint to 32 bit samples only? With the current code we are not able to apply a different constraint to each sample size anyway. Locally I have added support for this, but it looks like over-engineering. I have patches for both ways, but I think we are better off if we support 32 bit samples only (simple, clean, covers 99% of use cases; well, as it is now it covers 100%).
We've got real CPU consumption problems in ASoC - the DAPM algorithm is the biggest one, even with the work I did recently to mitigate against it it's still got the ability to explode dramatically to the point where it's user visible as the worst case is well over O(n^2). There's plenty of low hanging fruit that it'd be much better to spend time on here - as a quick example every time we start or stop a stream we do a linear search of all DAPM widgets looking for those that might be affected based on a string match in order to kick DAPM for them. That's O(n) in the number of widgets in the card plus the strcmp() costs.
And things aren't getting any simpler. For sure we need to do something about this.
We'll all get *much* more benefit from working on improvements of things like that (and on the memory consumption which isn't great either and probably manages to burn measurable CPU cycles due to cache misses) than we ever will from anything we do with this code.
You know Mark, I'm an engineer. It bothers me. It bothers me more since I _know_ it is there. I just cannot help myself... Sorry about that.
On Sat, Jan 21, 2012 at 04:05:06PM +0200, Peter Ujfalusi wrote:
AFAIK we only place this constraint on 32 bit samples. I have not encountered any other HW limitation. What if we simply convert the msbits core support to apply the constraint to 32 bit samples only?
I do think we can safely drop the 8 and 16 bit entries from the array, even where there are lower accuracy devices (like basebands) I can't see anyone caring. However I think if we go and check there's also going to be some cases where 24 bit will be used due to 16-23 bit ADCs and DACs, grep turned up a few non-ASoC devices using 20 bits for example.
With the current code we are not able to apply a different constraint to each sample size anyway. Locally I have added support for this, but it looks like over-engineering.
I can't think of any hardware where that would be required; as far as I am aware this mostly comes from devices using a fixed internal resolution independent of the audio interface format.
I have patches for both ways, but I think we are better off if we support 32 bit samples only (simple, clean, covers 99% of use cases; well, as it is now it covers 100%).
The 32 bit only one is better, yes, though like I say I think we will have to add the 24 bit case back in if it gets dropped.
We've got real CPU consumption problems in ASoC - the DAPM algorithm is
And things aren't getting any simpler. For sure we need to do something about this.
Yeah, I will probably actually do something about the stream start/stop soon as it's inconvenient for the CODEC<>CODEC link stuff.
We'll all get *much* more benefit from working on improvements of things like that (and on the memory consumption which isn't great either and probably manages to burn measurable CPU cycles due to cache misses) than we ever will from anything we do with this code.
You know Mark, I'm an engineer. It bothers me. It bothers me more since I _know_ it is there. I just cannot help myself...
One way or another I've always done a lot of work on old code and code I'm not familiar with and obviously recently I've been doing a lot of review as well so I do tend to value maintainability really highly, if only from a "do unto others" point of view.
The core implementation applies the msbits constraint to samples where it should not. Do not ask the core to do this; the original code does the right thing.
This reverts commit f3da57f0fc092d338fe83cf89e3fe25ca2283bbf.
---
 sound/soc/codecs/twl4030.c |    7 +++----
 1 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/sound/soc/codecs/twl4030.c b/sound/soc/codecs/twl4030.c
index a193f5f..18e7101 100644
--- a/sound/soc/codecs/twl4030.c
+++ b/sound/soc/codecs/twl4030.c
@@ -1689,6 +1689,7 @@ static int twl4030_startup(struct snd_pcm_substream *substream,
 	struct snd_soc_codec *codec = rtd->codec;
 	struct twl4030_priv *twl4030 = snd_soc_codec_get_drvdata(codec);
 
+	snd_pcm_hw_constraint_msbits(substream->runtime, 0, 32, 24);
 	if (twl4030->master_substream) {
 		twl4030->slave_substream = substream;
 		/* The DAI has one configuration for playback and capture, so
@@ -2174,15 +2175,13 @@ static struct snd_soc_dai_driver twl4030_dai[] = {
 		.channels_min = 2,
 		.channels_max = 4,
 		.rates = TWL4030_RATES | SNDRV_PCM_RATE_96000,
-		.formats = TWL4030_FORMATS,
-		.sig_bits = 24,},
+		.formats = TWL4030_FORMATS,},
 	.capture = {
 		.stream_name = "Capture",
 		.channels_min = 2,
 		.channels_max = 4,
 		.rates = TWL4030_RATES,
-		.formats = TWL4030_FORMATS,
-		.sig_bits = 24,},
+		.formats = TWL4030_FORMATS,},
 	.ops = &twl4030_dai_hifi_ops,
 },
 {
The core implementation applies the msbits constraint to samples where it should not. Do not ask the core to do this; the original code does the right thing.
This reverts commit c50851684cf09444a77f59432f97bbdfb2617908.
---
 sound/soc/omap/omap-dmic.c |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/sound/soc/omap/omap-dmic.c b/sound/soc/omap/omap-dmic.c
index 4dcb5a7..0855c1c 100644
--- a/sound/soc/omap/omap-dmic.c
+++ b/sound/soc/omap/omap-dmic.c
@@ -113,10 +113,12 @@ static int omap_dmic_dai_startup(struct snd_pcm_substream *substream,
 
 	mutex_lock(&dmic->mutex);
 
-	if (!dai->active)
+	if (!dai->active) {
+		snd_pcm_hw_constraint_msbits(substream->runtime, 0, 32, 24);
 		dmic->active = 1;
-	else
+	} else {
 		ret = -EBUSY;
+	}
 
 	mutex_unlock(&dmic->mutex);
 
@@ -443,7 +445,6 @@ static struct snd_soc_dai_driver omap_dmic_dai = {
 		.channels_max = 6,
 		.rates = SNDRV_PCM_RATE_96000 | SNDRV_PCM_RATE_192000,
 		.formats = SNDRV_PCM_FMTBIT_S32_LE,
-		.sig_bits = 24,
 	},
 	.ops = &omap_dmic_dai_ops,
 };
The core implementation applies the msbits constraint to samples where it should not. Set the 24 msbits constraint in the omap-mcpdm driver for 32 bit samples only.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 sound/soc/omap/omap-mcpdm.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/sound/soc/omap/omap-mcpdm.c b/sound/soc/omap/omap-mcpdm.c
index 3970556..50d758a 100644
--- a/sound/soc/omap/omap-mcpdm.c
+++ b/sound/soc/omap/omap-mcpdm.c
@@ -265,6 +265,8 @@ static int omap_mcpdm_dai_startup(struct snd_pcm_substream *substream,
 
 	mutex_lock(&mcpdm->mutex);
 
+	snd_pcm_hw_constraint_msbits(substream->runtime, 0, 32, 24);
+
 	if (!dai->active) {
 		/* Enable watch dog for ES above ES 1.0 to avoid saturation */
 		if (omap_rev() != OMAP4430_REV_ES1_0) {
@@ -419,14 +421,12 @@ static struct snd_soc_dai_driver omap_mcpdm_dai = {
 		.channels_max = 5,
 		.rates = OMAP_MCPDM_RATES,
 		.formats = OMAP_MCPDM_FORMATS,
-		.sig_bits = 24,
 	},
 	.capture = {
 		.channels_min = 1,
 		.channels_max = 3,
 		.rates = OMAP_MCPDM_RATES,
 		.formats = OMAP_MCPDM_FORMATS,
-		.sig_bits = 24,
 	},
 	.ops = &omap_mcpdm_dai_ops,
 };
Participants (2):
- Mark Brown
- Peter Ujfalusi