[alsa-devel] [PATCH 1/2] ASoC: Allow drivers to specify how many bits are significant on a DAI
Most devices accept data in formats that don't correspond directly to their internal format. ALSA allows us to set an msbits constraint which tells userspace about this in case it finds it useful (for example, in order to avoid wasting effort dithering bits that will be ignored when raising the sample size of data), so provide a mechanism for drivers to specify the number of bits that are actually significant on a DAI and add the appropriate constraints along with all the others.
This is done slightly awkwardly as the constraint is specified per sample size - we loop over every possible sample size, including ones that the device doesn't support and including ones that have fewer bits than are actually used, but this is harmless as the upper layers do the right thing in these cases.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
---
 include/sound/soc.h |    1 +
 sound/soc/soc-pcm.c |   36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+), 0 deletions(-)
diff --git a/include/sound/soc.h b/include/sound/soc.h
index 0a56767..346a458 100644
--- a/include/sound/soc.h
+++ b/include/sound/soc.h
@@ -519,6 +519,7 @@ struct snd_soc_pcm_stream {
 	unsigned int rate_max;		/* max rate */
 	unsigned int channels_min;	/* min channels */
 	unsigned int channels_max;	/* max channels */
+	unsigned int sig_bits;		/* number of bits of content */
 };

 /* SoC audio ops */
diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
index cdc860a..8bb1793 100644
--- a/sound/soc/soc-pcm.c
+++ b/sound/soc/soc-pcm.c
@@ -63,6 +63,39 @@ static int soc_pcm_apply_symmetry(struct snd_pcm_substream *substream,
 }

 /*
+ * List of sample sizes that might go over the bus for parameter
+ * application. There ought to be a wildcard sample size for things
+ * like the DAC/ADC resolution to use but there isn't right now.
+ */
+static int sample_sizes[] = {
+	8, 16, 24, 32,
+};
+
+static void soc_pcm_apply_msb(struct snd_pcm_substream *substream,
+			      struct snd_soc_dai *dai)
+{
+	int ret, i, bits;
+
+	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+		bits = dai->driver->playback.sig_bits;
+	else
+		bits = dai->driver->capture.sig_bits;
+
+	if (!bits)
+		return;
+
+	for (i = 0; i < ARRAY_SIZE(sample_sizes); i++) {
+		ret = snd_pcm_hw_constraint_msbits(substream->runtime,
+						   0, sample_sizes[i],
+						   bits);
+		if (ret != 0)
+			dev_warn(dai->dev,
+				 "Failed to set MSB %d/%d: %d\n",
+				 bits, sample_sizes[i], ret);
+	}
+}
+
+/*
  * Called by ALSA when a PCM substream is opened, the runtime->hw record is
  * then initialized and any private data can be allocated. This also calls
  * startup for the cpu DAI, platform, machine and codec DAI.
@@ -187,6 +220,9 @@ static int soc_pcm_open(struct snd_pcm_substream *substream)
 		goto config_err;
 	}

+	soc_pcm_apply_msb(substream, codec_dai);
+	soc_pcm_apply_msb(substream, cpu_dai);
+
 	/* Symmetry only applies if we've already got an active stream. */
 	if (cpu_dai->active) {
 		ret = soc_pcm_apply_symmetry(substream, cpu_dai);
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
---
 sound/soc/codecs/wm8996.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/sound/soc/codecs/wm8996.c b/sound/soc/codecs/wm8996.c
index d8da10f..691ae4e 100644
--- a/sound/soc/codecs/wm8996.c
+++ b/sound/soc/codecs/wm8996.c
@@ -3082,6 +3082,7 @@ static struct snd_soc_dai_driver wm8996_dai[] = {
 		.channels_max = 6,
 		.rates = WM8996_RATES,
 		.formats = WM8996_FORMATS,
+		.sig_bits = 24,
 	},
 	.capture = {
 		.stream_name = "AIF1 Capture",
@@ -3089,6 +3090,7 @@ static struct snd_soc_dai_driver wm8996_dai[] = {
 		.channels_max = 6,
 		.rates = WM8996_RATES,
 		.formats = WM8996_FORMATS,
+		.sig_bits = 24,
 	},
 	.ops = &wm8996_dai_ops,
 },
@@ -3100,6 +3102,7 @@ static struct snd_soc_dai_driver wm8996_dai[] = {
 		.channels_max = 2,
 		.rates = WM8996_RATES,
 		.formats = WM8996_FORMATS,
+		.sig_bits = 24,
 	},
 	.capture = {
 		.stream_name = "AIF2 Capture",
@@ -3107,6 +3110,7 @@ static struct snd_soc_dai_driver wm8996_dai[] = {
 		.channels_max = 2,
 		.rates = WM8996_RATES,
 		.formats = WM8996_FORMATS,
+		.sig_bits = 24,
 	},
 	.ops = &wm8996_dai_ops,
 },
On 16 January 2012 18:41, Mark Brown broonie@opensource.wolfsonmicro.com wrote:
Most devices accept data in formats that don't correspond directly to their internal format. ALSA allows us to set an msbits constraint which tells userspace about this in case it finds it useful (for example, in order to avoid wasting effort dithering bits that will be ignored when raising the sample size of data), so provide a mechanism for drivers to specify the number of bits that are actually significant on a DAI and add the appropriate constraints along with all the others.
This is done slightly awkwardly as the constraint is specified per sample size - we loop over every possible sample size, including ones that the device doesn't support and including ones that have fewer bits than are actually used, but this is harmless as the upper layers do the right thing in these cases.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Btw, will you be reworking Peter's patch?
Liam
On Mon, Jan 16, 2012 at 10:26:11PM +0000, Girdwood, Liam wrote:
Btw, will you be reworking Peter's patch?
Probably, if he doesn't get around to it, though I figured that the patch is so tiny it's probably going to be less work to just write it when testing rather than to apply a patch from mail, so it'd be easier to let him do it.
On 01/16/2012 11:42 PM, Mark Brown wrote:
On Mon, Jan 16, 2012 at 10:26:11PM +0000, Girdwood, Liam wrote:
Btw, will you be reworking Peter's patch?
Probably, if he doesn't get around to it, though I figured that the patch is so tiny it's probably going to be less work to just write it when testing rather than to apply a patch from mail, so it'd be easier to let him do it.
I'll update the drivers (cpu dais, and codecs I know have this feature).
On 01/16/2012 07:41 PM, Mark Brown wrote:
 /*
+ * List of sample sizes that might go over the bus for parameter
+ * application. There ought to be a wildcard sample size for things
+ * like the DAC/ADC resolution to use but there isn't right now.
+ */
+static int sample_sizes[] = {
+	8, 16, 24, 32,
+};
+
+static void soc_pcm_apply_msb(struct snd_pcm_substream *substream,
+			      struct snd_soc_dai *dai)
+{
+	int ret, i, bits;
+
+	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+		bits = dai->driver->playback.sig_bits;
+	else
+		bits = dai->driver->capture.sig_bits;
+
+	if (!bits)
+		return;
+
+	for (i = 0; i < ARRAY_SIZE(sample_sizes); i++) {
+		ret = snd_pcm_hw_constraint_msbits(substream->runtime,
+						   0, sample_sizes[i],
+						   bits);
Should we apply the constraint only if the sample size is bigger than the msbits request?

	if (sample_sizes[i] > bits) {
		ret = snd_pcm_hw_constraint_msbits();
	}

It might not be an issue to say that we have 24msbit on the 8bit sample, but it does not sound right.
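For illustration, a rough and untested sketch of that guarded loop, reusing the sample_sizes[] table and the bits value from the patch above:

	for (i = 0; i < ARRAY_SIZE(sample_sizes); i++) {
		/* Skip widths that cannot carry more than 'bits' bits anyway */
		if (sample_sizes[i] <= bits)
			continue;

		ret = snd_pcm_hw_constraint_msbits(substream->runtime,
						   0, sample_sizes[i],
						   bits);
		if (ret != 0)
			dev_warn(dai->dev,
				 "Failed to set MSB %d/%d: %d\n",
				 bits, sample_sizes[i], ret);
	}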
+		if (ret != 0)
+			dev_warn(dai->dev,
+				 "Failed to set MSB %d/%d: %d\n",
+				 bits, sample_sizes[i], ret);
+	}
+}
+
+/*
  * Called by ALSA when a PCM substream is opened, the runtime->hw record is
  * then initialized and any private data can be allocated. This also calls
  * startup for the cpu DAI, platform, machine and codec DAI.
@@ -187,6 +220,9 @@ static int soc_pcm_open(struct snd_pcm_substream *substream)
 		goto config_err;
 	}

+	soc_pcm_apply_msb(substream, codec_dai);
+	soc_pcm_apply_msb(substream, cpu_dai);
+
 	/* Symmetry only applies if we've already got an active stream. */
 	if (cpu_dai->active) {
 		ret = soc_pcm_apply_symmetry(substream, cpu_dai);
On Tue, Jan 17, 2012 at 09:55:23AM +0100, Peter Ujfalusi wrote:
Peter, I shouldn't have to remind you to delete unneeded context from your replies.
Should we apply the constraint only if the sample size is bigger than the msbits request? if (sample_sizes[i] > bits) { ret = snd_pcm_hw_constraint_msbits(); }
It might not be an issue to say that we have 24msbit on the 8bit sample, but it does not sound right.
It shouldn't hurt and it is potentially useful to the application to know that things will be converted up by the hardware; unless there's a great reason to do so I'd rather not hide the information.
On 01/17/2012 12:38 PM, Mark Brown wrote:
It might not be an issue to say that we have 24msbit on the 8bit sample, but it does not sound right.
It shouldn't hurt and it is potentially useful to the application to know that things will be converted up by the hardware; unless there's a great reason to do so I'd rather not hide the information.
I can only speak on behalf of OMAP, twl4030 and tlv320dac33 here, but the 24msbit only applies to 32bit samples. In the case of 16 bit the samples are not converted in any way; they are processed as 16 bit data. So if we say 24msbit for a 16bit sample we are not correct. It is correct for a 32bit sample. I would think most of the codecs/cpus work in the same way.
On Tue, Jan 17, 2012 at 02:06:44PM +0100, Peter Ujfalusi wrote:
I can only speak on behalf of OMAP, twl4030 and tlv320dac33 here, but the 24msbit only applies to 32bit samples. In the case of 16 bit the samples are not converted in any way; they are processed as 16 bit data. So if we say 24msbit for a 16bit sample we are not correct. It is correct for a 32bit sample. I would think most of the codecs/cpus work in the same way.
For the CODECs if you look at what they're doing you'll probably find that the device is actually operating at a fixed sample size internally and converting the data somehow at the interface (zero extension being one option when converting up, but more fancy approaches are also possible). This is fairly obvious when you think about how things are likely to be implemented in hardware, it's going to increase complexity a lot to be able to genuinely switch the entire chip from one sample size to another.
On the CPU side specifying significant bits would normally only be appropriate on PDM interfaces as they have most of a DAC or ADC in them to move between the sampled and PDM formats. I'd be surprised to see anything else setting these flags, most of the hardware is just passing the data straight through.
On 01/17/2012 02:19 PM, Mark Brown wrote:
For the CODECs if you look at what they're doing you'll probably find that the device is actually operating at a fixed sample size internally and converting the data somehow at the interface (zero extension being one option when converting up, but more fancy approaches are also possible). This is fairly obvious when you think about how things are likely to be implemented in hardware, it's going to increase complexity a lot to be able to genuinely switch the entire chip from one sample size to another.
It is mostly true. DAC33 can be configured to operate internally 16bit or 24msbit/32bit for example. But none of this really matters for the application. If the codec works internally with 32/64 or whatever, it does not matter for the application. What matters is that if it sends data in 32bit, only the 24msb are actually going to be taken and the rest will be ignored at the interface level. What happens within the codec is out of scope. Applications like PulseAudio can do digital volume control. For them it is important to know if they can operate on the full 32 bit, or only the 24msbit will be taken into account by the HW at interface level. It does not give any useful information for application that the codec will upsample the 16bit data internally to 24 bits. It does not really matter for them since all 16bit data will be used by the codec.
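As a rough illustration (not actual PulseAudio code; the helper name and fallback policy here are made up), an application could pick the hint up from alsa-lib after hw_params and size its gain math from it:

	#include <alsa/asoundlib.h>

	/* Hypothetical helper: how many bits of the configured format the
	 * hardware will actually pay attention to. */
	static int effective_bits(snd_pcm_t *pcm)
	{
		snd_pcm_hw_params_t *params;
		snd_pcm_format_t format;
		int width, sbits;

		snd_pcm_hw_params_alloca(&params);
		if (snd_pcm_hw_params_current(pcm, params) < 0)
			return -1;

		snd_pcm_hw_params_get_format(params, &format);
		width = snd_pcm_format_width(format);		/* e.g. 32 for S32_LE */
		sbits = snd_pcm_hw_params_get_sbits(params);	/* msbits hint, e.g. 24 */

		if (sbits <= 0)
			return width;
		return sbits;	/* apply digital gain at this resolution */
	}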
On the CPU side specifying significant bits would normally only be appropriate on PDM interfaces as they have most of a DAC or ADC in them to move between the sampled and PDM formats. I'd be surprised to see anything else setting these flags, most of the hardware is just passing the data straight through.
True, the CPU side mostly passes the data as it is; it does not care about msbits. For McPDM it is different since the internal FIFO uses 24bit long word lines, so if the application would use all 32bit it will lose the 8bit lsb. This can make a difference for PA when applying the digital gain in SW.
On Tue, Jan 17, 2012 at 03:18:35PM +0100, Peter Ujfalusi wrote:
It is mostly true. DAC33 can be configured to operate internally 16bit or 24msbit/32bit for example.
Well, if it's doing something more complicated that doesn't fit in the framework then it shouldn't be doing that.
It does not give any useful information for application that the codec will upsample the 16bit data internally to 24 bits. It does not really matter for them since all 16bit data will be used by the codec.
Oh, I dunno - I'm sure someone could think of a use for it.
On the CPU side specifying significant bits would normally only be appropriate on PDM interfaces as they have most of a DAC or ADC in them to move between the sampled and PDM formats. I'd be surprised to see anything else setting these flags, most of the hardware is just passing the data straight through.
True, the CPU side mostly passes the data as it is; it does not care about msbits. For McPDM it is different since the internal FIFO uses 24bit long word lines, so if the application would use all 32bit it will lose the
Right, like I say that's because it's got most of a DAC in it.
8bit lsb. This can make a difference for PA when applying the digital gain in SW.
Well, it saves it a bit of effort but that's about it.
On 01/17/2012 03:56 PM, Mark Brown wrote:
On Tue, Jan 17, 2012 at 03:18:35PM +0100, Peter Ujfalusi wrote:
It is mostly true. DAC33 can be configured to operate internally 16bit or 24msbit/32bit for example.
Well, if it's doing something more complicated that doesn't fit in the framework then it shouldn't be doing that.
What do you mean? If the user plays 16bit audio we configure the codec in 16bit mode. If it is opened in 24msbit/32 mode it is configured accordingly.
It does not give any useful information for application that the codec will upsample the 16bit data internally to 24 bits. It does not really matter for them since all 16bit data will be used by the codec.
Oh, I dunno - I'm sure someone could think of a use for it.
Sure it might be good to know, but it is something we do not have control over. There's a datasheet if someone is interested. Even if you could select the algorithm for the in-HW resampling it could be configured via kcontrol, or via qos. For the application it does not matter.
True, the CPU side mostly passes the data as it is; it does not care about msbits. For McPDM it is different since the internal FIFO uses 24bit long word lines, so if the application would use all 32bit it will lose the
Right, like I say that's because it's got most of a DAC in it.
The McPDM does not have a codec; the internal FIFO has this layout, which dictates the 24msbit. It just cuts the 8lsb.
8bit lsb. This can make a difference for PA when applying the digital gain in SW.
Well, it saves it a bit of effort but that's about it.
This is not the point. If it does its internal digital volume on the full 32bit resolution, from which the HW just discards 8bits, we will lose resolution. If PA knows that out of the 32bit only the 24 msbits are going to be used it can apply the gain correctly. If we tell PA that the codec internally works in 24bit, and we play 16bit audio (in 16bit mode), PA needs to apply the gain in 16bit resolution. The information about the codec working internally in 24bit is irrelevant.
On Tue, Jan 17, 2012 at 05:08:02PM +0100, Peter Ujfalusi wrote:
On 01/17/2012 03:56 PM, Mark Brown wrote:
Well, if it's doing something more complicated that doesn't fit in the framework then it shouldn't be doing that.
What do you mean? If the user plays 16bit audio we configure the codec in 16bit mode. If it is opened in 24msbit/32 mode it is configured accordingly.
The framework feature is much simpler than that: it just supports a fixed number of bits that the device uses internally regardless of what the bus carries. If the hardware actually does something substantial internally then it doesn't really fit in with that.
It does not give any useful information for application that the codec will upsample the 16bit data internally to 24 bits. It does not really matter for them since all 16bit data will be used by the codec.
Oh, I dunno - I'm sure someone could think of a use for it.
Sure it might be good to know, but it is something we do not have control over. There's a datasheet if someone is interested.
The datasheet isn't terribly machine parseable.
Even if you could select the algorithm for the in-HW resampling it could be configured via kcontrol, or via qos. For the application it does not matter.
I don't recall suggesting configuring the hardware algorithm here?
Right, like I say that's because it's got most of a DAC in it.
The McPDM does not have a codec; the internal FIFO has this layout, which dictates the 24msbit. It just cuts the 8lsb.
Look at what a PDM output actually does compared to a sampled interface and compare that to what a CODEC is doing - an awful lot of devices do the actual D/A and A/D conversions on an oversampled bitstream, which is what a PDM output is, generating that from the samples at some point in the chain.
8bit lsb. This can make a difference for PA when applying the digital gain in SW.
Well, it saves it a bit of effort but that's about it.
This is not the point. If it does its internal digital volume on the full 32bit resolution, from which the HW just discards 8bits, we will lose resolution. If PA knows that out of the 32bit only the 24 msbits are going to be used it can apply the gain correctly.
This isn't a correctness thing really, it's just that it's doing more work than it needs to because nothing is going to pay attention to the lower bits it outputs. A gain applied with 24 bits just has an output with a bit less resolution than one applied with 32 bits but it shouldn't be substantially different.
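As a toy illustration of that point (made-up numbers, standalone C, not taken from any real mixer code): scaling the full 32-bit word and then letting the interface drop the bottom 8 bits lands in essentially the same place as scaling only the 24 bits it will keep.

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		int32_t in   = 0x12345678;	/* 32-bit sample, 24 significant bits */
		int32_t full = in / 2;		/* gain of 0.5 applied at 32-bit resolution */
		int32_t at24 = (in >> 8) / 2;	/* the same gain applied at 24-bit resolution */

		/* Once the hardware discards the 8 LSBs the two results differ by
		 * at most one LSB of rounding (here they happen to be identical). */
		printf("%#x %#x\n", (unsigned int)(full >> 8), (unsigned int)at24);
		return 0;
	}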
If we tell PA that the codec internally works in 24bit, and we play 16bit audio (in 16bit mode), PA needs to apply the gain in 16bit resolution. The information about the codec working internally in 24bit is irrelevant.
I can't think why Pulse might want to use it. On the other hand it's not going to hurt to tell it.
On 01/17/2012 05:44 PM, Mark Brown wrote:
Sure it might be good to know, but it is something we do not have control over. There's a datasheet if someone is interested.
The datasheet isn't terribly machine parseable.
True, and the information that the chip internally works in 24 bit mode is irrelevant. Why would any application care, when it plays a 16bit sample, that the HW will internally convert it to 24 bit?
Even if you could select the algorithm for the in-HW resampling it could be configured via kcontrol, or via qos. For the application it does not matter.
I don't recall suggesting configuring the hardware algorithm here?
For what other reason would an application use the fact that the 16bit sample will be converted to 24bit by the HW, other than that you might want to influence the algorithm used by the HW when it does it?
This is not the point. If it does its internal digital volume on the full 32bit resolution, from which the HW just discards 8bits, we will lose resolution. If PA knows that out of the 32bit only the 24 msbits are going to be used it can apply the gain correctly.
This isn't a correctness thing really, it's just that it's doing more work than it needs to because nothing is going to pay attention to the lower bits it outputs. A gain applied with 24 bits just has an output with a bit less resolution than one applied with 32 bits but it shouldn't be substantially different.
Yeah, but it is not correct. If it does not know this we have 8bit 'latency' in gain control. Pulse can change the gain which will have no effect. But I'm not arguing against the constraint on the 32bit sample for 24msbits...
If we tell PA that the codec internally works in 24bit, and we play 16bit audio (in 16bit mode), PA needs to apply the gain in 16bit resolution. The information about the codec working internally in 24bit is irrelevant.
I can't think why Pulse might want to use it. On the other hand it's not going to hurt to tell it.
If you look at this:

Its setup is:
  stream       : PLAYBACK
  access       : RW_INTERLEAVED
  format       : S16_LE
  subformat    : STD
  channels     : 2
  rate         : 48000
  exact rate   : 48000 (48000/1)
  msbits       : 24
  buffer_size  : 24000
  period_size  : 6000
  period_time  : 125000
It just does not feel right. What if an application takes this literally and goes and applies the digital gain on a 16bit sample with 24bit resolution? I know the application needs to be fixed, but no application expects to be told that out of the 16bit they can use 24bit..
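For what it's worth, a literal-minded application could guard against that with a clamp along these lines (a hedged fragment, assuming the params and format handles from a normal hw_params setup such as the sketch earlier in the thread):

	int width = snd_pcm_format_width(format);	 /* 16 for S16_LE */
	int sbits = snd_pcm_hw_params_get_sbits(params); /* may report 24 */

	if (sbits <= 0 || sbits > width)
		sbits = width;	/* never trust msbits beyond the sample width */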
I'm not convinced that this is a good idea. We should apply the constraints on the sample sizes where it actually makes sense, IMHO.
On Tue, Jan 17, 2012 at 06:55:29PM +0100, Peter Ujfalusi wrote:
On 01/17/2012 05:44 PM, Mark Brown wrote:
The datasheet isn't terribly machine parseable.
True, and the information that the chip internally works in 24 bit mode is irrelevant. Why would any application care, when it plays a 16bit sample, that the HW will internally convert it to 24 bit?
Off the top of my head it may decide that if it happens to have 24 bit data then passing it straight through was sensible. But frankly I'm just working on the basis that if it's more effort to hide information than to provide it then providing it seems like the way forwards. I've certainly got no intention of writing any code here myself unless there's some issue found.
This is not the point. If it does its internal digital volume on the full 32bit resolution, from which the HW just discards 8bits, we will lose resolution. If PA knows that out of the 32bit only the 24 msbits are going to be used it can apply the gain correctly.
This isn't a correctness thing really, it's just that it's doing more work than it needs to because nothing is going to pay attention to the lower bits it outputs. A gain applied with 24 bits just has an output with a bit less resolution than one applied with 32 bits but it shouldn't be substantially different.
Yeah, but it is not correct. If it does not know this we have 8bit 'latency' in gain control. Pulse can change the gain which will have no effect.
Which will have the same overall effect as if it doesn't do anything based on knowing the resolution.
On 01/17/2012 07:17 PM, Mark Brown wrote:
Off the top of my head it may decide that if it happens to have 24 bit data then passing it straight through was sensible. But frankly I'm just working on the basis that if it's more effort to hide information than to provide it then providing it seems like the way forwards. I've certainly got no intention of writing any code here myself unless there's some issue found.
We have the sample formats (S16_LE, S32_LE, etc) to tell the application about the supported bit depths. They can choose to use 16bit if that is better for them, or choose 32bit. They make the decision based on the sample they are playing, configuration, whatever. It does not help them if we tell them, when they use 16bit audio: hey, but you could use 24bit. When they use 32bit, on the other hand, it makes sense to let them know that out of the 32bit only the 24msb will be used, so they can align their processing accordingly (if they care).
Yeah, but it is not correct. If it does not know this we have 8bit 'latency' in gain control. Pulse can change the gain which will have no effect.
Which will have the same overall effect as if it doesn't do anything based on knowing the resolution.
But we still 'lose' 8bits. It might not be a big loss at the end when PA for example applies the gain, but for sure we will lose resolution (8bit worth).
My only problem is saying this to the application: "out of 8/16bit you should use 24msb". AFAIK this is the meaning of the constraint. This constraint makes sense for 32bit samples: "out of 32bit you should use 24msb".
On Tue, Jan 17, 2012 at 07:51:29PM +0100, Peter Ujfalusi wrote:
Yeah, but it is not correct. If it does not know this we have 8bit 'latency' in gain control. Pulse can change the gain which will have no effect.
Which will have the same overall effect as if it doesn't do anything based on knowing the resolution.
But we still 'lose' 8bits. It might not be a big loss at the end when PA for example applies the gain, but for sure we will lose resolution (8bit worth).
Whereas if you do the gain to the lower resolution you'll discard the same amount of information...
My only problem is saying this to the application: "out of 8/16bit you should use 24msb". AFAIK this is the meaning of the constraint. This constraint makes sense for 32bit samples: "out of 32bit you should use 24msb".
Well, like I say I don't see this as a problem and have no intention of writing any code to handle that myself.