On 01/17/2012 05:44 PM, Mark Brown wrote:
Sure, it might be good to know, but it is something we do not have control over. There's a datasheet if someone is interested.
The datasheet isn't terribly machine parseable.
True, and the information that the chip internally works in 24-bit mode is irrelevant. Why would any application care, when it plays a 16-bit sample, that the HW will internally convert it to 24 bits?
Even if you could select the algorithm for the in-hardware resampling, it could be configured via a kcontrol or via QoS. For the application it does not matter.
I don't recall suggesting configuring the hardware algorithm here?
For what other reason would an application use the fact that a 16-bit sample will be converted to 24 bits by the HW, other than that you might want to influence the algorithm the HW uses when it does so?
That is not the point. If it does its internal digital volume at the full 32-bit resolution, from which the HW just discards 8 bits, we will lose resolution. If PA knows that out of the 32 bits only the 24 most significant bits are going to be used, it can apply the gain correctly.
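(Just to illustrate what I mean, not something PA does today: if the gain is applied on the 32-bit container but then truncated to the 24 significant bits, a gain step that would only touch the discarded low byte is visible as a no-op on our side instead of silently vanishing in the hardware. The helper name and the masking below are only a sketch.)

#include <stdint.h>

/* Sketch: apply a linear gain to an S32 sample when the hardware will
 * throw away the low 8 bits (msbits = 24). */
static int32_t apply_gain_s32_msb24(int32_t sample, double gain)
{
	int64_t scaled = (int64_t)(sample * gain);

	/* clamp to the S32 range */
	if (scaled > INT32_MAX)
		scaled = INT32_MAX;
	if (scaled < INT32_MIN)
		scaled = INT32_MIN;

	/* keep only the 24 most significant bits; the hardware ignores
	 * the rest anyway */
	return (int32_t)scaled & ~0xff;
}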
This isn't a correctness thing really, it's just that it's doing more work than it needs to because nothing is going to pay attention to the lower bits it outputs. A gain applied with 24 bits just has an output with a bit less resolution than one applied with 32 bits, but it shouldn't be substantially different.
Yeah, but it is not correct. If it does not know this, we have an 8-bit 'latency' in gain control: Pulse can change the gain in a way that has no effect. But I'm not arguing against the constraint on the 32-bit sample for 24 msbits...
If we tell PA that the codec internally works in 24 bits, and we play 16-bit audio (in 16-bit mode), PA needs to apply the gain at 16-bit resolution. The information about the codec working internally in 24 bits is irrelevant.
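(A sketch of the decision I have in mind, with made-up names: the precision the software volume works at is simply the smaller of the stream's sample width and the advertised significant bits, so the codec's internal 24-bit path never enters into it for a 16-bit stream.)

/* Sketch: for S16_LE this returns 16 whatever the codec does
 * internally; for S32_LE with msbits = 24 it returns 24. */
static int volume_precision(int format_width, int msbits)
{
	return msbits < format_width ? msbits : format_width;
}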
I can't think why Pulse might want to use it. On the other hand it's not going to hurt to tell it.
If you look at this:

Its setup is:
  stream       : PLAYBACK
  access       : RW_INTERLEAVED
  format       : S16_LE
  subformat    : STD
  channels     : 2
  rate         : 48000
  exact rate   : 48000 (48000/1)
  msbits       : 24
  buffer_size  : 24000
  period_size  : 6000
  period_time  : 125000
It just does not feel right. What if an application takes this literally and goes and applies the digital gain on a 16-bit sample with 24-bit resolution? I know the application would need to be fixed, but no application expects to be told that out of its 16 bits it can use 24...
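(For the sake of argument, this is roughly how an application would read that value back, assuming alsa-lib's snd_pcm_hw_params_get_sbits(); the comparison against the format width is mine.)

#include <stdio.h>
#include <alsa/asoundlib.h>

/* Sketch: after the hw_params have been installed, read back the
 * significant bits and compare them with the format width.  For the
 * S16_LE setup quoted above this reports 24 significant bits in a
 * 16-bit sample, which is what looks wrong. */
static void check_sbits(snd_pcm_t *pcm)
{
	snd_pcm_hw_params_t *params;
	snd_pcm_format_t format;
	int sbits, width;

	snd_pcm_hw_params_alloca(&params);
	snd_pcm_hw_params_current(pcm, params);

	snd_pcm_hw_params_get_format(params, &format);
	width = snd_pcm_format_width(format);
	sbits = snd_pcm_hw_params_get_sbits(params);

	if (sbits > width)
		printf("msbits %d > sample width %d?\n", sbits, width);
}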
I'm not convinced that this is a good idea. We should apply the constraints on the sample size where they actually make sense, IMHO.
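(What would make sense to me is something along these lines on the kernel side, so the 24-msbits information is only attached to 32-bit samples; the startup callback is just an example of where such a call could live.)

#include <sound/pcm.h>
#include <sound/soc.h>

/* Sketch: snd_pcm_hw_constraint_msbits() takes the physical sample
 * width the constraint applies to, so with width = 32 a plain S16_LE
 * stream keeps msbits = 16 while S32_LE is reported with 24 msbits. */
static int example_dai_startup(struct snd_pcm_substream *substream,
			       struct snd_soc_dai *dai)
{
	return snd_pcm_hw_constraint_msbits(substream->runtime, 0, 32, 24);
}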