On 01/17/2012 02:19 PM, Mark Brown wrote:
> For the CODECs, if you look at what they're doing you'll probably find that the device is actually operating at a fixed sample size internally and converting the data somehow at the interface (zero extension being one option when converting up, but more fancy approaches are also possible). This is fairly obvious when you think about how things are likely to be implemented in hardware; it's going to increase complexity a lot to be able to genuinely switch the entire chip from one sample size to another.
That is mostly true. The DAC33, for example, can be configured to operate internally at 16 bits or at 24 MSBs of a 32-bit word. But none of this really matters for the application: whether the codec works internally at 32 bits, 64 bits, or anything else is irrelevant. What matters is that if the application sends 32-bit data, only the 24 MSBs will actually be taken and the rest will be ignored at the interface level; what happens inside the codec is out of scope. Applications like PulseAudio can do digital volume control, and for them it is important to know whether they can operate on the full 32 bits or whether only the 24 MSBs will be taken into account by the HW at the interface level. On the other hand, it gives the application no useful information that the codec will internally upsample 16-bit data to 24 bits, since all 16 bits of the data will be used by the codec anyway.
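For reference, userspace can query this through alsa-lib once the hw_params are set: snd_pcm_hw_params_get_sbits() reports the significant bits rather than the physical sample width. A minimal sketch (the "default" device name and the lack of full error handling are just for illustration):

#include <stdio.h>
#include <alsa/asoundlib.h>

/* Sketch: after configuring 32-bit samples, ask how many of those bits
 * the hardware will actually use.  With a codec that only takes the
 * 24 MSBs this should report 24, not 32. */
int main(void)
{
	snd_pcm_t *pcm;
	snd_pcm_hw_params_t *params;
	int sbits;

	if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
		return 1;

	snd_pcm_hw_params_alloca(&params);
	snd_pcm_hw_params_any(pcm, params);
	snd_pcm_hw_params_set_format(pcm, params, SND_PCM_FORMAT_S32_LE);
	snd_pcm_hw_params(pcm, params);

	sbits = snd_pcm_hw_params_get_sbits(params);
	printf("significant bits per 32-bit sample: %d\n", sbits);

	snd_pcm_close(pcm);
	return 0;
}

An application applying digital gain can then keep its processing within those sbits instead of wasting precision on bits the interface will drop.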
> On the CPU side, specifying significant bits would normally only be appropriate on PDM interfaces, as they have most of a DAC or ADC in them to move between the sampled and PDM formats. I'd be surprised to see anything else setting these flags; most of the hardware is just passing the data straight through.
True, the CPU side mostly passes the data through as it is and does not care about msbits. McPDM is different, though: its internal FIFO uses 24-bit word lines, so if the application used all 32 bits, the 8 LSBs would be lost. This can make a difference for PA when applying digital gain in SW.
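On the driver side, the kernel already has a hook for advertising this; something along these lines (just a sketch with a made-up function name, not the actual McPDM code) would tell ALSA that only 24 of the 32 bits are significant:

#include <sound/pcm.h>

/* Sketch: advertise that only the 24 MSBs of a 32-bit sample survive
 * the FIFO, so userspace (e.g. PulseAudio) sees it via sbits. */
static int pdm_pcm_open(struct snd_pcm_substream *substream)
{
	/* cond=0: apply unconditionally; for 32-bit wide samples only
	 * the top 24 bits are significant. */
	return snd_pcm_hw_constraint_msbits(substream->runtime, 0, 32, 24);
}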