On 12/20/2013 07:27 PM, Mark Brown wrote:
On Fri, Dec 20, 2013 at 03:04:08PM +0100, Lars-Peter Clausen wrote:
On 12/20/2013 03:25 PM, Timur Tabi wrote:
Is this new? There are formats that the codec and the SSI support that the DMA controller does NOT support, like packed 24-bit samples. How do we ensure that we never get those?
No, this is how it has always been. If there are restrictions imposed by the DMA controller, we need to add support for expressing them inside the ASoC framework. But I think it will probably be more complex than saying the DMA controller supports formats A, B, C and the DAI controller supports formats B, C, D and then just taking the intersection of the two. E.g. the DAI controller probably does not care whether the samples are packed or not if it only ever sees one sample at a time, while the in-memory packing very much matters to the DMA controller.
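As a minimal sketch of the naive approach being called insufficient here, the intersection of two advertised format sets could look like the following. The `FMT_*` names and `negotiate_formats()` are purely illustrative, loosely modeled on the idea of ALSA's `SNDRV_PCM_FMTBIT_*` bitmasks; this is not actual ASoC code.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical format bits, illustrative only. */
#define FMT_A (1u << 0)
#define FMT_B (1u << 1)
#define FMT_C (1u << 2)
#define FMT_D (1u << 3)

/*
 * Naive negotiation: the plain intersection of what the DMA
 * controller and the DAI each advertise. As argued above, this
 * is not enough once packing constraints enter the picture.
 */
static uint32_t negotiate_formats(uint32_t dma_fmts, uint32_t dai_fmts)
{
	return dma_fmts & dai_fmts;
}
```

With the example from the text (DMA supports A, B, C; DAI supports B, C, D), the intersection is B and C, and nothing in it captures whether either side can actually handle a packed layout.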
The most common pattern I've seen is that the DAIs expect whole samples at a time to be written into their FIFOs, since the FIFOs tend to be organized in samples rather than bytes. With that pattern it would be a bit cleaner to have them advertise sample sizes and transfer sizes, and then have the core work out that if, for example, the DAI does 24-bit samples with four-byte transfers and the DMA controller needs a 1:1 mapping between data read and data written, then we can't do the packed format.
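The rule described above could be sketched like this; `struct dai_caps` and `can_use_packed()` are hypothetical names for illustration, not an existing ASoC interface.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative sketch: the DAI advertises its sample size and the
 * width of a FIFO transfer; the core decides whether a packed
 * in-memory layout (e.g. 3 bytes per 24-bit sample) is usable.
 */
struct dai_caps {
	unsigned int sample_bits;   /* e.g. 24 */
	unsigned int transfer_bits; /* FIFO transfer width, e.g. 32 */
};

static bool can_use_packed(const struct dai_caps *dai, bool dma_is_1_to_1)
{
	/*
	 * If the FIFO transfer is wider than the sample and the DMA
	 * controller maps bytes read 1:1 to bytes written, the padding
	 * bytes would have to come from memory too, so a packed
	 * memory layout cannot work.
	 */
	if (dma_is_1_to_1 && dai->transfer_bits > dai->sample_bits)
		return false;
	return true;
}
```

For the 24-bit-sample/four-byte-transfer case with a 1:1 DMA controller this returns false, matching the conclusion above.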
Yep. The other one I've seen is where the audio controller expects the DMA to pack samples that are smaller than the bus width into one bus-width word. E.g. if the bus width is 32 bits and the sample width is 16 bits, the DMA controller is supposed to write two samples at once.
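The 16-bit-into-32-bit case can be written out concretely; the little-endian lane order here is an assumption for illustration, and `pack_two_s16()` is a made-up helper, not a real API.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the packing behaviour described above: with a 32-bit
 * bus and 16-bit samples, the DMA controller merges two consecutive
 * samples into one bus word (first sample in the low half-word,
 * little-endian layout assumed).
 */
static uint32_t pack_two_s16(uint16_t s0, uint16_t s1)
{
	return (uint32_t)s0 | ((uint32_t)s1 << 16);
}
```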
And then there might be different variations of the first one. Some controllers might expect narrow reads/writes, while others might expect full bus-width accesses with the upper bytes padded/discarded. But I think most controllers are fine with both.
The audio controller should probably advertise which sample sizes it supports and how it expects them to be written/read. Based on that list, the DMA controller then needs to figure out which in-memory representations of the audio data it can support.
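One way the capability description and the DMA-side check sketched above could fit together; every name here (`enum fifo_access`, `struct audio_ctrl_caps`, `mem_width_ok()`) is hypothetical, meant only to show the shape of the negotiation.

```c
#include <assert.h>
#include <stdbool.h>

/* How the audio controller expects its FIFO to be accessed. */
enum fifo_access {
	FIFO_ACCESS_NARROW,    /* accesses exactly sample_bits wide */
	FIFO_ACCESS_BUS_WIDTH, /* full bus-width, upper bytes padded/discarded */
	FIFO_ACCESS_EITHER,    /* most controllers are fine with both */
};

/* What the audio controller advertises for one sample size. */
struct audio_ctrl_caps {
	unsigned int sample_bits;
	enum fifo_access access;
};

/*
 * DMA side: check one candidate in-memory sample width against the
 * controller's advertised access requirements, assuming the DMA
 * transfers memory words straight to FIFO accesses.
 */
static bool mem_width_ok(const struct audio_ctrl_caps *c,
			 unsigned int bus_bits, unsigned int mem_bits)
{
	switch (c->access) {
	case FIFO_ACCESS_NARROW:
		return mem_bits == c->sample_bits;
	case FIFO_ACCESS_BUS_WIDTH:
		return mem_bits == bus_bits;
	case FIFO_ACCESS_EITHER:
		return mem_bits == c->sample_bits || mem_bits == bus_bits;
	}
	return false;
}
```

The point is only that the decision lives on the DMA side, driven by a list the audio controller publishes, rather than both sides advertising flat format lists.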
I think in general we want to move the DAIs to a sample-size based interface and then map that onto the DMA controller when we connect the DMA and CPU DAIs. This would help with clarity on the CODEC side as well.
Yep, something similar should probably be done for format negotiation between the DAI and the CODEC.