[alsa-devel] Feasibility of adding alternative audio transport besides I2S/PWM/SPDIF, etc

Lars-Peter Clausen lars at metafoo.de
Tue Oct 13 12:43:04 CEST 2015


On 10/13/2015 11:57 AM, Liam Girdwood wrote:
> + Mark
> 
> On Fri, 2015-10-09 at 09:11 -0700, Caleb Crome wrote:
>> On Fri, Oct 9, 2015 at 2:34 AM, Liam Girdwood
>> <liam.r.girdwood at linux.intel.com> wrote:
>>> On Thu, 2015-10-08 at 08:51 -0700, Caleb Crome wrote:
>>>> Hi All,
>>>>    I'm in a constant struggle to bring up many-channel audio on each
>>>> separate SoC.
>>>>
>>>> I can easily put a microcontroller in place that will collect and
>>>> distribute all the TDM channels to the codecs, and connect the
>>>> hardware via an SPI interface to the SoC.
>>>>
>>>> So, instead of:
>>>>
>>>> CODECS <---TDM--->  SoC
>>>>
>>>> It would be
>>>>
>>>> CODECS <---TDM---> uC <---SPI---> SoC
>>>>
>>>> So, my questions are:
>>>>
>>>> * I suspect the SPI interface could be used more universally than each
>>>> individual I2S/TDM interface (like FSL SSI vs. TI McBSP vs. TI McASP,
>>>> etc.), and the SPI port would provide a very common API regardless of
>>>> SoC.  Is that true?
>>>
>>> Some SPI ports could probably be used for audio, but this depends on the
>>> SPI port HW capabilities, e.g. the SSP port on the MinnowBoard can be
>>> configured for TDM, I2S and SPI (AFAIK). I don't think any advantage
>>> could be gained from running in SPI mode unless your HW permits some
>>> special features?
>>
>> Sorry, I wasn't clear.  The point is to use the 'generic' SPI API in
>> the Linux kernel to stream the data, and *not* use an audio format.
>> So, the idea is, the external micro would buffer up a block of data
>> (in our case, maybe 160 samples * 32 channels = 10 kBytes), then use
>> the SPI port to read and write to the micro as if it were a memory or
>> something like that to transfer the data.  So the external micro would
>> appear to the CPU as an external register bank, and would do all the
>> audio aggregation.
>>
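As a rough illustration of the transfer Caleb describes, a minimal sketch
using the generic Linux SPI API could look like the code below. The command
opcode, the block layout and the 16-bit sample width (which is what makes
160 frames * 32 channels come out at about 10 kB) are assumptions for the
example, not part of any existing driver:

#include <linux/module.h>
#include <linux/spi/spi.h>
#include <linux/string.h>
#include <linux/types.h>

#define AUDIO_BURST_FRAMES	160
#define AUDIO_CHANNELS		32
#define AUDIO_SAMPLE_BYTES	2	/* assumed 16-bit samples -> 10240 bytes */
#define AUDIO_BURST_BYTES	(AUDIO_BURST_FRAMES * AUDIO_CHANNELS * AUDIO_SAMPLE_BYTES)

/* Push one audio burst to the external micro over the generic SPI API. */
static int audio_spi_send_burst(struct spi_device *spi, const void *buf)
{
	struct spi_transfer xfers[2];
	struct spi_message msg;
	u8 cmd = 0x01;	/* hypothetical "write audio block" opcode; a real
			 * driver would keep this in DMA-safe storage */

	memset(xfers, 0, sizeof(xfers));
	spi_message_init(&msg);

	/* First byte tells the micro what follows... */
	xfers[0].tx_buf = &cmd;
	xfers[0].len = 1;
	spi_message_add_tail(&xfers[0], &msg);

	/* ...then the whole block in a single transfer. */
	xfers[1].tx_buf = buf;
	xfers[1].len = AUDIO_BURST_BYTES;
	spi_message_add_tail(&xfers[1], &msg);

	/* spi_sync() blocks until the full burst has been clocked out. */
	return spi_sync(spi, &msg);
}

Capture would work the same way with rx_buf instead of tx_buf, so from the
SoC's point of view the micro really does look like an external register
bank or memory.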
> 
> Ok, so this sounds like a burst-based DAI where the host sends audio
> data in bursts and then sleeps. I think this has been done by some codec
> vendors, but I don't know if any code is upstream.

The upstream tlv320dac33 driver does something like this, but it is not
necessarily an implementation I'd recommend mimicking. The whole burst-access
mode probably wants some support at the framework level.
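To make the framework question a bit more concrete, here is a very rough
sketch of what the driver side of such a burst DAI might do: a worker pulls
one period out of the ALSA ring buffer, ships it with the hypothetical
audio_spi_send_burst() helper sketched earlier in the thread, and advances
its hardware pointer. The burst_ctx structure and the assumption that a
period never wraps in the ring buffer are illustrative only; this is not how
tlv320dac33 is implemented:

#include <linux/spi/spi.h>
#include <sound/pcm.h>

struct burst_ctx {
	struct spi_device *spi;
	struct snd_pcm_substream *substream;
	snd_pcm_uframes_t hw_ptr;	/* frames already shipped to the micro */
};

/* Ship one period as a single SPI burst; meant to be called from a worker
 * once the micro signals (e.g. via a GPIO IRQ) that it has room again. */
static void burst_send_one_period(struct burst_ctx *ctx)
{
	struct snd_pcm_runtime *runtime = ctx->substream->runtime;
	snd_pcm_uframes_t pos = ctx->hw_ptr % runtime->buffer_size;
	void *src = runtime->dma_area + frames_to_bytes(runtime, pos);

	/* Assumes period_size divides buffer_size, so a period never wraps. */
	audio_spi_send_burst(ctx->spi, src);

	ctx->hw_ptr += runtime->period_size;
	snd_pcm_period_elapsed(ctx->substream);
}

Framework support would presumably replace this kind of hand-rolled pointer
bookkeeping with something shared by all burst-type DAIs.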


