[alsa-devel] Propagating audio properties along the audio path

Marc Gonzalez marc.w.gonzalez at free.fr
Fri Sep 20 11:50:47 CEST 2019


On 17/09/2019 17:33, Marc Gonzalez wrote:

> Disclaimer: I've never worked in the sound/ layer, and it is possible that
> some of my questions are silly or obvious.
> 
> Basically, I'm trying to implement some form of eARC(*) on an arm64 SoC.
> (*) enhanced Audio Return Channel (from HDMI 2.1)
> 
> The setup looks like this:
> 
> A = Some kind of audio source, typically a TV or game console
> B = The arm64 SoC, equipped with some nice speakers
> 
>    HDMI
> A ------> B
> 
> If we look inside B, we actually have
> B1 = an eARC receiver (input = HDMI, output = I2S)
> B2 = an audio DSP (input = I2S, output = speakers)
> 
>     I2S        ?
> B1 -----> B2 -----> speakers
> 
> 
> If I read the standard right, B is supposed to advertise which audio formats
> it supports, and A is supposed to pick "the best". For the sake of argument,
> let's say A picks "PCM, 48 kHz, 8 channels, 16-bit".
> 
> At some point, B receives audio packets, parses the Channel Status, and
> determines that A is sending "PCM, 48 kHz, 8 channels, 16-bit". The driver
> then configures the I2S link, and forwards the audio stream over I2S to
> the DSP.
> 
> QUESTION_1:
> How is the DSP supposed to "learn" the properties of the audio stream?
> (AFAIU, they're not embedded in the data, so there must be some side-channel?)
> I assume the driver of B1 is supposed to propagate the info to the driver of B2?
> (Via some callbacks? By calling a function in B2?)
> 
> QUESTION_2:
> Does it ever make sense for B2 to ask B1 to change the audio properties?
> (Not sure if B1 is even allowed to renegotiate.)

I think it boils down to the "Dynamic PCM" abstraction?

	https://www.kernel.org/doc/html/latest/sound/soc/dpcm.html
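
If I'm reading the DPCM doc right, the back-end link never sees
userspace hw_params directly: the machine driver installs a
be_hw_params_fixup() callback, and that looks like the natural place
to force the parameters parsed from the eARC Channel Status onto the
I2S back-end. A minimal sketch of what I have in mind (the function
and link names are made up, I've hard-coded the example format from
above, and the cpu/codec/platform fields are omitted):

#include <sound/soc.h>
#include <sound/pcm_params.h>

/* Hypothetical machine-driver fixup: force the DSP-facing I2S
 * back-end to the format negotiated on the eARC side, whatever
 * the front-end PCM was opened with. */
static int earc_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
				   struct snd_pcm_hw_params *params)
{
	struct snd_interval *rate =
		hw_param_interval(params, SNDRV_PCM_HW_PARAM_RATE);
	struct snd_interval *channels =
		hw_param_interval(params, SNDRV_PCM_HW_PARAM_CHANNELS);

	/* "PCM, 48 kHz, 8 channels, 16-bit" from the example above;
	 * a real driver would read whatever B1 parsed from the
	 * incoming Channel Status. */
	rate->min = rate->max = 48000;
	channels->min = channels->max = 8;
	params_set_format(params, SNDRV_PCM_FORMAT_S16_LE);

	return 0;
}

static struct snd_soc_dai_link earc_be_link = {
	.name			= "eARC-I2S",
	.no_pcm			= 1,	/* back-end link */
	.dpcm_capture		= 1,
	.be_hw_params_fixup	= earc_be_hw_params_fixup,
	/* cpu/codec/platform components omitted */
};

If that's right, it would answer QUESTION_1 for the initial
negotiation at least: B1's driver stashes the parsed parameters
somewhere the fixup can read them.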


The downstream driver (7500 lines) is tough to digest for a noob.

	https://source.codeaurora.org/quic/la/kernel/msm-4.4/tree/sound/soc/msm/msm8998.c?h=LE.UM.1.3.r3.25

I'll keep chipping away at whatever docs I can find.


One more concern popped up: if the audio stream changes mid-capture
(for example, a different TV program uses different audio settings),
I would detect the change in the eARC receiver, but it's not clear
(to me) how to propagate the new parameters to the DSP...
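
The only mechanism I've spotted so far would be for the receiver
driver to signal an XRUN when the Channel Status changes, so that
userspace tears the stream down and renegotiates. Something like
this sketch (the earc_rx state and the channel-status helper are
made up; only snd_pcm_stop_xrun() is a real kernel API):

#include <linux/interrupt.h>
#include <sound/pcm.h>

/* Hypothetical eARC receiver IRQ handler: when the incoming
 * Channel Status no longer matches the configured stream, stop
 * the capture substream with an XRUN so userspace renegotiates. */
static irqreturn_t earc_rx_irq(int irq, void *dev_id)
{
	struct earc_rx *rx = dev_id;		/* made-up driver state */

	if (earc_rx_channel_status_changed(rx))	/* made-up helper */
		snd_pcm_stop_xrun(rx->substream);

	return IRQ_HANDLED;
}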

I'm not even sure when the HW params actually get applied...
Is it at SNDRV_PCM_IOCTL_PREPARE? At SNDRV_PCM_IOCTL_START?

I couldn't find much documentation for the IOCTLs in the kernel:

$ git grep SNDRV_PCM_IOCTL  Documentation/
Documentation/sound/designs/tracepoints.rst:value to these parameters, then execute ioctl(2) with SNDRV_PCM_IOCTL_HW_REFINE
Documentation/sound/designs/tracepoints.rst:or SNDRV_PCM_IOCTL_HW_PARAMS. The former is used just for refining available
Documentation/sound/designs/tracepoints.rst:        SNDRV_PCM_IOCTL_HW_REFINE only. Applications can select which
Documentation/sound/designs/tracepoints.rst:        SNDRV_PCM_IOCTL_HW_PARAMS, this mask is ignored and all of parameters
Documentation/sound/designs/tracepoints.rst:        SNDRV_PCM_IOCTL_HW_REFINE to retrieve this flag, then decide candidates
Documentation/sound/designs/tracepoints.rst:        of parameters and execute ioctl(2) with SNDRV_PCM_IOCTL_HW_PARAMS to
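
For what it's worth, my current understanding is: the driver's
hw_params callback runs at SNDRV_PCM_IOCTL_HW_PARAMS, its prepare
callback at SNDRV_PCM_IOCTL_PREPARE, and its trigger callback at
SNDRV_PCM_IOCTL_START. A minimal capture sketch showing which ioctl
each alsa-lib call maps to:

#include <alsa/asoundlib.h>

int main(void)
{
	snd_pcm_t *pcm;
	snd_pcm_hw_params_t *hw;

	if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
		return 1;

	snd_pcm_hw_params_alloca(&hw);
	snd_pcm_hw_params_any(pcm, hw);
	snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
	snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
	snd_pcm_hw_params_set_rate(pcm, hw, 48000, 0);
	snd_pcm_hw_params_set_channels(pcm, hw, 8);

	/* SNDRV_PCM_IOCTL_HW_PARAMS (alsa-lib also prepares here) */
	snd_pcm_hw_params(pcm, hw);
	/* SNDRV_PCM_IOCTL_PREPARE (redundant after the call above) */
	snd_pcm_prepare(pcm);
	/* SNDRV_PCM_IOCTL_START */
	snd_pcm_start(pcm);

	snd_pcm_close(pcm);
	return 0;
}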


Regards.

