[alsa-devel] ASoC DSP and related status
Liam, Mark,
I was recently talking to our internal audio team, extolling the virtues of writing upstreamable drivers for Tegra's audio HW.
One of the big unknowns here is how to represent the Tegra DAS and AHUB modules[1] in a standard fashion, allowing configuration via kcontrols that influence DAPM routing rather than open-coding and/or hard-coding such policy in the ASoC machine driver.
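To make that concrete, the kind of thing I have in mind is a DAPM mux whose kcontrol selects the routing, roughly as in the sketch below - every register, widget and route name here is made up purely for illustration, this isn't real Tegra code:

#include <linux/kernel.h>
#include <sound/soc.h>
#include <sound/soc-dapm.h>

/* Hypothetical sketch of kcontrol-driven DAPM routing: a mux kcontrol
 * selects which DAP feeds DAC1; all names and registers are invented. */
#define DAS_DAC1_ROUTE_REG      0x00    /* made-up register offset */

static const char * const dac1_src_texts[] = { "DAP1", "DAP2", "DAP3" };

static const struct soc_enum dac1_src_enum =
        SOC_ENUM_SINGLE(DAS_DAC1_ROUTE_REG, 0,
                        ARRAY_SIZE(dac1_src_texts), dac1_src_texts);

static const struct snd_kcontrol_new dac1_src_mux =
        SOC_DAPM_ENUM("DAC1 Source", dac1_src_enum);

static const struct snd_soc_dapm_widget das_widgets[] = {
        SND_SOC_DAPM_MUX("DAC1 Mux", SND_SOC_NOPM, 0, 0, &dac1_src_mux),
};

static const struct snd_soc_dapm_route das_routes[] = {
        /* sink,       control, source */
        { "DAC1 Mux", "DAP1",  "DAP1 RX" },
        { "DAC1 Mux", "DAP2",  "DAP2 RX" },
        { "DAC1 Mux", "DAP3",  "DAP3 RX" },
        { "DAC1",     NULL,    "DAC1 Mux" },
};

Userspace would then switch the "DAC1 Source" control to reroute audio and DAPM would handle the rest, rather than the machine driver hard-coding the path.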
So, my questions are:
* What's the status of the ASoC DSP work? I see that some of the base infrastructure has been merged into ASoC's for-next branch, but I think that's just a small portion of the work. Do you have any kind of estimate for when the whole thing will be merged? I don't see recent updates to e.g. Liam's topic/dsp or topic/dsp-upstream branches.
* Back in March in another DSP-related thread, Mark mentioned that the DSP rework was mainly about configuring stuff within a device, but that Mark was working on some code to support autonomous inter-device links. I assume that Tegra's DAS/AHUB would rely on the DSP work, not the stuff Mark mentioned?
See the last few paragraphs of: http://mailman.alsa-project.org/pipermail/alsa-devel/2011-March/037776.html
Related, Mark also mentioned something about representing the DAS/AHUB as codecs. I'm not sure if that was meant as a stop-gap solution before the DSP work was in place, or if that's part of supporting DAS/AHUB within the DSP infrastructure.
Thanks for any kind of information! Anything you can tell us will simply help us plan for when we might be able to switch from open-coding some of the more advanced Tegra audio support to more standardized solutions.
[1] Here's a very quick overview of the relevant Tegra audio HW:
DAS:
Part of Tegra20. Tegra20 has Digital Audio Controllers (DACs), i.e. I2S controllers. It also has Digital Audio Ports (DAPs). The Digital Audio Switch (DAS) sits between them. Each DAP selects its audio output from a particular DAC's or DAP's output. Each DAC selects its audio input from a particular DAP. DAP<->DAP is supported, with one being the master and the other the slave. Note that I2S configuration (channels, sample size, I2S vs. DSP, etc.) is configured in the DAC, not the DAP.
AHUB:
Part of Tegra30. Tegra30 has an interconnect called the Audio HUB (AHUB). Various devices attach to it: FIFOs to send/receive audio to/from CPU memory using DMA, DAMs that receive n(2?) channels from the AHUB and mix/SRC them, sending the result back to the AHUB, and finally various I/O controllers such as I2S and SPDIF. The AHUB is, I believe, a full crossbar. In this case, the I2S formatting is configured solely within the I2S controllers, not on the other side of the AHUB as is the case with the Tegra20 DAS. FIFOs also independently determine the number of channels/bits they send/receive. There is some limited support for channel count and bit-size conversion at each attachment point to the AHUB. I2S<->I2S loopback may be supported in HW, at least in some cases.
On Fri, Aug 26, 2011 at 12:44:26PM -0700, Stephen Warren wrote:
- Back in March in another DSP-related thread, Mark mentioned that the
DSP rework was mainly about configuring stuff within a device, but that Mark was working on some code to support autonomous inter-device links. I assume that Tegra's DAS/AHUB would rely on the DSP work, not the stuff Mark mentioned?
This depends on how your DSP and associated blobs appear in the system. If the hardware system integration is done such that the DSP looks like a separate chip between the CPU and the rest of the world which happens to be part of the same package, then it looks like my stuff - my stuff is mostly there for things like CODEC<->Baseband links where the CPU isn't involved. You'd wind up with CPU<->DSP links either just using vanilla PCMs or using the DSP stuff Liam has been working on, and then have some links at the other side of the DSP which look like baseband or whatever style links.
Related, Mark also mentioned something about representing the DAS/AHUB as codecs. I'm not sure if that was meant as a stop-gap solution before the DSP work was in place, or if that's part of supporting DAS/AHUB within the DSP infrastructure.
I think that's something which we should do if there's interesting stuff going on inside them that it's useful to represent and doing so doesn't introduce too many contortions due to hardware sharing with the main CPU - it means that we can reuse all the infrastructure that we've got for representing routes and so on within CODECs.
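As a sketch of what I mean (reusing the table names from the earlier DAS example, nothing Tegra-specific here), the routing tables would simply be registered through the CODEC driver so the existing core handles them:

#include <linux/kernel.h>
#include <sound/soc.h>

/* Sketch only: hand the widget/route tables (das_widgets/das_routes from
 * the earlier example) to the core via the CODEC driver; the device's
 * ports would then be exposed as DAIs of this CODEC. */
static struct snd_soc_codec_driver das_codec_driver = {
        .dapm_widgets           = das_widgets,
        .num_dapm_widgets       = ARRAY_SIZE(das_widgets),
        .dapm_routes            = das_routes,
        .num_dapm_routes        = ARRAY_SIZE(das_routes),
};

/* Registered from the platform driver probe, e.g.:
 *      snd_soc_register_codec(&pdev->dev, &das_codec_driver,
 *                             das_dais, ARRAY_SIZE(das_dais));
 */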
DAS:
Part of Tegra20. Tegra20 has Digital Audio Controllers (DACs), i.e. I2S controllers. It also has Digital Audio Ports (DAPs). The Digital Audio Switch (DAS) sits between them. Each DAP selects its audio output from a particular DAC's or DAP's output. Each DAC selects its audio input from a particular DAP. DAP<->DAP is supported, with one being the master and the other the slave. Note that I2S configuration (channels, sample size, I2S vs. DSP, etc.) is configured in the DAC, not the DAP.
This one should be easier with Liam's soc-dsp stuff as the output ports are heavily tied to the CPU side. Might be fun representing DAP<->DAP links but I'm not sure how widely deployed those are.
AHUB:
Part of Tegra30. Tegra30 has an interconnect called the Audio HUB (AHUB). Various devices attach to it: FIFOs to send/receive audio to/from CPU memory using DMA, DAMs that receive n(2?) channels from the AHUB and mix/SRC them, sending the result back to the AHUB, and finally various I/O controllers such as I2S and SPDIF. The AHUB is, I believe, a full crossbar. In this case, the I2S formatting is configured solely within the I2S controllers, not on the other side of the AHUB as is the case with the Tegra20 DAS. FIFOs also independently determine the number of channels/bits they send/receive. There is some limited support for channel count and bit-size conversion at each attachment point to the AHUB. I2S<->I2S loopback may be supported in HW, at least in some cases.
To me this sounds like it could usefully be a CODEC - the CPU is pretty well isolated from the actual physical ports (and needn't be involved at all) and functionally it's not much different to the sort of setup we've got in some of the existing CODEC drivers, just without the integration of the mixed-signal bits. That'd mean that you could reuse all the infrastructure those devices use, which ought to make the T30-specific bit simpler.
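Concretely, each AHUB attachment point would become a DAI on that CODEC, something like the sketch below (all names, rates and formats are invented, not real Tegra30 values), with the crossbar itself then expressed as DAPM muxes and routes between those DAIs' streams:

#include <sound/pcm.h>
#include <sound/soc.h>

/* Sketch: AHUB ports exposed as DAIs of a single "AHUB" CODEC; all
 * names and capabilities here are invented for illustration. */
static struct snd_soc_dai_driver ahub_dais[] = {
        {
                .name = "ahub-i2s1",
                .playback = {
                        .stream_name    = "I2S1 TX",
                        .channels_min   = 1,
                        .channels_max   = 2,
                        .rates          = SNDRV_PCM_RATE_8000_96000,
                        .formats        = SNDRV_PCM_FMTBIT_S16_LE,
                },
                .capture = {
                        .stream_name    = "I2S1 RX",
                        .channels_min   = 1,
                        .channels_max   = 2,
                        .rates          = SNDRV_PCM_RATE_8000_96000,
                        .formats        = SNDRV_PCM_FMTBIT_S16_LE,
                },
        },
        /* ...similar entries for the other I2S ports, SPDIF, the DAMs and
         * the DMA FIFO attachment points... */
};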
Mark Brown wrote at Saturday, August 27, 2011 3:37 AM:
On Fri, Aug 26, 2011 at 12:44:26PM -0700, Stephen Warren wrote: ... (questions about soc-dsp and representing Tegra HW to ASoC)
Thanks Mark and Liam for your answers.
(T30) AHUB:
Part of Tegra30. Tegra30 has an interconnect called the Audio HUB (AHUB). Various devices attach to it: FIFOs to send/receive audio to/from CPU memory using DMA, DAMs that receive n(2?) channels from the AHUB and mix/SRC them, sending the result back to the AHUB, and finally various I/O controllers such as I2S and SPDIF. The AHUB is, I believe, a full crossbar. In this case, the I2S formatting is configured solely within the I2S controllers, not on the other side of the AHUB as is the case with the Tegra20 DAS. FIFOs also independently determine the number of channels/bits they send/receive. There is some limited support for channel count and bit-size conversion at each attachment point to the AHUB. I2S<->I2S loopback may be supported in HW, at least in some cases.
To me this sounds like it could usefully be a CODEC - the CPU is pretty well isolated from the actual physical ports (and needn't be involved at all) and functionally it's not much different to the sort of setup we've got in some of the existing CODEC drivers, just without the integration of the mixed-signal bits. That'd mean that you could reuse all the infrastructure those devices use, which ought to make the T30-specific bit simpler.
That makes sense.
One question: This would end up with something like:
+------------+  (dai 1)  +--------------+  (dai 2)  +----------------+
| Tegra I2S  |<--------->| AHUB (codec) |<--------->| WM8903 (codec) |
+------------+           +--------------+           +----------------+
I think I may have asked this before, but how would we represent that to ASoC? I don't think there's any way to squash those two DAIs into a single snd_soc_dai_link structure, whereas I think I recall you saying that using two separate snd_soc_dai_links wouldn't really work; I'm not sure what we'd put in dai 2's "platform" field, and IIRC the second DAI would end up instantiating extra PCM devices to user-space. Am I way off base here, or would we need to do some extra infrastructure work to get this all working?
Thanks again.
On Wed, Aug 31, 2011 at 04:14:43PM -0700, Stephen Warren wrote:
+------------+  (dai 1)  +--------------+  (dai 2)  +----------------+
| Tegra I2S  |<--------->| AHUB (codec) |<--------->| WM8903 (codec) |
+------------+           +--------------+           +----------------+
I think I may have asked this before, but how would we represent that to ASoC? I don't think there's any way to squash those two DAIs into a single snd_soc_dai_link structure, whereas I think I recall you saying that using two separate snd_soc_dai_links wouldn't really work; I'm not sure what we'd put in dai 2's "platform" field, and IIRC the second DAI would end up instantiating extra PCM devices to user-space. Am I way off base here, or would we need to do some extra infrastructure work to get this all working?
More infrastructure is needed but we really need it anyway (it's the same infrastructure as we need for baseband type links). The DAI 2 platform would just be left empty as there's no DMA going on.
We should probably also make sure that there's a way we can configure the Tegra-internal links without requiring every single board to do it, and then have the board only configure the DAI 2 links; otherwise it'll just get repetitive.
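Roughly like the sketch below, written against the current snd_soc_dai_link fields with every device and DAI name invented - and, as above, the core can't actually instantiate the second link yet, that's the infrastructure that's missing:

#include <sound/soc.h>

static struct snd_soc_dai_link tegra_ahub_links[] = {
        {       /* dai 1: CPU I2S/DMA <-> AHUB */
                .name           = "Tegra I2S",
                .stream_name    = "Tegra I2S",
                .cpu_dai_name   = "tegra-i2s.0",        /* invented */
                .platform_name  = "tegra-pcm-audio",    /* invented */
                .codec_name     = "tegra-ahub",         /* invented */
                .codec_dai_name = "ahub-i2s1",          /* invented */
        },
        {       /* dai 2: AHUB <-> WM8903 - no DMA, so the platform is
                 * left unset */
                .name           = "AHUB-WM8903",
                .stream_name    = "AHUB-WM8903",
                .cpu_dai_name   = "ahub-dap1",          /* invented */
                .codec_name     = "wm8903.0-001a",      /* invented */
                .codec_dai_name = "wm8903-hifi",
        },
};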
Hi Stephen,
On 26/08/11 20:44, Stephen Warren wrote:
Liam, Mark,
I was recently talking to our internal audio team, extolling the virtues of writing upstreamable drivers for Tegra's audio HW.
One of the big unknowns here is how to represent the Tegra DAS and AHUB modules[1] in a standard fashion, allowing configuration via kcontrols that influence DAPM routing rather than open-coding and/or hard-coding such policy in the ASoC machine driver.
So, my questions are:
- What's the status of the ASoC DSP work? I see that some of the base
infrastructure has been merged into ASoC's for-next branch, but I think that's just a small portion of the work. Do you have any kind of estimate for when the whole thing will be merged? I don't see recent updates to e.g. Liam's topic/dsp or topic/dsp-upstream branches.
I have about 10 Dynamic PCM (AKA ASoC DSP) core patches still to go upstream.
It's in progress atm, but will probably slow down for the next two weeks since I'll be in California.
I am aiming to upstream ASAP, but other things sometimes take higher priority than upstreaming.
- Back in March in another DSP-related thread, Mark mentioned that the
DSP rework was mainly about configuring stuff within a device, but that Mark was working on some code to support autonomous inter-device links. I assume that Tegra's DAS/AHUB would rely on the DSP work, not the stuff Mark mentioned?
If the Tegra hardware is on the CPU and looks anything like the OMAP4 ABE then you will probably need the Dynamic PCM code.
http://www.omappedia.org/wiki/Audio_Drive_Arch
The ABE is basically a PCM DSP with SRC, ASRC, Mixers, Muxes, Volumes, Mutes and DAIs implemented in FW.
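To give an idea of the shape of it, a front end/back end pair in the soc-dsp code looks roughly like the sketch below. The .dynamic, .no_pcm and .be_hw_params_fixup fields come from that work; all the device and DAI names are invented:

#include <sound/soc.h>

/* Sketch of a Dynamic PCM front end/back end pair; names are invented. */
static int be_fixup(struct snd_soc_pcm_runtime *rtd,
                    struct snd_pcm_hw_params *params)
{
        /* force the back end to the physical port's fixed rate/format here */
        return 0;
}

static struct snd_soc_dai_link dsp_links[] = {
        {       /* front end: the PCM userspace opens; no fixed physical port */
                .name           = "Multimedia",
                .stream_name    = "Multimedia",
                .cpu_dai_name   = "MultiMedia1",        /* invented */
                .platform_name  = "dsp-pcm-audio",      /* invented */
                .codec_name     = "snd-soc-dummy",
                .codec_dai_name = "snd-soc-dummy-dai",
                .dynamic        = 1,    /* routing decided at runtime by DAPM */
        },
        {       /* back end: the physical DAI; no PCM exposed to userspace */
                .name           = "DSP-I2S1",
                .stream_name    = "DSP-I2S1",
                .cpu_dai_name   = "dsp-i2s1",           /* invented */
                .codec_name     = "wm8903.0-001a",      /* invented */
                .codec_dai_name = "wm8903-hifi",
                .no_pcm         = 1,
                .be_hw_params_fixup = be_fixup,
        },
};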
Liam
See the last few paragraphs of: http://mailman.alsa-project.org/pipermail/alsa-devel/2011-March/037776.html
Related, Mark also mentioned something about representing the DAS/AHUB as codecs. I'm not sure if that was meant as a stop-gap solution before the DSP work was in place, or if that's part of supporting DAS/AHUB within the DSP infrastructure.
Thanks for any kind of information! Anything you can tell us will simply help us plan for when we might be able to switch from open-coding some of the more advanced Tegra audio support to more standardized solutions.
[1] Here's a very quick overview of the relevant Tegra audio HW:
DAS:
Part of Tegra20. Tegra20 has Digital Audio Controllers (DACs), i.e. I2S controllers. It also has Digital Audio Ports (DAPs). The Digital Audio Switch (DAS) sits between them. Each DAP selects its audio output from a particular DAC's or DAP's output. Each DAC selects its audio input from a particular DAP. DAP<->DAP is supported, with one being the master and the other the slave. Note that I2S configuration (channels, sample size, I2S vs. DSP, etc.) is configured in the DAC, not the DAP.
AHUB:
Part of Tegra30. Tegra30 has an interconnect called the Audio HUB (AHUB). Various devices attach to it: FIFOs to send/receive audio to/from CPU memory using DMA, DAMs that receive n(2?) channels from the AHUB and mix/SRC them, sending the result back to the AHUB, and finally various I/O controllers such as I2S and SPDIF. The AHUB is, I believe, a full crossbar. In this case, the I2S formatting is configured solely within the I2S controllers, not on the other side of the AHUB as is the case with the Tegra20 DAS. FIFOs also independently determine the number of channels/bits they send/receive. There is some limited support for channel count and bit-size conversion at each attachment point to the AHUB. I2S<->I2S loopback may be supported in HW, at least in some cases.
On 29/08/11 19:09, Girdwood, Liam wrote:
Hi Stephen,
On 26/08/11 20:44, Stephen Warren wrote:
Liam, Mark,
I was recently talking to our internal audio team, extolling the virtues of writing upstreamable drivers for Tegra's audio HW.
One of the big unknowns here is how to represent the Tegra DAS and AHUB modules[1] in a standard fashion, allowing configuration via kcontrols that influence DAPM routing rather than open-coding and/or hard-coding such policy in the ASoC machine driver.
So, my questions are:
- What's the status of the ASoC DSP work? I see that some of the base
infrastructure has been merged into ASoC's for-next branch, but I think that's just a small portion of the work. Do you have any kind of estimate for when the whole thing will be merged? I don't see recent updates to e.g. Liam's topic/dsp or topic/dsp-upstream branches.
I have about 10 Dynamic PCM (AKA ASoC DSP) core patches still to go upstream.
It's in progress atm, but will probably slow down for the next two weeks since I'll be in California.
I am aiming to upstream ASAP, but other things sometimes take higher priority than upstreaming.
Forgot to add the link. My kernel.org branches above are stable but the development is being done on Gitorious :-
https://gitorious.org/omap-audio/linux-audio
I plan to push a new squashed stable to kernel.org in the next few days.
Liam