Audio mem2mem devices aka asymmetric sample rate converters
Hi All,
I am currently looking into getting the asymmetric sample rate converters (ASRC) found on some i.MX SoCs to do something useful.
The ASRC units are completely independent, i.e. independent of the rest of the audio subsystem. They can read from memory using the SDMA engine, convert sample rates and/or audio formats, and write back to memory, also using the SDMA engine. The ASRC on the i.MX8MN has four contexts to convert up to four streams simultaneously. I am not aware of any non-i.MX SoCs having such a unit, but I am pretty sure they exist on other SoCs as well.
There are two likely use cases for such a unit. The first is to offload sample rate and format conversions to hardware. The other is to synchronize audio sources/sinks that are driven by different master clocks when those clocks drift relative to each other.
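To make the clock-drift use case concrete, here is a back-of-the-envelope sketch (the function name and numbers are made up for illustration): the conversion ratio the ASRC has to apply is the nominal rate ratio, trimmed by the measured drift between the two master clocks.

  /*
   * Hypothetical illustration: if the sink's master clock runs
   * drift_ppm fast relative to the source's, the converter has to
   * produce output correspondingly faster so that buffers neither
   * underrun nor overrun.
   */
  double asrc_ratio(unsigned int rate_in, unsigned int rate_out,
                    double drift_ppm)
  {
          return ((double)rate_out / rate_in) * (1.0 + drift_ppm * 1e-6);
  }

  /* e.g. 44.1 kHz -> 48 kHz with the sink clock 100 ppm fast:
   * asrc_ratio(44100, 48000, 100.0) ~= 1.08854 */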
How would such units be integrated into ASoC? I can think of two ways. The first would be to create a separate audio card from them which records on one end and plays back with a different sample rate / format on the other end; in the v4l2 world that would be a classic mem2mem device. Is ALSA/ASoC prepared for something like this? Would it be feasible to go in such a direction? I haven't found any examples for this in the tree.
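For illustration, from userspace the standalone-card approach could look roughly like the sketch below, assuming a hypothetical card "imxasrc" that exposes the converter input as a playback PCM and the converted output as a capture PCM (none of this exists today; the device name and setup are made up):

  #include <alsa/asoundlib.h>

  /*
   * Sketch: push 44.1 kHz S16 frames into the ASRC through an
   * ordinary playback PCM and read the 48 kHz result back through
   * a capture PCM on the same (hypothetical) card.  A real user
   * would interleave writes and reads instead of doing one of each.
   */
  int asrc_convert(const short *in, snd_pcm_uframes_t in_frames)
  {
          snd_pcm_t *play, *cap;
          short out[4096 * 2];    /* 4096 frames, 2 channels */

          /* "hw:imxasrc" is a made-up device name for this example */
          if (snd_pcm_open(&play, "hw:imxasrc", SND_PCM_STREAM_PLAYBACK, 0) < 0 ||
              snd_pcm_open(&cap, "hw:imxasrc", SND_PCM_STREAM_CAPTURE, 0) < 0)
                  return -1;

          /* different rates on the two ends is the whole point */
          snd_pcm_set_params(play, SND_PCM_FORMAT_S16_LE,
                             SND_PCM_ACCESS_RW_INTERLEAVED, 2, 44100, 0, 500000);
          snd_pcm_set_params(cap, SND_PCM_FORMAT_S16_LE,
                             SND_PCM_ACCESS_RW_INTERLEAVED, 2, 48000, 0, 500000);

          snd_pcm_writei(play, in, in_frames);
          snd_pcm_readi(cap, out, 4096);

          snd_pcm_close(play);
          snd_pcm_close(cap);
          return 0;
  }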
The other way is to attach the ASRC to an existing audio card. That is what the existing in-tree sound/soc/fsl/fsl_asrc.c and sound/soc/fsl/fsl_easrc.c drivers do. This approach feels somewhat limited as it's not possible to just do conversions without playing/recording something. OTOH userspace is unaffected, which might be an advantage. What nags me about that approach is that it's currently not integrated into the simple-audio-card or audio-graph-card bindings. Currently the driver can only be used in conjunction with the fsl,imx-audio-* card drivers. It seems backward to integrate such a generic ASRC unit into a special-purpose audio card driver. With this approach the ASoC core is fully unaware of the ASRC, which also doesn't look very appealing. OTOH I don't know if ASoC could handle it otherwise. Can ASoC handle, for example, a chain of DAIs with different sample rates and formats in that chain?
Currently I don't really know how to proceed. It would be great if you could share some thoughts on this topic.
Thanks, Sascha
On Thu, Jun 02, 2022 at 01:21:06PM +0200, Sascha Hauer wrote:
> How would such units be integrated into ASoC? I can think of two ways. The first would be to create a separate audio card from them which records on one end and plays back with a different sample rate / format on the other end; in the v4l2 world that would be a classic mem2mem device. Is ALSA/ASoC prepared for something like this? Would it be feasible to go in such a direction? I haven't found any examples for this in the tree.
You could certainly do that, though I'd expect userspace wouldn't know what to do with it without specific configuration. It also feels like it's probably not what users really want - generally the use case is for rewriting an audio stream without going back to memory. Going back to memory means chopping things up into periods, which would tend to introduce additional latency and/or fragility that is undesirable even if the devices were DMAing directly to memory.
> The other way is to attach the ASRC to an existing audio card. That is what the existing in-tree sound/soc/fsl/fsl_asrc.c and sound/soc/fsl/fsl_easrc.c drivers do. This approach feels somewhat limited as it's not possible to just do conversions without playing/recording something. OTOH userspace is unaffected, which might be an advantage. What nags me about that approach is that it's currently not integrated into the simple-audio-card or audio-graph-card bindings. Currently the driver can only be used in conjunction with the fsl,imx-audio-* card drivers. It seems backward to integrate such a generic ASRC unit into a special-purpose audio card driver. With this approach the ASoC core is fully unaware of the ASRC, which also doesn't look very appealing. OTOH I don't know if ASoC could handle it otherwise. Can ASoC handle, for example, a chain of DAIs with different sample rates and formats in that chain?
This is essentially the general problem with DPCM not scaling at all well: we need to rework the core so that it tracks information about the digital parameters of signals through the system like it tracks simple analog on/off information. At the minute the core doesn't really understand what's going on with the digital routing within the SoC at all; it's all done with manual fixups.
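To make the "manual fixups" concrete: with DPCM, a machine driver today typically pins the back-end parameters in a be_hw_params_fixup() callback along the lines of the sketch below. This is the generic pattern, not code from any particular driver; the core has no way to derive this per-board knowledge by itself.

  #include <sound/pcm_params.h>
  #include <sound/soc.h>

  /*
   * DPCM fixup sketch: force the back end to 48 kHz / S16 no matter
   * what the front end negotiated, so that the converter sitting in
   * between does the rate/format conversion.
   */
  static int asrc_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
                                     struct snd_pcm_hw_params *params)
  {
          struct snd_interval *rate =
                  hw_param_interval(params, SNDRV_PCM_HW_PARAM_RATE);
          struct snd_mask *fmt =
                  hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);

          rate->min = rate->max = 48000;
          snd_mask_none(fmt);
          snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE);

          return 0;
  }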
If you search for talks from Lars-Peter Clausen at ELC-E you should find some good overviews of the general direction. This is broadly what all the stuff about converting everything to components is going towards: we're removing the distinction between CPU and CODEC components so that everything is interchangeable. The problem is that someone (ideally people with systems with this sort of hardware!) needs to do a bunch of heavy lifting in the framework, and nobody's had the time to work on the main part of the problem yet. Once it's done, things like the audio-graph-card should be able to handle this easily.
In theory, right now you should implement the ASRC as a component driver. You can then set it up as a standalone card if you want to, or integrate it into a custom card as you do now.
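A minimal version of that registration could look like the sketch below (all names are placeholders, and the actual conversion plumbing, PCM/DMA ops and so on are omitted):

  #include <linux/module.h>
  #include <linux/platform_device.h>
  #include <sound/soc.h>

  /* Sketch: expose the ASRC as an ASoC component with one DAI. */
  static struct snd_soc_dai_driver asrc_dai = {
          .name = "asrc-dai",
          .playback = {
                  .stream_name  = "ASRC In",
                  .channels_min = 1,
                  .channels_max = 2,
                  .rates        = SNDRV_PCM_RATE_8000_192000,
                  .formats      = SNDRV_PCM_FMTBIT_S16_LE |
                                  SNDRV_PCM_FMTBIT_S24_LE,
          },
          .capture = {
                  .stream_name  = "ASRC Out",
                  .channels_min = 1,
                  .channels_max = 2,
                  .rates        = SNDRV_PCM_RATE_8000_192000,
                  .formats      = SNDRV_PCM_FMTBIT_S16_LE |
                                  SNDRV_PCM_FMTBIT_S24_LE,
          },
  };

  static const struct snd_soc_component_driver asrc_component = {
          .name = "imx-asrc-sketch",
  };

  static int asrc_probe(struct platform_device *pdev)
  {
          return devm_snd_soc_register_component(&pdev->dev, &asrc_component,
                                                 &asrc_dai, 1);
  }

  static struct platform_driver asrc_driver = {
          .probe = asrc_probe,
          .driver = {
                  .name = "imx-asrc-sketch",
          },
  };
  module_platform_driver(asrc_driver);
  MODULE_LICENSE("GPL");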
On Thu, Jun 02, 2022 at 02:32:23PM +0200, Mark Brown wrote:
> On Thu, Jun 02, 2022 at 01:21:06PM +0200, Sascha Hauer wrote:
> > How would such units be integrated into ASoC? I can think of two ways. The first would be to create a separate audio card from them which records on one end and plays back with a different sample rate / format on the other end; in the v4l2 world that would be a classic mem2mem device. Is ALSA/ASoC prepared for something like this? Would it be feasible to go in such a direction? I haven't found any examples for this in the tree.
> You could certainly do that, though I'd expect userspace wouldn't know what to do with it without specific configuration. It also feels like it's probably not what users really want - generally the use case is for rewriting an audio stream without going back to memory. Going back to memory means chopping things up into periods, which would tend to introduce additional latency and/or fragility that is undesirable even if the devices were DMAing directly to memory.
> > The other way is to attach the ASRC to an existing audio card. That is what the existing in-tree sound/soc/fsl/fsl_asrc.c and sound/soc/fsl/fsl_easrc.c drivers do. This approach feels somewhat limited as it's not possible to just do conversions without playing/recording something. OTOH userspace is unaffected, which might be an advantage. What nags me about that approach is that it's currently not integrated into the simple-audio-card or audio-graph-card bindings. Currently the driver can only be used in conjunction with the fsl,imx-audio-* card drivers. It seems backward to integrate such a generic ASRC unit into a special-purpose audio card driver. With this approach the ASoC core is fully unaware of the ASRC, which also doesn't look very appealing. OTOH I don't know if ASoC could handle it otherwise. Can ASoC handle, for example, a chain of DAIs with different sample rates and formats in that chain?
> This is essentially the general problem with DPCM not scaling at all well: we need to rework the core so that it tracks information about the digital parameters of signals through the system like it tracks simple analog on/off information. At the minute the core doesn't really understand what's going on with the digital routing within the SoC at all; it's all done with manual fixups.
> If you search for talks from Lars-Peter Clausen at ELC-E you should find some good overviews of the general direction.
You likely mean https://www.youtube.com/watch?v=6oQF2TzCYtQ. That indeed gives a good overview of where we are and where we want to go.
> This is broadly what all the stuff about converting everything to components is going towards: we're removing the distinction between CPU and CODEC components so that everything is interchangeable. The problem is that someone (ideally people with systems with this sort of hardware!) needs to do a bunch of heavy lifting in the framework, and nobody's had the time to work on the main part of the problem yet. Once it's done, things like the audio-graph-card should be able to handle this easily.
> In theory, right now you should implement the ASRC as a component driver. You can then set it up as a standalone card if you want to, or integrate it into a custom card as you do now.
Thanks for your input. I'll see how far I get.
Sascha
On Wed, Jun 08, 2022 at 11:28:02AM +0200, Sascha Hauer wrote:
> On Thu, Jun 02, 2022 at 02:32:23PM +0200, Mark Brown wrote:
> > If you search for talks from Lars-Peter Clausen at ELC-E you should find some good overviews of the general direction.
> You likely mean https://www.youtube.com/watch?v=6oQF2TzCYtQ. That indeed gives a good overview of where we are and where we want to go.
Yup, that's the one.
> > In theory, right now you should implement the ASRC as a component driver. You can then set it up as a standalone card if you want to, or integrate it into a custom card as you do now.
> Thanks for your input. I'll see how far I get.
Good luck! :P