[alsa-devel] [RFC] A question regarding open ioctl for user space in ASoC driver.
Hi all,
I have an ASoC driver in my own branch and I'm going to get it upstream in the months ahead. The driver is designed for the Freescale ASRC (Asynchronous Sample Rate Converter) module. It can convert audio data in hardware, without involving software conversion like the SRC function in ALSA-lib. But before I revise and send the driver, I have a few questions.
We currently have two main functions for ASRC: P2P and M2M.
1) P2P, peripheral to peripheral, treats ASRC as a FE (front end) link corresponding to a BE (back end) link such as ESAI<->CS42888. P2P solves the problem of sample rates unsupported by the BE: we convert the unsupported audio format to a supported one before sending the data to the BE. It also gives better quality and lower CPU load compared with ALSA-lib's SRC function.
* For this function, I don't see any critical problem here. But...
2) M2M, memory to memory, simply uses ASRC as a sample rate converter without playback through any BE sound card, just like using software to convert a WAV file from one sample rate to another. The driver has a self-designed application to complete this function by sending the audio data from a wave file into a misc device via non-generic ioctls:
static long asrc_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	struct asrc_pair_params *params = file->private_data;
	void __user *user = (void __user *)arg;
	long ret = 0;

	switch (cmd) {
	case ASRC_REQ_PAIR:
		ret = asrc_ioctl_req_pair(params, user);
		break;
	case ASRC_CONFIG_PAIR:
		ret = asrc_ioctl_config_pair(params, user);
		break;
	case ASRC_START_CONV:
	[...]
So I'm wondering if this kind of ioctl would be allowed, or if there's an alternative approach?
Besides the application in user space, we also have an ASRC plugin for ALSA-lib, so that the M2M function can be used through the plugin before sending data via pcm_write().
And here's another question -- to share the ioctls and struct asrc_pair_params with user space, a header file needs to be put into include/uapi/. Is that possible?
There's a reference code (drivers/mxc/asrc/mxc_asrc.c) located in: http://git.freescale.com/git/cgit.cgi/imx/linux-2.6-imx.git/tree/?h=imx_3.10...
In our internal branch, the ASRC driver is currently divided into two drivers, for M2M and P2P separately. For the upstream version, I'm trying to merge them into a single ASoC driver supporting both M2M and P2P; I'm just not sure how to handle the M2M part appropriately.
Thank you, Nicolin Chen
On Fri, May 16, 2014 at 06:59:41PM +0800, Nicolin Chen wrote:
I have an ASoC driver in my own branch and I'm going to get it upstream in the months ahead. The driver is designed for the Freescale ASRC (Asynchronous Sample Rate Converter) module. It can convert audio data in hardware, without involving software conversion like the SRC function in ALSA-lib. But before I revise and send the driver, I have a few questions.
So, clearly custom ioctl() calls are not a good idea.
- P2P, peripheral to peripheral, treats ASRC as a FE (front end) link corresponding to a BE (back end) link such as ESAI<->CS42888. P2P solves the problem of sample rates unsupported by the BE: we convert the unsupported audio format to a supported one before sending the data to the BE. It also gives better quality and lower CPU load compared with ALSA-lib's SRC function.
- For this function, I don't see any critical problem here. But...
This is moderately common in mainline already. You shouldn't need to have any custom code - we should just be able to figure out that the SRC is needed (and a quick glance at your out of tree code shows no ioctl() so that's fine as you say).
- M2M, memory to memory, simply uses ASRC as a sample rate converter without playback through any BE sound card, just like using software to convert a WAV file from one sample rate to another. The driver has a self-designed application to complete this function by sending the audio data from a wave file into a misc device via non-generic ioctls:
I would expect this to be handled by just routing the audio between the two front ends using DAPM/DPCM rather than by having a new kernel API. DPCM is mostly Liam's area so I'll defer to him on exactly how DPCM would figure that out - even with pure DAPM it's something we should do better with than we do. I don't know if there are examples in the Intel code people could refer to?
If nothing else, representing the ASRC block as a CODEC would do the trick, though that doesn't entirely play well with DPCM. It is kind of doing the right thing though, in that the ASRC block terminates two digital paths. I'm not loving the elegance there though.
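To make the CODEC idea concrete: the ASRC block could be registered as a codec-style component exposing one DAI per FIFO side. This is a rough, non-runnable fragment of the DAI table only; the DAI names, stream names, and rate/format limits are placeholders invented for this sketch, not the real hardware capabilities:

```c
/* Illustrative fragment only: the ASRC as a codec-style component
 * with one DAI per FIFO side.  All names and the supported
 * rates/formats here are placeholders, not the real limits. */
static struct snd_soc_dai_driver asrc_dais[] = {
	{
		.name = "asrc-input",	/* feeds the input FIFO */
		.playback = {
			.stream_name = "ASRC-In",
			.channels_min = 1,
			.channels_max = 2,
			.rates = SNDRV_PCM_RATE_8000_192000,
			.formats = SNDRV_PCM_FMTBIT_S16_LE,
		},
	},
	{
		.name = "asrc-output",	/* drains the output FIFO */
		.capture = {
			.stream_name = "ASRC-Out",
			.channels_min = 1,
			.channels_max = 2,
			.rates = SNDRV_PCM_RATE_8000_192000,
			.formats = SNDRV_PCM_FMTBIT_S16_LE,
		},
	},
};
```

The two DAIs terminate the two digital paths mentioned above, which is what makes the CODEC representation "kind of the right thing" despite its awkward fit with DPCM.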
Thanks for the ideas!
On Tue, May 20, 2014 at 01:18:34AM +0100, Mark Brown wrote:
On Fri, May 16, 2014 at 06:59:41PM +0800, Nicolin Chen wrote:
- P2P, peripheral to peripheral, treats ASRC as a FE (front end) link corresponding to a BE (back end) link such as ESAI<->CS42888. P2P solves the problem of sample rates unsupported by the BE: we convert the unsupported audio format to a supported one before sending the data to the BE. It also gives better quality and lower CPU load compared with ALSA-lib's SRC function.
- For this function, I don't see any critical problem here. But...
This is moderately common in mainline already. You shouldn't need to have any custom code - we should just be able to figure out that the SRC is needed (and a quick glance at your out of tree code shows no ioctl() so that's fine as you say).
I was thinking about leaving P2P as an initial version and reserving some essential interfaces for the M2M function, so that I can add it later in our internal branch, or we can figure out a better approach in community style. And now it looks like a good idea to try this way so as to move on quickly.
- M2M, memory to memory, simply uses ASRC as a sample rate converter without playback through any BE sound card, just like using software to convert a WAV file from one sample rate to another. The driver has a self-designed application to complete this function by sending the audio data from a wave file into a misc device via non-generic ioctls:
I would expect this to be handled by just routing the audio between the two front ends using DAPM/DPCM rather than by having a new kernel API.
Two front ends: one for ASRC and the other is...? And how does it exchange audio data with user space?
DPCM is mostly Liam's area so I'll defer to him on exactly how DPCM would figure that out - even with pure DAPM it's something we should do better with than we do. I don't know if there are examples in the Intel code people could refer to?
If nothing else, representing the ASRC block as a CODEC would do the trick, though that doesn't entirely play well with DPCM. It is kind of doing the right thing though, in that the ASRC block terminates two digital paths. I'm not loving the elegance there though.
Sorry I can't understand this CODEC approach either...
Yes, ASRC has two digital ends, but each of them is just a FIFO register (input FIFO and output FIFO) waiting for DMA to handle the Tx/Rx job, not like a normal CODEC that has DAIs facing outwards.
Thank you, Nicolin
On Tue, May 20, 2014 at 12:43:24PM +0800, Nicolin Chen wrote:
On Tue, May 20, 2014 at 01:18:34AM +0100, Mark Brown wrote:
On Fri, May 16, 2014 at 06:59:41PM +0800, Nicolin Chen wrote:
This is moderately common in mainline already. You shouldn't need to have any custom code - we should just be able to figure out that the SRC is needed (and a quick glance at your out of tree code shows no ioctl() so that's fine as you say).
I was thinking about leaving P2P as an initial version and reserving some essential interfaces for the M2M function, so that I can add it later in our internal branch, or we can figure out a better approach in community style. And now it looks like a good idea to try this way so as to move on quickly.
Yes, that's going to make progress faster and it's probably going to be the most commonly used bit anyway.
- M2M, memory to memory, simply uses ASRC as a sample rate converter without playback through any BE sound card, just like using software to convert a WAV file from one sample rate to another. The driver has a self-designed application to complete this function by sending the audio data from a wave file into a misc device via non-generic ioctls:
I would expect this to be handled by just routing the audio between the two front ends using DAPM/DPCM rather than by having a new kernel API.
Two front ends: one for ASRC and the other is...? And how does it exchange audio data with user space?
The front ends are the PCMs userspace actually sees, and they're not usually tied to anything except by routing constraints in the firmware/hardware - I'd expect to see one available for each side of the ASRC. I guess at least one of these is just a normal audio streaming FE used to route to the back ends.
If nothing else, representing the ASRC block as a CODEC would do the trick, though that doesn't entirely play well with DPCM. It is kind of doing the right thing though, in that the ASRC block terminates two digital paths. I'm not loving the elegance there though.
Sorry I can't understand this CODEC approach either...
Yes, ASRC has two digital ends, but each of them is just a FIFO register (input FIFO and output FIFO) waiting for DMA to handle the Tx/Rx job, not like a normal CODEC that has DAIs facing outwards.
Right, the DAIs in this case are just connected back to back inside the device, as if you looped line input and line output together in a normal CODEC but without the DAC and ADC.
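That back-to-back wiring could be expressed with DAPM routing, by analogy with the loopback description above. A hypothetical route-table fragment, non-runnable and with every widget name invented purely for illustration:

```c
/* Illustrative DAPM routes only: wire a playback FE through the
 * ASRC's two internal sides and up to a capture FE for M2M.
 * Every widget name here is a placeholder for this sketch. */
static const struct snd_soc_dapm_route asrc_m2m_routes[] = {
	/* FE playback stream into the ASRC input FIFO */
	{ "ASRC Input",  NULL, "FE Playback" },
	/* the two FIFOs are joined back to back inside the device */
	{ "ASRC Output", NULL, "ASRC Input" },
	/* converted stream delivered back to userspace via a capture FE */
	{ "FE Capture",  NULL, "ASRC Output" },
};
```

With routes like these, userspace would drive M2M conversion through two ordinary PCM devices (write one, read the other) instead of a custom ioctl interface.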
participants (2)
- Mark Brown
- Nicolin Chen