[alsa-devel] Different codecs for playback and capture?
Hi,
I'm developing a machine driver for a system where we have two different codecs for input and output, connected to the same I2S interface. Normally a machine driver specifies which codec the I2S interface is connected to, but in this case there should really be two different ones, one for record and one for playback. I'm working on how to put this together, and just thought I'd try and get some input before going too far down the wrong track. The options I see are:
1. Specify two different codecs in the machine driver. Since one can specify several dai_links this would seem to be doable, however, I'm not sure ALSA can handle two codecs sharing a single CPU DAI in this way?
2. Use two separate machine drivers, one for each codec? Again, sharing a single CPU DAI would seem to be an issue here?
3. Write a special codec driver that is a merge between the two codecs, one for input and one for output. Seems like a bit of a hack, but should be possible.
Any spontaneous thoughts?
/Ricard
On 06/03/2015 01:06 PM, Ricard Wanderlof wrote:
Hi,
There has been support for multiple CODECs on the same DAI link for a while now. Have a look at http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=88....
Instead of setting the codec_name/codec_dai_name fields in the dai_link, create a snd_soc_dai_link_component array with an entry for each of the CODECs and assign it to the codecs field in the DAI link.
I'm not too sure how well it works if one CODEC is playback-only and the other is capture-only; there might be some issues. But this is the way to go, and if there are problems they should be fixed.
- Lars
On Wed, 3 Jun 2015, Lars-Peter Clausen wrote:
Ok, thanks.
Actually, in the case in question there is no real codec for capture; it's a MEMS microphone with an I2S output. So patching the (playback-only) codec driver to simply allow capture actually accomplishes the desired result, but of course that's not a viable solution, only a proof that the hardware works.
I can't seem to find a simple driver for a MEMS microphone, though. Since there is actually nothing to configure on the device itself (it has no configuration port, just an I2S output), perhaps I should simply use a dummy driver (snd-soc-dummy) that basically enables capture?
/Ricard
On 06/04/2015 10:06 AM, Ricard Wanderlof wrote:
snd-soc-dummy is only meant to be used for links which do not have any CODEC at all. It is kind of a stop-gap solution until the framework gains the capability to support such links natively.
Even if your device does not have any configuration registers, it will still have constraints like the supported sample rates, sample widths, etc. You should create a driver describing these capabilities. This ensures that things still work when the device is connected to a host-side CPU DAI that supports e.g. sample rates outside the microphone's range. The AK4554 driver is an example of such a driver.
- Lars
On Wed, 3 Jun 2015, Lars-Peter Clausen wrote:
It doesn't seem as if snd_soc_dai_link_component is used in any (in-tree) driver; a grep in sound/soc just returns soc-core.c. Perhaps some out-of-tree driver has been used to test it?
How are the different component codecs accessed when accessing the device? Or does this happen automatically? For instance, normally I would register one card with a single DAI and codec, which would come up as card #0, so I could access the resulting device as hw:0,0. But when I have two codecs on the same dai_link, what mechanism does ALSA use to differentiate between the two? Or is it supposed to happen automatically, depending on the capabilities of the respective codecs?
/Ricard
On 06/04/2015 01:46 PM, Ricard Wanderlof wrote:
It doesn't seem as if snd_soc_dai_link_component is used in any (in-tree) driver; a grep in sound/soc just returns soc-core.c. Perhaps some out-of-tree driver has been used to test it?
This is the only example I'm aware of: http://wiki.analog.com/resources/tools-software/linux-drivers/sound/ssm4567#...
How are the different component codecs accessed when accessing the device? Or does this happen automatically? For instance, normally I would register one card with a single DAI and codec, which would come up as card #0, so I could access the resulting device as hw:0,0. But when I have two codecs on the same dai_link, what mechanism does ALSA use to differentiate between the two? Or is it supposed to happen automatically, depending on the capabilities of the respective codecs?
It will be exposed as a single card with one capture and one playback PCM. So it will be the same as if the CODEC side was only a single device supporting both.
On Thu, 4 Jun 2015, Lars-Peter Clausen wrote:
This is the only example I'm aware of: http://wiki.analog.com/resources/tools-software/linux-drivers/sound/ssm4567#...
Ok, thanks. As you mentioned previously, this is an example of a left-right split codec configuration.
Even if your device does not have any configuration registers, it will still have constraints like the supported sample rates, sample widths, etc. You should create a driver describing these capabilities. This ensures that things still work when the device is connected to a host-side CPU DAI that supports e.g. sample rates outside the microphone's range. The AK4554 driver is an example of such a driver.
Yes, makes sense.
A mildly interesting aspect is that the resulting device doesn't belong to anything in the device tree; it just floats around by itself, since the device tree doesn't model I2S as a bus. A minor observation; I don't know if it should be done differently.
How are the different component codecs accessed when accessing the device? Or does this happen automatically? For instance, normally I would register one card with a single DAI and codec, which would come up as card #0, so I could access the resulting device as hw:0,0. But when I have two codecs on the same dai_link, what mechanism does ALSA use to differentiate between the two? Or is it supposed to happen automatically, depending on the capabilities of the respective codecs?
It will be exposed as a single card with one capture and one playback PCM. So it will be the same as if the CODEC side was only a single device supporting both.
Ok.
I've experimented with this.
The first problem is that the framework intersects the two codec drivers' capabilities, and since one of them supports playback only and the other capture only, the intersected rates and formats are always 0.
I've fixed this by jumping out of the loop early in soc_pcm_init_runtime_hw() if the codec in question doesn't seem to support the mode (playback vs. capture) that's being considered, indicating that it doesn't care about the rate or format for that mode.
Ideally it would have been some sort of 'if (!codec_stream->defined)', but there isn't such a member in struct snd_soc_dai. I've gone with 'if (!codec_stream->rates && !codec_stream->formats)', thinking that if a codec doesn't support any rates or formats, it probably doesn't support that mode at all (otherwise it's rather meaningless). In fact, one of these (rates or formats) would probably suffice, with a comment explaining what we're really trying to do.
The next problem is that when trying to set hw params, something in the framework or the individual codec driver hw_params() bails out saying it can't set the intended parameters. Looking at that right now to see if it can be solved in a similar way.
/Ricard
On 06/04/2015 04:22 PM, Ricard Wanderlof wrote:
The best way to solve this is probably to introduce a helper function, bool snd_soc_dai_stream_valid(struct snd_soc_dai *dai, int stream), that implements the logic for detecting whether a DAI supports a playback or capture stream. Then, whenever iterating over the codec_dais field for stream operations, skip those which don't support the stream.
- Lars
On Thu, 4 Jun 2015, Lars-Peter Clausen wrote:
The best way to solve this is probably to introduce a helper function bool snd_soc_dai_stream_valid(struct snd_soc_dai *dai, int stream) that implements the logic for detecting whether a DAI supports a playback or capture stream. And then whenever iterating over the codec_dais field for stream operations skip those which don't support the stream.
Yes. The question is exactly what the helper function should use to determine stream support. Is it sufficient to check for dai->driver->playback/capture->rates != 0?
I'm a bit worried about the 'whenever iterating over the codec_dais field' part. There are 50+ places that iterate over codec_dais, mostly in soc-pcm.c and soc-core.c. Of course code could be added to all of them, but it starts to feel like the wrong approach, i.e. as if the iteration itself should be done in a separate function (which doesn't really seem viable in itself; this isn't glib).
For the hardware I have here, there are two places where it is necessary; admittedly, this particular hardware is pretty simple (on the other hand, I would suppose that most situations with this type of split playback/capture would be for cases where there are simple codecs or I2S microphones that only support one of the modes).
What I'm getting at is that, disregarding the helper function (which is needed anyway), perhaps I should just fix the two locations needed for now and leave the rest to be fixed when the need arises? I'm not sure they all need to be modified in this way.
/Ricard