[alsa-devel] [PATCH] ASoC: Add optional pointer to machine audio routes to snd_soc_card
This is targeted mostly at cross-device setups, where a single audio map defining the routes around and between the codecs looks cleaner than defining and registering maps for each codec separately from machine init callbacks.

This could also be used to reduce simple machine init callbacks where only snd_soc_dapm_add_routes and snd_soc_dapm_sync are called. However, this works only if the widget names are unique in the system.

The idea of a common audio map came from Peter Ujfalusi <peter.ujfalusi@nokia.com>.
Signed-off-by: Jarkko Nikula <jhnikula@gmail.com>
Cc: Peter Ujfalusi <peter.ujfalusi@nokia.com>
---
 include/sound/soc.h  |    8 ++++++++
 sound/soc/soc-core.c |   21 +++++++++++++++++++++
 2 files changed, 29 insertions(+), 0 deletions(-)
diff --git a/include/sound/soc.h b/include/sound/soc.h
index 74921f2..d037996 100644
--- a/include/sound/soc.h
+++ b/include/sound/soc.h
@@ -654,6 +654,14 @@ struct snd_soc_card {
 	struct snd_soc_pcm_runtime *rtd_aux;
 	int num_aux_rtd;
 
+	/*
+	 * optional machine audio map. Can be used as an alternative for
+	 * snd_soc_dapm_add_routes call in machine init in single codec setup
+	 * or as a common audio map in cross-device setup
+	 */
+	const struct snd_soc_dapm_route *route;
+	int num_routes;
+
 	struct work_struct deferred_resume_work;
 
 	/* lists of probed devices belonging to this card */
diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
index a233607..66591e3 100644
--- a/sound/soc/soc-core.c
+++ b/sound/soc/soc-core.c
@@ -1720,6 +1720,7 @@ static void snd_soc_instantiate_card(struct snd_soc_card *card)
 	struct snd_soc_codec *codec;
 	struct snd_soc_codec_conf *codec_conf;
 	enum snd_soc_compress_type compress_type;
+	const char *temp;
 	int ret, i;
 
 	mutex_lock(&card->mutex);
@@ -1813,6 +1814,26 @@ static void snd_soc_instantiate_card(struct snd_soc_card *card)
 		}
 	}
 
+	if (card->route) {
+		/*
+		 * use the last codec and dapm context of it when setting up
+		 * the common audio map
+		 */
+		codec = list_first_entry(&card->codec_dev_list,
+					 struct snd_soc_codec, card_list);
+		temp = codec->name_prefix;
+		codec->name_prefix = NULL;
+		ret = snd_soc_dapm_add_routes(&codec->dapm,
+					      card->route, card->num_routes);
+		if (ret < 0) {
+			pr_err("asoc: failed to add routes %s: %d\n",
+			       card->name, ret);
+			goto probe_aux_dev_err;
+		}
+		codec->name_prefix = temp;
+		snd_soc_dapm_sync(&codec->dapm);
+	}
+
 	snprintf(card->snd_card->shortname, sizeof(card->snd_card->shortname),
 		 "%s", card->name);
 	snprintf(card->snd_card->longname, sizeof(card->snd_card->longname),
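As a purely illustrative sketch (not part of the patch), a machine driver whose init callback only added routes and synced could instead point the new fields at a static table; the board, widget and DAI link names below are made up:

/* hypothetical machine driver fragment using the new card fields */
static const struct snd_soc_dapm_route board_audio_map[] = {
	/* {sink, control, source} - names are examples only */
	{"Headphone Jack", NULL, "HPOUTL"},
	{"Headphone Jack", NULL, "HPOUTR"},
};

static struct snd_soc_card board_card = {
	.name		= "example-board",
	.dai_link	= &board_dai_link,	/* assumed defined elsewhere */
	.num_links	= 1,
	/* new fields from this patch; routes are added and synced at card probe */
	.route		= board_audio_map,
	.num_routes	= ARRAY_SIZE(board_audio_map),
};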
On Fri, Dec 17, 2010 at 07:39:40PM +0200, Jarkko Nikula wrote:
This is targeted mostly at cross-device setups, where a single audio map defining the routes around and between the codecs looks cleaner than defining and registering maps for each codec separately from machine init callbacks.

This could also be used to reduce simple machine init callbacks where only snd_soc_dapm_add_routes and snd_soc_dapm_sync are called. However, this works only if the widget names are unique in the system.

The idea of a common audio map came from Peter Ujfalusi <peter.ujfalusi@nokia.com>.
Good idea. We should probably just do this for CODECs as well; they're often doing similar boilerplate type stuff.
On Fri, 2010-12-17 at 22:37 +0000, Mark Brown wrote:
On Fri, Dec 17, 2010 at 07:39:40PM +0200, Jarkko Nikula wrote:
This is targeted mostly at cross-device setups, where a single audio map defining the routes around and between the codecs looks cleaner than defining and registering maps for each codec separately from machine init callbacks.

This could also be used to reduce simple machine init callbacks where only snd_soc_dapm_add_routes and snd_soc_dapm_sync are called. However, this works only if the widget names are unique in the system.

The idea of a common audio map came from Peter Ujfalusi <peter.ujfalusi@nokia.com>.

Good idea. We should probably just do this for CODECs as well; they're often doing similar boilerplate type stuff.
Acked-by: Liam Girdwood lrg@slimlogic.co.uk
On Friday 17 December 2010 19:39:40 ext Jarkko Nikula wrote:
This is targeted mostly at cross-device setups, where a single audio map defining the routes around and between the codecs looks cleaner than defining and registering maps for each codec separately from machine init callbacks.

This could also be used to reduce simple machine init callbacks where only snd_soc_dapm_add_routes and snd_soc_dapm_sync are called. However, this works only if the widget names are unique in the system.

The idea of a common audio map came from Peter Ujfalusi <peter.ujfalusi@nokia.com>.

Signed-off-by: Jarkko Nikula <jhnikula@gmail.com>
Cc: Peter Ujfalusi <peter.ujfalusi@nokia.com>
For sure we need to have the card-level DAPM map when we have more than one codec in the system. What I was thinking of is more like moving the DAPM map from the codec domain up to the card level. What I mean is that when you build up your ASoC card, the DAPM map/routes are attached to the card and not to the codec. If you have one codec on one card, then it is not going to change much, but if you have multiple codecs you will still have one single DAPM map for the card, which stretches through multiple codecs.
When you have more codecs, you specify the prefix for the codecs, so the card DAPM map/route will have prefixed widgets. You use the prefixed widget names to make the connections needed in your system.
When you debug such a system you are anyway interested in the global DAPM status at any given time.

One thing that might need to be changed in DAPM is how we handle the connected/not connected endpoints. Currently if you have a DAPM_OUTPUT and you do not connect it to a speaker/HP/line, it is handled as if it had some connection. In multi-component this might not be what we want. Also the treatment of the inputs might need to be changed.
In this example:

codec1:
DAPM_OUTPUT("to speaker")
DAPM_OUTPUT("to line out")

codec2:
DAPM_INPUT("line in")
DAPM_SWITCH("bypass")
DAPM_OUTPUT("to speaker")

{"bypass", "Switch", "Line in"},
{"to speaker", "Switch", "bypass"},
In the machine driver you would have something like this:
DAPM_SPK("Speaker1") DAPM_SPK("Speaker2")
{"Speaker1", NULL, "codec1 to speaker"},
{"Speaker2", NULL, "codec2 to speaker"}, {"codec2 Line in", NULL, "codec1 to line out"},
Now if you enable the bypass on codec2, it shall not bring up the codec2 bias, since we do not have a full route. If one starts playback on codec1 and the codec2 bypass is enabled, then we need to power both the codec1 and codec2 DAPM paths. If we do not have the {"codec2 Line in", NULL, "codec1 to line out"} route in the machine driver, we shall not power the bypass path in codec2, even if the switch is on, since the machine has not specified any connection. IMHO this is more logical than treating non-connected OUTPUTs/INPUTs as connected. But this is a different issue.

I think we shall try to move the DAPM mapping/routing from the codec to the card domain, and also change the behavior of unconnected pins.
On Mon, Dec 20, 2010 at 01:22:49PM +0200, Peter Ujfalusi wrote:
For sure we need to have the card-level DAPM map when we have more than one codec in the system.
That's already possible - this is more a convenience for defining it.
What I was thinking of is more like moving the DAPM map from the codec domain up to the card level. What I mean is that when you build up your ASoC card, the DAPM map/routes are attached to the card and not to the codec.
At runtime everything is treated (or should be treated) as one map - the
One thing that might need to be changed in DAPM is how we handle the connected/not connected endpoints. Currently if you have a DAPM_OUTPUT and you do not connect it to a speaker/HP/line, it is handled as if it had some connection. In multi-component this might not be what we want. Also the treatment of the inputs might need to be changed.
I don't see anything different here with multi-component? The general idea is that by default a floating output or input should be treated as connected.
Now if you enable the bypass on codec2, it shall not bring up the codec2 bias, since we do not have a full route.
This isn't abundantly clear - there may potentially be analogue interface issues if two devices which aren't connected aren't both powered up and down together. Robust system design should avoid these issues but I'd prefer it if software were conservative and at the very least always had an option to ensure everything is powered on at once.
If we do not have the {"codec2 Line in", NULL, "codec1 to line out"} route in the machine driver, we shall not power the bypass path in codec2, even if the switch is on, since the machine has not specified any connection.

IMHO this is more logical than treating non-connected OUTPUTs/INPUTs as connected. But this is a different issue.
I don't see any relationship between multi-component and the state of the pins here? If there's a path through the device the path would be there even if there's only one device in the system.
Hi,
sorry for the late reply, but I have constant mail reception problems; I either do not receive mails or receive them randomly...
On Monday 20 December 2010 14:02:01 ext Mark Brown wrote:
What I was thinking of is more like moving the DAPM map from the codec domain up to the card level. What I mean is that when you build up your ASoC card, the DAPM map/routes are attached to the card and not to the codec.
At runtime everything is treated (or should be treated) as one map - the
That's good.
One thing that might need to be changed in DAPM is how we handle the connected/not connected endpoints. Currently if you have a DAPM_OUTPUT and you do not connect it to a speaker/HP/line, it is handled as if it had some connection. In multi-component this might not be what we want. Also the treatment of the inputs might need to be changed.
I don't see anything different here with multi-component? The general idea is that by default a floating output or input should be treated as connected.
As I said, it is another issue. It is just that for me it is more logical to treat the floating inputs/outputs as not connected. It is not that big a problem, since from the machine driver I can mark the unused pins as nc.
Now if you enable the bypass on codec2, it shall not bring up the codec2 bias, since we do not have a full route.
This isn't abundantly clear - there may potentially be analogue interface issues if two devices which aren't connected aren't both powered up and down together. Robust system design should avoid these issues but I'd prefer it if software were conservative and at the very least always had an option to ensure everything is powered on at once.
If you look at things at the card level, I still think that in the case I have described codec2 shall not change bias level. If it does so before playback starts on codec1, we might end up hearing the pop from codec1 powering up, since the codec2 path is already up (with the speaker enabled). If DAPM treats this situation in the same way as if it were within one codec, then we are able to reduce the pop noise on the output. It is not black magic to do so, and since we have a single DAPM map/routing it can be done with small effort, I think.
If we do not have the {"codec2 Line in", NULL, "codec1 to line out"} route in the machine driver, we shall not power the bypass path in codec2, even if the switch is on, since the machine has not specified any connection.

IMHO this is more logical than treating non-connected OUTPUTs/INPUTs as connected. But this is a different issue.
I don't see any relationship between multi-component and the state of the pins here? If there's a path through the device the path would be there even if there's only one device in the system.
Yeah, it is a different issue. If the card is treated as one device, I see no problems. But if we treat a codec as a device within a card, we might have some issues, although they can be solved with small effort.
On Tue, Dec 21, 2010 at 09:42:15AM +0200, Peter Ujfalusi wrote:
On Monday 20 December 2010 14:02:01 ext Mark Brown wrote:
I don't see anything different here with multi-component? The general idea is that by default a floating output or input should be treated as connected.
As I said, it is another issue. It is just that for me it is more logical to treat the floating inputs/outputs as not connected. It is not that big a problem, since from the machine driver I can mark the unused pins as nc.
The reason it's this way round is that this way we default to allowing audio to pass through the system. Generally when people are bringing up a new system they're much more worried about the hardware being at all functional than about power optimisation - in many systems power isn't a concern at all. This way there are fewer obstacles to getting audio up, and people can go through and optimise later if they need to.
Now if you enable the bypass on codec2, it shall not bring up the codec2 bias, since we do not have a full route.
This isn't abundantly clear - there may potentially be analogue interface issues if two devices which aren't connected aren't both powered up and down together. Robust system design should avoid these issues but I'd prefer it if software were conservative and at the very least always had an option to ensure everything is powered on at once.
If you look at things at the card level, I still think that in the case I have described codec2 shall not change bias level. If it does so before playback starts on codec1, we might end up hearing the pop from codec1 powering up, since the codec2 path is already up (with the speaker enabled).
Like I say, if the speaker is unconditionally enabled, DAPM will keep the speaker output path up anyway, so it's going to be up regardless of anything else. If the bias management functions are enabling output drivers, they're doing something wrong anyway.
On Mon, 20 Dec 2010 13:22:49 +0200 Peter Ujfalusi peter.ujfalusi@nokia.com wrote:
For sure we need to have the card-level DAPM map when we have more than one codec in the system. What I was thinking of is more like moving the DAPM map from the codec domain up to the card level. What I mean is that when you build up your ASoC card, the DAPM map/routes are attached to the card and not to the codec.
Actually the paths and widgets are already decoupled from the codec and moved to the card domain. The map/route is an initialization-time thing which still has a dependency on the codec, i.e. snd_soc_dapm_add_route favors a widget from the calling DAPM context (codec).

This is because there can be machines with multiple codecs that are not cross-connected but that have widgets with the same name, and I didn't want to force them to use name prefixing and thus cause userspace-kernel breakage (kcontrol names would change) compared to the current implementation.
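For reference, a rough sketch of how a machine driver opts in to name prefixing today; the device name and prefix below are made up, and the field names are quoted from memory rather than checked against the tree:

static struct snd_soc_codec_conf board_codec_conf[] = {
	{
		.dev_name	= "some-codec.1-001a",	/* made-up codec device name */
		.name_prefix	= "codec1",
	},
};

/* hooked up from the machine's struct snd_soc_card: */
	.codec_conf	= board_codec_conf,
	.num_configs	= ARRAY_SIZE(board_codec_conf),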
On Monday 20 December 2010 14:17:00 ext Jarkko Nikula wrote:
On Mon, 20 Dec 2010 13:22:49 +0200
Peter Ujfalusi peter.ujfalusi@nokia.com wrote:
For sure we need to have the card-level DAPM map when we have more than one codec in the system. What I was thinking of is more like moving the DAPM map from the codec domain up to the card level. What I mean is that when you build up your ASoC card, the DAPM map/routes are attached to the card and not to the codec.
Actually the paths and widgets are already decoupled from the codec and moved to the card domain. The map/route is an initialization-time thing which still has a dependency on the codec, i.e. snd_soc_dapm_add_route favors a widget from the calling DAPM context (codec).
I see.
This is because there can be machines with multiple codecs that are not cross-connected but that have widgets with the same name, and I didn't want to force them to use name prefixing and thus cause userspace-kernel breakage (kcontrol names would change) compared to the current implementation.
If you have multiple codecs in your device, it is likely that those codecs will have colliding widget names. IMHO using prefixes for multi-codec cards shall not break the userspace-kernel interface, since with the old ASoC we had one codec per card. So if you move your machine to real multi-component, you break the userspace-kernel interface anyway by moving the PCMs to one card from the original one-PCM-per-card arrangement. If you have one codec in your system you do not need to prefix it, so you break nothing. I would use prefixes in real multi-codec cases... Are we using the prefixes for the non-DAPM kcontrols as well? I think we shall, if the prefix is defined.
On Tue, 21 Dec 2010 09:30:46 +0200 Peter Ujfalusi peter.ujfalusi@nokia.com wrote:
This is because there can be machines with multiple codecs that are not cross-connected but that have widgets with the same name, and I didn't want to force them to use name prefixing and thus cause userspace-kernel breakage (kcontrol names would change) compared to the current implementation.
If you have multiple codecs in your device, it is likely that those codecs will have colliding widget names. IMHO using prefixes for multi-codec cards shall not break the userspace-kernel interface, since with the old ASoC we had one codec per card. So if you move your machine to real multi-component, you break the userspace-kernel interface anyway by moving the PCMs to one card from the original one-PCM-per-card arrangement. If you have one codec in your system you do not need to prefix it, so you break nothing. I would use prefixes in real multi-codec cases...
It's not entirely certain. Before the cross-device set, i.e. in 2.6.37, it's possible to build a machine driver with multiple codecs that have the same widget names, since the paths and widgets are per codec. A collision could come only from kcontrol names.

But I agree. Prefixing new machines sounds safer in the long run. Then it's less likely that changes in one codec driver cause hard-to-see problems on machines with multiple codecs.
Are we using the prefixes for the non-DAPM kcontrols as well? I think we shall, if the prefix is defined.
Yep, the prefix is applied to all codec kcontrol, widget and route names.
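To make that concrete with assumed names: with a "codec2" prefix, a mixer control such as "PCM Playback Volume" would show up to userspace as "codec2 PCM Playback Volume", and the widget "Line in" would be referred to in the card-level map as "codec2 Line in", as in the example routes earlier in this thread.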
participants (4):

- Jarkko Nikula
- Liam Girdwood
- Mark Brown
- Peter Ujfalusi