Re: [alsa-devel] [PATCH 07/11] ASoC: tegra: Add tegra-das driver
Mark Brown wrote:
On Fri, Dec 17, 2010 at 02:41:26PM -0700, Stephen Warren wrote:
/*
 * Terminology:
 *   DAS: Digital audio switch (HW module controlled by this driver)
 *   DAP: Digital audio port (port/pins on Tegra device)
 *   DAC: Digital audio controller (e.g. I2S or AC97 controller elsewhere)
 *
 * The Tegra DAS is a mux/cross-bar which can connect each DAP to a specific
 * DAC, or another DAP. When DAPs are connected, one must be the master and
 * one the slave. Each DAC allows selection of a specific DAP for input, to
 * cater for the case where N DAPs are connected to 1 DAC for broadcast
 * output.
 *
 * This driver is dumb; no attempt is made to ensure that a valid routing
 * configuration is programmed.
 */
Reading this, it's essentially just exporting point-to-point switches within the DAS block. This really feels like it ought to be represented as a CODEC driver to give userspace runtime flexibility to control the mux - it looks like the AP part of the system streams data into this block, which then has very flexible switching on to the externally connected audio devices.
Hmm. I don't see any examples of anything like Tegra's DAS in the existing ASoC code-base. So, I'm slightly unsure how to recast the DAS driver as a codec. In particular, in the machine driver, would I do something like:
static struct snd_soc_dai_link harmony_i2s_das_link = {
	.name		= "WM8903",
	.stream_name	= "WM8903 PCM",
	.cpu_dai_name	= "tegra-i2s-dai.0",
	.platform_name	= "tegra-pcm-audio",
	.codec_name	= "tegra-das.0",
	.codec_dai_name	= "tegra-das.0-in",
	.ops		= &harmony_i2s_das_ops,
};

static struct snd_soc_dai_link harmony_das_wm8903_link = {
	.cpu_dai_name	= "tegra-das.0-out",
	.codec_name	= "wm8903-codec.0-001a",
	.codec_dai_name	= "wm8903-hifi",
	.ops		= &harmony_das_codec_ops,
};

static struct snd_soc_dai_link *harmony_links[] = {
	&harmony_i2s_das_link,
	&harmony_das_wm8903_link,
};

static struct snd_soc_card snd_soc_harmony = {
	.name		= "tegra-harmony",
	.dai_link	= &harmony_links,
	.num_links	= ARRAY_SIZE(harmony_links),
};
And then re-write the tegra-das driver to expose two DAIs somehow?
Wouldn't that end up exposing two streams to user-space, one for each entry in harmony_links[]?
On Wed, Dec 22, 2010 at 10:30:15AM -0800, Stephen Warren wrote:
Hmm. I don't see any examples of anything like Tegra's DAS in the existing ASoC code-base. So, I'm slightly unsure how to recast the DAS driver as a codec. In particular, in the machine driver, would I do something like:
The closest thing in the tree at the minute is WM8994 and WM8995 - both of these have multiple audio interfaces plus a DAC/ADC/DMIC block connected together via a digital mixing block.
static struct snd_soc_dai_link *harmony_links[] = {
	&harmony_i2s_das_link,
	&harmony_das_wm8903_link,
};
Essentially, yes.
Wouldn't that end up exposing two streams to user-space, one for each entry in harmony_links[]?
Yeah, that's the unfortunate bit - it doesn't work ideally right now. We actually need to take care of this anyway (it's also an issue for things like phones, as you get the same structure for the link between a digital baseband and the CODEC it's connected to) and something like this that really strongly benefits from the feature would be a good thing to use to get that working.
I guess for now setting up a 1:1 mapping between the CPU DAIs and the physical ports would get the CPU driver going, then I can do the framework work required to make sure DAPM propagates nicely over the mux. Does that sound reasonable?
Mark Brown wrote:
I guess for now setting up a 1:1 mapping between the CPU DAIs and the physical ports would get the CPU driver going, then I can do the framework work required to make sure DAPM propagates nicely over the mux. Does that sound reasonable?
So I'm clear, I'm interpreting that as meaning the existing tegra_das.c driver that I sent to the list is acceptable for now. Then, once that's merged in, we can add the infrastructure to recast it as a codec/... Or, were you asking for some kind of structural change?
Thanks.
On Wed, Jan 05, 2011 at 02:58:02PM -0800, Stephen Warren wrote:
So I'm clear, I'm interpreting that as meaning the existing tegra_das.c driver that I sent to the list is acceptable for now. Then, once that's merged in, we can add the infrastructure to recast it as a codec/... Or, were you asking for some kind of structural change?
No, I don't think this should be made visible to machine drivers at all - they should just see a straight through mapping from the DMA channels to the ports in the first instance.
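For illustration, a "straight through" Harmony machine driver along these lines would contain just a single DAI link going directly from the I2S controller to the WM8903, with the DAS programmed somewhere else entirely. This is only a sketch: the device and DAI names are reused from the earlier snippet in this thread, and harmony_asoc_ops is a placeholder for whatever hw_params handling the machine driver needs.

#include <sound/soc.h>

static struct snd_soc_ops harmony_asoc_ops;	/* placeholder; .hw_params etc. go here */

static struct snd_soc_dai_link harmony_wm8903_link = {
	.name		= "WM8903",
	.stream_name	= "WM8903 PCM",
	.cpu_dai_name	= "tegra-i2s-dai.0",
	.platform_name	= "tegra-pcm-audio",
	.codec_name	= "wm8903-codec.0-001a",
	.codec_dai_name	= "wm8903-hifi",
	.ops		= &harmony_asoc_ops,
};

static struct snd_soc_card snd_soc_harmony = {
	.name		= "tegra-harmony",
	.dai_link	= &harmony_wm8903_link,
	.num_links	= 1,
};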
Mark Brown wrote:
No, I don't think this should be made visible to machine drivers at all - they should just see a straight through mapping from the DMA channels to the ports in the first instance.
Oh. So what should set up this 1:1 mapping then; what module should the DAS register writes be contained in? And later, what module should configure the DAS with the mux configuration that is appropriate for the board? It seems like the machine driver is the only place with the knowledge to define what the routing should be. Whether the machine driver calls tegra_das_* vs. some codec/mux API to set this up seems like a different issue to whether the machine driver or something else should contain this knowledge.
In the short-term, are you expecting the I2S driver to expose a CPU DAI for each audio controller and port? The number of audio controllers and ports isn't equal, and hence it wouldn't be possible to support a board using just port 5 since there's no controller 5 (and even audio controller 3 I think is SPDIF not I2S)...
On Wed, Jan 05, 2011 at 03:27:17PM -0800, Stephen Warren wrote:
Oh. So what should set up this 1:1 mapping then; what module should the DAS register writes be contained in?
I guess the I2S driver?
And later, what module should configure the DAS with the mux configuration that is appropriate for the board? It seems like the machine driver is the only place with the knowledge to define what the routing should be.
My suggestion would only work for very simple boards like Harmony.
Whether the machine driver calls tegra_das_* vs. some codec/mux API to set this up seems like a different issue to whether the machine driver or something else should contain this knowledge.
The end result would be that this would all be done in the application layer, potentially dynamically.
In the short-term, are you expecting the I2S driver to expose a CPU DAI for each audio controller and port? The number of audio controllers and ports isn't equal, and hence it wouldn't be possible to support a board using just port 5 since there's no controller 5 (and even audio controller 3 I think is SPDIF not I2S)...
Oh, hrm. That wasn't clear from your code. Would mapping controller n to port n work for Harmony?
Mark Brown wrote:
The end result would be that this would all be done in the application layer, potentially dynamically.
Hmmm.
One of the nice things about ALSA on desktop is that everything just works without having to fiddle around setting up routing etc. from user-space e.g. using alsamixer or custom code.
Isn't ASoC attempting to set up a reasonably good routing/configuration on a per-board basis, driven by knowledge in the machine (board?) driver? Then, let people tweak it /if/ they want? It kinda sounds like you're pushing for a model where simply loading the driver may or may not have things set up right, then one has to configure everything from user-space before audio will work. Am I grossly misunderstanding?
Something else I was holding off on asking, but may as well now:
Related to this, there's a lot of code in the-original-Tegra-driver-that-I-based-my-submission-on which pokes all kinds of registers in the codec from the machine driver to set up recording; e.g. mic mux selection, mic volume or gain, bias on/off, mono/stereo etc. based off detailed knowledge of the Harmony board. Seaboard, Ventana, ... would have similar but subtly different configuration logic. I stripped this out to simplify things initially. Is such configuration not meant to be part of the machine driver, but provided by manually setting everything up with alsamixer for example (and perhaps shipping a saved alsamixer state file)?
Oh, hrm. That wasn't clear from your code. Would mapping controller n to port n work for Harmony?
Yes, Harmony just happens to use controller ID 1 and port ID 1.
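For illustration only, the fixed 1:1 setup being discussed could be done once at probe time by the I2S driver (or a small helper), keeping the DAS invisible to machine drivers. This is a sketch under that assumption; the helper name and arguments below are hypothetical stand-ins, not the actual tegra_das API.

/* hypothetical helper assumed to be exported by the DAS code; not the real API */
int tegra_das_connect_dap_to_dac(int dap_id, int dac_id);

static int tegra_i2s_setup_das_routing(int id)
{
	/* route I2S controller 'id' straight to DAP (physical port) 'id' */
	return tegra_das_connect_dap_to_dac(id, id);
}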
On Wed, Jan 05, 2011 at 04:20:04PM -0800, Stephen Warren wrote:
One of the nice things about ALSA on desktop is that everything just works without having to fiddle around setting up routing etc. from user-space e.g. using alsamixer or custom code.
If you look at the desktop case on a modern system, a fair chunk of the configuration is actually being handled by PulseAudio rather than the driver - the driver exposes a standard view of the cards but Pulse figures out how it should be configured. Probably at some point after the media controller API (or something like it) makes it into mainline we'll start to see some efforts at similar management for embedded systems, but the diversity of the hardware makes this a much more difficult problem.
Isn't ASoC attempting to set up a reasonably good routing/configuration on a per-board basis, driven by knowledge in the machine (board?) driver? Then, let people tweak it /if/ they want?
No, it's delegating everything to userspace.
It kinda sounds like you're pushing for a model where simply loading the driver may or may not have things set up right, then one has to configure everything from user-space before audio will work. Am I grossly misunderstanding?
No, that's exactly it. The bulk of the reason is that the hardware is very diverse, plus the fact that the edit/compile/run cycle for dropping the configuration into the kernel is pretty slow and painful and it's very hard to maintain stuff in mainline when you can't test the settings.
You can't do stuff in the CODEC driver because everything depends on the system and what physical configuration it has, plus user taste. Doing stuff in the machine driver means you're very fragile in the face of any changes in the CODEC driver (and CPU drivers as CPUs get more functionality) and means that stuff is still in the kernel.
Things get even more entertaining when you consider that things like smartphones can have vastly different configurations depending on the use case they're using - it's often not just a case of tweaks, it can be a substantially different setup.
Worse, if you hard code stuff into the kernel you end up causing pain for userspace people when they want to change use cases - they own part of the configuration but not all of it, which increases the coupling between the software components in the system. Adding a new use case becomes much more likely to require kernel changes (and consider what happens on a board that's used by multiple userspace groups, all of whom may have diverging ideas of what's the optimal setup for any given use case).
Related to this, there's a lot of code in the-original-Tegra-driver-that-I-based-my-submission-on which pokes all kinds of registers in the codec from the machine driver to set up recording; e.g. mic mux selection, mic volume or gain, bias on/off, mono/stereo etc. based off detailed knowledge of the Harmony board. Seaboard, Ventana, ... would have similar but subtly different configuration logic. I stripped this out to simplify things initially. Is such configuration not meant to be part of the machine driver, but provided by manually setting everything up with alsamixer for example (and perhaps shipping a saved alsamixer state file)?
The intention is that userspace would ship a bunch of different configuration files here; you should never be configuring any of this stuff in kernel space. When used in conjunction with mainline drivers such register writes are often actively disruptive - they overwrite settings applications might try to change, confuse the internal state of the CODEC driver or otherwise upset things. Stage one when people run into issues is often to strip such code from drivers.
Bias management in particular should never be being done from the machine driver - the ASoC core may well get confused if you do that by hand.
Both basic alsa-lib and UCM have support for loading different configurations based on board name, and userspace can in general do whatever it likes to handle per-board setup - what makes sense is going to vary depending on how the application layer is set up.
Oh, hrm. That wasn't clear from your code. Would mapping controller n to port n work for Harmony?
Yes, Harmony just happens to use controller ID 1 and port ID 1.
OK, then the straight through mapping sounds like a good bet for initial submission. If we don't get better support in before 2.6.39 goes out then we can always expose the API to machine drivers as a stopgap.
Mark Brown wrote:
No, it's delegating everything to userspace. ... [more detailed explanation]
OK. I understand having the use-cases controlled from user-space. However, I'm still having a little trouble mapping your explanation to what's actually in the code for various existing ASoC drivers.
As a specific example, look at sound/soc/atmel/playpaq_wm8510.c (for no other reason than being alphabetically first in the source tree):
static const struct snd_soc_dapm_widget playpaq_dapm_widgets[] = {
	SND_SOC_DAPM_MIC("Int Mic", NULL),
	SND_SOC_DAPM_SPK("Ext Spk", NULL),
};

static const struct snd_soc_dapm_route intercon[] = {
	/* speaker connected to SPKOUT */
	{"Ext Spk", NULL, "SPKOUTP"},
	{"Ext Spk", NULL, "SPKOUTN"},

	{"Mic Bias", NULL, "Int Mic"},
	{"MICN", NULL, "Mic Bias"},
	{"MICP", NULL, "Mic Bias"},
};
This is the kind of thing I'm talking about when saying that the machine drivers define the routing - or more precisely, they define all the possible legal routes, and then leave it to be decided which of those routes to actually use when there are multiple options, such as both headphone and speaker outputs from a codec connected to a single CPU DAI.
Is this a level below the cases you were describing; i.e. in ASoC currently, the machine driver does specify possible routes, but lets user-space decide which of those to activate based on use-cases?
If the latter is true, then I'd argue that the mapping of CPU audio controller to CPU audio port (i.e. Tegra DAS configuration) is the same kind of thing. I suppose that feeds into your argument that the DAS driver should be a codec, albeit ASoC doesn't fully support chaining of codecs right now IIUC.
So, long term in Tegra's case, there would be a DAS codec with 3 controller-side DAIs and 5 port-side DAIs. The machine driver for Harmony would connect (using snd_soc_dapm_route data) just one of those port-side DAIs to the WM8903, and presumably only one of the controller-side DAIs to just the one controller, since it wouldn't make sense to actually expose N controllers when at most one could be used.
Or, would you expect Harmony to hook up both I2S controllers to the DAS codec, and let the user decide which one to use? Personally, I'd expect only a single controller to be exposed since only one can be used at a time, and hence toggling between using different controllers is pointless, and hence so is setting up two controllers.
On a system with two or more ports connected to a codec or modem, I imagine the machine driver would expose and connect as many controllers as makes sense based on the number of ports used and the maximum number available in HW, and then define (again, using snd_soc_dapm_route data) either a 1:1 mapping between controllers and ports, or a more complex mux, based on whether #controllers==#ports or #controllers<#ports.
So again, it's that kind of thing that I had envisaged the machine driver dictating; how many I2S controllers to "enable", and which I2S ports to map them to.
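For illustration, the sort of snd_soc_dapm_route table being described might look roughly like the following. Every widget name here is invented, since they would be defined by the (not yet written) DAS codec driver; only the shape of the table is the point.

#include <sound/soc.h>

/* hypothetical routing through a DAS codec; all widget names are made up */
static const struct snd_soc_dapm_route harmony_das_routes[] = {
	/* I2S controller 1 playback feeds physical port (DAP) 1... */
	{"DAP1 Mux", NULL, "DAC1 Playback"},
	{"DAP1 Output", NULL, "DAP1 Mux"},

	/* ...and capture comes back from the same port */
	{"DAC1 Capture", NULL, "DAP1 Input"},
};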
Finally, all the above is really talking about what a machine driver "allows". It'd still be nice if the default (in the absence of any alsamixer or UCM setup) was that opening e.g. hw:0,0 would allow some simple use of the audio, e.g. playback to primary output, capture from primary input, so that simple testing could be performed without having to set up a bunch of infrastructure in user-space. At least for playback, this is the case today for the Tegra audio driver contained in the patches I sent to the mailing list, and also for capture with the internal version with all that backdoor codec register hacking.
I wonder if such default setup is equivalent of something like the following from the atmel driver:
/* always connected pins */
snd_soc_dapm_enable_pin(dapm, "Int Mic");
snd_soc_dapm_enable_pin(dapm, "Ext Spk");
snd_soc_dapm_sync(dapm);
Thanks again for all the help.
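For comparison, the equivalent for Harmony/WM8903 might look something like the sketch below, registering the board-level widgets and routes and leaving a couple of basic paths enabled by default. The jack widget names are invented, and the WM8903 pin names ("HPOUTL", "HPOUTR", "IN1L", "Mic Bias") are written from memory of that CODEC driver, so treat the whole thing as an assumption to be checked rather than a definitive implementation.

#include <sound/soc.h>
#include <sound/soc-dapm.h>

static const struct snd_soc_dapm_widget harmony_dapm_widgets[] = {
	SND_SOC_DAPM_HP("Headphone Jack", NULL),
	SND_SOC_DAPM_MIC("Mic Jack", NULL),
};

static const struct snd_soc_dapm_route harmony_audio_map[] = {
	{"Headphone Jack", NULL, "HPOUTL"},
	{"Headphone Jack", NULL, "HPOUTR"},
	{"Mic Bias", NULL, "Mic Jack"},
	{"IN1L", NULL, "Mic Bias"},
};

/* .init callback for the DAI link */
static int harmony_asoc_init(struct snd_soc_pcm_runtime *rtd)
{
	struct snd_soc_codec *codec = rtd->codec;
	struct snd_soc_dapm_context *dapm = &codec->dapm;

	snd_soc_dapm_new_controls(dapm, harmony_dapm_widgets,
				  ARRAY_SIZE(harmony_dapm_widgets));
	snd_soc_dapm_add_routes(dapm, harmony_audio_map,
				ARRAY_SIZE(harmony_audio_map));

	/* always-connected pins, so hw:0,0 works without userspace setup */
	snd_soc_dapm_enable_pin(dapm, "Headphone Jack");
	snd_soc_dapm_enable_pin(dapm, "Mic Jack");
	snd_soc_dapm_sync(dapm);

	return 0;
}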
On Thu, Jan 06, 2011 at 09:46:10AM -0800, Stephen Warren wrote:
OK. I understand having the use-cases controlled from user-space. However, I'm still having a little trouble mapping your explanation to what's actually in the code for various existing ASoC drivers.
As a specific example, look at sound/soc/atmel/playpaq_wm8510.c (for no other reason than being alphabetically first in the source tree):
static const struct snd_soc_dapm_widget playpaq_dapm_widgets[] = {
	SND_SOC_DAPM_MIC("Int Mic", NULL),
	SND_SOC_DAPM_SPK("Ext Spk", NULL),
};
This is the kind of thing I'm talking about when saying that the machine drivers define the routing - or more precisely, they define all the possible legal routes, and then leave it to be decided which of those routes to actually use when there are multiple options, such as both headphone and speaker outputs from a codec connected to a single CPU DAI.
Note that in the case of the mic this is also plumbing the bias into the jack itself so the core can figure out when that needs powering up. With the speaker the widget is mostly documentation; it doesn't actually *do* anything here. Some systems have GPIOs or whatever that need updating to power things externally, but that's not the case here, and some will add a SND_SOC_DAPM_PIN_SWITCH() to allow userspace to control stereo paths as one or otherwise manage the power for the widgets.
Is this a level below the cases you were describing; i.e. in ASoC currently, the machine driver does specify possible routes, but lets user-space decide which of those to activate based on use-cases?
Yes. The kernel says what's possible, userspace says what is.
If the latter is true, then I'd argue that the mapping of CPU audio controller to CPU audio port (i.e. Tegra DAS configuration) is the same kind of thing. I suppose that feeds into your argument that the DAS driver should be a codec, albeit ASoC doesn't fully support chaining of codecs right now IIUC.
Exactly, yes. The kernel lets the user do whatever is possible and treats the decision about what's needed at any given moment as a policy decision in line with the general kernel policy.
Chaining of CODECs will work at the minute, it's just really painful for userspace right now as it has to manually configure and start each CODEC<->CODEC link, which isn't terribly reasonable; we only do it in cases like digital basebands where there's no other option.
Or, would you expect Harmony to hook up both I2S controllers to the DAS codec, and let the user decide which one to use? Personally, I'd expect only a single controller to be exposed since only one can be used at a time, and hence toggling between using different controllers is pointless, and hence so is setting up two controllers.
I don't think that'd be useful unless the CPU is capable of doing mixing. Similarly, if both the CPU and CODEC could consume multiple streams using TDM then it might be useful. I don't think either of those cases applies here, though.
On a system with two or more ports connected to a codec or modem, I imagine the machine driver would expose and connect as many controllers as makes sense based on the number of ports used and the maximum number available in HW, and then define (again, using snd_soc_dapm_route data) either a 1:1 mapping between controllers and ports, or a more complex mux, based on whether #controllers==#ports or #controllers<#ports.
Or just let the user flip them over at runtime.
So again, it's that kind of thing that I had envisaged the machine driver dictating; how many I2S controllers to "enable", and which I2S ports to map them to.
It's the choice of which I2S ports to map them onto that I'd like to see controllable at runtime - in the case where we have got multiple options (especially if you had all three physical ports live). Looking at your code it seemed like it was also possible to do things like route the external ports directly to each other, so this sort of flexibility could be used to, say, take the CPU out of the audio path between two external devices for some use cases.
This isn't terribly likely to be used for things like Harmony but in things like smartphones my general experience is that if there's flexibility in the system someone's going to be able to think up a way of exploiting it to add a feature.
Finally, all the above is really talking about what a machine driver "allows". It'd still be nice if the default (in the absence of any alsamixer or UCM setup) was that opening e.g. hw:0,0 would allow some simple use of the audio, e.g. playback to primary output, capture from primary input, so that simple testing could be performed without having to set up a bunch of infrastructure in user-space.
Define "primary output" and "primary input" and what a sensible setup for them is; it's not terribly clear what they are always. Besides, it's not like ALSA supports *any* use without alsa-lib or equivalent and if you've got that far you've already got the infrastructure you need to do configuration anyway more or less. You'd need to do that to develop the default settings anyway, and once you've done that you've then got to translate them back into kernel space.
Probably the only people who'd get much from this are people who are using a reference system with no userspace provided, which is fairly unusual. If a userspace is being shipped then including config there generally isn't much of an issue, and if you're bringing up a board then you're in the situation where you need to work out what the setup that's needed is.
This also comes back to the issues with fragility from dependency on the internals of the CODEC and to a lesser extent CPU drivers - it means the machine driver needs to know much more about what's going on inside the devices it's connecting, which isn't great, and it means that if you change the defaults in the machine driver you change the defaults userspace sees, which may break setups done by setting the values of individual controls.
One other thing I should've mentioned is that from a subsystem maintainer point of view we've got a clear and definite answer to what the setup is, so we don't need to worry about any debate on the issue; that's all punted to userspace.
At least for playback, this is the case today for the Tegra audio driver contained in the patches I sent to the mailing list, and also for capture with the internal version with all that backdoor codec register hacking.
Right, the WM8903 comes up with a DAC to headphone path by default.
One thing I'd say is that with most of the attempts I've seen to do the backdoor write approach, what ends up happening is that some of the configuration done by userspace gets overwritten in the process of setting up the route, which isn't always useful.
I wonder if such default setup is equivalent of something like the following from the atmel driver:
/* always connected pins */
snd_soc_dapm_enable_pin(dapm, "Int Mic");
snd_soc_dapm_enable_pin(dapm, "Ext Spk");
snd_soc_dapm_sync(dapm);
Sort of; it's a combination of that and the fact that the default setup of the WM8903 happens to be one that routes the DAC to the headphone.
Mark Brown wrote:
Probably the only people who'd get much from this are people who are using a reference system with no userspace provided, which is fairly unusual. If a userspace is being shipped then including config there generally isn't much of an issue, and if you're bringing up a board then you're in the situation where you need to work out what the setup that's needed is.
People who're bringing up a board without expecting they need to know anything about alsa-lib would get a bit out of this :-)
So I guess what'd help me through this the most is knowing exactly how the user-space is configured; is this all stuff in /etc/asound.conf or some other config file? When you were talking about user-space earlier, I was thinking of writing a custom application per board/machine to configure the system at e.g. boot time (or libraries that apps would use), which didn't sound appealing; maybe that's part of my misunderstanding.
Are there any public examples for ASoC in particular that I can read in concert with the driver source to get the idea? I did find alsa-lib.git/src/conf but that seems to only cover desktop PCs, not ASoC drivers.
Thanks.
On Thu, Jan 06, 2011 at 01:58:22PM -0800, Stephen Warren wrote:
People who're bringing up a board without expecting they need to know anything about alsa-lib would get a bit out of this :-)
The trouble is they're in a recursive problem where it's pretty difficult to know what a sane setup is for their system.
So I guess what'd help me through this the most is knowing exactly how the user-space is configured; is this all stuff in /etc/asound.conf or some other config file?
This is up to userspace. Some systems use asound.conf and /usr/share/alsa/cards. Some systems have init scripts which run amixer commands. Some systems use UCM or their own implementations of that idea. Some systems restore an 'alsactl store' dump in an init script.
Partly it depends on how much flexibility is needed - if you've only got one input and one output (as is very common) then things are much like on a PC; the bigger and more complicated the system, the more fun things get.
When you were talking about user-space earlier, I was thinking of writing a custom application per board/machine to configure the system at e.g. boot time (or libraries that apps would use), which didn't sound appealing; maybe that's part of my misunderstanding.
I was talking about a single application that can choose multiple configuration files using some algorithm.
Are there any public examples for ASoC in particular that I can read in concert with the driver source to get the idea? I did find alsa-lib.git/src/conf but that seems to only cover desktop PCs, not ASoC drivers.
Beagleboard is the most obvious example, and pretty close to the systems you're working on in terms of features - though I don't know off the top of my head exactly what the various software stacks for that are doing or where to find them.
Really there's no difference to PCs here, the userspace API is exactly the same and anything that works on PCs should work on an embedded system. You'll generally get more knobs to twiddle but the mechanics of doing so are identical.
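For a concrete flavour of the userspace side, here is a minimal sketch using the alsa-lib UCM API (which was only just taking shape around the time of this thread). The card name matches the machine driver sketch earlier, but the "HiFi" verb and "Headphones" device are invented; in practice they would come from the board's UCM configuration files.

#include <alsa/asoundlib.h>
#include <alsa/use-case.h>

/* pick a use case and output device defined by the board's UCM config */
static int enable_hifi_playback(void)
{
	snd_use_case_mgr_t *uc_mgr;
	int err;

	err = snd_use_case_mgr_open(&uc_mgr, "tegra-harmony");
	if (err < 0)
		return err;

	err = snd_use_case_set(uc_mgr, "_verb", "HiFi");
	if (err == 0)
		err = snd_use_case_set(uc_mgr, "_enadev", "Headphones");

	snd_use_case_mgr_close(uc_mgr);
	return err;
}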
Participants (2): Mark Brown, Stephen Warren