[alsa-devel] soc-dsp questions
To prepare next week's ALSA-ASoC meeting, I reviewed Liam's dsp-upstream code, and I am a bit unclear on the 'no_host_mode' supported by some DAIs. Apparently these are regular ALSA PCM substreams, except that there are no data exchanges to/from the host.
- If these substreams are known, an application can in theory open/close them. How would it know that no data is to be written/read? I would think additional changes are required in alsa-lib?
- Is it really important that such PCM devices be known in userspace? Or are they declared only so that ALSA controls are enabled for these streams?
- Along the same lines, how useful is the .pointer routine for such devices? Since no data will be provided by the host, is it really needed? I see it's only meaningful for the low-power playback mode, but I am unsure whether the information provided would actually be used by anyone.
- What is the meaning of the SND_SOC_DAI_LINK_OPT_HOST definition? It doesn't seem to be updated depending on which back-end is used, so is this really needed?
Thanks
-Pierre
On Mon, 2011-04-25 at 17:01 -0500, pl bossart wrote:
To prepare next week's ALSA-ASoC meeting, I reviewed Liam's dsp-upstream code, and I am a bit unclear on the 'no_host_mode' supported by some DAIs. Apparently these are regular ALSA PCM substreams, except that there are no data exchanges to/from the host.
Ah, this part is WIP and will not be part of the initial submission. We are currently using this for audio between the MODEM and ABE that does not pass through the CPU.
- If these substreams are known, an application can in theory
open/close them. How would it know that no data is to be written/read? I would think additional changes are required in alsa-lib?
- is it really important that such PCM devices be known in
userspace? Or are they declared only so that ALSA controls are enabled for these streams?
The intention is that userspace apps would know there is no data, but this is probably better discussed at the conference. It may require an alsa-lib update too.
- along the same lines, how useful is the .pointer routine for such
devices? Since no data will be provided by the host, is this really needed? I see it's only meaningful for the low-power playback mode, but I am unsure if the information provided would actually be used by anyone?
I think in most cases pointer() would not be required here.
- what is the meaning of the SND_SOC_DAI_LINK_OPT_HOST definition? It
doesn't seem to be updated depending on which back-end is used, so is this really needed?
Again, this part is WIP and was used to signal that the DAI could optionally be used in no-host mode.
Liam
On Tue, Apr 26, 2011 at 10:41:05AM +0100, Liam Girdwood wrote:
On Mon, 2011-04-25 at 17:01 -0500, pl bossart wrote:
To prepare next week's ALSA-ASoC meeting, I reviewed Liam's dsp-upstream code, and I am a bit unclear on the 'no_host_mode' supported by some DAIs. Apparently these are regular ALSA PCM substreams, except that there are no data exchanges to/from the host.
Ah, this part is WIP and will not be part of the initial submission. We are currently using this for audio between the MODEM and ABE that does not pass through the CPU.
This is roughly the same thing I've been talking about for digital DAPM links. I've got code which runs at the minute but the implementation sucks too much, should be able to pull out some of the preparation work in the next day or so.
- is it really important that such PCM devices be known in
userspace? Or are they declared only so that ALSA controls are enabled for these streams?
The intention is that userspace apps would know there is no data, but this is probably better discussed at the conference. It may require an alsa-lib update too.
Personally I think we should just hide them, but it's fairly painful to do that immediately due to the ALSA core infrastructure we're using. If we refactor to reduce this or to support masking PCMs in the core this should be less of an issue.
On 4/26/2011 3:18 AM, Mark Brown wrote:
On Tue, Apr 26, 2011 at 10:41:05AM +0100, Liam Girdwood wrote:
On Mon, 2011-04-25 at 17:01 -0500, pl bossart wrote:
To prepare next week's ALSA-ASoC meeting, I reviewed Liam's dsp-upstream code, and I am a bit unclear on the 'no_host_mode' supported by some DAIs. Apparently these are regular ALSA PCM substreams, except that there are no data exchanges to/from the host.
Ah, this part is WIP and will not be part of the initial submission. We are currently using this for audio between the MODEM and ABE that does not pass through the CPU.
This mode is also deployed on QC MSM as well.
This is roughly the same thing I've been talking about for digital DAPM links. I've got code which runs at the minute but the implementation sucks too much, should be able to pull out some of the preparation work in the next day or so.
Is it possible I can take an early glimpse of the implementation? I do have use cases where PCM data is exchanged between two back-ends. Right now, I need to define dummy hostless front-end DAI links to bring up the back-ends. Another query is how hardware parameters are passed to the back-end with your design? Not being able to choose the back-end channel mode independently of the front-end channel mode is a big problem, especially if the channel mode is more than stereo. The DSP handles upmixing/downmixing in our design. Right now, we force the channel mode to stereo. So, for the scenario where we just want mono input, we have the codec configured to feed a single mic input to both left and right channels, and the DSP averages the two channels into a mono stream. Once we need to support > 2 channel recording, it's wasteful to go with the same approach if all we want is mono input. Did we talk about this topic during the workshop? Why is my problem also a concern on OMAP4/ABE? Any suggestions?
On Thu, Jun 09, 2011 at 11:58:23PM -0700, Patrick Lai wrote:
On 4/26/2011 3:18 AM, Mark Brown wrote:
This is roughly the same thing I've been talking about for digital DAPM links. I've got code which runs at the minute but the implementation sucks too much, should be able to pull out some of the preparation work in the next day or so.
Is it possible I can take an early glimpse of the implementation? I do have
No, the code ended up colliding with something someone else had done and wasn't worth saving so I just threw it away. However I did get the external interface upstream before I did that - that was the code to support not providing a platform driver for a DAI. What the code did was to look to see if there was a platform driver and if there wasn't it'd add some DAPM nodes and links which would bring up the DAI with no userspace involvement.
use cases where PCM data is exchanged between two back-ends. Right now, I need to define dummy hostless front-end DAI links to bring up the back-ends. Another query is how hardware parameters are passed to the back-end with your design? Not being able to choose the back-end channel mode
For the initial code they were just inferred from the capabilities of the DAI - all the cases I'm interested in are for interoperation with something that's fixed format at one end of the link so I could punt on that issue. I'd thought about allowing the dai_link to have a set of hw_params settings stored in it which the user would be given an enumeration to select from but hadn't actually done anything concrete with it.
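Mark's idea above - storing a set of fixed hw_params configurations in the dai_link and giving the user an enumeration to select from - could be sketched roughly as below. This is a hedged, standalone model in plain C: the names (be_config, dai_link_model, be_get_config) are invented for illustration and are not the real ASoC structures or API.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model only: mimics the idea of a dai_link carrying a
 * table of fixed hw_params configurations, selectable by an enumerated
 * control. These are NOT the real ASoC structures. */
struct be_config {
	const char *name;	/* label an enum kcontrol would expose */
	unsigned int rate;
	unsigned int channels;
};

struct dai_link_model {
	const struct be_config *configs;
	size_t num_configs;
	size_t selected;	/* index written by the enum control */
};

/* Return the configuration the back-end should use, falling back to
 * the first entry if the selection is out of range. */
static const struct be_config *be_get_config(const struct dai_link_model *link)
{
	if (link->selected >= link->num_configs)
		return &link->configs[0];
	return &link->configs[link->selected];
}
```

A real implementation would register the selection as an enum kcontrol and apply the chosen entry when fixing up the back-end's hw_params.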
independently of the front-end channel mode is a big problem, especially if the channel mode is more than stereo. The DSP handles upmixing/downmixing in our design. Right now, we force the channel mode to stereo. So, for the scenario where we just want mono input, we have the codec configured to feed a single mic input to both left and right channels, and the DSP averages the two channels into a mono stream. Once we need to support > 2 channel recording, it's wasteful to go with the same approach if all we want is mono input. Did we talk about this topic
That might be handleable by either of the methods I was suggesting above. Of course, depending on the algorithms you're running, the DSP may want more mics than it's producing output channels - beam forming and noise cancellation are the obvious examples there.
during the workshop? Why is my problem also a concern on OMAP4/ABE?
Do you mean not also a concern? I *believe* OMAP is passing the configuration through to the external DAI using the front end/back end connection so the format gets selected by the app when it does a record, possibly with rewriting through the hook functions in the machine driver.
On 6/10/2011 2:42 AM, Mark Brown wrote:
That might be handleable by either of the methods I was suggesting above. Of course, depending on the algorithms you're running, the DSP may want more mics than it's producing output channels - beam forming and noise cancellation are the obvious examples there.
Yes, exactly.
Do you mean not also a concern? I *believe* OMAP is passing the configuration through to the external DAI using the front end/back end connection so the format gets selected by the app when it does a record,
Can you elaborate more on "configuration through external DAI" ? Is there an example?
possibly with rewriting through the hook functions in the machine driver.
Are you referring to the fixup function in the machine driver? It works for hardware parameters that are fixed per machine. For example, regardless of the sample rate of the front-ends routed to the same back-end, the back-end sample rate is fixed at 48 kHz. I am already taking advantage of the hook.
Another query I have is how to handle back-end errors. The audio bus running on my machine requires close coordination between the CPU and CODEC. Essentially, if one side is unable to respond to incoming data in time, data exchange halts. I am looking for a way to reset both the CPU and CODEC back to a fresh state. One approach I am thinking of is to generate an XRUN error (snd_pcm_stop(substream, SNDRV_PCM_STATE_XRUN)) and have the application call prepare() to reset the CPU and CODEC back to a good state. I see each back-end is registered as a PCM device, so it's possible that an application can read /dev/snd/timer to get notified. However, do I call prepare() on one of the FE PCM devices that are routed to the back-end in question? Would this approach work? Any suggestions?
Thanks Patrick
On Fri, Jun 10, 2011 at 06:19:57PM -0700, Patrick Lai wrote:
On 6/10/2011 2:42 AM, Mark Brown wrote:
Do you mean not also a concern? I *believe* OMAP is passing the configuration through to the external DAI using the front end/back end connection so the format gets selected by the app when it does a record,
Can you elaborate more on "configuration through external DAI" ? Is there an example?
That's the whole soc-dsp front end/back end connection thing so the OMAP drivers should provide an example.
possibly with rewriting through the hook functions in the machine driver.
Are you referring to the fixup function in the machine driver? It works for hardware parameters that are fixed per machine. For example, regardless of the sample rate of the front-ends routed to the same back-end, the back-end sample rate is fixed at 48 kHz. I am already taking advantage of the hook.
Yes. Since it's code it *could* do conditional things based on some setting if it needs to.
Another query I have is how to handle back-end errors. The audio bus running on my machine requires close coordination between the CPU and CODEC. Essentially, if one side is unable to respond to incoming data in time, data exchange halts. I am looking for a way to reset both the CPU and CODEC back to a fresh state. One approach I am thinking of is to generate an XRUN error (snd_pcm_stop(substream, SNDRV_PCM_STATE_XRUN)) and have the application call prepare() to reset the CPU and CODEC back to a good state. I see each back-end is registered as a PCM device, so it's possible that an application can read /dev/snd/timer to get notified. However, do I call prepare() on one of the FE PCM devices that are routed to the back-end in question? Would this approach work? Any suggestions?
I'd expect that from an application point of view this will just work already? The application will just operate on its own PCM and will notice a stall in the same way it does for any other device; then the front/back-end machinery will connect everything up in the same way it does for every operation when (if!) the application tries to recover.
Are you referring to the fixup function in the machine driver? It works for hardware parameters that are fixed per machine. For example, regardless of the sample rate of the front-ends routed to the same back-end, the back-end sample rate is fixed at 48 kHz. I am already taking advantage of the hook.
Yes. Since it's code it *could* do conditional things based on some setting if it needs to.
Yes, I don't see any better way short of propagating knowledge of front-ends/back-ends all the way to user-space. I plan on adding a channel-mode enumeration in the machine driver which sets a variable read by the fixup function, so the CPU and CODEC drivers do not need to provide enumerations of channel modes. However, the same control would have to be added to every machine driver.
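The proposal above - a machine-driver enum control whose value is read by the back-end fixup - might be sketched like this. All names are illustrative stand-ins; a real driver would register a kcontrol for the enum and do the rewrite inside its be_hw_params_fixup callback.

```c
#include <assert.h>

/* Channel modes a machine-driver enum control could offer; the values
 * double as the channel count for simplicity of the model. */
enum channel_mode { CH_MONO = 1, CH_STEREO = 2, CH_QUAD = 4 };

/* Variable an enum kcontrol's put() handler would write. */
static enum channel_mode machine_channel_mode = CH_STEREO;

struct be_params_model {
	unsigned int rate;
	unsigned int channels;
};

/* Back-end fixup: the FE channel count is ignored and the
 * machine-level selection wins, so CPU/CODEC drivers never need to
 * expose channel enumerations themselves. */
static void be_channel_fixup(struct be_params_model *params)
{
	params->channels = (unsigned int)machine_channel_mode;
}
```

The downside noted in the thread holds here too: each machine driver would have to duplicate this control unless the framework grows a shared way to express it.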
Another query I have is how to handle back-end errors. The audio bus running on my machine requires close coordination between the CPU and CODEC. Essentially, if one side is unable to respond to incoming data in time, data exchange halts. I am looking for a way to reset both the CPU and CODEC back to a fresh state. One approach I am thinking of is to generate an XRUN error (snd_pcm_stop(substream, SNDRV_PCM_STATE_XRUN)) and have the application call prepare() to reset the CPU and CODEC back to a good state. I see each back-end is registered as a PCM device, so it's possible that an application can read /dev/snd/timer to get notified. However, do I call prepare() on one of the FE PCM devices that are routed to the back-end in question? Would this approach work? Any suggestions?
I'd expect that from an application point of view this will just work already? The application will just operate on its own PCM and will notice a stall in the same way it does for any other device; then the front/back-end machinery will connect everything up in the same way it does for every operation when (if!) the application tries to recover.
Yes, I suppose the existing mechanism would work if the front-end PCM is not hostless; eventually the underrun/overrun would be detected. If the front-end PCM is hostless, the host processor in my case would not even know that the front-end is stalled, even though the back-end knows. I suppose we could enhance the framework to propagate the XRUN error from the back-end to the front-end and have the front-end generate an XRUN signal back to user-space. However, in the case of multiple front-ends routed to the same back-end, all clients of the front-end PCMs would get notified of the XRUN error and all of them would call prepare(). I can see there being adverse effects with such behavior. Maybe it's easier to recover under the hood without user-space knowing about it.
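The BE-to-FE XRUN propagation suggested above could be modelled as below. This is a standalone sketch, not kernel code: in the real kernel each affected front-end substream would be stopped with snd_pcm_stop(substream, SNDRV_PCM_STATE_XRUN), which is what notifies its user-space client; the structs here are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Toy PCM states standing in for SNDRV_PCM_STATE_*. */
enum { STATE_RUNNING, STATE_XRUN };

struct fe_model {
	int state;
};

/* When the back-end stalls, flag every front-end routed to it.
 * This is exactly the fan-out Patrick warns about: every FE client
 * then sees an XRUN and will try to recover with prepare(). */
static void be_report_xrun(struct fe_model **fes, size_t num_fes)
{
	for (size_t i = 0; i < num_fes; i++)
		fes[i]->state = STATE_XRUN;
}
```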
Thanks Patrick
On 13/06/11 05:55, Patrick Lai wrote:
Are you referring to the fixup function in the machine driver? It works for hardware parameters that are fixed per machine. For example, regardless of the sample rate of the front-ends routed to the same back-end, the back-end sample rate is fixed at 48 kHz. I am already taking advantage of the hook.
Yes. Since it's code it *could* do conditional things based on some setting if it needs to.
Yes, I don't see any better way short of propagating knowledge of front-ends/back-ends all the way to user-space. I plan on adding a channel-mode enumeration in the machine driver which sets a variable read by the fixup function, so the CPU and CODEC drivers do not need to provide enumerations of channel modes. However, the same control would have to be added to every machine driver.
Ok, sounds like this may be useful code for other DSPs too.
Fwiw, OMAP4 fixups are based on physical BE DAI so we always use the same config for each BE DAI atm, but flexibility would be good.
Liam
On 11/06/11 12:48, Mark Brown wrote:
On Fri, Jun 10, 2011 at 06:19:57PM -0700, Patrick Lai wrote:
Another query I have is how to handle back-end errors. The audio bus running on my machine requires close coordination between the CPU and CODEC. Essentially, if one side is unable to respond to incoming data in time, data exchange halts. I am looking for a way to reset both the CPU and CODEC back to a fresh state. One approach I am thinking of is to generate an XRUN error (snd_pcm_stop(substream, SNDRV_PCM_STATE_XRUN)) and have the application call prepare() to reset the CPU and CODEC back to a good state. I see each back-end is registered as a PCM device, so it's possible that an application can read /dev/snd/timer to get notified. However, do I call prepare() on one of the FE PCM devices that are routed to the back-end in question? Would this approach work? Any suggestions?
I'd expect that from an application point of view this will just work already? The application will just operate on its own PCM and will notice a stall in the same way it does for any other device; then the front/back-end machinery will connect everything up in the same way it does for every operation when (if!) the application tries to recover.
This is how it works on OMAP4.
The ABE handles and recovers most BE errors internally and can also interrupt the CPU for serious errors (to signal XRUNs etc).
Liam
On 11/06/11 02:19, Patrick Lai wrote:
On 6/10/2011 2:42 AM, Mark Brown wrote:
That might be handleable by either of the methods I was suggesting above. Of course, depending on the algorithms you're running, the DSP may want more mics than it's producing output channels - beam forming and noise cancellation are the obvious examples there.
Yes, exactly.
Do you mean not also a concern? I *believe* OMAP is passing the configuration through to the external DAI using the front end/back end connection so the format gets selected by the app when it does a record,
This is correct. We pass configuration from the FE PCM to all the connected BE DAIs.
Liam
participants (5)
- Liam Girdwood
- Liam Girdwood
- Mark Brown
- Patrick Lai
- pl bossart