[alsa-devel] Using UCM with PulseAudio
Hello, I've been working on porting PulseAudio to Android on the OMAP4-based Galaxy Nexus, and have recently been looking at the policy bits. The basic porting has been greatly simplified by Wei Feng's work to renew the PulseAudio UCM integration and Liam's help with fixing up some of the UCM config for the Galaxy Nexus. I am, however, facing some trouble mapping how the hardware is used to how UCM presents it (or, perhaps, how the UCM-PA mapping does).
Some simplified background: the devices of interest on the OMAP4 SoC are the main hifi PCM which can be routed to various outputs, the modem PCM which is not used for actual output but to enable use of the modem during calls, and the tones PCM which is intended to be used for playing ringtones, aiui.
The first problem is mutual exclusivity of verbs. From what I can understand, verbs are intended to be mutually exclusive -- if you have a HiFi verb and a VoiceCall verb, only one may be used at a time. We have mapped verbs to card profiles, which offer the same guarantee. However, on Android (which is a fair example of the kind of audio policy we might want), the HiFi verb PCMs may be used while the VoiceCall PCMs are open. This is done, for example, to play an end-of-call tone from the CPU while the modem PCMs are still held open. Is there some way to do this with UCM?
The second problem is having separate PCMs for modifiers. In the OMAP4 profile, ringtone playback is exposed via a PlayTone modifier which corresponds to a separate PCM from regular HiFi playback. In the UCM-PA mapping we decided on, modifiers were implemented as device intended roles on a sink, so that when a stream with that role came in, we could enable the modifier, and disable it when such a stream ends. However, this doesn't account for switching the PCM on which playback is occurring. Should we be creating a separate sink for such modifiers (with lower priority, so they're not routed to unless there's a stream with the required role coming in)? Or should we be reopening the PCM for this?
Finally, a question also related to modifiers -- is it expected that there will never be a case where a stream that requires no modifier is being played while a stream that does require a modifier also exists? If not, what kind of policy should we have for enabling the modifier or not?
Cheers, Arun
On Fri, 2012-06-15 at 08:23 +0530, Arun Raghavan wrote:
Hello, I've been working on porting PulseAudio to Android on the OMAP4-based Galaxy Nexus, and have recently been looking at the policy bits. The basic porting has been greatly simplified by Wei Feng's work to renew the PulseAudio UCM integration and Liam's help with fixing up some of the UCM config for the Galaxy Nexus. I am, however, facing some trouble mapping how the hardware is used to how UCM presents it (or, perhaps, how the UCM-PA mapping does).
Some simplified background: the devices of interest on the OMAP4 SoC are the main hifi PCM which can be routed to various outputs, the modem PCM which is not used for actual output but to enable use of the modem during calls, and the tones PCM which is intended to be used for playing ringtones, aiui.
Sorry for hijacking the thread. The following wall of text won't help Arun with his immediate problems, but I wanted to share my thoughts about the planned routing system in pulseaudio and how OMAP4-style hardware affects it.
I asked in IRC for a clarification about what "is not used for actual output but to enable use of the modem during calls" means. Arun means that opening the modem playback PCM opens a direct audio path from the modem to the earpiece. Pulseaudio doesn't see the audio data at all. There's a corresponding capture PCM that opens a direct audio path to the other direction.
This sort of hardware causes trouble for the planned routing system, at least as I have envisioned it to behave. My vision has been that the routing logic in pulseaudio would enable and disable ports based on the streams that exist and their properties. In the OMAP4 case, there aren't any streams created when a cellular call starts, and therefore the routing logic doesn't know that it should do something.
Arun has made an "Android policy module", which I suppose activates the VoiceCall profile when a call starts and prevents the associated sink and source from suspending even though they appear to be idle. In the brave new generic routing policy world, the ports/profiles should not be touched by anything other than the generic routing logic. Therefore, Arun's solution will eventually have to be replaced somehow.
As I described, my vision for the routing logic falls short with OMAP4, if there are no streams associated with the cellular audio. So, what's the solution? One solution would be to create virtual streams for the in-hardware audio paths. Another would be to introduce entirely new concepts in pulseaudio for the in-hardware audio paths.
Regarding the virtual streams solution, it's still unclear what would trigger the virtual streams to be created. I think they should be created by module-alsa-card when the relevant ports are activated, but what activates the ports, if the routing system only does anything when streams are created? The cellular adaptation module should tell the routing system that "a call is starting, it would be nice to open the call audio path now". I was hoping that the virtual stream solution would avoid adding any further complexity to the routing system, but it seems that the routing system must anyway be aware of the special in-hardware audio paths so that other modules can request them to be activated.
So, maybe it's not worth the effort to create virtual streams, since they don't actually help with the routing.
After spending a while thinking about this, my proposal is that the routing logic should play with more abstract inputs and outputs than streams and ports. All streams and ports, and the cellular modem that is not directly accessible by Pulseaudio, would be just inputs and outputs from the routing system point of view.
For example, when a music player creates a new playback stream, the server-side stream implementation would create a new input and register it in the routing system. After that, the stream implementation would submit a "routing request" to the routing system: "connect input id:4 to somewhere". The application didn't specify where the stream should be routed, so based on the routing request properties (like media role), the routing system would select the best output that is available.
Routing requests would exist as long as they are needed. In the case of normal streams, that would be as long as the stream exists. If it becomes impossible to fulfill the request at some point, the stream gets removed.
Routing requests would have a priority - in the music player's case, the priority would probably be derived from the stream media role. Phone stream routing requests would have a higher priority.
Another example: there's a cellular call on an OMAP4 platform. The cellular adaptation module notices that, and creates two routing requests: "connect input type:cellular_modem to somewhere" and "connect output type:cellular_modem to somewhere". The alsa card would have created, based on the UCM data, an input and an output with label "type:cellular_modem". Those would have very limited routing options: for example, the cellular modem input might be routable only to the earpiece or the main speaker. module-alsa-card would inform the routing system about the limitations, and the routing system would pick the best alternative from the two available choices.
On some other platform where the cellular modem audio interface is directly accessible, "type:cellular_modem" would point to an input or output that is implemented by a normal port. I don't know if UCM provides the information about whether the VoiceCall PCMs implement a real audio interface or are just a hack to enable the in-hardware audio path. If that information is not available, I'd say that it's then something that needs to be fixed.
After the routing system has decided what input will be routed to what output, it will have to somehow make the connection happen. The problem is that only the alsa module knows how to connect the cellular modem to the earpiece. Maybe the input and output structs could have a type that said whether the input or output is a stream, a port, or something custom. If it's something custom, the struct would have a function pointer for doing the connection, otherwise normal stream connection code can be used.
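To make this a bit more concrete, here is a very rough C sketch of what such inputs, outputs and requests might look like. Every name in it is hypothetical -- nothing like this exists in pulseaudio today:

#include <stdint.h>

/* All names hypothetical -- a sketch of the proposed abstraction only. */

typedef enum routing_node_kind {
    ROUTING_NODE_STREAM,  /* backed by a sink input / source output */
    ROUTING_NODE_PORT,    /* backed by a normal device port */
    ROUTING_NODE_CUSTOM   /* e.g. the in-hardware modem path */
} routing_node_kind_t;

typedef struct routing_node routing_node;
struct routing_node {
    uint32_t id;               /* "input id:4" */
    const char *label;         /* "type:cellular_modem" */
    routing_node_kind_t kind;
    /* Only for ROUTING_NODE_CUSTOM: the owning module (module-alsa-card)
     * supplies the code that actually makes the connection happen. */
    int (*connect)(routing_node *input, routing_node *output);
};

typedef struct routing_request {
    routing_node *input;   /* what to connect... */
    routing_node *output;  /* ...and to what; NULL means "somewhere" */
    int priority;          /* e.g. derived from the stream's media role */
} routing_request;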
On Fri, 2012-06-15 at 15:11 +0300, Tanu Kaskinen wrote:
On Fri, 2012-06-15 at 08:23 +0530, Arun Raghavan wrote:
Hello, I've been working on porting PulseAudio to Android on the OMAP4-based Galaxy Nexus, and have recently been looking at the policy bits. The basic porting has been greatly simplified by Wei Feng's work to renew the PulseAudio UCM integration and Liam's help with fixing up some of the UCM config for the Galaxy Nexus. I am, however, facing some trouble mapping how the hardware is used to how UCM presents it (or, perhaps, how the UCM-PA mapping does).
Some simplified background: the devices of interest on the OMAP4 SoC are the main hifi PCM which can be routed to various outputs, the modem PCM which is not used for actual output but to enable use of the modem during calls, and the tones PCM which is intended to be used for playing ringtones, aiui.
Sorry for hijacking the thread. The following wall of text won't help Arun with his immediate problems, but I wanted to share my thoughts about the planned routing system in pulseaudio and how OMAP4-style hardware affects it.
No worries :)
I asked in IRC for a clarification about what "is not used for actual output but to enable use of the modem during calls" means. Arun means that opening the modem playback PCM opens a direct audio path from the modem to the earpiece. Pulseaudio doesn't see the audio data at all. There's a corresponding capture PCM that opens a direct audio path to the other direction.
The OMAP4 audio PCM is used to configure and initiate the transfer of PCM data to and from the MODEM. The audio PCM is basically routed from the CODEC via the ABE to the MODEM and vice versa. The host CPU does not move any PCM data around in this use case (except when another PCM is opened to record the call etc.).
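In userspace the trigger itself is tiny -- something like the following sketch, where the MODEM PCM device number (hw:0,5) and the stream parameters are just assumptions for illustration:

#include <alsa/asoundlib.h>

/* Sketch: opening and triggering the MODEM PCM powers up the in-hardware
 * modem <-> codec path; the CPU never touches a single frame. */
static snd_pcm_t *start_modem_path(void)
{
    snd_pcm_t *pcm;

    if (snd_pcm_open(&pcm, "hw:0,5", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return NULL;

    /* The parameters only configure the in-hardware path. */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 8000, 1, 500000) < 0 ||
        snd_pcm_start(pcm) < 0) {
        snd_pcm_close(pcm);
        return NULL;
    }

    /* Keep the handle open for the call; closing it tears the path down. */
    return pcm;
}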
This sort of hardware causes trouble for the planned routing system, at least as I have envisioned it to behave. My vision has been that the routing logic in pulseaudio would enable and disable ports based on the streams that exist and their properties. In the OMAP4 case, there aren't any streams created when a cellular call starts, and therefore the routing logic doesn't know that it should do something.
This is quite common in the smartphone voice call use case. A lot of phones will not route voice PCM data across the CPU in order to save power. The host CPU will typically set up the call (i.e. mixers, rates, etc.) and then start the call. In this case PA would configure and initiate the call and then sleep. However, PA would still be "active" wrt the voice call, as it would be required to perform any voice call actions like capturing the call or playing music during the call.
Regarding the virtual streams solution, it's still unclear what would trigger the virtual streams to be created. I think they should be created by module-alsa-card when the relevant ports are activated, but what activates the ports, if the routing system only does anything when streams are created? The cellular adaptation module should tell the routing system that "a call is starting, it would be nice to open the call audio path now". I was hoping that the virtual stream solution would avoid adding any further complexity to the routing system, but it seems that the routing system must anyway be aware of the special in-hardware audio paths so that other modules can request them to be activated.
UCM can help here a little, since it can be used to give information about the underlying hardware to PA, e.g. whether there is a sink for the voice call, whether that sink is virtual, etc.
On some other platform where the cellular modem audio interface is directly accessible, "type:cellular_modem" would point to an input or output that is implemented by a normal port. I don't know if UCM provides the information about whether the VoiceCall PCMs implement a real audio interface or are just a hack to enable the in-hardware audio path. If that information is not available, I'd say that it's then something that needs to be fixed.
As mentioned above we can add this to UCM to help with this sort of logic.
Regards
Liam
On Fri, Jun 15, 2012 at 06:28:51PM +0100, Liam Girdwood wrote:
On Fri, 2012-06-15 at 15:11 +0300, Tanu Kaskinen wrote:
This sort of hardware causes trouble for the planned routing system, at least as I have envisioned it to behave. My vision has been that the routing logic in pulseaudio would enable and disable ports based on the streams that exist and their properties. In the OMAP4 case, there aren't any streams created when a cellular call starts, and therefore the routing logic doesn't know that it should do something.
This is quite common in the smartphone voice call use case. A lot of [...]
It's more than common; I'm only aware of one phone design which does route the audio via the CPU during calls (the N900). Anything that can't cope with bypassed in-call audio is going to struggle.
On Fri, 2012-06-15 at 18:34 +0100, Mark Brown wrote:
On Fri, Jun 15, 2012 at 06:28:51PM +0100, Liam Girdwood wrote:
On Fri, 2012-06-15 at 15:11 +0300, Tanu Kaskinen wrote:
This sort of hardware causes trouble for the planned routing system, at least as I have envisioned it to behave. My vision has been that the routing logic in pulseaudio would enable and disable ports based on the streams that exist and their properties. In the OMAP4 case, there aren't any streams created when a cellular call starts, and therefore the routing logic doesn't know that it should do something.
This is quite common in the smartphone voice call use case. A lot of [...]
It's more than common; I'm only aware of one phone design which does route the audio via the CPU during calls (the N900). Anything that can't cope with bypassed in-call audio is going to struggle.
N9 also routes the audio via the CPU. Now you know two phone designs :)
[not sure if this is still interesting to alsa-devel, but keeping the CC anyway]
On Fri, 2012-06-15 at 15:11 +0300, Tanu Kaskinen wrote:
On Fri, 2012-06-15 at 08:23 +0530, Arun Raghavan wrote:
Hello, I've been working on porting PulseAudio to Android on the OMAP4-based Galaxy Nexus, and have recently been looking at the policy bits. The basic porting has been greatly simplified by Wei Feng's work to renew the PulseAudio UCM integration and Liam's help with fixing up some of the UCM config for the Galaxy Nexus. I am, however, facing some trouble mapping how the hardware is used to how UCM presents it (or, perhaps, how the UCM-PA mapping does).
Some simplified background: the devices of interest on the OMAP4 SoC are the main hifi PCM which can be routed to various outputs, the modem PCM which is not used for actual output but to enable use of the modem during calls, and the tones PCM which is intended to be used for playing ringtones, aiui.
Sorry for hijacking the thread. The following wall of text won't help Arun with his immediate problems, but I wanted to share my thoughts about the planned routing system in pulseaudio and how OMAP4-style hardware affects it.
I asked in IRC for a clarification about what "is not used for actual output but to enable use of the modem during calls" means. Arun means that opening the modem playback PCM opens a direct audio path from the modem to the earpiece. Pulseaudio doesn't see the audio data at all. There's a corresponding capture PCM that opens a direct audio path to the other direction.
This sort of hardware causes trouble for the planned routing system, at least as I have envisioned it to behave. My vision has been that the routing logic in pulseaudio would enable and disable ports based on the streams that exist and their properties. In the OMAP4 case, there aren't any streams created when a cellular call starts, and therefore the routing logic doesn't know that it should do something.
Arun has made an "Android policy module", which I suppose activates the VoiceCall profile when a call starts and prevents the associated sink and source from suspending even though they appear to be idle. In the brave new generic routing policy world, the ports/profiles should not be touched by anything other than the generic routing logic. Therefore, Arun's solution will eventually have to be replaced somehow.
As I described, my vision for the routing logic falls short with OMAP4, if there are no streams associated with the cellular audio. So, what's the solution? One solution would be to create virtual streams for the in-hardware audio paths. Another would be to introduce entirely new concepts in pulseaudio for the in-hardware audio paths.
Regarding the virtual streams solution, it's still unclear what would trigger the virtual streams to be created. I think they should be created by module-alsa-card when the relevant ports are activated, but what activates the ports, if the routing system only does anything when streams are created? The cellular adaptation module should tell the routing system that "a call is starting, it would be nice to open the call audio path now". I was hoping that the virtual stream solution would avoid adding any further complexity to the routing system, but it seems that the routing system must anyway be aware of the special in-hardware audio paths so that other modules can request them to be activated.
So, maybe it's not worth the effort to create virtual streams, since they don't actually help with the routing.
After spending a while thinking about this, my proposal is that the routing logic should play with more abstract inputs and outputs than streams and ports. All streams and ports, and the cellular modem that is not directly accessible by Pulseaudio, would be just inputs and outputs from the routing system point of view.
For example, when a music player creates a new playback stream, the server-side stream implementation would create a new input and register it in the routing system. After that, the stream implementation would submit a "routing request" to the routing system: "connect input id:4 to somewhere". The application didn't specify where the stream should be routed, so based on the routing request properties (like media role), the routing system would select the best output that is available.
Routing requests would exist as long as they are needed. In the case of normal streams, that would be as long as the stream exists. If it becomes impossible to fulfill the request at some point, the stream gets removed.
Routing requests would have a priority - in the music player's case, the priority would probably be derived from the stream media role. Phone stream routing requests would have a higher priority.
Another example: there's a cellular call on an OMAP4 platform. The cellular adaptation module notices that, and creates two routing requests: "connect input type:cellular_modem to somewhere" and "connect output type:cellular_modem to somewhere". The alsa card would have created, based on the UCM data, an input and an output with label "type:cellular_modem". Those would have very limited routing options: for example, the cellular modem input might be routable only to the earpiece or the main speaker. module-alsa-card would inform the routing system about the limitations, and the routing system would pick the best alternative from the two available choices.
On some other platform where the cellular modem audio interface is directly accessible, "type:cellular_modem" would point to an input or output that is implemented by a normal port. I don't know if UCM provides the information about whether the VoiceCall PCMs implement a real audio interface or are just a hack to enable the in-hardware audio path. If that information is not available, I'd say that it's then something that needs to be fixed.
This likely means we need to add a mechanism to UCM to signal that a PCM is a virtual/fake PCM. This would also be needed for whatever immediate solution I devise for the problem.
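Something as small as an extra per-verb UCM value would probably do. A hypothetical sketch -- the "PlaybackPCMIsVirtual" key is invented here; no such key exists in UCM today:

#include <alsa/use-case.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: query a per-verb value marking the PCMs as
 * fake/in-hardware-only. The key name is invented for illustration. */
static int verb_pcm_is_virtual(snd_use_case_mgr_t *mgr)
{
    const char *value;
    int is_virtual = 0;

    if (snd_use_case_get(mgr, "PlaybackPCMIsVirtual", &value) == 0) {
        is_virtual = (strcmp(value, "1") == 0);
        free((void *)value);
    }
    return is_virtual; /* default: assume a real audio interface */
}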
After the routing system has decided what input will be routed to what output, it will have to somehow make the connection happen. The problem is that only the alsa module knows how to connect the cellular modem to the earpiece. Maybe the input and output structs could have a type that said whether the input or output is a stream, a port, or something custom. If it's something custom, the struct would have a function pointer for doing the connection, otherwise normal stream connection code can be used.
The system you describe sounds generic enough to catch most things we'd want to do, but I think it makes sense to enumerate the kinds of uses that we are not able to elegantly handle right now. Here's what I have:
1. Virtual devices that represent paths that are not in the CPU domain (like the phone PCMs above)
2. Devices that require a loopback to be useful, like an FM radio or A2DP source
3. Devices that have a loopback path in hardware/DSP (this is a request I've seen, but I'm not aware of hardware that does this)
I'm sure there's more stuff, so please do add to this list.
While (1) is probably easy to take care of cleanly with what we have currently, having some routing awareness in the PA ALSA modules, and an ability to express this to the routing logic, seems necessary to deal with (2) and (3).
-- Arun
On Mon, 2012-07-23 at 09:08 +0530, Arun Raghavan wrote:
- Devices that have a loopback path in hardware/DSP (this is a request I've seen, but I'm not aware of hardware that does this)
Isn't this very common with desktop hardware? For example, my laptop's sound card has a playback volume and mute control for the "Mic" element, which supposedly could be used for looping the mic audio to the speakers or headphones inside the hardware.
The N9 hardware also has several possibilities for loopbacks. We used one for the sidetone in calls (sidetone means playing back your own voice back to you, and it's a mandatory feature in mobile phones, AFAIK).
On Mon, Jul 23, 2012 at 09:33:27AM +0300, Tanu Kaskinen wrote:
The N9 hardware also has several possibilities for loopbacks. We used one for the sidetone in calls (sidetone means playing back your own voice back to you, and it's a mandatory feature in mobile phones, AFAIK).
It's not mandatory but it's certainly pretty standard. Some manufacturers don't do it though.
On Mon, Jul 23, 2012 at 09:08:40AM +0530, Arun Raghavan wrote:
Guys, please delete unneeded context from messages - makes it much easier to find the new content.
- Devices that have a loopback path in hardware/DSP (this is a request I've seen, but I'm not aware of hardware that does this)
Essentially all basebands in cellphones are connected like this; usually on a phone call the processor running Linux can go into suspend. There are cases where the loopback happens in the main SoC, but for thinking about the problem the physical location doesn't usually make a big difference.
Most mobile CODECs also have the ability to route their inputs to their outputs without involving the processor for use in sidetone paths and similar. PCs always used to have the ability to do this too, though I guess it's possible that got dropped at some point.
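Enabling such a path is usually just a matter of flipping a mixer control. A sketch with alsa-lib, where the element name is entirely codec-specific ("Mic" here is just an example):

#include <alsa/asoundlib.h>

/* Sketch: unmute an in-codec mic loopback, e.g. enable_mic_loopback("hw:0").
 * The "Mic" element name and index are hardware-specific assumptions. */
static int enable_mic_loopback(const char *card)
{
    snd_mixer_t *mixer;
    snd_mixer_selem_id_t *sid;
    snd_mixer_elem_t *elem;

    if (snd_mixer_open(&mixer, 0) < 0)
        return -1;
    if (snd_mixer_attach(mixer, card) < 0 ||
        snd_mixer_selem_register(mixer, NULL, NULL) < 0 ||
        snd_mixer_load(mixer) < 0) {
        snd_mixer_close(mixer);
        return -1;
    }

    snd_mixer_selem_id_alloca(&sid);
    snd_mixer_selem_id_set_name(sid, "Mic");  /* codec-specific */
    snd_mixer_selem_id_set_index(sid, 0);

    elem = snd_mixer_find_selem(mixer, sid);
    if (elem)
        /* The *playback* switch of the capture element is the loopback. */
        snd_mixer_selem_set_playback_switch_all(elem, 1);

    snd_mixer_close(mixer);
    return elem ? 0 : -1;
}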
On Fri, Jun 15, 2012 at 08:23:33AM +0530, Arun Raghavan wrote:
Finally, a question also related to modifiers -- is it expected that there will never be a case where a stream that requires no modifier is being played while a stream that does require a modifier also exists? If not, what kind of policy should we have for enabling the modifier or not?
Yes: music playback plus a ringtone, where at least the ringtone would probably be a modifier.
On Fri, 2012-06-15 at 08:23 +0530, Arun Raghavan wrote:
Hello, I've been working on porting PulseAudio to Android on the OMAP4-based Galaxy Nexus, and have recently been looking at the policy bits. The basic porting has been greatly simplified by Wei Feng's work to renew the PulseAudio UCM integration and Liam's help with fixing up some of the UCM config for the Galaxy Nexus. I am, however, facing some trouble mapping how the hardware is used to how UCM presents it (or, perhaps, how the UCM-PA mapping does).
Some simplified background: the devices of interest on the OMAP4 SoC are the main hifi PCM which can be routed to various outputs, the modem PCM which is not used for actual output but to enable use of the modem during calls, and the tones PCM which is intended to be used for playing ringtones, aiui.
Your understanding is correct :)
The first problem is mutual exclusivity of verbs. From what I can understand, verbs are intended to be mutually exclusive -- if you have a HiFi verb and a VoiceCall verb, only one may be used at a time. We have mapped verbs to card profiles, which offer the same guarantee. However, on Android (which is a fair example of the kind of audio policy we might want), the HiFi verb PCMs may be used while the VoiceCall PCMs are open. This is done, for example, to play an end-of-call tone from the CPU while the modem PCMs are still held open. Is there some way to do this with UCM?
A verb is not tied to any specific PCM device here. It's intended to be the highest level of audio use case, and it can configure any audio resource (including multiple cards) to enable the use case action.
So for OMAP4 we would have the HiFi verb for all use cases where we are playing or capturing HiFi-quality audio, and the VoiceCall verb for when we are making a telephone voice call.
UCM provides the "modifier" to allow ad-hoc modifications to the audio use case, like above where we want to play an end-of-call tone. In this case the verb is still VoiceCall, but PA would enable the "PlayTone" modifier to play the tone (UCM can also tell PulseAudio the sink PCM and volume control for the tone data).
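With the alsa-lib UCM API the sequence looks roughly like this (sketch only -- the card name is an assumption and error handling is trimmed):

#include <alsa/use-case.h>
#include <stdlib.h>

static void end_of_call_tone(void)
{
    snd_use_case_mgr_t *mgr;
    const char *tone_pcm;

    if (snd_use_case_mgr_open(&mgr, "SDP4430") < 0) /* card name assumed */
        return;

    /* The verb stays VoiceCall for the duration of the call... */
    snd_use_case_set(mgr, "_verb", "VoiceCall");

    /* ...and the tone is an ad-hoc modification on top of it. */
    snd_use_case_set(mgr, "_enamod", "PlayTone");

    /* UCM tells us which PCM the tone data should be written to. */
    if (snd_use_case_get(mgr, "PlaybackPCM/PlayTone", &tone_pcm) == 0) {
        /* ... open tone_pcm and render the tone here ... */
        free((void *)tone_pcm);
    }

    snd_use_case_set(mgr, "_dismod", "PlayTone");
    snd_use_case_mgr_close(mgr);
}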
The second problem is having separate PCMs for modifiers. In the OMAP4 profile, ringtone playback is exposed via a PlayTone modifier which corresponds to a separate PCM from regular HiFi playback. In the UCM-PA mapping we decided on, modifiers were implemented as device intended roles on a sink, so that when a stream with that role came in, we could enable the modifier, and disable it when such a stream ends. However, this doesn't account for switching the PCM on which playback is occurring. Should we be creating a separate sink for such modifiers (with lower priority, so they're not routed to unless there's a stream with the required role coming in)? Or should we be reopening the PCM for this?
The intention for modifiers that use separate ALSA PCM sinks/sources from the verb is to keep the main stream on the verb PCM source/sink and the modifier stream will use the modifier PCM sink/source (this can be the same PCM for some hardware).
e.g. MP3 will be played to pcm 0 sink and ringtone to pcm 1 sink. The HW will then mix both streams before they are rendered.
Finally, a question also related to modifiers -- is it expected that there will never be a case where a stream that requires no modifier is being played while a stream that does require a modifier also exists? If not, what kind of policy should we have for enabling the modifier or not?
Some verbs will not specify modifiers. In this case we should mix any tones etc within the Pulseaudio stream and render on the main sink/source.
It's intended that we will try and enable a modifier for a verb (if the modifier exists) when a new PA client stream is opened and its type can be matched to a modifier. Otherwise I would just mix the streams in PA and render to the current sink.
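A sketch of what that matching could look like -- the role-to-modifier table here is made up for illustration:

#include <alsa/use-case.h>
#include <string.h>

static const struct {
    const char *role;
    const char *modifier;
} role_map[] = {
    { "ringtone", "PlayTone" },   /* illustrative pairings only */
    { "music",    "PlayMusic" },
};

static const char *modifier_for_role(snd_use_case_mgr_t *mgr, const char *role)
{
    const char **mods;
    const char *found = NULL;
    int n, i;
    size_t j;

    /* "_modifiers" returns {name, comment} pairs for the current verb. */
    n = snd_use_case_get_list(mgr, "_modifiers", &mods);
    if (n <= 0)
        return NULL;

    for (i = 0; i < n && !found; i += 2)
        for (j = 0; j < sizeof(role_map) / sizeof(role_map[0]); j++)
            if (!strcmp(role, role_map[j].role) &&
                !strcmp(mods[i], role_map[j].modifier))
                found = role_map[j].modifier;

    snd_use_case_free_list(mods, n);
    return found; /* caller then does snd_use_case_set(mgr, "_enamod", found) */
}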
Regards
Liam
Cheers, Arun
Liam, Thanks for clearing things up.
On Fri, 2012-06-15 at 18:08 +0100, Liam Girdwood wrote: [...]
The first problem is mutual exclusivity of verbs. From what I can understand, verbs are intended to be mutually exclusive -- if you have a HiFi verb and a VoiceCall verb, only one may be used at a time. We have mapped verbs to card profiles, which offer the same guarantee. However, on Android (which is a fair example of the kind of audio policy we might want), the HiFi verb PCMs may be used while the VoiceCall PCMs are open. This is done, for example, to play an end-of-call tone from the CPU while the modem PCMs are still held open. Is there some way to do this with UCM?
A verb is not tied to any specific PCM device here. It's intended to be the highest level of audio use case, and it can configure any audio resource (including multiple cards) to enable the use case action.
So for OMAP4 we would have the HiFi verb for all use cases where we are playing or capturing HiFi-quality audio, and the VoiceCall verb for when we are making a telephone voice call.
UCM provides the "modifier" to allow ad-hoc modifications to the audio use case, like above where we want to play an end-of-call tone. In this case the verb is still VoiceCall, but PA would enable the "PlayTone" modifier to play the tone (UCM can also tell PulseAudio the sink PCM and volume control for the tone data).
Just as an observation, I think the Android HAL just uses PCM 0 and not PCM 3, but in the PA case, I'll do it the way you describe since that makes more sense.
The second problem is having separate PCMs for modifiers. In the OMAP4 profile, ringtone playback is exposed via a PlayTone modifier which corresponds to a separate PCM from regular HiFi playback. In the UCM-PA mapping we decided on, modifiers were implemented as device intended roles on a sink, so that when a stream with that role came in, we could enable the modifier, and disable it when such a stream ends. However, this doesn't account for switching the PCM on which playback is occurring. Should we be creating a separate sink for such modifiers (with lower priority, so they're not routed to unless there's a stream with the required role coming in)? Or should we be reopening the PCM for this?
The intention for modifiers that use separate ALSA PCM sinks/sources from the verb is to keep the main stream on the verb PCM source/sink and the modifier stream will use the modifier PCM sink/source (this can be the same PCM for some hardware).
e.g. MP3 will be played to pcm 0 sink and ringtone to pcm 1 sink. The HW will then mix both streams before they are rendered.
Okay, so what this means is that we need to extend the current UCM work to create a second, lower-priority sink for each modifier that has a distinct PlaybackPCM, and let the role-based routing pick that in cases where it makes sense. It's good that this doesn't change how we've mapped existing concepts -- just adds to it.
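On the probing side, finding those modifiers would be roughly (sketch, error handling trimmed):

#include <alsa/use-case.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: find every modifier of the current verb whose PlaybackPCM
 * differs from the verb's own, i.e. the ones needing an extra sink. */
static void find_modifier_sinks(snd_use_case_mgr_t *mgr)
{
    const char *verb_pcm, **mods;
    int n, i;

    if (snd_use_case_get(mgr, "PlaybackPCM", &verb_pcm) < 0)
        return;

    /* {name, comment} pairs for the current verb */
    n = snd_use_case_get_list(mgr, "_modifiers", &mods);
    for (i = 0; i + 1 < n; i += 2) {
        char id[128];
        const char *mod_pcm;

        snprintf(id, sizeof(id), "PlaybackPCM/%s", mods[i]);
        /* Falls back to the verb's PlaybackPCM if the modifier has none,
         * in which case the strcmp() below filters it out. */
        if (snd_use_case_get(mgr, id, &mod_pcm) == 0) {
            if (strcmp(mod_pcm, verb_pcm) != 0)
                printf("modifier %s wants its own sink on %s\n",
                       mods[i], mod_pcm);
            free((void *)mod_pcm);
        }
    }
    if (n > 0)
        snd_use_case_free_list(mods, n);
    free((void *)verb_pcm);
}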
Cheers, Arun
On 06/20/2012 07:30 AM, Arun Raghavan wrote:
UCM provides the "modifier" to allow ad-hoc modifications to the audio use case, like above where we want to play an end-of-call tone. In this case the verb is still VoiceCall, but PA would enable the "PlayTone" modifier to play the tone (UCM can also tell PulseAudio the sink PCM and volume control for the tone data).
Just as an observation, I think the Android HAL just uses PCM 0 and not PCM 3, but in the PA case, I'll do it the way you describe since that makes more sense.
Yes, the Galaxy Nexus pretty much only uses PCM 0 -- but that's mostly a reflection of AudioFlinger's design and limitations.
-gabriel
On Wed, 2012-06-20 at 21:50 -0500, Gabriel M. Beddingfield wrote:
On 06/20/2012 07:30 AM, Arun Raghavan wrote:
UCM provides the "modifier" to allow ad-hoc modifications to the audio use case, like above where we want to play an end-of-call tone. In this case the verb is still VoiceCall, but PA would enable the "PlayTone" modifier to play the tone (UCM can also tell PulseAudio the sink PCM and volume control for the tone data).
Just as an observation, I think the Android HAL just uses PCM 0 and not PCM 3, but in the PA case, I'll do it the way you describe since that makes more sense.
Yes, the Galaxy Nexus pretty much only uses PCM 0 -- but that's mostly a reflection of AudioFlinger's design and limitations.
Ah, that's good to know. :)
-- Arun
participants (5): Arun Raghavan, Gabriel M. Beddingfield, Liam Girdwood, Mark Brown, Tanu Kaskinen