UCM ConflictingDevice/Priority concepts

Jaroslav Kysela perex at perex.cz
Thu Mar 19 19:40:02 CET 2020


On 19. 03. 20 at 15:25, Pierre-Louis Bossart wrote:
> [fixing alsa-devel email and rejoining threads]
> 
> On 3/19/20 4:06 AM, Jaroslav Kysela wrote:
>> On 18. 03. 20 at 22:46, Pierre-Louis Bossart wrote:
>>> Hi,
>>>
>>> Traditionally on most PC or mobile platforms, we have one audio output
>>> that can be routed to either speakers or headphones, and likewise we can
>>> record from either the internal mics or the headset mic. We signal with
>>> UCM that headphones/speakers and internal mic/headset mic conflict, so
>>> that hopefully PulseAudio/CRAS switch auto-magically.
>>>
>>> For SoundWire-based platforms, we typically have a headphone/headset
>>> codec on one link, and one or more amplifiers on the other. Functionally
>>> it's possible to capture from the local mics and the headset mic at the
>>> same time, or play different streams on speakers and headphones. Recent
>>> Intel-based Chromebooks have in theory the same capabilities at the
>>> hardware level even with I2S/TDM + DMIC connections.
>>>
>>> So for UCM, should we use the notion of 'ConflictingDevice' to fall back
>>> to a more traditional single-endpoint user experience, or is this
>>> concept only intended to model hardware restrictions? I just checked
>>> that Chrome/adhd does not seem to use this concept at all, while it's
>>> prevalent in alsa-ucm-conf.
>>>
>>> Or should we instead only use the concept of Playback/CapturePriority,
>>> which is also used in a lot of alsa-ucm-conf files, but again not at all
>>> in Chrome/adhd?
>>>
>>> I did find some UCM files relying on both the ConflictingDevices and
>>> PlaybackPriorities concepts, which seems rather odd/overkill to me.
>>
>> ConflictingDevices/SupportedDevices should be used only if there's a
>> hardware restriction which prevents the simultaneous usage of devices.
>> The application can decide how to use those devices.
>>
>> The priority describes the preference. Usually, headphones have a higher
>> priority than built-in speakers, etc.
> 
> I may be thick on this one, but how would an application use both types
> of information?
> 
> Does it e.g.
> 
> a) revisit the list of all devices currently available when an event occurs
> (uevent card creation, jack detection, etc)

The jack detection / hw mute just handles the device (I/O) availability.
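
For illustration only - a rough, untested sketch (card, verb and device names
are just examples) of where an application would pick the jack information up
from UCM; "JackControl" / "JackHWMute" are the value names used in
alsa-ucm-conf, and the actual availability monitoring would be done via the
ctl event API:

/* Rough sketch (untested, example names): read the jack-related values
 * of a device so the application knows which kcontrol to watch for
 * availability changes. */
#include <stdio.h>
#include <stdlib.h>
#include <alsa/use-case.h>

int main(void)
{
    snd_use_case_mgr_t *mgr;
    const char *jack, *hwmute;

    if (snd_use_case_mgr_open(&mgr, "hw:0") < 0)
        return 1;
    snd_use_case_set(mgr, "_verb", "HiFi");

    if (snd_use_case_get(mgr, "JackControl/Headphones", &jack) == 0) {
        printf("watch kcontrol '%s' for Headphones availability\n", jack);
        free((void *)jack);
    }
    if (snd_use_case_get(mgr, "JackHWMute/Headphones", &hwmute) == 0) {
        printf("JackHWMute value: %s\n", hwmute);
        free((void *)hwmute);
    }

    snd_use_case_mgr_close(mgr);
    return 0;
}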

> b) pick the device with the highest priority for the 'default' stream

Yes, but the priority is just a hint for the application. The user may 
override this. It's another layer.
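
For illustration, a rough untested sketch of how an application might use the
priority hint to pick the default device; the card name, verb and device names
below are just examples, and a real sound server would enumerate them via the
UCM list APIs first:

/* Rough sketch (untested, example names): choose the default playback
 * device by comparing the PlaybackPriority values.  The priority is only
 * a hint; user configuration may override the result in a higher layer. */
#include <stdio.h>
#include <stdlib.h>
#include <alsa/use-case.h>

int main(void)
{
    snd_use_case_mgr_t *mgr;
    const char *devs[] = { "Headphones", "Speaker" };
    const char *best = NULL;
    long best_prio = -1;
    int i;

    if (snd_use_case_mgr_open(&mgr, "hw:0") < 0)
        return 1;
    if (snd_use_case_set(mgr, "_verb", "HiFi") < 0)
        goto out;

    for (i = 0; i < 2; i++) {
        char id[64];
        const char *value;
        long prio;

        snprintf(id, sizeof(id), "PlaybackPriority/%s", devs[i]);
        if (snd_use_case_get(mgr, id, &value) < 0)
            continue;               /* no priority defined for this device */
        prio = atol(value);
        free((void *)value);
        if (prio > best_prio) {
            best_prio = prio;
            best = devs[i];
        }
    }
    if (best)
        printf("default playback device: %s (priority %ld)\n", best, best_prio);
out:
    snd_use_case_mgr_close(mgr);
    return 0;
}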

> c) allow for simultaneous use of devices not marked as 'Conflicting',
> e.g. use the internal microphone for assistant while using the headset
> mic for a call as suggested by Dylan.

Yes.

> In other words the priority is the first key, and additional devices are
> filtered with the ConflictingDevice information.
> 
> Did I get this right?

Basically, yes.
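
Again just a rough, untested sketch (example names) of the filtering step -
check the _conflictingdevs list before enabling a second device; devices not
listed as conflicting may be used simultaneously, and whether to actually do
so is the application's (or user's) decision:

/* Rough sketch (untested, example names): may "Mic" be enabled while
 * "Headset" is already enabled?  Uses the UCM _conflictingdevs list
 * for the current verb. */
#include <stdio.h>
#include <string.h>
#include <alsa/use-case.h>

static int conflicts(snd_use_case_mgr_t *mgr, const char *dev, const char *other)
{
    char id[64];
    const char **list;
    int i, n, hit = 0;

    snprintf(id, sizeof(id), "_conflictingdevs/%s", dev);
    n = snd_use_case_get_list(mgr, id, &list);
    if (n <= 0)
        return 0;                   /* no conflicts declared */
    for (i = 0; i < n; i++)
        if (strcmp(list[i], other) == 0)
            hit = 1;
    snd_use_case_free_list(list, n);
    return hit;
}

int main(void)
{
    snd_use_case_mgr_t *mgr;

    if (snd_use_case_mgr_open(&mgr, "hw:0") < 0)
        return 1;
    snd_use_case_set(mgr, "_verb", "HiFi");
    if (!conflicts(mgr, "Mic", "Headset") && !conflicts(mgr, "Headset", "Mic"))
        printf("Mic and Headset can be enabled at the same time\n");
    snd_use_case_mgr_close(mgr);
    return 0;
}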

> 
>>
>> In my opinion, it's not UCM's job to decide whether the application will
>> use one or multiple devices. The application must decide. That's another,
>> upper usage / abstraction layer.
> 
> I tend to agree, but I wanted to make sure the use of
> 'ConflictingDevices' was not expected outside of true hardware limitations.
> 
>>
>> Also, we need to consider this to have the whole picture:
>>
>> Tanu (the PulseAudio maintainer) also raised a good question: how to
>> ensure that the stream can be re-used for multiple devices. Currently,
>> PA does not re-open the PCM device when the PCM device name and
>> parameters are the same for the switched devices. I think the UCM
>> specification is missing something to resolve this requirement.
>> Usually, the stream transfer mechanism is separate from the routing
>> control, but I can imagine hardware which will need extra setup for
>> the streaming (not routing) when the devices are switched.
>>
>> I think that adding something like "PlaybackStream" next to "PlaybackPCM"
>> for the stream identification might be sufficient to cover those
>> cases. So, keep the "PlaybackPCM" usage and, if "PlaybackStream" exists,
>> use its value to determine the stream identification. The same applies
>> to the capture direction, of course.
> 
> I am not sure I understand the notion of stream and stream transfer. Is
> there a pointer to this so that I could understand the problem statement?

Example:

Device1:
   ... some enable sequence ...
   PlaybackPCM "hw:0"
   PlaybackStream "DAC1"

Device2:
   ... another enable sequence ...
   PlaybackPCM "hw:0"
   PlaybackStream "DAC2"

In this case, the PCM names for alsa-lib are the same, but there is a 
different setup to route the signal to a different DAC, and this setup cannot 
be executed without re-opening the PCM (while the PCM "hw:0" is active).
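
A rough, untested sketch of how a client could use this to decide about the
re-open; note that "PlaybackStream" is only the value proposed above - it is
not part of the current UCM specification - and the card, verb and device
names are just examples:

/* Rough sketch (untested): re-open only when the PCM name or the stream
 * identification differs between the two devices. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <alsa/use-case.h>

/* Fetch a per-device value in the current verb; NULL when not defined. */
static const char *get_value(snd_use_case_mgr_t *mgr, const char *name,
                             const char *device)
{
    char id[64];
    const char *value;

    snprintf(id, sizeof(id), "%s/%s", name, device);
    if (snd_use_case_get(mgr, id, &value) < 0)
        return NULL;
    return value;                       /* caller frees */
}

static bool same(const char *a, const char *b)
{
    return (a == NULL || b == NULL) ? a == b : strcmp(a, b) == 0;
}

/* True when the PCM must be closed and re-opened for the new device. */
static bool need_reopen(snd_use_case_mgr_t *mgr, const char *from, const char *to)
{
    const char *pcm1 = get_value(mgr, "PlaybackPCM", from);
    const char *pcm2 = get_value(mgr, "PlaybackPCM", to);
    const char *str1 = get_value(mgr, "PlaybackStream", from);
    const char *str2 = get_value(mgr, "PlaybackStream", to);
    bool reopen = !same(pcm1, pcm2) || !same(str1, str2);

    free((void *)pcm1); free((void *)pcm2);
    free((void *)str1); free((void *)str2);
    return reopen;
}

int main(void)
{
    snd_use_case_mgr_t *mgr;

    if (snd_use_case_mgr_open(&mgr, "hw:0") < 0)
        return 1;
    snd_use_case_set(mgr, "_verb", "HiFi");
    printf("re-open needed: %s\n",
           need_reopen(mgr, "Speaker", "Headphones") ? "yes" : "no");
    snd_use_case_mgr_close(mgr);
    return 0;
}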

						Jaroslav

-- 
Jaroslav Kysela <perex at perex.cz>
Linux Sound Maintainer; ALSA Project; Red Hat, Inc.

