On Wed, Sep 08, 2010 at 10:19:45AM +0200, Jaroslav Kysela wrote:
> I understand the motivation to create a layer for phones and similar
> embedded devices, which usually use all streams from a single process.
>
> But what about multiple concurrent processes? The state handling in the
> current implementation is per process, so another process will just
> blindly overwrite the control values set by the first process.
You really need one single thing to own the physical audio device configuration, even on a desktop. On a desktop system that'd normally be PulseAudio, since applications end up talking to PulseAudio rather than to the hardware directly, so only PulseAudio is actually directly concerned with the hardware setup. On an embedded system it'd quite frequently be PulseAudio as well, but obviously this will differ on some systems.
> Another question is how to handle collisions.
This is the core issue that forces some central thing to own the policy decisions on a system-wide basis - something needs to take policy decisions about what's happening.
The focus for UCM is providing a model for thinking about configurations and the mechanics of applying them - the parts of the problem which are well understood and shared by all implementations. Mechanisms for actually deciding what the policy is and dealing with contention for the hardware are very much open questions at the minute, even on desktops, so keeping the solutions in that area separate from the bits that are well understood allows flexibility in these more contentious areas.
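To make the split concrete, here's a rough sketch of the kind of UCM configuration this model leads to - a verb with per-device enable/disable sequences. The syntax follows how UCM files ended up looking, and the control names ('Playback Path' and its values) are purely illustrative, not from any real card:

```
SectionVerb {
	EnableSequence [
		cset "name='Playback Path' SPK"
	]
	DisableSequence [
		cset "name='Playback Path' OFF"
	]
}

SectionDevice."Headphones" {
	EnableSequence [
		cset "name='Playback Path' HP"
	]
	DisableSequence [
		cset "name='Playback Path' SPK"
	]
}
```

Whatever owns policy (PulseAudio or otherwise) then only has to pick a verb and a set of devices; UCM applies the sequences, so the knowledge of which controls to poke stays out of the policy layer entirely.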
Again, assuming that there's a substantial difference between embedded and desktop systems doesn't really reflect the reality of actual systems these days - they're all on a continuum, and in many cases the standardisation of hardware in desktop systems means they will be simpler rather than more complex than embedded systems.