[alsa-devel] UCM: How does its API relate to the ALSA lib API?

Hi,
I am a newcomer to the following topics: ALSA, audio in Linux, and audio in embedded Linux. However, I have several years of experience in audio development on other platforms. My next task is developing audio applications in embedded Linux, but I still have a few gaps in my understanding of the audio system architecture. It would be helpful to have answers to the following questions. Many thanks in advance for any hints!
* There is a well-known handshake between an application and alsa-lib, based among others on snd_pcm_open, snd_pcm_hw_params_alloca, snd_pcm_hw_params_any, snd_pcm_hw_params_set_..., snd_pcm_hw_params, snd_pcm_hw_params_get_..., snd_pcm_hw_params_is_..., snd_pcm_hw_params_can_..., snd_pcm_close.
* Does UCM and its API complement the API and handshake above, or is it rather a replacement?
* If it is neither, how is the relation between the UCM API and the alsa-lib API best described?
* What does the entire handshake between application and audio system look like when both the ALSA and UCM APIs are used? (A sketch of my current guess follows below.)
* A use case seems to be just a container for verb, device, and modifier. Does a use case introduce any functionality that is found neither in the verb, nor in the device, nor in the modifier?
* Why does the term / definition of "use case" exist at all? Would verb, device, and modifier not be enough?
* What is the difference between switching to another use case and modifying a use case? Why not switch to another use case in situations where a modifier is used instead?
* The QoS API seems to introduce some redundancy. Whether it is voice, music, or something else can already be gathered from the use case; a use case already reads "voice", "music", or "other". Why is the QoS API necessary?
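To make the third question above concrete, here is a minimal sketch of how I currently imagine the two APIs fitting together: UCM first selects a verb and device and reports which PCM device the application should open, and then the usual hw_params handshake runs unchanged on that PCM. The card name "hw:0", the verb "HiFi", and the device "Speaker" are my assumptions and would have to match whatever UCM configuration is actually installed for the card.

/* Sketch only; assumes a UCM profile exists for the card.
 * Build: gcc ucm_pcm_sketch.c -lasound */
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>
#include <alsa/use-case.h>

int main(void)
{
    snd_use_case_mgr_t *ucm = NULL;
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *params;
    const char *ucm_pcm = NULL;       /* PCM name reported by UCM */
    const char *pcm_name = "default"; /* fallback if UCM is unavailable */
    unsigned int rate = 48000;
    int err, have_ucm;

    /* Step 1 (UCM): open the use-case manager for the card, select a
     * verb and enable a device. */
    have_ucm = (snd_use_case_mgr_open(&ucm, "hw:0") >= 0);
    if (have_ucm) {
        snd_use_case_set(ucm, "_verb", "HiFi");
        snd_use_case_set(ucm, "_enadev", "Speaker");
        /* Ask UCM which PCM device an application should open for
         * playback in this use case. */
        if (snd_use_case_get(ucm, "PlaybackPCM", &ucm_pcm) >= 0 && ucm_pcm)
            pcm_name = ucm_pcm;
    }

    /* Step 2 (alsa-lib): the usual hw_params handshake, unchanged,
     * on the PCM device that UCM pointed us at. */
    if ((err = snd_pcm_open(&pcm, pcm_name, SND_PCM_STREAM_PLAYBACK, 0)) < 0) {
        fprintf(stderr, "snd_pcm_open(%s): %s\n", pcm_name, snd_strerror(err));
        return 1;
    }
    snd_pcm_hw_params_alloca(&params);   /* on-stack allocation */
    snd_pcm_hw_params_any(pcm, params);  /* full configuration space */
    snd_pcm_hw_params_set_access(pcm, params, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, params, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, params, 2);
    snd_pcm_hw_params_set_rate_near(pcm, params, &rate, 0);
    snd_pcm_hw_params(pcm, params);      /* install the chosen configuration */

    /* ... snd_pcm_writei() / snd_pcm_readi() would go here ... */

    snd_pcm_close(pcm);
    free((void *)ucm_pcm);               /* snd_use_case_get() allocates the value */
    if (have_ucm)
        snd_use_case_mgr_close(ucm);
    return 0;
}

If this flow is wrong, for example if UCM is meant to replace part of the handshake rather than precede it, that is exactly the kind of correction I am hoping for.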
Best Regards,
kaweka