[alsa-devel] Master vs. Front/Rear/LFE/... elements
Heya!
Some cards expose 'Master' volume sliders. Others expose separate (stereo) sliders for 'Front', 'Rear' and so on. I have trouble dealing with them properly in PulseAudio:
First of all, on some cards 'Master' seems not to have any effect on the actual analog output; only 'Front' and friends do. Is this a bug or intended behaviour? Can I assume that 'Master' and 'Front' are always independent?
Secondly, I have trouble supporting the 'Front'/'Rear'/'Side'/... elements properly, since they split up the surround channels into separate elements. Now, this is confusing in many ways, even for "amixer", which will then show channels such as "Rear Front Left" and so on, which obviously make no sense. snd_mixer_selem_has_playback_channel() just returns bogus data for these cases. Why are those elements separate anyway? Why aren't they combined into a single multi-channel element? Looking at the APIs I get the idea that the problem is that elements can only control all channels the same way or all independently, which doesn't really match 1:1 onto my multichannel sound cards. However, wouldn't it be possible to use the 'index' value of a selem_id for this? I.e. have a series of controls with the same name but different indexes, which would then implement snd_mixer_selem_has_playback_channel() correctly? I.e. foo,0 would do front-left/right, foo,1 would do rear, foo,2 would do lfe, and so on. I have no clue how this is implemented internally, so I'm not sure how feasible this might be.
Lennart
At Wed, 6 May 2009 19:58:24 +0200, Lennart Poettering wrote:
Heya!
Some cards expose 'Master' volume sliders. Others expose separate (stereo) sliders for 'Front', 'Rear' and so on. I have trouble dealing with them properly in PulseAudio:
First of all, on some cards 'Master' seems not to have any effect on the actual analog output, only 'Front' and friends do. Is this a bug or intended behaviour?
It's an old feature. The AC97 spec gives the "master" volume control only for the front channels. Thus, old boards with AC97 may inherit this policy. (The problem with emu10k1 is partly this.)
Fixing it isn't too difficult with the vmaster stuff on the driver side, but this breaks compatibility, and it's hard to find the real test machines nowadays. In short, we're in a "don't touch a working system unless it gets broken" phase.
Can I assume that 'Master' and 'Front' are always independent?
No.
Secondly, I have trouble supporting the 'Front'/'Rear'/'Side'/... elements properly, since they split up the surround channels into separate elements. Now, this is confusing in many ways, even for "amixer", which will then show channels such as "Rear Front Left" and so on, which obviously make no sense. snd_mixer_selem_has_playback_channel() just returns bogus data for these cases. Why are those elements separate anyway? Why aren't they combined into a single multi-channel element?
That's mainly a historical reason. In the old days, there were no mixer apps that really supported multiple channels, because of the behavior of OSS. A stereo pair is easier to handle for apps.
Looking at the APIs I get the idea that the problem is that elements can only control all channels the same way or all independently, which doesn't really match 1:1 onto my multichannel sound cards. However, wouldn't it be possible to use the 'index' value of a selem_id for this? I.e. have a series of controls with the same name but different indexes, which would then implement snd_mixer_selem_has_playback_channel() correctly? I.e. foo,0 would do front-left/right, foo,1 would do rear, foo,2 would do lfe, and so on. I have no clue how this is implemented internally, so I'm not sure how feasible this might be.
This breaks the existing apps. That's the biggest problem we face now. We can't change the stuff simply because PA isn't the only app using that API.
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
I know designing a generic and fully-working API is pretty difficult, though...
thanks,
Takashi
On Thu, May 07, 2009 at 10:49:22AM +0200, Takashi Iwai wrote:
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
Indeed - for example, something that allowed audio routing to be expressed in the mixing API would be a very big win for embedded systems too.
I know designing a generic and fully-working API is pretty difficult, though...
Certainly non-trivial :)
At Thu, 7 May 2009 11:09:16 +0100, Mark Brown wrote:
On Thu, May 07, 2009 at 10:49:22AM +0200, Takashi Iwai wrote:
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
Indeed - for example, something that allowed audio routing to be expressed in the mixing API would be a very big win for embedded systems too.
Right. But this would also require some changes in the driver side, and it could be complicated.
Actually, we had this kind of information in the time of ALSA 0.5. However, it ended up putting too much burden on the driver code, because one had to write a comprehensive static graph in the driver code itself (generated by hand!). Also, some mixer elements are tightly coupled with certain audio components, but some are pretty abstract and hard to put into a graph. So, we dropped that in the newer API and implemented a straight array of control elements instead.
Nevertheless, a sort of linking would be useful in addition to the current form. For example, coupling between the control element and the PCM stream is missing, too.
Alternatively, we may have external data outside the kernel driver. In that case, the data can be expressed more flexibly (XML? Oh yeah :)
Takashi
On Thu, May 07, 2009 at 12:30:07PM +0200, Takashi Iwai wrote:
Mark Brown wrote:
Indeed - for example, something that allowed audio routing to be expressed in the mixing API would be a very big win for embedded systems too.
Right. But this would also require some changes in the driver side, and it could be complicated.
It'd need to be optional, I think - similar to the approach ASoC is using for the power routing (which is essentially the same information, though it's not as joined up with the mixer controls yet as it should be).
Actually, we had this kind of information in the time of ALSA 0.5. However, it ended up putting too much burden on the driver code, because one had to write a comprehensive static graph in the driver code itself (generated by hand!). Also, some mixer elements are tightly coupled
IME maintaining the data isn't that much of an issue; it's a bit fiddly but not intractable. Of course, I'm always working from comprehensive datasheets, which isn't the case for everyone. If the routing information is optional, people can always skip doing it if it's too hard.
with certain audio components, but some are pretty abstract and hard to put into a graph. So, we dropped that in the newer API and implemented a straight array of control elements instead.
Oh, you'll always need chip-wide controls outside a graph. Not all controls have anything to do with any particular bit of the audio stream - bias control is an obvious example.
Nevertheless, a sort of linking would be useful in addition to the current form. For example, coupling between the control element and the PCM stream is missing, too.
That comes down to the same problem, really.
Alternatively, we may have external data outside the kernel driver. In that case, the data can be expressed more flexibly (XML? Oh yeah :)
I'm not really enthusiastic about that prospect - getting the external data onto the system and making sure it's in sync with the driver seems like an error prone process.
On Thu, 07.05.09 12:30, Takashi Iwai (tiwai@suse.de) wrote:
At Thu, 7 May 2009 11:09:16 +0100, Mark Brown wrote:
On Thu, May 07, 2009 at 10:49:22AM +0200, Takashi Iwai wrote:
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
Indeed - for example, something that allowed audio routing to be expressed in the mixing API would be a very big win for embedded systems too.
Right. But this would also require some changes in the driver side, and it could be complicated.
Actually, we had this kind of information in the time of ALSA 0.5. However, it ended up putting too much burden on the driver code, because one had to write a comprehensive static graph in the driver code itself (generated by hand!). Also, some mixer elements are tightly coupled with certain audio components, but some are pretty abstract and hard to put into a graph. So, we dropped that in the newer API and implemented a straight array of control elements instead.
Nevertheless, a sort of linking would be useful in addition to the current form. For example, coupling between the control element and the PCM stream is missing, too.
Alternatively, we may have external data outside the kernel driver. In that case, the data can be expressed more flexibly (XML? Oh yeah :)
That would actually work for me and I wouldn't even be that disgusted by this usage of XML ;-)
From the PA perspective I actually don't really need the full routing of the sound card exposed. I always want to focus on actual end-user use cases instead of exposing the full mixer capabilities. All I need to know is which elements are in the pipeline from my PCM streams to a specific output, resp. from a specific input to my PCM stream, plus a more high-level idea of what those elements actually mean. I.e. all I need would be an API like this:
int snd_pcm_get_mixer_path(snd_pcm_t *pcm, snd_mixer_selem_id_t path[], unsigned n);
This would simply return an array of mixer element ids that are in the pipeline to the output, resp. from the input, ordered.
Then, a trivial API that allows me to identify what a mixer element's use is would be all I need.
Lennart
On Thu, May 07, 2009 at 02:56:54PM +0200, Lennart Poettering wrote:
From the PA perspective I actually don't really need the full routing of the sound card exposed. I always want to focus on actual end-user use cases instead of exposing the full mixer capabilities. All I need to know is which elements are in the pipeline from my PCM streams to a specific output, resp. from a specific input to my PCM stream, plus a more high-level idea of what those elements actually mean. I.e. all I need would be an API like this:
To a good approximation that is a fairly simple query on the full routing information (modulo any bypass paths, I guess). I'm not sure the drivers would be able to answer your question without also being able to answer any other routing question.
Lennart Poettering wrote:
On Thu, 07.05.09 12:30, Takashi Iwai (tiwai@suse.de) wrote:
At Thu, 7 May 2009 11:09:16 +0100, Mark Brown wrote:
On Thu, May 07, 2009 at 10:49:22AM +0200, Takashi Iwai wrote:
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
Indeed - for example, something that allowed audio routing to be expressed in the mixing API would be a very big win for embedded systems too.
Right. But this would also require some changes in the driver side, and it could be complicated.
Actually, we had this kind of information in the time of ALSA 0.5. However, it ended up putting too much burden on the driver code, because one had to write a comprehensive static graph in the driver code itself (generated by hand!). Also, some mixer elements are tightly coupled with certain audio components, but some are pretty abstract and hard to put into a graph. So, we dropped that in the newer API and implemented a straight array of control elements instead.
Nevertheless, a sort of linking would be useful in addition to the current form. For example, coupling between the control element and the PCM stream is missing, too.
Alternatively, we may have external data outside the kernel driver. In that case, the data can be expressed more flexibly (XML? Oh yeah :)
That would actually work for me and I wouldn't even be that disgusted by this usage of XML ;-)
From the PA perspective I actually don't really need the full routing of the sound card exposed. I always want to focus on actual end-user use cases instead of exposing the full mixer capabilities. All I need to know is which elements are in the pipeline from my PCM streams to a specific output, resp. from a specific input to my PCM stream, plus a more high-level idea of what those elements actually mean. I.e. all I need would be an API like this:
int snd_pcm_get_mixer_path(snd_pcm_t *pcm, snd_mixer_selem_id_t path[], unsigned n);
This would simply return an array of mixer element ids that are in the pipeline to the output, resp. from the input, ordered.
Then, a trivial API that allows me to identify what a mixer element's use is would be all I need.
Lennart
If you guys decide to go the user-space route, please consider including an optional text description of each control element, preferably with localization support.
Thanks,
Pavel.
Mark Brown wrote:
On Thu, May 07, 2009 at 10:49:22AM +0200, Takashi Iwai wrote:
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
Indeed - for example, something that allowed audio routing to be expressed in the mixing API would be a very big win for embedded systems too.
I know designing a generic and fully-working API is pretty difficult, though...
Certainly non-trivial :)
I just dumped my notes here on other systems that have some concept of routing/connectivity/topology
http://bigblen.wordpress.com/2009/05/08/linux-audio-control-topology/
Comments welcome
Still very much a work in progress...
-- Eliot
On Thu, 07.05.09 10:49, Takashi Iwai (tiwai@suse.de) wrote:
It's an old feature. The AC97 spec gives the "master" volume control only for the front channels. Thus, old boards with AC97 may inherit this policy. (The problem with emu10k1 is partly this.)
Fixing it isn't too difficult with the vmaster stuff on the driver side, but this breaks compatibility, and it's hard to find the real test machines nowadays. In short, we're in a "don't touch a working system unless it gets broken" phase.
Breaks compatibility with what exactly? OSS? We have now disabled the OSS compat stuff in F11, so I am not too concerned about this. Also, I'm not sure it would be a big loss if surround sound were only configurable with native ALSA, not OSS.
Can I assume that 'Master' and 'Front' are always independent?
No.
May I assume that they are always dependent?
Or can't I assume anything about the relation between Master and Front? That would suck.
Secondly, I have trouble supporting the 'Front'/'Rear'/'Side'/... elements properly, since they split up the surround channels into separate elements. Now, this is confusing in many ways, even for "amixer", which will then show channels such as "Rear Front Left" and so on, which obviously make no sense. snd_mixer_selem_has_playback_channel() just returns bogus data for these cases. Why are those elements separate anyway? Why aren't they combined into a single multi-channel element?
That's mainly a historical reason. In the old days, there were no mixer apps that really supported multiple channels, because of the behavior of OSS. A stereo pair is easier to handle for apps.
Hmm. Maybe it's time to get rid of this now? As mentioned, in Fedora we have now disabled OSS and didn't get any complaints about that. I mean, it might be worth keeping compat for OSS PCM, but for the OSS mixer?
Surround sound with OSS is not really workable anyway, so I wouldn't be too concerned to break it.
Looking at the APIs I get the idea that the problem is that elements can only control all channels the same way or all independently, which doesn't really match 1:1 onto my multichannel sound cards. However, wouldn't it be possible to use the 'index' value of a selem_id for this? I.e. have a series of controls with the same name but different indexes, which would then implement snd_mixer_selem_has_playback_channel() correctly? I.e. foo,0 would do front-left/right, foo,1 would do rear, foo,2 would do lfe, and so on. I have no clue how this is implemented internally, so I'm not sure how feasible this might be.
This breaks the existing apps. That's the biggest problem we face now. We can't change the stuff simply because PA isn't the only app using that API.
Hmm, could you be more explicit about which apps you think would break? I mean, the ALSA mixer API has always allowed multichannel audio; however, no driver actually made use of that. If a client is using the ALSA mixer API properly it should not break. And if it doesn't use it properly, it's not ALSA's fault...
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
I certainly agree with this. But this doesn't appear to be anything that will happen any time soon, or will it?
If we could agree to fix the surround sound situation within the current API as far as it allows that I'd be a much happier man.
I know designing a generic and fully-working API is pretty difficult, though...
That is absolutely true.
Lennart
At Thu, 7 May 2009 14:46:51 +0200, Lennart Poettering wrote:
On Thu, 07.05.09 10:49, Takashi Iwai (tiwai@suse.de) wrote:
It's an old feature. The AC97 spec gives the "master" volume control only for the front channels. Thus, old boards with AC97 may inherit this policy. (The problem with emu10k1 is partly this.)
Fixing it isn't too difficult with the vmaster stuff on the driver side, but this breaks compatibility, and it's hard to find the real test machines nowadays. In short, we're in a "don't touch a working system unless it gets broken" phase.
Breaks compatibility with what exactly? OSS?
Yes, and old ALSA-native apps without PA.
We have now disabled the OSS compat stuff in F11, so I am not too concerned about this. Also, I'm not sure it would be a big loss if surround sound were only configurable with native ALSA, not OSS.
True, but the question is rather about ALSA-native apps and tests with old hardware.
Can I assume that 'Master' and 'Front' are always independent?
No.
May I assume that they are always dependent?
If both exist, then they should be dependent, and Master should really be Master. If only Master exists and there is no Front, the surrounds could be independent of Master.
Or can't I assume anything about the relation between Master and Front? That would suck.
Secondly, I have trouble supporting the 'Front'/'Rear'/'Side'/... elements properly, since they split up the surround channels into separate elements. Now, this is confusing in many ways, even for "amixer", which will then show channels such as "Rear Front Left" and so on, which obviously make no sense. snd_mixer_selem_has_playback_channel() just returns bogus data for these cases. Why are those elements separate anyway? Why aren't they combined into a single multi-channel element?
That's mainly a historical reason. In the old days, there were no mixer apps that really supported multiple channels, because of the behavior of OSS. A stereo pair is easier to handle for apps.
Hmm. Maybe it's time to get rid of this now? As mentioned, in Fedora we have now disabled OSS and didn't get any complaints about that. I mean, it might be worth keeping compat for OSS PCM, but for the OSS mixer?
Surround sound with OSS is not really workable anyway, so I wouldn't be too concerned to break it.
Looking at the APIs I get the idea that the problem is that elements can only control all channels the same way or all independently, which doesn't really match 1:1 onto my multichannel sound cards. However, wouldn't it be possible to use the 'index' value of a selem_id for this? I.e. have a series of controls with the same name but different indexes, which would then implement snd_mixer_selem_has_playback_channel() correctly? I.e. foo,0 would do front-left/right, foo,1 would do rear, foo,2 would do lfe, and so on. I have no clue how this is implemented internally, so I'm not sure how feasible this might be.
This breaks the existing apps. That's the biggest problem we face now. We can't change the stuff simply because PA isn't the only app using that API.
Hmm, could you be more explicit about which apps you think would break? I mean, the ALSA mixer API has always allowed multichannel audio; however, no driver actually made use of that. If a client is using the ALSA mixer API properly it should not break. And if it doesn't use it properly, it's not ALSA's fault...
kmix surely won't work. GNOME mixer? I don't think it would. Many media players (new and old) support mixer adjustment more or less, and certainly many of them won't work with multiple channels.
Yes, they are (kind of) broken. But they work now. If they don't work after the change, it's called a regression...
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
I certainly agree with this. But this doesn't appear to be anything that will happen any time soon, or will it?
If we could agree to fix the surround sound situation within the current API as far as it allows that I'd be a much happier man.
And I'd likely be an unhappier man who is responsible for fixing the regressions :)
So, from my perspective, it's much more desirable to create another mixer API from scratch. Or, at least, we should add a switch to keep / change the behavior of the current mixer API.
We must be really careful about API changes and silent behavior changes. An addition would be OK, but a change needs a lot of care (no matter what a politician said)...
thanks,
Takashi
On Thu, 07.05.09 15:18, Takashi Iwai (tiwai@suse.de) wrote:
Fixing it isn't too difficult with the vmaster stuff on the driver side, but this breaks compatibility, and it's hard to find the real test machines nowadays. In short, we're in a "don't touch a working system unless it gets broken" phase.
Breaks compatibility with what exactly? OSS?
Yes, and old ALSA-native apps without PA.
Hmm, why exactly should that happen? I mean, the ALSA mixer API has known definitions like snd_mixer_selem_channel_id since about the beginning of time. Just making use of them shouldn't cause breakage in programs.
And even if it does, this would be the 'softest' kind of breakage one can think of. Also, most of the apps in question are Free Software and could hence be fixed easily.
May I assume that they are always dependent?
If both exist, then they should be dependent, and Master should be really Master.
Hmm, ok. So you say "Master" in this case is more "outside" and "Front" is more to the "inside", right? I.e. from the application's PoV, Front comes first and Master follows as the last step, right?
Hmm, could you be more explicit about which apps you think would break? I mean, the ALSA mixer API has always allowed multichannel audio; however, no driver actually made use of that. If a client is using the ALSA mixer API properly it should not break. And if it doesn't use it properly, it's not ALSA's fault...
kmix surely won't work. GNOME mixer? I don't think it would.
The new GNOME mixer links against PA, so it would support it indirectly. ;-)
Many media players (new and old) support mixer adjustment more or less, and certainly many of them won't work with multiple channels.
But to which effect? The worst thing that could happen is that they wouldn't show the Surround channels properly. But usually they wouldn't show that anyway..
Lennart
At Sun, 10 May 2009 00:11:39 +0200, Lennart Poettering wrote:
On Thu, 07.05.09 15:18, Takashi Iwai (tiwai@suse.de) wrote:
Fixing it isn't too difficult with the vmaster stuff on the driver side, but this breaks compatibility, and it's hard to find the real test machines nowadays. In short, we're in a "don't touch a working system unless it gets broken" phase.
Breaks compatibility with what exactly? OSS?
Yes, and old ALSA-native apps without PA.
Hmm, why exactly should that happen? I mean, the ALSA mixer API has known definitions like snd_mixer_selem_channel_id since about the beginning of time. Just making use of them shouldn't cause breakage in programs.
Using it means changing the current mapping. This would likely break things.
And even if it does, this would be the 'softest' kind of breakage one can think of. Also, most of the apps in question are Free Software and could hence be fixed easily.
Oh you are naive :)
May I assume that they are always dependent?
If both exist, then they should be dependent, and Master should be really Master.
Hmm, ok. So you say "Master" in this case is more "outside" and "Front" is more to the "inside", right? I.e. from the application's PoV, Front comes first and Master follows as the last step, right?
Actually, "Front" controls the volume from the front-output pin (left/right), while "Master" influences all output channels: front, surround, CLFE, etc.
Hmm, could you be more explicit about which apps you think would break? I mean, the ALSA mixer API has always allowed multichannel audio; however, no driver actually made use of that. If a client is using the ALSA mixer API properly it should not break. And if it doesn't use it properly, it's not ALSA's fault...
kmix surely won't work. GNOME mixer? I don't think it would.
The new GNOME mixer links against PA, so it would support it indirectly. ;-)
Heh, but the "current" GNOME mixer would be broken.
Many media players (new and old) support mixer adjustment more or less, and certainly many of them won't work with multiple channels.
But to which effect? The worst thing that could happen is that they wouldn't show the Surround channels properly. But usually they wouldn't show that anyway..
I don't buy it unless you take over and fix all future bug reports.
Well, let me make my stance clear: I don't say that the change itself is wrong. But an incompatible change is just bad, at least for the current ALSA-lib. It is no longer a playground for children, and a regression is one of the worst things we can do now.
That's why I suggest beginning with a new API set, or adding a function to switch the behavior. That way, a regression can be avoided pretty easily.
Takashi
2009/5/7 Takashi Iwai tiwai@suse.de:
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
I know designing a generic and fully-working API is pretty difficult, though...
Isn't that a bit of overkill? I don't think the API is the problem. I thought about rewriting the mixer API when I was updating alsamixer.c for dB settings. When you look into it in more detail, there really is no easy way to do a better API than the one we already have and still support all the different sound cards out there. For pro applications, one also has large USB-attached mixer control decks, with sliders etc., and one would have to write code to ensure they link up correctly with the correct mixer controls, i.e. a user moves a slider on the control deck, and it moves the correct slider in alsamixer. I think what we really do need is a global definition of each control. E.g. for the Master Playback control:
1) Must exist.
2) Adds another gain/attenuation after the "front, side, rear etc." controls (i.e. between the front control and the speaker).
3) Should the control be presented as 3 stereo controls, or one 6-channel control?
4) Is the control linked to a PCM, or is it a global control on the output side of the sound chip?
I would like to see a global "Speaker arrangement" setting, where the user can tell ALSA how they have their speakers arranged. E.g. they have 2 speakers, or they have 5.1 speakers, or they have 2 speakers and one set of headphones, and where each is plugged in. The driver would restrict the options depending on the hardware possibilities. The ALSA mixer could then present a consistent view in each case.
Once we have the global definition, we can then fix each driver to comply with it. In some cases we will have to add virtual controls, i.e. create a software master control and use software to make it behave like a real hardware control. I think this global definition is all that PA really needs. They can then implement it using a single method, and then they can highlight all the sound card drivers that need fixing to match it.
I personally think that the user should set the global speaker settings and the master, front, rear etc. controls, and then just leave them. I think applications should only control their own PCM channel volume, and not touch the master volume at all. PA could then further control PCM channel volumes, for example by automatically attenuating the sound of an application when it gets put in the background, if the user so wishes. Applications should not have to talk to the ALSA mixer at all; they should just modify a value associated with the PCM stream they are outputting to adjust its gain/attenuation. I.e.:
int value;
handle = snd_pcm_open();
snd_pcm_set_hwparams/swparams etc.;
snd_pcm_set_gain(handle, &value);
snd_pcm_get_gain(handle, &value);
That should be all the user app or PA needs to do. The user app should not have to talk to the ALSA mixer API at all. Only a single global mixer control app should need to do that.
On Tue, 12 May 2009, James Courtier-Dutton wrote:
2009/5/7 Takashi Iwai tiwai@suse.de:
IMO, the best would be a total rewrite of the current mixer API, as I mentioned some times. Right now it's more complicated than needed, but not powerful enough to handle exceptional cases.
I know designing a generic and fully-working API is pretty difficult, though...
Isn't that a bit of overkill? I don't think the API is the problem. I thought about rewriting the mixer API when I was updating alsamixer.c for dB settings. When you look into it in more detail, there really is no easy way to do a better API than the one we already have and still support all the different sound cards out there. For pro applications, one also has large USB-attached mixer control decks, with sliders etc., and one would have to write code to ensure they link up correctly with the correct mixer controls, i.e. a user moves a slider on the control deck, and it moves the correct slider in alsamixer. I think what we really do need is a global definition of each control. E.g. for the Master Playback control:
1) Must exist.
2) Adds another gain/attenuation after the "front, side, rear etc." controls (i.e. between the front control and the speaker).
3) Should the control be presented as 3 stereo controls, or one 6-channel control?
4) Is the control linked to a PCM, or is it a global control on the output side of the sound chip?
I would like to see a global "Speaker arrangement" setting, where the user can tell ALSA how they have their speakers arranged. E.g. they have 2 speakers, or they have 5.1 speakers, or they have 2 speakers and one set of headphones, and where each is plugged in. The driver would restrict the options depending on the hardware possibilities. The ALSA mixer could then present a consistent view in each case.
Once we have the global definition, we can then fix each driver to comply with it. In some cases we will have to add virtual controls, i.e. create a software master control and use software to make it behave like a real hardware control. I think this global definition is all that PA really needs. They can then implement it using a single method, and then they can highlight all the sound card drivers that need fixing to match it.
I personally think that the user should set the global speaker settings and the master, front, rear etc. controls, and then just leave them. I think applications should only control their own PCM channel volume, and not touch the master volume at all. PA could then further control PCM channel volumes, for example by automatically attenuating the sound of an application when it gets put in the background, if the user so wishes. Applications should not have to talk to the ALSA mixer at all; they should just modify a value associated with the PCM stream they are outputting to adjust its gain/attenuation. I.e.:
int value;
handle = snd_pcm_open();
snd_pcm_set_hwparams/swparams etc.;
snd_pcm_set_gain(handle, &value);
snd_pcm_get_gain(handle, &value);
That should be all the user app or PA needs to do. The user app should not have to talk to the alsa mixer API at all. Only a single global mixer control app should need to do that.
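To make the shape of that proposal concrete, here is a toy model of the per-stream gain calls sketched above. Note that snd_pcm_set_gain()/snd_pcm_get_gain() are a proposal in this thread, not real ALSA functions, so a mock handle stands in for snd_pcm_t:

```c
/* Toy model of the per-stream gain API proposed above. The
 * snd_pcm_set_gain()/snd_pcm_get_gain() names come from the mail;
 * mock_pcm is a stand-in for snd_pcm_t since no such API exists. */
#include <assert.h>

typedef struct {
    int gain;   /* per-stream gain/attenuation value */
} mock_pcm;

/* Set the stream's gain; returns 0 on success, ALSA-style. */
static int mock_pcm_set_gain(mock_pcm *h, const int *value)
{
    h->gain = *value;
    return 0;
}

/* Read the stream's current gain back into *value. */
static int mock_pcm_get_gain(const mock_pcm *h, int *value)
{
    *value = h->gain;
    return 0;
}
```

The point of the sketch: the application only ever touches a value tied to its own stream, never a mixer element.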
In my opinion, it would be better to offer a reduced mixer for playback and capture PCMs. Something like:
int snd_amixer_open(snd_amixer_t **amixer, const char *name, snd_pcm_t *playback_pcm, snd_pcm_t *capture_pcm, int mode);
For playback, 2 controls will be available:
Master - master playback volume
PCM    - PCM stream volume
For capture, the set of controls below will be available:
Capture - capture volume
Source  - source (enum list)
###     - other mixed sources (volume (optional) + switch (required))
If one of the pcm handles is zero, that mixer direction is skipped. If both pcm handles are zero, the global mixer will be available to the app.
The reasons to pass a partial mixer to the app:

- dB ranges
- value change event handling
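The "NULL pcm handle skips that direction" rule above can be modelled in a few lines. This is purely illustrative: snd_amixer_open() is a proposal from this mail, not an existing ALSA call, and the types here are stand-ins:

```c
/* Toy model of the reduced-mixer proposal: which control sets a
 * snd_amixer_open()-style call would expose, depending on which PCM
 * handles are passed. All names and types are stand-ins; the proposed
 * API does not exist in ALSA. */
#include <stddef.h>

typedef struct { int dummy; } pcm_t;   /* stand-in for snd_pcm_t */

enum { CTL_MASTER = 1, CTL_PCM = 2, CTL_CAPTURE = 4, CTL_SOURCE = 8 };

/* Returns the set of controls the reduced mixer would expose. */
static unsigned amixer_controls(const pcm_t *playback, const pcm_t *capture)
{
    unsigned ctls = 0;
    if (playback)
        ctls |= CTL_MASTER | CTL_PCM;       /* playback: Master + PCM */
    if (capture)
        ctls |= CTL_CAPTURE | CTL_SOURCE;   /* capture: Capture + Source */
    if (!playback && !capture)              /* both NULL: global mixer */
        ctls = CTL_MASTER | CTL_PCM | CTL_CAPTURE | CTL_SOURCE;
    return ctls;
}
```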
Jaroslav
-----
Jaroslav Kysela <perex@perex.cz>
Linux Kernel Sound Maintainer
ALSA Project, Red Hat, Inc.
On Tue, May 12, 2009 at 08:47:31AM +0100, James Courtier-Dutton wrote:
Once we have the global definition, we can then fix each driver to comply with it. In some cases we will have to add virtual controls.
...
I personally think that the user should set the global speaker settings, the master, front, rear etc. controls and then just leave them. I think applications should only control their own PCM channel volume, and not touch the master volume at all. PA could then further control PCM channel volumes, by for example automatically attenuating the sound of an application when it gets put in the backround, if the user
This is starting to sound a lot like the stuff that's been talked about for the scenario API in an embedded context - it has a similar idea of having a central application managing the overall setup of the card and giving applications something simpler to work with:
http://www.slimlogic.co.uk/?p=40
We don't want to force the use of per-application volume since that will be too expensive for some systems, but a system that arranges for there to be a standard control that the application should use for things like master volume can support either a per-application control or using the hardware control directly.
It's not always going to be possible to provide a fixed setup in the driver since in systems like smartphones the arrangement of controls can change very substantially depending on software controlled
On Tue, 12.05.09 08:47, James Courtier-Dutton (james.dutton@gmail.com) wrote:
Isn't that a bit of overkill? I don't think the API is the problem. I thought about rewriting the mixer API when I was updating alsamixer.c for dB settings. When you look into it in more detail, there really is no easy way to design a better API than the one we already have and still support all the different sound cards out there. For pro applications, one also has large USB-attached mixer control decks, with sliders etc., and one would have to write code to ensure they link up correctly with the correct mixer controls. I.e. a user moves a slider on the control deck, and it moves the correct slider in alsamixer.

I think what we really do need is a global definition of each control. E.g. for the Master Playback control:
1) Must exist.
2) Adds another gain/attenuation after the "Front", "Side", "Rear" etc. controls (i.e. between the front control and the speaker).
3) Should the control be presented as 3 stereo controls, or one 6-channel control?
4) Is the control linked to a PCM, or is it a global control on the output side of the sound chip?
That would probably not be sufficient, but I do like the approach of exposing a simplified, more focused mixer instead of a more powerful, complicated one. I have done a little bit of research about what I need to expose in PA and what I shouldn't, and where the difficulties in implementing this lie.
1) Definition of Mic vs. Capture is not clear. Usually Mic controls the feedback volume, while Capture controls the actual volume on the ADC. On some USB cards that's different and Mic actually does control the ADC.
2) Input selection is really not unified. There are controls 'Input Source Select', 'Input Source', 'Mic Select', 'Capture Source', that all need to be covered. Some cards even have two of those. The enumerated options they have are usually some form of Mic and Line-In and a few others, but the names vary wildly, i.e. there is 'Mic', 'Microphone', 'iMic', 'i-Mic', 'DMic', 'Int Mic', 'IntMic', 'Internal Mic', 'Internal Microphone', 'Front Mic', 'Front Microphone', 'Mic 1', 'Mic1', ... Also, some of the input selection enums appear as playback enums, not as record enums. Then there are these enums, but also selection of the input source via switches on the elements. From PA's perspective all I want to support is routing from exactly one input source, and have a very generic categorization of the input (i.e. is it Line-In? Is it Mic? But not any more detail).
3) There are various Mic boosts available, and I generally have no idea to what dB factor they relate. I found at least 'Mic Boost', 'Capture Boost', 'Front Mic Boost', 'Mic Boost (+20dB)'. I'd rather see those boosts exposed as volume sliders that go from 0dB to their dB factor in only one step.
4) Output selection isn't much better. There are usually no enums involved. But there are switches involved, and in non-obvious ways: i.e. sometimes the 'Headphone' switch is dependent on the 'Master' switch and sometimes this seems not to be the case.
5) Mapping to jack sensing devices is not clear to me.
6) Not sure what to do about the 'External Amplifier' setting.
7) I need to know the order of volume sliders on the path from the DAC to the speakers.
8) The channels of all sliders/switches need to be identifiable. For example, if Master only exposes two channels it would be good to know which channels this actually influences, i.e. the channel id should be a bitfield, not a single integer.
9) Some cards seem to connect LFE to "Master Mono". It would be good if that slider could be called LFE then.
So, from the PA perspective all I need is a way to enumerate and select inputs. Only one input at a time. I need to enumerate outputs, and select outputs. Only one output at a time. Those inputs/outputs should have generic, parsable names. And I need the sliders that are on the pipeline from the DAC/ADC to the outputs/inputs. Beyond that there are very few additional controls I might need to know anything about.
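The "very generic categorization" asked for in item 2 could be as simple as case-insensitive substring matching over the element name. A minimal sketch; the category set and matching rules are my own assumptions for illustration, not anything ALSA defines:

```c
/* Sketch: collapse the wildly varying input-source names listed in
 * item 2 ("Mic", "iMic", "Int Mic", "Front Microphone", ...) into the
 * coarse categories PA would want. Purely illustrative heuristics. */
#include <string.h>
#include <strings.h>   /* strncasecmp (POSIX) */

enum input_class { INPUT_MIC, INPUT_LINE, INPUT_OTHER };

/* Case-insensitive substring search (strcasestr is nonstandard). */
static const char *find_ci(const char *haystack, const char *needle)
{
    size_t nlen = strlen(needle);
    for (; *haystack; haystack++)
        if (strncasecmp(haystack, needle, nlen) == 0)
            return haystack;
    return NULL;
}

static enum input_class classify_input(const char *name)
{
    if (find_ci(name, "mic"))    /* matches Mic, iMic, DMic, Microphone */
        return INPUT_MIC;
    if (find_ci(name, "line"))   /* matches Line, Line-In, Line In */
        return INPUT_LINE;
    return INPUT_OTHER;
}
```

A real implementation would need a longer rule table, but even this handles most of the spellings quoted above.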
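The boost presentation suggested in item 3 (a slider from 0dB to the boost's dB factor in one step) maps a binary switch onto a two-value volume scale. A sketch; the names and dB values are illustrative assumptions, not an ALSA API:

```c
/* Sketch of item 3: present a binary "Mic Boost (+20dB)"-style switch
 * as a two-step volume slider. Illustrative only. */
typedef struct {
    const char *name;
    int boost_db;      /* gain applied when the switch is on */
} boost_switch;

/* The slider has exactly two steps: 0 -> 0 dB, 1 -> boost_db dB. */
static int boost_slider_to_db(const boost_switch *b, int step)
{
    return step ? b->boost_db : 0;
}

/* Snap a requested dB value to the nearest of the two steps. */
static int boost_db_to_slider(const boost_switch *b, int db)
{
    return db * 2 >= b->boost_db ? 1 : 0;
}
```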
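The bitfield channel id asked for in item 8 could look like the following. This is a sketch of the idea, not the current snd_mixer_selem_* channel API, and the channel names are placeholders:

```c
/* Sketch of item 8: each slider advertises a mask of the positions it
 * influences, so a two-channel 'Master' can still declare that it
 * affects all outputs. Illustrative only. */
enum {
    CHAN_FRONT_LEFT  = 1 << 0,
    CHAN_FRONT_RIGHT = 1 << 1,
    CHAN_REAR_LEFT   = 1 << 2,
    CHAN_REAR_RIGHT  = 1 << 3,
    CHAN_LFE         = 1 << 4,
};

typedef struct {
    const char *name;
    unsigned affected;   /* mask of channels this slider influences */
} slider;

/* Does this slider influence every channel in mask? */
static int slider_affects(const slider *s, unsigned mask)
{
    return (s->affected & mask) == mask;
}
```

With this, PA could walk the sliders on a playback path and know exactly which speaker positions each one touches.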
I personally think that the user should set the global speaker settings, the master, front, rear etc. controls and then just leave them.
I don't think so. I think 'Master' is the one that should usually be controlled, since it most likely controls an analog amp if there is any.
I think applications should only control their own PCM channel volume, and not touch the master volume at all. PA could then further control PCM channel volumes, for example by automatically attenuating the sound of an application when it gets put in the background, if the user so wishes. Applications should not have to talk to the alsa mixer at all. Just modify a value associated with the PCM stream they are outputting to adjust its gain/attenuation. I.e.:

    int value;
    handle = snd_pcm_open(...);
    /* snd_pcm_set_hwparams/swparams etc. */
    snd_pcm_set_gain(handle, &value);
    snd_pcm_get_gain(handle, &value);
Not sure if this is really enough. Recording apps might want to control the Mic Boost too.
Lennart
participants (7)
- Eliot Blennerhassett
- James Courtier-Dutton
- Jaroslav Kysela
- Lennart Poettering
- Mark Brown
- Pavel Hofman
- Takashi Iwai