On 2/20/21 11:55 AM, Jaroslav Kysela wrote:
On 18. 02. 21 at 15:49, Pierre-Louis Bossart wrote:
On 2/18/21 3:44 AM, Jaroslav Kysela wrote:
On 18. 02. 21 at 10:12, shumingf@realtek.com wrote:
- SND_SOC_DAPM_SWITCH("DAC L", SND_SOC_NOPM, 0, 0, &rt1316_sto_dac_l),
- SND_SOC_DAPM_SWITCH("DAC R", SND_SOC_NOPM, 0, 0, &rt1316_sto_dac_r),
Truly, I don't understand the reason to have a separate L/R switch when we can map this functionality to one stereo (multichannel) control.
It's an issue for all ASoC drivers. We should consider being more strict for new ones.
At the same time, we have to recognize that the L/R notion only makes sense at the input to the amplifier. The amplifier may recombine channels to deal with orientation/posture, or simply select a specific input and drive different speakers (e.g. tweeter/woofer). "DAC L" and "DAC R" are often an abuse of language when the system has multi-way speakers. Exhibit A for this is the TigerLake device with 2 RT1316 amplifiers and 4 speakers: L/R doesn't make sense to describe the amplifier outputs or the speaker positions.
My point is a bit different. If the channels are supposed to be used together (which usually means a kind of stereo operation in this case), it does not make much sense to split this control into separate single-channel controls. It's just a waste of resources.
In this case the control affects analog resources and speaker outputs, so I will assume that it's perfectly ok to have a single speaker. Put differently, assuming that the two channels will always be used together is not quite right.
The current patch code:
- one-channel control "DAC L"
- one-channel control "DAC R"

The proposed single control:
- two-channel control "DAC"
From the user space POV, the only difference is the value write operation (both channels are set using one ioctl).
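For context, the two shapes can be sketched with the standard ASoC macros SOC_DAPM_SINGLE / SOC_DAPM_DOUBLE and SND_SOC_DAPM_SWITCH. This is a sketch only, not the actual rt1316 patch: the register name RT1316_DAC_CTRL and the bit positions are hypothetical.

```c
/* Sketch only -- RT1316_DAC_CTRL and the shifts are hypothetical. */

/* Patch as posted: two single-channel switches... */
static const struct snd_kcontrol_new rt1316_sto_dac_l =
	SOC_DAPM_SINGLE("Switch", RT1316_DAC_CTRL, 0, 1, 0);
static const struct snd_kcontrol_new rt1316_sto_dac_r =
	SOC_DAPM_SINGLE("Switch", RT1316_DAC_CTRL, 1, 1, 0);

/* ...attached to two widgets in the DAPM widget table: */
SND_SOC_DAPM_SWITCH("DAC L", SND_SOC_NOPM, 0, 0, &rt1316_sto_dac_l),
SND_SOC_DAPM_SWITCH("DAC R", SND_SOC_NOPM, 0, 0, &rt1316_sto_dac_r),

/* Suggested alternative: one two-channel (stereo) switch. */
static const struct snd_kcontrol_new rt1316_sto_dac =
	SOC_DAPM_DOUBLE("Switch", RT1316_DAC_CTRL, 0, 1, 1, 0);

SND_SOC_DAPM_SWITCH("DAC", SND_SOC_NOPM, 0, 0, &rt1316_sto_dac),
```

With the two-channel form, user space sets both values in a single control write (e.g. `amixer cset name='DAC Switch' on,on`, one ioctl), whereas the two mono switches need two separate writes.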
SDCA mandates that all devices are able to consume stereo data, even if a single speaker is connected. It's useful IMHO to provide controls so that one of the two DACs can be switched off.
There's also a difficult balance to be found between exposing all the capabilities of the device and keeping integration and userspace simple. I2C/I2S and SoundWire devices tend to expose more controls than HDaudio ones, driven by a desire to optimize as much as possible. Some devices are designed with a limited number of controls; others provide hooks to tweak everything in the system by exposing literally thousands of controls. I don't think we should pick and choose which controls to expose; that's the codec vendor's job IMHO (or the device class definition, when standard and applicable).
The problem with the ASoC tree is that many of those controls are not supposed to be configured/used by the end user, but by UCM or another higher-level configuration layer, because they're part of the hw/driver setup.
I think we should classify those controls so that the standard user-space tools can hide them, but that's another problem.
Jaroslav