Mark Brown <broonie@kernel.org> writes:
> On Tue, Oct 25, 2022 at 12:17:25AM +0100, Aidan MacDonald wrote:
> > Mark Brown <broonie@kernel.org> writes:
> > > We already have clock bindings, if we need to configure clocks we should be using those to configure them.
> > The existing clock bindings are only useful for setting rates, and .set_sysclk() does more than that. See my reply to Krzysztof if you want an explanation; check the nau8821 or tas2552 codecs for an example of the kind of thing I'm talking about.
> I thought there was stuff for muxes, but in any case if you are adding a new binding here you could just as well add one to the clock bindings.
I picked those codecs at random, but they are fairly representative: often a codec can derive its system clock from one of the I2S bus clocks (e.g. BCLK), take it directly from an external input, or generate it with an internal PLL. In cases like that you need to configure the codec with .set_sysclk() to select the right input. Many card drivers need to do this; it's just as important as .set_fmt() or .hw_params().
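
To make that concrete, here is a minimal sketch of the kind of card driver code I mean. The CODEC_SYSCLK_* IDs below are hypothetical stand-ins rather than the real nau8821/tas2552 defines, and the rtd helper names vary a bit between kernel versions, but the shape is what matters: the clk_id argument to snd_soc_dai_set_sysclk() selects *which* input feeds the codec's system clock, and a rate-only clock binding can't express that.

#include <sound/pcm_params.h>
#include <sound/soc.h>

/* Hypothetical codec sysclk source IDs, not real driver defines. */
#define CODEC_SYSCLK_MCLK	0	/* external master clock pin */
#define CODEC_SYSCLK_BCLK	1	/* derive sysclk from the bit clock */

static int my_card_hw_params(struct snd_pcm_substream *substream,
			     struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
	struct snd_soc_dai *codec_dai = asoc_rtd_to_codec(rtd, 0);
	unsigned int mclk = params_rate(params) * 256; /* e.g. fixed 256*fs */

	/* Select the MCLK input and tell the codec what rate it runs at. */
	return snd_soc_dai_set_sysclk(codec_dai, CODEC_SYSCLK_MCLK,
				      mclk, SND_SOC_CLOCK_IN);
}
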
> There is a strong case for saying that all the clocking in CODECs might fit into the clock API, especially given the whole DT thing.
The ASoC DAI APIs don't speak "struct clk", and teaching them to seems (to me) like a prerequisite before we can think about doing anything with the clock API here.
Even if ASoC began to use the clock API for codec clocking, it's not clear how you'd maintain backward compatibility with the existing simple-card bindings. You'd have to walk every DAI and mimic the effect of "snd_soc_dai_set_sysclk(dai, 0, freq, dir)", because there could be a device tree somewhere relying on it.
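
Roughly this, in other words (a sketch only, simplified from what simple-card's hw_params callback does today, not the actual code):

#include <linux/errno.h>
#include <sound/soc.h>

/*
 * Legacy behaviour that would have to be preserved: whatever the new
 * clock API plumbing looks like, old device trees still expect every
 * DAI to see set_sysclk(dai, 0, freq, dir), with the rate derived
 * from "mclk-fs".
 */
static int compat_set_sysclks(struct snd_soc_pcm_runtime *rtd,
			      unsigned int mclk)
{
	struct snd_soc_dai *dai;
	int i, ret;

	/* Codec DAIs consume the clock ... */
	for_each_rtd_codec_dais(rtd, i, dai) {
		ret = snd_soc_dai_set_sysclk(dai, 0, mclk, SND_SOC_CLOCK_IN);
		if (ret && ret != -ENOTSUPP)
			return ret;
	}

	/* ... and CPU DAIs drive it. */
	for_each_rtd_cpu_dais(rtd, i, dai) {
		ret = snd_soc_dai_set_sysclk(dai, 0, mclk, SND_SOC_CLOCK_OUT);
		if (ret && ret != -ENOTSUPP)
			return ret;
	}

	return 0;
}
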
So... given you're already stuck maintaining .set_sysclk() behavior forever, is there much harm in exposing the sysclock ID to the DT?