On Mon, May 2, 2022 at 8:06 AM Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com wrote:
On 4/29/22 17:32, Curtis Malainey wrote:
On Fri, Apr 29, 2022 at 2:55 PM Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com wrote:
Hi,
In the existing ASoC code there is a fixed mapping between an ASoC card and a component. A component relies on a ->card pointer that is set during probe, and a component cannot be used by, or "bound to", more than one card [1].
This has interesting impacts on how a codec or DSP driver needs to be implemented.
In the AVS series posted this week, multiple components are registered by the DSP driver, following an interface-based split. There is in addition a second-level split, where the logic is pushed further: the DSP driver partitions the SSP DAIs into different sets of 'dai_driver's used by different components, which are in turn used by different cards. What is done in these patches is not wrong, and is probably the only solution to support a real-world platform with the existing ASoC code, but are the framework assumptions correct? In this example, the board-level information on which interface is used for what functionality trickles down to the lowest level of the DSP driver implementation.
I believe this breaks to some extent the 'clean' split between platform and machine driver(s), and it's not quite aligned with the usual notion of register/probe used across frameworks, be it for drivers/clocks/you name it.
A similar case could happen in a codec driver: if independent functionality such as headset and amplifier support were exposed by separate cards, that would in turn mandate that the codec driver expose N components, each handling different functionality but the same type of DAI.
An alternative approach would be that the DSP driver exposes all the possible DAIs that can be used, and the binding is refined to allow for more flexibility. I think it's really the individual DAI that cannot be used by more than one card.
Would it also be logical to expose the DAIs on the codecs independently or should this be validated on a case by case basis?
Not following the question, sorry.
If we are considering loosening the binding between DAIs and components, I am just curious whether there is any gain for codecs with more than one DAI.
E.g. rt5677 has 6 DAIs; I am just pondering whether it's possible (or even useful) to do this on the codec side as well, so that in theory a single codec could be part of 2 cards.
I figured I would ask on this mailing list if
a) I am not mistaken on the component/card relationship and
Just trying to think of a reason why this would not be true. Are we aware of platforms that have configuration relationships across DAIs? E.g. they use a single clock and must be configured together, so splitting them might cause them to fall out of sync? Otherwise I agree: if DAIs can be handled independently, then I don't see why we should tie them together.
There are restrictions on most platforms, but those restrictions should be expressed by modeling the clocks and by serializing register accesses where required. Splitting the DAIs into different components to expose different cards to userspace without modeling such dependencies is a sure fail indeed. It's an assured fail even if the DAIs are exposed in a single component and used in a single card. One example would be our very own Intel SSP: if you try to configure a shared MCLK with different settings, that will quickly go south.
Curtis
b) if this is by design, or if we want to clarify what a component is and what its restrictions might be.
Thanks for your feedback/comments -Pierre
[1] https://elixir.bootlin.com/linux/latest/source/sound/soc/soc-core.c#L1364