On Fri, Apr 29, 2022 at 04:55:18PM -0500, Pierre-Louis Bossart wrote:
Please fix your mail client to word wrap within paragraphs at something substantially less than 80 columns. Doing this makes your messages much easier to read and reply to.
> In the existing ASoC code, there is a fixed mapping between ASoC card and component. A component relies on a ->card pointer that is set during probe; a component cannot be used by or "bound to" more than one card [1].
> This has interesting impacts on how a codec or DSP driver needs to be implemented.
> In the AVS series posted this week, multiple components are registered by the DSP driver, following an interface-based split. In addition there's a second-level split, where the logic is pushed further: the DSP driver partitions the SSP DAIs into different sets of 'dai_driver's, used by different components, which are in turn used by different cards. What is done in these patches is not wrong, and is probably the only solution to support a real-world platform with the existing ASoC code, but are the framework assumptions correct? In this example, the board-level information on which interface is used for what functionality trickles down to the lowest level of the DSP driver implementation.
I'm unclear as to why this is the only mechanism for supporting a platform - it's the only way to achieve multiple cards with the current code, but there's an assumption there that we need multiple cards at all. If we start from the assumption that we have to split a given bit of hardware between cards then it follows that the driver for that hardware is going to have to register multiple components, but that's a bit of an assumption.
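To make the constraint concrete, the single-card binding described above comes down to a check along these lines in the component probe path in sound/soc/soc-core.c (a simplified sketch of the upstream logic, not the exact code):

/*
 * Sketch of the card/component binding rule: a component carries a
 * single ->card pointer, so the first card that probes it owns it and
 * any other card that tries to bind the same component is rejected.
 */
static int soc_probe_component(struct snd_soc_card *card,
                               struct snd_soc_component *component)
{
        if (component->card) {
                if (component->card != card) {
                        dev_err(component->dev,
                                "already bound to card %s\n",
                                component->card->name);
                        return -ENODEV;
                }
                return 0;
        }

        component->card = card;
        /* ... DAPM init, component->driver->probe(), etc. ... */
        return 0;
}

Once ->card has been claimed by the first card to probe the component, any other card that tries to reuse it is refused, which is what pushes device drivers towards registering one component per intended card.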
> I believe this breaks to some extent the 'clean' split between platform and machine driver(s), and it's not quite aligned with the usual notion of register/probe used across frameworks, be it for drivers/clocks/you name it.
This is something which does cause issues for these other frameworks, and it's something we were having some trouble with in ASoC when components were added. Where there are interlinks between parts of a device something needs to know about them and coordinate to avoid or resolve any conflicts in requirements. This was causing issues for ASoC when DAIs didn't know about things like shared clocking - drivers ended up having to invent components for themselves.
> A similar case could happen in a codec driver: if independent functionality such as headset and amplifier support were exposed by separate cards, that would in turn mandate that the codec driver expose N components, each handling different functionality but the same type of DAI.
If a device genuinely had a bunch of completely independent blocks that just happened to be packaged together I think that would be a completely sensible implementation TBH - it's just an MFD at that point, so there's very little reason for the different components to even be the same Linux device other than a presumably shared control interface.
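As a rough sketch of what that can look like in practice, a hypothetical "foo" codec driver could register its headset and amplifier blocks as separate components on the same device (all names below are invented for illustration; this is not an existing driver):

#include <linux/i2c.h>
#include <linux/module.h>
#include <sound/soc.h>

static const struct snd_soc_component_driver foo_headset_component = {
        .name = "foo-headset",
        /* headset-specific controls, DAPM widgets/routes, probe, ... */
};

static struct snd_soc_dai_driver foo_headset_dais[] = {
        { .name = "foo-headset-aif" /* , .playback = ..., .capture = ... */ },
};

static const struct snd_soc_component_driver foo_amp_component = {
        .name = "foo-amp",
};

static struct snd_soc_dai_driver foo_amp_dais[] = {
        { .name = "foo-amp-aif" },
};

static int foo_i2c_probe(struct i2c_client *i2c)
{
        struct device *dev = &i2c->dev;
        int ret;

        /*
         * Two components on the same struct device: the ->card pointer
         * lives in the component, not the device, so each component can
         * be bound to a different card.
         */
        ret = devm_snd_soc_register_component(dev, &foo_headset_component,
                                              foo_headset_dais,
                                              ARRAY_SIZE(foo_headset_dais));
        if (ret)
                return ret;

        return devm_snd_soc_register_component(dev, &foo_amp_component,
                                               foo_amp_dais,
                                               ARRAY_SIZE(foo_amp_dais));
}

Each component then carries its own DAIs, controls and DAPM graph, and each can be picked up by a different card.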
> An alternative approach would be that the DSP driver exposes all the possible DAIs that can be used, and the binding is refined to allow for more flexibility. I think it's really the individual DAI that cannot be used by more than one card.
There's a bit more going on than just that DAIs can't be shared (and indeed one might ask questions about splitting bits of a DAI up, for example do playback and capture *really* need to go to the same card?). It's also that the clocking and routing within the component need to be coordinated, and if multiple cards are talking to the same component both the machine drivers and DAPM are going to need to understand this and handle it in some sensible fashion. At some point you end up with something that has the internals of a single card providing multiple cards to userspace, only with a more complicated implementation.
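For example, nothing stops two machine drivers, each behind its own card, from asking for conflicting clocking on the shared device from their hw_params callbacks (a hypothetical illustration - clock IDs and rates are made up):

#include <sound/soc.h>

static int cardA_hw_params(struct snd_pcm_substream *substream,
                           struct snd_pcm_hw_params *params)
{
        struct snd_soc_pcm_runtime *rtd = substream->private_data;
        struct snd_soc_dai *codec_dai = asoc_rtd_to_codec(rtd, 0);

        /* card A wants a 24.576MHz sysclk from the shared codec */
        return snd_soc_dai_set_sysclk(codec_dai, 0, 24576000,
                                      SND_SOC_CLOCK_IN);
}

static int cardB_hw_params(struct snd_pcm_substream *substream,
                           struct snd_pcm_hw_params *params)
{
        struct snd_soc_pcm_runtime *rtd = substream->private_data;
        struct snd_soc_dai *codec_dai = asoc_rtd_to_codec(rtd, 0);

        /* card B wants 22.5792MHz from the same device at the same time */
        return snd_soc_dai_set_sysclk(codec_dai, 0, 22579200,
                                      SND_SOC_CLOCK_IN);
}

Nothing above the two machine drivers arbitrates between those requests today.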
This means we get back to the assumption we started off with - what are we gaining by partitioning things into cards when that's not really what's going on with the hardware?