Yes, the driver currently only models the SOC-facing side, and that follows the 'reverse' clocking scheme:
- The master node always receives the clock on the SOC-facing side, and
produces the clock on the bus-facing side.
- The slave node always receives the clock on the bus-facing side, and
produces the clock on the SOC-facing side.
I thought the SOC would always be connected to a master node, since all bus allocation/configuration requires a bit of intelligence. Does your driver model the case where an SOC would run an ALSA/ASoC driver handling data produced/consumed by an A2B slave? Who would control the A2B master then?
I currently don't see a reason for modelling the bus-facing side in the ASoC topology at all, but of course that could be added.
But for the SOC-facing side on *slave* nodes, the currently implemented logic should be correct, no? Do you think it makes sense to add the bus-side as well?
Likewise, the master has an 'SOC-facing' interface and a bus-facing interface. It *could* be master on both if ASRC were supported. The point is that the bus-facing interface is not a clock slave.
That's right, I need to look into the modes for the master node again. Maybe the check needs to be relaxed on that end.
Your questions are interesting, I am not sure I have answers.
The ASoC clock definitions are usually 'codec-centric', but when a slave acts as a bridge with SOC- and audio-facing interfaces, and the latter connects to, say, an amplifier, then what is the reference point? Or should all segments be considered independent?