About Cleanup ASoC
Hi ASoC
Cc Mark, Pierre-Louis, Cezary
I am very interested in cleaning up ASoC. I think it is still very complex, and thus not flexible. I'm thinking that we want to clean up...
- Component with multi Card connection
- fixed playback vs capture
- fixed CPU vs Codec
- DPCM connection vs normal connection
About the first topic, I guess the biggest reason why we have this limitation is the connections. A text-based diagram is a little difficult, but if my understanding is correct, the current connections are...
card <-> rtd <-> rtd <-> rtd <-> ...
 |        |
 +--------|-----+---------------+------------+ (A)
          |     |               |            |
          |     component  ->   component -> component -> ...
          |     (CPU)           (Codec)      (Platform)
          |       |               |
          +---> dai -> dai -> dai -> dai -> ...
Card has a "component list", and Component has a pointer to Card (A). This creates the limitation that blocks multiple Card connections, I think.
One note here is that "Platform" doesn't have "DAI".
So if rtd had some other connector list instead of "dai" (let's say an "ep" list here for now), and we removed the Card <-> Component connection, I think the connections could become simple and the issue would be gone. What do you think?
card <-> rtd <-> rtd <-> rtd <-> ...
 |        |
 |       dai    dai    dai    dai
 |        |      |      |      |
 +-----> ep --> ep --> ep --> ep -> ep -> ...
          |             |           |
      component     component   component
      (CPU)         (Codec)     (Platform)
Here, ep is like this
struct rtd_entry_point {
	struct list_head list;	// list for rtd
	struct snd_soc_pcm_runtime *rtd;
	struct snd_soc_component *component;
	struct snd_soc_dai *dai;
};
Current for_each_xxx() can be like this
for_each_rtd_components()  = for_each_rtd_ep() + ep_to_component()
for_each_rtd_dais()        = for_each_rtd_ep() + ep_to_dai()
for_each_card_components() = for_each_card_rtds() + for_each_rtd_components()
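For illustration, a minimal sketch of what such helpers might look like. This is not existing ASoC code; the ep_list field and the ep_to_*() names are assumptions based on the struct above:

/* hypothetical sketch, not existing ASoC API */
#define for_each_rtd_ep(rtd, ep)				\
	list_for_each_entry(ep, &(rtd)->ep_list, list)

static inline struct snd_soc_component *
ep_to_component(struct rtd_entry_point *ep)
{
	return ep->component;
}

static inline struct snd_soc_dai *ep_to_dai(struct rtd_entry_point *ep)
{
	return ep->dai;
}

/* for_each_rtd_components() could then be written as */
#define for_each_rtd_components(rtd, ep, component)		\
	for_each_rtd_ep(rtd, ep)				\
		if (((component) = ep_to_component(ep)))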
I guess a Component could have multiple Card connections with this idea, but what do you think?
Maybe I'm misunderstanding and/or missing something, because ASoC is complex enough :) Please let me know if I am misunderstanding or missing something.
Thank you for your help !!
Best regards
---
Kuninori Morimoto
On 5/24/2022 6:40 AM, Kuninori Morimoto wrote:
Hi ASoC
Cc Mark, Pierre-Louis, Cezary
I am very interested in cleaning up ASoC. I think it is still very complex, and thus not flexible. I'm thinking that we want to clean up...
- Component with multi Card connection
- fixed playback vs capture
- fixed CPU vs Codec
- DPCM connection vs normal connection
About the first topic, I guess the biggest reason why we have this limitation is the connections.
(...)
This creates the limitation that blocks multiple Card connections, I think.
Hi,
I would like to understand the problem here before I start to discuss potential solutions. You seem to be saying that there is some kind of card<->component connection limitation? Is there a specific use case you have in mind?
Personally I don't see a reason for connecting a component to multiple cards. Even if it were possible, it would soon become problematic when you, for example, try to decide which card controls the clock on the component. As I've written, I would like to first see some use case definition before jumping into a rewrite.
Best regards, Amadeusz
On 2022-05-24 1:17 PM, Amadeusz Sławiński wrote:
On 5/24/2022 6:40 AM, Kuninori Morimoto wrote:
Hi ASoC
Cc Mark, Pierre-Louis, Cezary
I am very interested in cleaning up ASoC. I think it is still very complex, and thus not flexible. I'm thinking that we want to clean up...
- Component with multi Card connection
- fixed playback vs capture
- fixed CPU vs Codec
- DPCM connection vs normal connection
About the first topic, I guess the biggest reason why we have this limitation is the connections.
(...)
This creates the limitation that blocks multiple Card connections, I think.
Hi,
I would like to understand the problem here before I start to discuss potential solutions. You seem to be saying that there is some kind of card<->component connection limitation? Is there a specific use case you have in mind?
Personally I don't see a reason for connecting a component to multiple cards. Even if it were possible, it would soon become problematic when you, for example, try to decide which card controls the clock on the component. As I've written, I would like to first see some use case definition before jumping into a rewrite.
Hello,
Thanks for your input Kuninori! Some questions first though :) On top of what Amadeo already highlighted, I'd like to understand what the three points below mean:
- fixed playback vs capture
- fixed CPU vs Codec
- DPCM connection vs normal connection
Especially the first two - what does 'fixed' stand for here?
Regards, Czarek
Hi Amadeusz, Cezary
Thank you for your feedback.
I would like to understand the problem here before I start to discuss potential solutions. You seem to be saying that there is some kind of card<->component connection limitation? Is there a specific use case you have in mind?
Personally I don't see a reason for connecting a component to multiple cards. Even if it were possible, it would soon become problematic when you, for example, try to decide which card controls the clock on the component. As I've written, I would like to first see some use case definition before jumping into a rewrite.
Sorry, my head was biased. I thought people were thinking the same things as me.
In my case, my CPU Component has many channels. We have a "basic board" which has a simple sound system; it uses CPU ch0, for example.
And the customer can expand it by using an "expansion board", which can use CPU ch1 or later channels.
+-- basic board --------+
|+--------+             |
|| CPU ch0| <--> CodecA |
||     ch1| <-+         |
|+--------+   |         |
+-------------|---------+

+-- expansion board ----+
|             |         |
|             +-> CodecB|
+-----------------------+
Both sounds use the same CPU Component, but different Codecs.
I'm not sure how to count the Cards, but intuitively, "basic board sound" is the 1st card and "expansion board sound" is the 2nd card for us.
But because we can't connect the same Component (= CPU) to different Cards, we need to merge both sounds into one Card. We can do that, but it is not intuitive, and it needs overridden settings.
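As a sketch of what that merge means in code (all names below are made up, and the cpus/codecs/platforms entries are omitted), the two logically separate sounds end up in one snd_soc_card because both links share one CPU component:

/* hypothetical sketch: two independent "sounds" forced into one card */
static struct snd_soc_dai_link merged_links[] = {
	{
		.name = "basic-ch0",		/* CPU ch0 <-> CodecA */
	},
	{
		.name = "expansion-ch1",	/* CPU ch1 <-> CodecB */
	},
};

static struct snd_soc_card merged_card = {
	.name		= "basic+expansion",	/* one card, two sounds */
	.dai_link	= merged_links,
	.num_links	= ARRAY_SIZE(merged_links),
};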
About Power: our CPU doesn't have DAPM, and power itself is controlled via PM runtime, so we don't have to worry about the multi-card case. About Clock: our CPU clock is controlled per DAI (= not per Component), and there are no loopbacks. So at least in our case, having a multi-Card connection for the CPU Component poses no problem and is useful (at least someone had been doing it by removing the limitation locally).
I had thought similar things were happening elsewhere.
Yes, "my case" is not same as "any cases". It's impossible to make "perfect" one from the beginning, so I thought start from small, and expand step-by-step (= complex Power/Clock). My suggestion was for first step.
Thanks for your input Kuninori! Some questions first though :) On top of what Amadeo already highlighted, I'd like to understand what the three points below mean:
- fixed playback vs capture
- fixed CPU vs Codec
- DPCM connection vs normal connection
Especially the first two - what does 'fixed' stand for here?
Oops, sorry for my lack of explanation. About the 1st one (= playback vs capture): current ASoC assumes a DAI has both playback and capture, but sometimes it is playback only or capture only. We are handling it, but I think it could be more flexible (or could we have multiple playbacks or multiple captures? I'm not sure). And in the Codec2Codec case, this naming is confusing.
From a HW point of view, maybe Transmit/Receive is more understandable?
Similar things happen in the 2nd case (= CPU vs Codec). ASoC assumes a link has both a CPU and a Codec. I think we could handle it more flexibly. Codec2Codec is a good example.
And the topic goes back and forth, but if we could handle a "normal connection" as one kind of "DPCM connection", we wouldn't need to have both CPU and Codec, because no one wants to have a "dummy CPU" or "dummy Codec".
Thank you for your help !!
Best regards
---
Kuninori Morimoto
On 5/24/22 21:54, Kuninori Morimoto wrote:
Hi Amadeusz, Cezary
Thank you for your feedback.
I would like to understand the problem here before I start to discuss potential solutions. You seem to be saying that there is some kind of card<->component connection limitation? Is there a specific use case you have in mind?
Personally I don't see a reason for connecting a component to multiple cards. Even if it were possible, it would soon become problematic when you, for example, try to decide which card controls the clock on the component. As I've written, I would like to first see some use case definition before jumping into a rewrite.
Sorry, my head was biased. I thought people were thinking the same things as me.
In my case, my CPU Component has many channels. We have a "basic board" which has a simple sound system; it uses CPU ch0, for example.
And the customer can expand it by using an "expansion board", which can use CPU ch1 or later channels.
did you mean 'channels' or 'streams'?
I think it's very hard to expose a different card for content that needs to start at the same time and remain phase-synchronized.
If however the 'ch0' and 'ch1' can be independent content then this configuration would make sense indeed, and it would be quite similar to the ideas on the Intel side.
One example is e.g. Chromebooks, where we typically have a stream for the headphone/headset, and another for amplifiers. But there are cases where the jack is not present and the headphone is not populated on a board, so we end up having multiple versions of the card, with and without the headphone path. It could be argued that the headphone could be handled on a different card. That information is however not known to the 'cpu' parts; it's really a board-level decision.
+-- basic board --------+
|+--------+             |
|| CPU ch0| <--> CodecA |
||     ch1| <-+         |
|+--------+   |         |
+-------------|---------+

+-- expansion board ----+
|             |         |
|             +-> CodecB|
+-----------------------+
Both sounds use the same CPU Component, but different Codecs.
I'm not sure how to count the Cards, but intuitively, "basic board sound" is the 1st card and "expansion board sound" is the 2nd card for us.
But because we can't connect the same Component (= CPU) to different Cards, we need to merge both sounds into one Card. We can do that, but it is not intuitive, and it needs overridden settings.
Hi Pierre-Louis
Thank you for your feedback
+-- basic board --------+
|+--------+             |
|| CPU ch0| <--> CodecA |
||     ch1| <-+         |
|+--------+   |         |
+-------------|---------+

+-- expansion board ----+
|             |         |
|             +-> CodecB|
+-----------------------+
(snip)
I'm not sure how to count the Cards, but intuitively, "basic board sound" is the 1st card and "expansion board sound" is the 2nd card for us.
(snip)
did you mean 'channels' or 'streams'?
(snip)
If however the 'ch0' and 'ch1' can be independent content then this configuration would make sense indeed, and it would be quite similar to the ideas on the Intel side.
These are independent sounds. It works for us if we merge them into one Card, but... I'm OK and will do nothing if no one wants multi-Card connections, but I'm happy to work on it if someone needs it.
Thank you for your help !!
Best regards
---
Kuninori Morimoto
On 5/26/2022 4:13 AM, Kuninori Morimoto wrote:
Hi Pierre-Louis
Thank you for your feedback
+-- basic board --------+
|+--------+             |
|| CPU ch0| <--> CodecA |
||     ch1| <-+         |
|+--------+   |         |
+-------------|---------+

+-- expansion board ----+
|             |         |
|             +-> CodecB|
+-----------------------+
(snip)
I'm not sure how to count the Cards, but intuitively, "basic board sound" is the 1st card and "expansion board sound" is the 2nd card for us.
(snip)
did you mean 'channels' or 'streams'?
(snip)
If however the 'ch0' and 'ch1' can be independent content then this configuration would make sense indeed, and it would be quite similar to the ideas on the Intel side.
These are independent sounds. It works for us if we merge them into one Card, but... I'm OK and will do nothing if no one wants multi-Card connections, but I'm happy to work on it if someone needs it.
Well, this sounds like what we did in avs, namely we split the cards based on endpoints. The main driver decides what endpoints to expose, based on ACPI detection and ACPI match rules, see [1]. For example, in our model it is possible to have 2 separate i2s codecs connected, each having its own card. From the avs driver we expose a snd_soc_dai_driver with 2 streams (playback and capture), see [2], and then tell the machine driver to just connect them to the codec [3] - look for "SSP%d Pin", "ssp%d Tx" and "ssp%d Rx". Connections between "ssp%d Tx"/"ssp%d Rx" and the userspace FE are handled by topology in our case, but they should be easy to expose without it, if you don't use topologies.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound...
[2] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound...
[3] https://lore.kernel.org/alsa-devel/20220511162403.3987658-8-cezary.rojewski@...
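(For illustration, a minimal sketch of a DAI driver exposing the two streams described above. The field values and the "ssp0" names follow the naming in this mail but are assumptions, not the actual avs code:)

static struct snd_soc_dai_driver ssp0_dai_sketch = {
	.name = "SSP0 Pin",
	.playback = {
		.stream_name	= "ssp0 Tx",	/* playback stream */
		.channels_min	= 1,
		.channels_max	= 8,
		.rates		= SNDRV_PCM_RATE_8000_192000,
		.formats	= SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE,
	},
	.capture = {
		.stream_name	= "ssp0 Rx",	/* capture stream */
		.channels_min	= 1,
		.channels_max	= 8,
		.rates		= SNDRV_PCM_RATE_8000_192000,
		.formats	= SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE,
	},
};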
On 5/26/22 04:57, Amadeusz Sławiński wrote:
On 5/26/2022 4:13 AM, Kuninori Morimoto wrote:
Hi Pierre-Louis
Thank you for your feedback
+-- basic board --------+
|+--------+             |
|| CPU ch0| <--> CodecA |
||     ch1| <-+         |
|+--------+   |         |
+-------------|---------+

+-- expansion board ----+
|             |         |
|             +-> CodecB|
+-----------------------+
(snip)
I'm not sure how to count the Cards, but intuitively, "basic board sound" is the 1st card and "expansion board sound" is the 2nd card for us.
(snip)
did you mean 'channels' or 'streams'?
(snip)
If however the 'ch0' and 'ch1' can be independent content then this configuration would make sense indeed, and it would be quite similar to the ideas on the Intel side.
These are independent sounds. It works for us if we merge them into one Card, but... I'm OK and will do nothing if no one wants multi-Card connections, but I'm happy to work on it if someone needs it.
Well, this sounds like what we did in avs, namely we split the cards based on endpoints. The main driver decides what endpoints to expose, based on ACPI detection and ACPI match rules, see [1]. For example, in our model it is possible to have 2 separate i2s codecs connected, each having its own card. From the avs driver we expose a snd_soc_dai_driver with 2 streams (playback and capture), see [2], and then tell the machine driver to just connect them to the codec [3] - look for "SSP%d Pin", "ssp%d Tx" and "ssp%d Rx". Connections between "ssp%d Tx"/"ssp%d Rx" and the userspace FE are handled by topology in our case, but they should be easy to expose without it, if you don't use topologies.
It works for ACPI because the platform driver can check if specific _HIDs are present or not, and decide to create different platform devices for different cards, with resources split in different components. In other words, there is no strict boundary between platform and machine driver, the platform has all the information needed.
I don't know if it's feasible in a Device Tree environment: the DT blob exposes the machine device, which uses *references* to resources exposed by platform and codec drivers. If there were multiple machine devices for different cards, how would that split of resources between different components be done?
The current ACPI approach will also not work if we move to a generic machine driver for SoundWire platform, with one or more devices exposed in ACPI for the board-level composition. If the machine devices are NOT created by the platform driver, I don't see a clear solution to support multiple cards.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound...
[2] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound...
[3] https://lore.kernel.org/alsa-devel/20220511162403.3987658-8-cezary.rojewski@...
On Thu, May 26, 2022 at 08:48:26AM -0500, Pierre-Louis Bossart wrote:
I don't know if it's feasible in a Device Tree environment: the DT blob exposes the machine device, which uses *references* to resources exposed by platform and codec drivers. If there were multiple machine devices for different cards, how would that split of resources between different components be done?
You can add whatever properties you like, however this is getting into policy decisions, so it isn't really hardware description and it wouldn't be idiomatic.
On 2022-05-26 3:48 PM, Pierre-Louis Bossart wrote:
On 5/26/22 04:57, Amadeusz Sławiński wrote:
Well, this sounds like what we did in avs, namely we split the cards based on endpoints. The main driver decides what endpoints to expose, based on ACPI detection and ACPI match rules, see [1]. For example, in our model it is possible to have 2 separate i2s codecs connected, each having its own card. From the avs driver we expose a snd_soc_dai_driver with 2 streams (playback and capture), see [2], and then tell the machine driver to just connect them to the codec [3] - look for "SSP%d Pin", "ssp%d Tx" and "ssp%d Rx". Connections between "ssp%d Tx"/"ssp%d Rx" and the userspace FE are handled by topology in our case, but they should be easy to expose without it, if you don't use topologies.
It works for ACPI because the platform driver can check if specific _HIDs are present or not, and decide to create different platform devices for different cards, with resources split in different components. In other words, there is no strict boundary between platform and machine driver, the platform has all the information needed.
I don't know if it's feasible in a Device Tree environment: the DT blob exposes the machine device, which uses *references* to resources exposed by platform and codec drivers. If there were multiple machine devices for different cards, how would that split of resources between different components be done?
The current ACPI approach will also not work if we move to a generic machine driver for SoundWire platform, with one or more devices exposed in ACPI for the board-level composition. If the machine devices are NOT created by the platform driver, I don't see a clear solution to support multiple cards.
Hmm.. I don't see why SDW is a problem here. Could you elaborate? I could boost avs-driver to support SDW configurations if we need a POC. Why would machine devices not be created by the platform (e.g. PCI) driver?
Regards, Czarek
On 5/26/22 09:17, Cezary Rojewski wrote:
On 2022-05-26 3:48 PM, Pierre-Louis Bossart wrote:
On 5/26/22 04:57, Amadeusz Sławiński wrote:
Well, this sounds like what we did in avs, namely we split the cards based on endpoints. The main driver decides what endpoints to expose, based on ACPI detection and ACPI match rules, see [1]. For example, in our model it is possible to have 2 separate i2s codecs connected, each having its own card. From the avs driver we expose a snd_soc_dai_driver with 2 streams (playback and capture), see [2], and then tell the machine driver to just connect them to the codec [3] - look for "SSP%d Pin", "ssp%d Tx" and "ssp%d Rx". Connections between "ssp%d Tx"/"ssp%d Rx" and the userspace FE are handled by topology in our case, but they should be easy to expose without it, if you don't use topologies.
It works for ACPI because the platform driver can check if specific _HIDs are present or not, and decide to create different platform devices for different cards, with resources split in different components. In other words, there is no strict boundary between platform and machine driver, the platform has all the information needed.
I don't know if it's feasible in a Device Tree environment: the DT blob exposes the machine device, which uses *references* to resources exposed by platform and codec drivers. If there were multiple machine devices for different cards, how would that split of resources between different components be done?
The current ACPI approach will also not work if we move to a generic machine driver for SoundWire platform, with one or more devices exposed in ACPI for the board-level composition. If the machine devices are NOT created by the platform driver, I don't see a clear solution to support multiple cards.
Hmm.. I don't see why SDW is a problem here. Could you elaborate? I could boost avs-driver to support SDW configurations if we need a POC. Why would machine devices not be created by the platform (e.g. PCI) driver?
Because there will be at some point an ACPI device for the machine driver. I can't give more details on a public mailing list just yet, but the approach based on the PCI driver creating a platform device is NOT future-proof.
Hi
It works for ACPI because the platform driver can check if specific _HIDs are present or not, and decide to create different platform devices for different cards, with resources split in different components. In other words, there is no strict boundary between platform and machine driver, the platform has all the information needed.
(snip)
Because there will be at some point an ACPI device for the machine driver. I can't give more details on a public mailing list just yet, but the approach based on the PCI driver creating a platform device is NOT future-proof.
I'm sorry, but I don't quite understand what "machine device" means here. But anyway, if my understanding is correct, the AVS solution was like this?
+-- DeviceA --+   +-- DeviceB -+
|+-----------+|   |+----------+|
||ComponentA1||   ||ComponentB||
||   [DAI]  <----->  [DAI]   ||    Card-1
|+-----------+|   |+----------+|
|             |   +------------+
|             |
|             |   +-- DeviceC -+
|+-----------+|   |+----------+|
||ComponentA2||   ||ComponentC||
||   [DAI]  <----->  [DAI]   ||    Card-2
|+-----------+|   |+----------+|
+-------------+   +------------+
If [DeviceA] could separate its features into [ComponentA1] and [ComponentA2], it could use multiple Cards on current ASoC, but it is not a generic solution, as Pierre-Louis explained. I can agree with that if my understanding is correct. At least my Sound can't use this style.
My suggestion was like this:
+-- DeviceA --+   +- DeviceB --+
|+-----------+|   |+----------+|
||ComponentA ||   ||ComponentB||
||   [DAI]  <----->  [DAI]   ||    Card-1
||           ||   |+----------+|
||           ||   +------------+
||           ||
||           ||   +- DeviceC --+
||           ||   |+----------+|
||           ||   ||ComponentC||
||   [DAI]  <----->  [DAI]   ||    Card-2
|+-----------+|   |+----------+|
+-------------+   +------------+
[DeviceA]/[ComponentA] uses a more generic style, but we can't use this on current ASoC because of the Component vs Card relationship.
If [DeviceA] doesn't need complex DAPM/clock control, my suggested style can be one of the solutions for it (?). But in that case, a *general* DAPM/clock solution is difficult. One note: we can still use the AVS style on top of this style.
Am I understanding this correctly?
The basic ASoC design was created a very long time ago, and thus some parts of it have become unnecessary restrictions, mismatched with current systems, or unintuitive today, I think.
My opinion is that we can't create a perfect solution from the beginning and/or with one patch. What we can do is make small updates with much deep testing to move in the correct direction.
If my suggestion (removing the Component vs Card restriction as a first step) is the correct direction, and if we can agree on that, I'm very happy to work on it (not including a *generic* DAPM/clock solution for now though ;).
Thank you for your help !!
Best regards
---
Kuninori Morimoto
On Fri, May 27, 2022 at 06:50:34AM +0000, Kuninori Morimoto wrote:
Because there will be at some point an ACPI device for the machine driver. I can't give more details on a public mailing list just yet, but the approach based on the PCI driver creating a platform device is NOT future-proof.
I'm sorry, but I don't quite understand what "machine device" means here.
The card (e.g. audio-graph-card).
If [DeviceA] doesn't need complex DAPM/clock control, my suggested style can be one of the solutions for it (?). But in that case, a *general* DAPM/clock solution is difficult. One note: we can still use the AVS style on top of this style.
It can definitely work for some simpler cases, but working out which cases and making sure that for example things don't break if someone improves the driver for a piece of hardware gets fun - things might not be linked up with current code, but the hardware might actually have more features that cause some cross connection.
If my suggestion (removing the Component vs Card restriction as a first step) is the correct direction, and if we can agree on that, I'm very happy to work on it (not including a *generic* DAPM/clock solution for now though ;).
On the one hand there does seem to be some demand for it, on the other hand I do worry that it will end up eventually just running into things that mean we're effectively pulling everything back together into a single card again, even if what's exposed to userspace looks like multiple cards somehow.
Hi Mark
If [DeviceA] doesn't need complex DAPM/clock control, my suggested style can be one of the solutions for it (?). But in that case, a *general* DAPM/clock solution is difficult. One note: we can still use the AVS style on top of this style.
It can definitely work for some simpler cases, but working out which cases and making sure that for example things don't break if someone improves the driver for a piece of hardware gets fun - things might not be linked up with current code, but the hardware might actually have more features that cause some cross connection.
Yes, I agree with this. No one wants to dive into the mess :) If I work on it, loosening little by little, step by step, with deep testing is my style, not one big change patch ;)
On the one hand there does seem to be some demand for it, on the other hand I do worry that it will end up eventually just running into things that mean we're effectively pulling everything back together into a single card again, even if what's exposed to userspace looks like multiple cards somehow.
Hmm... I have no idea about this.
Thank you for your help !!
Best regards
---
Kuninori Morimoto
On 2022-05-27 8:50 AM, Kuninori Morimoto wrote:
I'm sorry, but I don't quite understand what "machine device" means here. But anyway, if my understanding is correct, the AVS solution was like this?
+-- DeviceA --+   +-- DeviceB -+
|+-----------+|   |+----------+|
||ComponentA1||   ||ComponentB||
||   [DAI]  <----->  [DAI]   ||    Card-1
|+-----------+|   |+----------+|
|             |   +------------+
|             |
|             |   +-- DeviceC -+
|+-----------+|   |+----------+|
||ComponentA2||   ||ComponentC||
||   [DAI]  <----->  [DAI]   ||    Card-2
|+-----------+|   |+----------+|
+-------------+   +------------+
If [DeviceA] could separate its features into [ComponentA1] and [ComponentA2], it could use multiple Cards on current ASoC, but it is not a generic solution, as Pierre-Louis explained. I can agree with that if my understanding is correct. At least my Sound can't use this style.
Hello Kuninori,
Could you explain what you meant by "my Sound"? Perhaps you could also share the location of the problematic bits in your driver (file path + code lines)? I'd like to take a look - if possible, that is - to better understand your issue.
Regards, Czarek
Hi Cezary
Thank you for your feedback
+-- DeviceA --+   +-- DeviceB -+
|+-----------+|   |+----------+|
||ComponentA1||   ||ComponentB||
||   [DAI]  <----->  [DAI]   ||    Card-1
|+-----------+|   |+----------+|
|             |   +------------+
|             |
|             |   +-- DeviceC -+
|+-----------+|   |+----------+|
||ComponentA2||   ||ComponentC||
||   [DAI]  <----->  [DAI]   ||    Card-2
|+-----------+|   |+----------+|
+-------------+   +------------+
If [DeviceA] could separate its features into [ComponentA1] and [ComponentA2], it could use multiple Cards on current ASoC, but it is not a generic solution, as Pierre-Louis explained. I can agree with that if my understanding is correct. At least my Sound can't use this style.
Hello Kuninori,
Could you explain what you meant by "my Sound"? Perhaps you could also share the location of the problematic bits in your driver (file path + code lines)? I'd like to take a look - if possible, that is - to better understand your issue.
Thanks. "my Sound" mean "my sound driver". The Device image is like this
+-- DeviceA --+
|+-----------+|
||Component  ||
||      [DAI]|
||      [DAI]|
||       ... |
||      [DAI]|
||      [DAI]|
|+-----------+|
+-------------+
It calls devm_snd_soc_register_component() here. The number of DAIs = rsnd_rdai_nr(priv) is based on the SoC (= DT settings), but they are all the same kind of DAI. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/sound/soc/sh/rcar/core.c?h=v5.18#n1923
The DAIs are set up here. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/sound/soc/sh/rcar/core.c?h=v5.18#n1350
The Component driver is here: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/sound/soc/sh/rcar/core.c?h=v5.18#n1810
The DAI ops are here: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/sound/soc/sh/rcar/core.c?h=v5.18#n1067
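As a condensed sketch of that pattern (simplified, not the verbatim rcar code), one component is registered with an array of identical DAI drivers whose count comes from DT:

static int rsnd_register_sketch(struct device *dev, struct rsnd_priv *priv,
				struct snd_soc_dai_driver *drv)
{
	int nr = rsnd_rdai_nr(priv);	/* number of DAIs (DT based) */

	/* one component, nr DAIs, all sharing the same DAI ops */
	return devm_snd_soc_register_component(dev, &rsnd_soc_component,
					       drv, nr);
}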
Please let me know if something is not understandable.
Thank you for your help !!
Best regards
---
Kuninori Morimoto
On 2022-06-03 1:57 AM, Kuninori Morimoto wrote:
Thanks. "my Sound" mean "my sound driver". The Device image is like this
+-- DeviceA --+
|+-----------+|
||Component  ||
||      [DAI]|
||      [DAI]|
||       ... |
||      [DAI]|
||      [DAI]|
|+-----------+|
+-------------+
It calls devm_snd_soc_register_component() here. The number of DAIs = rsnd_rdai_nr(priv) is based on the SoC (= DT settings), but they are all the same kind of DAI. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/sound/soc/sh/rcar/core.c?h=v5.18#n1923
The DAIs are set up here. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/sound/soc/sh/rcar/core.c?h=v5.18#n1350
The Component driver is here: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/sound/soc/sh/rcar/core.c?h=v5.18#n1810
The DAI ops are here: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/sound/soc/sh/rcar/core.c?h=v5.18#n1067
Hello Kuninori,
Thank you for the links - they helped me speed up the learning process.
So, together with Amadeo we spent some time analyzing your audio stack. The equivalent of avs's pci_driver (sound/soc/intel/avs/core.c) seems to be the rcar_sound platform_driver defined in sound/soc/sh/rcar/core.c. There is no firmware or topology involved. Instead, we have hardware components, each representing some processing function, e.g. sample rate conversion. There is also a component which does not represent any such function - it represents the ASoC side of things instead. It's called *DAI*. And that's the component we were looking at mostly.
I believe each *DAI* exposes up to four FEs depending on the board, and during PCM operations, every FE enlists one or more hw components (rsnd_mod instances) to do the actual job.
Earlier in the thread you specified the following example as being problematic:
+-- basic board --------+
|+--------+             |
|| CPU ch0| <--> CodecA |
||     ch1| <-+         |
|+--------+   |         |
+-------------|---------+

+-- expansion board ----+
|             |         |
|             +-> CodecB|
+-----------------------+
with a suggestion to solve it with a 1xComponent:NxCard relation, which is not supported by ASoC. However, we believe ASoC already presents solutions to such a problem, either through 1) a compound card or, less commonly, 2) splitting one big CPU-Card pair into several smaller pairs.
The relation that ASoC does support is NxComponent:1xCard. As both CPU and Codec are components, this roughly translates to NxCPU:1xCard:NxCodec. Your case is an example of 1xCPU:1xCard:2xCodec. Such 1xCPU:1xCard:NxCodec combinations are arguably the most common ones, and developers pick option 1) the majority of the time. Less hassle. See sound/soc/intel/boards for examples.
As I believe the expansion board has some identifier, we should be able to detect whether we are dealing with the basic board or the basic+expansion board. Or at least the board name should differ between the two devices. If so, two machine board drivers are needed: one servicing just the basic board (basic sound card) and the other for the combo board (combo sound card).
Another option is the registration of many components, allowing the developer to represent each logical block with a single component. This comes down to invoking snd_soc_register_component() multiple times.
This is especially useful if one or more of the four FEs available on the R-Car board has nothing to do with CodecA. Let's say FEs 1, 2 and 3 are actually routed through CodecA whereas FE 4 goes through CodecB. It is logical to split the CPU functionality so that FE 4 is serviced by a CPU component that is responsible for just that one part. This can be modified further if we want to expose the entire set of FEs for both CodecA and CodecB. In that case, the essence of the PCM operations should be moved to some common code so that the functions exposed via dai->ops become wrappers; this eliminates code duplication. Then just create two instances of your CPU component, assign the dai_driver(s) and register them both, as in the sketch after the next paragraph.
Currently, you're calling snd_soc_register_component() just once in your driver, so option 2) is not possible without altering rsnd_dai_probe()/rsnd_probe().
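For reference, a hedged sketch of what option 2) could look like; rsnd_component_a/b and the split counts are hypothetical names for illustration, not existing rcar code:

/* hypothetical sketch of option 2): split the DAIs across two
 * components and register each separately
 */
static int rsnd_register_split(struct device *dev,
			       struct snd_soc_dai_driver *drv,
			       int nr_basic, int nr_exp)
{
	int ret;

	/* DAIs routed to CodecA (basic board sound card) */
	ret = devm_snd_soc_register_component(dev, &rsnd_component_a,
					      drv, nr_basic);
	if (ret)
		return ret;

	/* DAIs routed to CodecB (expansion board sound card) */
	return devm_snd_soc_register_component(dev, &rsnd_component_b,
					       drv + nr_basic, nr_exp);
}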
Perhaps there is something I'm missing, but considering that ASoC does have solutions to the raised problem, I do not see why either 1) or 2) cannot be used to deal with it. Amadeo and I can definitely help with 2) should you select it :)
Regards, Czarek
Hi Cezary
Thank you for your feedback
+-- DeviceA --+
|+-----------+|
||Component  ||
||      [DAI]|
||      [DAI]|
||       ... |
||      [DAI]|
||      [DAI]|
|+-----------+|
+-------------+
(snip)
+-- basic board --------+
|+--------+             |
|| CPU ch0| <--> CodecA |
||     ch1| <-+         |
|+--------+   |         |
+-------------|---------+

+-- expansion board ----+
|             |         |
|             +-> CodecB|
+-----------------------+
with a suggestion to solve it with a 1xComponent:NxCard relation, which is not supported by ASoC. However, we believe ASoC already presents solutions to such a problem, either through 1) a compound card or, less commonly, 2) splitting one big CPU-Card pair into several smaller pairs.
(snip)
Perhaps there is something I'm missing, but considering that ASoC does have solutions to the raised problem, I do not see why either 1) or 2) cannot be used to deal with it. Amadeo and I can definitely help with 2) should you select it :)
Yes.
If my topic were "how to use an expansion sound board", case 1) and/or case 2) could solve the issue. And Renesas has already been using case 1) for a long time.
But my topic is, roughly speaking, "it looks strange".
"Basic board sound" and "Expansion board sound" are different sound card, in *intuitive*. Case 1) handles it as "big one card". It works, but not intuitive. For example, developer can understand it because he know the background, but user will confuse, because they don't know the background.
My earlier image was very simple, but in reality, it can be like this:
+-- basic board -------------------+
|+-------------------+             |
|| CPU <DAI0> --- ch0| <--> CodecA |
||     <DAI1> --- ch1| <--> CodecB |
||     <DAI2> --- ch2| <--+        |
||     <DAI3> --- ch3| <-+|        |
|+-------------------+   ||        |
+------------------------||--------+

+------ expansion board -||--------+
|                        ||        |
|                        |+> CodecC| (*)
|                        +-> CodecD|
+----------------------------------+
Intuitively, CodecC (*) is the "first Codec of the 2nd sound card", but it is the "3rd Codec of the 1st sound card" in the case 1) solution. And thus, the user needs to know how many DAIs exist on the basic board.
In addition, in my system's case, a "MIXer" which uses many DAIs can be used anywhere (a special DT is needed in this case). <DAI0> and <DAI1> are using it in the sample below. In this case, the interface number of CodecC (*) will be even more confusing for a user who thinks it is the "first Codec of the 2nd sound card".
+-- basic board -------------------+
|+-------------------+             |
|| CPU <DAI0> --- ch0| <--> CodecA |
||     <DAI1> -/     |             |
||     <DAI2> --- ch1| <--> CodecB |
||     <DAI3> --- ch2| <--+        |
||     <DAI4> --- ch3| <-+|        |
|+-------------------+   ||        |
+------------------------||--------+

+------ expansion board -||--------+
|                        ||        |
|                        |+> CodecC| (*)
|                        +-> CodecD|
+----------------------------------+
And case 2) needs many Components. Indeed, this case can have "2 cards" in my system. If we focus on a "working system", I think this is a good choice.
But it is not a good match for the "Component" concept, in my opinion. I could agree if some DAIs could be semantically grouped. Your system is such a case, if my understanding is correct. But in my case, all DAIs/channels are the same. Having many "1xComponent:1xDAI" pairs is strange, and not a good match for the "Component" concept, IMO.
Both case 1) and case 2) can handle my "basic board sound" + "expansion board sound" system anyway. But both are "stopgap solutions", not a "generic solution", for me for now.
What I wanted to say was that we can expand ASoC while keeping the "Component" concept and "intuitive usage", if we can have the "1xComponent:NxCard" style.
I hope this mail conveys my concern correctly.
Thank you for your help !!
Best regards
---
Kuninori Morimoto
On 2022-05-26 4:45 PM, Pierre-Louis Bossart wrote:
On 5/26/22 09:17, Cezary Rojewski wrote:
Hmm.. I don't see why SDW is a problem here. Could you elaborate? I could boost avs-driver to support SDW configurations if we need a POC. Why would machine devices not be created by the platform (e.g. PCI) driver?
Because there will be at some point an ACPI device for the machine driver. I can't give more details on a public mailing list just yet, but the approach based on the PCI driver creating a platform device is NOT future-proof.
I still believe that the upcoming descriptor changes only simplify things: fewer machine descriptor tables, more reliance on the actual ACPI description. Remember that the granular card approach does not ban the usage of compound cards. You can still mix both together. If we encounter some descriptor that says given N devices are logically inseparable, then we create a single machine board for them. The fact that a sound card has a device assigned to it does not change: something will create a device, and the sound card driver will attach to it. From that point onward, the assigned DAI link components can still take control of the card's behavior just like we do today.
Regards, Czarek
On 5/27/22 03:38, Cezary Rojewski wrote:
On 2022-05-26 4:45 PM, Pierre-Louis Bossart wrote:
On 5/26/22 09:17, Cezary Rojewski wrote:
Hmm.. I don't see why SDW is a problem here. Could you elaborate? I could boost avs-driver to support SDW configurations if we need a POC. Why would machine devices not be created by the platform (e.g. PCI) driver?
Because there will be at some point an ACPI device for the machine driver. I can't give more details on a public mailing list just yet, but the approach based on the PCI driver creating a platform device is NOT future-proof.
I still believe that the upcoming descriptor changes only simplify things: fewer machine descriptor tables, more reliance on the actual ACPI description. Remember that the granular card approach does not ban the usage of compound cards. You can still mix both together. If we encounter some descriptor that says given N devices are logically inseparable, then we create a single machine board for them. The fact that a sound card has a device assigned to it does not change: something will create a device, and the sound card driver will attach to it. From that point onward, the assigned DAI link components can still take control of the card's behavior just like we do today.
Having multiple cards created is not the issue, it's that there's no generic way for the platform driver to know how to expose the DAIs in separate components.
Shamelessly borrowing Morimoto-san's ascii art, it would be ideal if both of the models below could be supported. Currently the first one is not supported due to the component:card 1:1 relationship, and when the card is created it's too late to change how the components are exposed.
+-- DeviceA --+   +- DeviceB --+
|+-----------+|   |+----------+|
||ComponentA ||   ||ComponentB||
||   [DAI]  <----->  [DAI]   ||    Card-1
||           ||   |+----------+|
||           ||   +------------+
||           ||
||           ||   +- DeviceC --+
||           ||   |+----------+|
||           ||   ||ComponentC||
||   [DAI]  <----->  [DAI]   ||    Card-2
|+-----------+|   |+----------+|
+-------------+   +------------+
+-- DeviceA --+   +-- DeviceB -+
|+-----------+|   |+----------+|
||ComponentA1||   ||ComponentB||
||   [DAI]  <----->  [DAI]   ||    Card-1
|+-----------+|   |+----------+|
|             |   +------------+
|             |
|             |   +-- DeviceC -+
|+-----------+|   |+----------+|
||ComponentA2||   ||ComponentC||
||   [DAI]  <----->  [DAI]   ||    Card-2
|+-----------+|   |+----------+|
+-------------+   +------------+
On Tue, May 24, 2022 at 04:40:24AM +0000, Kuninori Morimoto wrote:
I am very interested in cleaning up ASoC. I think it is still very complex, and thus not flexible. I'm thinking that we want to clean up...
- Component with multi Card connection
I'm really not certain that we want components in multiple cards at all, I know the Intel AVS people wanted this but I'm concerned that if a single component is in multiple cards we'll inevitably have connections between the cards which each needs to take into consideration, that could be routing or something else like clock configuration.
- fixed playback vs capture
- fixed CPU vs Codec
- DPCM connection vs normal connection
These all should be clear wins. The first two are much easier.
I guess the biggest reason why we have this limitation is the connections. A text-based diagram is a little difficult, but if my understanding is correct, the current connections are...
So if rtd had some other connector list instead of "dai" (let's say an "ep" list here for now), and we removed the Card <-> Component connection, I think the connections could become simple and the issue would be gone. What do you think?
There's also the DAPM graph and clocking to consider - if you can route audio from one DAI to another, or mix audio signals to/from multiple DAIs then DAPM can end up spanning multiple cards which gets interesting. The clocks are probably less of an issue if we move to the clock API, though there's the potential for disagreements between cards about clock rates.
On 2022-05-24 3:24 PM, Mark Brown wrote:
On Tue, May 24, 2022 at 04:40:24AM +0000, Kuninori Morimoto wrote:
I am very interested in cleaning up ASoC. I think it is still very complex, and thus not flexible. I'm thinking that we want to clean up...
- Component with multi Card connection
I'm really not certain that we want components in multiple cards at all, I know the Intel AVS people wanted this but I'm concerned that if a single component is in multiple cards we'll inevitably have connections between the cards which each needs to take into consideration, that could be routing or something else like clock configuration.
Hello,
It seems the explanation of the avs-driver's design had some shortcomings. The AVS people (like me) are to blame for this :S
The exact opposite is true: we do not want a 1:N component:card relation. A separate set of components and a separate card (machine board) per logically separate audio device is the approach of choice here. It has the benefits of simplifying PM and allowing for a better user experience: a probe() failure of one device does not prevent other devices from enumerating.
Regards, Czarek
On 5/24/22 10:06, Cezary Rojewski wrote:
On 2022-05-24 3:24 PM, Mark Brown wrote:
On Tue, May 24, 2022 at 04:40:24AM +0000, Kuninori Morimoto wrote:
I am very interested in cleaning up ASoC. I think it is still very complex, and thus not flexible. I'm thinking that we want to clean up...
- Component with multi Card connection
I'm really not certain that we want components in multiple cards at all, I know the Intel AVS people wanted this but I'm concerned that if a single component is in multiple cards we'll inevitably have connections between the cards which each needs to take into consideration, that could be routing or something else like clock configuration.
Hello,
It seems the explanation of the avs-driver's design had some shortcomings. The AVS people (like me) are to blame for this :S
The exact opposite is true: we do not want a 1:N component:card relation. A separate set of components and a separate card (machine board) per logically separate audio device is the approach of choice here. It has the benefits of simplifying PM and allowing for a better user experience: a probe() failure of one device does not prevent other devices from enumerating.
The separate card solution works for ACPI devices only because we don't have any structured information. I don't know how it would work for Device Tree. There's no mechanism I am aware of by which the platform driver would be informed of board requirements and would split its DAIs into the different components required by that board. It might be worked around by having one component per DAI, though, but that waters down the notion of a component driver quite a bit.
If you have any data connection or loopbacks between cards, or shared clocks, then you have DAPM events that are interesting to propagate. Power management is not 'simple' or even 'simpler' to me.
We can also debate forever on probe failures; the user will only see a problem when they try to use the missing devices. Fail big and fail early is the best model I've seen so far, much easier to support and report.
I am not saying having multiple cards is a bad idea, just that there are a number of open technical questions with strong implications for the implementation and scaling.
Hi Mark, Cezary, Pierre-Louis.
Thank you for your feedback.
It seems the explanation of the avs-driver's design had some shortcomings. The AVS people (like me) are to blame for this :S
The exact opposite is true: we do not want a 1:N component:card relation.
OK, thanks, good to know. Sorry for my misunderstanding; my brain was biased.
If you have any data connection or loopbacks between cards, or shared clocks, then you have DAPM events that are interesting to propagate. Power management is not 'simple' or even 'simpler' to me.
Yes, I agree with this.
I am not saying having multiple cards is a bad idea, just that there are a number of open technical questions with strong implications for the implementation and scaling.
Thanks.
My basic idea is that step-by-step small refactoring can remove limitations and/or add expansion. I don't want a big change, of course, and I'm not thinking that we can create a "perfect solution" at the beginning.
Thank you for your help !!
Best regards
---
Kuninori Morimoto