Re: Query on audio-graph-card DT binding
On Tue, Dec 26, 2023 at 09:58:02PM +0530, Sameer Pujar wrote:
Hi Morimoto-san,
This question is regarding the DT binding for the audio-graph-card.c driver.
I am looking to enable the following DAI link connection in the device tree for Tegra audio:
             /-----> codec1 endpoint
CPU endpoint
             \-----> codec2 endpoint
I see that the "remote-endpoint" property can only specify a single phandle for the connection to a remote endpoint; in other words, the link can only be one-to-one. However, I see that it leads to a build error if a phandle array is provided for the "remote-endpoint" property, as in the example below.
cpu_port {
	cpu_ep: endpoint {
		remote-endpoint = <&codec1_ep>, <&codec2_ep>;
	};
};

codec1 {
	codec1_ep: endpoint {
		remote-endpoint = <&cpu_ep>;
	};
};

codec2 {
	codec2_ep: endpoint {
		remote-endpoint = <&cpu_ep>;
	};
};
Is there a possibility to re-use the same CPU endpoint for connecting to multiple codec endpoints, as shown in the above example?
Can you describe the use-case? Is there a need to switch between codec1 and codec2 endpoints or do they receive the same data in parallel all the time?
Could this perhaps be described by adding multiple CPU ports with one endpoint each?
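[Editorial note: for illustration only, the one-port-per-connection alternative suggested above might look like the following sketch. Node names and labels are hypothetical, not taken from an actual Tegra DT.]

```dts
/* Hypothetical sketch: one CPU port (with a single endpoint) per codec,
 * so every remote-endpoint stays a 1:1 phandle.
 */
cpu {
	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		port@0 {
			reg = <0>;
			cpu_ep0: endpoint { remote-endpoint = <&codec1_ep>; };
		};

		port@1 {
			reg = <1>;
			cpu_ep1: endpoint { remote-endpoint = <&codec2_ep>; };
		};
	};
};
```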
Thierry
On Thu, Jan 04, 2024 at 06:07:22PM +0100, Thierry Reding wrote:
On Tue, Dec 26, 2023 at 09:58:02PM +0530, Sameer Pujar wrote:
             /-----> codec1 endpoint
CPU endpoint
             \-----> codec2 endpoint
Can you describe the use-case? Is there a need to switch between codec1 and codec2 endpoints or do they receive the same data in parallel all the time?
Could this perhaps be described by adding multiple CPU ports with one endpoint each?
Don't know about the specific use case that Sameer is looking at but to me this looks like a surround sound setup where multiple stereo (or mono) DACs are wired in parallel, either with a TDM setup or with multiple data lines. There's multiple CODECs all taking input from a single host controller.
On 04-01-2024 22:52, Mark Brown wrote:
On Thu, Jan 04, 2024 at 06:07:22PM +0100, Thierry Reding wrote:
On Tue, Dec 26, 2023 at 09:58:02PM +0530, Sameer Pujar wrote:
             /-----> codec1 endpoint
CPU endpoint
             \-----> codec2 endpoint
Can you describe the use-case? Is there a need to switch between codec1 and codec2 endpoints or do they receive the same data in parallel all the time? Could this perhaps be described by adding multiple CPU ports with one endpoint each?
Don't know about the specific use case that Sameer is looking at but to me this looks like a surround sound setup where multiple stereo (or mono) DACs are wired in parallel, either with a TDM setup or with multiple data lines. There's multiple CODECs all taking input from a single host controller.
Yes, it is a TDM use case where the same clock and data lines are shared with multiple CODECs. Each CODEC is expected to pick up data based on its allotted TDM slot.
It is possible to create multiple dummy CPU endpoints and use these in the DT binding for each CODEC, but I am not sure this is the best way right now. There are a few things to note with dummy endpoints. First, it leads to a bit of duplication of endpoint DAIs and the DAI links for them. Please note that the host controller pins are actually shared with the external CODECs, so shouldn't DT provide a way to represent this connection? Second, ASoC provides a way to represent multiple CODECs on a single DAI link in the driver, and my concern is to understand whether the present binding can be extended to represent this scenario. Third, one of the users wanted to connect 6 CODECs, and that is the maximum request I have seen so far. I can expose additional dummy CPU DAIs keeping this maximum in mind, but I am not sure whether users would like to extend it further. My concern is: how can we make this easily extensible and simpler to use?
With custom DT bindings it may be simpler to resolve this, but Tegra audio presently relies on standard graph remote-endpoints binding. So I guess diverging from this may not be preferable?
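[Editorial note: as an aside, the generic card helpers can parse per-endpoint TDM slot information from DT. A sketch of how the shared-bus slots might be declared is below; labels are hypothetical, and the exact property support should be checked against the binding in use.]

```dts
/* Hypothetical sketch: two codec endpoints on one shared TDM bus,
 * each declaring its slot layout with the generic
 * dai-tdm-slot-num / dai-tdm-slot-width endpoint properties.
 */
codec1_port {
	codec1_ep: endpoint {
		remote-endpoint = <&cpu_ep>;
		dai-tdm-slot-num = <2>;
		dai-tdm-slot-width = <32>;
	};
};

codec2_port {
	codec2_ep: endpoint {
		remote-endpoint = <&cpu_ep>;
		dai-tdm-slot-num = <2>;
		dai-tdm-slot-width = <32>;
	};
};
```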
On Fri, Jan 05, 2024 at 10:24:18AM +0530, Sameer Pujar wrote:
On 04-01-2024 22:52, Mark Brown wrote:
On Thu, Jan 04, 2024 at 06:07:22PM +0100, Thierry Reding wrote:
On Tue, Dec 26, 2023 at 09:58:02PM +0530, Sameer Pujar wrote:
             /-----> codec1 endpoint
CPU endpoint
             \-----> codec2 endpoint
Can you describe the use-case? Is there a need to switch between codec1 and codec2 endpoints or do they receive the same data in parallel all the time? Could this perhaps be described by adding multiple CPU ports with one endpoint each?
Don't know about the specific use case that Sameer is looking at but to me this looks like a surround sound setup where multiple stereo (or mono) DACs are wired in parallel, either with a TDM setup or with multiple data lines. There's multiple CODECs all taking input from a single host controller.
Yes, it is a TDM use case where the same clock and data lines are shared with multiple CODECs. Each CODEC is expected to pick up data based on its allotted TDM slot.
It is possible to create multiple dummy CPU endpoints and use these in the DT binding for each CODEC, but I am not sure this is the best way right now. There are a few things to note with dummy endpoints. First, it leads to a bit of duplication of endpoint DAIs and the DAI links for them. Please note that the host controller pins are actually shared with the external CODECs, so shouldn't DT provide a way to represent this connection? Second, ASoC provides a way to represent multiple CODECs on a single DAI link in the driver, and my concern is to understand whether the present binding can be extended to represent this scenario. Third, one of the users wanted to connect 6 CODECs, and that is the maximum request I have seen so far. I can expose additional dummy CPU DAIs keeping this maximum in mind, but I am not sure whether users would like to extend it further. My concern is: how can we make this easily extensible and simpler to use?
With custom DT bindings it may be simpler to resolve this, but Tegra audio presently relies on standard graph remote-endpoints binding. So I guess diverging from this may not be preferable?
This seems like a legitimate use-case for the graph bindings, but perhaps one that nobody has run into yet. It might be worth looking into extending the bindings to account for this.
I think there are two pieces for this. On one hand we have the DTC that complains, which I think is what you were seeing. It's a bit tricky to update because it checks for bidirectionality of the endpoints, which is trivial to do with 1:1 but more complicated with 1:N relationships. I've done some prototyping but not sure if my test DT is exactly what you need. Can you send a snippet of what your DT looks like to test the DTC changes against?
The other part is the DT schema which currently restricts the remote-endpoint property to be a single phandle. We would want phandle-array in this case with an updated description. Something like this:
--- >8 ---
diff --git a/dtschema/schemas/graph.yaml b/dtschema/schemas/graph.yaml
index bca450514640..1459b88b9b77 100644
--- a/dtschema/schemas/graph.yaml
+++ b/dtschema/schemas/graph.yaml
@@ -42,8 +42,9 @@ $defs:
 
     remote-endpoint:
       description: |
-        phandle to an 'endpoint' subnode of a remote device node.
-      $ref: /schemas/types.yaml#/definitions/phandle
+        A list of phandles to 'endpoint' subnodes of one or more remote
+        device node.
+      $ref: /schemas/types.yaml#/definitions/phandle-array
 
     port-base:
       type: object
--- >8 ---
Thierry
On 05-01-2024 13:41, Thierry Reding wrote:
On Fri, Jan 05, 2024 at 10:24:18AM +0530, Sameer Pujar wrote:
On 04-01-2024 22:52, Mark Brown wrote:
On Thu, Jan 04, 2024 at 06:07:22PM +0100, Thierry Reding wrote:
On Tue, Dec 26, 2023 at 09:58:02PM +0530, Sameer Pujar wrote:
             /-----> codec1 endpoint
CPU endpoint
             \-----> codec2 endpoint
Can you describe the use-case? Is there a need to switch between codec1 and codec2 endpoints or do they receive the same data in parallel all the time? Could this perhaps be described by adding multiple CPU ports with one endpoint each?
Don't know about the specific use case that Sameer is looking at but to me this looks like a surround sound setup where multiple stereo (or mono) DACs are wired in parallel, either with a TDM setup or with multiple data lines. There's multiple CODECs all taking input from a single host controller.
Yes, it is a TDM use case where the same clock and data lines are shared with multiple CODECs. Each CODEC is expected to pick up data based on its allotted TDM slot.
It is possible to create multiple dummy CPU endpoints and use these in the DT binding for each CODEC, but I am not sure this is the best way right now. There are a few things to note with dummy endpoints. First, it leads to a bit of duplication of endpoint DAIs and the DAI links for them. Please note that the host controller pins are actually shared with the external CODECs, so shouldn't DT provide a way to represent this connection? Second, ASoC provides a way to represent multiple CODECs on a single DAI link in the driver, and my concern is to understand whether the present binding can be extended to represent this scenario. Third, one of the users wanted to connect 6 CODECs, and that is the maximum request I have seen so far. I can expose additional dummy CPU DAIs keeping this maximum in mind, but I am not sure whether users would like to extend it further. My concern is: how can we make this easily extensible and simpler to use?
With custom DT bindings it may be simpler to resolve this, but Tegra audio presently relies on standard graph remote-endpoints binding. So I guess diverging from this may not be preferable?
This seems like a legitimate use-case for the graph bindings, but perhaps one that nobody has run into yet. It might be worth looking into extending the bindings to account for this.
I think there are two pieces for this. On one hand we have the DTC that complains, which I think is what you were seeing. It's a bit tricky to update because it checks for bidirectionality of the endpoints, which is trivial to do with 1:1 but more complicated with 1:N relationships. I've done some prototyping but not sure if my test DT is exactly what you need. Can you send a snippet of what your DT looks like to test the DTC changes against?
This is the snippet I was trying to test:
diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000.dtsi b/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000.dtsi
index eb79e80..22a97e2 100644
--- a/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000.dtsi
@@ -13,7 +13,8 @@
 		port@1 {
 			endpoint {
 				dai-format = "i2s";
-				remote-endpoint = <&rt5640_ep>;
+				remote-endpoint = <&rt5640_ep>,
+						  <&rt5640_ep2>;
 			};
 		};
 	};
@@ -53,10 +54,14 @@
 		sound-name-prefix = "CVB-RT";
 
 		port {
-			rt5640_ep: endpoint {
+			rt5640_ep: endpoint@0 {
 				remote-endpoint = <&i2s1_dap>;
 				mclk-fs = <256>;
 			};
+
+			rt5640_ep2: endpoint@1 {
+				remote-endpoint = <&i2s1_dap>;
+			};
 		};
 	};
The other part is the DT schema which currently restricts the remote-endpoint property to be a single phandle. We would want phandle-array in this case with an updated description. Something like this:
--- >8 ---
diff --git a/dtschema/schemas/graph.yaml b/dtschema/schemas/graph.yaml
index bca450514640..1459b88b9b77 100644
--- a/dtschema/schemas/graph.yaml
+++ b/dtschema/schemas/graph.yaml
@@ -42,8 +42,9 @@ $defs:
 
     remote-endpoint:
      description: |
-        phandle to an 'endpoint' subnode of a remote device node.
-      $ref: /schemas/types.yaml#/definitions/phandle
+        A list of phandles to 'endpoint' subnodes of one or more remote
+        device node.
+      $ref: /schemas/types.yaml#/definitions/phandle-array
 
     port-base:
       type: object
--- >8 ---
Thierry
Hi Sameer
             /-----> codec1 endpoint
CPU endpoint
             \-----> codec2 endpoint
This sounds like a "Single CPU - Multi Codec" connection, and if my understanding is correct, current ASoC does not support it so far. But would a dummy CPU with a Multi-CPU/Codec connection help you? I'm not 100% sure though... See ${LINUX}/sound/soc/generic/audio-graph-card2-custom-sample.dtsi
DT looks like
[Multi-CPU/Codec]

         +-+            +-+
cpu   <--| |<-@-------->| |-> codec1
dummy <--| |            | |-> codec2
         +-+            +-+
Use Multi-CPU/Codec connection with dummy.
audio-graph-card2 {
	compatible = "audio-graph-card2";
	links = <&mcpu>;

	multi {
		/* [Multi-CPU] */
		ports@0 {
			mcpu:	port@0 { mcpu0_ep: endpoint { remote-endpoint = <&mcodec0_ep>; }; };
				port@1 { mcpu1_ep: endpoint { remote-endpoint = <&cpu_ep>;     }; };
				port@2 { mcpu2_ep: endpoint { remote-endpoint = <&dummy_ep>;   }; };
		};

		/* [Multi-Codec] */
		ports@1 {
			port@0 { mcodec0_ep: endpoint { remote-endpoint = <&mcpu0_ep>;  }; };
			port@1 { mcodec1_ep: endpoint { remote-endpoint = <&codec1_ep>; }; };
			port@2 { mcodec2_ep: endpoint { remote-endpoint = <&codec2_ep>; }; };
		};
	};
};

test_cpu {
	compatible = "test-cpu";
	port {
		dummy_ep: endpoint { remote-endpoint = <&mcpu2_ep>; };
	};
};
Thank you for your help !!
Best regards
---
Renesas Electronics
Ph.D. Kuninori Morimoto
On 09-01-2024 07:47, Kuninori Morimoto wrote:
             /-----> codec1 endpoint
CPU endpoint
             \-----> codec2 endpoint
This sounds like a "Single CPU - Multi Codec" connection, and if my understanding is correct, current ASoC does not support it so far.
Yes, this is a typical TDM use case. The __soc_pcm_hw_params() call in soc-pcm.c loops over all CODECs for a given rtd; see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/soun... So is there something else you are referring to which makes the ASoC core not support it?
But would a dummy CPU with a Multi-CPU/Codec connection help you? I'm not 100% sure though... See ${LINUX}/sound/soc/generic/audio-graph-card2-custom-sample.dtsi
DT looks like
[Multi-CPU/Codec]

         +-+            +-+
cpu   <--| |<-@-------->| |-> codec1
dummy <--| |            | |-> codec2
         +-+            +-+
Use Multi-CPU/Codec connection with dummy.
audio-graph-card2 {
	compatible = "audio-graph-card2";
	links = <&mcpu>;

	multi {
		/* [Multi-CPU] */
		ports@0 {
			mcpu:	port@0 { mcpu0_ep: endpoint { remote-endpoint = <&mcodec0_ep>; }; };
				port@1 { mcpu1_ep: endpoint { remote-endpoint = <&cpu_ep>;     }; };
				port@2 { mcpu2_ep: endpoint { remote-endpoint = <&dummy_ep>;   }; };
		};

		/* [Multi-Codec] */
		ports@1 {
			port@0 { mcodec0_ep: endpoint { remote-endpoint = <&mcpu0_ep>;  }; };
			port@1 { mcodec1_ep: endpoint { remote-endpoint = <&codec1_ep>; }; };
			port@2 { mcodec2_ep: endpoint { remote-endpoint = <&codec2_ep>; }; };
		};
	};
};

test_cpu {
	compatible = "test-cpu";
	port {
		dummy_ep: endpoint { remote-endpoint = <&mcpu2_ep>; };
	};
};
I looked at the 1:N (Semi-Multi) example in the references you shared. It seems this is broken into multiple 1:1 links. Is this a correct understanding?
Also, the binding properties of "audio-graph-card2" seem to be different from those of "audio-graph-card". I am looking for a simpler extension of the existing bindings for Tegra audio, without having to re-write the whole bindings. If "remote-endpoint" could take a phandle array, it would be simpler from the DT point of view.
Hi Sameer
             /-----> codec1 endpoint
CPU endpoint
             \-----> codec2 endpoint
This sounds like a "Single CPU - Multi Codec" connection, and if my understanding is correct, current ASoC does not support it so far.
Yes, this is a typical TDM use case. The __soc_pcm_hw_params() call in soc-pcm.c loops over all CODECs for a given rtd. So is there something else you are referring to which makes the ASoC core not support it?
Oops, sorry, I was confused. Asymmetric Multi-CPU/Codec is supported in ASoC / Card2 on the for-6.8 branch.
Also, the binding properties of "audio-graph-card2" seem to be different from those of "audio-graph-card". I am looking for a simpler extension of the existing bindings for Tegra audio, without having to re-write the whole bindings. If "remote-endpoint" could take a phandle array, it would be simpler from the DT point of view.
Yes, "card2" and "card" are similar but different. I'm not a DT man, but I think a remote-endpoint phandle array is not allowed? If my understanding is correct, you need to use multiple endpoints in such a case instead of a phandle array.
CPU
	port {
		cpu_endpoint0: endpoint@0 { remote-endpoint = <&codec1_endpoint>; };
		cpu_endpoint1: endpoint@1 { remote-endpoint = <&codec2_endpoint>; };
	};

Codec1
	port {
		codec1_endpoint: endpoint { remote-endpoint = <&cpu_endpoint0>; };
	};

Codec2
	port {
		codec2_endpoint: endpoint { remote-endpoint = <&cpu_endpoint1>; };
	};
Thank you for your help !!
Best regards
---
Renesas Electronics
Ph.D. Kuninori Morimoto
On 10-01-2024 04:45, Kuninori Morimoto wrote:
             /-----> codec1 endpoint
CPU endpoint
             \-----> codec2 endpoint
This sounds like a "Single CPU - Multi Codec" connection, and if my understanding is correct, current ASoC does not support it so far.
Yes, this is a typical TDM use case. The __soc_pcm_hw_params() call in soc-pcm.c loops over all CODECs for a given rtd. So is there something else you are referring to which makes the ASoC core not support it?
Oops, sorry, I was confused. Asymmetric Multi-CPU/Codec is supported in ASoC / Card2 on the for-6.8 branch.
Also, the binding properties of "audio-graph-card2" seem to be different from those of "audio-graph-card". I am looking for a simpler extension of the existing bindings for Tegra audio, without having to re-write the whole bindings. If "remote-endpoint" could take a phandle array, it would be simpler from the DT point of view.
Yes, "card2" and "card" are similar but different. I'm not a DT man, but I think a remote-endpoint phandle array is not allowed?
Yes, it is not allowed and there is a DTC error. I am exploring whether an extension is possible to allow a phandle array.
If my understanding is correct, you need to use multiple endpoints in such a case instead of a phandle array.
CPU
	port {
		cpu_endpoint0: endpoint@0 { remote-endpoint = <&codec1_endpoint>; };
		cpu_endpoint1: endpoint@1 { remote-endpoint = <&codec2_endpoint>; };
	};

Codec1
	port {
		codec1_endpoint: endpoint { remote-endpoint = <&cpu_endpoint0>; };
	};

Codec2
	port {
		codec2_endpoint: endpoint { remote-endpoint = <&cpu_endpoint1>; };
	};
This is a workaround. Note that CPU endpoint@1 doesn't exist in hardware and a dummy endpoint needs to be created. As I mentioned in previous replies, the number of dummy endpoints that need to be created depends on how many CODECs users want to connect, and it doesn't look scalable.
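[Editorial note: to make the scaling concern concrete, with N CODECs the workaround needs N endpoints under the CPU port even though only one CPU DAI exists. Sketch below; labels are hypothetical.]

```dts
/* Hypothetical sketch: one endpoint per connected codec under a single
 * CPU port. Every endpoint beyond the first is backed by a dummy,
 * and one more must be added for each additional codec.
 */
cpu_port {
	cpu_ep0: endpoint@0 { remote-endpoint = <&codec1_ep>; };
	cpu_ep1: endpoint@1 { remote-endpoint = <&codec2_ep>; };	/* dummy */
	cpu_ep2: endpoint@2 { remote-endpoint = <&codec3_ep>; };	/* dummy */
	/* ... endpoint@N-1 for the Nth codec ... */
};
```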
Hi Sameer
CPU
	port {
		cpu_endpoint0: endpoint@0 { remote-endpoint = <&codec1_endpoint>; };
		cpu_endpoint1: endpoint@1 { remote-endpoint = <&codec2_endpoint>; };
	};

Codec1
	port {
		codec1_endpoint: endpoint { remote-endpoint = <&cpu_endpoint0>; };
	};

Codec2
	port {
		codec2_endpoint: endpoint { remote-endpoint = <&cpu_endpoint1>; };
	};
This is a workaround. Note that CPU endpoint@1 doesn't exist in hardware and a dummy endpoint needs to be created. As I mentioned in previous replies, the number of dummy endpoints that need to be created depends on how many CODECs users want to connect, and it doesn't look scalable.
I'm not a DT man, but it sounds like you are misunderstanding port vs endpoint? A "port" is a physical interface; an "endpoint" is a connection. If 1 CPU physical interface is connected to 2 codec physical interfaces, the above describes it, in my understanding.
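[Editorial note: restating the port-vs-endpoint distinction as a sketch with hypothetical labels: a single port node models the one physical CPU interface, and each connection to a codec is a separate endpoint child of that same port.]

```dts
/* Hypothetical sketch: one physical interface = one port;
 * two connections = two endpoint children of that port.
 */
cpu_port: port {
	/* connection to codec1 */
	cpu_ep0: endpoint@0 { remote-endpoint = <&codec1_ep>; };
	/* connection to codec2 */
	cpu_ep1: endpoint@1 { remote-endpoint = <&codec2_ep>; };
};
```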
Can the Audio-Graph-Card2 N:M connection [1][2][3] help you? The sample is for a 2:3 connection, but it should be OK for 1:2. You need v6.8 or later.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound... [2] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound... [3] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound...
Thank you for your help !!
Best regards
---
Renesas Electronics
Ph.D. Kuninori Morimoto
On 11-01-2024 06:14, Kuninori Morimoto wrote:
CPU
	port {
		cpu_endpoint0: endpoint@0 { remote-endpoint = <&codec1_endpoint>; };
		cpu_endpoint1: endpoint@1 { remote-endpoint = <&codec2_endpoint>; };
	};
You expect this endpoint to be exposed by the driver, right? Or are you suggesting nothing needs to be done in the driver for this endpoint?
...
The sample is for a 2:3 connection, but it should be OK for 1:2.
For a 1:N connection, how many DAI links does the audio-graph-card2 driver create?
Hi Sameer
port {
	cpu_endpoint0: endpoint@0 { remote-endpoint = <&codec1_endpoint>; };
	cpu_endpoint1: endpoint@1 { remote-endpoint = <&codec2_endpoint>; };
};
You expect this endpoint to be exposed by the driver, right? Or are you suggesting nothing needs to be done in the driver for this endpoint?
If you use Card2, and if it is a normal codec (= not HDMI sound), basically all you need is DT settings; no driver patch is needed.
The sample is for a 2:3 connection, but it should be OK for 1:2.
For a 1:N connection, how many DAI links does the audio-graph-card2 driver create?
The DAI link max is based on ASoC. I think what you want to know is the max N for one connection. There is basically no limit on Card2.
Thank you for your help !!
Best regards
---
Renesas Electronics
Ph.D. Kuninori Morimoto
On 11-01-2024 10:26, Kuninori Morimoto wrote:
Hi Sameer
port {
	cpu_endpoint0: endpoint@0 { remote-endpoint = <&codec1_endpoint>; };
	cpu_endpoint1: endpoint@1 { remote-endpoint = <&codec2_endpoint>; };
};
You expect this endpoint to be exposed by the driver, right? Or are you suggesting nothing needs to be done in the driver for this endpoint?
If you use Card2, and if it is a normal codec (= not HDMI sound), basically all you need is DT settings; no driver patch is needed.
Is it possible to have similar behavior with audio-graph-card?
The sample is for a 2:3 connection, but it should be OK for 1:2.
For a 1:N connection, how many DAI links does the audio-graph-card2 driver create?
The DAI link max is based on ASoC. I think what you want to know is the max N for one connection. There is basically no limit on Card2.
No, that is not what I am looking for. Let me try to rephrase. Does the audio-graph-card2 driver create N+1 DAI links or a single DAI link?
Hi Sameer
port {
	cpu_endpoint0: endpoint@0 { remote-endpoint = <&codec1_endpoint>; };
	cpu_endpoint1: endpoint@1 { remote-endpoint = <&codec2_endpoint>; };
};
(snip)
Is it possible to have similar behavior with audio-graph-card?
Unfortunately, the N:M connection is supported only by Card2.
For a 1:N connection, how many DAI links does the audio-graph-card2 driver create?
(snip)
No, that is not what I am looking for. Let me try to rephrase. Does the audio-graph-card2 driver create N+1 DAI links or a single DAI link?
Oh, I see. It can handle many DAI links, see [1]. One note here: some links are commented out, because handling that many DAI links reached the upper size limit.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound...
If you want to try Card2, you can try this sample on your machine, see [2]. This sample is using card2-custom, but if you want, you can easily switch to card2 by:

-	compatible = "audio-graph-card2-custom-sample";
+	compatible = "audio-graph-card2";
[2] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound...
Thank you for your help !!
Best regards
---
Renesas Electronics
Ph.D. Kuninori Morimoto
On 11-01-2024 11:02, Kuninori Morimoto wrote:
> port {
> 	cpu_endpoint0: endpoint@0 { remote-endpoint = <&codec1_endpoint>; };
> 	cpu_endpoint1: endpoint@1 { remote-endpoint = <&codec2_endpoint>; };
> };
(snip)
Is it possible to have similar behavior with audio-graph-card?
Unfortunately, the N:M connection is supported only by Card2.
For a 1:N connection, how many DAI links does the audio-graph-card2 driver create?
(snip)
No, that is not what I am looking for. Let me try to rephrase. Does the audio-graph-card2 driver create N+1 DAI links or a single DAI link?
Oh, I see. It can handle many DAI links, see [1]. One note here: some links are commented out, because handling that many DAI links reached the upper size limit.
What I am asking is: with audio-graph-card2, when you declare a 1:N connection in the DT bindings, how many DAI links do you create in the driver? Does the audio-graph-card2 driver parse the whole 1:N connection and create only one DAI link in the ASoC core, or does it break it into multiple links and create N+1 DAI links in the ASoC core?
In other words,
1:N connection in DT == 1 DAI link in ASoC core? Or 1:N connection in DT == N+1 DAI links in ASoC core?
Hi Sameer
Sorry for my lack of understanding.
What I am asking is: with audio-graph-card2, when you declare a 1:N connection in the DT bindings, how many DAI links do you create in the driver? Does the audio-graph-card2 driver parse the whole 1:N connection and create only one DAI link in the ASoC core, or does it break it into multiple links and create N+1 DAI links in the ASoC core?
In other words,
1:N connection in DT == 1 DAI link in ASoC core? Or 1:N connection in DT == N+1 DAI links in ASoC core?
If you create it as a Multi-CPU/Codec connection, a 1:N connection will be 1 DAI link [1]. I think your case is this. But if you create it as a DPCM connection, a 1:N connection will be N+1 DAI links [2].
[1] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound... [2] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound...
Thank you for your help !!
Best regards
---
Renesas Electronics
Ph.D. Kuninori Morimoto
On 12-01-2024 04:29, Kuninori Morimoto wrote:
What I am asking is: with audio-graph-card2, when you declare a 1:N connection in the DT bindings, how many DAI links do you create in the driver? Does the audio-graph-card2 driver parse the whole 1:N connection and create only one DAI link in the ASoC core, or does it break it into multiple links and create N+1 DAI links in the ASoC core?
In other words,
1:N connection in DT == 1 DAI link in ASoC core? Or 1:N connection in DT == N+1 DAI links in ASoC core?
If you create it as a Multi-CPU/Codec connection, a 1:N connection will be 1 DAI link [1]. I think your case is this. But if you create it as a DPCM connection, a 1:N connection will be N+1 DAI links [2].
[1] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound... [2] https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git/tree/sound...
Thanks Morimoto-san for the references. I need a lot more understanding of "card2" before commenting anything further. Right now I am looking to continue using the "card" driver and to have an easy DT extension, if possible, without disturbing existing Tegra users. I hope it would be fine to push changes to "card" without affecting existing users.
Hi Sameer
Thanks Morimoto-san for the references. I need a lot more understanding of "card2" before commenting anything further. Right now I am looking to continue using the "card" driver and to have an easy DT extension, if possible, without disturbing existing Tegra users. I hope it would be fine to push changes to "card" without affecting existing users.
"card" and "card2" are indeed different, but similar, I think. I hope you can use "card2", but if you want to use "card", you can use the "custom card" feature, which has zero effect on existing "card" users. "tegra_audio_graph_card.c" is already using this feature; see audio_graph_parse_of().
Thank you for your help !!
Best regards
---
Renesas Electronics
Ph.D. Kuninori Morimoto
participants (4)
- Kuninori Morimoto
- Mark Brown
- Sameer Pujar
- Thierry Reding