Re: [alsa-devel] [PATCH 0/2 v5] dmaengine: rcar-audmapp: independent from SH_DMAE_BASE
Hi Laurent, Vinod, and Arnd
# I added Linux ALSA ML and Mark
This time I added the SoC/platform side setting patches too (= 3) - 6)). The SoC/platform side settings need many entries for this rcar-audmapp, because it has many combinations. But I believe this is very normal DMAEngine style, nothing special.
Vinod commented on 22/12/2014 (Message-ID: 20141222151447.GL16827@intel.com)
"I think this makes sense. Going thru the driver, it was clear that we were not really gaining anything for using dmaengine API here. So agree that lets use dmanegine for 1st dmac thru dmaengine library and then configure this in your sound driver.."
My understanding is that a solution specific to the sound driver was preferred, instead of a generic DMA engine driver. Have I missed something ?
Grr... my understanding was that "the 1st DMAC will use the dmaengine library (= the sound framework has sound-dma-xxx functions) and the 2nd DMAC will use the normal dmaengine API".
But do I need to fix up the sound driver like this?
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = part of sound driver
Sorry for discussing this again here, but I want flexible switching between the 1st/2nd DMAC (because of the 1st/2nd DMAC path combinations). So using the same DMAEngine interface from the sound driver is easier to implement/understand:
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = normal DMAEngine API
The first DMA engine (the one handling transfers from/to memory) is a general-purpose DMA engine. It should be handled by a driver that implements the DMA engine API, no doubt about that.
The second "DMA engine" is dedicated to the sound IP cores and is far from being general-purpose, given that it only supports peripheral-to-peripheral transfers, without even interrupts to report transfer completion. I'm not even sure we can call it a DMA engine as there's no Direct Memory Access involved here :-) The hardware looks more like a crossbar switch with programmable routing of audio channels. That's why Vinod and I were wondering whether it really makes sense to implement it using the DMA engine API, given the resulting complexity of the DT bindings (the sound DT nodes look a bit scary), or if it could be simpler to implement it as part of the Renesas sound drivers.
If you are concerned about the naming (= DMA), the hardware is called "Audio *DMAC* peri peri". I also note that dma_transfer_direction has DMA_DEV_TO_DEV (this driver is not using it yet, though...); isn't that exactly for peripheral-to-peripheral? And from an API point of view, the 2nd DMAC doesn't need any new DMAEngine API. From a DRY (= Don't Repeat Yourself) point of view, I don't want to re-create a "similar but different" implementation just because of a naming issue.
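For example, something like the following is all the sound driver would need (just a rough sketch, not the real rsnd code; the "ssiu" channel name and the FIFO address parameters are only placeholders):

#include <linux/dmaengine.h>

/*
 * Rough sketch only: drive the 2nd DMAC with the existing dmaengine
 * client API, exactly like the 1st DMAC.  The "ssiu" channel name and
 * the two FIFO addresses are placeholders, not the real bindings.
 */
static int sound_pp_route(struct device *dev,
                          dma_addr_t src_fifo, dma_addr_t dst_fifo)
{
        struct dma_slave_config cfg = {
                .direction = DMA_DEV_TO_DEV,    /* peripheral-to-peripheral */
                .src_addr  = src_fifo,          /* e.g. SRC output FIFO */
                .dst_addr  = dst_fifo,          /* e.g. SSI input FIFO */
        };
        struct dma_chan *chan;

        chan = dma_request_slave_channel(dev, "ssiu");  /* placeholder name */
        if (!chan)
                return -EAGAIN;

        /* same configuration call a client already uses for the 1st DMAC */
        return dmaengine_slave_config(chan, &cfg);
}

(In real code the channel would of course be kept and released later with dma_release_channel().)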
From the DT bindings complexity point of view, which side is complex? The DMAC driver side? The DT node side? Indeed the sound driver needs many nodes, but it is a regular arrangement, not complex, and it needs many nodes for the 1st DMAC too. I don't understand why the 1st is OK but the 2nd is not. From the DMAC driver side complexity point of view, doesn't the 1st DMAC have the same complexity (= it accepts many nodes from many drivers)?
If I need to move the 2nd DMAC from DMAEngine to the sound driver side, please explain it to Mark Brown (= the ALSA SoC maintainer).
Best regards --- Kuninori Morimoto
Hi Mark
I need your comment about this
Grr... my understanding was that "the 1st DMAC will use the dmaengine library (= the sound framework has sound-dma-xxx functions) and the 2nd DMAC will use the normal dmaengine API".
But do I need to fix up the sound driver like this?
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = part of sound driver
Sorry for discussing this again here, but I want flexible switching between the 1st/2nd DMAC (because of the 1st/2nd DMAC path combinations). So using the same DMAEngine interface from the sound driver is easier to implement/understand:
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = normal DMAEngine API
The first DMA engine (the one handling transfers from/to memory) is a general-purpose DMA engine. It should be handled by a driver that implements the DMA engine API, no doubt about that.
The second "DMA engine" is dedicated to the sound IP cores and is far from being general-purpose, given that it only supports peripheral-to-peripheral transfers, without even interrupts to report transfer completion. I'm not even sure we can call it a DMA engine as there's no Direct Memory Access involved here :-) The hardware looks more like a crossbar switch with programmable routing of audio channels. That's why Vinod and I were wondering whether it really makes sense to implement it using the DMA engine API, given the resulting complexity of the DT bindings (the sound DT nodes look a bit scary), or if it could be simpler to implement it as part of the Renesas sound drivers.
If you are concerned about the naming (= DMA), the hardware is called "Audio *DMAC* peri peri". I also note that dma_transfer_direction has DMA_DEV_TO_DEV (this driver is not using it yet, though...); isn't that exactly for peripheral-to-peripheral? And from an API point of view, the 2nd DMAC doesn't need any new DMAEngine API. From a DRY (= Don't Repeat Yourself) point of view, I don't want to re-create a "similar but different" implementation just because of a naming issue.
From the DT bindings complexity point of view, which side is complex? The DMAC driver side? The DT node side? Indeed the sound driver needs many nodes, but it is a regular arrangement, not complex, and it needs many nodes for the 1st DMAC too. I don't understand why the 1st is OK but the 2nd is not. From the DMAC driver side complexity point of view, doesn't the 1st DMAC have the same complexity (= it accepts many nodes from many drivers)?
If I need to move the 2nd DMAC from DMAEngine to the sound driver side, please explain it to Mark Brown (= the ALSA SoC maintainer).
http://thread.gmane.org/gmane.linux.ports.sh.devel/41063/focus=42054 http://thread.gmane.org/gmane.linux.ports.sh.devel/41063/focus=42998
Sorry for the sudden request, but can you please check the above mail thread? This topic is about the Renesas sound DMAC driver. The Renesas sound driver is using 2 DMAEngines now (= 1st/2nd), and this mail thread was started about the replacement of the 2nd DMAC. (This replacement is needed because of a bug fix in the 2nd DMAC driver.)
In this mail thread (I was misunderstanding the agreement, though...), Vinod/Laurent think that the 2nd DMAC should be implemented on the sound driver side. (I think that having both the 1st/2nd DMAC on DMAEngine is useful.) But if we need to move the 2nd DMAC from DMAEngine to the sound driver, I need to get your agreement. What is your opinion? If you agree with it, I will send the patches to the ALSA/DMAEngine MLs.
Best regards --- Kuninori Morimoto
Hi Arnd
I need your comment on this too, because you are an ARM platform maintainer and my DTS patch for this was rejected before. So it needs agreement from all of:
- the DMAEngine maintainer
- the ALSA SoC maintainer
- the ARM platform maintainer
Grr... my understanding was that "the 1st DMAC will use the dmaengine library (= the sound framework has sound-dma-xxx functions) and the 2nd DMAC will use the normal dmaengine API".
But do I need to fix up the sound driver like this?
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = part of sound driver
Sorry for discussing this again here, but I want flexible switching between the 1st/2nd DMAC (because of the 1st/2nd DMAC path combinations). So using the same DMAEngine interface from the sound driver is easier to implement/understand:
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = normal DMAEngine API
The first DMA engine (the one handling transfers from/to memory) is a general-purpose DMA engine. It should be handled by a driver that implements the DMA engine API, no doubt about that.
The second "DMA engine" is dedicated to the sound IP cores and is far from being general-purpose, given that it only supports peripheral-to-peripheral transfers, without even interrupts to report transfer completion. I'm not even sure we can call it a DMA engine as there's no Direct Memory Access involved here :-) The hardware looks more like a crossbar switch with programmable routing of audio channels. That's why Vinod and I were wondering whether it really makes sense to implement it using the DMA engine API, given the resulting complexity of the DT bindings (the sound DT nodes look a bit scary), or if it could be simpler to implement it as part of the Renesas sound drivers.
If you are concerned about the naming (= DMA), the hardware is called "Audio *DMAC* peri peri". I also note that dma_transfer_direction has DMA_DEV_TO_DEV (this driver is not using it yet, though...); isn't that exactly for peripheral-to-peripheral? And from an API point of view, the 2nd DMAC doesn't need any new DMAEngine API. From a DRY (= Don't Repeat Yourself) point of view, I don't want to re-create a "similar but different" implementation just because of a naming issue.
From the DT bindings complexity point of view, which side is complex? The DMAC driver side? The DT node side? Indeed the sound driver needs many nodes, but it is a regular arrangement, not complex, and it needs many nodes for the 1st DMAC too. I don't understand why the 1st is OK but the 2nd is not. From the DMAC driver side complexity point of view, doesn't the 1st DMAC have the same complexity (= it accepts many nodes from many drivers)?
If I need to move the 2nd DMAC from DMAEngine to the sound driver side, please explain it to Mark Brown (= the ALSA SoC maintainer).
http://thread.gmane.org/gmane.linux.ports.sh.devel/41063/focus=42054 http://thread.gmane.org/gmane.linux.ports.sh.devel/41063/focus=42998
Sorry for the sudden request, but can you please check the above mail thread? This topic is about the Renesas sound DMAC driver. The Renesas sound driver is using 2 DMAEngines now (= 1st/2nd), and this mail thread was started about the replacement of the 2nd DMAC. (This replacement is needed because of a bug fix in the 2nd DMAC driver.)
In this mail thread (I was misunderstanding the agreement, though...), Vinod/Laurent think that the 2nd DMAC should be implemented on the sound driver side. (I think that having both the 1st/2nd DMAC on DMAEngine is useful.) But if we need to move the 2nd DMAC from DMAEngine to the sound driver, I need to get your agreement. What is your opinion? If you agree with it, I will send the patches to the ALSA/DMAEngine MLs.
Best regards
Kuninori Morimoto
Hi Morimoto-san,
On Friday 23 January 2015 00:35:32 Kuninori Morimoto wrote:
Hi Laurent, Vinod, and Arnd
# I added Linux ALSA ML and Mark
This time I added the SoC/platform side setting patches too (= 3) - 6)). The SoC/platform side settings need many entries for this rcar-audmapp, because it has many combinations. But I believe this is very normal DMAEngine style, nothing special.
Vinod commented on 22/12/2014 (Message-ID: 20141222151447.GL16827@intel.com)
"I think this makes sense. Going thru the driver, it was clear that we were not really gaining anything for using dmaengine API here. So agree that lets use dmanegine for 1st dmac thru dmaengine library and then configure this in your sound driver.."
My understanding is that a solution specific to the sound driver was preferred, instead of a generic DMA engine driver. Have I missed something ?
Grr... my understanding was that "the 1st DMAC will use the dmaengine library (= the sound framework has sound-dma-xxx functions) and the 2nd DMAC will use the normal dmaengine API".
But do I need to fix up the sound driver like this?
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = part of sound driver
Sorry for discussing this again here, but I want flexible switching between the 1st/2nd DMAC (because of the 1st/2nd DMAC path combinations). So using the same DMAEngine interface from the sound driver is easier to implement/understand:
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = normal DMAEngine API
The first DMA engine (the one handling transfers from/to memory) is a general-purpose DMA engine. It should be handled by a driver that implements the DMA engine API, no doubt about that.
The second "DMA engine" is dedicated to the sound IP cores and is far from being general-purpose, given that it only supports peripheral-to-peripheral transfers, without even interrupts to report transfer completion. I'm not even sure we can call it a DMA engine as there's no Direct Memory Access involved here :-) The hardware looks more like a crossbar switch with programmable routing of audio channels. That's why Vinod and I were wondering whether it really makes sense to implement it using the DMA engine API, given the resulting complexity of the DT bindings (the sound DT nodes look a bit scary), or if it could be simpler to implement it as part of the Renesas sound drivers.
If you are concerned about the naming (= DMA), the hardware is called "Audio *DMAC* peri peri". I also note that dma_transfer_direction has DMA_DEV_TO_DEV (this driver is not using it yet, though...); isn't that exactly for peripheral-to-peripheral? And from an API point of view, the 2nd DMAC doesn't need any new DMAEngine API. From a DRY (= Don't Repeat Yourself) point of view, I don't want to re-create a "similar but different" implementation just because of a naming issue.
From the DT bindings complexity point of view, which side is complex? The DMAC driver side? The DT node side? Indeed the sound driver needs many nodes, but it is a regular arrangement, not complex, and it needs many nodes for the 1st DMAC too. I don't understand why the 1st is OK but the 2nd is not. From the DMAC driver side complexity point of view, doesn't the 1st DMAC have the same complexity (= it accepts many nodes from many drivers)?
If I need to move the 2nd DMAC from DMAEngine to the sound driver side, please explain it to Mark Brown (= the ALSA SoC maintainer).
I'm not saying you need to, I just wanted to raise the issue. From what I understood Vinod was also having doubts on using the DMA engine API for this device, given that it doesn't really match what the DMA engine API has been designed for. If everybody else is fine with your patches, and if the sound DT nodes are not considered overly complex with the DMA engine bindings, then I have no objection.
Hi Laurent
If you are concerned about the naming (= DMA), the hardware is called "Audio *DMAC* peri peri". I also note that dma_transfer_direction has DMA_DEV_TO_DEV (this driver is not using it yet, though...); isn't that exactly for peripheral-to-peripheral? And from an API point of view, the 2nd DMAC doesn't need any new DMAEngine API. From a DRY (= Don't Repeat Yourself) point of view, I don't want to re-create a "similar but different" implementation just because of a naming issue.
From the DT bindings complexity point of view, which side is complex? The DMAC driver side? The DT node side? Indeed the sound driver needs many nodes, but it is a regular arrangement, not complex, and it needs many nodes for the 1st DMAC too. I don't understand why the 1st is OK but the 2nd is not. From the DMAC driver side complexity point of view, doesn't the 1st DMAC have the same complexity (= it accepts many nodes from many drivers)?
If I need to move the 2nd DMAC from DMAEngine to the sound driver side, please explain it to Mark Brown (= the ALSA SoC maintainer).
I'm not saying you need to, I just wanted to raise the issue. From what I understood Vinod was also having doubts on using the DMA engine API for this device, given that it doesn't really match what the DMA engine API has been designed for. If everybody else is fine with your patches, and if the sound DT nodes are not considered overly complex with the DMA engine bindings, then I have no objection.
Thank you for your feedback, and I'm so sorry for my previous rude mail.
I think the 2nd DMAC shouldn't be a complex issue, because it is a very simple device. But this is my (sound driver side) opinion. Of course I can accept the DMAEngine side's opinion/concern. I don't know what the best solution is.
Now I have asked Mark (= the ALSA SoC maintainer) about it. I can follow the ALSA SoC maintainer + the DMAEngine maintainer.
Best regards --- Kuninori Morimoto
Hi Morimoto-san,
On Monday 26 January 2015 02:57:32 Kuninori Morimoto wrote:
Hi Laurent
If you are concerned about the naming (= DMA), the hardware is called "Audio *DMAC* peri peri". I also note that dma_transfer_direction has DMA_DEV_TO_DEV (this driver is not using it yet, though...); isn't that exactly for peripheral-to-peripheral? And from an API point of view, the 2nd DMAC doesn't need any new DMAEngine API. From a DRY (= Don't Repeat Yourself) point of view, I don't want to re-create a "similar but different" implementation just because of a naming issue.
From the DT bindings complexity point of view, which side is complex? The DMAC driver side? The DT node side? Indeed the sound driver needs many nodes, but it is a regular arrangement, not complex, and it needs many nodes for the 1st DMAC too. I don't understand why the 1st is OK but the 2nd is not. From the DMAC driver side complexity point of view, doesn't the 1st DMAC have the same complexity (= it accepts many nodes from many drivers)?
If I need to move the 2nd DMAC from DMAEngine to the sound driver side, please explain it to Mark Brown (= the ALSA SoC maintainer).
I'm not saying you need to, I just wanted to raise the issue. From what I understood Vinod was also having doubts on using the DMA engine API for this device, given that it doesn't really match what the DMA engine API has been designed for. If everybody else is fine with your patches, and if the sound DT nodes are not considered overly complex with the DMA engine bindings, then I have no objection.
Thank you for your feedback, and I'm so sorry for my previous rude mail.
No worries, I haven't found it rude. I know it could seem that I've been trying to block this patch series without any reason, so a straight-to-the-point reply was expected :-)
I think the 2nd DMAC shouldn't be a complex issue, because it is a very simple device. But this is my (sound driver side) opinion. Of course I can accept the DMAEngine side's opinion/concern. I don't know what the best solution is.
Now I have asked Mark (= the ALSA SoC maintainer) about it. I can follow the ALSA SoC maintainer + the DMAEngine maintainer.
I'd like to hear Mark's opinion, yes. And if Vinod is fine with your proposal, that's totally fine with me as well.
Hi Laurent, everyone,
On Mon, Jan 26, 2015 at 12:01 PM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Hi Morimoto-san,
On Monday 26 January 2015 02:57:32 Kuninori Morimoto wrote:
Hi Laurent
If you are concerned about the naming (= DMA), the hardware is called "Audio *DMAC* peri peri". I also note that dma_transfer_direction has DMA_DEV_TO_DEV (this driver is not using it yet, though...); isn't that exactly for peripheral-to-peripheral? And from an API point of view, the 2nd DMAC doesn't need any new DMAEngine API. From a DRY (= Don't Repeat Yourself) point of view, I don't want to re-create a "similar but different" implementation just because of a naming issue.
From the DT bindings complexity point of view, which side is complex? The DMAC driver side? The DT node side? Indeed the sound driver needs many nodes, but it is a regular arrangement, not complex, and it needs many nodes for the 1st DMAC too. I don't understand why the 1st is OK but the 2nd is not. From the DMAC driver side complexity point of view, doesn't the 1st DMAC have the same complexity (= it accepts many nodes from many drivers)?
If I need to move the 2nd DMAC from DMAEngine to the sound driver side, please explain it to Mark Brown (= the ALSA SoC maintainer).
I'm not saying you need to, I just wanted to raise the issue. From what I understood Vinod was also having doubts on using the DMA engine API for this device, given that it doesn't really match what the DMA engine API has been designed for. If everybody else is fine with your patches, and if the sound DT nodes are not considered overly complex with the DMA engine bindings, then I have no objection.
Thank you for your feedback, and I'm so sorry for my previous rude mail.
No worries, I haven't found it rude. I know it could seem that I've been trying to block this patch series without any reason, so a straight-to-the-point reply was expected :-)
I think the 2nd DMAC shouldn't be a complex issue, because it is a very simple device. But this is my (sound driver side) opinion. Of course I can accept the DMAEngine side's opinion/concern. I don't know what the best solution is.
Now I have asked Mark (= the ALSA SoC maintainer) about it. I can follow the ALSA SoC maintainer + the DMAEngine maintainer.
I'd like to hear Mark's opinion, yes. And if Vinod is fine with your proposal, that's totally fine with me as well.
From my side anything is fine really, and I agree that the DT integration patch looked rather "special". =)
At the same time I do think it makes sense to model the DT after the hardware. So if there is a separate DMA controller device then I can't see what is wrong with representing that in DT as a separate device. That aside, the current implementation may not have been entirely clean so perhaps we can begin by fixing that and see where that leads us.
So I wonder as an incremental approach, how about simply reworking the DT interface (old code has 200+ channels mapped out individually) to something more manageable (maybe 20+ groups instead)? If that still seems completely wrong DT-wise then we can look into how to rework the architecture.
Cheers,
/ magnus
Hi Vinod, Laurent, Magnus
Thank you for your feedback
From my side anything is fine really, and I agree that the DT integration patch looked rather "special". =)
At the same time I do think it makes sense to model the DT after the hardware. So if there is a separate DMA controller device then I can't see what is wrong with representing that in DT as a separate device. That aside, the current implementation may not have been entirely clean so perhaps we can begin by fixing that and see where that leads us.
So I wonder as an incremental approach, how about simply reworking the DT interface (old code has 200+ channels mapped out individually) to something more manageable (maybe 20+ groups instead)? If that still seems completely wrong DT-wise then we can look into how to rework the architecture.
Yes indeed, it needs too many DT nodes in the current implementation (220 nodes in total), and I can agree that this is one of Vinod's/Laurent's concerns. It could be reduced to 22 nodes if I fixed up the current implementation to calculate the ID on the DMAEngine driver side. It is not a full patch set, but I am sending this fixup patch as [RFC].
Best regards --- Kuninori Morimoto
Current rcar-audmapp assumes that the CHCR value is specified via platform data or DTS bindings, but it is possible to calculate the CHCR settings from the src/dst addresses. The DTS binding nodes can be reduced by this patch.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
---
 drivers/dma/sh/rcar-audmapp.c | 136 +++++++++++++++++++++++++++++++++++++----
 1 file changed, 124 insertions(+), 12 deletions(-)

diff --git a/drivers/dma/sh/rcar-audmapp.c b/drivers/dma/sh/rcar-audmapp.c
index d95bbdd..9690291 100644
--- a/drivers/dma/sh/rcar-audmapp.c
+++ b/drivers/dma/sh/rcar-audmapp.c
@@ -48,7 +48,6 @@ struct audmapp_chan {
         struct shdma_chan shdma_chan;
         void __iomem *base;
         dma_addr_t slave_addr;
-        u32 chcr;
 };
 
 struct audmapp_device {
@@ -100,6 +99,124 @@ static void audmapp_halt(struct shdma_chan *schan)
         }
 }
 
+struct id_addr_table {
+        u8 id;
+        u16 addr;
+};
+
+static u32 audmapp_addr_to_id(struct device *dev, u32 addr)
+{
+        struct id_addr_table ssi_id_table[] = {
+                {0x00, 0x0000}, /* SSI00 */
+                {0x01, 0x0400}, /* SSI01 */
+                {0x02, 0x0800}, /* SSI02 */
+                {0x03, 0x0C00}, /* SSI03 */
+                {0x04, 0x1000}, /* SSI10 */
+                {0x05, 0x1400}, /* SSI11 */
+                {0x06, 0x1800}, /* SSI12 */
+                {0x07, 0x1C00}, /* SSI13 */
+                {0x08, 0x2000}, /* SSI20 */
+                {0x09, 0x2400}, /* SSI21 */
+                {0x0a, 0x2800}, /* SSI22 */
+                {0x0b, 0x2C00}, /* SSI23 */
+                {0x0c, 0x3000}, /* SSI3 */
+                {0x0d, 0x4000}, /* SSI4 */
+                {0x0e, 0x5000}, /* SSI5 */
+                {0x0f, 0x6000}, /* SSI6 */
+                {0x10, 0x7000}, /* SSI7 */
+                {0x11, 0x8000}, /* SSI8 */
+                {0x12, 0x9000}, /* SSI90 */
+                {0x13, 0x9400}, /* SSI91 */
+                {0x14, 0x9800}, /* SSI92 */
+                {0x15, 0x9C00}, /* SSI93 */
+        };
+        struct id_addr_table dtcp_id_tabel[] = {
+                {0x16, 0x0000}, /* DTCPPP0 */
+                {0x17, 0x0400}, /* DTCPPP1 */
+                {0x18, 0x4000}, /* DTCPCP0 */
+                {0x19, 0x4400}, /* DTCPCP1 */
+        };
+        struct id_addr_table mlm_id_table[] = {
+                {0x25, 0x0000}, /* MLM0 */
+                {0x26, 0x0400}, /* MLM1 */
+                {0x27, 0x0800}, /* MLM2 */
+                {0x28, 0x0C00}, /* MLM3 */
+                {0x29, 0x1000}, /* MLM4 */
+                {0x2a, 0x1400}, /* MLM5 */
+                {0x2b, 0x1800}, /* MLM6 */
+                {0x2c, 0x1C00}, /* MLM7 */
+        };
+        struct id_addr_table scu_id_table[] = {
+                {0x2d, 0x0000}, /* SCU_SRCI0 */
+                {0x2e, 0x0400}, /* SCU_SRCI1 */
+                {0x2f, 0x0800}, /* SCU_SRCI2 */
+                {0x30, 0x0C00}, /* SCU_SRCI3 */
+                {0x31, 0x1000}, /* SCU_SRCI4 */
+                {0x32, 0x1400}, /* SCU_SRCI5 */
+                {0x33, 0x1800}, /* SCU_SRCI6 */
+                {0x34, 0x1C00}, /* SCU_SRCI7 */
+                {0x35, 0x2000}, /* SCU_SRCI8 */
+                {0x36, 0x2400}, /* SCU_SRCI9 */
+
+                {0x2d, 0x4000}, /* SCU_SRCO0 */
+                {0x2e, 0x4400}, /* SCU_SRCO1 */
+                {0x2f, 0x4800}, /* SCU_SRCO2 */
+                {0x30, 0x4C00}, /* SCU_SRCO3 */
+                {0x31, 0x5000}, /* SCU_SRCO4 */
+                {0x32, 0x5400}, /* SCU_SRCO5 */
+                {0x33, 0x5800}, /* SCU_SRCO6 */
+                {0x34, 0x5C00}, /* SCU_SRCO7 */
+                {0x35, 0x6000}, /* SCU_SRCO8 */
+                {0x36, 0x6400}, /* SCU_SRCO9 */
+                {0x37, 0x8000}, /* SCU_CMD0 */
+                {0x38, 0x8400}, /* SCU_CMD1 */
+        };
+        struct id_addr_table *table = NULL;
+        int size;
+        int i;
+
+        switch (addr >> 16) {
+        case 0xEC40: /* SSI */
+                table = ssi_id_table;
+                size = ARRAY_SIZE(ssi_id_table);
+                break;
+        case 0xEC42: /* DTCP */
+                table = dtcp_id_tabel;
+                size = ARRAY_SIZE(dtcp_id_tabel);
+                break;
+        case 0xEC32: /* MLM */
+                table = mlm_id_table;
+                size = ARRAY_SIZE(mlm_id_table);
+                break;
+        case 0xEC30: /* SRC / CMD */
+                table = scu_id_table;
+                size = ARRAY_SIZE(scu_id_table);
+                break;
+        default:
+                table = NULL;
+                size = 0;
+        }
+
+        for (i = 0; i < size; i++) {
+                if (table[i].addr == (addr & 0xFFFF))
+                        return table[i].id;
+        }
+
+        dev_err(dev, "unknown addr (%x)\n", addr);
+
+        return 0xFF;
+}
+
+static u32 audmapp_get_chcr(struct device *dev, struct audmapp_desc *desc)
+{
+        u32 src_id, dst_id;
+
+        src_id = audmapp_addr_to_id(dev, desc->src);
+        dst_id = audmapp_addr_to_id(dev, desc->dst);
+
+        return src_id << 24 | dst_id << 16;
+}
+
 static void audmapp_start_xfer(struct shdma_chan *schan,
                                struct shdma_desc *sdesc)
 {
@@ -107,7 +224,7 @@ static void audmapp_start_xfer(struct shdma_chan *schan,
         struct audmapp_device *audev = to_dev(auchan);
         struct audmapp_desc *desc = to_desc(sdesc);
         struct device *dev = audev->dev;
-        u32 chcr = auchan->chcr | PDMACHCR_DE;
+        u32 chcr = audmapp_get_chcr(dev, desc) | PDMACHCR_DE;
 
         dev_dbg(dev, "src/dst/chcr = %pad/%pad/%08x\n",
                 &desc->src, &desc->dst, chcr);
@@ -118,19 +235,17 @@ static void audmapp_start_xfer(struct shdma_chan *schan,
 }
 
 static int audmapp_get_config(struct audmapp_chan *auchan, int slave_id,
-                              u32 *chcr, dma_addr_t *dst)
+                              dma_addr_t *dst)
 {
         struct audmapp_device *audev = to_dev(auchan);
         struct audmapp_pdata *pdata = audev->pdata;
         struct audmapp_slave_config *cfg;
         int i;
 
-        *chcr = 0;
         *dst = 0;
 
         if (!pdata) { /* DT */
-                *chcr = ((u32)slave_id) << 16;
-                auchan->shdma_chan.slave_id = (slave_id) >> 8;
+                auchan->shdma_chan.slave_id = slave_id;
                 return 0;
         }
 
@@ -141,7 +256,6 @@ static int audmapp_get_config(struct audmapp_chan *auchan, int slave_id,
 
         for (i = 0, cfg = pdata->slave; i < pdata->slave_num; i++, cfg++)
                 if (cfg->slave_id == slave_id) {
-                        *chcr = cfg->chcr;
                         *dst = cfg->dst;
                         return 0;
                 }
@@ -153,18 +267,16 @@ static int audmapp_set_slave(struct shdma_chan *schan, int slave_id,
                              dma_addr_t slave_addr, bool try)
 {
         struct audmapp_chan *auchan = to_chan(schan);
-        u32 chcr;
         dma_addr_t dst;
         int ret;
 
-        ret = audmapp_get_config(auchan, slave_id, &chcr, &dst);
+        ret = audmapp_get_config(auchan, slave_id, &dst);
         if (ret < 0)
                 return ret;
 
         if (try)
                 return 0;
 
-        auchan->chcr = chcr;
         auchan->slave_addr = slave_addr ? : dst;
 
         return 0;
@@ -267,7 +379,7 @@ static struct dma_chan *audmapp_of_xlate(struct of_phandle_args *dma_spec,
 {
         dma_cap_mask_t mask;
         struct dma_chan *chan;
-        u32 chcr = dma_spec->args[0];
+        u32 id = dma_spec->args[0];
 
         if (dma_spec->args_count != 1)
                 return NULL;
@@ -277,7 +389,7 @@ static struct dma_chan *audmapp_of_xlate(struct of_phandle_args *dma_spec,
 
         chan = dma_request_channel(mask, shdma_chan_filter, NULL);
         if (chan)
-                to_shdma_chan(chan)->hw_req = chcr;
+                to_shdma_chan(chan)->hw_req = id;
 
         return chan;
 }
On Thursday 29 January 2015 01:24:06 Kuninori Morimoto wrote:
Current rcar-audmapp assumes that the CHCR value is specified via platform data or DTS bindings, but it is possible to calculate the CHCR settings from the src/dst addresses. The DTS binding nodes can be reduced by this patch.
Signed-off-by: Kuninori Morimoto kuninori.morimoto.gx@renesas.com
The intention definitely looks good, but I'm worried that the table you add here might be too SoC specific.
+struct id_addr_table {
- u8 id;
- u16 addr;
+};
+static u32 audmapp_addr_to_id(struct device *dev, u32 addr) +{
- struct id_addr_table ssi_id_table[] = {
{0x00, 0x0000}, /* SSI00 */
{0x01, 0x0400}, /* SSI01 */
{0x02, 0x0800}, /* SSI02 */
{0x03, 0x0C00}, /* SSI03 */
{0x04, 0x1000}, /* SSI10 */
{0x05, 0x1400}, /* SSI11 */
{0x06, 0x1800}, /* SSI12 */
{0x07, 0x1C00}, /* SSI13 */
{0x08, 0x2000}, /* SSI20 */
Isn't the address part here the physical address of the FIFO, which can be different for each implementation that contains an audmapp device?
Arnd
On Thu, Jan 29, 2015 at 10:15 AM, Arnd Bergmann arnd@arndb.de wrote:
On Thursday 29 January 2015 01:24:06 Kuninori Morimoto wrote:
Current rcar-audmapp assumes that the CHCR value is specified via platform data or DTS bindings, but it is possible to calculate the CHCR settings from the src/dst addresses. The DTS binding nodes can be reduced by this patch.
Signed-off-by: Kuninori Morimoto kuninori.morimoto.gx@renesas.com
The intention definitely looks good, but I'm worried that the table you add here might be too SoC specific.
+struct id_addr_table {
u8 id;
u16 addr;
+};
+static u32 audmapp_addr_to_id(struct device *dev, u32 addr) +{
struct id_addr_table ssi_id_table[] = {
static const, so they don't get copied to the stack on every invocation.
{0x00, 0x0000}, /* SSI00 */
{0x01, 0x0400}, /* SSI01 */
{0x02, 0x0800}, /* SSI02 */
{0x03, 0x0C00}, /* SSI03 */
{0x04, 0x1000}, /* SSI10 */
{0x05, 0x1400}, /* SSI11 */
{0x06, 0x1800}, /* SSI12 */
{0x07, 0x1C00}, /* SSI13 */
{0x08, 0x2000}, /* SSI20 */
Isn't the address part here the physical address of the FIFO,
Yes they are.
which can be different for each implementation that contains an audmapp device?
Fortunately they're identical for all (current) members of the R-Car Gen2 family.
Morimoto-san: Are the base addresses configured in DT? I did a quick search for 0xec300000, 0xec320000, 0xec40000, 0xec420000, and 0xec000000, and couldn't find them in DT, only in source code (comments). They must come from somewhere? Note that I didn't follow the sound binding discussions that closely.
Thanks!
Gr{oetje,eeting}s,
Geert
-- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say "programmer" or something like that. -- Linus Torvalds
Hi Geert
+static u32 audmapp_addr_to_id(struct device *dev, u32 addr) +{
struct id_addr_table ssi_id_table[] = {
static const, so they don't get copied to the stack on every invocation.
Indeed, thanks
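For reference, this is roughly how the function from the RFC patch would look with that change (table entries elided, everything else unchanged; only the qualifiers and the lookup pointer type differ):

static u32 audmapp_addr_to_id(struct device *dev, u32 addr)
{
        /* static const: the tables live in .rodata and are no longer
         * rebuilt on the stack at every call */
        static const struct id_addr_table ssi_id_table[] = {
                {0x00, 0x0000}, /* SSI00 */
                {0x01, 0x0400}, /* SSI01 */
                /* ... remaining entries as in the RFC patch ... */
        };
        /* the other tables get the same treatment */
        const struct id_addr_table *table = NULL;
        int size = 0;
        int i;

        switch (addr >> 16) {
        case 0xEC40: /* SSI */
                table = ssi_id_table;
                size = ARRAY_SIZE(ssi_id_table);
                break;
        /* ... other cases unchanged ... */
        }

        for (i = 0; i < size; i++)
                if (table[i].addr == (addr & 0xFFFF))
                        return table[i].id;

        dev_err(dev, "unknown addr (%x)\n", addr);

        return 0xFF;
}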
{0x00, 0x0000}, /* SSI00 */
{0x01, 0x0400}, /* SSI01 */
{0x02, 0x0800}, /* SSI02 */
{0x03, 0x0C00}, /* SSI03 */
{0x04, 0x1000}, /* SSI10 */
{0x05, 0x1400}, /* SSI11 */
{0x06, 0x1800}, /* SSI12 */
{0x07, 0x1C00}, /* SSI13 */
{0x08, 0x2000}, /* SSI20 */
Isn't the address part here the physical address of the FIFO,
Yes they are.
which can be different for each implementation that contains an audmapp device?
Fortunately they're identical for all (current) members of the R-Car Gen2 family.
Morimoto-san: Are the base addresses configured in DT? I did a quick search for 0xec300000, 0xec320000, 0xec40000, 0xec420000, and 0xec000000, and couldn't find them in DT, only in source code (comments). They must come from somewhere? Note that I didn't follow the sound binding discussions that closely.
These base addresses are specified by the sound driver:
- the sound driver specifies the src/dst addresses via the DMAEngine API
- audmapp finds the necessary ID from the src/dst addresses
Best regards --- Kuninori Morimoto
Hi Morimoto-san,
On Thu, Jan 29, 2015 at 10:45 AM, Kuninori Morimoto kuninori.morimoto.gx@renesas.com wrote:
Morimoto-san: Are the base addresses configured in DT? I did a quick search for 0xec300000, 0xec320000, 0xec40000, 0xec420000, and 0xec000000, and couldn't find them in DT, only in source code (comments). They must come from somewhere? Note that I didn't follow the sound binding discussions that closely.
These base addresses are specified by the sound driver.
- the sound driver specifies the src/dst addresses via the DMAEngine API
From the rcar_sound node? That one has 0xec500000 as the base address.
- audmapp finds the necessary ID from the src/dst addresses
Gr{oetje,eeting}s,
Geert
-- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say "programmer" or something like that. -- Linus Torvalds
Hi Geert
Thank you for your feedback
Morimoto-san: Are the base addresses configured in DT? I did a quick search for 0xec300000, 0xec320000, 0xec40000, 0xec420000, and 0xec000000, and couldn't find them in DT, only in source code (comments). They must come from somewhere? Note that I didn't follow the sound binding discussions that closely.
These base addresses are specified by the sound driver.
- the sound driver specifies the src/dst addresses via the DMAEngine API
From the rcar_sound node? That one has 0xec500000 as the base address.
Thank you. I double-checked and remembered. This is a little bit confusing.
The FIFO addresses 1) for the CPU, 2) for the Audio DMAC, and 3) for the Audio DMAC peri peri are different, and the sound driver knows all of these addresses. But, hmm.. no one calls request_region() for them; is this your concern?
Best regards --- Kuninori Morimoto
Hi Geert, again
Morimoto-san: Are the base addresses configured in DT? I did a quick search for 0xec300000, 0xec320000, 0xec40000, 0xec420000, and 0xec000000, and couldn't find them in DT, only in source code (comments). They must come from somewhere? Note that I didn't follow the sound binding discussions that closely.
These base addresses are specified by the sound driver.
- the sound driver specifies the src/dst addresses via the DMAEngine API
From the rcar_sound node? That one has 0xec500000 as the base address.
Thank you. I double-checked and remembered. This is a little bit confusing.
The FIFO addresses 1) for the CPU, 2) for the Audio DMAC, and 3) for the Audio DMAC peri peri are different, and the sound driver knows all of these addresses. But, hmm.. no one calls request_region() for them; is this your concern?
Sorry for my confusing mail. Yes, I need to add these addresses from DT. I will fix it up in v2.
0xec420000 and 0xec320000 are listed in this patch, but they are not supported yet.
Hi Arnd
Current rcar-audmapp assumes that the CHCR value is specified via platform data or DTS bindings, but it is possible to calculate the CHCR settings from the src/dst addresses. The DTS binding nodes can be reduced by this patch.
Signed-off-by: Kuninori Morimoto kuninori.morimoto.gx@renesas.com
The intention definitely looks good, but I'm worried that the table you add here might be too SoC specific.
Thank you
+struct id_addr_table {
- u8 id;
- u16 addr;
+};
+static u32 audmapp_addr_to_id(struct device *dev, u32 addr) +{
- struct id_addr_table ssi_id_table[] = {
{0x00, 0x0000}, /* SSI00 */
{0x01, 0x0400}, /* SSI01 */
{0x02, 0x0800}, /* SSI02 */
{0x03, 0x0C00}, /* SSI03 */
{0x04, 0x1000}, /* SSI10 */
{0x05, 0x1400}, /* SSI11 */
{0x06, 0x1800}, /* SSI12 */
{0x07, 0x1C00}, /* SSI13 */
{0x08, 0x2000}, /* SSI20 */
Isn't the address part here the physical address of the FIFO, which can be different for each implementation that contains an audmapp device?
These are the IN/OUT physical addresses, and they are hard-coded / fixed for audmapp. Other addresses are not supported.
ex)        audmapp
    [SRCx] --------> [SSIx]
      out              in
    address          address
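As a worked example (addresses taken from the tables in the RFC patch above, routing SRC0 into SSI0):

u32 src = 0xec304000;   /* 0xEC30xxxx + 0x4000 -> SCU_SRCO0, id 0x2d */
u32 dst = 0xec400000;   /* 0xEC40xxxx + 0x0000 -> SSI00,     id 0x00 */

/* audmapp_get_chcr() then builds the routing value from the two IDs: */
u32 chcr = (0x2d << 24) | (0x00 << 16); /* 0x2d000000, ORed with PDMACHCR_DE */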
Best regards --- Kuninori Morimoto
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
---
 arch/arm/boot/dts/r8a7790.dtsi | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/arm/boot/dts/r8a7790.dtsi b/arch/arm/boot/dts/r8a7790.dtsi
index ca22009..bf58f0d 100644
--- a/arch/arm/boot/dts/r8a7790.dtsi
+++ b/arch/arm/boot/dts/r8a7790.dtsi
@@ -1460,7 +1460,15 @@
                        <&audma0 0x91>, <&audma1 0xb4>,
                        <&audma0 0x93>, <&audma1 0xb6>,
                        <&audma0 0x95>, <&audma1 0xb8>,
-                       <&audma0 0x97>, <&audma1 0xba>;
+                       <&audma0 0x97>, <&audma1 0xba>,
+
+                       <&audmapp 0>, <&audmapp 1>, <&audmapp 2>, <&audmapp 3>, <&audmapp 4>,
+                       <&audmapp 5>, <&audmapp 6>, <&audmapp 7>, <&audmapp 8>, <&audmapp 9>,
+
+                       <&audmapp 10>, <&audmapp 11>, <&audmapp 12>, <&audmapp 13>, <&audmapp 14>,
+                       <&audmapp 15>, <&audmapp 16>, <&audmapp 17>, <&audmapp 18>, <&audmapp 19>,
+
+                       <&audmapp 20>, <&audmapp 21>;
 
                dma-names = "mem_ssi0", "ssi0_mem", "mem_ssiu0", "ssiu0_mem",
                            "mem_ssi1", "ssi1_mem", "mem_ssiu1", "ssiu1_mem",
@@ -1482,7 +1490,15 @@
                            "mem_src6", "src6_mem",
                            "mem_src7", "src7_mem",
                            "mem_src8", "src8_mem",
-                           "mem_src9", "src9_mem";
+                           "mem_src9", "src9_mem",
+
+                           "src0", "src1", "src2", "src3", "src4",
+                           "src5", "src6", "src7", "src8", "src9",
+
+                           "ssiu0", "ssiu1", "ssiu2", "ssiu3", "ssiu4",
+                           "ssiu5", "ssiu6", "ssiu7", "ssiu8", "ssiu9",
+
+                           "dvc0", "dvc1";
 
                status = "disabled";
 
Hi Morimoto-san,
On Thu, Jan 29, 2015 at 10:23 AM, Kuninori Morimoto kuninori.morimoto.gx@renesas.com wrote:
Hi Vinod, Laurent, Magnus
Thank you for your feedback
From my side anything is fine really, and I agree that the DT integration patch looked rather "special". =)
At the same time I do think it makes sense to model the DT after the hardware. So if there is a separate DMA controller device then I can't see what is wrong with representing that in DT as a separate device. That aside, the current implementation may not have been entirely clean so perhaps we can begin by fixing that and see where that leads us.
So I wonder as an incremental approach, how about simply reworking the DT interface (old code has 200+ channels mapped out individually) to something more manageable (maybe 20+ groups instead)? If that still seems completely wrong DT-wise then we can look into how to rework the architecture.
Yes indeed, it needs too many DT nodes in the current implementation (220 nodes in total), and I can agree that this is one of Vinod's/Laurent's concerns. It could be reduced to 22 nodes if I fixed up the current implementation to calculate the ID on the DMAEngine driver side. It is not a full patch set, but I am sending this fixup patch as [RFC].
Sounds good. Can you please let me know exactly which patch series I should look at?
Thanks,
/ magnus
Hi Magnus
From my side anything is fine really, and I agree that the DT integration patch looked rather "special". =)
At the same time I do think it makes sense to model the DT after the hardware. So if there is a separate DMA controller device then I can't see what is wrong with representing that in DT as a separate device. That aside, the current implementation may not have been entirely clean so perhaps we can begin by fixing that and see where that leads us.
So I wonder as an incremental approach, how about simply reworking the DT interface (old code has 200+ channels mapped out individually) to something more manageable (maybe 20+ groups instead)? If that still seems completely wrong DT-wise then we can look into how to rework the architecture.
Yes indeed, it needs too many DT nodes in the current implementation (220 nodes in total), and I can agree that this is one of Vinod's/Laurent's concerns. It could be reduced to 22 nodes if I fixed up the current implementation to calculate the ID on the DMAEngine driver side. It is not a full patch set, but I am sending this fixup patch as [RFC].
Sounds good. Can you please let me know exactly which patch series I should look at?
Thank you for your help. I have sent 2 patches as [RFC]:
[RFC] dmaengine: rcar-audmapp: calculate chcr via src/dst addr
[RFC] ARM: shmobile: r8a7790: sound enables Audio DMAC peri peri entry on DTSI
Actually, the sound driver side needs fixup patches related to these changes, but the above are focusing on the Audio DMAC peri peri at this point. The main change is the 2/2 patch; it uses 22 nodes with this patch (it needed 220 nodes without these patches).
Best regards --- Kuninori Morimoto
On Mon, Jan 26, 2015 at 04:39:42AM +0200, Laurent Pinchart wrote:
Hi Morimoto-san,
On Friday 23 January 2015 00:35:32 Kuninori Morimoto wrote:
Hi Laurent, Vinod, and Arnd
# I added Linux ALSA ML and Mark
This time I added the SoC/platform side setting patches too (= 3) - 6)). The SoC/platform side settings need many entries for this rcar-audmapp, because it has many combinations. But I believe this is very normal DMAEngine style, nothing special.
Vinod commented on 22/12/2014 (Message-ID: 20141222151447.GL16827@intel.com)
"I think this makes sense. Going thru the driver, it was clear that we were not really gaining anything for using dmaengine API here. So agree that lets use dmanegine for 1st dmac thru dmaengine library and then configure this in your sound driver.."
My understanding is that a solution specific to the sound driver was preferred, instead of a generic DMA engine driver. Have I missed something ?
Grr... my understanding was that "the 1st DMAC will use the dmaengine library (= the sound framework has sound-dma-xxx functions) and the 2nd DMAC will use the normal dmaengine API".
But do I need to fix up the sound driver like this?
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = part of sound driver
Sorry for discussing this again here, but I want flexible switching between the 1st/2nd DMAC (because of the 1st/2nd DMAC path combinations). So using the same DMAEngine interface from the sound driver is easier to implement/understand:
- 1st DMAC = normal DMAEngine API
- 2nd DMAC = normal DMAEngine API
The first DMA engine (the one handling transfers from/to memory) is a general-purpose DMA engine. It should be handled by a driver that implements the DMA engine API, no doubt about that.
The second "DMA engine" is dedicated to the sound IP cores and is far from being general-purpose, given that it only supports peripheral-to-peripheral transfers, without even interrupts to report transfer completion. I'm not even sure we can call it a DMA engine as there's no Direct Memory Access involved here :-) The hardware looks more like a crossbar switch with programmable routing of audio channels. That's why Vinod and I were wondering whether it really makes sense to implement it using the DMA engine API, given the resulting complexity of the DT bindings (the sound DT nodes look a bit scary), or if it could be simpler to implement it as part of the Renesas sound drivers.
If you are concerned about the naming (= DMA), the hardware is called "Audio *DMAC* peri peri". I also note that dma_transfer_direction has DMA_DEV_TO_DEV (this driver is not using it yet, though...); isn't that exactly for peripheral-to-peripheral? And from an API point of view, the 2nd DMAC doesn't need any new DMAEngine API. From a DRY (= Don't Repeat Yourself) point of view, I don't want to re-create a "similar but different" implementation just because of a naming issue.
From the DT bindings complexity point of view, which side is complex? The DMAC driver side? The DT node side? Indeed the sound driver needs many nodes, but it is a regular arrangement, not complex, and it needs many nodes for the 1st DMAC too. I don't understand why the 1st is OK but the 2nd is not. From the DMAC driver side complexity point of view, doesn't the 1st DMAC have the same complexity (= it accepts many nodes from many drivers)?
If I need to move the 2nd DMAC from DMAEngine to the sound driver side, please explain it to Mark Brown (= the ALSA SoC maintainer).
I'm not saying you need to, I just wanted to raise the issue. From what I understood Vinod was also having doubts on using the DMA engine API for this device, given that it doesn't really match what the DMA engine API has been designed for. If everybody else is fine with your patches, and if the sound DT nodes are not considered overly complex with the DMA engine bindings, then I have no objection.
Laurent is right in his observations. When I last reviewed the series (though I haven't looked at this one yet), the advice was to use the dmaengine API with the sound dmaengine library for the 1st DMAC. The second DMAC is specific to this device, so it should be handled internally or as part of the sound driver (or whatever the client is).
HTH
Hi Vinod
I'm not saying you need to, I just wanted to raise the issue. From what I understood Vinod was also having doubts on using the DMA engine API for this device, given that it doesn't really match what the DMA engine API has been designed for. If everybody else is fine with your patches, and if the sound DT nodes are not considered overly complex with the DMA engine bindings, then I have no objection.
Laurent is right in his observations. When I last reviewed the series (though I haven't looked at this one yet), the advice was to use the dmaengine API with the sound dmaengine library for the 1st DMAC. The second DMAC is specific to this device, so it should be handled internally or as part of the sound driver (or whatever the client is).
So does this mean we can't use the DMAEngine style if it is a non-generic DMAC? For example, we have a USB-DMAC (it is used by USB only).
Best regards --- Kuninori Morimoto
Hi Vinod, again
I'm not saying you need to, I just wanted to raise the issue. From what I understood Vinod was also having doubts on using the DMA engine API for this device, given that it doesn't really match what the DMA engine API has been designed for. If everybody else is fine with your patches, and if the sound DT nodes are not considered overly complex with the DMA engine bindings, then I have no objection.
Laurent is right in his observations. When I last reviewed the series (though I haven't looked at this one yet), the advice was to use the dmaengine API with the sound dmaengine library for the 1st DMAC. The second DMAC is specific to this device, so it should be handled internally or as part of the sound driver (or whatever the client is).
So does this mean we can't use the DMAEngine style if it is a non-generic DMAC? For example, we have a USB-DMAC (it is used by USB only).
This is off-topic from the sound DMAC, but in the USB-DMAC case, our SoC has a USB IP and its own DMAC. The old USB used the generic DMAC before, but the new USB has a USB-specific DMAC today. And we already have a USB driver. What should we do in this case?
Best regards --- Kuninori Morimoto
participants (6):
- Arnd Bergmann
- Geert Uytterhoeven
- Kuninori Morimoto
- Laurent Pinchart
- Magnus Damm
- Vinod Koul