[alsa-devel] [PATCH 0/8] Add ASoC support for AMD APUs [v4]
This patch set implements support for i2s audio on new AMD GPUs. The i2s codec is fed by a DMA engine on the GPU. To handle this we create mfd cells on which we hang the i2s codec and the DMA engine. Because of this, the patch set covers two subsystems: drm and alsa. The drm patches add support for the ACP hw block, which provides the DMA engine for the i2s codec. The alsa patches add the ASoC driver for the i2s codec. Since the alsa changes depend on the drm changes in this patch set, as well as some other drm changes queued for 4.3, I'd like to take the alsa patches in via the drm tree.
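For readers less familiar with the MFD approach, here is a minimal sketch of the cell arrangement described above; it is not part of the series itself. The cell names match the amdgpu_acp patch later in the series, but the register ranges and the helper function are illustrative placeholders.

#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/ioport.h>
#include <linux/mfd/core.h>

/* Illustrative MMIO ranges; the real offsets come from the ACP BAR. */
static struct resource acp_res[] = {
	{ .name = "acp2x_dma", .flags = IORESOURCE_MEM,
	  .start = 0x0000, .end = 0x3fff },
	{ .name = "acp2x_dw_i2s_play", .flags = IORESOURCE_MEM,
	  .start = 0x4000, .end = 0x7fff },
};

static const struct mfd_cell acp_cells[] = {
	{
		.name = "acp_audio_dma",	/* bound by the ASoC ACP PCM/DMA driver */
		.num_resources = 1,
		.resources = &acp_res[0],
	},
	{
		.name = "designware-i2s",	/* bound by the existing dwc i2s driver */
		.num_resources = 1,
		.resources = &acp_res[1],
	},
};

/* Hang both cells off the GPU device so they follow its power state. */
static int acp_register_cells(struct device *gpu_dev)
{
	return mfd_add_hotplug_devices(gpu_dev, acp_cells, ARRAY_SIZE(acp_cells));
}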
V2 changes:
- Use the MFD subsystem rather than adding our own bus
- Squash all sub-feature patches together
- Address comments from the previous review
V3 changes:
- Update the designware driver to handle slave mode and AMD-specific features
- Use the designware driver directly for i2s
- Move the DMA handling from the GPU driver into the AMD ASoC driver
- Change the license on the ASoC driver to GPL
V4 changes:
- Patch "ASoC : dwc : support dw i2s in slave mode" has been accepted
- Add a set_fmt() dai operation that checks that the mode we're setting corresponds to what we read back from the hardware
- Split the dwc-specific quirks into separate patches
- Set the specific quirks that apply to AMD chips in the acp driver
Patch 7 adds the register headers for the ACP block; it's a pretty big patch, so I've excluded it from the email. The entire patch set can be viewed here: http://cgit.freedesktop.org/~agd5f/linux/log/?h=acp-upstream5
Thanks,
Alex
Maruthi Bayyavarapu (1):
  drm/amd: add ACP driver support
Maruthi Srinivas Bayyavarapu (7):
  ASoC : dwc : add check for master/slave format
  ASoC : dwc : add different playback/capture base
  ASoC : dwc : add quirk for dwc controller instances
  ASoC : dwc : add quirk for different register offset
  ASoC : dwc : reconfigure dwc in 'resume' from 'suspend'
  ASoC : AMD : add ACP 2.2 register headers
  ASoC: AMD: add AMD ASoC ACP-I2S driver
 drivers/gpu/drm/Kconfig | 2 +
 drivers/gpu/drm/amd/acp/Kconfig | 10 +
 drivers/gpu/drm/amd/acp/Makefile | 9 +
 drivers/gpu/drm/amd/acp/acp_hw.c | 127 ++
 drivers/gpu/drm/amd/acp/include/acp_gfx_if.h | 49 +
 drivers/gpu/drm/amd/amdgpu/Makefile | 13 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h | 12 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c | 275 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.h | 41 +
 drivers/gpu/drm/amd/amdgpu/vi.c | 12 +
 drivers/gpu/drm/amd/include/amd_shared.h | 1 +
 include/linux/mfd/amd_acp.h | 43 +
 include/sound/designware_i2s.h | 6 +
 sound/soc/Kconfig | 1 +
 sound/soc/Makefile | 1 +
 sound/soc/amd/Kconfig | 4 +
 sound/soc/amd/Makefile | 3 +
 sound/soc/amd/acp-pcm-dma.c | 518 ++++++
 sound/soc/amd/acp.c | 736 +++++++++
 sound/soc/amd/acp.h | 147 ++
 sound/soc/amd/include/acp_2_2_d.h | 609 +++++++
 sound/soc/amd/include/acp_2_2_enum.h | 1068 ++++++++++++
 sound/soc/amd/include/acp_2_2_sh_mask.h | 2292 ++++++++++++++++++++++++++
 sound/soc/dwc/designware_i2s.c | 206 ++-
 24 files changed, 6121 insertions(+), 64 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/acp/Kconfig
 create mode 100644 drivers/gpu/drm/amd/acp/Makefile
 create mode 100644 drivers/gpu/drm/amd/acp/acp_hw.c
 create mode 100644 drivers/gpu/drm/amd/acp/include/acp_gfx_if.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.h
 create mode 100644 include/linux/mfd/amd_acp.h
 create mode 100644 sound/soc/amd/Kconfig
 create mode 100644 sound/soc/amd/Makefile
 create mode 100644 sound/soc/amd/acp-pcm-dma.c
 create mode 100644 sound/soc/amd/acp.c
 create mode 100644 sound/soc/amd/acp.h
 create mode 100644 sound/soc/amd/include/acp_2_2_d.h
 create mode 100644 sound/soc/amd/include/acp_2_2_enum.h
 create mode 100644 sound/soc/amd/include/acp_2_2_sh_mask.h
From: Maruthi Srinivas Bayyavarapu <Maruthi.Bayyavarapu@amd.com>
The DW i2s controller's master/slave configuration can be read from a read-only register. The machine driver can try to set a master/slave format on the cpu-dai using the 'set_fmt' dai op. A check is added to verify that the codec is master when the dwc controller is slave, and vice versa.
Signed-off-by: Maruthi Bayyavarapu <maruthi.bayyavarapu@amd.com>
---
 sound/soc/dwc/designware_i2s.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)
diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c index 3a52f82..f427325 100644 --- a/sound/soc/dwc/designware_i2s.c +++ b/sound/soc/dwc/designware_i2s.c @@ -341,12 +341,43 @@ static int dw_i2s_trigger(struct snd_pcm_substream *substream, return ret; }
+static int dw_i2s_set_fmt(struct snd_soc_dai *cpu_dai, unsigned int fmt) +{ + struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(cpu_dai); + int ret = 0; + + switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { + case SND_SOC_DAIFMT_CBM_CFM: + if (dev->capability & DW_I2S_SLAVE) + ret = 0; + else + ret = -EINVAL; + break; + case SND_SOC_DAIFMT_CBS_CFS: + if (dev->capability & DW_I2S_MASTER) + ret = 0; + else + ret = -EINVAL; + break; + case SND_SOC_DAIFMT_CBM_CFS: + case SND_SOC_DAIFMT_CBS_CFM: + ret = -EINVAL; + break; + default: + dev_dbg(dev->dev, "dwc : Invalid master/slave format\n"); + ret = -EINVAL; + break; + } + return ret; +} + static struct snd_soc_dai_ops dw_i2s_dai_ops = { .startup = dw_i2s_startup, .shutdown = dw_i2s_shutdown, .hw_params = dw_i2s_hw_params, .prepare = dw_i2s_prepare, .trigger = dw_i2s_trigger, + .set_fmt = dw_i2s_set_fmt, };
static const struct snd_soc_component_driver dw_i2s_component = {
On Thu, Oct 08, 2015 at 12:12:34PM -0400, Alex Deucher wrote:
From: Maruthi Srinivas Bayyavarapu Maruthi.Bayyavarapu@amd.com
DW i2s controller's master/slave config can be read from a read-only register. Machine driver can try to set a master/slave format on cpu-dai using 'set_fmt' of dai ops. A check is added to verify codec is master when dwc is slave and vice-versa.
Signed-off-by: Maruthi Bayyavarapu maruthi.bayyavarapu@amd.com
I can't apply this, you've not added a Signed-off-by for code sent by someone else as covered in SubmittingPatches.
Please also try to use subject lines matching the style for the subsystem.
On Thu, Oct 22, 2015 at 10:56 AM, Mark Brown <broonie@kernel.org> wrote:
On Thu, Oct 08, 2015 at 12:12:34PM -0400, Alex Deucher wrote:
From: Maruthi Srinivas Bayyavarapu Maruthi.Bayyavarapu@amd.com
DW i2s controller's master/slave config can be read from a read-only register. Machine driver can try to set a master/slave format on cpu-dai using 'set_fmt' of dai ops. A check is added to verify codec is master when dwc is slave and vice-versa.
Signed-off-by: Maruthi Bayyavarapu maruthi.bayyavarapu@amd.com
I can't apply this, you've not added a Signed-off-by for code sent by someone else as covered in SubmittingPatches.
Please also try to use subject lines matching the style for the subsystem.
I'll resend with these fixed up.
Thanks.
Alex
The patch
ASoC: dwc: add check for master/slave format
has been applied to the asoc tree at
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git
All being well this means that it will be integrated into the linux-next tree (usually sometime in the next 24 hours) and sent to Linus during the next merge window (or sooner if it is a bug fix), however if problems are discovered then the patch may be dropped or reverted.
You may get further e-mails resulting from automated or manual testing and review of the tree, please engage with people reporting problems and send followup patches addressing any issues that are reported if needed.
If any updates are required or you are submitting further changes they should be sent as incremental updates against current git, existing patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying to this mail.
Thanks, Mark
From ab57b8e90c658a72944c48a81a1352e61e84db02 Mon Sep 17 00:00:00 2001
From: Maruthi Srinivas Bayyavarapu <Maruthi.Bayyavarapu@amd.com>
Date: Fri, 23 Oct 2015 17:15:41 -0400
Subject: [PATCH] ASoC: dwc: add check for master/slave format
The DW i2s controller's master/slave configuration can be read from a read-only register. The machine driver can try to set a master/slave format on the cpu-dai using the 'set_fmt' dai op. A check is added to verify that the codec is master when the dwc controller is slave, and vice versa.
Signed-off-by: Maruthi Bayyavarapu <maruthi.bayyavarapu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 sound/soc/dwc/designware_i2s.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)
diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c index 3a52f82..f427325 100644 --- a/sound/soc/dwc/designware_i2s.c +++ b/sound/soc/dwc/designware_i2s.c @@ -341,12 +341,43 @@ static int dw_i2s_trigger(struct snd_pcm_substream *substream, return ret; }
+static int dw_i2s_set_fmt(struct snd_soc_dai *cpu_dai, unsigned int fmt) +{ + struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(cpu_dai); + int ret = 0; + + switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { + case SND_SOC_DAIFMT_CBM_CFM: + if (dev->capability & DW_I2S_SLAVE) + ret = 0; + else + ret = -EINVAL; + break; + case SND_SOC_DAIFMT_CBS_CFS: + if (dev->capability & DW_I2S_MASTER) + ret = 0; + else + ret = -EINVAL; + break; + case SND_SOC_DAIFMT_CBM_CFS: + case SND_SOC_DAIFMT_CBS_CFM: + ret = -EINVAL; + break; + default: + dev_dbg(dev->dev, "dwc : Invalid master/slave format\n"); + ret = -EINVAL; + break; + } + return ret; +} + static struct snd_soc_dai_ops dw_i2s_dai_ops = { .startup = dw_i2s_startup, .shutdown = dw_i2s_shutdown, .hw_params = dw_i2s_hw_params, .prepare = dw_i2s_prepare, .trigger = dw_i2s_trigger, + .set_fmt = dw_i2s_set_fmt, };
static const struct snd_soc_component_driver dw_i2s_component = {
From: Maruthi Srinivas Bayyavarapu <Maruthi.Bayyavarapu@amd.com>
Some platforms (e.g. AMD CZ) can have separate dwc controllers, with different base addresses, for playback and capture. This patch adds the necessary changes to support that. Platforms which need this can supply a quirk in the platform data. By default, the playback and capture base addresses are assumed to be the same.
Signed-off-by: Maruthi Bayyavarapu <maruthi.bayyavarapu@amd.com>
---
 sound/soc/dwc/designware_i2s.c | 93 ++++++++++++++++++++++--------------------
 1 file changed, 48 insertions(+), 45 deletions(-)
diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c index f427325..374a83e 100644 --- a/sound/soc/dwc/designware_i2s.c +++ b/sound/soc/dwc/designware_i2s.c @@ -89,7 +89,8 @@ union dw_i2s_snd_dma_data { };
struct dw_i2s_dev { - void __iomem *i2s_base; + void __iomem *i2s_pbase; + void __iomem *i2s_cbase; struct clk *clk; int active; unsigned int capability; @@ -118,10 +119,10 @@ static inline void i2s_disable_channels(struct dw_i2s_dev *dev, u32 stream)
if (stream == SNDRV_PCM_STREAM_PLAYBACK) { for (i = 0; i < 4; i++) - i2s_write_reg(dev->i2s_base, TER(i), 0); + i2s_write_reg(dev->i2s_pbase, TER(i), 0); } else { for (i = 0; i < 4; i++) - i2s_write_reg(dev->i2s_base, RER(i), 0); + i2s_write_reg(dev->i2s_cbase, RER(i), 0); } }
@@ -131,25 +132,25 @@ static inline void i2s_clear_irqs(struct dw_i2s_dev *dev, u32 stream)
if (stream == SNDRV_PCM_STREAM_PLAYBACK) { for (i = 0; i < 4; i++) - i2s_write_reg(dev->i2s_base, TOR(i), 0); + i2s_write_reg(dev->i2s_pbase, TOR(i), 0); } else { for (i = 0; i < 4; i++) - i2s_write_reg(dev->i2s_base, ROR(i), 0); + i2s_write_reg(dev->i2s_cbase, ROR(i), 0); } }
static void i2s_start(struct dw_i2s_dev *dev, struct snd_pcm_substream *substream) { - - i2s_write_reg(dev->i2s_base, IER, 1); - - if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) - i2s_write_reg(dev->i2s_base, ITER, 1); - else - i2s_write_reg(dev->i2s_base, IRER, 1); - - i2s_write_reg(dev->i2s_base, CER, 1); + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + i2s_write_reg(dev->i2s_pbase, IER, 1); + i2s_write_reg(dev->i2s_pbase, ITER, 1); + i2s_write_reg(dev->i2s_pbase, CER, 1); + } else { + i2s_write_reg(dev->i2s_cbase, IER, 1); + i2s_write_reg(dev->i2s_cbase, IRER, 1); + i2s_write_reg(dev->i2s_cbase, CER, 1); + } }
static void i2s_stop(struct dw_i2s_dev *dev, @@ -159,24 +160,25 @@ static void i2s_stop(struct dw_i2s_dev *dev,
i2s_clear_irqs(dev, substream->stream); if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { - i2s_write_reg(dev->i2s_base, ITER, 0); - + i2s_write_reg(dev->i2s_pbase, ITER, 0); for (i = 0; i < 4; i++) { - irq = i2s_read_reg(dev->i2s_base, IMR(i)); - i2s_write_reg(dev->i2s_base, IMR(i), irq | 0x30); + irq = i2s_read_reg(dev->i2s_pbase, IMR(i)); + i2s_write_reg(dev->i2s_pbase, IMR(i), + irq | 0x30); } } else { - i2s_write_reg(dev->i2s_base, IRER, 0); + i2s_write_reg(dev->i2s_cbase, IRER, 0);
for (i = 0; i < 4; i++) { - irq = i2s_read_reg(dev->i2s_base, IMR(i)); - i2s_write_reg(dev->i2s_base, IMR(i), irq | 0x03); + irq = i2s_read_reg(dev->i2s_cbase, IMR(i)); + i2s_write_reg(dev->i2s_cbase, IMR(i), + irq | 0x03); } }
if (!dev->active) { - i2s_write_reg(dev->i2s_base, CER, 0); - i2s_write_reg(dev->i2s_base, IER, 0); + i2s_write_reg(dev->i2s_pbase, CER, 0); + i2s_write_reg(dev->i2s_pbase, IER, 0); } }
@@ -253,23 +255,23 @@ static int dw_i2s_hw_params(struct snd_pcm_substream *substream,
for (ch_reg = 0; ch_reg < (config->chan_nr / 2); ch_reg++) { if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { - i2s_write_reg(dev->i2s_base, TCR(ch_reg), + i2s_write_reg(dev->i2s_pbase, TCR(ch_reg), xfer_resolution); - i2s_write_reg(dev->i2s_base, TFCR(ch_reg), 0x02); - irq = i2s_read_reg(dev->i2s_base, IMR(ch_reg)); - i2s_write_reg(dev->i2s_base, IMR(ch_reg), irq & ~0x30); - i2s_write_reg(dev->i2s_base, TER(ch_reg), 1); + i2s_write_reg(dev->i2s_pbase, TFCR(ch_reg), 0x02); + irq = i2s_read_reg(dev->i2s_pbase, IMR(ch_reg)); + i2s_write_reg(dev->i2s_pbase, IMR(ch_reg), irq & ~0x30); + i2s_write_reg(dev->i2s_pbase, TER(ch_reg), 1); } else { - i2s_write_reg(dev->i2s_base, RCR(ch_reg), + i2s_write_reg(dev->i2s_cbase, RCR(ch_reg), xfer_resolution); - i2s_write_reg(dev->i2s_base, RFCR(ch_reg), 0x07); - irq = i2s_read_reg(dev->i2s_base, IMR(ch_reg)); - i2s_write_reg(dev->i2s_base, IMR(ch_reg), irq & ~0x03); - i2s_write_reg(dev->i2s_base, RER(ch_reg), 1); + i2s_write_reg(dev->i2s_cbase, RFCR(ch_reg), 0x07); + irq = i2s_read_reg(dev->i2s_cbase, IMR(ch_reg)); + i2s_write_reg(dev->i2s_cbase, IMR(ch_reg), irq & ~0x03); + i2s_write_reg(dev->i2s_cbase, RER(ch_reg), 1); } }
- i2s_write_reg(dev->i2s_base, CCR, ccr); + i2s_write_reg(dev->i2s_pbase, CCR, ccr);
config->sample_rate = params_rate(params);
@@ -307,9 +309,9 @@ static int dw_i2s_prepare(struct snd_pcm_substream *substream, struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(dai);
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) - i2s_write_reg(dev->i2s_base, TXFFR, 1); + i2s_write_reg(dev->i2s_pbase, TXFFR, 1); else - i2s_write_reg(dev->i2s_base, RXFFR, 1); + i2s_write_reg(dev->i2s_cbase, RXFFR, 1);
return 0; } @@ -450,8 +452,8 @@ static int dw_configure_dai(struct dw_i2s_dev *dev, * Read component parameter registers to extract * the I2S block's configuration. */ - u32 comp1 = i2s_read_reg(dev->i2s_base, I2S_COMP_PARAM_1); - u32 comp2 = i2s_read_reg(dev->i2s_base, I2S_COMP_PARAM_2); + u32 comp1 = i2s_read_reg(dev->i2s_pbase, I2S_COMP_PARAM_1); + u32 comp2 = i2s_read_reg(dev->i2s_pbase, I2S_COMP_PARAM_2); u32 idx;
if (COMP1_TX_ENABLED(comp1)) { @@ -494,7 +496,7 @@ static int dw_configure_dai_by_pd(struct dw_i2s_dev *dev, struct resource *res, const struct i2s_platform_data *pdata) { - u32 comp1 = i2s_read_reg(dev->i2s_base, I2S_COMP_PARAM_1); + u32 comp1 = i2s_read_reg(dev->i2s_pbase, I2S_COMP_PARAM_1); u32 idx = COMP1_APB_DATA_WIDTH(comp1); int ret;
@@ -524,8 +526,8 @@ static int dw_configure_dai_by_dt(struct dw_i2s_dev *dev, struct snd_soc_dai_driver *dw_i2s_dai, struct resource *res) { - u32 comp1 = i2s_read_reg(dev->i2s_base, I2S_COMP_PARAM_1); - u32 comp2 = i2s_read_reg(dev->i2s_base, I2S_COMP_PARAM_2); + u32 comp1 = i2s_read_reg(dev->i2s_pbase, I2S_COMP_PARAM_1); + u32 comp2 = i2s_read_reg(dev->i2s_pbase, I2S_COMP_PARAM_2); u32 fifo_depth = 1 << (1 + COMP1_FIFO_DEPTH_GLOBAL(comp1)); u32 idx = COMP1_APB_DATA_WIDTH(comp1); u32 idx2; @@ -589,11 +591,11 @@ static int dw_i2s_probe(struct platform_device *pdev) dw_i2s_dai->resume = dw_i2s_resume;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - dev->i2s_base = devm_ioremap_resource(&pdev->dev, res); - if (IS_ERR(dev->i2s_base)) - return PTR_ERR(dev->i2s_base); + dev->i2s_pbase = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(dev->i2s_pbase)) + return PTR_ERR(dev->i2s_pbase);
- dev->dev = &pdev->dev; + dev->i2s_cbase = dev->i2s_pbase;
if (pdata) { dev->capability = pdata->cap; @@ -624,6 +626,7 @@ static int dw_i2s_probe(struct platform_device *pdev) return ret; }
+ dev->dev = &pdev->dev; dev_set_drvdata(&pdev->dev, dev); ret = devm_snd_soc_register_component(&pdev->dev, &dw_i2s_component, dw_i2s_dai, 1);
On Thu, Oct 08, 2015 at 12:12:35PM -0400, Alex Deucher wrote:
From: Maruthi Srinivas Bayyavarapu Maruthi.Bayyavarapu@amd.com
Some platforms (Ex: AMD CZ) can have separate dwc controllers with different base address for playback and capture. This patch adds necessary changes to support it. Platforms which make use of it can supply a quirk in platform data to make use of this. By default, playback and capture base addesses are assumed to be same.
If these are two physically separate controllers that don't share anything then why are they being represented as a single device as opposed to just having two devices, one for playback and one for capture? You'd still need to register the DAI differently but it should be a much less invasive change and map much more naturally onto the device model (what if some system has put playback and capture into separate power domains for example?).
If we do go with this then we will also need an update to the DT binding to document how the additional resources associated with the second controller are described.
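As a rough illustration of the alternative suggested above (not code from this series), the two controller instances could simply be registered as two platform devices, each with its own MMIO resource, with the existing designware-i2s driver bound to both. The parent device and register ranges below are placeholders, and the DAI registration would still need the split mentioned above.

#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

static const struct resource dwc_play_res = {
	.name = "dwc_i2s_play", .flags = IORESOURCE_MEM,
	.start = 0x14840, .end = 0x148b4,	/* placeholder playback range */
};

static const struct resource dwc_cap_res = {
	.name = "dwc_i2s_cap", .flags = IORESOURCE_MEM,
	.start = 0x148b8, .end = 0x1496c,	/* placeholder capture range */
};

static int register_separate_dwc_instances(struct device *parent)
{
	struct platform_device *play, *cap;

	/* one playback-only instance ... */
	play = platform_device_register_resndata(parent, "designware-i2s", 0,
						 &dwc_play_res, 1, NULL, 0);
	if (IS_ERR(play))
		return PTR_ERR(play);

	/* ... and one capture-only instance, which could even sit in its own power domain */
	cap = platform_device_register_resndata(parent, "designware-i2s", 1,
						&dwc_cap_res, 1, NULL, 0);
	if (IS_ERR(cap)) {
		platform_device_unregister(play);
		return PTR_ERR(cap);
	}

	return 0;
}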
From: Maruthi Srinivas Bayyavarapu <Maruthi.Bayyavarapu@amd.com>
Added a quirk to assign different base addresses to the different dwc controllers present on the platform for playback and capture.
Signed-off-by: Maruthi Bayyavarapu <maruthi.bayyavarapu@amd.com>
---
 include/sound/designware_i2s.h | 3 +++
 sound/soc/dwc/designware_i2s.c | 26 +++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 3 deletions(-)
diff --git a/include/sound/designware_i2s.h b/include/sound/designware_i2s.h index 8966ba7..48c8210 100644 --- a/include/sound/designware_i2s.h +++ b/include/sound/designware_i2s.h @@ -45,6 +45,9 @@ struct i2s_platform_data { u32 snd_fmts; u32 snd_rates;
+ #define DW_I2S_QUIRK_MULTI_DWC (1 << 0) + unsigned int quirks; + void *play_dma_data; void *capture_dma_data; bool (*filter)(struct dma_chan *chan, void *slave); diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c index 374a83e..fd18a0e 100644 --- a/sound/soc/dwc/designware_i2s.c +++ b/sound/soc/dwc/designware_i2s.c @@ -94,6 +94,7 @@ struct dw_i2s_dev { struct clk *clk; int active; unsigned int capability; + unsigned int quirks; struct device *dev;
/* data related to DMA transfers b/w i2s and DMAC */ @@ -176,9 +177,20 @@ static void i2s_stop(struct dw_i2s_dev *dev, } }
- if (!dev->active) { - i2s_write_reg(dev->i2s_pbase, CER, 0); - i2s_write_reg(dev->i2s_pbase, IER, 0); + if (dev->quirks & DW_I2S_QUIRK_MULTI_DWC) { + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + i2s_write_reg(dev->i2s_pbase, CER, 0); + i2s_write_reg(dev->i2s_pbase, IER, 0); + } else { + i2s_write_reg(dev->i2s_cbase, CER, 0); + i2s_write_reg(dev->i2s_cbase, IER, 0); + } + + } else { + if (!dev->active) { + i2s_write_reg(dev->i2s_pbase, CER, 0); + i2s_write_reg(dev->i2s_pbase, IER, 0); + } } }
@@ -599,6 +611,7 @@ static int dw_i2s_probe(struct platform_device *pdev)
if (pdata) { dev->capability = pdata->cap; + dev->quirks = pdata->quirks; clk_id = NULL; ret = dw_configure_dai_by_pd(dev, dw_i2s_dai, res, pdata); } else { @@ -608,6 +621,13 @@ static int dw_i2s_probe(struct platform_device *pdev) if (ret < 0) return ret;
+ if (dev->quirks & DW_I2S_QUIRK_MULTI_DWC) { + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); + dev->i2s_cbase = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(dev->i2s_cbase)) + return PTR_ERR(dev->i2s_cbase); + } + if (dev->capability & DW_I2S_MASTER) { if (pdata) { dev->i2s_clk_cfg = pdata->i2s_clk_cfg;
On Thu, Oct 08, 2015 at 12:12:36PM -0400, Alex Deucher wrote:
From: Maruthi Srinivas Bayyavarapu Maruthi.Bayyavarapu@amd.com
Added a qurik to assign different base addresses for different dwc controllers present on platform for playback and capture.
I don't understand what the above means, sorry. Normally the address ranges for devices are supplied via resources when the device is registered.
- if (dev->quirks & DW_I2S_QUIRK_MULTI_DWC) {
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
i2s_write_reg(dev->i2s_pbase, CER, 0);
i2s_write_reg(dev->i2s_pbase, IER, 0);
} else {
i2s_write_reg(dev->i2s_cbase, CER, 0);
i2s_write_reg(dev->i2s_cbase, IER, 0);
}
This looks like it might be part of the previous change to merge separate playback and capture instances of the controller into one device rather than a separate change? But perhaps what this is saying is also that the controller instances aren't actually totally separate after all and the prior change maps well onto the hardware...
From: Maruthi Srinivas Bayyavarapu <Maruthi.Bayyavarapu@amd.com>
The AMD CZ platform has different offsets for the I2S_COMP_PARAM_* registers. Added a quirk to support this.
Signed-off-by: Maruthi Bayyavarapu <maruthi.bayyavarapu@amd.com>
---
 include/sound/designware_i2s.h | 5 ++++-
 sound/soc/dwc/designware_i2s.c | 17 +++++++++++++----
 2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/include/sound/designware_i2s.h b/include/sound/designware_i2s.h index 48c8210..240ef5e 100644 --- a/include/sound/designware_i2s.h +++ b/include/sound/designware_i2s.h @@ -45,8 +45,11 @@ struct i2s_platform_data { u32 snd_fmts; u32 snd_rates;
- #define DW_I2S_QUIRK_MULTI_DWC (1 << 0) + #define DW_I2S_QUIRK_MULTI_DWC (1 << 0) + #define DW_I2S_QUIRK_COMP_REG_OFFSET (1 << 1) unsigned int quirks; + unsigned int i2s_reg_comp1; + unsigned int i2s_reg_comp2;
void *play_dma_data; void *capture_dma_data; diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c index fd18a0e..a16b725 100644 --- a/sound/soc/dwc/designware_i2s.c +++ b/sound/soc/dwc/designware_i2s.c @@ -95,6 +95,8 @@ struct dw_i2s_dev { int active; unsigned int capability; unsigned int quirks; + unsigned int i2s_reg_comp1; + unsigned int i2s_reg_comp2; struct device *dev;
/* data related to DMA transfers b/w i2s and DMAC */ @@ -464,8 +466,8 @@ static int dw_configure_dai(struct dw_i2s_dev *dev, * Read component parameter registers to extract * the I2S block's configuration. */ - u32 comp1 = i2s_read_reg(dev->i2s_pbase, I2S_COMP_PARAM_1); - u32 comp2 = i2s_read_reg(dev->i2s_pbase, I2S_COMP_PARAM_2); + u32 comp1 = i2s_read_reg(dev->i2s_pbase, dev->i2s_reg_comp1); + u32 comp2 = i2s_read_reg(dev->i2s_pbase, dev->i2s_reg_comp2); u32 idx;
if (COMP1_TX_ENABLED(comp1)) { @@ -508,7 +510,7 @@ static int dw_configure_dai_by_pd(struct dw_i2s_dev *dev, struct resource *res, const struct i2s_platform_data *pdata) { - u32 comp1 = i2s_read_reg(dev->i2s_pbase, I2S_COMP_PARAM_1); + u32 comp1 = i2s_read_reg(dev->i2s_pbase, dev->i2s_reg_comp1); u32 idx = COMP1_APB_DATA_WIDTH(comp1); int ret;
@@ -610,9 +612,16 @@ static int dw_i2s_probe(struct platform_device *pdev) dev->i2s_cbase = dev->i2s_pbase;
if (pdata) { + clk_id = NULL; dev->capability = pdata->cap; dev->quirks = pdata->quirks; - clk_id = NULL; + if (dev->quirks & DW_I2S_QUIRK_COMP_REG_OFFSET) { + dev->i2s_reg_comp1 = pdata->i2s_reg_comp1; + dev->i2s_reg_comp2 = pdata->i2s_reg_comp2; + } else { + dev->i2s_reg_comp1 = I2S_COMP_PARAM_1; + dev->i2s_reg_comp2 = I2S_COMP_PARAM_2; + } ret = dw_configure_dai_by_pd(dev, dw_i2s_dai, res, pdata); } else { clk_id = "i2sclk";
From: Maruthi Srinivas Bayyavarapu <Maruthi.Bayyavarapu@amd.com>
The dwc IP can be powered off during system suspend on some platforms (e.g. AMD CZ), by design. After the system resumes, the dwc controller needs to be programmed again to continue the audio use case.
Signed-off-by: Maruthi Bayyavarapu <maruthi.bayyavarapu@amd.com>
---
 sound/soc/dwc/designware_i2s.c | 71 ++++++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 27 deletions(-)
diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c index a16b725..f7f38cb 100644 --- a/sound/soc/dwc/designware_i2s.c +++ b/sound/soc/dwc/designware_i2s.c @@ -98,6 +98,8 @@ struct dw_i2s_dev { unsigned int i2s_reg_comp1; unsigned int i2s_reg_comp2; struct device *dev; + u32 ccr; + u32 xfer_resolution;
/* data related to DMA transfers b/w i2s and DMAC */ union dw_i2s_snd_dma_data play_dma_data; @@ -220,31 +222,58 @@ static int dw_i2s_startup(struct snd_pcm_substream *substream, return 0; }
+static void dw_i2s_config(struct dw_i2s_dev *dev, int stream) +{ + u32 ch_reg, irq; + struct i2s_clk_config_data *config = &dev->config; + + + i2s_disable_channels(dev, stream); + + for (ch_reg = 0; ch_reg < (config->chan_nr / 2); ch_reg++) { + if (stream == SNDRV_PCM_STREAM_PLAYBACK) { + i2s_write_reg(dev->i2s_pbase, TCR(ch_reg), + dev->xfer_resolution); + i2s_write_reg(dev->i2s_pbase, TFCR(ch_reg), 0x02); + irq = i2s_read_reg(dev->i2s_pbase, IMR(ch_reg)); + i2s_write_reg(dev->i2s_pbase, IMR(ch_reg), irq & ~0x30); + i2s_write_reg(dev->i2s_pbase, TER(ch_reg), 1); + } else { + i2s_write_reg(dev->i2s_cbase, RCR(ch_reg), + dev->xfer_resolution); + i2s_write_reg(dev->i2s_cbase, RFCR(ch_reg), 0x07); + irq = i2s_read_reg(dev->i2s_cbase, IMR(ch_reg)); + i2s_write_reg(dev->i2s_cbase, IMR(ch_reg), irq & ~0x03); + i2s_write_reg(dev->i2s_cbase, RER(ch_reg), 1); + } + + } +} + static int dw_i2s_hw_params(struct snd_pcm_substream *substream, struct snd_pcm_hw_params *params, struct snd_soc_dai *dai) { struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(dai); struct i2s_clk_config_data *config = &dev->config; - u32 ccr, xfer_resolution, ch_reg, irq; int ret;
switch (params_format(params)) { case SNDRV_PCM_FORMAT_S16_LE: config->data_width = 16; - ccr = 0x00; - xfer_resolution = 0x02; + dev->ccr = 0x00; + dev->xfer_resolution = 0x02; break;
case SNDRV_PCM_FORMAT_S24_LE: config->data_width = 24; - ccr = 0x08; - xfer_resolution = 0x04; + dev->ccr = 0x08; + dev->xfer_resolution = 0x04; break;
case SNDRV_PCM_FORMAT_S32_LE: config->data_width = 32; - ccr = 0x10; - xfer_resolution = 0x05; + dev->ccr = 0x10; + dev->xfer_resolution = 0x05; break;
default: @@ -265,27 +294,9 @@ static int dw_i2s_hw_params(struct snd_pcm_substream *substream, return -EINVAL; }
- i2s_disable_channels(dev, substream->stream); - - for (ch_reg = 0; ch_reg < (config->chan_nr / 2); ch_reg++) { - if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { - i2s_write_reg(dev->i2s_pbase, TCR(ch_reg), - xfer_resolution); - i2s_write_reg(dev->i2s_pbase, TFCR(ch_reg), 0x02); - irq = i2s_read_reg(dev->i2s_pbase, IMR(ch_reg)); - i2s_write_reg(dev->i2s_pbase, IMR(ch_reg), irq & ~0x30); - i2s_write_reg(dev->i2s_pbase, TER(ch_reg), 1); - } else { - i2s_write_reg(dev->i2s_cbase, RCR(ch_reg), - xfer_resolution); - i2s_write_reg(dev->i2s_cbase, RFCR(ch_reg), 0x07); - irq = i2s_read_reg(dev->i2s_cbase, IMR(ch_reg)); - i2s_write_reg(dev->i2s_cbase, IMR(ch_reg), irq & ~0x03); - i2s_write_reg(dev->i2s_cbase, RER(ch_reg), 1); - } - } + i2s_write_reg(dev->i2s_pbase, CCR, dev->ccr);
- i2s_write_reg(dev->i2s_pbase, CCR, ccr); + dw_i2s_config(dev, substream->stream);
config->sample_rate = params_rate(params);
@@ -417,6 +428,12 @@ static int dw_i2s_resume(struct snd_soc_dai *dai)
if (dev->capability & DW_I2S_MASTER) clk_enable(dev->clk); + + if (dai->playback_active) + dw_i2s_config(dev, SNDRV_PCM_STREAM_PLAYBACK); + if (dai->capture_active) + dw_i2s_config(dev, SNDRV_PCM_STREAM_CAPTURE); + return 0; }
From: Maruthi Bayyavarapu <maruthi.bayyavarapu@amd.com>
This adds the ACP (Audio CoProcessor) IP driver and wires it up to the amdgpu driver. The ACP block provides the DMA engine for the i2s-based ALSA driver. This is required for audio on APUs that use an i2s codec.
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
Reviewed-by: Maruthi Bayyavarapu <Maruthi.Bayyavarapu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Murali Krishna Vemuri <murali-krishna.vemuri@amd.com>
Signed-off-by: Maruthi Bayyavarapu <maruthi.bayyavarapu@amd.com>
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
v2: integrate i2s/az check patch
v3: s/amd_acp/amdgpu_acp/
v4: update copyright notice
v5: squash multiple patches, convert to mfd
v6: major changes as below:
    1. Pass ACP register base to DMA and dw i2s drivers as IORESOURCE_MEM resources.
    2. Add dw i2s as a new mfd cell.
v7: specify broken-out dw quirks that apply to AMD hardware
 drivers/gpu/drm/Kconfig | 2 +
 drivers/gpu/drm/amd/acp/Kconfig | 10 +
 drivers/gpu/drm/amd/acp/Makefile | 9 +
 drivers/gpu/drm/amd/acp/acp_hw.c | 127 +++++++++++++
 drivers/gpu/drm/amd/acp/include/acp_gfx_if.h | 49 +++++
 drivers/gpu/drm/amd/amdgpu/Makefile | 13 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h | 12 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c | 275 +++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.h | 41 ++++
 drivers/gpu/drm/amd/amdgpu/vi.c | 12 ++
 drivers/gpu/drm/amd/include/amd_shared.h | 1 +
 include/linux/mfd/amd_acp.h | 43 +++++
 12 files changed, 593 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/acp/Kconfig
 create mode 100644 drivers/gpu/drm/amd/acp/Makefile
 create mode 100644 drivers/gpu/drm/amd/acp/acp_hw.c
 create mode 100644 drivers/gpu/drm/amd/acp/include/acp_gfx_if.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_acp.h
 create mode 100644 include/linux/mfd/amd_acp.h
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index 1a0a8df..330e9fb 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -161,6 +161,8 @@ config DRM_AMDGPU
source "drivers/gpu/drm/amd/amdgpu/Kconfig"
+source "drivers/gpu/drm/amd/acp/Kconfig" + source "drivers/gpu/drm/nouveau/Kconfig"
config DRM_I810 diff --git a/drivers/gpu/drm/amd/acp/Kconfig b/drivers/gpu/drm/amd/acp/Kconfig new file mode 100644 index 0000000..1de4fe7 --- /dev/null +++ b/drivers/gpu/drm/amd/acp/Kconfig @@ -0,0 +1,10 @@ +menu "ACP Configuration" + +config DRM_AMD_ACP + bool "Enable ACP IP support" + default y + depends on MFD_CORE + help + Choose this option to enable ACP IP support for AMD SOCs. + +endmenu diff --git a/drivers/gpu/drm/amd/acp/Makefile b/drivers/gpu/drm/amd/acp/Makefile new file mode 100644 index 0000000..c8c3303 --- /dev/null +++ b/drivers/gpu/drm/amd/acp/Makefile @@ -0,0 +1,9 @@ +# +# Makefile for the ACP, which is a sub-component +# of AMDSOC/AMDGPU drm driver. +# It provides the HW control for ACP related functionalities. + +ccflags-y += -Idrivers/gpu/drm/amd/include/asic_reg/acp +subdir-ccflags-y += -I$(AMDACPPATH)/ -I$(AMDACPPATH)/include + +AMD_ACP_FILES := $(AMDACPPATH)/acp_hw.o diff --git a/drivers/gpu/drm/amd/acp/acp_hw.c b/drivers/gpu/drm/amd/acp/acp_hw.c new file mode 100644 index 0000000..55220c3 --- /dev/null +++ b/drivers/gpu/drm/amd/acp/acp_hw.c @@ -0,0 +1,127 @@ +/* + * Copyright 2015 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ * + */ + +#include <linux/mm.h> +#include <linux/slab.h> +#include <linux/device.h> +#include <linux/delay.h> +#include <linux/errno.h> + +#include "acp_gfx_if.h" + +#define ACP_MODE_I2S 0 +#define ACP_MODE_AZ 1 + +#define VISLANDS30_IV_SRCID_ACP 0x000000a2 +#define mmACP_AZALIA_I2S_SELECT 0x51d4 + +static int irq_set_source(void *private_data, unsigned src_id, unsigned type, + int enabled) +{ + struct acp_irq_prv *idata = private_data; + + if (src_id == VISLANDS30_IV_SRCID_ACP) { + idata->enable_intr(idata->acp_mmio, enabled); + return 0; + } else { + return -1; + } +} + +static int irq_handler(void *private_data, unsigned src_id, + const uint32_t *iv_entry) +{ + struct acp_irq_prv *idata = private_data; + + if (src_id == VISLANDS30_IV_SRCID_ACP) + return idata->irq_handler(idata->dev); + else + return -1; +} + +static void acp_irq_register(struct amd_acp_device *acp_dev, void *iprv) +{ + struct amd_acp_private *acp_prv = (struct amd_acp_private *)acp_dev; + + cgs_add_irq_source(acp_prv->cgs_device, VISLANDS30_IV_SRCID_ACP, 1, + irq_set_source, irq_handler, iprv); +} + +static void acp_irq_get(struct amd_acp_device *acp_dev) +{ + struct amd_acp_private *acp_prv = (struct amd_acp_private *)acp_dev; + + cgs_irq_get(acp_prv->cgs_device, VISLANDS30_IV_SRCID_ACP, 0); +} + +static void acp_irq_put(struct amd_acp_device *acp_dev) +{ + struct amd_acp_private *acp_prv = (struct amd_acp_private *)acp_dev; + + cgs_irq_put(acp_prv->cgs_device, VISLANDS30_IV_SRCID_ACP, 0); +} + +void amd_acp_resume(struct amd_acp_private *acp_private) +{ + +} + + +void amd_acp_suspend(struct amd_acp_private *acp_private) +{ + +} + +int amd_acp_hw_init(void *cgs_device, + unsigned acp_version_major, unsigned acp_version_minor, + struct amd_acp_private **acp_private) +{ + unsigned int acp_mode = ACP_MODE_I2S; + + if ((acp_version_major == 2) && (acp_version_minor == 2)) + acp_mode = cgs_read_register(cgs_device, + mmACP_AZALIA_I2S_SELECT); + + if (acp_mode != ACP_MODE_I2S) + return -ENODEV; + + *acp_private = kzalloc(sizeof(struct amd_acp_private), GFP_KERNEL); + if (*acp_private == NULL) + return -ENOMEM; + + (*acp_private)->cgs_device = cgs_device; + (*acp_private)->acp_version_major = acp_version_major; + (*acp_private)->acp_version_minor = acp_version_minor; + + (*acp_private)->public.irq_register = acp_irq_register; + (*acp_private)->public.irq_get = acp_irq_get; + (*acp_private)->public.irq_put = acp_irq_put; + + return 0; +} + +int amd_acp_hw_fini(struct amd_acp_private *acp_private) +{ + kfree(acp_private); + return 0; +} diff --git a/drivers/gpu/drm/amd/acp/include/acp_gfx_if.h b/drivers/gpu/drm/amd/acp/include/acp_gfx_if.h new file mode 100644 index 0000000..d6988c2 --- /dev/null +++ b/drivers/gpu/drm/amd/acp/include/acp_gfx_if.h @@ -0,0 +1,49 @@ +/* + * Copyright 2015 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. 
+ * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * +*/ + +#ifndef _ACP_GFX_IF_H +#define _ACP_GFX_IF_H + +#include <linux/types.h> +#include <linux/mfd/amd_acp.h> +#include "cgs_linux.h" +#include "cgs_common.h" + +struct amd_acp_private { + /* The public struture is first, so that pointers can be cast + * between the public and private structure */ + struct amd_acp_device public; + + /* private elements not expose through the bus interface */ + void *cgs_device; + unsigned acp_version_major, acp_version_minor; +}; + +int amd_acp_hw_init(void *cgs_device, + unsigned acp_version_major, unsigned acp_version_minor, + struct amd_acp_private **apriv); +int amd_acp_hw_fini(struct amd_acp_private *apriv); +void amd_acp_suspend(struct amd_acp_private *acp_private); +void amd_acp_resume(struct amd_acp_private *acp_private); + +#endif /* _ACP_GFX_IF_H */ diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile index 04c2707..44de879 100644 --- a/drivers/gpu/drm/amd/amdgpu/Makefile +++ b/drivers/gpu/drm/amd/amdgpu/Makefile @@ -5,7 +5,8 @@ ccflags-y := -Iinclude/drm -Idrivers/gpu/drm/amd/include/asic_reg \ -Idrivers/gpu/drm/amd/include \ -Idrivers/gpu/drm/amd/amdgpu \ - -Idrivers/gpu/drm/amd/scheduler + -Idrivers/gpu/drm/amd/scheduler \ + -Idrivers/gpu/drm/amd/acp/include
amdgpu-y := amdgpu_drv.o
@@ -89,6 +90,16 @@ amdgpu-y += \ ../scheduler/sched_fence.o \ amdgpu_sched.o
+# ACP componet +ifneq ($(CONFIG_DRM_AMD_ACP),) +amdgpu-y += amdgpu_acp.o + +AMDACPPATH := ../acp +include drivers/gpu/drm/amd/acp/Makefile + +amdgpu-y += $(AMD_ACP_FILES) +endif + amdgpu-$(CONFIG_COMPAT) += amdgpu_ioc32.o amdgpu-$(CONFIG_VGA_SWITCHEROO) += amdgpu_atpx_handler.o amdgpu-$(CONFIG_ACPI) += amdgpu_acpi.o diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 6647fb2..1871831 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -52,6 +52,7 @@ #include "amdgpu_irq.h" #include "amdgpu_ucode.h" #include "amdgpu_gds.h" +#include "amdgpu_acp.h"
#include "gpu_scheduler.h"
@@ -1928,6 +1929,13 @@ void amdgpu_cgs_destroy_device(void *cgs_device);
/* + * CGS + */ +void *amdgpu_cgs_create_device(struct amdgpu_device *adev); +void amdgpu_cgs_destroy_device(void *cgs_device); + + +/* * Core structure, functions and helpers. */ typedef uint32_t (*amdgpu_rreg_t)(struct amdgpu_device*, uint32_t); @@ -1948,6 +1956,10 @@ struct amdgpu_device { struct pci_dev *pdev; struct rw_semaphore exclusive_lock;
+#ifdef CONFIG_DRM_AMD_ACP + struct amdgpu_acp acp; +#endif + /* ASIC */ enum amd_asic_type asic_type; uint32_t family; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c new file mode 100644 index 0000000..0eec03d --- /dev/null +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c @@ -0,0 +1,275 @@ +/* + * Copyright 2015 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: AMD + * + */ + +#include <sound/designware_i2s.h> +#include <sound/pcm.h> + +#include "amdgpu.h" +#include "atom.h" +#include "amdgpu_acp.h" + +#include "acp_gfx_if.h" + +#define ACP_DMA_REGS_END 0x146c0 +#define ACP_I2S_PLAY_REGS_START 0x14840 +#define ACP_I2S_PLAY_REGS_END 0x148b4 +#define ACP_I2S_CAP_REGS_START 0x148b8 +#define ACP_I2S_CAP_REGS_END 0x1496c + +#define ACP_I2S_COMP1_REG_OFFSET 0x124 +#define ACP_I2S_COMP2_REG_OFFSET 0x120 + +static int acp_early_init(void *handle) +{ + return 0; +} + +static int acp_sw_init(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + + adev->acp.parent = adev->dev; + + adev->acp.cgs_device = + amdgpu_cgs_create_device(adev); + if (!adev->acp.cgs_device) + return -EINVAL; + + return 0; +} + +static int acp_sw_fini(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + + if (adev->acp.cgs_device) + amdgpu_cgs_destroy_device(adev->acp.cgs_device); + + return 0; +} + +/** + * acp_hw_init - start and test UVD block + * + * @adev: amdgpu_device pointer + * + */ +static int acp_hw_init(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + int r; + uint64_t acp_base; + struct i2s_platform_data *i2s_pdata; + + const struct amdgpu_ip_block_version *ip_version = + amdgpu_get_ip_block(adev, AMD_IP_BLOCK_TYPE_ACP); + + if (!ip_version) + return -EINVAL; + + r = amd_acp_hw_init(adev->acp.cgs_device, + ip_version->major, ip_version->minor, + &adev->acp.private); + /* -ENODEV means board uses AZ rather than ACP */ + if (r == -ENODEV) + return 0; + else if (r) + return r; + + r = cgs_get_pci_resource(adev->acp.cgs_device, CGS_RESOURCE_TYPE_MMIO, + 0x5289, 0, &acp_base); + if (r == -ENODEV) + return 0; + else if (r) + return r; + + adev->acp.acp_cell = kzalloc(sizeof(struct mfd_cell) * 2, GFP_KERNEL); + + if (adev->acp.acp_cell == NULL) + return -ENOMEM; + + adev->acp.acp_res = kzalloc(sizeof(struct resource) * 3, GFP_KERNEL); + + if (adev->acp.acp_res == NULL) { + kfree(adev->acp.acp_cell); + return 
-ENOMEM; + } + + i2s_pdata = kzalloc(sizeof(struct i2s_platform_data), GFP_KERNEL); + if (i2s_pdata == NULL) { + kfree(adev->acp.acp_res); + kfree(adev->acp.acp_cell); + return -ENOMEM; + } + + i2s_pdata->quirks = DW_I2S_QUIRK_MULTI_DWC | + DW_I2S_QUIRK_COMP_REG_OFFSET; + i2s_pdata->cap = DWC_I2S_PLAY | DWC_I2S_RECORD; + i2s_pdata->snd_rates = SNDRV_PCM_RATE_8000_96000; + i2s_pdata->i2s_reg_comp1 = ACP_I2S_COMP1_REG_OFFSET; + i2s_pdata->i2s_reg_comp2 = ACP_I2S_COMP2_REG_OFFSET; + + adev->acp.acp_res[0].name = "acp2x_dma"; + adev->acp.acp_res[0].flags = IORESOURCE_MEM; + adev->acp.acp_res[0].start = acp_base; + adev->acp.acp_res[0].end = acp_base + ACP_DMA_REGS_END; + + adev->acp.acp_res[1].name = "acp2x_dw_i2s_play"; + adev->acp.acp_res[1].flags = IORESOURCE_MEM; + adev->acp.acp_res[1].start = acp_base + ACP_I2S_PLAY_REGS_START; + adev->acp.acp_res[1].end = acp_base + ACP_I2S_PLAY_REGS_END; + + adev->acp.acp_res[2].name = "acp2x_dw_i2s_cap"; + adev->acp.acp_res[2].flags = IORESOURCE_MEM; + adev->acp.acp_res[2].start = acp_base + ACP_I2S_CAP_REGS_START; + adev->acp.acp_res[2].end = acp_base + ACP_I2S_CAP_REGS_END; + + adev->acp.acp_cell[0].name = "acp_audio_dma"; + adev->acp.acp_cell[0].num_resources = 1; + adev->acp.acp_cell[0].resources = &adev->acp.acp_res[0]; + adev->acp.acp_cell[0].platform_data = adev->acp.private; + adev->acp.acp_cell[0].pdata_size = sizeof(struct amd_acp_private); + + adev->acp.acp_cell[1].name = "designware-i2s"; + adev->acp.acp_cell[1].num_resources = 2; + adev->acp.acp_cell[1].resources = &adev->acp.acp_res[1]; + adev->acp.acp_cell[1].platform_data = i2s_pdata; + adev->acp.acp_cell[1].pdata_size = sizeof(struct i2s_platform_data); + + r = mfd_add_hotplug_devices(adev->acp.parent, adev->acp.acp_cell, 2); + + if (r) { + amd_acp_hw_fini(adev->acp.private); + return r; + } + return 0; +} + +/** + * acp_hw_fini - stop the hardware block + * + * @adev: amdgpu_device pointer + * + */ +static int acp_hw_fini(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + + if (adev->acp.private) { + amd_acp_hw_fini(adev->acp.private); + mfd_remove_devices(adev->acp.parent); + kfree(adev->acp.acp_res); + kfree(adev->acp.acp_cell); + } + + return 0; +} + +static int acp_suspend(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + + if (adev->acp.private) + amd_acp_suspend(adev->acp.private); + + return 0; +} + +static int acp_resume(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + + if (adev->acp.private) + amd_acp_resume(adev->acp.private); + + return 0; +} + +static bool acp_is_idle(void *handle) +{ + return true; +} + +static int acp_wait_for_idle(void *handle) +{ + return 0; +} + +static int acp_soft_reset(void *handle) +{ + return 0; +} + +static void acp_print_status(void *handle) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + + dev_info(adev->dev, "ACP STATUS\n"); +} + +static int acp_set_clockgating_state(void *handle, + enum amd_clockgating_state state) +{ + return 0; +} + +static int acp_set_powergating_state(void *handle, + enum amd_powergating_state state) +{ + struct amdgpu_device *adev = (struct amdgpu_device *)handle; + + /* This doesn't actually powergate the ACP block. + * That's done in the dpm code via the SMC. This + * just re-inits the block as necessary. The actual + * gating still happens in the dpm code. 
We should + * revisit this when there is a cleaner line between + * the smc and the hw blocks + */ + if (state == AMD_PG_STATE_GATE) { + if (adev->acp.private) + amd_acp_suspend(adev->acp.private); + } else { + if (adev->acp.private) + amd_acp_resume(adev->acp.private); + } + return 0; +} + +const struct amd_ip_funcs acp_ip_funcs = { + .early_init = acp_early_init, + .late_init = NULL, + .sw_init = acp_sw_init, + .sw_fini = acp_sw_fini, + .hw_init = acp_hw_init, + .hw_fini = acp_hw_fini, + .suspend = acp_suspend, + .resume = acp_resume, + .is_idle = acp_is_idle, + .wait_for_idle = acp_wait_for_idle, + .soft_reset = acp_soft_reset, + .print_status = acp_print_status, + .set_clockgating_state = acp_set_clockgating_state, + .set_powergating_state = acp_set_powergating_state, +}; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.h new file mode 100644 index 0000000..24952ed --- /dev/null +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acp.h @@ -0,0 +1,41 @@ +/* + * Copyright 2015 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * Authors: AMD + * + */ + +#ifndef __AMDGPU_ACP_H__ +#define __AMDGPU_ACP_H__ + +#include <linux/mfd/core.h> + +struct amdgpu_acp { + struct device *parent; + void *cgs_device; + struct amd_acp_private *private; + struct mfd_cell *acp_cell; + struct resource *acp_res; +}; + +extern const struct amd_ip_funcs acp_ip_funcs; + +#endif /* __AMDGPU_ACP_H__ */ diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c index b55ceb1..ffd8727 100644 --- a/drivers/gpu/drm/amd/amdgpu/vi.c +++ b/drivers/gpu/drm/amd/amdgpu/vi.c @@ -71,6 +71,9 @@ #include "uvd_v5_0.h" #include "uvd_v6_0.h" #include "vce_v3_0.h" +#if defined(CONFIG_DRM_AMD_ACP) +#include "amdgpu_acp.h" +#endif
/* * Indirect registers accessor @@ -1298,6 +1301,15 @@ static const struct amdgpu_ip_block_version cz_ip_blocks[] = .rev = 0, .funcs = &vce_v3_0_ip_funcs, }, +#if defined(CONFIG_DRM_AMD_ACP) + { + .type = AMD_IP_BLOCK_TYPE_ACP, + .major = 2, + .minor = 2, + .rev = 0, + .funcs = &acp_ip_funcs, + }, +#endif };
int vi_set_ip_blocks(struct amdgpu_device *adev) diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h index 68a8eaa..54bf96a 100644 --- a/drivers/gpu/drm/amd/include/amd_shared.h +++ b/drivers/gpu/drm/amd/include/amd_shared.h @@ -72,6 +72,7 @@ enum amd_ip_block_type { AMD_IP_BLOCK_TYPE_SDMA, AMD_IP_BLOCK_TYPE_UVD, AMD_IP_BLOCK_TYPE_VCE, + AMD_IP_BLOCK_TYPE_ACP, };
enum amd_clockgating_state { diff --git a/include/linux/mfd/amd_acp.h b/include/linux/mfd/amd_acp.h new file mode 100644 index 0000000..8290870 --- /dev/null +++ b/include/linux/mfd/amd_acp.h @@ -0,0 +1,43 @@ +/* + * Copyright 2015 Advanced Micro Devices, Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * +*/ + +#ifndef _AMD_ACP_H +#define _AMD_ACP_H + +#include <linux/types.h> + +struct acp_irq_prv { + struct device *dev; + void __iomem *acp_mmio; + int (*irq_handler)(struct device *dev); + void (*enable_intr)(void __iomem *acp_mmio, int enable); +}; + +/* Public interface of ACP device */ +struct amd_acp_device { + void (*irq_register)(struct amd_acp_device *acp_dev, void *iprv); + void (*irq_get)(struct amd_acp_device *acp_dev); + void (*irq_put)(struct amd_acp_device *acp_dev); +}; + +#endif /* _AMD_ACP_H */
On Thu, Oct 08, 2015 at 12:12:39PM -0400, Alex Deucher wrote:
+config DRM_AMD_ACP
bool "Enable ACP IP support"
default y
depends on MFD_CORE
help
- Choose this option to enable ACP IP support for AMD SOCs.
Why does this depend on rather than select MFD_CORE (as all other users do)? Since MFD_CORE has no help text and hence is not directly user selectable the above means the user won't be able to enable this driver unless they happen to enable something else which uses MFD which doesn't seem ideal.
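For reference, a minimal sketch of the select-based form being suggested, purely illustrative and keeping the rest of the entry as in the patch:

config DRM_AMD_ACP
	bool "Enable ACP IP support"
	default y
	select MFD_CORE
	help
	  Choose this option to enable ACP IP support for AMD SOCs.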
From: Maruthi Srinivas Bayyavarapu <Maruthi.Bayyavarapu@amd.com>
The ACP IP block consists of dedicated DMA and I2S blocks. The PCM driver provides the platform DMA component to the ALSA core.
Signed-off-by: Maruthi Bayyavarapu <maruthi.bayyavarapu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Murali Krishna Vemuri <murali-krishna.vemuri@amd.com>
---
v2: squash in Kconfig fix
v3: squash additional commits, convert to mfd, drop rt286 changes
v4: add major changes as below:
    1. Remove i2s specific changes and add them to the dwc i2s driver.
    2. Add ACP DMA logic to the PCM driver.
 sound/soc/Kconfig | 1 +
 sound/soc/Makefile | 1 +
 sound/soc/amd/Kconfig | 4 +
 sound/soc/amd/Makefile | 3 +
 sound/soc/amd/acp-pcm-dma.c | 518 +++++++++++++++++++++++++++++++
 sound/soc/amd/acp.c | 736 ++++++++++++++++++++++++++++++++++++++++++++
 sound/soc/amd/acp.h | 147 +++++++++
 7 files changed, 1410 insertions(+)
 create mode 100644 sound/soc/amd/Kconfig
 create mode 100644 sound/soc/amd/Makefile
 create mode 100644 sound/soc/amd/acp-pcm-dma.c
 create mode 100644 sound/soc/amd/acp.c
 create mode 100644 sound/soc/amd/acp.h
diff --git a/sound/soc/Kconfig b/sound/soc/Kconfig index 225bfda..a278840 100644 --- a/sound/soc/Kconfig +++ b/sound/soc/Kconfig @@ -35,6 +35,7 @@ config SND_SOC_TOPOLOGY
# All the supported SoCs source "sound/soc/adi/Kconfig" +source "sound/soc/amd/Kconfig" source "sound/soc/atmel/Kconfig" source "sound/soc/au1x/Kconfig" source "sound/soc/bcm/Kconfig" diff --git a/sound/soc/Makefile b/sound/soc/Makefile index 134aca1..5927544 100644 --- a/sound/soc/Makefile +++ b/sound/soc/Makefile @@ -17,6 +17,7 @@ obj-$(CONFIG_SND_SOC) += snd-soc-core.o obj-$(CONFIG_SND_SOC) += codecs/ obj-$(CONFIG_SND_SOC) += generic/ obj-$(CONFIG_SND_SOC) += adi/ +obj-$(CONFIG_SND_SOC) += amd/ obj-$(CONFIG_SND_SOC) += atmel/ obj-$(CONFIG_SND_SOC) += au1x/ obj-$(CONFIG_SND_SOC) += bcm/ diff --git a/sound/soc/amd/Kconfig b/sound/soc/amd/Kconfig new file mode 100644 index 0000000..78187eb --- /dev/null +++ b/sound/soc/amd/Kconfig @@ -0,0 +1,4 @@ +config SND_SOC_AMD_ACP + tristate "AMD Audio Coprocessor support" + help + This option enables ACP DMA support on AMD platform. diff --git a/sound/soc/amd/Makefile b/sound/soc/amd/Makefile new file mode 100644 index 0000000..62648cb --- /dev/null +++ b/sound/soc/amd/Makefile @@ -0,0 +1,3 @@ +snd-soc-acp-pcm-objs := acp-pcm-dma.o acp.o + +obj-$(CONFIG_SND_SOC_AMD_ACP) += snd-soc-acp-pcm.o diff --git a/sound/soc/amd/acp-pcm-dma.c b/sound/soc/amd/acp-pcm-dma.c new file mode 100644 index 0000000..5044188 --- /dev/null +++ b/sound/soc/amd/acp-pcm-dma.c @@ -0,0 +1,518 @@ +/* + * AMD ALSA SoC PCM Driver + * + * Copyright 2014-2015 Advanced Micro Devices, Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. 
+ */ + +#include <linux/interrupt.h> +#include <linux/platform_device.h> +#include <linux/device.h> +#include <linux/dma-mapping.h> +#include <linux/slab.h> +#include <linux/module.h> +#include <linux/err.h> +#include <linux/io.h> +#include <linux/pci.h> +#include <linux/pm_runtime.h> +#include <linux/mfd/amd_acp.h> + +#include <sound/pcm.h> +#include <sound/pcm_params.h> +#include <sound/soc.h> + +#include "acp.h" + +#define PLAYBACK_MIN_NUM_PERIODS 2 +#define PLAYBACK_MAX_NUM_PERIODS 2 +#define PLAYBACK_MAX_PERIOD_SIZE 16384 +#define PLAYBACK_MIN_PERIOD_SIZE 1024 +#define CAPTURE_MIN_NUM_PERIODS 2 +#define CAPTURE_MAX_NUM_PERIODS 2 +#define CAPTURE_MAX_PERIOD_SIZE 16384 +#define CAPTURE_MIN_PERIOD_SIZE 1024 + +#define NUM_DSCRS_PER_CHANNEL 2 + +#define MAX_BUFFER (PLAYBACK_MAX_PERIOD_SIZE * PLAYBACK_MAX_NUM_PERIODS) +#define MIN_BUFFER MAX_BUFFER + +static const struct snd_pcm_hardware acp_pcm_hardware_playback = { + .info = SNDRV_PCM_INFO_INTERLEAVED | + SNDRV_PCM_INFO_BLOCK_TRANSFER | SNDRV_PCM_INFO_MMAP | + SNDRV_PCM_INFO_MMAP_VALID | SNDRV_PCM_INFO_BATCH | + SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME, + .formats = SNDRV_PCM_FMTBIT_S16_LE | + SNDRV_PCM_FMTBIT_S24_LE | SNDRV_PCM_FMTBIT_S32_LE, + .channels_min = 1, + .channels_max = 8, + .rates = SNDRV_PCM_RATE_8000_96000, + .rate_min = 8000, + .rate_max = 96000, + .buffer_bytes_max = PLAYBACK_MAX_NUM_PERIODS * PLAYBACK_MAX_PERIOD_SIZE, + .period_bytes_min = PLAYBACK_MIN_PERIOD_SIZE, + .period_bytes_max = PLAYBACK_MAX_PERIOD_SIZE, + .periods_min = PLAYBACK_MIN_NUM_PERIODS, + .periods_max = PLAYBACK_MAX_NUM_PERIODS, +}; + +static const struct snd_pcm_hardware acp_pcm_hardware_capture = { + .info = SNDRV_PCM_INFO_INTERLEAVED | + SNDRV_PCM_INFO_BLOCK_TRANSFER | SNDRV_PCM_INFO_MMAP | + SNDRV_PCM_INFO_MMAP_VALID | SNDRV_PCM_INFO_BATCH | + SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME, + .formats = SNDRV_PCM_FMTBIT_S16_LE | + SNDRV_PCM_FMTBIT_S24_LE | SNDRV_PCM_FMTBIT_S32_LE, + .channels_min = 1, + .channels_max = 2, + .rates = SNDRV_PCM_RATE_8000_48000, + .rate_min = 8000, + .rate_max = 48000, + .buffer_bytes_max = CAPTURE_MAX_NUM_PERIODS * CAPTURE_MAX_PERIOD_SIZE, + .period_bytes_min = CAPTURE_MIN_PERIOD_SIZE, + .period_bytes_max = CAPTURE_MAX_PERIOD_SIZE, + .periods_min = CAPTURE_MIN_NUM_PERIODS, + .periods_max = CAPTURE_MAX_NUM_PERIODS, +}; + +struct audio_drv_data { + struct snd_pcm_substream *play_stream; + struct snd_pcm_substream *capture_stream; + struct acp_irq_prv *iprv; + void __iomem *acp_mmio; +}; + +/* ACP DMA irq handler routine for playback, capture usecases */ +int dma_irq_handler(struct device *dev) +{ + u16 dscr_idx; + u32 intr_flag; + + int priority_level = 0; + struct audio_drv_data *irq_data = dev_get_drvdata(dev); + void __iomem *acp_mmio = irq_data->acp_mmio; + + intr_flag = acp_get_intr_flag(acp_mmio); + + if ((intr_flag & BIT(ACP_TO_I2S_DMA_CH_NUM)) != 0) { + dscr_idx = get_dscr_idx(acp_mmio, SNDRV_PCM_STREAM_PLAYBACK); + config_acp_dma_channel(acp_mmio, SYSRAM_TO_ACP_CH_NUM, dscr_idx, + 1, priority_level); + acp_dma_start(acp_mmio, SYSRAM_TO_ACP_CH_NUM, false); + + snd_pcm_period_elapsed(irq_data->play_stream); + acp_ext_stat_clear_dmaioc(acp_mmio, ACP_TO_I2S_DMA_CH_NUM); + } + + if ((intr_flag & BIT(I2S_TO_ACP_DMA_CH_NUM)) != 0) { + dscr_idx = get_dscr_idx(acp_mmio, SNDRV_PCM_STREAM_CAPTURE); + config_acp_dma_channel(acp_mmio, ACP_TO_SYSRAM_CH_NUM, dscr_idx, + 1, priority_level); + acp_dma_start(acp_mmio, ACP_TO_SYSRAM_CH_NUM, false); + acp_ext_stat_clear_dmaioc(acp_mmio, I2S_TO_ACP_DMA_CH_NUM); + } + + if 
((intr_flag & BIT(ACP_TO_SYSRAM_CH_NUM)) != 0) { + snd_pcm_period_elapsed(irq_data->capture_stream); + acp_ext_stat_clear_dmaioc(acp_mmio, ACP_TO_SYSRAM_CH_NUM); + } + + return 0; +} + +static int acp_dma_open(struct snd_pcm_substream *substream) +{ + int ret = 0; + struct snd_pcm_runtime *runtime = substream->runtime; + struct snd_soc_pcm_runtime *prtd = substream->private_data; + struct audio_drv_data *intr_data = dev_get_drvdata(prtd->platform->dev); + + struct audio_substream_data *adata = + kzalloc(sizeof(struct audio_substream_data), GFP_KERNEL); + if (adata == NULL) + return -ENOMEM; + + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) + runtime->hw = acp_pcm_hardware_playback; + else + runtime->hw = acp_pcm_hardware_capture; + + ret = snd_pcm_hw_constraint_integer(runtime, + SNDRV_PCM_HW_PARAM_PERIODS); + if (ret < 0) { + dev_err(prtd->platform->dev, "set integer constraint failed\n"); + return ret; + } + + adata->acp_mmio = intr_data->acp_mmio; + runtime->private_data = adata; + + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) + intr_data->play_stream = substream; + else + intr_data->capture_stream = substream; + + return 0; +} + +static int acp_dma_hw_params(struct snd_pcm_substream *substream, + struct snd_pcm_hw_params *params) +{ + int status; + uint64_t size; + struct snd_dma_buffer *dma_buffer; + struct page *pg; + u16 num_of_pages; + struct snd_pcm_runtime *runtime; + struct audio_substream_data *rtd; + + dma_buffer = &substream->dma_buffer; + + runtime = substream->runtime; + rtd = runtime->private_data; + + if (WARN_ON(!rtd)) + return -EINVAL; + + size = params_buffer_bytes(params); + status = snd_pcm_lib_malloc_pages(substream, size); + if (status < 0) + return status; + + memset(substream->runtime->dma_area, 0, params_buffer_bytes(params)); + pg = virt_to_page(substream->dma_buffer.area); + + if (pg != NULL) { + /* Save for runtime private data */ + rtd->pg = pg; + rtd->order = get_order(size); + + /* Let ACP know the Allocated memory */ + num_of_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; + + /* Fill the page table entries in ACP SRAM */ + rtd->pg = pg; + rtd->size = size; + rtd->num_of_pages = num_of_pages; + rtd->direction = substream->stream; + + config_acp_dma(rtd->acp_mmio, rtd); + status = 0; + } else { + status = -ENOMEM; + } + return status; +} + +static int acp_dma_hw_free(struct snd_pcm_substream *substream) +{ + return snd_pcm_lib_free_pages(substream); +} + +static snd_pcm_uframes_t acp_dma_pointer(struct snd_pcm_substream *substream) +{ + u32 pos = 0; + struct snd_pcm_runtime *runtime = substream->runtime; + struct audio_substream_data *rtd = runtime->private_data; + + pos = acp_update_dma_pointer(rtd->acp_mmio, substream->stream, + frames_to_bytes(runtime, runtime->period_size)); + return bytes_to_frames(runtime, pos); + +} + +static int acp_dma_mmap(struct snd_pcm_substream *substream, + struct vm_area_struct *vma) +{ + return snd_pcm_lib_default_mmap(substream, vma); +} + +static int acp_dma_prepare(struct snd_pcm_substream *substream) +{ + struct snd_pcm_runtime *runtime = substream->runtime; + struct audio_substream_data *rtd = runtime->private_data; + + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + config_acp_dma_channel(rtd->acp_mmio, SYSRAM_TO_ACP_CH_NUM, + PLAYBACK_START_DMA_DESCR_CH12, + NUM_DSCRS_PER_CHANNEL, 0); + config_acp_dma_channel(rtd->acp_mmio, ACP_TO_I2S_DMA_CH_NUM, + PLAYBACK_START_DMA_DESCR_CH13, + NUM_DSCRS_PER_CHANNEL, 0); + /* Fill ACP SRAM (2 periods) with zeros from System RAM + * which is zero-ed in hw_params */ + 
acp_dma_start(rtd->acp_mmio, SYSRAM_TO_ACP_CH_NUM, false); + + /* ACP SRAM (2 periods of buffer size) is intially filled with + * zeros. Before rendering starts, 2nd half of SRAM will be + * filled with valid audio data DMA'ed from first half of system + * RAM and 1st half of SRAM will be filled with Zeros. This is + * the initial scenario when redering starts from SRAM. Later + * on, 2nd half of system memory will be DMA'ed to 1st half of + * SRAM, 1st half of system memory will be DMA'ed to 2nd half of + * SRAM in ping-pong way till rendering stops. */ + config_acp_dma_channel(rtd->acp_mmio, SYSRAM_TO_ACP_CH_NUM, + PLAYBACK_START_DMA_DESCR_CH12, + 1, 0); + } else { + config_acp_dma_channel(rtd->acp_mmio, ACP_TO_SYSRAM_CH_NUM, + CAPTURE_START_DMA_DESCR_CH14, + NUM_DSCRS_PER_CHANNEL, 0); + config_acp_dma_channel(rtd->acp_mmio, I2S_TO_ACP_DMA_CH_NUM, + CAPTURE_START_DMA_DESCR_CH15, + NUM_DSCRS_PER_CHANNEL, 0); + } + return 0; +} + +static int acp_dma_trigger(struct snd_pcm_substream *substream, int cmd) +{ + int ret; + + struct snd_pcm_runtime *runtime = substream->runtime; + struct audio_substream_data *rtd = runtime->private_data; + struct snd_soc_pcm_runtime *prtd = substream->private_data; + struct amd_acp_device *acp_dev = dev_get_platdata(prtd->platform->dev); + + if (!rtd) + return -EINVAL; + switch (cmd) { + case SNDRV_PCM_TRIGGER_START: + case SNDRV_PCM_TRIGGER_RESUME: + case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + acp_dma_start(rtd->acp_mmio, + SYSRAM_TO_ACP_CH_NUM, false); + prebuffer_audio(rtd->acp_mmio); + acp_dma_start(rtd->acp_mmio, + ACP_TO_I2S_DMA_CH_NUM, true); + acp_dev->irq_get(acp_dev); + + } else { + acp_dma_start(rtd->acp_mmio, + I2S_TO_ACP_DMA_CH_NUM, true); + acp_dev->irq_get(acp_dev); + } + ret = 0; + break; + case SNDRV_PCM_TRIGGER_STOP: + case SNDRV_PCM_TRIGGER_SUSPEND: + case SNDRV_PCM_TRIGGER_PAUSE_PUSH: + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { + acp_dma_stop(rtd->acp_mmio, SYSRAM_TO_ACP_CH_NUM); + acp_dma_stop(rtd->acp_mmio, ACP_TO_I2S_DMA_CH_NUM); + acp_dev->irq_put(acp_dev); + } else { + acp_dma_stop(rtd->acp_mmio, I2S_TO_ACP_DMA_CH_NUM); + acp_dma_stop(rtd->acp_mmio, ACP_TO_SYSRAM_CH_NUM); + acp_dev->irq_put(acp_dev); + } + ret = 0; + break; + default: + ret = -EINVAL; + + } + return ret; +} + +static int acp_dma_new(struct snd_soc_pcm_runtime *rtd) +{ + return snd_pcm_lib_preallocate_pages_for_all(rtd->pcm, + SNDRV_DMA_TYPE_DEV, + NULL, MIN_BUFFER, + MAX_BUFFER); +} + +static int acp_dma_close(struct snd_pcm_substream *substream) +{ + struct snd_pcm_runtime *runtime = substream->runtime; + struct audio_substream_data *rtd = runtime->private_data; + struct snd_soc_pcm_runtime *prtd = substream->private_data; + + kfree(rtd); + + pm_runtime_mark_last_busy(prtd->platform->dev); + return 0; +} + +static struct snd_pcm_ops acp_dma_ops = { + .open = acp_dma_open, + .close = acp_dma_close, + .ioctl = snd_pcm_lib_ioctl, + .hw_params = acp_dma_hw_params, + .hw_free = acp_dma_hw_free, + .trigger = acp_dma_trigger, + .pointer = acp_dma_pointer, + .mmap = acp_dma_mmap, + .prepare = acp_dma_prepare, +}; + +static struct snd_soc_platform_driver acp_asoc_platform = { + .ops = &acp_dma_ops, + .pcm_new = acp_dma_new, +}; + +static int acp_audio_probe(struct platform_device *pdev) +{ + int status; + struct audio_drv_data *audio_drv_data; + struct resource *res; + struct amd_acp_device *acp_dev = dev_get_platdata(&pdev->dev); + + audio_drv_data = devm_kzalloc(&pdev->dev, sizeof(struct audio_drv_data), + 
GFP_KERNEL); + if (audio_drv_data == NULL) + return -ENOMEM; + + audio_drv_data->iprv = devm_kzalloc(&pdev->dev, + sizeof(struct acp_irq_prv), + GFP_KERNEL); + if (audio_drv_data->iprv == NULL) + return -ENOMEM; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + audio_drv_data->acp_mmio = devm_ioremap_resource(&pdev->dev, res); + + /* The following members gets populated in device 'open' + * function. Till then interrupts are disabled in 'acp_hw_init' + * and device doesn't generate any interrupts. + */ + + audio_drv_data->play_stream = NULL; + audio_drv_data->capture_stream = NULL; + + audio_drv_data->iprv->dev = &pdev->dev; + audio_drv_data->iprv->acp_mmio = audio_drv_data->acp_mmio; + audio_drv_data->iprv->enable_intr = acp_enable_external_interrupts; + audio_drv_data->iprv->irq_handler = dma_irq_handler; + + dev_set_drvdata(&pdev->dev, audio_drv_data); + + /* Initialize the ACP */ + acp_hw_init(audio_drv_data->acp_mmio); + + acp_dev->irq_register(acp_dev, audio_drv_data->iprv); + + status = snd_soc_register_platform(&pdev->dev, &acp_asoc_platform); + if (0 != status) { + dev_err(&pdev->dev, "Fail to register ALSA platform device\n"); + return status; + } + + pm_runtime_set_autosuspend_delay(&pdev->dev, 10000); + pm_runtime_use_autosuspend(&pdev->dev); + pm_runtime_enable(&pdev->dev); + + return status; +} + +static int acp_audio_remove(struct platform_device *pdev) +{ + struct audio_drv_data *adata = dev_get_drvdata(&pdev->dev); + + acp_hw_deinit(adata->acp_mmio); + snd_soc_unregister_platform(&pdev->dev); + pm_runtime_disable(&pdev->dev); + + return 0; +} + +static int acp_pcm_suspend(struct device *dev) +{ + bool pm_rts; + struct audio_drv_data *adata = dev_get_drvdata(dev); + + pm_rts = pm_runtime_status_suspended(dev); + if (pm_rts == false) + acp_suspend(adata->acp_mmio); + + return 0; +} + +static int acp_pcm_resume(struct device *dev) +{ + bool pm_rts; + struct snd_pcm_substream *stream; + struct snd_pcm_runtime *rtd; + struct audio_substream_data *sdata; + struct audio_drv_data *adata = dev_get_drvdata(dev); + + pm_rts = pm_runtime_status_suspended(dev); + if (pm_rts == true) { + /* Resumed from system wide suspend and there is + * no pending audio activity to resume. */ + pm_runtime_disable(dev); + pm_runtime_set_active(dev); + pm_runtime_enable(dev); + + goto out; + } + + acp_resume(adata->acp_mmio); + + stream = adata->play_stream; + rtd = stream ? stream->runtime : NULL; + if (rtd != NULL) { + /* Resume playback stream from a suspended state */ + sdata = rtd->private_data; + config_acp_dma(adata->acp_mmio, sdata); + } + + stream = adata->capture_stream; + rtd = stream ? 
stream->runtime : NULL; + if (rtd != NULL) { + /* Resume capture stream from a suspended state */ + sdata = rtd->private_data; + config_acp_dma(adata->acp_mmio, sdata); + } +out: + return 0; +} + +static int acp_pcm_runtime_suspend(struct device *dev) +{ + struct audio_drv_data *adata = dev_get_drvdata(dev); + + acp_suspend(adata->acp_mmio); + return 0; +} + +static int acp_pcm_runtime_resume(struct device *dev) +{ + struct audio_drv_data *adata = dev_get_drvdata(dev); + + acp_resume(adata->acp_mmio); + return 0; +} + +static const struct dev_pm_ops acp_pm_ops = { + .suspend = acp_pcm_suspend, + .resume = acp_pcm_resume, + .runtime_suspend = acp_pcm_runtime_suspend, + .runtime_resume = acp_pcm_runtime_resume, +}; + +static struct platform_driver acp_dma_driver = { + .probe = acp_audio_probe, + .remove = acp_audio_remove, + .driver = { + .name = "acp_audio_dma", + .pm = &acp_pm_ops, + }, +}; + +module_platform_driver(acp_dma_driver); + +MODULE_AUTHOR("Maruthi.Bayyavarapu@amd.com"); +MODULE_DESCRIPTION("AMD ACP PCM Driver"); +MODULE_LICENSE("GPL v2"); +MODULE_ALIAS("platform:acp-dma-audio"); diff --git a/sound/soc/amd/acp.c b/sound/soc/amd/acp.c new file mode 100644 index 0000000..59ec312 --- /dev/null +++ b/sound/soc/amd/acp.c @@ -0,0 +1,736 @@ +/* + * AMD ACP module + * + * Copyright 2015 Advanced Micro Devices, Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. +*/ + +#include <linux/mm.h> +#include <linux/slab.h> +#include <linux/io.h> +#include <linux/device.h> +#include <linux/delay.h> +#include <linux/errno.h> +#include <sound/asound.h> +#include "acp.h" + +#include "include/acp_2_2_d.h" +#include "include/acp_2_2_sh_mask.h" + +#define VISLANDS30_IV_SRCID_ACP 0x000000a2 + +static u32 acp_reg_read(void __iomem *acp_mmio, u32 reg) +{ + return readl(acp_mmio + (reg * 4)); +} + +static void acp_reg_write(u32 val, void __iomem *acp_mmio, u32 reg) +{ + writel(val, acp_mmio + (reg * 4)); +} + +/* Configure a given dma channel parameters - enable/disble, + * number of descriptors, priority */ +void config_acp_dma_channel(void __iomem *acp_mmio, u8 ch_num, + u16 dscr_strt_idx, u16 num_dscrs, + enum acp_dma_priority_level priority_level) +{ + u32 dma_ctrl; + + /* disable the channel run field */ + dma_ctrl = acp_reg_read(acp_mmio, mmACP_DMA_CNTL_0 + ch_num); + dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRun_MASK; + acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + ch_num); + + /* program a DMA channel with first descriptor to be processed. */ + acp_reg_write((ACP_DMA_DSCR_STRT_IDX_0__DMAChDscrStrtIdx_MASK + & dscr_strt_idx), + acp_mmio, mmACP_DMA_DSCR_STRT_IDX_0 + ch_num); + + /* program a DMA channel with the number of descriptors to be + * processed in the transfer */ + acp_reg_write(ACP_DMA_DSCR_CNT_0__DMAChDscrCnt_MASK & num_dscrs, + acp_mmio, mmACP_DMA_DSCR_CNT_0 + ch_num); + + /* set DMA channel priority */ + acp_reg_write(priority_level, acp_mmio, mmACP_DMA_PRIO_0 + ch_num); +} + +/* Initialize the dma descriptors location in SRAM and page size */ +static void acp_dma_descr_init(void __iomem *acp_mmio) +{ + u32 sram_pte_offset = 0; + + /* SRAM starts at 0x04000000. 
From that offset one page (4KB) left for + * filling DMA descriptors.sram_pte_offset = 0x04001000 , used for + * filling system RAM's physical pages. + * This becomes the ALSA's Ring buffer start address + */ + sram_pte_offset = ACP_DAGB_GRP_SRAM_BASE_ADDRESS; + + /* snoopable */ + sram_pte_offset |= ACP_DAGB_BASE_ADDR_GRP_1__AXI2DAGBSnoopSel_MASK; + /* Memmory is system mmemory */ + sram_pte_offset |= ACP_DAGB_BASE_ADDR_GRP_1__AXI2DAGBTargetMemSel_MASK; + /* Page Enabled */ + sram_pte_offset |= ACP_DAGB_BASE_ADDR_GRP_1__AXI2DAGBGrpEnable_MASK; + + acp_reg_write(sram_pte_offset, acp_mmio, mmACP_DAGB_BASE_ADDR_GRP_1); + acp_reg_write(PAGE_SIZE_4K_ENABLE, acp_mmio, + mmACP_DAGB_PAGE_SIZE_GRP_1); +} + +/* Initialize a dma descriptor in SRAM based on descritor information passed */ +static void config_dma_descriptor_in_sram(void __iomem *acp_mmio, + u16 descr_idx, + acp_dma_dscr_transfer_t *descr_info) +{ + u32 sram_offset; + + sram_offset = (descr_idx * sizeof(acp_dma_dscr_transfer_t)); + + /* program the source base address. */ + acp_reg_write(sram_offset, acp_mmio, mmACP_SRBM_Targ_Idx_Addr); + acp_reg_write(descr_info->src, acp_mmio, mmACP_SRBM_Targ_Idx_Data); + /* program the destination base address. */ + acp_reg_write(sram_offset + 4, acp_mmio, mmACP_SRBM_Targ_Idx_Addr); + acp_reg_write(descr_info->dest, acp_mmio, mmACP_SRBM_Targ_Idx_Data); + + /* program the number of bytes to be transferred for this descriptor. */ + acp_reg_write(sram_offset + 8, acp_mmio, mmACP_SRBM_Targ_Idx_Addr); + acp_reg_write(descr_info->xfer_val, acp_mmio, mmACP_SRBM_Targ_Idx_Data); +} + +/* Initialize the DMA descriptor information for transfer between + * system memory <-> ACP SRAM + */ +static void set_acp_sysmem_dma_descriptors(void __iomem *acp_mmio, + u32 size, int direction, + u32 pte_offset) +{ + u16 num_descr; + u16 dma_dscr_idx = PLAYBACK_START_DMA_DESCR_CH12; + acp_dma_dscr_transfer_t dmadscr[2]; + + num_descr = 2; + + dmadscr[0].xfer_val = 0; + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { + dma_dscr_idx = PLAYBACK_START_DMA_DESCR_CH12; + dmadscr[0].dest = ACP_SHARED_RAM_BANK_1_ADDRESS + (size / 2); + dmadscr[0].src = ACP_INTERNAL_APERTURE_WINDOW_0_ADDRESS + + (pte_offset * PAGE_SIZE_4K); + dmadscr[0].xfer_val |= (DISABLE << 22) | + (ACP_DMA_ATTRIBUTES_DAGB_ONION_TO_SHAREDMEM << 16) | + (size / 2); + } else { + dma_dscr_idx = CAPTURE_START_DMA_DESCR_CH14; + dmadscr[0].src = ACP_SHARED_RAM_BANK_5_ADDRESS; + dmadscr[0].dest = ACP_INTERNAL_APERTURE_WINDOW_0_ADDRESS + + (pte_offset * PAGE_SIZE_4K); + dmadscr[0].xfer_val |= + (ENABLE << 22) | + (ACP_DMA_ATTRIBUTES_SHAREDMEM_TO_DAGB_ONION << 16) | + (size / 2); + } + + config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx, &dmadscr[0]); + + dmadscr[1].xfer_val = 0; + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { + dma_dscr_idx = PLAYBACK_END_DMA_DESCR_CH12; + dmadscr[1].dest = ACP_SHARED_RAM_BANK_1_ADDRESS; + dmadscr[1].src = ACP_INTERNAL_APERTURE_WINDOW_0_ADDRESS + + (pte_offset * PAGE_SIZE_4K) + (size / 2); + dmadscr[1].xfer_val |= (DISABLE << 22) | + (ACP_DMA_ATTRIBUTES_DAGB_ONION_TO_SHAREDMEM << 16) | + (size / 2); + } else { + dma_dscr_idx = CAPTURE_END_DMA_DESCR_CH14; + dmadscr[1].dest = dmadscr[0].dest + (size / 2); + dmadscr[1].src = dmadscr[0].src + (size / 2); + dmadscr[1].xfer_val |= (ENABLE << 22) | + (ACP_DMA_ATTRIBUTES_SHAREDMEM_TO_DAGB_ONION << 16) | + (size / 2); + } + + config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx, &dmadscr[1]); + + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { + /* starting descriptor for this channel */ + 
dma_dscr_idx = PLAYBACK_START_DMA_DESCR_CH12; + config_acp_dma_channel(acp_mmio, SYSRAM_TO_ACP_CH_NUM, + dma_dscr_idx, num_descr, + ACP_DMA_PRIORITY_LEVEL_NORMAL); + } else { + /* starting descriptor for this channel */ + dma_dscr_idx = CAPTURE_START_DMA_DESCR_CH14; + config_acp_dma_channel(acp_mmio, ACP_TO_SYSRAM_CH_NUM, + dma_dscr_idx, num_descr, + ACP_DMA_PRIORITY_LEVEL_NORMAL); + } +} + +/* Initialize the DMA descriptor information for transfer between + * ACP SRAM <-> I2S + */ +static void set_acp_to_i2s_dma_descriptors(void __iomem *acp_mmio, + u32 size, int direction) +{ + + u16 num_descr; + u16 dma_dscr_idx = PLAYBACK_START_DMA_DESCR_CH13; + acp_dma_dscr_transfer_t dmadscr[2]; + + num_descr = 2; + + dmadscr[0].xfer_val = 0; + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { + dma_dscr_idx = PLAYBACK_START_DMA_DESCR_CH13; + dmadscr[0].src = ACP_SHARED_RAM_BANK_1_ADDRESS; + /* dmadscr[0].dest is unused by hardware. Assgned to 0 to + * remove compiler warning */ + dmadscr[0].dest = 0; + dmadscr[0].xfer_val |= (ENABLE << 22) | (TO_ACP_I2S_1 << 16) | + (size / 2); + } else { + dma_dscr_idx = CAPTURE_START_DMA_DESCR_CH15; + /* dmadscr[0].src is unused by hardware. Assgned to 0 to + * remove compiler warning */ + dmadscr[0].src = 0; + dmadscr[0].dest = ACP_SHARED_RAM_BANK_5_ADDRESS; + dmadscr[0].xfer_val |= (ENABLE << 22) | + (FROM_ACP_I2S_1 << 16) | (size / 2); + } + + config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx, &dmadscr[0]); + + dmadscr[1].xfer_val = 0; + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { + dma_dscr_idx = PLAYBACK_END_DMA_DESCR_CH13; + dmadscr[1].src = dmadscr[0].src + (size / 2); + /* dmadscr[1].dest is unused by hardware. Assgned to 0 to + * remove compiler warning */ + dmadscr[1].dest = 0; + dmadscr[1].xfer_val |= (ENABLE << 22) | (TO_ACP_I2S_1 << 16) | + (size / 2); + } else { + dma_dscr_idx = CAPTURE_END_DMA_DESCR_CH15; + /* dmadscr[1].src is unused by hardware. Assgned to 0 to + * remove compiler warning */ + dmadscr[1].src = 0; + dmadscr[1].dest = dmadscr[0].dest + (size / 2); + dmadscr[1].xfer_val |= (ENABLE << 22) | + (FROM_ACP_I2S_1 << 16) | (size / 2); + } + + config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx, &dmadscr[1]); + + /* Configure the DMA channel with the above descriptore */ + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { + /* starting descriptor for this channel */ + dma_dscr_idx = PLAYBACK_START_DMA_DESCR_CH13; + config_acp_dma_channel(acp_mmio, ACP_TO_I2S_DMA_CH_NUM, + dma_dscr_idx, num_descr, + ACP_DMA_PRIORITY_LEVEL_NORMAL); + } else { + /* starting descriptor for this channel */ + dma_dscr_idx = CAPTURE_START_DMA_DESCR_CH15; + config_acp_dma_channel(acp_mmio, I2S_TO_ACP_DMA_CH_NUM, + dma_dscr_idx, num_descr, + ACP_DMA_PRIORITY_LEVEL_NORMAL); + } + +} + +u16 get_dscr_idx(void __iomem *acp_mmio, int direction) +{ + u16 dscr_idx; + + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { + dscr_idx = acp_reg_read(acp_mmio, mmACP_DMA_CUR_DSCR_13); + dscr_idx = (dscr_idx == PLAYBACK_START_DMA_DESCR_CH13) ? + PLAYBACK_START_DMA_DESCR_CH12 : + PLAYBACK_END_DMA_DESCR_CH12; + } else { + dscr_idx = acp_reg_read(acp_mmio, mmACP_DMA_CUR_DSCR_15); + dscr_idx = (dscr_idx == CAPTURE_START_DMA_DESCR_CH15) ? 
+ CAPTURE_END_DMA_DESCR_CH14 : + CAPTURE_START_DMA_DESCR_CH14; + } + + return dscr_idx; +} + +/* Create page table entries in ACP SRAM for the allocated memory */ +static void acp_pte_config(void __iomem *acp_mmio, struct page *pg, + u16 num_of_pages, u32 pte_offset) +{ + u16 page_idx; + u64 addr; + u32 low; + u32 high; + u32 offset; + + offset = ACP_DAGB_GRP_SRBM_SRAM_BASE_OFFSET + (pte_offset * 8); + for (page_idx = 0; page_idx < (num_of_pages); page_idx++) { + /* Load the low address of page int ACP SRAM through SRBM */ + acp_reg_write((offset + (page_idx * 8)), + acp_mmio, mmACP_SRBM_Targ_Idx_Addr); + addr = page_to_phys(pg); + + low = lower_32_bits(addr); + high = upper_32_bits(addr); + + acp_reg_write(low, acp_mmio, mmACP_SRBM_Targ_Idx_Data); + + /* Load the High address of page int ACP SRAM through SRBM */ + acp_reg_write((offset + (page_idx * 8) + 4), + acp_mmio, mmACP_SRBM_Targ_Idx_Addr); + + /* page enable in ACP */ + high |= BIT(31); + acp_reg_write(high, acp_mmio, mmACP_SRBM_Targ_Idx_Data); + + /* Move to next physically contiguos page */ + pg++; + } +} + +/* enables/disables ACP's external interrupt */ +void acp_enable_external_interrupts(void __iomem *acp_mmio, + int enable) +{ + u32 acp_ext_intr_enb; + + acp_ext_intr_enb = enable ? + ACP_EXTERNAL_INTR_ENB__ACPExtIntrEnb_MASK : 0; + + /* Write the Software External Interrupt Enable register */ + acp_reg_write(acp_ext_intr_enb, acp_mmio, mmACP_EXTERNAL_INTR_ENB); +} + +/* Clear (acknowledge) DMA 'Interrupt on Complete' (IOC) in ACP + * external interrupt status register + */ +void acp_ext_stat_clear_dmaioc(void __iomem *acp_mmio, u8 ch_num) +{ + u32 ext_intr_stat; + u32 chmask = BIT(ch_num); + + ext_intr_stat = acp_reg_read(acp_mmio, mmACP_EXTERNAL_INTR_STAT); + if (ext_intr_stat & (chmask << + ACP_EXTERNAL_INTR_STAT__DMAIOCStat__SHIFT)) { + + ext_intr_stat &= (chmask << + ACP_EXTERNAL_INTR_STAT__DMAIOCAck__SHIFT); + acp_reg_write(ext_intr_stat, acp_mmio, + mmACP_EXTERNAL_INTR_STAT); + } +} + +/* Check whether ACP DMA interrupt (IOC) is generated or not */ +u32 acp_get_intr_flag(void __iomem *acp_mmio) +{ + u32 ext_intr_status; + u32 intr_gen; + + ext_intr_status = acp_reg_read(acp_mmio, mmACP_EXTERNAL_INTR_STAT); + intr_gen = (((ext_intr_status & + ACP_EXTERNAL_INTR_STAT__DMAIOCStat_MASK) >> + ACP_EXTERNAL_INTR_STAT__DMAIOCStat__SHIFT)); + + return intr_gen; +} + +void config_acp_dma(void __iomem *acp_mmio, + struct audio_substream_data *audio_config) +{ + u32 pte_offset; + + if (audio_config->direction == SNDRV_PCM_STREAM_PLAYBACK) + pte_offset = PLAYBACK_PTE_OFFSET; + else + pte_offset = CAPTURE_PTE_OFFSET; + + acp_pte_config(acp_mmio, audio_config->pg, audio_config->num_of_pages, + pte_offset); + + /* Configure System memory <-> ACP SRAM DMA descriptors */ + set_acp_sysmem_dma_descriptors(acp_mmio, audio_config->size, + audio_config->direction, pte_offset); + + /* Configure ACP SRAM <-> I2S DMA descriptors */ + set_acp_to_i2s_dma_descriptors(acp_mmio, audio_config->size, + audio_config->direction); +} + +/* Start a given DMA channel transfer */ +void acp_dma_start(void __iomem *acp_mmio, + u16 ch_num, bool is_circular) +{ + u32 dma_ctrl; + + /* read the dma control register and disable the channel run field */ + dma_ctrl = acp_reg_read(acp_mmio, mmACP_DMA_CNTL_0 + ch_num); + + /*Invalidating the DAGB cache */ + acp_reg_write(ENABLE, acp_mmio, mmACP_DAGB_ATU_CTRL); + + /* configure the DMA channel and start the DMA transfer + * set dmachrun bit to start the transfer and enable the + * interrupt on completion of the dma 
transfer + */ + dma_ctrl |= ACP_DMA_CNTL_0__DMAChRun_MASK; + + if ((ch_num == ACP_TO_I2S_DMA_CH_NUM) || + (ch_num == ACP_TO_SYSRAM_CH_NUM) || + (ch_num == I2S_TO_ACP_DMA_CH_NUM)) + dma_ctrl |= ACP_DMA_CNTL_0__DMAChIOCEn_MASK; + else + dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChIOCEn_MASK; + + /* enable for ACP SRAM to/from I2S DMA channel */ + if (is_circular == true) + dma_ctrl |= ACP_DMA_CNTL_0__Circular_DMA_En_MASK; + else + dma_ctrl &= ~ACP_DMA_CNTL_0__Circular_DMA_En_MASK; + + acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + ch_num); +} + +/* Stop a given DMA channel transfer */ +void acp_dma_stop(void __iomem *acp_mmio, u8 ch_num) +{ + u32 dma_ctrl; + u32 dma_ch_sts; + u32 delay_time = ACP_DMA_RESET_TIME; + + dma_ctrl = acp_reg_read(acp_mmio, mmACP_DMA_CNTL_0 + ch_num); + + /* clear the dma control register fields before writing zero + * in reset bit + */ + dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRun_MASK; + dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChIOCEn_MASK; + + acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + ch_num); + dma_ch_sts = acp_reg_read(acp_mmio, mmACP_DMA_CH_STS); + + if (dma_ch_sts & BIT(ch_num)) { + /* set the reset bit for this channel + * to stop the dma transfer */ + dma_ctrl |= ACP_DMA_CNTL_0__DMAChRst_MASK; + acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + ch_num); + } + + /* check the channel status bit for some time and return the status */ + while (0 < delay_time) { + dma_ch_sts = acp_reg_read(acp_mmio, mmACP_DMA_CH_STS); + if (!(dma_ch_sts & BIT(ch_num))) { + /* clear the reset flag after successfully stopping + the dma transfer and break from the loop */ + dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRst_MASK; + + acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + + ch_num); + break; + } + delay_time--; + } +} + +/* power off a tile/block within ACP */ +static void acp_suspend_tile(void __iomem *acp_mmio, int tile) +{ + u32 val = 0; + u32 timeout = 0; + + if ((tile < ACP_TILE_P1) || (tile > ACP_TILE_DSP2)) + return; + + val = acp_reg_read(acp_mmio, mmACP_PGFSM_READ_REG_0 + tile); + val &= ACP_TILE_ON_MASK; + + if (val == 0x0) { + val = acp_reg_read(acp_mmio, mmACP_PGFSM_RETAIN_REG); + val = val | (1 << tile); + acp_reg_write(val, acp_mmio, mmACP_PGFSM_RETAIN_REG); + acp_reg_write(0x500 + tile, acp_mmio, mmACP_PGFSM_CONFIG_REG); + + timeout = ACP_SOFT_RESET_DONE_TIME_OUT_VALUE; + while (timeout--) { + val = acp_reg_read(acp_mmio, mmACP_PGFSM_READ_REG_0 + + tile); + val = val & ACP_TILE_ON_MASK; + if (val == ACP_TILE_OFF_MASK) + break; + } + + val = acp_reg_read(acp_mmio, mmACP_PGFSM_RETAIN_REG); + + val |= ACP_TILE_OFF_RETAIN_REG_MASK; + acp_reg_write(val, acp_mmio, mmACP_PGFSM_RETAIN_REG); + } +} + +/* power on a tile/block within ACP */ +static void acp_resume_tile(void __iomem *acp_mmio, int tile) +{ + u32 val = 0; + u32 timeout = 0; + + if ((tile < ACP_TILE_P1) || (tile > ACP_TILE_DSP2)) + return; + + val = acp_reg_read(acp_mmio, mmACP_PGFSM_READ_REG_0 + tile); + val = val & ACP_TILE_ON_MASK; + + if (val != 0x0) { + acp_reg_write(0x600 + tile, acp_mmio, mmACP_PGFSM_CONFIG_REG); + timeout = ACP_SOFT_RESET_DONE_TIME_OUT_VALUE; + while (timeout--) { + val = acp_reg_read(acp_mmio, mmACP_PGFSM_READ_REG_0 + + tile); + val = val & ACP_TILE_ON_MASK; + if (val == 0x0) + break; + } + val = acp_reg_read(acp_mmio, mmACP_PGFSM_RETAIN_REG); + if (tile == ACP_TILE_P1) + val = val & (ACP_TILE_P1_MASK); + else if (tile == ACP_TILE_P2) + val = val & (ACP_TILE_P2_MASK); + + acp_reg_write(val, acp_mmio, mmACP_PGFSM_RETAIN_REG); + } +} + +/* Shutdown unused SRAM memory banks in ACP IP */ 
+static void acp_turnoff_sram_banks(void __iomem *acp_mmio) +{ + /* Bank 0 : used for DMA descriptors + * Bank 1 to 4 : used for playback + * Bank 5 to 8 : used for capture + * Each bank is 8kB and max size allocated for playback/ capture is + * 16kB(max period size) * 2(max periods) reserved for playback/capture + * in ALSA driver + * Turn off all SRAM banks except above banks during playback/capture + */ + u32 val, bank; + + for (bank = 9; bank < 32; bank++) { + val = acp_reg_read(acp_mmio, mmACP_MEM_SHUT_DOWN_REQ_LO); + if (!(val & (1 << bank))) { + val |= 1 << bank; + acp_reg_write(val, acp_mmio, + mmACP_MEM_SHUT_DOWN_REQ_LO); + /* If ACP_MEM_SHUT_DOWN_STS_LO is 0xFFFFFFFF, then + * shutdown sequence is complete. */ + do { + val = acp_reg_read(acp_mmio, + mmACP_MEM_SHUT_DOWN_STS_LO); + } while (val != 0xFFFFFFFF); + } + } + + for (bank = 32; bank < 48; bank++) { + val = acp_reg_read(acp_mmio, mmACP_MEM_SHUT_DOWN_REQ_HI); + if (!(val & (1 << (bank - 32)))) { + val |= 1 << (bank - 32); + acp_reg_write(val, acp_mmio, + mmACP_MEM_SHUT_DOWN_REQ_HI); + /* If ACP_MEM_SHUT_DOWN_STS_HI is 0x0000FFFF, then + * shutdown sequence is complete. */ + do { + val = acp_reg_read(acp_mmio, + mmACP_MEM_SHUT_DOWN_STS_HI); + } while (val != 0x0000FFFF); + } + } +} + +/* Initialize and bring ACP hardware to default state. */ +static void acp_init(void __iomem *acp_mmio) +{ + u32 val; + u32 timeout_value; + + /* Assert Soft reset of ACP */ + val = acp_reg_read(acp_mmio, mmACP_SOFT_RESET); + + val |= ACP_SOFT_RESET__SoftResetAud_MASK; + acp_reg_write(val, acp_mmio, mmACP_SOFT_RESET); + + timeout_value = ACP_SOFT_RESET_DONE_TIME_OUT_VALUE; + while (timeout_value--) { + val = acp_reg_read(acp_mmio, mmACP_SOFT_RESET); + if (ACP_SOFT_RESET__SoftResetAudDone_MASK == + (val & ACP_SOFT_RESET__SoftResetAudDone_MASK)) + break; + } + + /* Enable clock to ACP and wait until the clock is enabled */ + val = acp_reg_read(acp_mmio, mmACP_CONTROL); + val = val | ACP_CONTROL__ClkEn_MASK; + acp_reg_write(val, acp_mmio, mmACP_CONTROL); + + timeout_value = ACP_CLOCK_EN_TIME_OUT_VALUE; + + while (timeout_value--) { + val = acp_reg_read(acp_mmio, mmACP_STATUS); + if (val & (u32) 0x1) + break; + udelay(100); + } + + /* Deassert the SOFT RESET flags */ + val = acp_reg_read(acp_mmio, mmACP_SOFT_RESET); + val &= ~ACP_SOFT_RESET__SoftResetAud_MASK; + acp_reg_write(val, acp_mmio, mmACP_SOFT_RESET); + + /* initiailizing Garlic Control DAGB register */ + acp_reg_write(ONION_CNTL_DEFAULT, acp_mmio, mmACP_AXI2DAGB_ONION_CNTL); + + /* initiailizing Onion Control DAGB registers */ + acp_reg_write(GARLIC_CNTL_DEFAULT, acp_mmio, + mmACP_AXI2DAGB_GARLIC_CNTL); + + acp_dma_descr_init(acp_mmio); + + acp_reg_write(ACP_SRAM_BASE_ADDRESS, acp_mmio, + mmACP_DMA_DESC_BASE_ADDR); + + /* Num of descriptiors in SRAM 0x4, means 256 descriptors;(64 * 4) */ + acp_reg_write(0x4, acp_mmio, mmACP_DMA_DESC_MAX_NUM_DSCR); + acp_reg_write(ACP_EXTERNAL_INTR_CNTL__DMAIOCMask_MASK, + acp_mmio, mmACP_EXTERNAL_INTR_CNTL); + + acp_turnoff_sram_banks(acp_mmio); +} + +void acp_hw_init(void __iomem *acp_mmio) +{ + acp_init(acp_mmio); + + /* Disable DSPs which are not used */ + acp_suspend_tile(acp_mmio, ACP_TILE_DSP0); + acp_suspend_tile(acp_mmio, ACP_TILE_DSP1); + acp_suspend_tile(acp_mmio, ACP_TILE_DSP2); +} + +/* Deintialize ACP */ +void acp_hw_deinit(void __iomem *acp_mmio) +{ + u32 val; + u32 timeout_value; + + /* Assert Soft reset of ACP */ + val = acp_reg_read(acp_mmio, mmACP_SOFT_RESET); + + val |= ACP_SOFT_RESET__SoftResetAud_MASK; + acp_reg_write(val, acp_mmio, 
mmACP_SOFT_RESET); + + timeout_value = ACP_SOFT_RESET_DONE_TIME_OUT_VALUE; + while (timeout_value--) { + val = acp_reg_read(acp_mmio, mmACP_SOFT_RESET); + if (ACP_SOFT_RESET__SoftResetAudDone_MASK == + (val & ACP_SOFT_RESET__SoftResetAudDone_MASK)) { + break; + } + } + /** Disable ACP clock */ + val = acp_reg_read(acp_mmio, mmACP_CONTROL); + val &= ~ACP_CONTROL__ClkEn_MASK; + acp_reg_write(val, acp_mmio, mmACP_CONTROL); + + timeout_value = ACP_CLOCK_EN_TIME_OUT_VALUE; + + while (timeout_value--) { + val = acp_reg_read(acp_mmio, mmACP_STATUS); + if (!(val & (u32) 0x1)) + break; + udelay(100); + } +} + +/* Update DMA postion in audio ring buffer at period level granularity. + * This will be used by ALSA PCM driver + */ +u32 acp_update_dma_pointer(void __iomem *acp_mmio, int direction, + u32 period_size) +{ + u32 pos; + u16 dscr; + u32 mul; + u32 dma_config; + + pos = 0; + + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { + dscr = acp_reg_read(acp_mmio, mmACP_DMA_CUR_DSCR_13); + + mul = (dscr == PLAYBACK_START_DMA_DESCR_CH13) ? 0 : 1; + pos = (mul * period_size); + + } else { + dma_config = acp_reg_read(acp_mmio, mmACP_DMA_CNTL_14); + if (dma_config != 0) { + dscr = acp_reg_read(acp_mmio, mmACP_DMA_CUR_DSCR_14); + mul = (dscr == CAPTURE_START_DMA_DESCR_CH14) ? 1 : 2; + pos = (mul * period_size); + } + + if (pos >= (2 * period_size)) + pos = 0; + + } + return pos; +} + +/* Wait for initial buffering to complete in HOST to SRAM DMA channel + * for plaback usecase + */ +void prebuffer_audio(void __iomem *acp_mmio) +{ + u32 dma_ch_sts; + u32 channel_mask = BIT(SYSRAM_TO_ACP_CH_NUM); + + do { + /* Read the channel status to poll dma transfer completion + * (System RAM to SRAM) + * In this case, it will be runtime->start_threshold + * (2 ALSA periods) of transfer. Rendering starts after this + * threshold is met. 
+ */ + dma_ch_sts = acp_reg_read(acp_mmio, mmACP_DMA_CH_STS); + udelay(20); + } while (dma_ch_sts & channel_mask); +} + +void acp_suspend(void __iomem *acp_mmio) +{ + acp_suspend_tile(acp_mmio, ACP_TILE_P2); + acp_suspend_tile(acp_mmio, ACP_TILE_P1); +} + +void acp_resume(void __iomem *acp_mmio) +{ + acp_resume_tile(acp_mmio, ACP_TILE_P1); + acp_resume_tile(acp_mmio, ACP_TILE_P2); + + acp_init(acp_mmio); + + /* Disable DSPs which are not going to be used */ + acp_suspend_tile(acp_mmio, ACP_TILE_DSP0); + acp_suspend_tile(acp_mmio, ACP_TILE_DSP1); + acp_suspend_tile(acp_mmio, ACP_TILE_DSP2); +} diff --git a/sound/soc/amd/acp.h b/sound/soc/amd/acp.h new file mode 100644 index 0000000..4e4417f --- /dev/null +++ b/sound/soc/amd/acp.h @@ -0,0 +1,147 @@ +#ifndef __ACP_HW_H +#define __ACP_HW_H + +#define ACP_MODE_I2S 0 +#define ACP_MODE_AZ 1 + +#define DISABLE 0 +#define ENABLE 1 + +#define PAGE_SIZE_4K 4096 +#define PAGE_SIZE_4K_ENABLE 0x02 + +#define PLAYBACK_PTE_OFFSET 10 +#define CAPTURE_PTE_OFFSET 0 + +#define GARLIC_CNTL_DEFAULT 0x00000FB4 +#define ONION_CNTL_DEFAULT 0x00000FB4 + +#define ACP_PHYSICAL_BASE 0x14000 + +/* Playback SRAM address (as a destination in dma descriptor) */ +#define ACP_SHARED_RAM_BANK_1_ADDRESS 0x4002000 + +/* Capture SRAM address (as a source in dma descriptor) */ +#define ACP_SHARED_RAM_BANK_5_ADDRESS 0x400A000 + +#define ACP_DMA_RESET_TIME 10000 +#define ACP_CLOCK_EN_TIME_OUT_VALUE 0x000000FF +#define ACP_SOFT_RESET_DONE_TIME_OUT_VALUE 0x000000FF +#define ACP_DMA_COMPLETE_TIME_OUT_VALUE 0x000000FF + +#define ACP_SRAM_BASE_ADDRESS 0x4000000 +#define ACP_DAGB_GRP_SRAM_BASE_ADDRESS 0x4001000 +#define ACP_DAGB_GRP_SRBM_SRAM_BASE_OFFSET 0x1000 +#define ACP_INTERNAL_APERTURE_WINDOW_0_ADDRESS 0x00000000 +#define ACP_INTERNAL_APERTURE_WINDOW_4_ADDRESS 0x01800000 + +#define TO_ACP_I2S_1 0x2 +#define TO_ACP_I2S_2 0x4 +#define FROM_ACP_I2S_1 0xa +#define FROM_ACP_I2S_2 0xb + +#define ACP_TILE_ON_MASK 0x03 +#define ACP_TILE_OFF_MASK 0x02 +#define ACP_TILE_ON_RETAIN_REG_MASK 0x1f +#define ACP_TILE_OFF_RETAIN_REG_MASK 0x20 + +#define ACP_TILE_P1_MASK 0x3e +#define ACP_TILE_P2_MASK 0x3d +#define ACP_TILE_DSP0_MASK 0x3b +#define ACP_TILE_DSP1_MASK 0x37 + +#define ACP_TILE_DSP2_MASK 0x2f +/* Playback DMA channels */ +#define SYSRAM_TO_ACP_CH_NUM 12 +#define ACP_TO_I2S_DMA_CH_NUM 13 + +/* Capture DMA channels */ +#define ACP_TO_SYSRAM_CH_NUM 14 +#define I2S_TO_ACP_DMA_CH_NUM 15 + +#define PLAYBACK_START_DMA_DESCR_CH12 0 +#define PLAYBACK_END_DMA_DESCR_CH12 1 + +#define PLAYBACK_START_DMA_DESCR_CH13 2 +#define PLAYBACK_END_DMA_DESCR_CH13 3 + + +#define CAPTURE_START_DMA_DESCR_CH14 4 +#define CAPTURE_END_DMA_DESCR_CH14 5 + +#define CAPTURE_START_DMA_DESCR_CH15 6 +#define CAPTURE_END_DMA_DESCR_CH15 7 + +#define STATUS_SUCCESS 0 +#define STATUS_UNSUCCESSFUL -1 + +enum acp_dma_priority_level { + /* 0x0 Specifies the DMA channel is given normal priority */ + ACP_DMA_PRIORITY_LEVEL_NORMAL = 0x0, + /* 0x1 Specifies the DMA channel is given high priority */ + ACP_DMA_PRIORITY_LEVEL_HIGH = 0x1, + ACP_DMA_PRIORITY_LEVEL_FORCESIZE = 0xFF +}; + +struct audio_substream_data { + struct page *pg; + unsigned int order; + u16 num_of_pages; + u16 direction; + uint64_t size; + void __iomem *acp_mmio; +}; + +enum { + ACP_TILE_P1 = 0, + ACP_TILE_P2, + ACP_TILE_DSP0, + ACP_TILE_DSP1, + ACP_TILE_DSP2, +}; + +enum { + ACP_DMA_ATTRIBUTES_SHAREDMEM_TO_DAGB_ONION = 0x0, + ACP_DMA_ATTRIBUTES_SHARED_MEM_TO_DAGB_GARLIC = 0x1, + ACP_DMA_ATTRIBUTES_DAGB_ONION_TO_SHAREDMEM = 0x8, + 
ACP_DMA_ATTRIBUTES_DAGB_GARLIC_TO_SHAREDMEM = 0x9, + ACP_DMA_ATTRIBUTES_FORCE_SIZE = 0xF +}; + +typedef struct acp_dma_dscr_transfer { + /* Specifies the source memory location for the DMA data transfer. */ + u32 src; + /* Specifies the destination memory location to where the data will + be transferred. + */ + u32 dest; + /* Specifies the number of bytes need to be transferred + * from source to destination memory.Transfer direction & IOC enable + */ + u32 xfer_val; + /** Reserved for future use */ + u32 reserved; +} acp_dma_dscr_transfer_t; + +extern void acp_hw_init(void __iomem *acp_mmio); +extern void acp_hw_deinit(void __iomem *acp_mmio); +extern void config_acp_dma_channel(void __iomem *acp_mmio, u8 ch_num, + u16 dscr_strt_idx, u16 num_dscrs, + enum acp_dma_priority_level priority_level); +extern void config_acp_dma(void __iomem *acp_mmio, + struct audio_substream_data *audio_config); +extern void acp_dma_start(void __iomem *acp_mmio, + u16 ch_num, bool is_circular); +extern void acp_dma_stop(void __iomem *acp_mmio, u8 ch_num); +extern u32 acp_update_dma_pointer(void __iomem *acp_mmio, int direction, + u32 period_size); +extern void prebuffer_audio(void __iomem *acp_mmio); +extern void acp_suspend(void __iomem *acp_mmio); +extern void acp_resume(void __iomem *acp_mmio); +extern void acp_enable_external_interrupts(void __iomem *acp_mmio, + int enable); +extern u32 acp_get_intr_flag(void __iomem *acp_mmio); +extern u16 get_dscr_idx(void __iomem *acp_mmio, int direction); +extern void acp_ext_stat_clear_dmaioc(void __iomem *acp_mmio, u8 ch_num); + +#endif /*__ACP_HW_H */
On Thu, Oct 08, 2015 at 12:12:40PM -0400, Alex Deucher wrote:
ACP IP block consists of dedicated DMA and I2S blocks. The PCM driver provides the platform DMA component to ALSA core.
Overall my main comment on a lot of this code is that it feels like we have created a lot of infrastructure that parallels standard Linux subsystems and interfaces without something that clearly shows why we're doing that. There may be good reasons but they've not been articulated and it's making the code a lot more complex to follow and review. We end up with multiple layers of abstraction and indirection that aren't explained.
This patch is also rather large and appears to contain multiple components which could be split; there's at least the DMA driver and this abstraction layer that the DMA driver builds on.
+/* ACP DMA irq handler routine for playback, capture usecases */ +int dma_irq_handler(struct device *dev) +{
- u16 dscr_idx;
- u32 intr_flag;
This says it's an interrupt handler but it's using some custom, non-genirq interface?
/* Let ACP know the Allocated memory */
num_of_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
/* Fill the page table entries in ACP SRAM */
rtd->pg = pg;
rtd->size = size;
rtd->num_of_pages = num_of_pages;
rtd->direction = substream->stream;
We never reference num_of_pages other than to assign it into the page table entry?
+static int acp_dma_close(struct snd_pcm_substream *substream) +{
- struct snd_pcm_runtime *runtime = substream->runtime;
- struct audio_substream_data *rtd = runtime->private_data;
- struct snd_soc_pcm_runtime *prtd = substream->private_data;
- kfree(rtd);
- pm_runtime_mark_last_busy(prtd->platform->dev);
Why the _mark_last_busy() here?
- /* The following members gets populated in device 'open'
* function. Till then interrupts are disabled in 'acp_hw_init'
* and device doesn't generate any interrupts.
*/
- audio_drv_data->play_stream = NULL;
- audio_drv_data->capture_stream = NULL;
- audio_drv_data->iprv->dev = &pdev->dev;
- audio_drv_data->iprv->acp_mmio = audio_drv_data->acp_mmio;
- audio_drv_data->iprv->enable_intr = acp_enable_external_interrupts;
- audio_drv_data->iprv->irq_handler = dma_irq_handler;
I do note that we never seem to reset any of these in teardown paths, and I am slightly worried about races with interrupt handling in teardown.
+static int acp_pcm_suspend(struct device *dev) +{
- bool pm_rts;
- struct audio_drv_data *adata = dev_get_drvdata(dev);
- pm_rts = pm_runtime_status_suspended(dev);
- if (pm_rts == false)
acp_suspend(adata->acp_mmio);
- return 0;
+}
This appears to merely call into the parent/core device (at least it looks like this is the intention, there's a bunch of infrastructure for the core device which appears to replicate standard infrastructure). Isn't whatever this eventually ends up doing handled by the parent device in the normal PM callbacks?
This parallel infrastructure seems like it needs some motivation, especially given that when I look at the implementation of functions like amd_acp_suspend() and amd_acp_resume() in the preceding patch they are empty and therefore do nothing (they're also not exported so I expect we get build errors if this is a module and the core isn't). The easiest thing is probably to remove the code until there is an implementation and then review at that time.
+static int acp_pcm_resume(struct device *dev) +{
- bool pm_rts;
- struct snd_pcm_substream *stream;
- struct snd_pcm_runtime *rtd;
- struct audio_substream_data *sdata;
- struct audio_drv_data *adata = dev_get_drvdata(dev);
- pm_rts = pm_runtime_status_suspended(dev);
- if (pm_rts == true) {
/* Resumed from system wide suspend and there is
* no pending audio activity to resume. */
pm_runtime_disable(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
The above looks very strange - why are we bouncing runtime PM like this?
+/* Initialize the dma descriptors location in SRAM and page size */ +static void acp_dma_descr_init(void __iomem *acp_mmio) +{
- u32 sram_pte_offset = 0;
- /* SRAM starts at 0x04000000. From that offset one page (4KB) left for
* filling DMA descriptors.sram_pte_offset = 0x04001000 , used for
This is a device relative address rather than an absolute address? A lot of these numbers seem kind of large...
+u16 get_dscr_idx(void __iomem *acp_mmio, int direction) +{
- u16 dscr_idx;
- if (direction == SNDRV_PCM_STREAM_PLAYBACK) {
dscr_idx = acp_reg_read(acp_mmio, mmACP_DMA_CUR_DSCR_13);
dscr_idx = (dscr_idx == PLAYBACK_START_DMA_DESCR_CH13) ?
PLAYBACK_START_DMA_DESCR_CH12 :
PLAYBACK_END_DMA_DESCR_CH12;
Please write normal if statements rather than using the ternary operator.
+/* Check whether ACP DMA interrupt (IOC) is generated or not */ +u32 acp_get_intr_flag(void __iomem *acp_mmio) +{
- u32 ext_intr_status;
- u32 intr_gen;
- ext_intr_status = acp_reg_read(acp_mmio, mmACP_EXTERNAL_INTR_STAT);
- intr_gen = (((ext_intr_status &
ACP_EXTERNAL_INTR_STAT__DMAIOCStat_MASK) >>
ACP_EXTERNAL_INTR_STAT__DMAIOCStat__SHIFT));
- return intr_gen;
+}
Looking at a lot of the interrupt code I can't help but think there's a genirq interrupt controller lurking in here somewhere.
- /*Invalidating the DAGB cache */
- acp_reg_write(ENABLE, acp_mmio, mmACP_DAGB_ATU_CTRL);
/* spaces around comments please */
- if ((ch_num == ACP_TO_I2S_DMA_CH_NUM) ||
(ch_num == ACP_TO_SYSRAM_CH_NUM) ||
(ch_num == I2S_TO_ACP_DMA_CH_NUM))
dma_ctrl |= ACP_DMA_CNTL_0__DMAChIOCEn_MASK;
- else
dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChIOCEn_MASK;
switch statement please.
- u32 delay_time = ACP_DMA_RESET_TIME;
- /* check the channel status bit for some time and return the status */
- while (0 < delay_time) {
dma_ch_sts = acp_reg_read(acp_mmio, mmACP_DMA_CH_STS);
if (!(dma_ch_sts & BIT(ch_num))) {
/* clear the reset flag after successfully stopping
the dma transfer and break from the loop */
dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRst_MASK;
acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0
+ ch_num);
break;
}
delay_time--;
- }
+}
This isn't really a time, it's a number of spins (the amount of time involved presumably depending on clock speed). If this were a time I'd expect to see a delay or sleep in here.
We're also falling off the end of the loop silently if the hardware fails to respond; if it's worth waiting for the hardware to do its thing I'd expect it's also worth displaying an error if that happens. This is a common pattern in much of the rest of the driver.
+/* power off a tile/block within ACP */ +static void acp_suspend_tile(void __iomem *acp_mmio, int tile) +{
- u32 val = 0;
- u32 timeout = 0;
- if ((tile < ACP_TILE_P1) || (tile > ACP_TILE_DSP2))
return;
- val = acp_reg_read(acp_mmio, mmACP_PGFSM_READ_REG_0 + tile);
- val &= ACP_TILE_ON_MASK;
This is definitely looking like a SoC that could benefit from the standard kernel power management infrastructure and/or DAPM. There's a lot of code here that looks like it's following very common SoC design patterns and could benefit from using infrastructure more.
+static void acp_resume_tile(void __iomem *acp_mmio, int tile) +{
- u32 val = 0;
- u32 timeout = 0;
- if ((tile < ACP_TILE_P1) || (tile > ACP_TILE_DSP2))
return;
Not worth printing an error if the user passed in something invalid?
+/* Shutdown unused SRAM memory banks in ACP IP */ +static void acp_turnoff_sram_banks(void __iomem *acp_mmio) +{
- /* Bank 0 : used for DMA descriptors
* Bank 1 to 4 : used for playback
* Bank 5 to 8 : used for capture
* Each bank is 8kB and max size allocated for playback/ capture is
* 16kB(max period size) * 2(max periods) reserved for playback/capture
* in ALSA driver
* Turn off all SRAM banks except above banks during playback/capture
*/
- u32 val, bank;
I didn't see any runtime management of the other SRAM bank power, seems like that'd be a good idea?
- /* initiailizing Garlic Control DAGB register */
- acp_reg_write(ONION_CNTL_DEFAULT, acp_mmio, mmACP_AXI2DAGB_ONION_CNTL);
- /* initiailizing Onion Control DAGB registers */
- acp_reg_write(GARLIC_CNTL_DEFAULT, acp_mmio,
mmACP_AXI2DAGB_GARLIC_CNTL);
The comments don't match the code...
+/* Update DMA postion in audio ring buffer at period level granularity.
- This will be used by ALSA PCM driver
- */
+u32 acp_update_dma_pointer(void __iomem *acp_mmio, int direction,
u32 period_size)
+{
- u32 pos;
- u16 dscr;
- u32 mul;
- u32 dma_config;
- pos = 0;
- if (direction == SNDRV_PCM_STREAM_PLAYBACK) {
dscr = acp_reg_read(acp_mmio, mmACP_DMA_CUR_DSCR_13);
mul = (dscr == PLAYBACK_START_DMA_DESCR_CH13) ? 0 : 1;
pos = (mul * period_size);
Don't limit the accuracy to period level if the hardware can do better, report the current position as accurately as possible please. This is also feeling like we've got an unneeded abstraction here - why was this not just directly the pointer operation?
+/* Wait for initial buffering to complete in HOST to SRAM DMA channel
- for plaback usecase
- */
+void prebuffer_audio(void __iomem *acp_mmio) +{
- u32 dma_ch_sts;
- u32 channel_mask = BIT(SYSRAM_TO_ACP_CH_NUM);
- do {
/* Read the channel status to poll dma transfer completion
* (System RAM to SRAM)
* In this case, it will be runtime->start_threshold
* (2 ALSA periods) of transfer. Rendering starts after this
* threshold is met.
*/
dma_ch_sts = acp_reg_read(acp_mmio, mmACP_DMA_CH_STS);
udelay(20);
- } while (dma_ch_sts & channel_mask);
This will hang hard if the hardware fails to respond for some reason, please have a timeout. A cpu_relax() would also be friendly.
+#define DISABLE 0 +#define ENABLE 1
Please don't do this :(
+#define STATUS_SUCCESS 0 +#define STATUS_UNSUCCESSFUL -1
Please use normal Linux error codes.
On Thu, Oct 22, 2015 at 9:44 PM, Mark Brown broonie@kernel.org wrote:
On Thu, Oct 08, 2015 at 12:12:40PM -0400, Alex Deucher wrote:
ACP IP block consists of dedicated DMA and I2S blocks. The PCM driver provides the platform DMA component to ALSA core.
Overall my main comment on a lot of this code is that it feels like we have created a lot of infrastructure that parallels standard Linux subsystems and interfaces without something that clearly shows why we're doing that. There may be good reasons but they've not been articulated and it's making the code a lot more complex to follow and review. We end up with multiple layers of abstraction and indirection that aren't explained.
This patch is also rather large and appears to contain multiple components which could be split; there's at least the DMA driver and this abstraction layer that the DMA driver builds on.
+/* ACP DMA irq handler routine for playback, capture usecases */ +int dma_irq_handler(struct device *dev) +{
u16 dscr_idx;
u32 intr_flag;
This says it's an interrupt handler but it's using some custom, non-genirq interface?
IRQ handling is based on interfaces provided by the parent device (which is part of another subsystem). I will coordinate with the others on this.
Do you mean using virtual IRQ assignment for MFD devices (ACP is an MFD device) and registering an IRQ handler for it?
/* Let ACP know the Allocated memory */
num_of_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
/* Fill the page table entries in ACP SRAM */
rtd->pg = pg;
rtd->size = size;
rtd->num_of_pages = num_of_pages;
rtd->direction = substream->stream;
We never reference num_of_pages other than to assign it into the page table entry?
Sorry, I didn't understand your comment. I used 'num_of_pages' to configure the ACP audio device to access system memory. The implementation is in the 'acp_pte_config' function in the patch.
+static int acp_dma_close(struct snd_pcm_substream *substream) +{
struct snd_pcm_runtime *runtime = substream->runtime;
struct audio_substream_data *rtd = runtime->private_data;
struct snd_soc_pcm_runtime *prtd = substream->private_data;
kfree(rtd);
pm_runtime_mark_last_busy(prtd->platform->dev);
Why the _mark_last_busy() here?
I want to power off the ACP audio IP when no audio use case has been active for some time (runtime PM). I felt 'close' is the correct place to mark this.
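For what it's worth, a minimal sketch of the usual autosuspend pattern in close (this assumes a matching pm_runtime_get is taken when the stream is opened or started, which the current patch does not show):

static int acp_dma_close(struct snd_pcm_substream *substream)
{
	struct snd_pcm_runtime *runtime = substream->runtime;
	struct audio_substream_data *rtd = runtime->private_data;
	struct snd_soc_pcm_runtime *prtd = substream->private_data;

	kfree(rtd);

	/* record activity and let the autosuspend timer power the IP down */
	pm_runtime_mark_last_busy(prtd->platform->dev);
	pm_runtime_put_autosuspend(prtd->platform->dev);

	return 0;
}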
/* The following members gets populated in device 'open'
* function. Till then interrupts are disabled in 'acp_hw_init'
* and device doesn't generate any interrupts.
*/
audio_drv_data->play_stream = NULL;
audio_drv_data->capture_stream = NULL;
audio_drv_data->iprv->dev = &pdev->dev;
audio_drv_data->iprv->acp_mmio = audio_drv_data->acp_mmio;
audio_drv_data->iprv->enable_intr = acp_enable_external_interrupts;
audio_drv_data->iprv->irq_handler = dma_irq_handler;
I do note that we never seem to reset any of these in teardown paths, and I am slightly worried about races with interrupt handling in teardown.
I will recheck this.
+static int acp_pcm_suspend(struct device *dev) +{
bool pm_rts;
struct audio_drv_data *adata = dev_get_drvdata(dev);
pm_rts = pm_runtime_status_suspended(dev);
if (pm_rts == false)
acp_suspend(adata->acp_mmio);
return 0;
+}
This appears to merely call into the parent/core device (at least it looks like this is the intention, there's a bunch of infrastructure for the core device which appears to replicate standard infrastructure). Isn't whatever this eventually ends up doing handled by the parent device in the normal PM callbacks?
The ACP device (the child) can power itself off when it receives a suspend request. So the intention is to call 'acp_suspend' (defined in the same patch) from the 'suspend' callback of the ACP device.
This parallel infrastructure seems like it needs some motivation, especially given that when I look at the implementation of functions like amd_acp_suspend() and amd_acp_resume() in the preceding patch they are empty and therefore do nothing (they're also not exported so I expect we get build errors if this is a module and the core isn't). The easiest thing is probably to remove the code until there is an implementation and then review at that time.
There were two different functions with the same name in the two drivers interacting here. That might have introduced some confusion. Sorry, I will modify that.
Also, the amd_acp_*() functions can be removed later, as they are not expected to add any functionality in the future. The ACP device can be suspended/resumed using acp_*_tile().
+static int acp_pcm_resume(struct device *dev) +{
bool pm_rts;
struct snd_pcm_substream *stream;
struct snd_pcm_runtime *rtd;
struct audio_substream_data *sdata;
struct audio_drv_data *adata = dev_get_drvdata(dev);
pm_rts = pm_runtime_status_suspended(dev);
if (pm_rts == true) {
/* Resumed from system wide suspend and there is
* no pending audio activity to resume. */
pm_runtime_disable(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
The above looks very strange - why are we bouncing runtime PM like this?
Sorry, I didn't understand your comment. I felt the steps mentioned in the kernel documentation (http://lxr.free-electrons.com/source/Documentation/power/runtime_pm.txt#L634) were applicable in this scenario. I may be wrong, but felt they apply here.
+/* Initialize the dma descriptors location in SRAM and page size */ +static void acp_dma_descr_init(void __iomem *acp_mmio) +{
u32 sram_pte_offset = 0;
/* SRAM starts at 0x04000000. From that offset one page (4KB) left for
* filling DMA descriptors.sram_pte_offset = 0x04001000 , used for
This is a device relative address rather than an absolute address? A lot of these numbers seem kind of large...
That is the SRAM block's offset address. Sorry, I didn't understand the expected change here.
+u16 get_dscr_idx(void __iomem *acp_mmio, int direction) +{
u16 dscr_idx;
if (direction == SNDRV_PCM_STREAM_PLAYBACK) {
dscr_idx = acp_reg_read(acp_mmio, mmACP_DMA_CUR_DSCR_13);
dscr_idx = (dscr_idx == PLAYBACK_START_DMA_DESCR_CH13) ?
PLAYBACK_START_DMA_DESCR_CH12 :
PLAYBACK_END_DMA_DESCR_CH12;
Please write normal if statements rather than using the ternary operator.
Ok.
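Something like this for the playback branch, as a sketch of the intended change:

	if (direction == SNDRV_PCM_STREAM_PLAYBACK) {
		dscr_idx = acp_reg_read(acp_mmio, mmACP_DMA_CUR_DSCR_13);
		/* pick the system RAM descriptor matching the half of SRAM
		 * currently being rendered */
		if (dscr_idx == PLAYBACK_START_DMA_DESCR_CH13)
			dscr_idx = PLAYBACK_START_DMA_DESCR_CH12;
		else
			dscr_idx = PLAYBACK_END_DMA_DESCR_CH12;
	}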
+/* Check whether ACP DMA interrupt (IOC) is generated or not */ +u32 acp_get_intr_flag(void __iomem *acp_mmio) +{
u32 ext_intr_status;
u32 intr_gen;
ext_intr_status = acp_reg_read(acp_mmio, mmACP_EXTERNAL_INTR_STAT);
intr_gen = (((ext_intr_status &
ACP_EXTERNAL_INTR_STAT__DMAIOCStat_MASK) >>
ACP_EXTERNAL_INTR_STAT__DMAIOCStat__SHIFT));
return intr_gen;
+}
Looking at a lot of the interrupt code I can't help but think there's a genirq interrupt controller lurking in here somewhere.
I will coordinate with the author of the other subsystem driver that this driver depends on.
/*Invalidating the DAGB cache */
acp_reg_write(ENABLE, acp_mmio, mmACP_DAGB_ATU_CTRL);
/* spaces around comments please */
if ((ch_num == ACP_TO_I2S_DMA_CH_NUM) ||
(ch_num == ACP_TO_SYSRAM_CH_NUM) ||
(ch_num == I2S_TO_ACP_DMA_CH_NUM))
dma_ctrl |= ACP_DMA_CNTL_0__DMAChIOCEn_MASK;
else
dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChIOCEn_MASK;
switch statement please.
Ok.
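For example (same behaviour, just written as a switch):

	switch (ch_num) {
	case ACP_TO_I2S_DMA_CH_NUM:
	case ACP_TO_SYSRAM_CH_NUM:
	case I2S_TO_ACP_DMA_CH_NUM:
		/* enable interrupt on completion for the audio channels */
		dma_ctrl |= ACP_DMA_CNTL_0__DMAChIOCEn_MASK;
		break;
	default:
		dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChIOCEn_MASK;
		break;
	}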
u32 delay_time = ACP_DMA_RESET_TIME;
/* check the channel status bit for some time and return the status */
while (0 < delay_time) {
dma_ch_sts = acp_reg_read(acp_mmio, mmACP_DMA_CH_STS);
if (!(dma_ch_sts & BIT(ch_num))) {
/* clear the reset flag after successfully stopping
the dma transfer and break from the loop */
dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRst_MASK;
acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0
+ ch_num);
break;
}
delay_time--;
}
+}
This isn't really a time, it's a number of spins (the amount of time involved presumably depending on clock speed). If this were a time I'd expect to see a delay or sleep in here.
We're also falling off the end of the loop silently if the hardware fails to respond; if it's worth waiting for the hardware to do its thing I'd expect it's also worth displaying an error if that happens. This is a common pattern in much of the rest of the driver.
Ok, will modify.
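Roughly what I have in mind for the polling loop at the end of acp_dma_stop(), as a sketch (the loop bound and delay are placeholders, and pr_err() is used only because the helper has no struct device at hand):

	u32 count = ACP_DMA_RESET_TIME;

	while (dma_ch_sts & BIT(ch_num)) {
		if (!count--) {
			pr_err("ACP: DMA channel %d failed to stop\n", ch_num);
			return;
		}
		udelay(10);
		dma_ch_sts = acp_reg_read(acp_mmio, mmACP_DMA_CH_STS);
	}

	/* channel has stopped, clear the reset flag */
	dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRst_MASK;
	acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + ch_num);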
+/* power off a tile/block within ACP */ +static void acp_suspend_tile(void __iomem *acp_mmio, int tile) +{
u32 val = 0;
u32 timeout = 0;
if ((tile < ACP_TILE_P1) || (tile > ACP_TILE_DSP2))
return;
val = acp_reg_read(acp_mmio, mmACP_PGFSM_READ_REG_0 + tile);
val &= ACP_TILE_ON_MASK;
This is definitely looking like a SoC that could benefit from the standard kernel power management infrastructure and/or DAPM. There's a lot of code here that looks like it's following very common SoC design patterns and could benefit from using infrastructure more.
Sorry, I didn't understand. Could you give some more pointers on this? The device can be powered off with this helper function, which can be called from the device's 'suspend' callback.
+static void acp_resume_tile(void __iomem *acp_mmio, int tile) +{
u32 val = 0;
u32 timeout = 0;
if ((tile < ACP_TILE_P1) || (tile > ACP_TILE_DSP2))
return;
Not worth printing an error if the user passed in something invalid?
Ok.
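I'll add something along these lines (pr_err() because the tile helpers only receive the MMIO base, not a struct device):

	if (tile < ACP_TILE_P1 || tile > ACP_TILE_DSP2) {
		pr_err("ACP: invalid tile %d\n", tile);
		return;
	}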
+/* Shutdown unused SRAM memory banks in ACP IP */ +static void acp_turnoff_sram_banks(void __iomem *acp_mmio) +{
/* Bank 0 : used for DMA descriptors
* Bank 1 to 4 : used for playback
* Bank 5 to 8 : used for capture
* Each bank is 8kB and max size allocated for playback/ capture is
* 16kB(max period size) * 2(max periods) reserved for playback/capture
* in ALSA driver
* Turn off all SRAM banks except above banks during playback/capture
*/
u32 val, bank;
I didn't see any runtime management of the other SRAM bank power, seems like that'd be a good idea?
The SRAM banks are part of the ACP IP. With ACP's runtime PM handling, all blocks within the ACP IP can be powered off and on.
/* initiailizing Garlic Control DAGB register */
acp_reg_write(ONION_CNTL_DEFAULT, acp_mmio, mmACP_AXI2DAGB_ONION_CNTL);
/* initiailizing Onion Control DAGB registers */
acp_reg_write(GARLIC_CNTL_DEFAULT, acp_mmio,
mmACP_AXI2DAGB_GARLIC_CNTL);
The comments don't match the code...
Oops, I will correct it.
+/* Update DMA position in audio ring buffer at period level granularity.
+ * This will be used by ALSA PCM driver
+ */
+u32 acp_update_dma_pointer(void __iomem *acp_mmio, int direction,
u32 period_size)
+{
u32 pos;
u16 dscr;
u32 mul;
u32 dma_config;
pos = 0;
if (direction == SNDRV_PCM_STREAM_PLAYBACK) {
dscr = acp_reg_read(acp_mmio, mmACP_DMA_CUR_DSCR_13);
mul = (dscr == PLAYBACK_START_DMA_DESCR_CH13) ? 0 : 1;
pos = (mul * period_size);
Don't limit the accuracy to period level if the hardware can do better, report the current position as accurately as possible please. This is also feeling like we've got an unneeded abstraction here - why was this not just directly the pointer operation?
The current version of the hardware has this accuracy limitation. I will remove the abstraction, if suggested.
+/* Wait for initial buffering to complete in HOST to SRAM DMA channel
+ * for playback usecase
+ */
+void prebuffer_audio(void __iomem *acp_mmio)
+{
u32 dma_ch_sts;
u32 channel_mask = BIT(SYSRAM_TO_ACP_CH_NUM);
do {
/* Read the channel status to poll dma transfer completion
* (System RAM to SRAM)
* In this case, it will be runtime->start_threshold
* (2 ALSA periods) of transfer. Rendering starts after this
* threshold is met.
*/
dma_ch_sts = acp_reg_read(acp_mmio, mmACP_DMA_CH_STS);
udelay(20);
} while (dma_ch_sts & channel_mask);
This will hang hard if the hardware fails to respond for some reason, please have a timeout. A cpu_relax() would also be friendly.
I will modify this.
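A sketch of what a bounded version of that loop could look like; the retry budget is an assumption, not a figure from the hardware documentation:

u32 tries = 10000;              /* assumed upper bound on the wait */

do {
        dma_ch_sts = acp_reg_read(acp_mmio, mmACP_DMA_CH_STS);
        if (!(dma_ch_sts & channel_mask))
                return;         /* initial buffering completed */
        cpu_relax();
        udelay(20);
} while (--tries);

pr_err("ACP: timed out waiting for SYSRAM to SRAM prebuffering\n");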
+#define DISABLE 0
+#define ENABLE 1
Please don't do this :(
Ok.
+#define STATUS_SUCCESS 0
+#define STATUS_UNSUCCESSFUL -1
Please use normal Linux error codes.
Ok.
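For example (a sketch with a hypothetical helper name, not code from the patch), the usual convention is 0 for success and a negative errno from <linux/errno.h> for failure:

#include <linux/errno.h>

static int acp_example_check(void __iomem *acp_mmio)
{
        if (!acp_mmio)
                return -EINVAL;         /* instead of STATUS_UNSUCCESSFUL */

        return 0;                       /* instead of STATUS_SUCCESS */
}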
On Sat, Oct 24, 2015 at 12:20:09AM +0530, maruthi srinivas wrote:
On Thu, Oct 22, 2015 at 9:44 PM, Mark Brown broonie@kernel.org wrote:
On Thu, Oct 08, 2015 at 12:12:40PM -0400, Alex Deucher wrote:
+/* ACP DMA irq handler routine for playback, capture usecases */
+int dma_irq_handler(struct device *dev)
+{
u16 dscr_idx;
u32 intr_flag;
This says it's an interrupt handler but it's using some custom, non-genirq interface?
IRQ handling is based on interfaces provided by the parent device (which is part of another subsystem). I will coordinate with the other subsystem's driver author on this.
Do you mean using virtual IRQ assignment for MFD devices (ACP is an MFD device) and registering an IRQ handler for it?
Well, I'd expect that if we're exporting interrupts around the system we'd be doing so using genirq rather than open coding something.
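A rough sketch of the genirq-style registration being suggested, assuming the MFD parent were to pass the interrupt down as an ordinary platform IRQ resource; the handler, device name string and resource index here are hypothetical:

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t acp_irq_handler(int irq, void *dev_id)
{
        /* read and clear the ACP interrupt status here */
        return IRQ_HANDLED;
}

static int acp_register_irq(struct platform_device *pdev)
{
        int irq = platform_get_irq(pdev, 0);

        if (irq < 0)
                return irq;

        return devm_request_irq(&pdev->dev, irq, acp_irq_handler,
                                IRQF_SHARED, "amd-acp", pdev);
}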
/* Let ACP know the Allocated memory */
num_of_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
/* Fill the page table entries in ACP SRAM */
rtd->pg = pg;
rtd->size = size;
rtd->num_of_pages = num_of_pages;
rtd->direction = substream->stream;
We never reference num_of_pages other than to assign it into the page table entry?
Sorry, I didn't understand your comment. I used 'num_of_pages' to configure the ACP audio device for accessing system memory. The implementation is in the 'acp_pte_config' function in the patch.
In the above code we have two blocks of code, one doing an assignment to a local variable and the other initialising the struct but the local variable in the first block is only ever referenced in the second block.
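In other words, something like this (sketch), folding the intermediate local into the structure initialisation:

/* compute the page count where it is actually consumed */
rtd->pg = pg;
rtd->size = size;
rtd->num_of_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
rtd->direction = substream->stream;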
+static int acp_dma_close(struct snd_pcm_substream *substream)
+{
struct snd_pcm_runtime *runtime = substream->runtime;
struct audio_substream_data *rtd = runtime->private_data;
struct snd_soc_pcm_runtime *prtd = substream->private_data;
kfree(rtd);
pm_runtime_mark_last_busy(prtd->platform->dev);
Why the _mark_last_busy() here?
I want to power off the ACP audio IP when no audio use case has been active for some time (runtime PM). I felt 'close' was the correct place to mark this.
That's not what _mark_last_busy() does... the core already takes runtime PM references for you.
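For reference, pm_runtime_mark_last_busy() is normally just the timestamp half of the autosuspend pattern, paired with a put; an illustrative sketch rather than the patch code:

pm_runtime_get_sync(dev);

/* talk to the hardware while the reference is held */

pm_runtime_mark_last_busy(dev);
pm_runtime_put_autosuspend(dev);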
+static int acp_pcm_suspend(struct device *dev)
+{
bool pm_rts;
struct audio_drv_data *adata = dev_get_drvdata(dev);
pm_rts = pm_runtime_status_suspended(dev);
if (pm_rts == false)
acp_suspend(adata->acp_mmio);
return 0;
+}
This appears to merely call into the parent/core device (at least it looks like this is the intention, there's a bunch of infrastructure for the core device which appears to replicate standard infrastructure). Isn't whatever this eventually ends up doing handled by the parent device in the normal PM callbacks?
The ACP device (child) can power itself off when it receives a suspend request. So the intention is to call 'acp_suspend' (defined in the same patch) from the 'suspend' callback of the ACP device.
This doesn't address why you're open coding this rather than using standard infrastructure.
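One standard-infrastructure option (a sketch, assuming hypothetical runtime-PM callbacks already power the block up and down) is to reuse those callbacks for system sleep:

#include <linux/pm.h>
#include <linux/pm_runtime.h>

/* hypothetical runtime-PM callbacks doing the actual ACP power off/on */
static int acp_runtime_suspend(struct device *dev) { return 0; }
static int acp_runtime_resume(struct device *dev) { return 0; }

static const struct dev_pm_ops acp_pm_ops = {
        SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                pm_runtime_force_resume)
        SET_RUNTIME_PM_OPS(acp_runtime_suspend, acp_runtime_resume, NULL)
};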
pm_rts = pm_runtime_status_suspended(dev);
if (pm_rts == true) {
/* Resumed from system wide suspend and there is
* no pending audio activity to resume. */
pm_runtime_disable(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
The above looks very strange - why are we bouncing runtime PM like this?
Sorry, I didn't understand your comment. I felt the steps mentioned in the kernel documentation (http://lxr.free-electrons.com/source/Documentation/power/runtime_pm.txt#L634) were applicable in this scenario. I may be wrong, but felt that they applied here.
Please document this clearly - your comment doesn't appear to relate to the case where system resume powers things on at all.
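Presumably the comment needs to spell out something like the following (a sketch of the documented status-resync sequence, assuming system resume has left the ACP powered up):

if (pm_runtime_status_suspended(dev)) {
        /* The device was runtime suspended before system sleep, but
         * system resume has powered it back up behind runtime PM's
         * back, so bring the runtime PM status back in line with the
         * actual (active) hardware state.
         */
        pm_runtime_disable(dev);
        pm_runtime_set_active(dev);
        pm_runtime_enable(dev);
}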
+/* Initialize the dma descriptors location in SRAM and page size */
+static void acp_dma_descr_init(void __iomem *acp_mmio)
+{
u32 sram_pte_offset = 0;
/* SRAM starts at 0x04000000. From that offset one page (4KB) left for
* filling DMA descriptors.sram_pte_offset = 0x04001000 , used for
This is a device relative address rather than an absolute address? A lot of these numbers seem kind of large...
That is the SRAM block's offset address. Sorry, I didn't understand the expected change here.
Offset with regard to what? I'm asking if these addresses are within the IP or global.
+/* power off a tile/block within ACP */
+static void acp_suspend_tile(void __iomem *acp_mmio, int tile)
+{
u32 val = 0;
u32 timeout = 0;
if ((tile < ACP_TILE_P1) || (tile > ACP_TILE_DSP2))
return;
val = acp_reg_read(acp_mmio, mmACP_PGFSM_READ_REG_0 + tile);
val &= ACP_TILE_ON_MASK;
This is definitely looking like a SoC that could benefit from the standard kernel power management infrastructure and/or DAPM. There's a lot of code here that looks like it's following very common SoC design patterns and could benefit from using infrastructure more.
Sorry, I didn't understand. Could you give some more pointers on this? The device can be powered off with this helper function, which can be called from the device's 'suspend' callback.
Power domains are implemented using the interface in include/linux/pm_domain.h and DAPM is sound/soc/soc-dapm.c. The point here is that it looks very much like you are open coding implementations of concepts that we already have generic support for which adds greatly to both the size and complexity of this code.
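For the power-domain side, the interface looks roughly like this (sketch; the acp_tile_power_on/off callbacks and domain name are hypothetical wrappers around the existing tile register writes):

#include <linux/pm_domain.h>

static int acp_tile_power_off(struct generic_pm_domain *genpd)
{
        /* would program the mmACP_PGFSM_* registers to power the tile down */
        return 0;
}

static int acp_tile_power_on(struct generic_pm_domain *genpd)
{
        return 0;
}

static struct generic_pm_domain acp_domain = {
        .name      = "acp_domain",
        .power_off = acp_tile_power_off,
        .power_on  = acp_tile_power_on,
};

/* registered once at probe time with pm_genpd_init(&acp_domain, NULL, true) */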
+static void acp_turnoff_sram_banks(void __iomem *acp_mmio)
+{
/* Bank 0 : used for DMA descriptors
* Bank 1 to 4 : used for playback
* Bank 5 to 8 : used for capture
* Each bank is 8kB and max size allocated for playback/ capture is
* 16kB(max period size) * 2(max periods) reserved for playback/capture
* in ALSA driver
* Turn off all SRAM banks except above banks during playback/capture
*/
u32 val, bank;
I didn't see any runtime management of the other SRAM bank power, seems like that'd be a good idea?
SRAM banks are part of ACP IP. With ACP's runtime PM handling, all blocks within ACP IP can be powered-off and on.
So why can't that cope with these banks then?
+/* Update DMA position in audio ring buffer at period level granularity.
+ * This will be used by ALSA PCM driver
+ */
+u32 acp_update_dma_pointer(void __iomem *acp_mmio, int direction,
u32 period_size)
+{
Don't limit the accuracy to period level if the hardware can do better, report the current position as accurately as possible please. This is also feeling like we've got an unneeded abstraction here - why was this not just directly the pointer operation?
The current version of the hardware has this accuracy limitation.
If the hardware is limited why does the comment suggest that we are limiting based on periods?
I will remove the abstraction, if suggested.
Yes, it's just adding to the complexity of the code.
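That is, fold the descriptor check straight into the ALSA pointer callback, roughly like this sketch (it assumes acp_mmio is reachable through the substream's private data, which is not shown in the quoted excerpt):

#include <sound/pcm.h>

static snd_pcm_uframes_t acp_dma_pointer(struct snd_pcm_substream *substream)
{
        struct snd_pcm_runtime *runtime = substream->runtime;
        /* assumes audio_substream_data carries the acp_mmio pointer */
        struct audio_substream_data *rtd = runtime->private_data;
        u32 pos = 0;
        u16 dscr;

        if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
                dscr = acp_reg_read(rtd->acp_mmio, mmACP_DMA_CUR_DSCR_13);
                if (dscr != PLAYBACK_START_DMA_DESCR_CH13)
                        pos = frames_to_bytes(runtime, runtime->period_size);
        }

        return bytes_to_frames(runtime, pos);
}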
On Fri, Oct 23, 2015 at 3:31 PM, Mark Brown broonie@kernel.org wrote:
On Sat, Oct 24, 2015 at 12:20:09AM +0530, maruthi srinivas wrote:
On Thu, Oct 22, 2015 at 9:44 PM, Mark Brown broonie@kernel.org wrote:
On Thu, Oct 08, 2015 at 12:12:40PM -0400, Alex Deucher wrote:
+/* ACP DMA irq handler routine for playback, capture usecases */
+int dma_irq_handler(struct device *dev)
+{
u16 dscr_idx;
u32 intr_flag;
This says it's an interrupt handler but it's using some custom, non-genirq interface?
Irq handling is based on parent device's (part of other subsystem) provided interfaces. I will coordinate with others for this.
Do you mean, using virtual irq assignment for MFD devices (ACP is a MFD device) and registering irq handler for it ?
Well, I'd expect that if we're exporting interrupts around the system we'd be doing so using genirq rather than open coding something.
I think the problem is that, in a lot of cases, it's not always readily clear that there is existing infrastructure to do something. In this particular case, most of the common infrastructure that should be utilized for this patch set comes from non-traditional x86 platforms, so most of us who come from a more traditional x86 background are not as familiar with it. Searching for information on how to solve these problems does not always produce particularly useful results (e.g., genirq). So we end up open coding a solution, not out of malice, but out of ignorance. If there is infrastructure we should be using, please continue to point it out during code review and we'll do our best to take advantage of it.
In the case of this hardware, audio interrupts are triggered on the GPU. The GPU driver's interrupt handler checks the interrupt source and calls the handler registered for that source. Until the ACP audio block was added, all the GPU interrupt sources were things handled directly by the GPU driver (vblanks, display hotplug, command submission fences, etc.).
Thanks,
Alex
On Fri, Oct 23, 2015 at 04:39:07PM -0400, Alex Deucher wrote:
something. In this particular case, most of the common infrastructure that should be utilized for this patch set comes from non-traditional x86 platforms, so most of us who come from a more traditional x86 background are not as familiar with it. Searching
A lot of this infrastructure (like genirq) comes as much from the x86 side as anywhere else :(
On Sat, Oct 24, 2015 at 1:01 AM, Mark Brown broonie@kernel.org wrote:
On Sat, Oct 24, 2015 at 12:20:09AM +0530, maruthi srinivas wrote:
On Thu, Oct 22, 2015 at 9:44 PM, Mark Brown broonie@kernel.org wrote:
On Thu, Oct 08, 2015 at 12:12:40PM -0400, Alex Deucher wrote:
Please document this clearly - your comment doesn't appear to relate to the case where system resume powers things on at all.
ok.
+/* Initialize the dma descriptors location in SRAM and page size */
+static void acp_dma_descr_init(void __iomem *acp_mmio)
+{
u32 sram_pte_offset = 0;
/* SRAM starts at 0x04000000. From that offset one page (4KB) left for
* filling DMA descriptors.sram_pte_offset = 0x04001000 , used for
This is a device relative address rather than an absolute address? A lot of these numbers seem kind of large...
That is the SRAM block's offset address. Sorry, I didn't understand the expected change here.
Offset with regard to what? I'm asking if these addresses are within the IP or global.
The addresses mentioned are offsets within the ACP IP.
+static void acp_turnoff_sram_banks(void __iomem *acp_mmio)
+{
/* Bank 0 : used for DMA descriptors
* Bank 1 to 4 : used for playback
* Bank 5 to 8 : used for capture
* Each bank is 8kB and max size allocated for playback/ capture is
* 16kB(max period size) * 2(max periods) reserved for playback/capture
* in ALSA driver
* Turn off all SRAM banks except above banks during playback/capture
*/
u32 val, bank;
I didn't see any runtime management of the other SRAM bank power, seems like that'd be a good idea?
SRAM banks are part of ACP IP. With ACP's runtime PM handling, all blocks within ACP IP can be powered-off and on.
So why can't that cope with these banks then?
Maybe I was not clear before. I meant that memory banks which won't be needed at all are turned off permanently. Using runtime PM, when the complete ACP IP gets powered off, all the banks (including the ones used for playback/capture) within the IP are turned off. When the IP is runtime resumed, although all banks get turned on, the unused banks are turned off again. This is how I am trying to achieve runtime management.
+/* Update DMA position in audio ring buffer at period level granularity.
+ * This will be used by ALSA PCM driver
+ */
+u32 acp_update_dma_pointer(void __iomem *acp_mmio, int direction,
u32 period_size)
+{
Don't limit the accuracy to period level if the hardware can do better, report the current position as accurately as possible please. This is also feeling like we've got an unneeded abstraction here - why was this not just directly the pointer operation?
The current version of the hardware has this accuracy limitation.
If the hardware is limited why does the comment suggest that we are limiting based on periods?
I will modify accordingly.
On Sat, Oct 24, 2015 at 02:21:04AM +0530, maruthi srinivas wrote:
On Sat, Oct 24, 2015 at 1:01 AM, Mark Brown broonie@kernel.org wrote:
+static void acp_turnoff_sram_banks(void __iomem *acp_mmio)
+{
/* Bank 0 : used for DMA descriptors
* Bank 1 to 4 : used for playback
* Bank 5 to 8 : used for capture
SRAM banks are part of ACP IP. With ACP's runtime PM handling, all blocks within ACP IP can be powered-off and on.
So why can't that cope with these banks then?
Maybe I was not clear before. I meant that memory banks which won't be needed at all are turned off permanently. Using runtime PM, when the complete ACP IP gets powered off, all the banks (including the ones used for playback/capture) within the IP are turned off. When the IP is runtime resumed, although all banks get turned on, the unused banks are turned off again. This is how I am trying to achieve runtime management.
So the initialisation that's done to power off the unused memory banks somehow gets preserved when the block is powered off during runtime power management? It's really not clear, this looks like it's only called once on init.
Obviously it's also less power efficient than it could be too since it's going to (so far as I can tell) keep the playback and capture areas powered up even when only one is in use.
On Thu, Oct 8, 2015 at 12:12 PM, Alex Deucher alexdeucher@gmail.com wrote:
Ping? Anyone had a chance to look over this latest revision? I think we've addressed all the outstanding comments.
Thanks,
Alex
On Mon, Oct 19, 2015 at 11:11:19AM -0400, Alex Deucher wrote:
Ping? Anyone had a chance to look over this latest revision? I think we've addressed all the outstanding comments.
Please don't send content free pings and please allow a reasonable time for review. People get busy, go on holiday, attend conferences and so on so unless there is some reason for urgency (like critical bug fixes) please allow at least a couple of weeks for review. Sending content free pings just adds to the mail volume (if they are seen at all) and if something has gone wrong you'll have to resend the patches anyway.
On 20 October 2015 at 10:14, Mark Brown broonie@kernel.org wrote:
On Mon, Oct 19, 2015 at 11:11:19AM -0400, Alex Deucher wrote:
Ping? Anyone had a chance to look over this latest revision? I think we've addressed all the outstanding comments.
Please don't send content free pings and please allow a reasonable time for review. People get busy, go on holiday, attend conferences and so on so unless there is some reason for urgency (like critical bug fixes) please allow at least a couple of weeks for review. Sending content free pings just adds to the mail volume (if they are seen at all) and if something has gone wrong you'll have to resend the patches anyway.
Hi Mark,
To be honest, maintainers also forget, get busy doing other things, get diverted down rabbit holes, get tasked to do something else. We also don't know when maintainers are at conferences, or on holidays.
I don't think repinging patches after 11 days is that unreasonable if you've gotten replies quicker than that previously.
I'd prefer a content free ping, rather than another round of the patches with two spelling mistakes fixed etc.
Dave.
On Tue, Oct 20, 2015 at 10:19:38AM +1000, Dave Airlie wrote:
On 20 October 2015 at 10:14, Mark Brown broonie@kernel.org wrote:
Please don't send content free pings and please allow a reasonable time for review. People get busy, go on holiday, attend conferences and so on so unless there is some reason for urgency (like critical bug fixes) please allow at least a couple of weeks for review. Sending content free pings just adds to the mail volume (if they are seen at all) and if something has gone wrong you'll have to resend the patches anyway.
To be honest, maintainers also forget, get busy doing other things, get diverted down rabbit holes, get tasked to do something else. We also don't know when maintainers are at conferences, or on holidays.
This is why I'm saying leave a reasonable amount of time, for relatively short delays on non-critical stuff the most likely explanation is that something like the above has happened and there is no problem.
I don't think repinging patches after 11 days is that unreasonable if you've gotten replies quicker than that previously.
It's a waste of time, if it's just a delay then at best what's going to happen is that the mail is going to get threaded in with the original posting and make handling the series take longer when it does get looked at (something made worse by the tendency for content free pings to do things like not bother trimming context or top post). If things have been dropped then all it does is require that the maintainer ask that the patch be resent again which just makes things take even longer. Either way a content free ping, especially a rapid one, is at best slowing things down.
When people chase me to review their patches ahead of other people's without some reason for it my standard response is to defer reviewing those changes, I don't want to create the impression that this is a good way for people to get their work prioritised.
If people are taking the fact that sometimes responses come faster as a sign that content free pings are a good idea then I begin to see why some maintainers are just generally unresponsive. :(
I'd prefer a content free ping, rather than another round of the patches with two spelling mistakes fixed etc.
It's very easy to just discard old series and never even look at them, we have to do that all the time anyway when other people help out with review and identify issues.