[alsa-devel] fsl_ssi.c: Getting channel slips with fsl_ssi.c in TDM (network) mode.
Hello Arnaud, all,
I'm trying to get the i.MX6 SSI working in 16-channel TDM mode. FYI, I'm working with the stock 4.3 kernel at https://github.com/torvalds/linux.
When I apply the patch below, the SSI does configure properly and even starts streaming correctly, with all the bits in the right place on the SSI pins. However, after a little time and/or I/O to the SD card, the bit stream slips by 1 slot, causing all of the channels to be misaligned.
My changes to the SSI driver are very minimal (shown below), and amount to forcing it into network (TDM) mode, raising the maximum channel count, and setting the STCCR DC field.
There is no indication from user space that anything has slipped, so the data stream just continues on shifted by 1 slot.
You (Arnaud) mentioned in a previous thread ("Multiple codecs on one sound card for multi-channel sound card") that I should just have to set channels_max (plus, presumably, the other changes I mentioned) and it would mostly work. However, it's very unreliable at the moment.
Any thoughts to how I can diagnose this problem would be greatly appreciated!
So, what happens is this:
The SSI starts sending data like this:

SLOT:  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
DATA:  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
But then, after some time, something slips and without warning it goes to:

SLOT:  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
DATA: 15  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 37c5cd4..73778c2 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -749,7 +749,10 @@ static int fsl_ssi_hw_params(struct snd_pcm_substream *substream,
 			CCSR_SSI_SCR_NET | CCSR_SSI_SCR_I2S_MODE_MASK,
 			channels == 1 ? 0 : i2smode);
 	}
-
+	ssi_private->i2s_mode = CCSR_SSI_SCR_I2S_MODE_NORMAL | CCSR_SSI_SCR_NET;
+	regmap_update_bits(regs, CCSR_SSI_SCR,
+			CCSR_SSI_SCR_NET | CCSR_SSI_SCR_I2S_MODE_MASK,
+			ssi_private->i2s_mode);
 	/*
 	 * FIXME: The documentation says that SxCCR[WL] should not be
 	 * modified while the SSI is enabled.  The only time this can
@@ -863,6 +866,15 @@ static int _fsl_ssi_set_dai_fmt(struct device *dev,
 		return -EINVAL;
 	}
 	scr |= ssi_private->i2s_mode;
+	// Set to 16 slots/frame
+	regmap_update_bits(regs, CCSR_SSI_STCCR,
+			CCSR_SSI_SxCCR_DC_MASK,
+			CCSR_SSI_SxCCR_DC(16));
+
+	regmap_update_bits(regs, CCSR_SSI_SRCCR,
+			CCSR_SSI_SxCCR_DC_MASK,
+			CCSR_SSI_SxCCR_DC(16));
+
 	/* DAI clock inversion */
 	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
@@ -1084,14 +1099,14 @@ static struct snd_soc_dai_driver fsl_ssi_dai_template = {
 	.playback = {
 		.stream_name = "CPU-Playback",
 		.channels_min = 1,
-		.channels_max = 2,
+		.channels_max = 16,
 		.rates = FSLSSI_I2S_RATES,
 		.formats = FSLSSI_I2S_FORMATS,
 	},
 	.capture = {
 		.stream_name = "CPU-Capture",
 		.channels_min = 1,
-		.channels_max = 2,
+		.channels_max = 16,
 		.rates = FSLSSI_I2S_RATES,
 		.formats = FSLSSI_I2S_FORMATS,
 	},
Another thing I have tried is changing the FIFO watermark level to give the DMA some extra time. The problem still happens, but it seems to be a bit better.
The FIFO watermark change is this:

diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 73778c2..7c2e4b0 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -54,6 +54,8 @@
 #include "fsl_ssi.h"
 #include "imx-pcm.h"

+#define WATERMARK 8
+
 /**
  * FSLSSI_I2S_RATES: sample rates supported by the I2S
  *
@@ -943,7 +950,7 @@ static int _fsl_ssi_set_dai_fmt(struct device *dev,
 	 * size.
 	 */
 	if (ssi_private->use_dma)
-		wm = ssi_private->fifo_depth - 2;
+		wm = ssi_private->fifo_depth - WATERMARK;
 	else
 		wm = ssi_private->fifo_depth;
@@ -1260,8 +1267,8 @@ static int fsl_ssi_imx_probe(struct platform_device *pdev,
 	 * We have burstsize be "fifo_depth - 2" to match the SSI
 	 * watermark setting in fsl_ssi_startup().
 	 */
-	ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - 2;
-	ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - 2;
+	ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - WATERMARK;
+	ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - WATERMARK;
 	ssi_private->dma_params_tx.addr = ssi_private->ssi_phys + CCSR_SSI_STX0;
 	ssi_private->dma_params_rx.addr = ssi_private->ssi_phys + CCSR_SSI_SRX0;
Thanks, -Caleb
Hello Caleb,
I went through all the [few] patches we apply to the 4.0 Linux tree (we haven't jumped to 4.2 yet). There is one concerning the SDMA firmware, which you can find in the Freescale tree, available at git://git.freescale.com/imx/linux-2.6-imx.git
commit 619bfca89908b90cd6606ed894c180df0c481508
Author: Shawn Guo <shawn.guo@freescale.com>
Date:   Tue Jul 16 22:53:18 2013 +0800

    ENGR00269945: firwmare: imx: add imx6q sdma script

    Add imx6q sdma script which will be used by all i.MX6 series.

    Signed-off-by: Shawn Guo <shawn.guo@freescale.com>

 firmware/Makefile                     |   1 +
 firmware/imx/sdma/sdma-imx6q.bin.ihex | 116 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+)
I don't know how many of the Freescale patches 4.3-rcN has picked up, but you may want to check whether you need to apply this one. I know there are other SDMA firmware blobs scattered around this Freescale tree; I didn't find proper release notes or documentation. Maybe you can test them as well.
Arnaud
On Tue, Oct 20, 2015 at 12:36 AM, arnaud.mouiche@invoxia.com wrote:
Hi Arnaud, my root filesystem already had that firmware in it (the kernel didn't have the kernel patch, but when I applied that patch, the generated SDMA script was identical).
So, unfortunately, that's not the problem with the channel slipping. Any other thoughts on why the channel would slip? Or pointers on how to diagnose? I have an oscilloscope & know how to use it :-) Also, I can flip a GPIO to watch for timing of interrupts, etc (although I haven't done that yet).
Thanks, -Caleb
On 20/10/2015 19:43, Caleb Crome wrote:
Hello Caleb,
In your situation, I would:

- Check whether the TUE0/1 flags (Transmitter Underrun) ever rise, by enabling the TUE0/1IE bits to generate the related interrupts. This already looks enabled in 4.0, which collects statistics via fsl_ssi_dbg_isr(). Although there is no printk message on underrun, the stats can be read from /sys/kernel/debug/xxxx.ssi/stats.

- I suspect the DMA is not fast enough to fill the FIFO. Maybe you should dig into how the SDMA priorities are configured among the different DMA channels. That's not something I have looked at before, but a quick look suggests that DMA_PRIO_HIGH is _NOT_ configured by the fsl_ssi.c driver (whereas imx-ssi.c did).
Regards, Arnaud
On Wed, Oct 21, 2015 at 12:32 AM, arnaud.mouiche@invoxia.com wrote:
Hello Caleb,
In your situation, I would:
- check if TUE0/1 flag never rise (Transmitter Underrun) by activating the
TUE0/1IE bit to generate related interrupts. It looks like already enabled in 4.0 by collecting statistics with fsl_ssi_dbg_isr(). Despite there is no printk message on underrun, stats can be read from /sys/kernel/debug/xxxx.ssi/stats.
Heh, I checked that, and I couldn't ever get fsl_ssi_dbg_isr() to trigger, for any reason. Somehow interrupts seem to be disabled in the SSI driver, and I can't figure out how to enable them. It appears that the only interrupt required is the DMA interrupt, and the SSI interrupts are not checked. The /sys/kernel/debug/xxxx.ssi/stats file reads all zeros no matter what, even during playback, and even after user space detects underruns.
- I suspect the dma is not fast enough to fill the FIFO. may be you should
dig to check how SDMA priority are configured amongs the differents DMA channels. Not something I already look at before. A quick look suggest that DMA_PRIO_HIGH is _NOT_ configured by the fsl_ssi.c driver (wheras the imx-ssi.c did)
Ah ha! Perhaps that's it. I will check into that. Maybe that's the root cause. Thanks so much.
-Caleb
Regards, Arnaud
On Wed, Oct 21, 2015 at 12:37 PM, Caleb Crome <caleb@crome.org> wrote:
So, the DMA priority doesn't seem to be the issue. It's now set in the device tree, and strangely it's set to priority 0 (the highest) along with the UARTs. Priority 0 is just the highest in the device tree -- it gets remapped to priority 3 in the SDMA driver, since the DT exposes only 3 levels of DMA priority: low, medium, and high. I created a new level that maps to DMA priority 7 (the highest in the hardware), but still got the problem.
So, something unknown is still causing the DMA to miss samples. It must be in the DMA ISR, I would assume. I guess it's time to look into that.
-Caleb
Hi,
On Mon, Oct 26, 2015 at 10:31:08AM -0700, Caleb Crome wrote:
Cc Nicolin, Fabio, Shawn
Perhaps you have an idea about this?
Regards,
Markus
_______________________________________________
Alsa-devel mailing list
Alsa-devel@alsa-project.org
http://mailman.alsa-project.org/mailman/listinfo/alsa-devel
Hi Markus/Caleb,
On Tue, Oct 27, 2015 at 5:13 AM, Markus Pargmann <mpa@pengutronix.de> wrote:
Hi,
Cc Nicolin, Fabio, Shawn
Perhaps you have an idea about this?
Could you please try it without using the external SDMA firmware?
Regards,
Fabio Estevam
On Tue, Oct 27, 2015 at 2:41 AM, Fabio Estevam <festevam@gmail.com> wrote:
Could you please try it without using the external SDMA firmware?
I do need *some* SDMA firmware, correct? The firmware that I'm using ends up in /lib/firmware/imx/sdma/sdma-imx6q.bin, and its md5sum is 5d4584134cc4cba62e1be2f382cd6f3a.
It's the exact same file that came on the root filesystem as the one generated from the Freescale kernel tree -- literally the same md5sum.
Or, should I simply remove that file? I haven't looked into how the kernel driver and the .bin file interact, but I do see that the imx6qdl.dtsi references the imx/sdma/sdma-imx6q.bin file.
Thanks, -Caleb
Regards,
Fabio Estevam
On Tue, Oct 27, 2015 at 2:02 PM, Caleb Crome <caleb@crome.org> wrote:
Could you please try it without using the external SDMA firmware?
I do need *some* SDMA firmware, correct? The firmware that I'm using ends up in /lib/firmware/imx/sdma/sdma-imx6q.bin and is md5sum 5d4584134cc4cba62e1be2f382cd6f3a.
SSI can operate with the ROM SDMA firmware.
I would like to know if this issue also happens if you don't pass the external firmware and use the internal ROM SDMA firmware instead.
Also, could you try bumping the SSI and SDMA clock rates to their maximum?
Adding Shengjiu in case he has any thoughts on getting TDM support in the SSI driver.
Regards,
Fabio Estevam
On Tue, Oct 27, 2015 at 9:10 AM, Fabio Estevam <festevam@gmail.com> wrote:
SSI can operate with the ROM SDMA firmware.
I would like to know if this issue also happens if you don't pass the external firmware and use the internal ROM SDMA firmware instead.
Ah, good to know. Do I just remove the reference in the .dtsi file? Remove the file from the filesystem? I'll do both, to be doubly sure :-)
Also, could you try bumping the SSI and SDMA clock rates at the maximum?
Any idea how I do that? I guess it's in the .dtsi file perhaps? I'll poke around.
Thanks so much for your help. We've got several systems where we'd really like to use the MX6, but this issue is blocking us.
-caleb
On Tue, Oct 27, 2015 at 2:42 PM, Caleb Crome <caleb@crome.org> wrote:
Ah, good to know. Do I just remove reference in the .dtsi file? Remove the file from the filesystem? I'll do both to be doubly sure :-)
Just remove it from the rootfs. Then you will see a message from the kernel saying that no external SDMA firmware could be found and that the internal one is going to be used.
Also, could you try bumping the SSI and SDMA clock rates at the maximum?
Any idea how I do that? I guess it's in the .dtsi file perhaps? I'll poke around.
You can try calling clk_set_rate() with the maximum allowed frequency inside the SSI driver. I don't recall off the top of my head what that value is, though.
Regards,
Fabio Estevam
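For reference, Fabio's clk_set_rate() suggestion would look roughly like this in the SSI driver's probe path. This is a kernel-context sketch, not a tested patch; SSI_MAX_RATE is a placeholder since the real maximum is SoC-specific:

```
/* Sketch only: push the SSI root clock as high as the clock tree allows.
 * ssi_private->clk exists in fsl_ssi.c; SSI_MAX_RATE is a placeholder. */
long max_rate = clk_round_rate(ssi_private->clk, SSI_MAX_RATE);

if (max_rate > 0)
    clk_set_rate(ssi_private->clk, max_rate);
```

clk_round_rate() asks the clock framework for the closest achievable rate, so the subsequent clk_set_rate() request is one the parent clocks can actually satisfy.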
[Adding Roberto to the thread as he is also trying to get SSI TDM support]
On 10/27/2015 07:57 PM, Fabio Estevam wrote:
[Adding Roberto to the thread as he is also trying to get SSI TDM support]
Thanks Fabio,
I'm also having the same issue, but employing the SSI in TDM master mode against a Si32178 SLIC using its PCM mode. PCLK is 2048 kHz, FSYNC is 8 kHz, and the slot length is 32 bits (the SSI requires this when in master mode), with valid data set to 8 bits in the SSI register.
My current situation is that I have a custom fsl_ssi.c driver to control the SSI in TDM master mode. Both PCLK and FSYNC work perfectly fine; the SLIC has a register I can check via SPI for that purpose, so I can see the clocking status from its side. The main problem I have is exactly the same one Caleb is having: after a certain number of SDMA transfers, roughly 1000 or so, everything stops without any apparent reason.
On Tue, Oct 27, 2015 at 2:45 PM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:42 PM, Caleb Crome caleb@crome.org wrote:
On Tue, Oct 27, 2015 at 9:10 AM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:02 PM, Caleb Crome caleb@crome.org wrote:
Could you please try it without using the external SDMA firmware?
I do need *some* SDMA firmware, correct? The firmware that I'm using ends up in /lib/firmware/imx/sdma/sdma-imx6q.bin and is md5sum 5d4584134cc4cba62e1be2f382cd6f3a.
SSI can operate with the ROM SDMA firmware.
I would like to know if this issue also happens if you don't pass the external firmware and use the internal ROM SDMA firmware instead.
Ah, good to know. Do I just remove reference in the .dtsi file? Remove the file from the filesystem? I'll do both to be doubly sure :-)
Just remove it from the rootfs. Then you will see a message from the kernel saying that no external SDMA firmware could be found and that the internal one is going to be used.
Also, could you try bumping the SSI and SDMA clock rates to the maximum?
Any idea how I do that? I guess it's in the .dtsi file perhaps? I'll poke around.
You can try to call clk_set_rate() with the maximum allowed frequency inside the SSI driver. I don't recall off the top of my head what this value is, though.
Regards,
Fabio Estevam
Alsa-devel mailing list Alsa-devel@alsa-project.org http://mailman.alsa-project.org/mailman/listinfo/alsa-devel
On Wed, Oct 28, 2015 at 1:11 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/27/2015 07:57 PM, Fabio Estevam wrote:
[Adding Roberto to the thread as he is also trying to get SSI TDM support]
Thanks Fabio,
I'm also having the same issue, but employing the SSI in TDM master mode against a SLIC Si32178 using its PCM mode. PCLK is 2048 kHz, FSYNC is 8 kHz, the slot length is 32 bits (the SSI requires this when in master mode), but the valid data is set to 8 bits in the SSI register.
My current situation is that I have a custom fsl_ssi.c driver that controls the SSI in TDM master mode. Both PCLK and FSYNC work perfectly fine; the SLIC has a register that I can check via SPI for this purpose, so I can see the clocking status from its side. The main problem I have is exactly the same one Caleb is having: after a certain number of SDMA transfers, roughly 1000 or so, everything stops without any apparent reason.
My problem is that the channels randomly slip a slot and all words end up in the wrong slot. I suspect this is a DMA issue, but I really haven't diagnosed it yet. I don't get a full stop on the data.
FYI, I'm using a very recent 4.3 kernel from linus's repo, but 4.2 behaved the same.
-Caleb
On Tue, Oct 27, 2015 at 2:45 PM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:42 PM, Caleb Crome caleb@crome.org wrote:
On Tue, Oct 27, 2015 at 9:10 AM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:02 PM, Caleb Crome caleb@crome.org wrote:
> Could you please try it without using the external SDMA firmware?

I do need *some* SDMA firmware, correct? The firmware that I'm using ends up in /lib/firmware/imx/sdma/sdma-imx6q.bin and is md5sum 5d4584134cc4cba62e1be2f382cd6f3a.
SSI can operate with the ROM SDMA firmware.
I would like to know if this issue also happens if you don't pass the external firmware and use the internal ROM SDMA firmware instead.
Ah, good to know. Do I just remove the reference in the .dtsi file? Remove the file from the filesystem? I'll do both to be doubly sure :-)
Just remove it from the rootfs. Then you will see a message from the kernel saying that no external SDMA firmware could be found and that the internal one is going to be used.
Also, could you try bumping the SSI and SDMA clock rates to the maximum?
Any idea how I do that? I guess it's in the .dtsi file perhaps? I'll poke around.
You can try to call clk_set_rate() with the maximum allowed frequency inside the SSI driver. I don't recall off the top of my head what this value is, though.
Regards,
Fabio Estevam
On 10/28/2015 02:59 PM, Caleb Crome wrote:
On Wed, Oct 28, 2015 at 1:11 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/27/2015 07:57 PM, Fabio Estevam wrote:
[Adding Roberto to the thread as he is also trying to get SSI TDM support]
Thanks Fabio,
I'm also having the same issue, but employing the SSI in TDM master mode against a SLIC Si32178 using its PCM mode. PCLK is 2048 kHz, FSYNC is 8 kHz, the slot length is 32 bits (the SSI requires this when in master mode), but the valid data is set to 8 bits in the SSI register.
My current situation is that I have a custom fsl_ssi.c driver that controls the SSI in TDM master mode. Both PCLK and FSYNC work perfectly fine; the SLIC has a register that I can check via SPI for this purpose, so I can see the clocking status from its side. The main problem I have is exactly the same one Caleb is having: after a certain number of SDMA transfers, roughly 1000 or so, everything stops without any apparent reason.
My problem is that the channels randomly slip a slot and all words end up in the wrong slot. I suspect this is a DMA issue, but I really haven't diagnosed it yet. I don't get a full stop on the data.
Ah! Ok!
FYI, I'm using a very recent 4.3 kernel from linus's repo, but 4.2 behaved the same.
Can you please post the code you are using to setup the SSI, what PCLK and FSYNC rates? Did you have your own DMA handling?
-Caleb
On Tue, Oct 27, 2015 at 2:45 PM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:42 PM, Caleb Crome caleb@crome.org wrote:
On Tue, Oct 27, 2015 at 9:10 AM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:02 PM, Caleb Crome caleb@crome.org wrote:
>> Could you please try it without using the external SDMA firmware?
> I do need *some* SDMA firmware, correct? The firmware that I'm using ends up in /lib/firmware/imx/sdma/sdma-imx6q.bin and is md5sum 5d4584134cc4cba62e1be2f382cd6f3a.

SSI can operate with the ROM SDMA firmware.
I would like to know if this issue also happens if you don't pass the external firmware and use the internal ROM SDMA firmware instead.
Ah, good to know. Do I just remove the reference in the .dtsi file? Remove the file from the filesystem? I'll do both to be doubly sure :-)
Just remove it from the rootfs. Then you will see a message from the kernel saying that no external SDMA firmware could be found and that the internal one is going to be used.
Also, could you try bumping the SSI and SDMA clock rates to the maximum?
Any idea how I do that? I guess it's in the .dtsi file perhaps? I'll poke around.
You can try to call clk_set_rate() with the maximum allowed frequency inside the SSI driver. I don't recall off the top of my head what this value is, though.
Regards,
Fabio Estevam
On Wed, Oct 28, 2015 at 7:05 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/28/2015 02:59 PM, Caleb Crome wrote:
On Wed, Oct 28, 2015 at 1:11 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/27/2015 07:57 PM, Fabio Estevam wrote:
[Adding Roberto to the thread as he is also trying to get SSI TDM support]
Thanks Fabio,
I'm also having the same issue, but employing the SSI in TDM master mode against a SLIC Si32178 using its PCM mode. PCLK is 2048 kHz, FSYNC is 8 kHz, the slot length is 32 bits (the SSI requires this when in master mode), but the valid data is set to 8 bits in the SSI register.
My current situation is that I have a custom fsl_ssi.c driver that controls the SSI in TDM master mode. Both PCLK and FSYNC work perfectly fine; the SLIC has a register that I can check via SPI for this purpose, so I can see the clocking status from its side. The main problem I have is exactly the same one Caleb is having: after a certain number of SDMA transfers, roughly 1000 or so, everything stops without any apparent reason.
My problem is that the channels randomly slip a slot and all words end up in the wrong slot. I suspect this is a DMA issue, but I really haven't diagnosed it yet. I don't get a full stop on the data.
Ah! Ok!
FYI, I'm using a very recent 4.3 kernel from linus's repo, but 4.2 behaved the same.
Can you please post the code you are using to setup the SSI, what PCLK and FSYNC rates?
My codec is generating the clocks and the MX6 is in slave mode. PCLK (I assume that's the bit clock, or BCLK, in my workload) is 12.288 MHz, and the FSYNC is 48 kHz. 16 channels/frame, 16 bits/channel.
I hardly changed the SSI driver at all. It's goofy now for sure because I force it to 16 slots/frame no matter what, so beware. Other than that, I also set the STCCR for 16 channels and set channels_max to 16.
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 37c5cd4..73778c2 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -749,7 +749,10 @@ static int fsl_ssi_hw_params(struct snd_pcm_substream *substream,
 				CCSR_SSI_SCR_NET | CCSR_SSI_SCR_I2S_MODE_MASK,
 				channels == 1 ? 0 : i2smode);
 	}
-
+	ssi_private->i2s_mode = CCSR_SSI_SCR_I2S_MODE_NORMAL | CCSR_SSI_SCR_NET;
+	regmap_update_bits(regs, CCSR_SSI_SCR,
+			CCSR_SSI_SCR_NET | CCSR_SSI_SCR_I2S_MODE_MASK,
+			ssi_private->i2s_mode);
 	/*
 	 * FIXME: The documentation says that SxCCR[WL] should not be
 	 * modified while the SSI is enabled.  The only time this can
@@ -863,6 +866,15 @@ static int _fsl_ssi_set_dai_fmt(struct device *dev,
 		return -EINVAL;
 	}
 	scr |= ssi_private->i2s_mode;
+	// Set to 16 slots/frame
+	regmap_update_bits(regs, CCSR_SSI_STCCR,
+			CCSR_SSI_SxCCR_DC_MASK,
+			CCSR_SSI_SxCCR_DC(16));
+
+	regmap_update_bits(regs, CCSR_SSI_SRCCR,
+			CCSR_SSI_SxCCR_DC_MASK,
+			CCSR_SSI_SxCCR_DC(16));
+

 	/* DAI clock inversion */
 	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
@@ -1084,14 +1099,14 @@ static struct snd_soc_dai_driver fsl_ssi_dai_template = {
 	.playback = {
 		.stream_name = "CPU-Playback",
 		.channels_min = 1,
-		.channels_max = 2,
+		.channels_max = 16,
 		.rates = FSLSSI_I2S_RATES,
 		.formats = FSLSSI_I2S_FORMATS,
 	},
 	.capture = {
 		.stream_name = "CPU-Capture",
 		.channels_min = 1,
-		.channels_max = 2,
+		.channels_max = 16,
 		.rates = FSLSSI_I2S_RATES,
 		.formats = FSLSSI_I2S_FORMATS,
 	},
There are other changes I've tried, including watermark changes (check out the alsa-dev archives on this thread for what I did before). This morning I am about to try the watermark changes suggested by Nicolin Chen.
Did you have your own DMA handling?
Nope, I don't really know how to do that. I'm relying on the built in sdma driver (drivers/dma/imx-sdma.c) + fsl pcm (sound/soc/fsl/imx-pcm-dma.c) and my modified fsl_ssi.c driver.
-Caleb
-Caleb
On Tue, Oct 27, 2015 at 2:45 PM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:42 PM, Caleb Crome caleb@crome.org wrote:
On Tue, Oct 27, 2015 at 9:10 AM, Fabio Estevam festevam@gmail.com wrote:
> On Tue, Oct 27, 2015 at 2:02 PM, Caleb Crome caleb@crome.org wrote:
>>> Could you please try it without using the external SDMA firmware?
>> I do need *some* SDMA firmware, correct? The firmware that I'm using ends up in /lib/firmware/imx/sdma/sdma-imx6q.bin and is md5sum 5d4584134cc4cba62e1be2f382cd6f3a.
> SSI can operate with the ROM SDMA firmware.
>
> I would like to know if this issue also happens if you don't pass the external firmware and use the internal ROM SDMA firmware instead.

Ah, good to know. Do I just remove the reference in the .dtsi file? Remove the file from the filesystem? I'll do both to be doubly sure :-)
Just remove it from the rootfs. Then you will see a message from the kernel saying that no external SDMA firmware could be found and that the internal one is going to be used.
> Also, could you try bumping the SSI and SDMA clock rates to the maximum?

Any idea how I do that? I guess it's in the .dtsi file perhaps? I'll poke around.
You can try to call clk_set_rate() with the maximum allowed frequency inside the SSI driver. I don't recall off the top of my head what this value is, though.
Regards,
Fabio Estevam
On 10/28/2015 03:24 PM, Caleb Crome wrote:
On Wed, Oct 28, 2015 at 7:05 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/28/2015 02:59 PM, Caleb Crome wrote:
On Wed, Oct 28, 2015 at 1:11 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/27/2015 07:57 PM, Fabio Estevam wrote:
[Adding Roberto to the thread as he is also trying to get SSI TDM support]
Thanks Fabio,
I'm also having the same issue, but employing the SSI in TDM master mode against a SLIC Si32178 using its PCM mode. PCLK is 2048 kHz, FSYNC is 8 kHz, the slot length is 32 bits (the SSI requires this when in master mode), but the valid data is set to 8 bits in the SSI register.
My current situation is that I have a custom fsl_ssi.c driver that controls the SSI in TDM master mode. Both PCLK and FSYNC work perfectly fine; the SLIC has a register that I can check via SPI for this purpose, so I can see the clocking status from its side. The main problem I have is exactly the same one Caleb is having: after a certain number of SDMA transfers, roughly 1000 or so, everything stops without any apparent reason.
My problem is that the channels randomly slip a slot and all words end up in the wrong slot. I suspect this is a DMA issue, but I really haven't diagnosed it yet. I don't get a full stop on the data.
Ah! Ok!
FYI, I'm using a very recent 4.3 kernel from linus's repo, but 4.2 behaved the same.
Can you please post the code you are using to setup the SSI, what PCLK and FSYNC rates?
My codec is generating the clocks and the MX6 is in slave mode. PCLK (I assume that's the bit clock, or BCLK, in my workload) is 12.288 MHz,
Yes! I meant BCLK.
and the FSYNC is 48kHz. 16 channels/frame, 16 bits/channel.
Ok! In my case it's BCLK at 2048KHz and FSYNC 8KHz, 8 slots at 8bits
I hardly changed the SSI driver at all. It's goofy now for sure because I force it to 16 slots/frame no matter what, so beware. Other than that, I also set the STCCR for 16 channels and set channels_max to 16.
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 37c5cd4..73778c2 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -749,7 +749,10 @@ static int fsl_ssi_hw_params(struct snd_pcm_substream *substream,
 				CCSR_SSI_SCR_NET | CCSR_SSI_SCR_I2S_MODE_MASK,
 				channels == 1 ? 0 : i2smode);
 	}
-
+	ssi_private->i2s_mode = CCSR_SSI_SCR_I2S_MODE_NORMAL | CCSR_SSI_SCR_NET;
+	regmap_update_bits(regs, CCSR_SSI_SCR,
+			CCSR_SSI_SCR_NET | CCSR_SSI_SCR_I2S_MODE_MASK,
+			ssi_private->i2s_mode);
 	/*
 	 * FIXME: The documentation says that SxCCR[WL] should not be
 	 * modified while the SSI is enabled.  The only time this can
@@ -863,6 +866,15 @@ static int _fsl_ssi_set_dai_fmt(struct device *dev,
 		return -EINVAL;
 	}
 	scr |= ssi_private->i2s_mode;
+	// Set to 16 slots/frame
+	regmap_update_bits(regs, CCSR_SSI_STCCR,
+			CCSR_SSI_SxCCR_DC_MASK,
+			CCSR_SSI_SxCCR_DC(16));
+
+	regmap_update_bits(regs, CCSR_SSI_SRCCR,
+			CCSR_SSI_SxCCR_DC_MASK,
+			CCSR_SSI_SxCCR_DC(16));
+

 	/* DAI clock inversion */
 	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
@@ -1084,14 +1099,14 @@ static struct snd_soc_dai_driver fsl_ssi_dai_template = {
 	.playback = {
 		.stream_name = "CPU-Playback",
 		.channels_min = 1,
-		.channels_max = 2,
+		.channels_max = 16,
 		.rates = FSLSSI_I2S_RATES,
 		.formats = FSLSSI_I2S_FORMATS,
 	},
 	.capture = {
 		.stream_name = "CPU-Capture",
 		.channels_min = 1,
-		.channels_max = 2,
+		.channels_max = 16,
 		.rates = FSLSSI_I2S_RATES,
 		.formats = FSLSSI_I2S_FORMATS,
 	},
There are other changes I've tried, including watermark changes (check out the alsa-dev archives on this thread for what I did before). This morning I am about to try the watermark changes suggested by Nicolin Chen.
Did you have your own DMA handling?
Nope, I don't really know how to do that. I'm relying on the built-in SDMA driver (drivers/dma/imx-sdma.c) plus the fsl PCM layer (sound/soc/fsl/imx-pcm-dma.c) and my modified fsl_ssi.c driver.
In case you need to increase the SSI clock, have a look at CLK_SSI1_PODF within arch/arm/mach-imx/clk-imx6*.c, depending on your SoC; in this file you can change the max clock rate supported by the given SSI clock.
Have you tried setting the SSI to I2S_MODE_SLAVE, BTW?
-Caleb
-Caleb
On Tue, Oct 27, 2015 at 2:45 PM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:42 PM, Caleb Crome caleb@crome.org wrote:
> On Tue, Oct 27, 2015 at 9:10 AM, Fabio Estevam festevam@gmail.com wrote:
>> On Tue, Oct 27, 2015 at 2:02 PM, Caleb Crome caleb@crome.org wrote:
>>>> Could you please try it without using the external SDMA firmware?
>>> I do need *some* SDMA firmware, correct? The firmware that I'm using ends up in /lib/firmware/imx/sdma/sdma-imx6q.bin and is md5sum 5d4584134cc4cba62e1be2f382cd6f3a.
>> SSI can operate with the ROM SDMA firmware.
>>
>> I would like to know if this issue also happens if you don't pass the external firmware and use the internal ROM SDMA firmware instead.
> Ah, good to know. Do I just remove the reference in the .dtsi file? Remove the file from the filesystem? I'll do both to be doubly sure :-)

Just remove it from the rootfs. Then you will see a message from the kernel saying that no external SDMA firmware could be found and that the internal one is going to be used.
>> Also, could you try bumping the SSI and SDMA clock rates to the maximum?
> Any idea how I do that? I guess it's in the .dtsi file perhaps? I'll poke around.

You can try to call clk_set_rate() with the maximum allowed frequency inside the SSI driver. I don't recall off the top of my head what this value is, though.
Regards,
Fabio Estevam
On Wed, Oct 28, 2015 at 6:59 AM, Caleb Crome caleb@crome.org wrote:
On Wed, Oct 28, 2015 at 1:11 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/27/2015 07:57 PM, Fabio Estevam wrote:
[Adding Roberto to the thread as he is also trying to get SSI TDM support]
Thanks Fabio,
I'm also having the same issue, but employing the SSI in TDM master mode against a SLIC Si32178 using its PCM mode. PCLK is 2048 kHz, FSYNC is 8 kHz, the slot length is 32 bits (the SSI requires this when in master mode), but the valid data is set to 8 bits in the SSI register.
My current situation is that I have a custom fsl_ssi.c driver that controls the SSI in TDM master mode. Both PCLK and FSYNC work perfectly fine; the SLIC has a register that I can check via SPI for this purpose, so I can see the clocking status from its side. The main problem I have is exactly the same one Caleb is having: after a certain number of SDMA transfers, roughly 1000 or so, everything stops without any apparent reason.
My problem is that the channels randomly slip a slot and all words end up in the wrong slot. I suspect this is a DMA issue, but I really haven't diagnosed it yet. I don't get a full stop on the data.
FYI, I'm using a very recent 4.3 kernel from linus's repo, but 4.2 behaved the same.
Now I'm recalling that when I tried the patches on 4.1, everything definitely froze. What kernel are you using? I'm basically using 4.3-rc7.
-Caleb
On 10/28/2015 11:09 PM, Caleb Crome wrote:
On Wed, Oct 28, 2015 at 6:59 AM, Caleb Crome caleb@crome.org wrote:
On Wed, Oct 28, 2015 at 1:11 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/27/2015 07:57 PM, Fabio Estevam wrote:
[Adding Roberto to the thread as he is also trying to get SSI TDM support]
Thanks Fabio,
I'm also having the same issue, but employing the SSI in TDM master mode against a SLIC Si32178 using its PCM mode. PCLK is 2048 kHz, FSYNC is 8 kHz, the slot length is 32 bits (the SSI requires this when in master mode), but the valid data is set to 8 bits in the SSI register.
My current situation is that I have a custom fsl_ssi.c driver that controls the SSI in TDM master mode. Both PCLK and FSYNC work perfectly fine; the SLIC has a register that I can check via SPI for this purpose, so I can see the clocking status from its side. The main problem I have is exactly the same one Caleb is having: after a certain number of SDMA transfers, roughly 1000 or so, everything stops without any apparent reason.
My problem is that the channels randomly slip a slot and all words end up in the wrong slot. I suspect this is a DMA issue, but I really haven't diagnosed it yet. I don't get a full stop on the data.
FYI, I'm using a very recent 4.3 kernel from linus's repo, but 4.2 behaved the same.
Now I'm recalling that when I tried the patches on 4.1, everything definitely froze. What kernel are you using? I'm basically using 4.3-rc7.
Currently the Freescale official v3.14.28_ga_1.0.0 coming with Yocto Fido.
-Caleb
On Wed, Oct 28, 2015 at 09:11:39AM +0100, Roberto Fichera wrote:
I'm also having the same issue, but employing the SSI in TDM master mode against a SLIC Si32178 using its PCM mode. PCLK is 2048 kHz, FSYNC is 8 kHz, the slot length is 32 bits (the SSI requires this when in master mode), but the valid data is set to 8 bits in the SSI register.
My current situation is that I have a custom fsl_ssi.c driver that controls the SSI in TDM master mode. Both PCLK and FSYNC work perfectly fine; the SLIC has a register that I can check via SPI for this purpose, so I can see the clocking status from its side. The main problem I have is exactly the same one Caleb is having: after a certain number of SDMA transfers, roughly 1000 or so, everything stops without any apparent reason.
I will start helping you figure out your problem, but it seems you are having a different issue here, with clock generation, so I don't see why you said *same issue*. To double-confirm: by "everything stops", do you mean that the clock from the SSI stops?
On 10/30/2015 12:04 AM, Nicolin Chen wrote:
On Wed, Oct 28, 2015 at 09:11:39AM +0100, Roberto Fichera wrote:
I'm also having the same issue, but employing the SSI in TDM master mode against a SLIC Si32178 using its PCM mode. PCLK is 2048 kHz, FSYNC is 8 kHz, the slot length is 32 bits (the SSI requires this when in master mode), but the valid data is set to 8 bits in the SSI register.
My current situation is that I have a custom fsl_ssi.c driver that controls the SSI in TDM master mode. Both PCLK and FSYNC work perfectly fine; the SLIC has a register that I can check via SPI for this purpose, so I can see the clocking status from its side. The main problem I have is exactly the same one Caleb is having: after a certain number of SDMA transfers, roughly 1000 or so, everything stops without any apparent reason.
I will start helping you figure out your problem, but it seems you are having a different issue here, with clock generation, so I don't see why you said *same issue*. To double-confirm: by "everything stops", do you mean that the clock from the SSI stops?
Definitely yes! My problem is different from Caleb's. Just to summarize things: I have SSI1 connected to a SiLabs SLIC Si32178 via AUDMUX6; the padmux is below:
pinctrl_audmux_1: audmuxgrp-3 {
	fsl,pins = <
		MX6SX_PAD_SD3_DATA1__AUDMUX_AUD6_TXC	0x130b0	/* PCLK */
		MX6SX_PAD_SD3_DATA2__AUDMUX_AUD6_TXFS	0x130b0	/* FSYNC */
		MX6SX_PAD_SD3_DATA0__AUDMUX_AUD6_RXD	0x130b0	/* DTX */
		MX6SX_PAD_SD3_DATA3__AUDMUX_AUD6_TXD	0x120b0	/* DRX */
	>;
};
The Si32178 is a slave device, so SSI1 has to generate both BCLK and FSYNC. I've configured the AUDMUX as:
int si3217x_audmux_config(unsigned int master, unsigned int slave)
{
	unsigned int ptcr, pdcr;

	ptcr = IMX_AUDMUX_V2_PTCR_SYN |
	       IMX_AUDMUX_V2_PTCR_TFSDIR |
	       IMX_AUDMUX_V2_PTCR_TFSEL(master) |
	       IMX_AUDMUX_V2_PTCR_TCLKDIR |
	       IMX_AUDMUX_V2_PTCR_TCSEL(master);
	pdcr = IMX_AUDMUX_V2_PDCR_RXDSEL(master);
	si3217x_audmux_v2_configure_port(slave, ptcr, pdcr); /* configure internal port */

	ptcr = IMX_AUDMUX_V2_PTCR_SYN;
	pdcr = IMX_AUDMUX_V2_PDCR_RXDSEL(slave);
	si3217x_audmux_v2_configure_port(master, ptcr, pdcr); /* configure external port */

	return 0;
}
BCLK is 2048 kHz, FSYNC is 8 kHz, and the frame is 32 slots of 8 bits each. Looking at TXC and TXFS with a logic analyzer, everything looks OK.
The SSI is set up at the beginning as:
unsigned long flags;
struct ccsr_ssi __iomem *ssi = ssi_private->ssi;
u32 srcr;
u8 wm;

clk_prepare_enable(ssi_private->clk);

/*
 * Section 16.5 of the MPC8610 reference manual says that the
 * SSI needs to be disabled before updating the registers we set here.
 */
write_ssi_mask(&ssi->scr, CCSR_SSI_SCR_SSIEN, 0);

/*
 * Program the SSI into I2S master network synchronous mode.
 * Also enable the transmit and receive FIFOs.
 */
write_ssi_mask(&ssi->scr,
	       CCSR_SSI_SCR_I2S_MODE_MASK | CCSR_SSI_SCR_SYN,
	       CCSR_SSI_SCR_I2S_MODE_NORMAL | CCSR_SSI_SCR_SYN |
	       CCSR_SSI_SCR_NET | CCSR_SSI_SCR_SYS_CLK_EN);

/* TX on the falling edge of PCLK is mandatory because the RX SLIC side works this way */
writel(CCSR_SSI_STCR_TXBIT0	/* LSB aligned */
       | CCSR_SSI_STCR_TFEN0	/* enable TX FIFO0 */
       | CCSR_SSI_STCR_TSCKP	/* transmit clock polarity: data clocked out on falling edge */
       | CCSR_SSI_STCR_TFDIR	/* transmit frame direction: generated internally */
       | CCSR_SSI_STCR_TXDIR,	/* transmit clock direction: generated internally */
       &ssi->stcr);

srcr = readl(&ssi->srcr);

/* clear out RFDIR and RXDIR because the clock is synchronous */
srcr &= ~(CCSR_SSI_SRCR_RFDIR | CCSR_SSI_SRCR_RXDIR);

srcr |= CCSR_SSI_SRCR_RXBIT0	/* LSB aligned */
	| CCSR_SSI_SRCR_RFEN0	/* enable RX FIFO0 */
	| CCSR_SSI_SRCR_RSCKP;	/* receive clock polarity: data latched on rising edge */

writel(srcr, &ssi->srcr);

/* do not service the ISR yet */
writel(0, &ssi->sier);

/*
 * Set the watermark for transmit FIFO 0 and receive FIFO 0. We
 * don't use FIFO 1. We program the transmit watermark to signal a
 * DMA transfer if there are only two (or fewer) elements left in the FIFO.
 */

/*
 * tdm_real_slots is 2 because we mask all except the first 2 slots;
 * our buffer is 2 slots * 8 bytes each, so set the watermarks to a
 * multiple of it: 8 words in our case.
 */
wm = ssi_private->tdm_real_slots * 4; /* ssi_private->use_dma ? ssi_private->fifo_depth - 2 : ssi_private->fifo_depth; */

writel(CCSR_SSI_SFCSR_TFWM0(wm) | CCSR_SSI_SFCSR_RFWM0(wm) |
       CCSR_SSI_SFCSR_TFWM1(wm) | CCSR_SSI_SFCSR_RFWM1(wm),
       &ssi->sfcsr);

/* enable one FIFO */
write_ssi_mask(&ssi->srcr, CCSR_SSI_SRCR_RFEN1, 0);
write_ssi_mask(&ssi->stcr, CCSR_SSI_STCR_TFEN1, 0);

/* disable SSI two-channel mode operation */
write_ssi_mask(&ssi->scr, CCSR_SSI_SCR_TCH_EN, 0);

/*
 * We keep the SSI disabled because if we enable it, then the
 * DMA controller will start. It's not supposed to start until
 * the SCR.TE (or SCR.RE) bit is set, but it does anyway. The
 * DMA controller will transfer one "BWC" of data (i.e. the
 * amount of data that the MR.BWC bits are set to). The reason
 * this is bad is because at this point, the PCM driver has not
 * finished initializing the DMA controller.
 */

/* Set default slot number -- 32 in our case */
write_ssi_mask(&ssi->stccr, CCSR_SSI_SxCCR_DC_MASK,
	       CCSR_SSI_SxCCR_DC(ssi_private->tdm_slots));
write_ssi_mask(&ssi->srccr, CCSR_SSI_SxCCR_DC_MASK,
	       CCSR_SSI_SxCCR_DC(ssi_private->tdm_slots));

/* Set default word length -- 8 bits */
write_ssi_mask(&ssi->stccr, CCSR_SSI_SxCCR_WL_MASK,
	       CCSR_SSI_SxCCR_WL(ssi_private->tdm_word_size));
write_ssi_mask(&ssi->srccr, CCSR_SSI_SxCCR_WL_MASK,
	       CCSR_SSI_SxCCR_WL(ssi_private->tdm_word_size));

/* enable the SSI */
write_ssi_mask(&ssi->scr, CCSR_SSI_SCR_SSIEN, CCSR_SSI_SCR_SSIEN);

/* we are interested only in the first 2 slots */
writel(~ssi_private->tdm_slots_enabled, &ssi->stmsk);
writel(~ssi_private->tdm_slots_enabled, &ssi->srmsk);

return 0;
}
The SSI clock is calculated and then enabled. Both TX and RX DMA channels are requested in the probe() function as below, and the corresponding TX and RX SDMA events in the DTS use the defaults from imx6sx.dtsi:
slave_config.direction = DMA_MEM_TO_DEV;
slave_config.dst_addr = ssi_private->ssi_phys + offsetof(struct ccsr_ssi, stx0);
slave_config.dst_addr_width = width;
slave_config.dst_maxburst = ssi_private->tdm_real_slots * 4;
ret = dmaengine_slave_config(ssi_private->tx_chan, &slave_config);

ssi_private->rx_chan = dma_request_slave_channel_reason(&pdev->dev, "rx");
slave_config.direction = DMA_DEV_TO_MEM;
slave_config.src_addr = ssi_private->ssi_phys + offsetof(struct ccsr_ssi, srx0);
slave_config.src_addr_width = width;
slave_config.src_maxburst = ssi_private->tdm_real_slots * 4;
ret = dmaengine_slave_config(ssi_private->rx_chan, &slave_config);
and, before setting the RDMAE and TDMAE bits, set up like this:
ssi_private->tx_buf = dma_alloc_coherent(NULL, buffer_len,
					 &ssi_private->tx_dmaaddr, GFP_KERNEL);
desc = dmaengine_prep_dma_cyclic(ssi_private->tx_chan,
				 ssi_private->tx_dmaaddr, buffer_len,
				 ssi_private->tdm_real_slots * 4,
				 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);

desc->callback = dma_tx_callback;
desc->callback_param = ssi_private;

printk("TX: prepare for the DMA.\n");
dmaengine_submit(desc);
dma_async_issue_pending(ssi_private->tx_chan);

ssi_private->rx_buf = dma_alloc_coherent(NULL, buffer_len,
					 &ssi_private->rx_dmaaddr, GFP_KERNEL);

desc = dmaengine_prep_dma_cyclic(ssi_private->rx_chan,
				 ssi_private->rx_dmaaddr, buffer_len,
				 ssi_private->tdm_real_slots * 4,
				 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);

desc->callback = dma_rx_callback;
desc->callback_param = ssi_private;

printk("RX: prepare for the DMA.\n");
dmaengine_submit(desc);
dma_async_issue_pending(ssi_private->rx_chan);
Finally, the SSI's TX and RX parts are enabled:
scr = readl(&ssi->scr);
scr |= CCSR_SSI_SCR_TE | CCSR_SSI_SCR_RE; /* enable both TX and RX SSI sections */
writel(scr, &ssi->scr);
Finally, the SIER is programmed as:
struct ccsr_ssi __iomem *ssi = ssi_private->ssi;
u32 sier = CCSR_SSI_SIER_RFF0_EN | CCSR_SSI_SIER_TFE0_EN;

/*
 * If DMA is enabled, then allow the SSI to request DMA transfers;
 * otherwise use normal interrupt requests.
 */
if (ssi_private->use_dma > 0)
	sier |= CCSR_SSI_SIER_RDMAE | CCSR_SSI_SIER_TDMAE;

if (ssi_private->use_dma > 1 || !ssi_private->use_dma)
	sier |= CCSR_SSI_SIER_RIE | CCSR_SSI_SIER_TIE;

sier &= ~(CCSR_SSI_SIER_TDE1_EN | CCSR_SSI_SIER_TFE1_EN |
	  CCSR_SSI_SIER_TFE0_EN | CCSR_SSI_SIER_TDE0_EN);

writel(sier, &ssi->sier);
At this point I should see the DMA callbacks called every burst_size words. This doesn't actually happen as I'd wish: I can see from a proc file that the callbacks are called anywhere from 1 to 20000 times and then never again. This is also confirmed by the fact that interrupt 34 (sdma) doesn't increase anymore and matches the internal counters collected within my callbacks. Here is what I can inspect from the data I have collected:
root@voneus-domus-imx6sx:~# cat /proc/domus_ssi_stats
SSI TDM Info:
  PLL clk=66000000
  SSI baudclk=49152000
  ssi_phy=0x02028000
  irq=78
  fifo_depth=15   <---- this is what is read from the DTS, but not used as the watermark
  tdm_frame_rate=8000
  tdm_slots=32 (real 2)
  tdm_word_size=8
  tdm_slots_enabled=00000000000000000000000000000011
  clk_frequency=2048000
  clock_running=yes
  DMA=yes
  Dual FIFO=no
  RX DMA frame count=17121  RX DMA addr=0x9c692000  RX DMA buffer len=16
  TX DMA frame count=17121  TX DMA addr=0x9c4aa000  TX DMA buffer len=16

SSI Registers:
  ssi_scr=0x0000009f
  ssi_sier=0x00500004
  ssi_stcr=0x000002e8
  ssi_srcr=0x00000288
  ssi_stccr=0x00007f0b
  ssi_srccr=0x00007f0b
  ssi_sfcsr=0x0088f088
  ssi_stmsk=0xfffffffc
  ssi_srmsk=0xfffffffc
Cheers, Roberto Fichera.
On Fri, Oct 30, 2015 at 12:42:53PM +0100, Roberto Fichera wrote:
/*
 * Set the watermark for transmit FIFO 0 and receive FIFO 0. We
 * don't use FIFO 1. We program the transmit watermark to signal a
 * DMA transfer if there are only two (or fewer) elements left in the FIFO.
 */
The SSI clock is calculated and then enabled. Both TX and RX DMA channels are requested in the probe() function as below, and the corresponding TX and RX SDMA events in the DTS use the defaults from imx6sx.dtsi:
Since you are using a single-FIFO configuration, which SDMA script are you using? This should be reflected in the device tree. As far as I know, FSL 3.14 uses script number 22 for SSIs, which is the one for dual-FIFO mode.
At this point I should see the DMA callbacks called every burst_size words. This doesn't actually happen as I'd wish: I can see from a proc file that the callbacks are called anywhere from 1 to 20000 times and then never again. This is also confirmed by the fact that interrupt 34 (sdma) doesn't increase anymore and matches the internal counters collected within my callbacks. Here is what I can inspect from the data I have collected:
Just for clarification: when the behaviour doesn't happen as you wish, is it just the DMA that has stopped? I remember you also mentioned the bit clock had stopped, since you can check the clock status from the codec chip.
SSI Registers: ssi_sfcsr=0x0088f088
At this point you have data in the RxFIFO while the TxFIFO runs empty, so DMA requests from both sides should be issued. If the DMA stops as you described, you should check those two channels from the SDMA side by dumping the SDMAARM_STOP_STAT, SDMAARM_HSTART, SDMAARM_EVTOVR, SDMAARM_EVTPEND, SDMAARM_EVTERR, SDMAARM_DSPOVR and SDMAARM_HOSTOVR registers.
Overall, I don't see an obvious defect on your SSI side, but you may also try to toggle TDMAE and RDMAE at the point where the callbacks stop -- re-raise the DMA requests by disabling and re-enabling TDMAE and RDMAE and see if that works. I think either something interfered with the SDMA or SSI register setup, or the SDMA missed the request signals from the SSI.
On Tue, Oct 27, 2015 at 9:45 AM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:42 PM, Caleb Crome caleb@crome.org wrote:
On Tue, Oct 27, 2015 at 9:10 AM, Fabio Estevam festevam@gmail.com wrote:
On Tue, Oct 27, 2015 at 2:02 PM, Caleb Crome caleb@crome.org wrote:
Could you please try it without using the external SDMA firmware?
I do need *some* SDMA firmware, correct? The firmware that I'm using ends up in /lib/firmware/imx/sdma/sdma-imx6q.bin and is md5sum 5d4584134cc4cba62e1be2f382cd6f3a.
SSI can operate with the ROM SDMA firmware.
I would like to know if this issue also happens if you don't pass the external firmware and use the internal ROM SDMA firmware instead.
Ah, good to know. Do I just remove reference in the .dtsi file? Remove the file from the filesystem? I'll do both to be doubly sure :-)
Just remove it from the rootfs. Then you will see a message from the kernel saying that no external SDMA firmware could be found and that the internal one is going to be used.
I gave it a try. No noticeable change in behavior.
Also, could you try bumping the SSI and SDMA clock rates at the maximum?
Any idea how I do that? I guess it's in the .dtsi file perhaps? I'll poke around.
You can try calling clk_set_rate() with the maximum allowed frequency inside the SSI driver. I can't recall off the top of my head what that value is, though.
I don't know what to use as a parameter to clk_set_rate(). The two clocks that I see in the SDMA configuration are clk_ipg and clk_ahb. The IMX6SDLRM says: "configurable clock options for the SDMA core and the ARM platform DMA units. 1:2 ratio with maximum of SDMA core running at ARM platform peripheral bus speed and DMS running at max DMA frequency. 1:1 ratio when both SDMA core and ARM platform DMA clocks are set to the ARM platform peripheral bus speed"
But I have a hard time reconciling that statement with the code in sdma_init, which references only the ipg and ahb clocks. I put in a printk and found clk_ipg and clk_ahb to both be 132 MHz.
The IMX6SDLRM.pdf, page 4717 says: "...but the SDMA core is physically limited to a maximum 104 MHz frequency...".
So, I just don't know what clock to set to 104 MHz, or if the 104MHz really is the right limit. Any ideas?
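For reference, a hedged sketch of what bumping the SDMA clock might look like. The helper below is hypothetical (not existing driver code); the 104 MHz ceiling is the figure quoted from the IMX6SDLRM above, and the "ahb" clock name follows what imx-sdma.c requests in its probe path. Whether the rate is actually settable depends on the SoC clock tree:

```c
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/* Hypothetical helper: try to raise the SDMA core clock toward the
 * 104 MHz limit quoted from the reference manual. "ahb" matches the
 * clock name imx-sdma.c already uses. */
static int sdma_try_max_clock(struct device *dev)
{
	struct clk *ahb = devm_clk_get(dev, "ahb");

	if (IS_ERR(ahb))
		return PTR_ERR(ahb);

	/* clk_set_rate() rounds to what the clock tree can provide */
	return clk_set_rate(ahb, 104000000);
}
```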
This made me think that possibly the problem is cpufreq dynamically scaling the core frequency. So I tried:
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
to ensure that the clocks don't dynamically switch on me, but I still get channel slips. Unfortunately, it's hard to get statistical measures with my eyeballs (watching a scope with TDM decode). It might be a little better, but it still fails.
Thanks, -Caleb
Regards,
Fabio Estevam
On Tue, Oct 27, 2015 at 08:13:44AM +0100, Markus Pargmann wrote:
So, the DMA priority doesn't seem to be the issue. It's now set in the device tree, and strangely it's set to priority 0 (the highest) along with the UARTs. Priority 0 is just the highest in the device tree -- it gets remapped to priority 3 in the SDMA driver. The DT exposes only 3 levels of DMA priority: low, medium and high. I created a new level that maps to DMA priority 7 (the highest in the hardware), but still got the problem.
So, something unknown is still causing the DMA to miss samples. It must be in the DMA ISR, I would assume. I guess it's time to look into that.
Cc Nicolin, Fabio, Shawn
Perhaps you have an idea about this?
Off the top of my head:
1) Enable TUE0, TUE1, ROE0, ROE1 to see if there is any IRQ triggered.
2) Set the watermarks for both TX and RX to 8 while using a burst size of 6. It'd be safer to provisionally hard-code these numbers than to derive them from fifo_depth as your current change does, since fifo_depth may be an odd value.
3) Try enlarging the ALSA period size in asound.conf, or pass parameters when you do playback/capture, so that the number of interrupts from the SDMA is reduced.
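For example (device name and sizes are illustrative; the flags are the standard alsa-utils aplay options):

```shell
# Larger periods mean fewer SDMA interrupts per second; sizes are in frames.
aplay -D hw:0,0 --period-size=2048 --buffer-size=8192 myramp.wav
```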
You may also see if the reproducibility is somehow reduced or not.
Nicolin
On 10/27/2015 09:11 PM, Nicolin Chen wrote:
On Tue, Oct 27, 2015 at 08:13:44AM +0100, Markus Pargmann wrote:
So, the DMA priority doesn't seem to be the issue. It's now set in the device tree, and strangely it's set to priority 0 (the highest) along with the UARTs. Priority 0 is just the highest in the device tree -- it gets remapped to priority 3 in the SDMA driver. The DT exposes only 3 levels of DMA priority: low, medium and high. I created a new level that maps to DMA priority 7 (the highest in the hardware), but still got the problem.
So, something unknown is still causing the DMA to miss samples. It must be in the DMA ISR, I would assume. I guess it's time to look into that.
Cc Nicolin, Fabio, Shawn
Perhaps you have an idea about this?
Off the top of my head:
- Enable TUE0, TUE1, ROE0, ROE1 to see if there is any IRQ triggered.
In my case I was never able to see an interrupt triggered when setting both the RDMAE and TDMAE bits in the SIER register.
Set the watermarks for both TX and RX to 8 while using a burst size of 6. It'd be safer to provisionally hard-code these numbers than to derive them from fifo_depth as your current change does, since fifo_depth may be an odd value.
Try enlarging the ALSA period size in asound.conf, or pass parameters when you do playback/capture, so that the number of interrupts from the SDMA is reduced.
You may also see if the reproducibility is somehow reduced or not.
Nicolin _______________________________________________ Alsa-devel mailing list Alsa-devel@alsa-project.org http://mailman.alsa-project.org/mailman/listinfo/alsa-devel
On Wed, Oct 28, 2015 at 09:23:01AM +0100, Roberto Fichera wrote:
On 10/27/2015 09:11 PM, Nicolin Chen wrote:
On Tue, Oct 27, 2015 at 08:13:44AM +0100, Markus Pargmann wrote:
So, the DMA priority doesn't seem to be the issue. It's now set in the device tree, and strangely it's set to priority 0 (the highest) along with the UARTs. Priority 0 is just the highest in the device tree -- it gets remapped to priority 3 in the SDMA driver. The DT exposes only 3 levels of DMA priority: low, medium and high. I created a new level that maps to DMA priority 7 (the highest in the hardware), but still got the problem.
So, something unknown is still causing the DMA to miss samples. It must be in the DMA ISR, I would assume. I guess it's time to look into that.
Cc Nicolin, Fabio, Shawn
Perhaps you have an idea about this?
Off the top of my head:
- Enable TUE0, TUE1, ROE0, ROE1 to see if there is any IRQ triggered.
In my case I was never able to see an interrupt triggered when setting both the RDMAE and TDMAE bits in the SIER register.
Your problem may not involve a hardware FIFO underrun at all, so it's quite normal for you to see no IRQ, in my opinion.
On Tue, Oct 27, 2015 at 1:11 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Tue, Oct 27, 2015 at 08:13:44AM +0100, Markus Pargmann wrote:
So, the DMA priority doesn't seem to be the issue. It's now set in the device tree, and strangely it's set to priority 0 (the highest) along with the UARTs. Priority 0 is just the highest in the device tree -- it gets remapped to priority 3 in the SDMA driver. The DT exposes only 3 levels of DMA priority: low, medium and high. I created a new level that maps to DMA priority 7 (the highest in the hardware), but still got the problem.
So, something unknown is still causing the DMA to miss samples. It must be in the DMA ISR, I would assume. I guess it's time to look into that.
Cc Nicolin, Fabio, Shawn
Perhaps you have an idea about this?
Off the top of my head:
- Enable TUE0, TUE1, ROE0, ROE1 to see if there is any IRQ triggered.
Ah, I found that SIER TIE & RIE were not enabled. I enabled them (and just submitted a patch to the list, which will need to be fixed).
With my 2 patches, the
/sys/kernel/debug/2028000.ssi/stats
file now shows the proper interrupts.
- Set the watermarks for both TX and RX to 8 while using a burst size of 6. It'd be safer to provisionally hard-code these numbers than to derive them from fifo_depth as your current change does, since fifo_depth may be an odd value.
Ah, it's fascinating that you say this. fifo_depth is definitely odd; it's 15, as set in imx6qdl.dtsi: fsl,fifo-depth = <15>; But the DMA maxburst is made even later in the code...
Setting the watermark to 8 and maxburst to 8 dramatically reduces the channel slip rate; in fact, I didn't see a slip for more than 30 minutes of playing. That's a new record for sure. But eventually there was an underrun, and the channels slipped.
Setting watermark to 8 and maxburst to 6 still had some slips, seemingly more than 8 & 8.
I feel like a monkey randomly typing at my keyboard though. I don't know why maxburst=8 worked better. I get the feeling that I was just lucky.
There does seem to be a correlation between user-space-reported underruns and this channel slip, although they are definitely not in a 1:1 ratio: underruns happen without slips, and slips happen without underruns. The latter is very disturbing, because user space has no idea something is wrong.
My test is simply to run aplay with a 1000-second, 16-channel sound file and watch the data decoded on my scope. The sound file has the channel number encoded in the most significant nibble of each word, and I use a conditional trigger to make sure the most significant nibble after the frame sync is '0', i.e. trigger if there is a rising edge on data within 300 ns of the rising edge of fsync.
Here's the patch that has worked the best so far.
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 73778c2..b834f77 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -943,7 +943,7 @@ static int _fsl_ssi_set_dai_fmt(struct device *dev,
 	 * size.
 	 */
 	if (ssi_private->use_dma)
-		wm = ssi_private->fifo_depth - 2;
+		wm = 8;
 	else
 		wm = ssi_private->fifo_depth;
@@ -1260,8 +1260,8 @@ static int fsl_ssi_imx_probe(struct platform_device *pdev,
 	 * We have burstsize be "fifo_depth - 2" to match the SSI
 	 * watermark setting in fsl_ssi_startup().
 	 */
-	ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - 2;
-	ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - 2;
+	ssi_private->dma_params_tx.maxburst = 8;
+	ssi_private->dma_params_rx.maxburst = 8;
 	ssi_private->dma_params_tx.addr = ssi_private->ssi_phys + CCSR_SSI_STX0;
 	ssi_private->dma_params_rx.addr = ssi_private->ssi_phys + CCSR_SSI_SRX0;
- Try enlarging the ALSA period size in asound.conf, or pass parameters when you do playback/capture, so that the number of interrupts from the SDMA is reduced.
I checked this earlier and it seemed to help, but didn't solve the issue. I will check it again with my latest updates.
-Caleb
You may also see if the reproducibility is somehow reduced or not.
Nicolin
On Wed, Oct 28, 2015 at 03:06:40PM -0700, Caleb Crome wrote:
- Set the watermarks for both TX and RX to 8 while using a burst size of 6. It'd be safer to provisionally hard-code these numbers than to derive them from fifo_depth as your current change does, since fifo_depth may be an odd value.
Ah, it's fascinating that you say this. fifo_depth is definitely odd; it's 15, as set in imx6qdl.dtsi:
fsl,fifo-depth = <15>; But the DMA maxburst is made even later in the code...
An odd burst size may cause a problem similar to the channel swapping seen in two-channel cases, because the number of data FIFOs is 2 -- an even number. But that seems unrelated to your problem here.
Setting the watermark to 8 and maxburst to 8 dramatically reduces the channel slip rate; in fact, I didn't see a slip for more than 30 minutes of playing. That's a new record for sure. But eventually there was an underrun, and the channels slipped.
Setting watermark to 8 and maxburst to 6 still had some slips, seemingly more than 8 & 8.
I feel like a monkey randomly typing at my keyboard though. I don't know why maxburst=8 worked better. I get the feeling that I was just lucky.
That's actually another possible root cause -- a performance issue. burst=8 generates fewer bus transactions than burst=6. Since you have a lot of channels compared to the normal 2, you need to feed the FIFO more frequently. If the SDMA does not feed data before the input FIFO underruns, a channel swap can happen -- in your case, a channel slip.
There does seem to be a correlation between user space reported underruns and this channel slip, although they definitely are not 1:1
Reported by user space? Are you saying that's an ALSA underrun in user space, not a hardware underrun reported by the IRQ in the driver? They are quite different. An ALSA underrun means the DMA buffer underran, while the other results from FIFO feeding efficiency. For an ALSA underrun, enlarging the playback period size and period count will ease the problem:
period number = buffer size / period size;
An ALSA underrun may not be accompanied by a hardware underrun, but they may co-exist.
ratio: underruns happen without slips and slips happen without underruns. The latter is very disturbing because user space has no idea something is wrong.
@@ -1260,8 +1260,8 @@ static int fsl_ssi_imx_probe(struct platform_device *pdev,
 	 * We have burstsize be "fifo_depth - 2" to match the SSI
 	 * watermark setting in fsl_ssi_startup().
 	 */
-	ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - 2;
-	ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - 2;
+	ssi_private->dma_params_tx.maxburst = 8;
+	ssi_private->dma_params_rx.maxburst = 8;
I am actually thinking about setting the watermark to a larger number. I forgot how the SDMA script handles this number, but if the burst size means the overall data count per transaction, it might indicate that each FIFO only gets half of the burst size due to the dual FIFOs.
Therefore, if the watermark is set to 8, each FIFO has 7 (15 - 8) spaces left, so the largest safe burst size would actually be 14 (7 * 2).
Yes. That's kind of fine-tuning the parameters. In your case you may try a larger number, since the SSI is simultaneously consuming a large amount of data, even though it sounds risky. It's worth trying because you are using the SSI, which has only small FIFOs, unlike the ESAI with its 128-entry depth.
Nicolin
On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Wed, Oct 28, 2015 at 03:06:40PM -0700, Caleb Crome wrote:
- Set the watermarks for both TX and RX to 8 while using a burst size of 6. It'd be safer to provisionally hard-code these numbers than to derive them from fifo_depth as your current change does, since fifo_depth may be an odd value.
Ah, this is fascinating you say this. fifo_depth is definitely odd, it's 15 as set in imx6qdl.dtsi:
fsl,fifo-depth = <15>; But the DMA maxburst is made even later in the code...
An odd burst size may cause a problem similar to the channel swapping seen in two-channel cases, because the number of data FIFOs is 2 -- an even number. But that seems unrelated to your problem here.
Setting the watermark to 8 and maxburst to 8 dramatically reduces the channel slip rate; in fact, I didn't see a slip for more than 30 minutes of playing. That's a new record for sure. But eventually there was an underrun, and the channels slipped.
Setting watermark to 8 and maxburst to 6 still had some slips, seemingly more than 8 & 8.
I feel like a monkey randomly typing at my keyboard though. I don't know why maxburst=8 worked better. I get the feeling that I was just lucky.
That's actually another possible root cause -- a performance issue. burst=8 generates fewer bus transactions than burst=6. Since you have a lot of channels compared to the normal 2, you need to feed the FIFO more frequently. If the SDMA does not feed data before the input FIFO underruns, a channel swap can happen -- in your case, a channel slip.
There does seem to be a correlation between user space reported underruns and this channel slip, although they definitely are not 1:1
Reported by user space? Are you saying that's an ALSA underrun in user space, not a hardware underrun reported by the IRQ in the driver? They are quite different. An ALSA underrun means the DMA buffer underran, while the other results from FIFO feeding efficiency. For an ALSA underrun, enlarging the playback period size and period count will ease the problem:
period number = buffer size / period size;
An ALSA underrun may not be accompanied by a hardware underrun, but they may co-exist.
Sometimes they happen at the same time. So, I run aplay, and all is fine. Then the user-space app underruns, and when I look at the scope, the channels have slipped. So somehow the start/restart after an underrun is not always clean, I guess.
Is there any mechanism for DMA FIFO underruns to be reported back to user space? There certainly should be, because the consequences are catastrophic, yet the user-space app goes on as if everything is just great. This is much, much worse than the reported underrun: a skip in the audio is bad but sometimes tolerable, while a channel slip is permanent and absolutely intolerable.
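There is an in-kernel hook that could plausibly serve this purpose: snd_pcm_stop_xrun() forces the stream into the XRUN state, so user space sees an error on its next transfer. A hedged sketch -- the helper and its call site are hypothetical, not current fsl_ssi.c code, and the SISR bit masks are assumed to come from fsl_ssi.h:

```c
#include <sound/pcm.h>
/* #include "fsl_ssi.h" for the CCSR_SSI_SISR_* masks */

/* Hypothetical helper: given a SISR value already read via regmap in
 * the ISR, escalate a hardware TX underrun (TUE0/TUE1) to an ALSA
 * XRUN so aplay actually notices instead of streaming on misaligned. */
static void ssi_report_tx_underrun(struct snd_pcm_substream *substream,
				   u32 sisr)
{
	if (sisr & (CCSR_SSI_SISR_TUE0 | CCSR_SSI_SISR_TUE1))
		snd_pcm_stop_xrun(substream);
}
```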
ratio: underruns happen without slips and slips happen without underruns. The latter is very disturbing because user space has no idea something is wrong.
@@ -1260,8 +1260,8 @@ static int fsl_ssi_imx_probe(struct platform_device *pdev,
 	 * We have burstsize be "fifo_depth - 2" to match the SSI
 	 * watermark setting in fsl_ssi_startup().
 	 */
-	ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - 2;
-	ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - 2;
+	ssi_private->dma_params_tx.maxburst = 8;
+	ssi_private->dma_params_rx.maxburst = 8;
I am actually thinking about setting the watermark to a larger number. I forgot how the SDMA script handles this number, but if the burst size means the overall data count per transaction, it might indicate that each FIFO only gets half of the burst size due to the dual FIFOs.
Therefore, if the watermark is set to 8, each FIFO has 7 (15 - 8) spaces left, so the largest safe burst size would actually be 14 (7 * 2).
Oh, does this depend on the data size? I'm using 16-bit data, so I guess the bursts are measured in 2 byte units? Does this mean that the burst size should be dynamically adjusted depending on word size (I guess done in hw_params)?
Yes. That's kind of fine-tuning the parameters. In your case you may try a larger number, since the SSI is simultaneously consuming a large amount of data, even though it sounds risky. It's worth trying because you are using the SSI, which has only small FIFOs, unlike the ESAI with its 128-entry depth.
Nicolin
On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome caleb@crome.org wrote:
On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
I am actually thinking about setting the watermark to a larger number. I forgot how the SDMA script handles this number, but if the burst size means the overall data count per transaction, it might indicate that each FIFO only gets half of the burst size due to the dual FIFOs.
Therefore, if the watermark is set to 8, each FIFO has 7 (15 - 8) spaces left, so the largest safe burst size would actually be 14 (7 * 2).
Oh, does this depend on the data size? I'm using 16-bit data, so I guess the bursts are measured in 2 byte units? Does this mean that the burst size should be dynamically adjusted depending on word size (I guess done in hw_params)?
Nicolin
Okay, so wm=8 and maxburst=14 definitely does not work at all. wm=8, maxburst=8 works okay, but still not perfectly.
I just discovered some new information:
With wm=8 and maxburst=8 (which is my best setting so far), I just captured a problem at the very start of playing a file, and restarted enough times to capture it starting wrong:
Instead of the playback starting with
(hex numbers: my ramp file has first nibble as channel, second nibble as frame)
frame 0: 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
frame 1: 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
It started with:
frame 0: 00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
frame 1: f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
So, the transfer started wrong right out of the gate -- with an extra sample inserted at the beginning. Again, my setup is:
1) use scope to capture the TDM bus. Trigger on first data change
2) aplay myramp.wav
3) If okay, ctrl-c and goto 2.
4) The capture below shows everything off by 1 sample.
The capture is here: https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
This test definitely reveals that there is a startup issue. Now for the $64,000 question: what to do with this knowledge? I'm quite unfamiliar with how the DMA works at all.
I'll start poking around the DMA I guess.
Thanks, -Caleb
On 10/29/2015 03:55 PM, Caleb Crome wrote:
On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome caleb@crome.org wrote:
On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
I am actually thinking about setting the watermark to a larger number. I forgot how the SDMA script handles this number, but if the burst size means the overall data count per transaction, it might indicate that each FIFO only gets half of the burst size due to the dual FIFOs.
Therefore, if the watermark is set to 8, each FIFO has 7 (15 - 8) spaces left, so the largest safe burst size would actually be 14 (7 * 2).
Oh, does this depend on the data size? I'm using 16-bit data, so I guess the bursts are measured in 2 byte units? Does this mean that the burst size should be dynamically adjusted depending on word size (I guess done in hw_params)?
Nicolin
Okay, so wm=8 and maxburst=14 definitely does not work at all. wm=8, maxburst=8 works okay, but still not perfectly.
I just discovered some new information:
With wm=8 and maxburst=8 (which is my best setting so far), I just captured a problem at the very start of playing a file, and restarted enough times to capture it starting wrong:
Instead of the playback starting with
(hex numbers: my ramp file has first nibble as channel, second nibble as frame)
frame 0: 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
frame 1: 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
It started with:
frame 0: 00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
frame 1: f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
So, the transfer started wrong right out of the gate -- with an extra sample inserted at the beginning. Again, my setup is:
- use scope to capture the TDM bus. Trigger on first data change
- aplay myramp.wav
- If okay, ctrl-c and goto 2.
- The capture below shows everything off by 1 sample.
The capture is here: https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
This test definitely reveals that there is a startup issue. Now for the $64,000 question: what to do with this knowledge? I'm quite unfamiliar with how the DMA works at all.
In my case, for example, I'm using an i.MX6SX SoC. I've changed fsl_ssi.c to start the internally generated SSI clock by setting both RDMAE and TDMAE only once I'm pretty sure that everything has been set up (DMA and callbacks). Note that I'm not using ALSA, because my target is to integrate the SSI in TDM network mode with my DAHDI driver for a VoIP application.
Back to the DMA question: in your case it shouldn't really be a problem, since all the DMA handling is done by the Linux audio framework.
Regarding my SSI problem, I was once able to keep the DMA working for a few seconds before it stopped and was never retriggered. Currently I have 2 DMA channels, one for TX and another for RX. I've changed my DTS and updated my fsl_ssi to handle the new clocks; I guess only CLK_SPBA has improved my situation. I've also tried to enable both RIE and TIE to service the ISR, with and without SSI DMA support, but this ends with a full system freeze. The ISR was never changed in my fsl_ssi.c.
ssi1: ssi@02028000 {
	compatible = "fsl,imx6sx-ssi", "fsl,imx21-ssi";
	reg = <0x02028000 0x4000>;
	interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
	clocks = <&clks IMX6SX_CLK_SSI1_IPG>,
		 <&clks IMX6SX_CLK_SSI1>,
--->>>		 <&clks IMX6SX_CLK_SPBA>,
		 <&clks IMX6SX_CLK_SDMA>;
	clock-names = "ipg", "baud", "dma", "ahb";
	dmas = <&sdma 37 1 0>, <&sdma 38 1 0>;
	dma-names = "rx", "tx";
};
Another thing I'm looking at is the SDMA events (37 and 38), which the reference manual reports as:
37 -> SSI1 Receive 0 DMA request
38 -> SSI1 Transmit 0 DMA request
alongside which there are also:
35 -> SSI1 Receive 1 DMA request
36 -> SSI1 Transmit 1 DMA request
I don't actually know how the two event types behave from the SDMA point of view.
I'm also considering writing a plain new audio driver, to at least try something that is supposed to work fine with the SSI.
I'll start poking around the DMA I guess.
I guess it's an SSI startup problem.
Thanks, -Caleb
On Thu, Oct 29, 2015 at 8:37 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/29/2015 03:55 PM, Caleb Crome wrote:
On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome caleb@crome.org wrote:
On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
I am actually thinking about setting the watermark to a larger number. I forgot how the SDMA script handles this number, but if the burst size means the overall data count per transaction, it might indicate that each FIFO only gets half of the burst size due to the dual FIFOs.
Therefore, if the watermark is set to 8, each FIFO has 7 (15 - 8) spaces left, so the largest safe burst size would actually be 14 (7 * 2).
Oh, does this depend on the data size? I'm using 16-bit data, so I guess the bursts are measured in 2 byte units? Does this mean that the burst size should be dynamically adjusted depending on word size (I guess done in hw_params)?
Nicolin
Okay, so wm=8 and maxburst=14 definitely does not work at all. wm=8, maxburst=8 works okay, but still not perfectly.
I just discovered some new information:
With wm=8 and maxburst=8 (which is my best setting so far), I just captured a problem at the very start of playing a file, and restarted enough times to capture it starting wrong:
Instead of the playback starting with
(hex numbers: my ramp file has first nibble as channel, second nibble as frame)
frame 0: 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
frame 1: 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
It started with:
frame 0: 00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
frame 1: f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
So, the transfer started wrong right out of the gate -- with an extra sample inserted at the beginning. Again, my setup is:
- use scope to capture the TDM bus. Trigger on first data change
- aplay myramp.wav
- If okay, ctrl-c and goto 2.
- The capture below shows everything off by 1 sample.
The capture is here: https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
This test definitely reveals that there is a startup issue. Now for the $64,000 question: what to do with this knowledge? I'm quite unfamiliar with how the DMA works at all.
In my case, for example, I'm using an i.MX6SX SoC. I've changed fsl_ssi.c to start the internally generated SSI clock by setting both RDMAE and TDMAE only once I'm pretty sure that everything has been set up (DMA and callbacks). Note that I'm not using ALSA, because my target is to integrate the SSI in TDM network mode with my DAHDI driver for a VoIP application.
Back to the DMA question: in your case it shouldn't really be a problem, since all the DMA handling is done by the Linux audio framework.
Regarding my SSI problem, I was once able to keep the DMA working for a few seconds before it stopped and was never retriggered. Currently I have 2 DMA channels, one for TX and another for RX. I've changed my DTS and updated my fsl_ssi to handle the new clocks; I guess only CLK_SPBA has improved my situation. I've also tried to enable both RIE and TIE to service the ISR, with and without SSI DMA support, but this ends with a full system freeze.
I got this system freeze too when enabling RIE and TIE, because the interrupts TFE1IE, TFE0IE, TDE1IE and TDE0IE are *enabled* at reset (check ref manual 61.9.5). I suspect it was a livelock kind of situation where the ISR is called infinitely often. After disabling those, the system worked okay. Check out the patch I sent on this issue yesterday or the day before.
Another thing I'm looking at is the SDMA events (37 and 38), which the reference manual reports as:
37 -> SSI1 Receive 0 DMA request
38 -> SSI1 Transmit 0 DMA request
alongside which there are also:
35 -> SSI1 Receive 1 DMA request
36 -> SSI1 Transmit 1 DMA request
I don't actually know how the two event types behave from the SDMA point of view.
Events 35 and 36 are for dual-FIFO mode only, and no current system (with fsl_ssi.c, anyway) uses dual-FIFO mode. How do I know? Because it's definitely broken in fsl_ssi.c. I was just about to report that bug.
hint, in fsl_ssi.c:
    if (ssi_private->use_dma && !ret && dmas[3] == IMX_DMATYPE_SSI_DUAL) {
should read:
    if (ssi_private->use_dma && !ret && dmas[4] == IMX_DMATYPE_SSI_DUAL) {
I'm also considering writing a plain new audio driver, to at least try something that is supposed to work fine with the SSI.
Yeah, maybe that's the easiest way to go just to get operational: start with just the bare-minimum SSI driver, so you know all the registers are locked into place the way you like.
-caleb
On 10/29/2015 04:54 PM, Caleb Crome wrote:
On Thu, Oct 29, 2015 at 8:37 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/29/2015 03:55 PM, Caleb Crome wrote:
On Thu, Oct 29, 2015 at 6:44 AM, Caleb Crome caleb@crome.org wrote:
On Wed, Oct 28, 2015 at 9:53 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
I am actually thinking about setting the watermark to a larger number. I forgot how the SDMA script handles this number, but if the burst size means the overall data count per transaction, it might indicate that each FIFO only gets half of the burst size due to the dual FIFOs.
Therefore, if the watermark is set to 8, each FIFO has 7 (15 - 8) spaces left, so the largest safe burst size would actually be 14 (7 * 2).
Oh, does this depend on the data size? I'm using 16-bit data, so I guess the bursts are measured in 2 byte units? Does this mean that the burst size should be dynamically adjusted depending on word size (I guess done in hw_params)?
Nicolin
Okay, so wm=8 and maxburst=14 definitely does not work at all. wm=8, maxburst=8 works okay, but still not perfectly.
I just discovered some new information:
With wm=8 and maxburst=8 (which is my best setting so far), I just captured a problem at the very start of playing a file, and restarted enough times to capture it starting wrong:
Instead of the playback starting with
(hex numbers: my ramp file has first nibble as channel, second nibble as frame)
frame 0: 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0, f0
frame 1: 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1, f1
It started with:
frame 0: 00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
frame 1: f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
So, the transfer started wrong right out of the gate -- with an extra sample inserted at the beginning. Again, my setup is:
1. Use a scope to capture the TDM bus; trigger on the first data change.
2. aplay myramp.wav
3. If okay, ctrl-c and go to step 2.
4. The capture below shows everything off by 1 sample.
The capture is here: https://drive.google.com/open?id=0B-KUa9Yf1o7iOXFtWXk2ZXdoUXc
This test definitely reveals that there is a startup issue. Now for the $64,000 question: what to do with this knowledge? I'm quite unfamiliar with how the DMA works at all.
In my case, for example, I'm using an i.MX6SX SoC. I've changed fsl_ssi.c to start the SSI clock generated internally by setting both RDMAE and TDMAE only once I'm sure that everything has been set up (DMA and callback). Note that I'm not using ALSA, because my target is to integrate the SSI in TDM network mode with my DAHDI driver for a VoIP app.
Back to the DMA question: in your case it shouldn't really be a problem, since all the DMA stuff is handled by the Linux audio framework.
Regarding my SSI problem, I was once able to keep the DMA working for a few seconds before it got stopped and was never retriggered. Currently I have 2 DMA channels, one for TX and another for RX. I've changed my DTS and updated my fsl_ssi to handle the new clocks; I guess only CLK_SPBA has improved my situation. I've also tried to enable both RIE and TIE to service the ISR, with and without SSI DMA support, but this ends with a full system freeze.
I got this system freeze too when enabling RIE and TIE, because the interrupts TFE1IE, TFE0IE, TDE1IE, TDE0IE are *enabled* at reset (check ref manual 61.9.5), which I suspect causes a livelock kind of situation where the ISR is just called infinitely often. After disabling those, the system worked okay. Check out the patch I sent on this issue yesterday or the day before.
Ooohh!!! Forgot to check this!!! I'm now going to mask them!!!
Another thing I'm looking is the sdma events (37 and 38) which are reported by the reference manual to
37 -> SSI1 Receive 0 DMA request
38 -> SSI1 Transmit 0 DMA request
along that there are also
35 -> SSI1 Receive 1 DMA request
36 -> SSI1 Transmit 1 DMA request
I don't actually know how the two event types behave from the SDMA point of view.
The 35 and 36 are for dual FIFO mode only, and no current system (with fsl_ssi.c anyway) uses dual FIFO mode. How do I know? Because it's definitely broken in fsl_ssi.c. I was just about to report that bug.
Ah! Thanks! The reference manual is really clear to explain it :-D !
hint: fsl_ssi.c: if (ssi_private->use_dma && !ret && dmas[3] == IMX_DMATYPE_SSI_DUAL) { should read if (ssi_private->use_dma && !ret && dmas[4] == IMX_DMATYPE_SSI_DUAL) {
Yep! I know such piece of code.
I'm also considering making a plain new audio driver, to at least try to use something which is supposed to work fine with the SSI.
Yeah, maybe that's the easiest way to go just to get operational. Start with just the bare minimum ssi driver so you know all the registers are locked into place the way you like.
-caleb _______________________________________________ Alsa-devel mailing list Alsa-devel@alsa-project.org http://mailman.alsa-project.org/mailman/listinfo/alsa-devel
On Thu, Oct 29, 2015 at 9:02 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/29/2015 04:54 PM, Caleb Crome wrote:
On Thu, Oct 29, 2015 at 8:37 AM, Roberto Fichera kernel@tekno-soft.it wrote:
On 10/29/2015 03:55 PM, Caleb Crome wrote: I don't know actually how the two events types will behaves from the SDMA point of view.
The 35 and 36 are for dual FIFO mode only, and no current system (with fsl_ssi.c anyway) uses dual FIFO mode. How do I know? Because it's definitely broken in fsl_ssi.c. I was just about to report that bug.
Ah! Thanks! The reference manual is really clear to explain it :-D !
hint: fsl_ssi.c: if (ssi_private->use_dma && !ret && dmas[2] == IMX_DMATYPE_SSI_DUAL) { should read if (ssi_private->use_dma && !ret && dmas[3] == IMX_DMATYPE_SSI_DUAL) {
Oops, never mind. I was looking at that wrong. It's correct as is. -Caleb
On 10/29/2015 05:02 PM, Roberto Fichera wrote:
I got this system freeze too when enabling RIE and TIE, because the interrupts TFE1IE, TFE0IE, TDE1IE, TDE0IE are *enabled* at reset (check ref manual 61.9.5), which I suspect causes a livelock kind of situation where the ISR is just called infinitely often. After disabling those, the system worked okay. Check out the patch I sent on this issue yesterday or the day before.
Ooohh!!! Forgot to check this!!! I'm now going to mask them!!!
Doesn't work for me! It still freezes the system! SIER=0x01d005f4
On Thu, Oct 29, 2015 at 9:34 AM, Roberto Fichera kernel@tekno-soft.it wrote:
Doesn't work for me! It still freezes the system! SIER=0x01d005f4
You still have many per-frame interrupts enabled, which is still too many. For example, you have RLSIE, TLSIE, RFSIE, TFSIE, etc. These all generate one interrupt per frame, and not necessarily at the same time, so you could be getting 4 or more interrupts per frame. Be sure they're all zero except for the DMA enables and the specific ones you actually want enabled.
-C
On 10/29/2015 05:39 PM, Caleb Crome wrote:
Doesn't work for me! It still freezes the system! SIER=0x01d005f4
I thought the same, but setting only RFF0, TFE0, RDMAE and TDMAE along with RIE and TIE still freezes the system.
You still have many per-frame interrupts enabled, which is still too many. For example, you have RLSIE, TLSIE, RFSIE, TFSIE, etc. These all generate one interrupt per frame, and not necessarily at the same time, so you could be getting 4 or more interrupts per frame. Be sure they're all zero except for the DMA enables and the specific ones you actually want enabled.
Yep! But I still think that the CPU should be able to handle them all.
On Thu, Oct 29, 2015 at 08:54:00AM -0700, Caleb Crome wrote:
The 35 and 36 are for dual FIFO mode only, and no current system (with fsl_ssi.c anyway) uses dual FIFO mode. How do I know? Because it's definitely broken in fsl_ssi.c. I was just about to report that bug.
Broken? Where? The reason there's no current system using that is that there is no SDMA patch to upload a newer version of the SDMA firmware, which is always included in the Freescale official release. Yes, you saw that patch at the beginning of this thread. In the SDMA DT binding, number 22 is the configuration for the Dual FIFO script, but using it would break most upstream users, as upstream kernels running with the default ROM firmware don't have that script.
If you want to use Dual FIFO mode with upstream Kernels, apply the change I provided you in my previous reply along with these two:
commit 2490a94afbcb139f68f7872e5645d66b99720a52
Author: Shawn Guo shawn.guo@freescale.com
Date:   Tue Jul 16 22:53:18 2013 +0800
ENGR00269945: firwmare: imx: add imx6q sdma script
Add imx6q sdma script which will be used by all i.MX6 series.
Signed-off-by: Shawn Guo shawn.guo@freescale.com
diff --git a/firmware/Makefile b/firmware/Makefile index e297e1b..7b22ab3 100644 --- a/firmware/Makefile +++ b/firmware/Makefile @@ -61,6 +61,7 @@ fw-shipped-$(CONFIG_DRM_RADEON) += radeon/R100_cp.bin radeon/R200_cp.bin \ radeon/RV770_pfp.bin radeon/RV770_me.bin \ radeon/RV730_pfp.bin radeon/RV730_me.bin \ radeon/RV710_pfp.bin radeon/RV710_me.bin +fw-shipped-$(CONFIG_IMX_SDMA) += imx/sdma/sdma-imx6q.bin fw-shipped-$(CONFIG_DVB_AV7110) += av7110/bootcode.bin fw-shipped-$(CONFIG_DVB_TTUSB_BUDGET) += ttusb-budget/dspbootcode.bin fw-shipped-$(CONFIG_E100) += e100/d101m_ucode.bin e100/d101s_ucode.bin \ diff --git a/firmware/imx/sdma/sdma-imx6q.bin.ihex b/firmware/imx/sdma/sdma-imx6q.bin.ihex new file mode 100644 index 0000000..2e561f0 --- /dev/null +++ b/firmware/imx/sdma/sdma-imx6q.bin.ihex @@ -0,0 +1,116 @@ +:1000000053444D4101000000010000001C000000AD +:1000100026000000B40000007A0600008202000002 +:10002000FFFFFFFF00000000FFFFFFFFFFFFFFFFDC +:10003000FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD0 +:10004000FFFFFFFFFFFFFFFF6A1A0000FFFFFFFF38 +:10005000EB020000BB180000FFFFFFFF08040000D8 +:10006000FFFFFFFFC0030000FFFFFFFFFFFFFFFFD9 +:10007000FFFFFFFFAB020000FFFFFFFF7B0300005D +:10008000FFFFFFFFFFFFFFFF4C0400006E040000B6 +:10009000FFFFFFFF00180000FFFFFFFFFFFFFFFF54 +:1000A000000000000018000062180000161A00008E +:1000B000061B0000E3C1DB57E35FE357F352016A1D +:1000C0008F00D500017D8D00A005EB5D7804037DD8 +:1000D00079042C7D367C79041F7CEE56000F600677 +:1000E000057D0965437E0A62417E20980A623E7E54 +:1000F00009653C7E12051205AD026007037DFB55C4 +:10010000D36D2B98FB55041DD36DC86A2F7F011F3B +:1001100003200048E47C5398FB55D76D1500057803 +:100120000962C86A0962C86AD76D5298FB55D76DD3 +:100130001500150005780A62C86A0A62C86AD76D98 +:100140005298FB55D76D15001500150005780B6208 +:10015000C86A0B62C86AD76D097CDF6D077F000033 +:10016000EB55004D077DFAC1E35706980700CC68B0 +:100170000C6813C20AC20398D9C1E3C1DB57E35F1D +:10018000E357F352216A8F00D500017D8D00A00551 +:10019000EB5DFB567804037D79042A7D317C79047C 
+:1001A000207C700B1103EB53000F6003057D096584 +:1001B000377E0A62357E86980A62327E0965307E15 +:1001C00012051205AD026007027C065A8E98265A67 +:1001D000277F011F03200048E87C700B1103135395 +:1001E000AF98150004780962065A0962265AAE983B +:1001F0001500150004780A62065A0A62265AAE985B +:1002000015001500150004780B62065A0B62265A79 +:10021000077C0000EB55004D067DFAC1E357699855 +:1002200007000C6813C20AC26698700B11031353BF +:100230006C07017CD9C1FB5E8A066B07017CD9C1C2 +:10024000F35EDB59D3588F0110010F398B003CC18D +:100250002B7DC05AC85B4EC1277C88038906E35CAE +:10026000FF0D1105FF1DBC053E07004D187D7008F0 +:1002700011007E07097D7D07027D2852E698F8521D +:10028000DB54BC02CC02097C7C07027D2852EF982B +:10029000F852D354BC02CC02097D0004DD988B00D7 +:1002A000C052C85359C1D67D0002CD98FF08BF0087 +:1002B0007F07157D8804D500017D8D00A005EB5DCD +:1002C0008F0212021202FF3ADA05027C3E071899E9 +:1002D000A402DD02027D3E0718995E071899EB55CE +:1002E0009805EB5DF352FB546A07267D6C07017D90 +:1002F00055996B07577C6907047D6807027D010EDD +:100300002F999358D600017D8E009355A005935DDB +:10031000A00602780255045D1D7C004E087C69072A +:10032000037D0255177E3C99045D147F8906935026 +:100330000048017D2799A099150006780255045DB3 +:100340004F070255245D2F07017CA09917006F0706 +:10035000017C012093559D000700A7D9F598D36C27 +:100360006907047D6807027D010E64999358D600E1 +:10037000017D8E009355A005935DA006027802557D +:10038000C86D0F7C004E087C6907037D0255097E0D +:100390007199C86D067F890693500048017D5C996C +:1003A000A0999A99C36A6907047D6807027D010EC6 +:1003B00087999358D600017D8E009355A005935DD3 +:1003C000A0060278C865045D0F7C004E087C6907B2 +:1003D000037DC865097E9499045D067F8906935064 +:1003E0000048017D7F99A09993559D000700FF6CFF +:1003F000A7D9F5980000E354EB55004D017CF59822 +:10040000DD98E354EB55FF0A1102FF1A7F07027CC7 +:10041000A005B4999D008C05BA05A0051002BA0488 +:10042000AD0454040600E3C1DB57FB52C36AF35228 +:10043000056A8F00D500017D8D00A005EB5D780475 +:10044000037D79042B7D1E7C7904337CEE56000FEE +:10045000FB556007027DC36DD599041DC36DC8624D 
+:100460003B7E6006027D10021202096A357F12028D +:10047000096A327F1202096A2F7F011F0320004898 +:10048000E77C099AFB55C76D150015001500057826 +:10049000C8620B6AC8620B6AC76D089AFB55C76DC4 +:1004A000150015000578C8620A6AC8620A6AC76D35 +:1004B000089AFB55C76D15000578C862096AC862BD +:1004C000096AC76D097C286A077F0000EB55004D5B +:1004D000057DFAC1DB57BF9977C254040AC2BA99A5 +:1004E000D9C1E3C1DB57F352056A8F00D500017D06 +:1004F0008D00A005FB567804037D7904297D1F7CBF +:1005000079042E7CE35D700D1105ED55000F600739 +:10051000027D0652329A2652337E6005027D100219 +:100520001202096A2D7F1202096A2A7F1202096AE1 +:10053000277F011F03200048EA7CE3555D9A1500E0 +:1005400015001500047806520B6A26520B6A5C9A55 +:1005500015001500047806520A6A26520A6A5C9A47 +:10056000150004780652096A2652096A097C286A2D +:10057000077F0000DB57004D057DFAC1DB571B9A52 +:1005800077C254040AC2189AE3C1DB57F352056AD2 +:10059000FB568E02941AC36AC8626902247D941EB7 +:1005A000C36ED36EC8624802C86A9426981EC36E92 +:1005B000D36EC8624C02C86A9826C36E981EC36E7A +:1005C000C8629826C36E6002097CC8626E02247DF0 +:1005D000096A1E7F0125004D257D849A286A187FAF +:1005E00004627AC2B89AE36E8F00D805017D8D004F +:1005F000A005C8626E02107D096A0A7F0120F97C9D +:10060000286A067F0000004D0D7DFAC1DB576E9A07 +:10061000070004620C6AB59A286AFA7F04627AC2FB +:1006200058045404286AF47F0AC26B9AD9C1E3C102 +:10063000DB57F352056AFB568E02941A0252690286 +:100640001D7D941E06524802065A9426981E065294 +:100650004C02065A9826981E065260020A7C98267A +:1006600006526E02237D096A1D7F0125004D247DFF +:10067000D19A286A177F04627AC2029B8F00D8053C +:10068000017D8D00A00506526E02107D096A0A7F69 +:100690000120F97C286A067F0000004D0D7DFAC11B +:1006A000DB57C19A070004620C6AFF9A286AFA7F36 +:1006B00004627AC258045404286AF47F0AC2BE9ABB +:1006C000016E0B612F7E0B622D7E0B632B7E0C0D5A +:1006D0001704170417049D04081DCC05017C0C0D9C +:1006E000D16A000F4207C86FDD6F1C7F8E009D002E +:1006F00001680B67177ED56B04080278C86F120774 +:10070000117C0B670F7E04080278C86F12070A7C01 +:10071000DD6F087FD169010FC86FDD6F037F0101B5 
+:0E0720000004129B0700FF680C680002129B89 +:00000001FF
commit 870291e6b0019ff0a9a135f2641c7801b36a80ef
Author: Nicolin Chen nicoleotsuka@gmail.com
Date:   Thu Sep 4 21:42:35 2014 -0700
firmware: imx: sdma v2
In the upstream kernel, the i.MX6 series uses SDMA firmware V2. However, we actually don't have strict version control internally, so this patch simply hacks the firmware to change the version number to V2.
Signed-off-by: Nicolin Chen nicoleotsuka@gmail.com
diff --git a/firmware/imx/sdma/sdma-imx6q.bin.ihex b/firmware/imx/sdma/sdma-imx6q.bin.ihex
index 2e561f0..ceb114b 100644
--- a/firmware/imx/sdma/sdma-imx6q.bin.ihex
+++ b/firmware/imx/sdma/sdma-imx6q.bin.ihex
@@ -1,4 +1,4 @@
-:1000000053444D4101000000010000001C000000AD
+:1000000053444D4102000000010000001C000000AC
 :1000100026000000B40000007A0600008202000002
 :10002000FFFFFFFF00000000FFFFFFFFFFFFFFFFDC
 :10003000FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD0
On Thu, Oct 29, 2015 at 11:36 AM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Thu, Oct 29, 2015 at 08:54:00AM -0700, Caleb Crome wrote:
The 35 and 36 are for dual FIFO mode only, and no current system (with fsl_ssi.c anyway) uses dual FIFO mode. How do I know? Because it's definitely broken in fsl_ssi.c. I was just about to report that bug.
Broken? Where?
Sorry, my bad, the only broken part was my understanding :-) I discovered that a few minutes after I posted the comment.
On Thu, Oct 29, 2015 at 04:37:35PM +0100, Roberto Fichera wrote:
Regarding my SSI problem, I was able to keep the DMA working for few second once before it get stopped and never retriggered. Currently I've 2 DMA channel one for TX and another for rx
DMA only stops when terminate_all() gets executed, or when the FIFO doesn't reach the watermark, so that no new DMA request is issued.
I've changed my DTS and update my fsl_ssi to handle new clocks, I guess only the CLK_SPBA has improved my situation. I've also tried to enable both RIE and TIE to service the ISR, with
Guessing? It'd be weird for SPBA to ease the issue here, as I was told by the IC team that SSI and SAI in SoloX don't require the SPBA clock, IIRC.
and without SSI DMA support, but this ends with a full system freeze. The ISR was never changed in my fsl_ssi.c.
You mentioned that the clock status from the codec chip shows the bit clock stops, but now it's related to DMA? I think you should first figure out where the problem lies, as Caleb's problem is different from yours.
As I mentioned, you may need to confirm whether the bit clock generation has stopped. DMA surely won't work once the bit clock stops, as the SSI no longer consumes the data FIFO, so the watermark would never be reached again.
Nicolin
On Thu, Oct 29, 2015 at 07:55:59AM -0700, Caleb Crome wrote:
Therefore, if setting the watermark to 8, each FIFO has 7 (15 - 8) slots of space left, so the largest safe burst size could actually be 14 (7 * 2).
Oh, does this depend on the data size? I'm using 16-bit data, so I guess the bursts are measured in 2 byte units? Does this mean that the burst size should be dynamically adjusted depending on word size (I guess done in hw_params)?
You don't need to do that. It's already taken care of in the DMA code.
Okay, so wm=8 and maxburst=14 definitely does not work at all. wm=8, maxburst=8 works okay, but still not perfect.
Make sure you are using dual FIFO configurations for both SDMA and SSI. You can refer to my change below (I've tested it with a two-channel test case played at 44.1 kHz, 16-bit):
diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
index b8a5056..f4c7308 100644
--- a/arch/arm/boot/dts/imx6sx.dtsi
+++ b/arch/arm/boot/dts/imx6sx.dtsi
@@ -307,7 +307,7 @@
 			clocks = <&clks IMX6SX_CLK_SSI1_IPG>,
 				 <&clks IMX6SX_CLK_SSI1>;
 			clock-names = "ipg", "baud";
-			dmas = <&sdma 37 1 0>, <&sdma 38 1 0>;
+			dmas = <&sdma 37 22 0>, <&sdma 38 22 0>;
 			dma-names = "rx", "tx";
 			fsl,fifo-depth = <15>;
 			status = "disabled";
@@ -321,7 +321,7 @@
 			clocks = <&clks IMX6SX_CLK_SSI2_IPG>,
 				 <&clks IMX6SX_CLK_SSI2>;
 			clock-names = "ipg", "baud";
-			dmas = <&sdma 41 1 0>, <&sdma 42 1 0>;
+			dmas = <&sdma 41 22 0>, <&sdma 42 22 0>;
 			dma-names = "rx", "tx";
 			fsl,fifo-depth = <15>;
 			status = "disabled";
@@ -335,7 +335,7 @@
 			clocks = <&clks IMX6SX_CLK_SSI3_IPG>,
 				 <&clks IMX6SX_CLK_SSI3>;
 			clock-names = "ipg", "baud";
-			dmas = <&sdma 45 1 0>, <&sdma 46 1 0>;
+			dmas = <&sdma 45 22 0>, <&sdma 46 22 0>;
 			dma-names = "rx", "tx";
 			fsl,fifo-depth = <15>;
 			status = "disabled";
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 674abf7..7cfe661 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -1002,8 +1002,8 @@ static int _fsl_ssi_set_dai_fmt(struct device *dev,
 	wm = ssi_private->fifo_depth;

 	regmap_write(regs, CCSR_SSI_SFCSR,
-			CCSR_SSI_SFCSR_TFWM0(wm) | CCSR_SSI_SFCSR_RFWM0(wm) |
-			CCSR_SSI_SFCSR_TFWM1(wm) | CCSR_SSI_SFCSR_RFWM1(wm));
+			CCSR_SSI_SFCSR_TFWM0(8) | CCSR_SSI_SFCSR_RFWM0(8) |
+			CCSR_SSI_SFCSR_TFWM1(8) | CCSR_SSI_SFCSR_RFWM1(8));

 	if (ssi_private->use_dual_fifo) {
 		regmap_update_bits(regs, CCSR_SSI_SRCR, CCSR_SSI_SRCR_RFEN1,
@@ -1322,8 +1322,9 @@ static int fsl_ssi_imx_probe(struct platform_device *pdev,
 		/* When using dual fifo mode, we need to keep watermark
 		 * as even numbers due to dma script limitation.
 		 */
-		ssi_private->dma_params_tx.maxburst &= ~0x1;
-		ssi_private->dma_params_rx.maxburst &= ~0x1;
+		dev_info(&pdev->dev, "tunning burst size for Dual FIFO mode\n");
+		ssi_private->dma_params_tx.maxburst = 16;
+		ssi_private->dma_params_rx.maxburst = 16;
 	}

 	if (!ssi_private->use_dma) {
And make sure you have the dev_info() printed out in your console. (The 22 in the DT is what selects the Dual FIFO SDMA script.)
It started with:
frame 0: 00, 00, 10, 20, 30, 40, 50, 60, 70, 80, 90, a0, b0, c0, d0, e0
frame 1: f0, 01, 11, 21, 31, 41, 51, 61, 71, 81, 91, a1, b1, c1, d1, e1
If this happens, just try to let SDMA start working before the SSI begins to read the FIFOs -- enabling TDMAE or RDMAE before enabling TE or RE:
@@ -1093,6 +1093,15 @@ static int fsl_ssi_trigger(struct snd_pcm_substream *substream, int cmd,
 	case SNDRV_PCM_TRIGGER_RESUME:
 	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
 		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+			regmap_update_bits(regs, CCSR_SSI_SIER,
+					CCSR_SSI_SIER_TDMAE,
+					CCSR_SSI_SIER_TDMAE);
+		else
+			regmap_update_bits(regs, CCSR_SSI_SIER,
+					CCSR_SSI_SIER_RDMAE,
+					CCSR_SSI_SIER_RDMAE);
+
+		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
 			fsl_ssi_tx_config(ssi_private, true);
 		else
 			fsl_ssi_rx_config(ssi_private, true);
Nicolin
On Thu, Oct 29, 2015 at 06:44:12AM -0700, Caleb Crome wrote:
Reported by user space? Are you saying that's an ALSA underrun in user space, not a hardware underrun reported by the IRQ in the driver? They are quite different: an ALSA underrun comes from the DMA buffer getting underrun, while the other one results from FIFO feeding efficiency. For an ALSA underrun, enlarging the playback period size and period count will ease the problem:
Sometimes they happen at the same time. So, I run aplay, and all is
Does 'they' mean ALSA underrun + hardware underrun, or ALSA underrun + channel slip? It's not quite logical for a channel slip to result from an ALSA underrun, as it should restart by calling the trigger() functions in the DAI drivers, IIRC.
fine. Then the user space app will underrun, and then I look at the scope, and the channels have slipped. So somehow the start/restart after the underrun is not always perfect I guess.
Is there any mechanism for the DMA fifo underruns to be reported back to user space? There certainly should be, because the consequences
No. The official Freescale release tree has a reset procedure applied on ESAI underrun, but not for SSI. You may want to refer to that, though.
are catastrophic, yet the user space app goes on as if everything is just great. Much, much worse than the underrun that is reported (i.e. a skip in audio is bad but sometimes tolerable. A channel slip is permanent and absolutely intolerable).
STARTUP ISSUE SOLVED (INELEGANTLY)
:-)
On Thu, Oct 29, 2015 at 10:19 AM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Thu, Oct 29, 2015 at 06:44:12AM -0700, Caleb Crome wrote:
Reported by user space? Are you saying that's an ALSA underrun in the user space, not a hardware underrun reported by the IRQ in the driver? They are quite different. ALSA underrun comes from the DMA buffer gets underrun while the other one results from FIFO feeding efficiency. For ALSA underrun, enlarging the playback period size and period number will ease the problem:
Sometimes they happen at the same time. So, I run aplay, and all is
The 'they' is indicating ALSA underrun + hardware underrun or ALSA underrun + channel slip?
Exactly, they tend to come together, but either can come without the other.
It's not quite logical for a channel slip resulting from an ALSA underrun as it should restart by calling the trigger() functions in DAI drivers IIRC.
This actually is exactly what I'm seeing now: I'm seeing the *startup* initiated by the trigger come up already slipped. So this does make perfect sense to me.
I am playing a very short ramp.wav file and seeing how often it starts up 'slipped' with the extra 0. It started up incorrectly 20 out of 300 trials. So, the startup is failing 7% of the time.
It occurred to me that perhaps the problem has to do with exactly when during the frame-sync period the fsl_ssi_trigger function is called. Perhaps, if it's called near the end or beginning of a frame, something gets messed up. (The docs for the SCR register imply some of this, but they talk about either 2 or 6 bit clocks, so I'd expect the error rate to be lower than 7%, more like 2.5%.)
So, I implemented a really inelegant patch to synchronize the trigger with the frame sync signal, and I got ZERO errors out of 500 trials! This seems to have nailed the startup problem!
In addition, I have run about 20 minutes of audio with no slips or problems, even though there have been aplay underruns. This is a major step forward for me :-)
The idea is to enable the SSI before enabling DMA, then wait for a frame sync by polling. Once I get the frame sync, I disable the SSI and let the trigger function continue.
How should this be done properly?
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 73778c2..8cd8284 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -328,6 +328,41 @@ static void fsl_ssi_rxtx_config(struct fsl_ssi_private *ssi_private,
 	}
 }

+/*
+ * wait for a frame sync. do this by enabling the SSI,
+ * then waiting for sync to happen, then disabling the SSI
+ * and put it back to the state it was at first
+ */
+static void wait_for_tfs(struct regmap *regs)
+{
+	u32 tfs;
+	u32 scr;
+	int maxcount = 100000;
+
+	regmap_read(regs, CCSR_SSI_SCR, &scr);
+	regmap_update_bits(regs, CCSR_SSI_SCR, 0x3, 0x3);
+	while (maxcount--) {
+		/* clear TFS bit */
+		regmap_update_bits(regs, CCSR_SSI_SISR, CCSR_SSI_SISR_TFS, 0);
+		regmap_read(regs, CCSR_SSI_SISR, &tfs);
+		if ((tfs & CCSR_SSI_SISR_TFS) == 0)
+			break; /* tfs went to 0 */
+	}
+	if (maxcount < 0) {
+		printk(KERN_INFO "timed out 1, sisr = 0x%08x\n", tfs);
+	}
+	maxcount = 100000;
+	while (maxcount--) {
+		/* waiting for tfs to go to 1. */
+		regmap_read(regs, CCSR_SSI_SISR, &tfs);
+		if (tfs & CCSR_SSI_SISR_TFS)
+			break; /* tfs went to 1 */
+	}
+	if (maxcount < 0) {
+		printk(KERN_INFO "timed out 2\n");
+	}
+	regmap_write(regs, CCSR_SSI_SCR, scr);
+}
+
 /*
  * Calculate the bits that have to be disabled for the current stream that is
  * getting disabled. This keeps the bits enabled that are necessary for the
@@ -360,7 +395,10 @@ static void fsl_ssi_config(struct fsl_ssi_private *ssi_private, bool enable,
 	int nr_active_streams;
 	u32 scr_val;
 	int keep_active;
-
+	wait_for_tfs(regs); /* synchronize with the start of a frame
+			     * to get done with this function well
+			     * before the end of a frame
+			     */
 	regmap_read(regs, CCSR_SSI_SCR, &scr_val);
 	nr_active_streams = !!(scr_val & CCSR_SSI_SCR_TE) +
@@ -943,9 +980,9 @@ static int _fsl_ssi_set_dai_fmt(struct device *dev,
 	 * size.
 	 */
 	if (ssi_private->use_dma)
-		wm = ssi_private->fifo_depth - 2;
+		wm = 8;
 	else
-		wm = ssi_private->fifo_depth;
+		wm = 8;
 	regmap_write(regs, CCSR_SSI_SFCSR,
 		     CCSR_SSI_SFCSR_TFWM0(wm) | CCSR_SSI_SFCSR_RFWM0(wm) |
@@ -1260,8 +1297,8 @@ static int fsl_ssi_imx_probe(struct platform_device *pdev,
 	 * We have burstsize be "fifo_depth - 2" to match the SSI
 	 * watermark setting in fsl_ssi_startup().
 	 */
-	ssi_private->dma_params_tx.maxburst = ssi_private->fifo_depth - 2;
-	ssi_private->dma_params_rx.maxburst = ssi_private->fifo_depth - 2;
+	ssi_private->dma_params_tx.maxburst = 8;
+	ssi_private->dma_params_rx.maxburst = 8;
 	ssi_private->dma_params_tx.addr = ssi_private->ssi_phys + CCSR_SSI_STX0;
 	ssi_private->dma_params_rx.addr = ssi_private->ssi_phys + CCSR_SSI_SRX0;
fine. Then the user space app will underrun, and then I look at the scope, and the channels have slipped. So somehow the start/restart after the underrun is not always perfect I guess.
Is there any mechanism for the DMA fifo underruns to be reported back to user space? There certainly should be, because the consequences
No. The release from Freescale official tree has a reset procedure applied to ESAI underrun but not SSI but I guess you may want to refer to that.
Ooh, that can be a problem. Maybe I'll take a look. But for the moment, it appears that, so far, for now, the system is working.
-Caleb
On Thu, Oct 29, 2015 at 12:06:16PM -0700, Caleb Crome wrote:
This actually is exactly what I'm seeing now. I'm seeing the *startup* happening from the trigger starting up slipped. So this does make perfect sense to me.
I saw your problem in the other reply, and I suggested letting DMA work first, before SSI gets enabled. SDMA in that case would transfer one burst length (16, if you applied the patch I sent you) and pause until SSI gets enabled. Then SSI would have enough data to send out without any startup issue.
It occurred to me that perhaps the problem has to do when exactly when during the frame-sync period the fsl_ssi_trigger function was called. Perhaps, if it's called near the end or beginning of a frame, somehow
I don't know how you measured whether it's before or after. But the frame should not start until trigger() gets called -- more precisely, until SSIEN and TE get enabled. From my point of view, your problem should be caused by SSI getting enabled without enough data in the FIFO. And that's what I just described in the previous paragraph and previous reply.
something gets messed up. (The docs for the SCR register imply some of this, but it talks about either 2 or 6 bit clocks, so I'd expect the error rate to be lower than 7% (more like 2.5%).
In addition, I have run about 20 minutes of audio with no slips or problems, even though there have been aplay underruns. This is a major step forward for me :-)
It'd be better to avoid user space ALSA underruns as they may skip some data.
On Thu, Oct 29, 2015 at 12:28 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Thu, Oct 29, 2015 at 12:06:16PM -0700, Caleb Crome wrote:
This actually is exactly what I'm seeing now. I'm seeing the *startup* happening from the trigger starting up slipped. So this does make perfect sense to me.
I saw your problem in the other reply. And I suggested you to let DMA work first before SSI gets enabled. As SDMA in that case would transfer one burst length (16 if you applied my patch I sent you) and pause before SSI gets enabled. Then SSI would have enough data to send out without any startup issue.
Ah ha, you are exactly right. The root cause is that TE and SSIE are enabled at the same regmap write, with no opportunity for delay between the SSIE and TE. DMA can only get going if SSIE is enabled, and the only place SSIE gets enabled is exactly the same line that TE gets enabled.
specifically: regmap_update_bits(regs, CCSR_SSI_SCR, vals->scr, vals->scr);
I've looked over your emails and I don't see the patch that shows a pause between SSIE enable and TE enable. (I do see the dual-fifo example -- thank you! I'll give that a try -- it may further reduce stress on the system).
Here is a patch that solves the issue much more elegantly than my previous one:

diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 73778c2..0bb5e52 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -435,8 +475,27 @@ static void fsl_ssi_config(struct fsl_ssi_private *ssi_private, bool enable,

 config_done:
 	/* Enabling of subunits is done after configuration */
-	if (enable)
+	if (enable) {
+		/*
+		 * don't enable TE/RE and SSIEN at the same time.
+		 * enable SSIEN, but delay enabling of TE to
+		 * allow time for DMA buffer to fill.
+		 */
+		u32 mask = vals->scr & ~CCSR_SSI_SCR_TE;
+		if (mask != vals->scr) {
+			/* enabling TE in this call. don't enable it
+			 * until some delay after SSIE gets
+			 * enabled. */
+			if (vals->scr & ~CCSR_SSI_SCR_TE) {
+				regmap_update_bits(regs, CCSR_SSI_SCR,
+						   mask, vals->scr);
+				udelay(50); /* give the DMA a chance
+					     * to fill the TX buffer
+					     * after SSIE is enabled. */
+			}
+		}
 		regmap_update_bits(regs, CCSR_SSI_SCR, vals->scr, vals->scr);
+	}
 }
It occurred to me that perhaps the problem has to do when exactly when during the frame-sync period the fsl_ssi_trigger function was called. Perhaps, if it's called near the end or beginning of a frame, somehow
I don't know how you measured if it's before of after. But the frame should not start until trigger() gets call -- more clearly SSIEN and TE get enabled. From my point of view, you problem should be caused by SSI getting enabled without enough data in the FIFO. And that's what I just described in the previous paragraph and previous reply.
Yep, that sure seems to be it. This patch above never seems to have a bad start. Is adding the udelay the best way to put a delay between SSIE and TE enable? Are there any other mechanisms for that?
Thanks so much for your attention and help! I think I can finally move forward with the MX6 on a bunch of projects now :-)
(well, I still have to test Rx and verify full dulex perfection there too, but this is a great start)
-Caleb
On Thu, Oct 29, 2015 at 03:23:41PM -0700, Caleb Crome wrote:
I saw your problem in the other reply. And I suggested you to let DMA work first before SSI gets enabled. As SDMA in that case would transfer one burst length (16 if you applied my patch I sent you) and pause before SSI gets enabled. Then SSI would have enough data to send out without any startup issue.
Ah ha, you are exactly right. The root cause is that TE and SSIE are enabled at the same regmap write, with no opportunity for delay between the SSIE and TE. DMA can only get going if SSIE is enabled, and the only place SSIE gets enabled is exactly the same line that TE gets enabled.
A little difference between your point and mine: you think the DMA request only starts when SSIEN and TDMAE both get set, while I only think about TDMAE. It's hard to say which one is correct as it depends on the design of the IP wrapper, but you can fairly easily test it with your change below: mask TE together with SSIEN and set them after the delay. If it doesn't work, yours is the correct one.
I've looked over your emails and I don't see the patch that shows a
You may need to open an offline email that I sent you with patches in its attachment. I can see it via Gmail anyway.
pause between SSIE enable and TE enable. (I do see the dual-fifo example -- thank you! I'll give that a try -- it may further reduce stress on the system).
I'm sure dual FIFO will get better performance. But the example I gave you doesn't set RX parameters so well. You may need to fine tune it later.
Is adding the udelay the best way to put a delay between SSIE and TE enable? Are there any other mechanisms for that?
Having a delay is much safer for you, but surely it's not a common practice that suits all other platforms, such as two-channel cases and those that need performance.
I encourage you to try to follow one of patches I gave you that sets TDMAE/RDMAE at the beginning of the trigger(). Surely you may change it to TDMAE | SSIE after you find out that SSIE is indeed required. If you are still having trouble, adding a delay would be nice for you but it may be hard for me to ack it if you want to merge it in the driver.
Nicolin
On Thu, Oct 29, 2015 at 3:47 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Thu, Oct 29, 2015 at 03:23:41PM -0700, Caleb Crome wrote:
I saw your problem in the other reply. And I suggested you to let DMA work first before SSI gets enabled. As SDMA in that case would transfer one burst length (16 if you applied my patch I sent you) and pause before SSI gets enabled. Then SSI would have enough data to send out without any startup issue.
Ah ha, you are exactly right. The root cause is that TE and SSIE are enabled at the same regmap write, with no opportunity for delay between the SSIE and TE. DMA can only get going if SSIE is enabled, and the only place SSIE gets enabled is exactly the same line that TE gets enabled.
A little difference between your point and mine is that you think DMA request only starts when SSIE and TDMAE both get set while I only think about TDMAE. It's hard to say which one is correct as it depends on the design of IP wrapper but you can fairly test it with your change below: Mask both TE with SSIE and set them after the delay. If it doesn't work, yours is the correct one.
Ah, that's one thing that's very clear in the FSL datasheet: the FIFOs are ZEROED if SSIE is 0. This means that even if the DMA were trying to dump data in before SSIE is enabled, the data would go to bit heaven.
The docs for TE say, "The normal transmit enable sequence is to write data to the STX register(s) and then set the TE bit." (page 5145 of IMX6SDLRM.pdf)
So in the DMA + fifo case the words, "write data to the STX register(s)" imply that it's actually DMA writing to FIFOs, which then write the STX register. So, the sequence must be: enable SSIE & TDMAE to allow DMA to write to the fifo, then later enable TE, right?
I've looked over your emails and I don't see the patch that shows a
You may need to open an offline email that I sent you with patches in its attachment. I can see it via Gmail anyway.
Will do. Thanks.
pause between SSIE enable and TE enable. (I do see the dual-fifo example -- thank you! I'll give that a try -- it may further reduce stress on the system).
I'm sure dual FIFO will get better performance. But the example I gave you doesn't set RX parameters so well. You may need to fine tune it later.
Is adding the udelay the best way to put a delay between SSIE and TE enable? Are there any other mechanisms for that?
Having a delay is much safer for you but surely it's not a common practice that's best all other platforms such as two-channel cases and those who needs performance.
I encourage you to try to follow one of patches I gave you that sets TDMAE/RDMAE at the beginning of the trigger(). Surely you may change it to TDMAE | SSIE after you find out that SSIE is indeed required. If you are still having trouble, adding a delay would be nice for you but it may be hard for me to ack it if you want to merge it in the driver.
Now I see your patch! Okay, I'll give that a go, but it's still just a race condition between the regmap_update_bits with TDMAE (your patch) versus the regmap_update_bits from fsl_ssi_config. You're just hoping that a DMA write happens between TDMAE and the end of fsl_ssi_config, where TE is enabled.
Now I think I get it though. We do TDMAE + SSIEN like your patch, then a short while loop on SFCSR.TFCNT0. After the first word gets written to the FIFO, TFCNT0 should go > 0, and then we can release TE.
There may be a better status register to wait on but TFCNT0 seems like it will do the trick.
What do you think of that solution? Any better register to wait on? Would that be acceptable to merge into the driver?
Thanks, -Caleb
Nicolin
On Thu, Oct 29, 2015 at 04:33:26PM -0700, Caleb Crome wrote:
A little difference between your point and mine is that you think DMA request only starts when SSIE and TDMAE both get set while I only think about TDMAE. It's hard to say which one is correct as it depends on the design of IP wrapper but you can fairly test it with your change below: Mask both TE with SSIE and set them after the delay. If it doesn't work, yours is the correct one.
Ah, that's one thing that's very clear in the FSL datasheet: the FIFOs are ZEROED if SSIE is 0. This means that even if the DMA were trying to dump data in before SSIE is enabled, the data would go to bit heaven.
The docs for TE say, "The normal transmit enable sequence is to write data to the STX register(s) and then set the TE bit." (page 5145 of IMX6SDLRM.pdf)
So in the DMA + fifo case the words, "write data to the STX register(s)" imply that it's actually DMA writing to FIFOs, which then write the STX register. So, the sequence must be: enable SSIE & TDMAE to allow DMA to write to the fifo, then later enable TE, right?
You have the point. If SSIEN is being treated as the reset signal internally, any write enable signal could be ignored.
I encourage you to try to follow one of patches I gave you that sets TDMAE/RDMAE at the beginning of the trigger(). Surely you may change it to TDMAE | SSIE after you find out that SSIE is indeed required. If you are still having trouble, adding a delay would be nice for you but it may be hard for me to ack it if you want to merge it in the driver.
I now I see your patch! Okay, I'll give that a go, but it's still just a race condition between the regmap_update_bits with TDMAE (your patch) verses the regmap_update_bits from fsl_ssi_config. You're just hoping that a DMA write happens between TDMAE and the end of fsl_ssi_config where TE is enabled.
DMA transaction will be issued once BD is ready (in SDMA driver) and SSI sends a DMA request. So I'm hoping that the context latency between the regmap_update_bits() and TE setting should be enough for DMA to fill the FIFO.
Now I think I get it though. We do TMDAE + SSIEN like your patch, then a short while loop on SFCSR.TFCNT0. After the first word gets written to the fifo, TFCNT0 should go > 0, and then we can release TE.
There may be a better status register to wait on but TFCNT0 seems like it will do the trick.
Waiting for TFCNT0 sounds reasonable to me as long as the code is well commented.
Hi,
Le 30/10/2015 02:29, Nicolin Chen a écrit :
On Thu, Oct 29, 2015 at 04:33:26PM -0700, Caleb Crome wrote:
A little difference between your point and mine is that you think DMA request only starts when SSIE and TDMAE both get set while I only think about TDMAE. It's hard to say which one is correct as it depends on the design of IP wrapper but you can fairly test it with your change below: Mask both TE with SSIE and set them after the delay. If it doesn't work, yours is the correct one.
Ah, that's one thing that's very clear in the FSL datasheet: the FIFOs are ZEROED if SSIE is 0. This means that even if the DMA were trying to dump data in before SSIE is enabled, the data would go to bit heaven.
The docs for TE say, "The normal transmit enable sequence is to write data to the STX register(s) and then set the TE bit." (page 5145 of IMX6SDLRM.pdf)
So in the DMA + fifo case the words, "write data to the STX register(s)" imply that it's actually DMA writing to FIFOs, which then write the STX register. So, the sequence must be: enable SSIE & TDMAE to allow DMA to write to the fifo, then later enable TE, right?
You have the point. If SSIEN is being treated as the reset signal internally, any write enable signal could be ignored.
I encourage you to try to follow one of patches I gave you that sets TDMAE/RDMAE at the beginning of the trigger(). Surely you may change it to TDMAE | SSIE after you find out that SSIE is indeed required. If you are still having trouble, adding a delay would be nice for you but it may be hard for me to ack it if you want to merge it in the driver.
I now I see your patch! Okay, I'll give that a go, but it's still just a race condition between the regmap_update_bits with TDMAE (your patch) verses the regmap_update_bits from fsl_ssi_config. You're just hoping that a DMA write happens between TDMAE and the end of fsl_ssi_config where TE is enabled.
DMA transaction will be issued once BD is ready (in SDMA driver) and SSI sends a DMA request. So I'm hoping that the context latency between the regmap_update_bits() and TE setting should be enough for DMA to fill the FIFO.
Now I think I get it though. We do TMDAE + SSIEN like your patch, then a short while loop on SFCSR.TFCNT0. After the first word gets written to the fifo, TFCNT0 should go > 0, and then we can release TE.
There may be a better status register to wait on but TFCNT0 seems like it will do the trick.
Waiting for TFCNT0 sounds reasonable to me as long as the code is well commented.
Back in the i.MX50 days, I remember one workaround was to fill the FIFO manually, writing directly a number of samples (equal to the number of slots in one frame, to keep the synchronization), and then enabling TDMAE. This just avoids having to wait an undefined period of time for the DMA to be ready. But, on the other hand, if the time to wait for the DMA is short enough, it should not be an issue.
Regards, Arnaud
Hi again,
Le 30/10/2015 09:29, arnaud.mouiche@invoxia.com a écrit :
Hi,
Le 30/10/2015 02:29, Nicolin Chen a écrit :
On Thu, Oct 29, 2015 at 04:33:26PM -0700, Caleb Crome wrote:
A little difference between your point and mine is that you think DMA request only starts when SSIE and TDMAE both get set while I only think about TDMAE. It's hard to say which one is correct as it depends on the design of IP wrapper but you can fairly test it with your change below: Mask both TE with SSIE and set them after the delay. If it doesn't work, yours is the correct one.
Ah, that's one thing that's very clear in the FSL datasheet: the FIFOs are ZEROED if SSIE is 0. This means that even if the DMA were trying to dump data in before SSIE is enabled, the data would go to bit heaven.
The docs for TE say, "The normal transmit enable sequence is to write data to the STX register(s) and then set the TE bit." (page 5145 of IMX6SDLRM.pdf)
So in the DMA + fifo case the words, "write data to the STX register(s)" imply that it's actually DMA writing to FIFOs, which then write the STX register. So, the sequence must be: enable SSIE & TDMAE to allow DMA to write to the fifo, then later enable TE, right?
You have the point. If SSIEN is being treated as the reset signal internally, any write enable signal could be ignored.
I encourage you to try to follow one of patches I gave you that sets TDMAE/RDMAE at the beginning of the trigger(). Surely you may change it to TDMAE | SSIE after you find out that SSIE is indeed required. If you are still having trouble, adding a delay would be nice for you but it may be hard for me to ack it if you want to merge it in the driver.
I now I see your patch! Okay, I'll give that a go, but it's still just a race condition between the regmap_update_bits with TDMAE (your patch) verses the regmap_update_bits from fsl_ssi_config. You're just hoping that a DMA write happens between TDMAE and the end of fsl_ssi_config where TE is enabled.
DMA transaction will be issued once BD is ready (in SDMA driver) and SSI sends a DMA request. So I'm hoping that the context latency between the regmap_update_bits() and TE setting should be enough for DMA to fill the FIFO.
Now I think I get it though. We do TMDAE + SSIEN like your patch, then a short while loop on SFCSR.TFCNT0. After the first word gets written to the fifo, TFCNT0 should go > 0, and then we can release TE.
There may be a better status register to wait on but TFCNT0 seems like it will do the trick.
Waiting for TFCNT0 sounds reasonable to me as long as the code is well commented.
At imx50 age, I remember one workaround was to fill the fifo manually, writing directly a number of samples (equal to the number of slots for one frame to keep the synchronization), and then, enable the TMDAE. This just allow to not have to wait an undefined period of time for the DMA to be ready. But, on the other hand, if the time to wait the DMA is short enough, it should not be an issue.
Regards, Arnaud
Along the same lines, there were other similar issues to deal with concerning the RX and TX FIFOs.
1) Samples still left in the TX FIFO when stopping/re-starting TX while the RX stream is going on. Since we can't reset the TX FIFO content without disabling SSIEN, samples already queued by the TX DMA may still be there when the TX stream stops. When we start it again, they introduce a random de-synchronization of the output. The workaround for this case was to manually add zero samples to the FIFO until reaching a multiple of the frame size. But I would prefer a way to empty the FIFO manually instead. Perhaps Freescale can help us find another way, as they know the internals of the SSI...
2) The same for the RX FIFO, if the RX stream is stopped/restarted while the TX stream keeps running. We may still have some samples in the RX FIFO, and those must be removed before starting RX again. This case was simpler, since we only need to read the RX register manually until the FIFO is empty, before enabling the DMA.
Obviously, disabling SSIEN completely to start from a clean state is not possible, since we would lose a random number of samples already present in the FIFO of the stream we don't want to stop, and that number of samples may not be a multiple of the slot count.
Arnaud
Add Shengjiu.
On Fri, Oct 30, 2015 at 09:45:32AM +0100, arnaud.mouiche@invoxia.com wrote:
At imx50 age, I remember one workaround was to fill the fifo manually, writing directly a number of samples (equal to the number of slots for one frame to keep the synchronization), and then, enable the TMDAE. This just allow to not have to wait an undefined period of time for the DMA to be ready. But, on the other hand, if the time to wait the DMA is short enough, it should not be an issue.
In the same idea, they were other similar issues to deal with concerning the RX and TX fifo.
- Still some samples in the TX fifo when stoping/ re-starting the
TX, while RX stream is going on. Since we can't reset the TX fifo content without disabling SSIEN, possible samples filled by the TX DMA are still there when the TX stream stops. And when we start it again, they introduce a random de-synchronization of the output. The workaround for this case was to add additional zero samples in the fifo manually to reach a multiple of the frame size. But I would prefer a way to empty manually the fifo instead. If Freescale can help us to find another way as they know the internal of the SSI...
- the same for RX fifo, if the RX stream is stopped/-restarted,
while TX stream is not stopped. We may still have some samples in the RX fifo, and those fifo must be removed before starting the RX again. This was more simple in this case, since we only need to read the RX register manually until the fifo is empty, before enabling the DMA.
Obviously, disabling the SSIEN completely to start on good basis is not possible since we will lose a random number of samples already present in the fifo corresponding to the stream we don't want to stop, and this number of samples may not be a multiple of slots.
You are right. Since the SSI doesn't have separate FIFO reset bits for TX and RX, these could be issues.
On Fri, Oct 30, 2015 at 09:29:02AM +0100, arnaud.mouiche@invoxia.com wrote:
At imx50 age, I remember one workaround was to fill the fifo manually, writing directly a number of samples (equal to the number of slots for one frame to keep the synchronization), and then, enable the TMDAE. This just allow to not have to wait an undefined period of time for the DMA to be ready. But, on the other hand, if the time to wait the DMA is short enough, it should not be an issue.
Nice input. This reminds me of the zero-filling step inside the ESAI startup procedure:
	case SNDRV_PCM_TRIGGER_START:
	case SNDRV_PCM_TRIGGER_RESUME:
	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
		regmap_update_bits(esai_priv->regmap, REG_ESAI_xFCR(tx),
				   ESAI_xFCR_xFEN_MASK, ESAI_xFCR_xFEN);

		/* Write initial words reqiured by ESAI as normal procedure */
		for (i = 0; tx && i < channels; i++)
			regmap_write(esai_priv->regmap, REG_ESAI_ETDR, 0x0);

		regmap_update_bits(esai_priv->regmap, REG_ESAI_xCR(tx),
				   tx ? ESAI_xCR_TE_MASK : ESAI_xCR_RE_MASK,
				   tx ? ESAI_xCR_TE(pins) : ESAI_xCR_RE(pins));
		break;
It's exactly the same thing to prevent underrun. This might be a reasonable alternative option due to no polling overhead and timeout handling.
Thanks Nicolin
On Fri, Oct 30, 2015 at 8:49 AM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Fri, Oct 30, 2015 at 09:29:02AM +0100, arnaud.mouiche@invoxia.com wrote:
At imx50 age, I remember one workaround was to fill the fifo manually, writing directly a number of samples (equal to the number of slots for one frame to keep the synchronization), and then, enable the TMDAE. This just allow to not have to wait an undefined period of time for the DMA to be ready. But, on the other hand, if the time to wait the DMA is short enough, it should not be an issue.
Nice input. This reminds me of the zero-filling step inside the ESAI startup procedure:
	case SNDRV_PCM_TRIGGER_START:
	case SNDRV_PCM_TRIGGER_RESUME:
	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
		regmap_update_bits(esai_priv->regmap, REG_ESAI_xFCR(tx),
				   ESAI_xFCR_xFEN_MASK, ESAI_xFCR_xFEN);

		/* Write initial words reqiured by ESAI as normal procedure */
		for (i = 0; tx && i < channels; i++)
			regmap_write(esai_priv->regmap, REG_ESAI_ETDR, 0x0);

		regmap_update_bits(esai_priv->regmap, REG_ESAI_xCR(tx),
				   tx ? ESAI_xCR_TE_MASK : ESAI_xCR_RE_MASK,
				   tx ? ESAI_xCR_TE(pins) : ESAI_xCR_RE(pins));
		break;
It's exactly the same thing to prevent underrun. This might be a reasonable alternative option due to no polling overhead and timeout handling.
Thanks Nicolin
Interesting, but in the SSI case, the # channels can easily be larger than the fifo size. For example, I'm using 16 channels, and the fifo size is only 15. Of course, with the dual fifo enabled, then this restriction is eased to a maximum # of slots of 30.
I'll do a quick check this morning with Nicolin's suggested patch, and see how many loops it needs to poll before the fifo gets a word. My suspicion is that Nicolin's hunch is correct: that is, that the context switch provides enough time for the DMA to push a word.
Will check back shortly.
Then we'll have the problem that Arnaud brought up -- when starting Rx after Tx, we will have fresh new problems. In fact, I'm 100% sure we will -- I already see it with a portaudio program that must bring up tx and rx separately. I think I'll start a new thread for that problem once this one is put to bed.
-Caleb
On Thu, Oct 29, 2015 at 6:29 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Thu, Oct 29, 2015 at 04:33:26PM -0700, Caleb Crome wrote:
A little difference between your point and mine is that you think DMA request only starts when SSIE and TDMAE both get set while I only think about TDMAE. It's hard to say which one is correct as it depends on the design of IP wrapper but you can fairly test it with your change below: Mask both TE with SSIE and set them after the delay. If it doesn't work, yours is the correct one.
Ah, that's one thing that's very clear in the FSL datasheet: the FIFOs are ZEROED if SSIE is 0. This means that even if the DMA were trying to dump data in before SSIE is enabled, the data would go to bit heaven.
The docs for TE say, "The normal transmit enable sequence is to write data to the STX register(s) and then set the TE bit." (page 5145 of IMX6SDLRM.pdf)
So in the DMA + FIFO case, the words "write data to the STX register(s)" imply that it's actually the DMA writing to the FIFOs, which then feed the STX register. So the sequence must be: enable SSIE & TDMAE to allow the DMA to write to the FIFO, then later enable TE, right?
You have a point. If SSIEN is being treated as the reset signal internally, any write enable signal could be ignored.
I encourage you to try one of the patches I gave you that sets TDMAE/RDMAE at the beginning of trigger(). Surely you may change it to TDMAE | SSIE after you find out that SSIE is indeed required. If you are still having trouble, adding a delay would be nice for you, but it may be hard for me to ack it if you want to merge it into the driver.
Now I see your patch! Okay, I'll give that a go, but it's still just a race condition between the regmap_update_bits with TDMAE (your patch) versus the regmap_update_bits from fsl_ssi_config. You're just hoping that a DMA write happens between TDMAE and the end of fsl_ssi_config where TE is enabled.
A DMA transaction will be issued once the BD is ready (in the SDMA driver) and the SSI sends a DMA request. So I'm hoping that the context latency between the regmap_update_bits() and the TE setting should be enough for the DMA to fill the FIFO.
Now I think I get it though. We do TDMAE + SSIEN like your patch, then a short while loop on SFCSR.TFCNT0. After the first word gets written to the FIFO, TFCNT0 should go > 0, and then we can release TE.
There may be a better status register to wait on but TFCNT0 seems like it will do the trick.
Waiting for TFCNT0 sounds reasonable to me as long as the code is well commented.
Okay, so I tried out your patch and it has 2 separate issues: the first related to missing samples (even when I enabled the SSIEN), and the second relating to the dual fifo. So, first the missing samples.
**********************
** Problem 1 *******
**********************
When I enable your patch in single-FIFO mode, we lose maxburst samples at the beginning of the stream. The important bit is below. It doesn't matter whether SSIEN comes before or after TDMAE; the samples are lost, surely because something in fsl_ssi_config clears the FIFO. I haven't looked into that yet.
--------------------------- snip -------------------
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 73778c2..9d414e8 100644
@@ -1039,6 +1070,20 @@ static int fsl_ssi_trigger(struct snd_pcm_substream *substream, int cmd,
 	case SNDRV_PCM_TRIGGER_START:
 	case SNDRV_PCM_TRIGGER_RESUME:
 	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+		/* enable SSI */
+		regmap_update_bits(regs, CCSR_SSI_SCR,
+				   CCSR_SSI_SCR_SSIEN,
+				   CCSR_SSI_SCR_SSIEN);
+		/* and enable DMA to start data pumping */
+		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+			regmap_update_bits(regs, CCSR_SSI_SIER,
+					   CCSR_SSI_SIER_TDMAE,
+					   CCSR_SSI_SIER_TDMAE);
+		else
+			regmap_update_bits(regs, CCSR_SSI_SIER,
+					   CCSR_SSI_SIER_RDMAE,
+					   CCSR_SSI_SIER_RDMAE);
+
 		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
 			fsl_ssi_tx_config(ssi_private, true);
 		else
--------------------------- snip -------------------
If instead I do the SSIEN at the bottom of the fsl_ssi_config function like this, it works perfectly (in single-FIFO mode):
--------------------------- snip -------------------
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -435,8 +435,49 @@ static void fsl_ssi_config(struct fsl_ssi_private *ssi_private, bool enable,

 config_done:
 	/* Enabling of subunits is done after configuration */
-	if (enable)
+	if (enable) {
+		int tfcnt0 = 0;
+		int tfcnt1 = 0;
+		int max_iterations = 1000;
+
+		/* enable SSI */
+		regmap_update_bits(regs, CCSR_SSI_SCR,
+				   CCSR_SSI_SCR_SSIEN,
+				   CCSR_SSI_SCR_SSIEN);
+		/*
+		 * We must wait here until the DMA actually manages to
+		 * get a word into the Tx FIFO, but only when starting
+		 * a Tx stream.  In tests on an i.MX6 at a 1 GHz clock,
+		 * the do loop below never iterated at all (it dropped
+		 * straight through), which means the DMA had time to
+		 * get some words into the Tx FIFO.  In fact, TFCNT0
+		 * was always 13, so the FIFO was quite full by the
+		 * time we reached this point, and this loop should
+		 * never be a bottleneck.  If max_iterations is hit,
+		 * something might be wrong; report it in that case.
+		 */
+		if (vals->scr & CCSR_SSI_SCR_TE) {
+			u32 sfcsr;
+
+			do {
+				regmap_read(regs, CCSR_SSI_SFCSR, &sfcsr);
+				tfcnt0 = CCSR_SSI_SFCSR_TFCNT0(sfcsr);
+				tfcnt1 = CCSR_SSI_SFCSR_TFCNT1(sfcsr);
+			} while (max_iterations-- && tfcnt0 == 0 && tfcnt1 == 0);
+			if (max_iterations <= 0) {
+				/*
+				 * The DMA definitely should have stuck
+				 * at least a word into the FIFO by now.
+				 * Report an error, but continue blindly
+				 * anyway, even though the SSI might not
+				 * start right.
+				 */
+				struct platform_device *pdev = ssi_private->pdev;
+
+				dev_err(&pdev->dev,
+					"max_iterations reached when starting SSI Tx\n");
+			}
+		}
 		regmap_update_bits(regs, CCSR_SSI_SCR, vals->scr, vals->scr);
+	}
 }
--------------------------- snip -------------------
When all is fixed, the data comes in perfect order (most significant nibble is the channel # from the wav file, least significant 12 bits are the frame number; hope the lines don't wrap at 79 characters...):

0000,1000,2000,3000,4000,5000,6000,7000,8000,9000,a000,b000,c000,d000,e000,f000
0001,1001,2001,3001,4001,5001,6001,7001,8001,9001,a001,b001,c001,d001,e001,f001
0002,1002,2002,3002,4002,5002,6002,7002,8002,9002,a002,b002,c002,d002,e002,f002
0003,1003,2003,3003,4003,5003,6003,7003,8003,9003,a003,b003,c003,d003,e003,f003
...
00fd,10fd,20fd,30fd,40fd,50fd,60fd,70fd,80fd,90fd,a0fd,b0fd,c0fd,d0fd,e0fd,f0fd
00fe,10fe,20fe,30fe,40fe,50fe,60fe,70fe,80fe,90fe,a0fe,b0fe,c0fe,d0fe,e0fe,f0fe
00ff,10ff,20ff,30ff,40ff,50ff,60ff,70ff,80ff,90ff,a0ff,b0ff,c0ff,d0ff,e0ff,f0ff

and all zeros after that.
FYI, this is a 256 frame, 16 channel sound file for doing this test.
**********************
** Problem 2 *******
**********************
When dual-FIFO mode is enabled, the data comes in the following order:

0000,1000,2000,3000,4000,5000,6000,7000,8000,9000,a000,b000,c000,d000,e000,f000
0001,1001,2001,3001,4001,5001,6001,7001,8001,9001,a001,b001,c001,d001,e001,1002
0002,3002,2002,5002,4002,7002,6002,9002,8002,b002,a002,d002,c002,f002,e002,1003
0003,3003,2003,5003,4003,7003,6003,9003,8003,b003,a003,d003,c003,f003,e003,1004
Strange, right? Frame 0 is perfect, however after the first frame, the data gets scrambled and from then on is wrong. The pattern stays consistent after frame 2.
Looks like I need to stick with single FIFO for now.
So, the bottom line is: single FIFO seems to work perfectly with my proposed fix above. Dual FIFO doesn't seem to work.
-caleb
On Fri, Oct 30, 2015 at 3:04 PM, Caleb Crome caleb@crome.org wrote:
On Thu, Oct 29, 2015 at 6:29 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Thu, Oct 29, 2015 at 04:33:26PM -0700, Caleb Crome wrote:
Waiting for TFCNT0 sounds reasonable to me as long as the code is well commented.
Okay, so I tried out your patch and it has 2 separate issues: the first related to missing samples (even when I enabled the SSIEN), and the second relating to the dual fifo. So, first the missing samples.
** Problem 2 *******
When dual-FIFO mode is enabled, the data comes in the following order:

0000,1000,2000,3000,4000,5000,6000,7000,8000,9000,a000,b000,c000,d000,e000,f000
0001,1001,2001,3001,4001,5001,6001,7001,8001,9001,a001,b001,c001,d001,e001,1002
0002,3002,2002,5002,4002,7002,6002,9002,8002,b002,a002,d002,c002,f002,e002,1003
0003,3003,2003,5003,4003,7003,6003,9003,8003,b003,a003,d003,c003,f003,e003,1004
Strange, right? Frame 0 is perfect, however after the first frame, the data gets scrambled and from then on is wrong. The pattern stays consistent after frame 2.
Looks like I need to stick with single fifo for now.
So, bottom line is: single fifo seems to work perfectly with my proposed fix above. Dual fifo doesn't seem to work.
I realized the problem with dual-FIFO mode: you had set maxburst to 16 in your patch. I guess that must be the maxburst for a single FIFO, maybe? When I set the dual-FIFO maxburst back to 8, I get perfection again!
So, now the bottom line is: both single- and dual-FIFO modes work :-)
So, with this patch (I'll send separately as a proper patch), Tx will work perfectly.
Next thing to tackle is: doing starts/stops on Tx/Rx in different orders:
So far, I've only verified: TxStart - TxEnd
Next is: RxStart - RxEnd
And the combinations: TxStart - RxStart - TxEnd - RxEnd TxStart - RxStart - RxEnd - TxEnd RxStart - TxStart - RxEnd - TxEnd RxStart - TxStart - TxEnd - RxEnd
Maybe the rest won't be as difficult :-/
-caleb
On Fri, Oct 30, 2015 at 03:35:17PM -0700, Caleb Crome wrote:
** Problem 2 *******
I realized the problem with dual-FIFO mode: you had set maxburst to 16 in your patch. I guess that must be the maxburst for a single FIFO, maybe? When I set the dual-FIFO maxburst back to 8, I get perfection again!
I actually set 16 for dual FIFO and tested with 2-channel playback. Since you have dual FIFO right now, burst 8 doesn't hurt a lot, as SPBA should be a pretty dedicated bus from the SDMA point of view, as long as you don't have too many SPBA modules working with SDMA. It should work for you.
On Fri, Oct 30, 2015 at 6:32 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Fri, Oct 30, 2015 at 03:35:17PM -0700, Caleb Crome wrote:
** Problem 2 *******
I realized the problem with dual-FIFO mode: you had set maxburst to 16 in your patch. I guess that must be the maxburst for a single FIFO, maybe? When I set the dual-FIFO maxburst back to 8, I get perfection again!
I actually set 16 for dual FIFO and tested with 2-channel playback. Since you have dual FIFO right now, burst 8 doesn't hurt a lot, as SPBA should be a pretty dedicated bus from the SDMA point of view, as long as you don't have too many SPBA modules working with SDMA. It should work for you.
But did you check for bit-perfect playback from start to end of your file, or simply that it sounded right? It appears that with maxburst=16, after about 30 samples the data plays out without further slipping. This would appear to be 'working' without careful analysis, because it could slip a multiple of 16 slots and each channel would end up in the right slot after a few frames. maxburst=16 *definitely* does not work for me.
-Caleb
Alsa-devel mailing list Alsa-devel@alsa-project.org http://mailman.alsa-project.org/mailman/listinfo/alsa-devel
On Fri, Oct 30, 2015 at 03:04:37PM -0700, Caleb Crome wrote:
** Problem 1 *******
When I enable your patch in single fifo mode we get maxburst lost samples at the beginning of the stream. The important bit is below. It doesn't matter if the SSIEN comes before or after the TDMAE, the samples are lost. Surely because something in the fsl_ssi_config clears the fifo. I didn't look into that yet.
The config function has gotten way more complicated than I realized. You may need to dig a little to find the perfect point to insert your change, as it might break other platforms: AC97 and the older i.MX and MPC.
--------------------------- snip -------------------
If instead, I do the SSIEN at the bottom of the fsl_ssi_configure function like this, it works perfectly (in single fifo mode)
[... quoted patch snipped; it appears in full earlier in the thread ...]
Looks like polling is the only way to safely kick off. That's okay. But I would like to see how the change looks after merging the simultaneous TE/RE workaround. It may need to go a long way with other platform users.
On Fri, Oct 30, 2015 at 6:48 PM, Nicolin Chen nicoleotsuka@gmail.com wrote:
On Fri, Oct 30, 2015 at 03:04:37PM -0700, Caleb Crome wrote:
** Problem 1 *******
When I enable your patch in single fifo mode we get maxburst lost samples at the beginning of the stream. The important bit is below. It doesn't matter if the SSIEN comes before or after the TDMAE, the samples are lost. Surely because something in the fsl_ssi_config clears the fifo. I didn't look into that yet.
The config function has gotten way more complicated than I realized.
I know! It's quite complex and has taken me some time to pretty much understand.
But you may need to dig a little bit to make a perfect point to insert your change as it might break other platforms: AC97 and old i.MX and MPC.
The simple change to first enable SSIE, then TE, should not cause any issues, I think. The current code enables TE and SSIE at the same time, which is clearly against what the datasheet says to do. I believe the end of the fsl_ssi_config function is in fact the right place to do it, regardless of platform or format (as long as the FIFO & DMA are enabled).
[... quoted patch snipped; it appears in full earlier in the thread ...]
Looks like polling is the only way to safely kick off. It's okay.
'Polling' is a strong word, because it never actually loops :-) Perhaps 'checking' is a better word.
But I would like to see how the change will be after merging the simultaneous TE/RE work around.
I'm not sure I understand this statement. What's the simultaneous TE/RE work around? It appears that the trigger function *only* does either one or the other, and not simultaneously. Is there a patch in the works to make tx/rx happen simultaneously?
-Caleb
On Sat, Oct 31, 2015 at 09:22:54AM -0700, Caleb Crome wrote:
But I would like to see how the change will be after merging the simultaneous TE/RE work around.
I'm not sure I understand this statement. What's the simultaneous TE/RE work around? It appears that the trigger function *only* does either one or the other, and not simultaneously. Is there a patch in the works to make tx/rx happen simultaneously?
I mean those problems Arnaud mentioned.
participants (6)
- arnaud.mouiche@invoxia.com
- Caleb Crome
- Fabio Estevam
- Markus Pargmann
- Nicolin Chen
- Roberto Fichera