On Tue, Sep 24, 2019 at 02:41:46PM +0300, Peter Ujfalusi wrote:
> It can only be fixed by using a different sequence within trigger for
> 'stop' and 'start':
>
> case SNDRV_PCM_TRIGGER_START:
> case SNDRV_PCM_TRIGGER_RESUME:
> case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
> Start DMA first, followed by the CPU DAI (the currently used sequence)
>
> case SNDRV_PCM_TRIGGER_STOP:
> case SNDRV_PCM_TRIGGER_SUSPEND:
> case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
> Stop the CPU DAI first, followed by the DMA
Yeah, this makes sense I think.
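Something like the sketch below is what I'd picture for the trigger callback (just a rough sketch, not a patch against soc-pcm; dma_trigger() and cpu_dai_trigger() are made-up stand-ins for the DMA and CPU DAI trigger paths):

#include <linux/errno.h>
#include <sound/pcm.h>

/* Hypothetical helpers, names invented purely for illustration */
int dma_trigger(struct snd_pcm_substream *substream, int cmd);
int cpu_dai_trigger(struct snd_pcm_substream *substream, int cmd);

static int example_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
{
	int ret;

	switch (cmd) {
	case SNDRV_PCM_TRIGGER_START:
	case SNDRV_PCM_TRIGGER_RESUME:
	case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
		/* start: get the DMA moving data first, then start the DAI */
		ret = dma_trigger(substream, cmd);
		if (ret < 0)
			return ret;
		return cpu_dai_trigger(substream, cmd);
	case SNDRV_PCM_TRIGGER_STOP:
	case SNDRV_PCM_TRIGGER_SUSPEND:
	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
		/*
		 * stop: quiesce the CPU DAI first so it no longer consumes
		 * (or produces) samples, then tear down the DMA
		 */
		ret = cpu_dai_trigger(substream, cmd);
		if (ret < 0)
			return ret;
		return dma_trigger(substream, cmd);
	default:
		return -EINVAL;
	}
}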
> Thinking about the issue, I'm not sure why it was not noticed before,
> as the behavior follows directly from the sequence: we stop the DMA
> first and then stop the CPU DAI. If, between the DMA stop and the DAI
> stop, the still-running DAI needs another sample, we are guaranteed to
> underrun in the HW (or overrun in the capture case).
There are a bunch of systems where the trigger only actually does anything with one or the other of the IPs and the startup for the other is handled by a hardware signal, so the ordering doesn't really matter for them.
> I'm not sure if anyone else has seen such an underrun/overrun when
> stopping a stream, but the fact that I have seen it with both
> UDMA+PDMA and EDMA on different platforms makes me wonder whether the
> issue can be seen on other platforms as well.
I'd guess so, especially with smaller buffers.