On 23 July 2012 16:47, Vinod Koul <vinod.koul@linux.intel.com> wrote:
On Mon, 2012-07-23 at 16:20 +0530, Jassi Brar wrote:
Couldn't we employ snd_pcm_hw_params.fifo_size for this purpose?
Nope, that should be used to represent the delay in samples being output, not the samples which are buffered and processed in a DSP.
After leaving the ALSA ring buffer, don't the samples incur the extra delay of your internal buffer?
Yes they do.
And there are two types: a) a constant delay in the DMA FIFO, aptly represented by 'delay', and b) dynamic buffering on the input DMA at the DSP and during post-processing. The latter happens before the circular DMA at the output, and this patch attempts to represent that buffering.
Correct, but I think (a) and (b) are not that different. I too might like to buy some time when the ring buffer is empty but my I2S controller's "deep fifo" still has a few milliseconds of data 'buffered'. I don't see how your scenario is effectively different from mine, and I can't see how it matters whether the "DSP buffer" sits before or after the ring buffer. So perhaps we should abstract out any data outside of the ring buffer as being in the "fifo", and start taking fifo_size into account before warning of XRUNs?