On Mon, Jul 23, 2012 at 03:16:28PM -0500, Pierre-Louis Bossart wrote:
> > Right, but this depends on the ability of the device to pause
> > reading data when it reads up to the point where the application has
> > written.  This is a separate capability from any latency that's been
> > added by the buffering, and most of the systems that have the
> > buffering don't have this capability but instead either don't report
> > the buffer or rely on the application being a full buffer ahead of
> > the hardware.
> We don't have such fancy hardware (and I don't think anyone has).
> This can happen even with simple IP that has an embedded SRAM and
> bursty DMA: if the IP buffer amounts to the period size (to avoid
> partial wakes or transfers) and the application cannot provide more
> than one period initially, you get an underflow that isn't a true one.

The usual way of representing this is to have the pointers in the buffer
represent where the input to the hardware pipeline is (ie, where the
actual DMA is reading) and to handle underflow on that normally.
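That pointer representation can be sketched roughly as follows (the
struct and function names here are illustrative, not the actual ALSA
internals):

```c
#include <assert.h>

/* Illustrative sketch, not real ALSA internals: hw_ptr tracks where the
 * DMA engine is actually reading (the input to the hardware pipeline),
 * appl_ptr tracks how far the application has written.  Both are
 * free-running frame counters, so unsigned subtraction handles wrap. */
struct pcm_buf {
	unsigned long hw_ptr;      /* frames consumed by the DMA */
	unsigned long appl_ptr;    /* frames written by the application */
	unsigned long buffer_size; /* ring size in frames */
};

/* Frames still queued for the hardware; if this reaches zero while the
 * stream is running, that is a genuine underflow at the DMA's read
 * position, regardless of any data sitting in downstream FIFOs. */
static unsigned long pcm_queued(const struct pcm_buf *b)
{
	return b->appl_ptr - b->hw_ptr;
}

/* Space the application may still fill. */
static unsigned long pcm_avail(const struct pcm_buf *b)
{
	return b->buffer_size - pcm_queued(b);
}
```

With free-running counters the wrap-around falls out of the unsigned
arithmetic, which is part of why this representation is convenient.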
Normally this is handled by requiring that the application have at least
one period ready for the hardware at all times; otherwise, as you get
down towards the end of the buffer, things get racier and racier.  It's
not really about the delay, either - it's about when the DMA wants to
read the memory, which isn't quite the same thing.  It could be that it
does this with half the buffer remaining, or it could be that it waits
until it's got some much smaller amount of data left.
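That one-period-ahead rule amounts to a simple predicate (a hypothetical
helper, using free-running frame counters for the application write
position and the DMA read position; the name is mine, not ALSA's):

```c
#include <assert.h>

/* Hypothetical helper, not an existing ALSA API: with free-running
 * frame counters for the application write position and the DMA read
 * position, the usual safety rule is that the application stays at
 * least one full period ahead of the hardware at all times. */
static int app_far_enough_ahead(unsigned long appl_ptr,
				unsigned long hw_ptr,
				unsigned long period_size)
{
	return (appl_ptr - hw_ptr) >= period_size;
}
```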
I think what's really needed here is to represent the buffer in the
hardware, and how it's filled, to the stack more directly.  It's not the
fact that there is data buffered that we need to tell the stack about
(we can already do that); what we need to tell it is that the trigger
level for refilling it may not be what's expected.
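For example (purely illustrative, not a proposed interface): if the DMA
bursts up to some fifo_frames out of the ring whenever its internal FIFO
runs low, the stack should treat the stream as at risk once the queued
data drops below that burst size, not once it drops to zero:

```c
#include <assert.h>

/* Purely illustrative, not a proposed ALSA interface: if the hardware
 * bursts up to fifo_frames out of the ring buffer whenever its internal
 * FIFO runs low, the effective refill trigger level is fifo_frames of
 * queued data, not an empty ring. */
static int stream_at_risk(unsigned long queued_frames,
			  unsigned long fifo_frames)
{
	return queued_frames < fifo_frames;
}
```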