Oleksandr Andrushchenko wrote:
> On 08/07/2017 04:11 PM, Clemens Ladisch wrote:
>> How does that interface work?
> For the buffer received in .copy_user/.copy_kernel we send a request
> to the backend and get a response back (asynchronously) once it has
> copied the bytes into the HW/mixer/etc., so the buffer on the
> frontend side can be reused.
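If I read that right, the frontend side would look roughly like the
sketch below. struct vsnd_stream and xensnd_send_copy_req() are
made-up names standing in for the actual frontend/backend interface;
only the ALSA .copy_user signature and snd_pcm_substream_chip() are
real:

#include <linux/completion.h>
#include <sound/pcm.h>

struct vsnd_stream {
	struct snd_pcm_substream *substream;
	struct completion copy_done;	 /* signalled by the backend's ack */
	snd_pcm_uframes_t hw_ptr;	 /* position for the .pointer callback */
	snd_pcm_uframes_t period_frames; /* frames since the last period boundary */
	/* ... request ring, grant references, etc. ... */
};

static int vsnd_copy_user(struct snd_pcm_substream *substream,
			  int channel, unsigned long pos,
			  void __user *src, unsigned long bytes)
{
	struct vsnd_stream *stream = snd_pcm_substream_chip(substream);
	int ret;

	/* One request for the whole transfer; the backend answers
	 * asynchronously once it has consumed the data. */
	ret = xensnd_send_copy_req(stream, pos, src, bytes); /* hypothetical */
	if (ret)
		return ret;

	/* Block until the ack arrives, i.e. until the frontend buffer
	 * may be reused. */
	wait_for_completion(&stream->copy_done);
	return 0;
}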
So if the frontend sends too many (or too large) requests, does the backend wait until there is enough free space in its own buffer before it does the actual copying and then acks?
If yes, then these acks can be used as interrupts. (You still have to count frames and call snd_pcm_period_elapsed() exactly when a period boundary is reached or crossed.)
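A rough sketch of that bookkeeping, reusing the made-up names from
above; bytes_to_frames() and snd_pcm_period_elapsed() are the real
ALSA calls:

/* Called when the backend acks a copy request of 'bytes' bytes. */
static void vsnd_copy_ack(struct vsnd_stream *stream, unsigned long bytes)
{
	struct snd_pcm_substream *substream = stream->substream;
	struct snd_pcm_runtime *runtime = substream->runtime;
	snd_pcm_uframes_t frames = bytes_to_frames(runtime, bytes);

	/* Advance the position that the .pointer callback reports. */
	stream->hw_ptr = (stream->hw_ptr + frames) % runtime->buffer_size;

	/* Count frames; report a period exactly when a boundary is
	 * reached or crossed. */
	stream->period_frames += frames;
	if (stream->period_frames >= runtime->period_size) {
		stream->period_frames %= runtime->period_size;
		snd_pcm_period_elapsed(substream);
	}

	complete(&stream->copy_done);	/* unblocks .copy_user above */
}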
Splitting a large read/write into smaller requests to the backend would improve the granularity of the known stream position.
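For example, .copy_user could submit one request per period (or any
smaller size) instead of a single big one, again using the
hypothetical xensnd_send_copy_req():

/* Split one large copy into period-sized requests so that acks (and
 * thus position updates) arrive at a finer granularity. */
static int vsnd_copy_user_chunked(struct snd_pcm_substream *substream,
				  int channel, unsigned long pos,
				  void __user *src, unsigned long bytes)
{
	struct vsnd_stream *stream = snd_pcm_substream_chip(substream);
	unsigned long chunk = frames_to_bytes(substream->runtime,
					      substream->runtime->period_size);

	while (bytes) {
		unsigned long n = min(bytes, chunk);
		int ret;

		ret = xensnd_send_copy_req(stream, pos, src, n); /* hypothetical */
		if (ret)
			return ret;
		wait_for_completion(&stream->copy_done);

		pos += n;
		src += n;
		bytes -= n;
	}
	return 0;
}

(Queueing several chunks before waiting for the first ack would avoid
serializing the transfers; the loop above just shows the idea.)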
The overall latency would be the sum of the sizes of the frontend and backend buffers.
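To put purely illustrative numbers on it: with, say, 64 KiB on each side at 48 kHz, S16_LE stereo (192,000 bytes per second), that would be (65536 + 65536) / 192000 ≈ 0.68 seconds in the worst case.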
Why is the protocol designed this way? Wasn't the goal to expose some 'real' sound card?
Regards,
Clemens