On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
Mauro Carvalho Chehab <mchehab@kernel.org> wrote:
There is still time control associated with it, as audio and video need to be in sync. This is done by controlling the buffer sizes, and it could be fine-tuned by checking when the buffer transfer is done.
...
Just to add: on media, we do this per video buffer (or per half video buffer). A typical use case on cameras is to have buffers transferred 30 times per second if the video is streamed at 30 frames per second.
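(For context, a rough and hypothetical sketch of that pacing with the V4L2 capture API, with device setup, buffer queueing and error handling omitted: VIDIOC_DQBUF blocks until the hardware has filled a frame, so the loop wakes up roughly 30 times per second for a 30 fps stream.)

/* Hypothetical sketch: per-buffer pacing of a V4L2 capture stream.
 * Assumes the device is already open, buffers are queued and
 * streaming has been started. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void capture_loop(int fd)
{
	struct v4l2_buffer buf;

	for (;;) {
		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_MMAP;

		/* Blocks until the hardware finishes one frame, i.e.
		 * about every 33 ms for a 30 fps sensor. */
		if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
			break;

		/* ... consume the frame here ... */

		/* Hand the buffer back to the driver for the next frame. */
		if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
			break;
	}
}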
IIRC one big use case for this hardware was transcoding, so there was a desire to just go at whatever rate the hardware could support, since there is no interactive user consuming the output as it is generated.
I would assume that, on an audio/video stream, the audio data transfer will also be programmed to happen at a regular interval.
With audio the API is very much "wake userspace every X ms".
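For reference, a rough sketch of what that looks like with alsa-lib (rate, format and period values are just illustrative, error handling omitted): asking for a ~10 ms period means the blocked reader/writer is woken roughly every 10 ms as each period of samples completes.

/* Illustrative alsa-lib configuration: a ~10 ms period size gives a
 * ~10 ms wakeup interval.  All values below are examples only. */
#include <alsa/asoundlib.h>

static int configure_pcm(snd_pcm_t *pcm)
{
	snd_pcm_hw_params_t *hw;
	unsigned int rate = 48000;
	unsigned int period_us = 10000;	/* ~10 ms */

	snd_pcm_hw_params_alloca(&hw);
	snd_pcm_hw_params_any(pcm, hw);
	snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
	snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
	snd_pcm_hw_params_set_channels(pcm, hw, 2);
	snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, NULL);
	/* Each completed period raises an interrupt and wakes userspace. */
	snd_pcm_hw_params_set_period_time_near(pcm, hw, &period_us, NULL);

	return snd_pcm_hw_params(pcm, hw);
}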