On Wed, Jun 24, 2009 at 03:07:17PM -0400, Jon Smirl wrote:
> On Wed, Jun 24, 2009 at 12:28 PM, Mark Brown <broonie@opensource.wolfsonmicro.com> wrote:
> > it does. Part of what's going on here is that the kernel code is trying to give userspace access to the data for as long as possible.
> The problem is knowing which sample in the background music to start mixing the low-latency laser blast into. ALSA will need to know this index to figure out where to switch onto the replacement buffer. This offset is dynamic: it depends on how much work PulseAudio is doing.
> Of course, some hardware is not going to allow the DMA controller to be reprogrammed while active, so it would need to either wait for a buffer boundary or update the data in the current buffer as is currently done.
> I'm starting to think the OSS model is right and mixing belongs in the kernel.
This isn't a kernel/user problem. Exactly the same issues come up if the code pushing data into the driver is in the kernel; it'll still want as much information as possible about what the current status is.
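To make that concrete, here's a rough sketch (an illustration only, not a description of what PulseAudio actually does) of a userspace client taking back the not-yet-played tail of the ring buffer and rewriting it with freshly mixed data. It assumes alsa-lib's snd_pcm_rewindable()/snd_pcm_rewind(), interleaved S16 stereo, and that the application keeps its own shadow copy of the samples it has queued; error recovery is left out.

#include <alsa/asoundlib.h>

/* Overwrite the not-yet-played tail of the queue with freshly mixed data.
 * queued_copy is the application's shadow of the interleaved S16 stereo
 * frames it currently has queued, already containing the urgent sound. */
static int rewrite_queue_tail(snd_pcm_t *pcm,
                              const short *queued_copy,
                              snd_pcm_uframes_t queued_frames)
{
    const unsigned int channels = 2;     /* assumed stereo */
    const snd_pcm_uframes_t margin = 64; /* frames the DMA may already own */

    /* How many queued frames can still be taken back from the driver? */
    snd_pcm_sframes_t rewindable = snd_pcm_rewindable(pcm);
    if (rewindable < 0)
        return rewindable;               /* stream error, e.g. an underrun */
    if ((snd_pcm_uframes_t)rewindable <= margin)
        return 0;                        /* too late to rewind; just append */

    snd_pcm_uframes_t rew = (snd_pcm_uframes_t)rewindable - margin;
    if (rew > queued_frames)
        rew = queued_frames;

    /* Pull the application pointer back... */
    snd_pcm_sframes_t done = snd_pcm_rewind(pcm, rew);
    if (done <= 0)
        return done;

    /* ...and rewrite exactly that stretch with the mixed samples. */
    return snd_pcm_writei(pcm,
                          queued_copy + (queued_frames - (snd_pcm_uframes_t)done) * channels,
                          (snd_pcm_uframes_t)done);
}

The safety margin here stands in for whatever the driver reports it can no longer safely rewind past (FIFO depth, DMA burst size and so on), which is precisely the "current status" information the code doing the mixing needs from the driver, wherever that code lives.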
Moving any non-hardware stuff into the kernel would create more problems than it solves. Remember that ALSA supports arbitrary plugin stacks - users could be doing signal processing on the data post-mix, for example soft EQ or 3D enhancement. Some of this can be done pre-mix, but it'll always be less efficient and in some cases would interfere with the operation of the algorithms.
Remember also that hardware output is just one option for ALSA. You can also have output plugins that do things like send data over the network.
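To give a flavour of both points, here's a hedged ~/.asoundrc sketch (card numbers, the IPC key, slave sizes and control names are placeholders, not recommendations). Streams opened on "default" go through plug and softvol before dmix merges them for the hardware; the last PCM shows an output that never touches local hardware at all, using the file plugin to write the stream to disk. alsa-plugins ships further non-hardware outputs, such as the PulseAudio client plugin, which is one way the data can end up going over the network instead of to a sound card.

# Userspace mixing: dmix merges all clients and feeds hw:0,0.
pcm.mixer {
    type dmix
    ipc_key 2048
    slave {
        pcm "hw:0,0"
        period_size 1024
        buffer_size 8192
    }
}

# Per-stream soft volume applied before the mix.
pcm.soft {
    type softvol
    slave.pcm "mixer"
    control {
        name "Pre-Mix Attenuation"
        card 0
    }
}

# What applications actually open; plug adds any needed format conversion.
pcm.!default {
    type plug
    slave.pcm "soft"
}

# No hardware at all: dump the stream to a file instead.
pcm.to_file {
    type file
    slave.pcm "null"
    file "/tmp/stream.raw"
    format "raw"
}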