Hi,
On Tue, 9 Feb 2010, Raymond Yau wrote:
>> - at T3, application calls snd_pcm_delay() to query how many samples
>> of delay there are currently (e.g. if it writes a sample to the ALSA
>> PCM device now, how long before it hits the speaker)
> No, your assumption (how long before it hits the speaker) is wrong;
> refer to
> http://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m.html
I have trouble following your logic here. The page you refer to says:
  "For playback the delay is defined as the time that a frame that is
  written to the PCM stream shortly after this call will take to be
  actually audible. It is as such the overall latency from the write
  call to the final DAC."
Isn't that _exactly_ what I wrote above?
I.e. this is the purpose of snd_pcm_delay(), and it is why application
developers use it for e.g. audio/video sync. So what's the difference?
Do you mean that speaker != DAC, or...?
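
To make the A/V-sync use concrete, here is a rough sketch of how an
application might query the latency. This is my own illustration, not
from your mail; the "default" device, S16/stereo/48 kHz format, 0.5 s
buffer and the minimal error handling are all assumptions:

/* Rough sketch: measure write-to-DAC latency with snd_pcm_delay().
 * Build with: gcc av_sync_sketch.c -lasound */
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_sframes_t delay;
    const unsigned int rate = 48000;
    const snd_pcm_uframes_t frames = rate / 10;      /* 100 ms chunk */
    short *buf = calloc(frames * 2, sizeof(short));  /* stereo silence */

    if (!buf || snd_pcm_open(&pcm, "default",
                             SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, rate, 1, 500000 /* 0.5 s */) < 0)
        return 1;

    /* Queue some audio so the buffer is not empty. */
    snd_pcm_writei(pcm, buf, frames);

    /* The question from the timeline above: if a sample is written
     * right now, how long until it is audible?  snd_pcm_delay()
     * answers it in frames (buffer fill + any extra hardware delay). */
    if (snd_pcm_delay(pcm, &delay) == 0)
        printf("a sample written now is audible in ~%.1f ms\n",
               1000.0 * (double)delay / rate);

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    free(buf);
    return 0;
}

A player would then shift its video clock by roughly delay/rate seconds
relative to the audio it has just written.
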
> Why does PA insist on using one period per buffer when only the ISA
> drivers and intel8x0 have periods_min = 1, while the most common HDA
> driver and most sound cards have periods_min = 2?
That is discussed at length here: http://0pointer.de/blog/projects/pulse-glitch-free.html
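
As a side note, the periods_min constraint you mention can be read
straight from the driver's hw_params configuration space. A small
sketch of my own, assuming device "hw:0,0" and skipping most error
handling:

/* Sketch: read the driver's allowed period-count range.
 * Build with -lasound. */
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int pmin = 0, pmax = 0;
    int dir = 0;

    if (snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);  /* start from the full config space */

    /* The constraint you refer to: how many periods per buffer the
     * driver will accept at minimum/maximum. */
    snd_pcm_hw_params_get_periods_min(hw, &pmin, &dir);
    snd_pcm_hw_params_get_periods_max(hw, &pmax, &dir);
    printf("periods per buffer: min=%u max=%u\n", pmin, pmax);

    snd_pcm_close(pcm);
    return 0;
}

On HDA hardware this will typically report min=2, which is exactly the
constraint you describe; whatever scheme PA uses, it still has to pick
its period/buffer sizes from the range the driver actually exposes.
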