On Tue, Apr 11, 2023 at 01:09:59PM +0200, Jaroslav Kysela wrote:
> On 08. 04. 23 9:24, Oswald Buddenhagen wrote:
> Also, silencing touches the DMA buffer, which may not be desired.
hypothetically, yes. but practically? [...]
> The buffers can be mmapped, so they are used directly by the application and the hardware.
yes, and they are owned by the hardware/driver. an application would know better than to do anything with them that they were not designed for.
> And lastly, drivers can handle draining correctly (stop at the exact position - see substream->ops->trigger with the SNDRV_PCM_TRIGGER_DRAIN argument).
yeah. hypothetically. afaict, there is exactly one driver which supports this. most (older) hardware wouldn't even have the capability to do such precise timing without external help.
> Most hardware has a FIFO and most drivers use the DMA position, so I think that the interrupt -> stop-DMA latency may be covered by this FIFO in most cases.
on most hardware it would be quite a stunt to re-program the buffer pointers on the fly to enable a mid-period interrupt. and given the reliability problems insisted on by takashi in the other thread, the approach seems questionable at best. and that's still ignoring the effort of migrating tens (hundreds?) of drivers.
> Again, I would improve the documentation.
no amount of documentation will fix a bad api. it's just not how humans work.
> the silencing is controlled using sw_params, so applications may request the silencing before drain.
yeah, they could, but they don't, and most won't ever.
you're arguing for not doing a very practical and simple change that will fix a lot of user code at once, for the sake of preventing an entirely hypothetical and implausible problem. that is not a good trade-off.
> Lastly, I think that you cannot call the updated snd_pcm_playback_silence() function with runtime->silence_size == 0.
> if (runtime->silence_size < runtime->boundary) {
you missed the hunk that adjusts the code accordingly.
regards