On 08. 04. 23 9:24, Oswald Buddenhagen wrote:
On Sat, Apr 08, 2023 at 01:58:21AM +0200, Jaroslav Kysela wrote:
On 05. 04. 23 22:12, Oswald Buddenhagen wrote:
Draining will always play back somewhat beyond the end of the filled buffer. This would produce artifacts if the user did not set up the auto-silencing machinery. This patch makes it work out of the box.
I think that it was a really bad decision to apply this patch without a broader discussion.
When we designed the API, we knew about the described problems and decided to leave this up to applications.
i ran into no documentation of either the problems or the decisions and their implications for the user.
The documentation may be improved, but the "period transfers" are described.
The silencing may not help in all cases where the PCM samples end at a high volume.
that would just create a slight crack, which isn't any different from a "regular" sudden stop.
Volume ramping should be used, and that's the application's job.
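(For illustration only — a minimal sketch of such an application-side fade-out, assuming interleaved signed 16-bit samples; the function and parameter names below are placeholders, not a real API:)

    #include <stdint.h>

    /* Apply a linear fade-out to the last "frames" frames the application
     * is about to write, so the stream ends at zero amplitude. */
    static void fade_out_tail(int16_t *buf, unsigned int frames, unsigned int channels)
    {
            if (!frames)
                    return;
            for (unsigned int f = 0; f < frames; f++) {
                    /* gain ramps linearly from ~1.0 down to 0 in 16.16 fixed point */
                    int32_t gain = (int32_t)(frames - 1 - f) * 0x10000 / frames;
                    for (unsigned int c = 0; c < channels; c++) {
                            int32_t s = buf[f * channels + c];
                            buf[f * channels + c] = (int16_t)((s * gain) >> 16);
                    }
            }
    }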
imo, that entirely misses the point - the volume is most likely already zero at the end of the buffer. that doesn't mean that it's ok to play the samples again where the volume might not be *quite* zero yet.
Also, silencing touches the DMA buffer which may not be desired.
hypothetically, yes. but practically? why would anyone want to play the same samples after draining? draining is most likely followed by closing the device. and even if not, in most cases (esp. where draining would actually make sense) one wouldn't play a fixed pattern that could be just re-used, so one would have to re-fill the buffer prior to starting again anyway. never mind the effort necessary to track the state of the buffer instead of just re-filling it. so for all practical purposes, already played samples can be considered undefined data and thus safe to overwrite.
The buffers can be mmapped, i.e. used directly by both the application and the hardware. I don't really feel that it's a good thing to modify this buffer for playback when the application has not requested that.
And lastly, drivers can handle draining correctly (stop at the exact position - see substream->ops->trigger with the SNDRV_PCM_TRIGGER_DRAIN argument).
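(Roughly along these lines — just a sketch; the foo_* chip helpers are hypothetical, only the trigger callback signature and SNDRV_PCM_TRIGGER_DRAIN are the real kernel API:)

    #include <sound/pcm.h>

    static int foo_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
    {
            struct foo_chip *chip = snd_pcm_substream_chip(substream);

            switch (cmd) {
            case SNDRV_PCM_TRIGGER_START:
                    return foo_dma_start_cyclic(chip);
            case SNDRV_PCM_TRIGGER_DRAIN:
                    /* program the DMA engine to stop exactly at the last
                     * frame written by the application instead of looping
                     * over stale buffer contents */
                    return foo_dma_stop_at(chip, substream->runtime->control->appl_ptr);
            case SNDRV_PCM_TRIGGER_STOP:
                    return foo_dma_stop(chip);
            default:
                    return -EINVAL;
            }
    }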
yeah. hypothetically. afaict, there is exactly one driver which supports this. most (older) hardware wouldn't even have the capability to do such precise timing without external help.
Most hardware has a FIFO and most drivers use the DMA position, so I think that the interrupt -> DMA-stop latency may be covered by this FIFO in most cases.
But I would really leave it to the driver code to handle this rather than do the forced silencing.
On Sat, Apr 08, 2023 at 07:55:48AM +0200, Takashi Iwai wrote:
Applying the silencing blindly might be overkill, indeed, although this could be seen as an easy solution. Let's see.
i don't think that "overkill" is right here. someone has to do the silencing for draining to be useful at all, and so the question is only who that should be. my argument is that not auto-silencing is *extremely* unexpected, and thus just bad api. i'm pretty certain that about 99% of the usages of DRAIN start out missing this, and most never get fixed.
Again, I would improve the documentation. Also, the silencing is controlled using sw_params, so applications may request the silencing before drain.
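For example, a minimal sketch of how an application could request that before calling snd_pcm_drain() — the helper name is made up, the snd_pcm_sw_params_* calls are the regular alsa-lib API:

    #include <alsa/asoundlib.h>

    /* Ask ALSA to keep the not-yet-written part of the ring buffer zeroed,
     * so draining cannot replay stale samples.  "pcm" is an already
     * configured playback handle. */
    static int enable_auto_silence(snd_pcm_t *pcm)
    {
            snd_pcm_sw_params_t *sw;
            snd_pcm_uframes_t boundary;
            int err;

            snd_pcm_sw_params_alloca(&sw);
            if ((err = snd_pcm_sw_params_current(pcm, sw)) < 0)
                    return err;
            if ((err = snd_pcm_sw_params_get_boundary(sw, &boundary)) < 0)
                    return err;
            /* threshold 0 + size == boundary requests continuous silencing */
            if ((err = snd_pcm_sw_params_set_silence_threshold(pcm, sw, 0)) < 0)
                    return err;
            if ((err = snd_pcm_sw_params_set_silence_size(pcm, sw, boundary)) < 0)
                    return err;
            return snd_pcm_sw_params(pcm, sw);
    }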
imo, if any api is added, it should be to opt *out* of auto-silencing. but i don't think this makes any sense; there would be ~zero users of this ever.
Lastly, I think that you cannot call the updated snd_pcm_playback_silence() function with runtime->silence_size == 0.
    if (runtime->silence_size < runtime->boundary) {
            frames = runtime->silence_threshold - noise_dist;
            if ((snd_pcm_sframes_t) frames <= 0)
                    return;
            if (frames > runtime->silence_size)
                    frames = runtime->silence_size;
    } else {
With runtime->silence_size == 0, the frames variable gets clamped to silence_size, i.e. to 0, so your code will do nothing.
Jaroslav