On Mon, 01 Mar 2021 16:24:00 +0100, Anton Yakovlev wrote:
On 01.03.2021 15:56, Takashi Iwai wrote:
On Mon, 01 Mar 2021 15:47:46 +0100, Anton Yakovlev wrote:
On 01.03.2021 14:32, Takashi Iwai wrote:
On Mon, 01 Mar 2021 10:25:05 +0100, Anton Yakovlev wrote:
On 28.02.2021 12:27, Takashi Iwai wrote:
On Sat, 27 Feb 2021 09:59:52 +0100, Anton Yakovlev wrote:
> +/**
> + * virtsnd_pcm_event() - Handle the PCM device event notification.
> + * @snd: VirtIO sound device.
> + * @event: VirtIO sound event.
> + *
> + * Context: Interrupt context.
OK, then the nonatomic PCM flag is invalid...
Well, no. Here, events are kind of independent entities. PCM-related events are just a special case of more generic events, which can carry any kind of notification/payload. (And at the moment, only XRUN notification is supported for PCM substreams.) So it has nothing to do with the atomicity of the PCM device itself.
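(For illustration, a minimal sketch of how such a dispatcher could route generic vs. PCM events; the VIRTIO_SND_EVT_* codes and struct virtio_snd_event come from the virtio sound spec and virtsnd_pcm_event() from the quoted kernel-doc, while the dispatcher itself and its body are only assumptions:)

    static void virtsnd_event_dispatch(struct virtio_snd *snd,
                                       struct virtio_snd_event *event)
    {
            switch (le32_to_cpu(event->hdr.code)) {
            case VIRTIO_SND_EVT_JACK_CONNECTED:
            case VIRTIO_SND_EVT_JACK_DISCONNECTED:
                    /* generic, non-PCM notifications */
                    break;
            case VIRTIO_SND_EVT_PCM_PERIOD_ELAPSED:
            case VIRTIO_SND_EVT_PCM_XRUN:
                    /* PCM events are just one kind of payload */
                    virtsnd_pcm_event(snd, event);
                    break;
            }
    }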
OK, thanks.
Basically the only question is how snd_pcm_period_elapsed() is called. And I see that it's called inside queue->lock, which already invalidates the nonatomic PCM mode. So the code needs a fix: either fix this locking (and guarantee that the call doesn't happen in irq context), or change to the normal PCM mode without the nonatomic flag. Both would bring some side effects, and we'd need further changes, I suppose...
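(One way to keep the nonatomic mode would be to defer the notification out of the irq handler, e.g. roughly as sketched below; the work-item wrapper is purely hypothetical, only snd_pcm_period_elapsed() and the workqueue API are standard kernel interfaces:)

    #include <linux/workqueue.h>
    #include <sound/pcm.h>

    /* Hypothetical wrapper: one work item per substream. */
    struct virtsnd_elapsed_work {
            struct work_struct work;
            struct snd_pcm_substream *substream;
    };

    static void virtsnd_elapsed_work_fn(struct work_struct *work)
    {
            struct virtsnd_elapsed_work *ew =
                    container_of(work, struct virtsnd_elapsed_work, work);

            /* Process context: safe even when the stream lock is a mutex
             * (nonatomic mode); must NOT be called under queue->lock. */
            snd_pcm_period_elapsed(ew->substream);
    }

    /* In the irq handler, while still holding queue->lock:
     *         schedule_work(&ew->work);
     * schedule_work() itself is irq-safe.
     */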
Ok, I understood the problem. Well, I would say the nonatomic PCM mode is the more important option, since in this mode we can guarantee correct operation of the device.
Which operations (except for resume) in the trigger and pointer callbacks need an action that has to sleep? I thought the sync with the command queue is done in sync_stop. And most of the others already seem to take a spinlock themselves, so the non-atomic operation has little merit for them.
All three commands from the beginning of that switch in the trigger op:

  ops->trigger(START)
  ops->trigger(PAUSE_RELEASE)
  ops->trigger(RESUME)
They all result in virtsnd_ctl_msg_send_sync(VIRTIO_SND_R_PCM_START)
And that command must be sync, i.e. we need to wait/sleep until the device has sent a response. There are two major reasons for that: we need to be sure that the substream has actually been started (i.e. the device said OK), and we need to take the virtualization overhead into account as well. Since starting a substream on the device side may take some time, we cannot return from the trigger op until it has actually happened. In atomic mode all these nuances are lost, and that may lead to incorrect application behavior.
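(As a sketch, such a trigger op would look roughly like this; the nonatomic mode itself is enabled in ALSA by setting pcm->nonatomic = true after snd_pcm_new(), and the virtsnd_ctl_msg_send_sync() call below just mirrors the shorthand used in this thread, not the real signature:)

    static int virtsnd_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
    {
            switch (cmd) {
            case SNDRV_PCM_TRIGGER_START:
            case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
            case SNDRV_PCM_TRIGGER_RESUME:
                    /* Send VIRTIO_SND_R_PCM_START and sleep until the
                     * device answers -- only legal in nonatomic mode. */
                    return virtsnd_ctl_msg_send_sync(VIRTIO_SND_R_PCM_START);
            default:
                    /* other commands omitted in this sketch */
                    return -EINVAL;
            }
    }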
I see, then the nonatomic mode is the only way to go, indeed.
Takashi