On Sat, Oct 19, 2019 at 09:22:16AM +0200, Takashi Iwai wrote:
On Fri, 18 Oct 2019 08:19:05 +0200, Takashi Sakamoto wrote:
Hi,
This patchset is part of my continuing work to improve the ALSA IEC 61883-1/6 packet streaming engine for clock recovery, as outlined in my previous message: https://mailman.alsa-project.org/pipermail/alsa-devel/2019-August/153388.htm...
For clock recovery, information from the sequence of tx packets from the device should be used to generate the sequence of rx packets to the device. In the current implementation of the engine, the packets are processed in a different tasklet context for each IR/IT context. This makes it inconvenient to pass information between these IR/IT contexts.
In this patchset, the engine is improved to process all IR/IT contexts in the same domain within the tasklet context of one of the IT contexts. For convenience, this IT context is called the 'IRQ target'. As a result, the 1394 OHCI controller is configured to generate a hardware IRQ only for the IRQ target, and all rx/tx packets are processed in the tasklet for that hardware IRQ.
Takashi Sakamoto (6):
  ALSA: firewire-lib: add irq_target member into amdtp_domain struct
  ALSA: firewire-lib: replace pointer callback to flush isoc contexts
    in AMDTP domain
  ALSA: firewire-lib: replace ack callback to flush isoc contexts in
    AMDTP domain
  ALSA: firewire-lib: cancel flushing isoc context in the laste step
    to process context callback
  ALSA: firewire-lib: handle several AMDTP streams in callback handler
    of IRQ target
  ALSA: firewire-lib: postpone to start IR context
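To illustrate the first patch listed above, here is a minimal sketch of
the shape of the domain after the change. The field names are my
reading of the patch title; the real definition lives in
sound/firewire/amdtp-stream.h and may differ:

#include <linux/list.h>

struct amdtp_stream;	/* defined in sound/firewire/amdtp-stream.h */

/* Sketch only: the actual layout in the patch may differ. */
struct amdtp_domain {
	struct list_head streams;	 /* all AMDTP streams in the domain */
	struct amdtp_stream *irq_target; /* the sole IT context for which
					  * the 1394 OHCI controller
					  * generates a hardware IRQ */
};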
Applied all patches now.
Although the preempt handling in AMDTP looks a bit suspicious, it should be OK as long as the code has been tested, so I took it as is.
I can understand your concern, but it works well as far as I tested. This time I disable preemption while processing queued packets in process context; before this, I disabled local IRQs instead.
I depict the call graph for processing isoc packets in process context below:
On pcm.ack() or pcm.pointer() callbacks:

fw_iso_context_flush_completions()
->struct fw_card_driver.flush_iso_completions()
  = ohci_flush_iso_completions()
  ->tasklet_disable()
  ->context_tasklet()
    (read registers on 1394 OHCI controller to get info for queued
     packets)
    ->flush_iso_completions()
      ->struct fw_iso_context.callback.sc()
        = amdtp_master_callback()
        ->out_stream_callback() (for irq-target)
          ->fw_iso_context_flush_completions()
            ->...
              ->out_stream_callback() or in_stream_callback()
                 (for non irq-target)
  ->tasklet_enable()
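For reference, a condensed sketch of the bracketing shown above,
simplified from drivers/firewire/ohci.c (re-entrance guards and error
handling omitted):

static int ohci_flush_iso_completions(struct fw_iso_context *base)
{
	struct iso_context *ctx =
			container_of(base, struct iso_context, base);

	/* Prevent the softirq handler from running while its body is
	 * executed synchronously in this process context. */
	tasklet_disable(&ctx->context.tasklet);

	/* Read the 1394 OHCI registers and run the completion handler,
	 * which ends up in amdtp_master_callback() for the irq-target
	 * context. */
	context_tasklet((unsigned long)&ctx->context);

	tasklet_enable(&ctx->context.tasklet);

	return 0;
}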
The tasklet of the IT context for the irq-target AMDTP stream is temporarily disabled while flushing the isoc packets queued to the context, so the tasklet doesn't run even if it's scheduled from hw IRQ context. As for processing the queued packets of several isoc contexts in the same domain, I have no concern even without any irq-flag handling for the local CPU.
The 1394 OHCI controller continues its isoc cycles during the above call graph (a 125 usec interval, i.e. 3072 ticks of the 24.576 MHz cycle timer). In practice, without any interruption, the number of queued packets for a non-irq-target AMDTP stream differs from the number for the irq-target AMDTP stream by at most one or two packets. If an interruption occurs after the queued packets for the irq-target stream have been processed, a largely different number of queued packets is likely to be processed for the rest of the AMDTP streams in the same domain once processing resumes. It's desirable not to create such a count gap between streams in the same domain; I leave that for future work.
For now, the count gap is tolerated. I disable kernel preemption to avoid such interruption, while still allowing hw IRQs from the 1394 OHCI controller (and other hardware) to be caught.
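As a sketch of what I mean (not the patch itself;
domain_flush_completions() is a name I use just for illustration):

#include <linux/preempt.h>

static void domain_flush_completions(struct amdtp_domain *d)
{
	/* Disable preemption, not local IRQs: the walk over the queued
	 * packets must not be preempted, but hw IRQs from the 1394 OHCI
	 * controller (and other hardware) are still serviced. */
	preempt_disable();

	/* Flushing the irq-target context processes its queued packets;
	 * its callback then flushes the other contexts in the same
	 * domain, per the call graph above. */
	fw_iso_context_flush_completions(d->irq_target->context);

	preempt_enable();
}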
On another point, there is a race condition when flushing queued packets at runtime between several PCM substreams for AMDTP streams in the same domain. The ALSA PCM core executes the pcm.pointer and pcm.ack callbacks under the spin lock of the PCM substream's runtime, so the race is controlled for operations on a single PCM substream. However, several PCM substreams can be associated with the same domain via their AMDTP streams. At present I have no solution for this race.
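To make the lock scope concrete, a rough sketch (struct snd_fw_pcm and
its fields are hypothetical names for illustration):

static snd_pcm_uframes_t pcm_pointer(struct snd_pcm_substream *substream)
{
	/* 'struct snd_fw_pcm' and its fields are hypothetical. */
	struct snd_fw_pcm *pcm = substream->private_data;

	/* The ALSA PCM core holds the stream lock of *this* substream
	 * only, so the same flush done for a sibling substream in the
	 * same domain can run concurrently. */
	fw_iso_context_flush_completions(pcm->domain->irq_target->context);

	return amdtp_stream_pcm_pointer(pcm->stream);
}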
Thanks
Takashi Sakamoto