On Fri, Mar 15, 2019 at 10:30:30AM +0000, Liam Girdwood wrote:
On Fri, 2019-03-15 at 10:20 +0100, Guennadi Liakhovetski wrote:
Hi,
Working on https://github.com/thesofproject/sof/pull/1053 I'm making sure all the DSP cores see the same data, which on Xtensa seems to require manual cache synchronisation.
FWIW, this depends on the ISA and on the Xtensa IP configuration (and this can vary between Xtensa devices, since the ISA is extremely configurable). The same may also be true for other DSP ISAs.
So not all configurations require manual cache sync? Do we know which ones do and which ones don't?
This is a painful task; it's only practically feasible because there aren't many objects in SOF that can be accessed by different cores. However, with that PR, interrupt descriptors will also become accessible by multiple cores.
On Sue Creek there are two kinds of cascading interrupt controllers: the internal "level2" - "level5" controllers and a DesignWare interrupt controller IP, used by several peripherals, including the UART. With the addition of manual cache synchronisation, UART interrupts stopped working. Debugging showed that DW INTC interrupt descriptors lose their updates when other descriptors have their cache invalidated. And those descriptors aren't even close to each other in RAM. Roughly the following is happening:
The DW IRQ controller IP address space is non-cacheable (since it's HW), so you shouldn't need to invalidate any accesses to it.
Sorry, I didn't mean physical DW INTC registers, I meant a descriptor in software, in RAM.
	dw_intc->irq = new_irq;

	list_for_item(list, descriptors) {
		desc = container_of(list, struct irq_desc, list);
		dcache_invalidate_region(desc, sizeof(*desc));
	}
at which point dw_intc->irq loses its new_irq value and restores the initial -EINVAL value. If I either remove the invalidation line above or add a writeback for dw_intc between the first two lines, the problem goes away.
So IIUC dw_intc->irq is being trashed when you invalidate desc?
Yes.
Therefore questions:
- am I using cache invalidation wrongly?
- if the invalidated memory region isn't cache-line aligned, what happens? Are the full containing lines invalidated, thus potentially discarding dirty data? But even if this were handled incorrectly, it wouldn't explain this problem, because those descriptors aren't adjacent in RAM.
- is that a software or a hardware bug?
- how to fix?
More questions
- Are other dw_intc-> members trashed? How is this structure/member aligned in memory?
Not in my test, because other members were initialised statically and only .irq was assigned before scanning the descriptor list and invalidating caches. I don't impose any alignment so far. The DW INTC descriptor was at 0xbe046aa8 and the descriptor whose invalidation caused the trouble was at 0xbe04640c. .irq is in fact the first element in the struct.
- Have the descriptors been invalidated? I.e. are you invalidating a valid address for desc?
Yes, addresses are valid.
- If desc and sizeof(*desc) are not cache line/size aligned, then try to make sure that size is rounded up to the nearest cache line and desc rounded down to the nearest cache line for alignment.
If this is the problem, wouldn't that indicate a hardware bug? I thought about trying this, but according to our understanding it shouldn't be necessary?
- Is this code running on core 0 (the same core that registered all the IRQs)?
So far Sue Creek doesn't enable SMP, so, yes.
Thanks
Guennadi