On Tue, 27 Dec 2022 16:26:54 +0100, Marek Marczykowski-Górecki wrote:
On Thu, Dec 22, 2022 at 09:09:15AM +0100, Takashi Iwai wrote:
On Sat, 10 Dec 2022 17:17:42 +0100, Marek Marczykowski-Górecki wrote:
On Sat, Dec 10, 2022 at 02:00:06AM +0100, Marek Marczykowski-Górecki wrote:
On Fri, Dec 09, 2022 at 01:40:15PM +0100, Marek Marczykowski-Górecki wrote:
On Fri, Dec 09, 2022 at 09:10:19AM +0100, Takashi Iwai wrote:
On Fri, 09 Dec 2022 02:27:30 +0100, Marek Marczykowski-Górecki wrote:

Hi,

Under Xen PV dom0, with Linux >= 5.17, sound stops working after a few hours. pavucontrol still shows meter bars moving, but the speakers remain silent. At least on some occasions I see the following message in dmesg:

[ 2142.484553] snd_hda_intel 0000:00:1f.3: Unstable LPIB (18144 >= 6396); disabling LPIB delay counting
I hit the issue again; this message did not appear in the log (or at least not yet).
(...)
In any case, please check the behavior with 6.1-rc8 plus commit cc26516374065a34e10c9a8bf3e940e42cd96e2a ("ALSA: memalloc: Allocate more contiguous pages for fallback case") from the for-next branch of my sound git tree (it will be in 6.2-rc1).
This did not help.
Looking at the mentioned commits, there is one specific aspect of Xen PV that may be relevant: it configures PAT differently than native Linux. In theory Linux adapts automatically, and using the proper API (like set_memory_wc()) should just work, but at least for the i915 driver it causes issues (not fully tracked down yet). This bug report includes some more background: https://lore.kernel.org/intel-gfx/Y5Hst0bCxQDTN7lK@mail-itl/
Anyway, I have tested it on a Xen modified to set up PAT the same way as native Linux, and the audio issue is still there.
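For reference, this is roughly how the "proper API" mentioned above is used on x86; a minimal sketch with illustrative helper names (alloc_wc_page()/free_wc_page()), not code from any patch in this thread:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

/* Sketch only: allocate one page and switch its kernel mapping to
 * write-combining via the PAT-aware helper, as a driver normally would. */
static void *alloc_wc_page(void)
{
        struct page *page = alloc_page(GFP_KERNEL);

        if (!page)
                return NULL;
        set_memory_wc((unsigned long)page_address(page), 1);
        return page_address(page);
}

static void free_wc_page(void *buf)
{
        /* restore the default write-back attribute before freeing */
        set_memory_wb((unsigned long)buf, 1);
        __free_page(virt_to_page(buf));
}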
If the problem persists, another thing to check is whether the hack below works.
Trying this one now.
And this one didn't help either :/
(Sorry for the late reply, as I've been off in the last weeks.)
I think the hack doesn't influence the PCM buffer pages, only the BDL pages. Could you check the patch below instead? It'll disable the SG-buffer handling on x86 completely.
This seems to "fix" the issue, thanks! I guess I'll run it this way for now, but a proper solution would be nice. Let me know if I can collect any more info that would help with that.
Then it seems we should go back to the coherent memory allocation for the fallback SG case. It was changed because the use of dma_alloc_coherent() caused a problem with retrieving the page addresses in the IOMMU case, but since commit 9736a325137b we essentially avoid the fallback when an IOMMU is used, so it should be fine again.
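For clarity (not part of the patch below): dma_alloc_coherent() hands back both a CPU virtual address and a DMA handle for a physically contiguous, page-aligned chunk, which is what makes the per-page address bookkeeping in the fallback path straightforward once an IOMMU is out of the picture. A minimal sketch with illustrative helper names:

#include <linux/dma-mapping.h>

/* Sketch only: allocate/free one DMA-coherent chunk; 'dma' is the
 * device-visible, page-aligned address of the chunk. */
static void *alloc_chunk(struct device *dev, size_t chunk, dma_addr_t *dma)
{
        return dma_alloc_coherent(dev, chunk, dma, GFP_KERNEL);
}

static void free_chunk(struct device *dev, size_t chunk, void *buf, dma_addr_t dma)
{
        dma_free_coherent(dev, chunk, buf, dma);
}

/* Since the chunk is contiguous in DMA space, the address of page i
 * within it is simply an offset from the chunk's handle. */
static dma_addr_t chunk_page_addr(dma_addr_t dma, size_t i)
{
        return dma + i * PAGE_SIZE;
}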
Let me know whether the patch below works for you instead of the previous hack that disabled the SG-buffer handling (note: totally untested!).
thanks,
Takashi
-- 8< --
--- a/sound/core/memalloc.c
+++ b/sound/core/memalloc.c
@@ -719,17 +719,30 @@ static const struct snd_malloc_ops snd_dma_sg_wc_ops = {
 struct snd_dma_sg_fallback {
         size_t count;
         struct page **pages;
+        dma_addr_t *addrs;
 };
 
 static void __snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab,
                                        struct snd_dma_sg_fallback *sgbuf)
 {
-        bool wc = dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
-        size_t i;
-
-        for (i = 0; i < sgbuf->count && sgbuf->pages[i]; i++)
-                do_free_pages(page_address(sgbuf->pages[i]), PAGE_SIZE, wc);
+        size_t i, size;
+
+        if (sgbuf->pages && sgbuf->addrs) {
+                i = 0;
+                while (i < sgbuf->count) {
+                        if (!sgbuf->pages[i] || !sgbuf->addrs[i])
+                                break;
+                        size = sgbuf->addrs[i] & ~PAGE_MASK;
+                        if (WARN_ON(!size))
+                                break;
+                        dma_free_coherent(dmab->dev.dev, size << PAGE_SHIFT,
+                                          page_address(sgbuf->pages[i]),
+                                          sgbuf->addrs[i] & PAGE_MASK);
+                        i += size;
+                }
+        }
         kvfree(sgbuf->pages);
+        kvfree(sgbuf->addrs);
         kfree(sgbuf);
 }
@@ -738,9 +751,8 @@ static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
         struct snd_dma_sg_fallback *sgbuf;
         struct page **pagep, *curp;
         size_t chunk, npages;
-        dma_addr_t addr;
+        dma_addr_t *addrp;
         void *p;
-        bool wc = dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
 
         sgbuf = kzalloc(sizeof(*sgbuf), GFP_KERNEL);
         if (!sgbuf)
@@ -748,14 +760,16 @@ static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
         size = PAGE_ALIGN(size);
         sgbuf->count = size >> PAGE_SHIFT;
         sgbuf->pages = kvcalloc(sgbuf->count, sizeof(*sgbuf->pages), GFP_KERNEL);
-        if (!sgbuf->pages)
+        sgbuf->addrs = kvcalloc(sgbuf->count, sizeof(*sgbuf->addrs), GFP_KERNEL);
+        if (!sgbuf->pages || !sgbuf->addrs)
                 goto error;
 
         pagep = sgbuf->pages;
-        chunk = size;
+        addrp = sgbuf->addrs;
+        chunk = PAGE_SIZE * (PAGE_SIZE - 1); /* to fit in low bits in addrs */
         while (size > 0) {
                 chunk = min(size, chunk);
-                p = do_alloc_pages(dmab->dev.dev, chunk, &addr, wc);
+                p = dma_alloc_coherent(dmab->dev.dev, chunk, addrp, DEFAULT_GFP);
                 if (!p) {
                         if (chunk <= PAGE_SIZE)
                                 goto error;
@@ -767,6 +781,8 @@ static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
                 size -= chunk;
                 /* fill pages */
                 npages = chunk >> PAGE_SHIFT;
+                *addrp |= npages; /* store in lower bits */
+                addrp += npages;
                 curp = virt_to_page(p);
                 while (npages--)
                         *pagep++ = curp++;
@@ -775,6 +791,10 @@ static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
         p = vmap(sgbuf->pages, sgbuf->count, VM_MAP, PAGE_KERNEL);
         if (!p)
                 goto error;
+
+        if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK)
+                set_pages_array_wc(sgbuf->pages, sgbuf->count);
+
         dmab->private_data = sgbuf;
         /* store the first page address for convenience */
         dmab->addr = snd_sgbuf_get_addr(dmab, 0);
@@ -787,7 +807,11 @@ static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
 static void snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab)
 {
+        struct snd_dma_sg_fallback *sgbuf = dmab->private_data;
+
         vunmap(dmab->area);
+        if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK)
+                set_pages_array_wb(sgbuf->pages, sgbuf->count);
         __snd_dma_sg_fallback_free(dmab, dmab->private_data);
 }
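A side note on the trick in the patch above: dma_alloc_coherent() returns page-aligned DMA addresses, so the low PAGE_SHIFT bits of each stored address are free, and the patch reuses them to remember how many pages each chunk spans (which is also why the chunk size is capped at PAGE_SIZE - 1 pages). A tiny standalone illustration of the same packing, with made-up values:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uint64_t)1 << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
        /* a page-aligned DMA address and a chunk of 16 pages (made-up values) */
        uint64_t addr = 0x12345000, npages = 16;
        uint64_t stored = (addr & PAGE_MASK) | npages;  /* pack count into low bits */

        printf("addr=%#llx npages=%llu\n",
               (unsigned long long)(stored & PAGE_MASK),   /* -> 0x12345000 */
               (unsigned long long)(stored & ~PAGE_MASK)); /* -> 16 */
        return 0;
}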