From: Pan Xiuli <xiuli.pan@linux.intel.com>
The reschedule logic is wrong: we reschedule the trace_work every time the buffer is more than half full and a new trace event arrives. Each reschedule postpones the previously scheduled trace_work before it has had a chance to run. The trace_work thus becomes the carrot dangling in front of the DSP donkey: while the DSP is busy and trace events keep arriving continuously, the work is always pushed further into the future and never runs.
Signed-off-by: Pan Xiuli <xiuli.pan@linux.intel.com>
Reviewed-by: Zhigang Wu <zhigang.wu@linux.intel.com>
Tested-by: Zhigang Wu <zhigang.wu@linux.intel.com>
---
Test with: Minnow Max rt5651, GP-MRB nocodec and CNL nocodec
SOF 1.1-stable: 210989dffeea811de2370fccb7cf5d53106b1e6e
SOF-Tool 1.1-stable: 49c1b450e635ac0c893d67ff0ddfd34e03a85b46
https://github.com/plbossart/sound/tree/topic/sof-v4.14: 8d8c1bb32537800726b14d00d13f324b7f536386
---
 src/include/reef/dma-trace.h | 1 +
 src/lib/dma-trace.c          | 9 ++++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)
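For illustration only (not part of the patch): a minimal standalone C sketch of the failure mode and of the guard introduced here. The tick loop, RESCHEDULE_DELAY, deadline and run() are made-up stand-ins for the SOF work queue, DMA_TRACE_RESCHEDULE_TIME and work_reschedule_default(); the real scheduling is more involved.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define RESCHEDULE_DELAY 5	/* stand-in for DMA_TRACE_RESCHEDULE_TIME */
#define TICKS 20

static uint32_t deadline;	/* tick at which the trace work is due */
static uint32_t rescheduled;	/* the flag this patch introduces */

/* stand-in for work_reschedule_default(): pushes the deadline out */
static void work_reschedule(uint32_t now)
{
	deadline = now + RESCHEDULE_DELAY;
}

/* trace producer: assume the buffer stays > 50% full while busy */
static void dtrace_event(uint32_t now, bool patched)
{
	if (!patched) {
		/* old logic: every event pushes the deadline further out */
		work_reschedule(now);
	} else if (!rescheduled) {
		/* patched logic: only the first event may reschedule */
		work_reschedule(now);
		rescheduled = 1;
	}
}

static void run(bool patched)
{
	uint32_t now, runs = 0;

	deadline = RESCHEDULE_DELAY;
	rescheduled = 0;

	for (now = 0; now < TICKS; now++) {
		dtrace_event(now, patched);	/* trace arrives every tick */
		if (now >= deadline) {
			runs++;			/* trace_work finally ran */
			rescheduled = 0;	/* cleared in trace_work() */
			deadline = now + RESCHEDULE_DELAY;
		}
	}
	printf("%s logic: trace_work ran %u times in %u ticks\n",
	       patched ? "patched" : "old", runs, TICKS);
}

int main(void)
{
	run(false);	/* old: the deadline keeps moving, work never runs */
	run(true);	/* patched: the work runs at a steady cadence */
	return 0;
}

With the old logic the simulated trace_work never runs while events keep arriving; with the guard, at most one reschedule can be outstanding until trace_work itself clears the flag, so the copy work gets a chance to run.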
diff --git a/src/include/reef/dma-trace.h b/src/include/reef/dma-trace.h
index f17ffe5..5ef5eb2 100644
--- a/src/include/reef/dma-trace.h
+++ b/src/include/reef/dma-trace.h
@@ -63,6 +63,7 @@ struct dma_trace_data {
 	struct work dmat_work;
 	uint32_t enabled;
 	uint32_t copy_in_progress;
+	uint32_t rescheduled;
 	uint32_t stream_tag;
 	spinlock_t lock;
 };
diff --git a/src/lib/dma-trace.c b/src/lib/dma-trace.c
index 1ff2bd4..ef29026 100644
--- a/src/lib/dma-trace.c
+++ b/src/lib/dma-trace.c
@@ -128,6 +128,9 @@ out:
 	ipc_dma_trace_send_position();
 #endif
 
+	/* enable reschedule after one copy is done */
+	if (d->rescheduled)
+		d->rescheduled = 0;
 	/* reschedule the trace copying work */
 	return DMA_TRACE_PERIOD;
 }
@@ -137,6 +140,7 @@ int dma_trace_init_early(struct reef *reef)
 	struct dma_trace_buf *buffer;
 
 	trace_data = rzalloc(RZONE_SYS, SOF_MEM_CAPS_RAM, sizeof(*trace_data));
+	trace_data->rescheduled = 0;
 	buffer = &trace_data->dmatb;
 
 	/* allocate new buffer */
@@ -335,9 +339,12 @@ void dtrace_event(const char *e, uint32_t length)
 	spin_unlock_irq(&trace_data->lock, flags);
 
 	/* schedule copy now if buffer > 50% full */
-	if (trace_data->enabled && buffer->avail >= (DMA_TRACE_LOCAL_SIZE / 2))
+	if (trace_data->enabled && !trace_data->rescheduled &&
+	    buffer->avail >= (DMA_TRACE_LOCAL_SIZE / 2)) {
 		work_reschedule_default(&trace_data->dmat_work,
 			DMA_TRACE_RESCHEDULE_TIME);
+		trace_data->rescheduled = 1;
+	}
 }
 
 void dtrace_event_atomic(const char *e, uint32_t length)