
On 3/27/2018 6:21 PM, Xiuli Pan wrote:
From: Pan Xiuli <xiuli.pan@linux.intel.com>
The reschedule logic is wrong: we reschedule the trace_work every time the buffer is more than half full and a new trace arrives, so the old schedule keeps being pushed back before the previously scheduled trace_work ever gets to run. The trace_work ends up like a carrot dangled in front of a donkey: while the DSP is busy and trace events keep arriving continuously, the work is always scheduled for the future and never executed.
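To make the failure mode concrete, here is a minimal host-side sketch, not the firmware code: DELAY, EVENT_PERIOD and the loop are made up, and the pre-existing copy_in_progress handling in dtrace_event() is folded into a simple guard. Built with -DOLD_LOGIC it shows the starvation; built without it, the first reschedule is left alone and the copy work runs.

#include <stdio.h>

#define DELAY        500   /* stand-in for DMA_TRACE_RESCHEDULE_TIME */
#define EVENT_PERIOD 100   /* traces arrive faster than DELAY */

int main(void)
{
	unsigned now;
	unsigned deadline = DELAY;  /* when the copy work is due */
	int copy_in_progress = 0;   /* flag reused as reschedule guard */
	int runs = 0;

	for (now = 0; now < 10000; now += EVENT_PERIOD) {
		if (now >= deadline) {          /* the copy work finally runs */
			runs++;
			copy_in_progress = 0;   /* copying done */
		}

		/* dtrace_event(): buffer assumed > 50% full on every event */
#ifdef OLD_LOGIC
		deadline = now + DELAY;         /* always pushed out: never due */
#else
		if (!copy_in_progress) {        /* let the first reschedule stand */
			deadline = now + DELAY;
			copy_in_progress = 1;
		}
#endif
	}

	printf("copy work ran %d times\n", runs);
	return 0;
}

With -DOLD_LOGIC this prints that the work ran 0 times; without it the first reschedule is allowed to expire and the copy runs periodically.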
Signed-off-by: Pan Xiuli <xiuli.pan@linux.intel.com>
Reviewed-by: Zhigang Wu <zhigang.wu@linux.intel.com>
Reviewed-by: Yan Wang <yan.wang@linux.intel.com>
Tested-by: Zhigang Wu <zhigang.wu@linux.intel.com>
V2: reuse copy_in_progress as Yan suggested.
Test with: Mininow max rt5651 and GP-MRB nocodec and CNL nocodec
SOF 1.1-stable: 210989dffeea811de2370fccb7cf5d53106b1e6e
SOF-Tool 1.1-stable: 49c1b450e635ac0c893d67ff0ddfd34e03a85b46
https://github.com/plbossart/sound/tree/topic/sof-v4.14: 8d8c1bb32537800726b14d00d13f324b7f536386
 src/lib/dma-trace.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/src/lib/dma-trace.c b/src/lib/dma-trace.c
index 1ff2bd4..733435e 100644
--- a/src/lib/dma-trace.c
+++ b/src/lib/dma-trace.c
@@ -115,7 +115,7 @@ out:
 		buffer->avail -= size;
 	}
 
-	/* DMA trace copying is done */
+	/* DMA trace copying is done, allow reschedule */
 	d->copy_in_progress = 0;
 
 	spin_unlock_irq(&d->lock, flags);
@@ -137,6 +137,7 @@ int dma_trace_init_early(struct reef *reef)
 	struct dma_trace_buf *buffer;
 
 	trace_data = rzalloc(RZONE_SYS, SOF_MEM_CAPS_RAM, sizeof(*trace_data));
+	trace_data->rescheduled = 0;
This may be unnecessary. And does the copy_in_progress = 1 in trace_work() need to be removed? Thanks.
Yan Wang
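For the first point: rzalloc() (like kzalloc() in the kernel) returns zero-filled memory, so the freshly allocated trace_data members already start at 0 and the explicit assignment is redundant. A tiny stand-alone illustration, with calloc() standing in for rzalloc() and a made-up cut-down struct:

#include <stdio.h>
#include <stdlib.h>

/* made-up, cut-down stand-in for struct dma_trace_data */
struct trace_data_sim {
	int enabled;
	int copy_in_progress;
};

int main(void)
{
	/* calloc() stands in for rzalloc(): zero-filled allocation */
	struct trace_data_sim *d = calloc(1, sizeof(*d));

	if (!d)
		return 1;

	/* prints 0 without any explicit "d->copy_in_progress = 0;" */
	printf("copy_in_progress = %d\n", d->copy_in_progress);

	free(d);
	return 0;
}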
 	buffer = &trace_data->dmatb;
 
 	/* allocate new buffer */
@@ -335,9 +336,15 @@ void dtrace_event(const char *e, uint32_t length)
 	spin_unlock_irq(&trace_data->lock, flags);
 
 	/* schedule copy now if buffer > 50% full */
-	if (trace_data->enabled && buffer->avail >= (DMA_TRACE_LOCAL_SIZE / 2))
+	if (trace_data->enabled &&
+	    buffer->avail >= (DMA_TRACE_LOCAL_SIZE / 2)) {
 		work_reschedule_default(&trace_data->dmat_work,
 			DMA_TRACE_RESCHEDULE_TIME);
+
+		/* reschedule should not be interrupted */
+		/* just like we are in copy progress */
+		trace_data->copy_in_progress = 1;
+	}
 }
 
 void dtrace_event_atomic(const char *e, uint32_t length)
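Putting the hunks together, the intended protocol is roughly the following stand-alone sketch. Names and sizes are made up, it is single-threaded with no locking, and it assumes the existing early handling of copy_in_progress near the top of dtrace_event() (not visible in the hunks above), folded here into the if condition:

#include <stdio.h>

#define BUF_SIZE 128  /* made-up stand-in for DMA_TRACE_LOCAL_SIZE */

static int copy_in_progress;   /* the flag this patch reuses as a guard */
static unsigned buf_avail;     /* bytes waiting in the local buffer */

/* stand-in for trace_work(): the scheduled copy job */
static void trace_work_sim(void)
{
	copy_in_progress = 1;          /* copying, block new reschedules */
	printf("copy %u bytes out via DMA\n", buf_avail);
	buf_avail = 0;
	copy_in_progress = 0;          /* done, reschedule allowed again */
}

/* stand-in for dtrace_event(): producer side */
static void dtrace_event_sim(unsigned len)
{
	buf_avail += len;

	/*
	 * Assumed simplification: the real code reaches the same effect
	 * through an early return on copy_in_progress in dtrace_event().
	 */
	if (buf_avail >= BUF_SIZE / 2 && !copy_in_progress) {
		printf("reschedule copy work\n");
		copy_in_progress = 1;  /* reschedule must not be pushed out */
	}
}

int main(void)
{
	int i;

	/* many events in a row: only the first crossing reschedules */
	for (i = 0; i < 6; i++)
		dtrace_event_sim(32);

	trace_work_sim();              /* the scheduled work finally runs */
	return 0;
}

Only the first event that crosses the 50% mark reschedules the work; later events leave the schedule alone until trace_work() has actually run and cleared the flag.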