On 12/1/17 2:21 AM, yan.wang@linux.intel.com wrote:
From: Yan Wang <yan.wang@linux.intel.com>
If DMA trace copying is running, it is unnecessary to check whether the local DMA trace buffer is half full.
- Add one flag to the DMA trace data structure.
- Set/clear this flag in the trace_work() callback.
- Add a check for this flag in dtrace_event().
The commit message is really far from self-explanatory. Nothing explains under what circumstances the buffer can be half full, or whether that is good or bad. It seems the actual feature is that you avoid starting a new trace DMA transfer while an existing one is already underway, which is a fundamentally sound thing to do. However, that is conceptually unrelated to the decision of when the transfer occurs (the trigger could be 90% full or 10% full depending on the defined watermark, which should be configurable, by the way). The commit message should really be "do not check trace buffer fullness while a transfer is taking place".
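To make the watermark point concrete, the trigger could be expressed as a build-time percentage rather than a hard-coded 50%. A minimal sketch, purely illustrative (DMA_TRACE_WATERMARK_PERCENT and the helper name do not exist in the tree):

	#ifndef DMA_TRACE_WATERMARK_PERCENT
	#define DMA_TRACE_WATERMARK_PERCENT	50
	#endif

	/* returns nonzero when the local buffer is past the watermark */
	static inline int trace_past_watermark(uint32_t avail)
	{
		return avail * 100 >
			DMA_TRACE_LOCAL_SIZE * DMA_TRACE_WATERMARK_PERCENT;
	}

The caller would then schedule the copy only when trace_past_watermark() holds and no transfer is already in flight.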
Signed-off-by: Yan Wang <yan.wang@linux.intel.com>
 src/include/reef/dma-trace.h |  1 +
 src/lib/dma-trace.c          | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)
diff --git a/src/include/reef/dma-trace.h b/src/include/reef/dma-trace.h
index 641a4dc..21824f3 100644
--- a/src/include/reef/dma-trace.h
+++ b/src/include/reef/dma-trace.h
@@ -60,6 +60,7 @@ struct dma_trace_data {
 	uint32_t host_size;
 	struct work dmat_work;
 	uint32_t enabled;
+	uint32_t dmat_work_flag;
 	spinlock_t lock;
 };
diff --git a/src/lib/dma-trace.c b/src/lib/dma-trace.c
index 763e955..9dfe7c7 100644
--- a/src/lib/dma-trace.c
+++ b/src/lib/dma-trace.c
@@ -56,6 +56,9 @@ static uint64_t trace_work(void *data, uint64_t delay)
 	if (avail == 0)
 		return DMA_TRACE_US;
 
+	/* DMA trace copying is working */
+	d->dmat_work_flag = 1;
+
 	/* make sure we dont write more than buffer */
 	if (avail > DMA_TRACE_LOCAL_SIZE)
 		avail = DMA_TRACE_LOCAL_SIZE;
@@ -100,7 +103,12 @@ static uint64_t trace_work(void *data, uint64_t delay)
 
 out:
 	spin_lock_irq(&d->lock, flags);
+
 	buffer->avail -= size;
+
+	/* DMA trace copying is done */
+	d->dmat_work_flag = 0;
+
 	spin_unlock_irq(&d->lock, flags);
 
 	/* reschedule the trace copying work */
@@ -138,6 +146,7 @@ int dma_trace_init(struct dma_trace_data *d)
 	buffer->avail = 0;
 	d->host_offset = 0;
 	d->enabled = 0;
+	d->dmat_work_flag = 0;
 
 	list_init(&d->config.elem_list);
 	work_init(&d->dmat_work, trace_work, d, WORK_ASYNC);
@@ -226,6 +235,14 @@ void dtrace_event(const char *e, uint32_t length)
 	spin_lock_irq(&trace_data->lock, flags);
 	dtrace_add_event(e, length);
+
+	/* if DMA trace copying is working */
+	/* don't check if local buffer is half full */
+	if (trace_data->dmat_work_flag) {
+		spin_unlock_irq(&trace_data->lock, flags);
+		return;
+	}
 
 	spin_unlock_irq(&trace_data->lock, flags);
 
 	/* schedule copy now if buffer > 50% full */
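One small refinement worth considering: the flag is read under the lock, but the lock is dropped before the 50% check, so a copy could start in that window. The race looks harmless (scheduling while a copy runs is merely redundant), but taking the whole decision under one locked region would make the intent clearer. A sketch, where the buffer field name and the schedule_trace_copy() helper are hypothetical:

	spin_lock_irq(&trace_data->lock, flags);
	dtrace_add_event(e, length);

	/* decide under the lock: skip while a DMA copy is in flight
	 * (the dmat_buffer field name below is assumed, not from the patch)
	 */
	schedule = !trace_data->dmat_work_flag &&
		   trace_data->dmat_buffer.avail > DMA_TRACE_LOCAL_SIZE / 2;
	spin_unlock_irq(&trace_data->lock, flags);

	if (schedule)
		schedule_trace_copy(trace_data);	/* hypothetical helper */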