Sound-open-firmware

sound-open-firmware@alsa-project.org

December 2017

  • 11 participants
  • 66 discussions
[Sound-open-firmware] [PATCH resend 1/2] SRC: Bug fix for handling a deleted conversion
by Seppo Ingalsuo 06 Dec '17

This patch fixes a regression that caused SRC to try to initialize a mode that has been disabled from the in/out rates matrix. The feature exists to save table RAM for modes that are not required. The bug caused a divide by zero in the src_buffer_lengths() function.

Signed-off-by: Seppo Ingalsuo <seppo.ingalsuo(a)linux.intel.com>
---
 src/audio/src_core.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/src/audio/src_core.c b/src/audio/src_core.c
index dc22772..d8b9a3d 100644
--- a/src/audio/src_core.c
+++ b/src/audio/src_core.c
@@ -133,7 +133,6 @@ int src_buffer_lengths(struct src_param *a, int fs_in, int fs_out, int nch,
 {
         struct src_stage *stage1;
         struct src_stage *stage2;
-        int k;
         int q;
         int den;
         int num;
@@ -149,18 +148,25 @@ int src_buffer_lengths(struct src_param *a, int fs_in, int fs_out, int nch,
         a->idx_in = src_find_fs(src_in_fs, NUM_IN_FS, fs_in);
         a->idx_out = src_find_fs(src_out_fs, NUM_OUT_FS, fs_out);

-        /* Set blk_in, blk_out so that the muted fallback SRC keeps
-         * just source & sink in sync in pipeline without drift.
-         */
+        /* Check that both in and out rates are supported */
         if ((a->idx_in < 0) || (a->idx_out < 0)) {
-                k = gcd(fs_in, fs_out);
-                a->blk_in = fs_in / k;
-                a->blk_out = fs_out / k;
+                trace_src_error("us1");
+                tracev_value(fs_in);
+                tracev_value(fs_out);
                 return -EINVAL;
         }

         stage1 = src_table1[a->idx_out][a->idx_in];
         stage2 = src_table2[a->idx_out][a->idx_in];
+
+        /* Check from stage1 parameter for a deleted in/out rate combination.*/
+        if (stage1->filter_length < 1) {
+                trace_src_error("us2");
+                tracev_value(fs_in);
+                tracev_value(fs_out);
+                return -EINVAL;
+        }
+
         a->fir_s1 = nch * src_fir_delay_length(stage1);
         a->out_s1 = nch * src_out_delay_length(stage1);
--
2.11.0
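A conversion deleted from the rate matrix leaves a stripped, all-zero entry in the stage tables, so fields such as filter_length and the block sizes read as zero, and dividing by one of them is exactly the crash guarded against above. A minimal standalone sketch of that failure mode and the guard, using a simplified stand-in for src_stage rather than the SOF sources:

#include <stdio.h>

/* hypothetical, trimmed-down stage descriptor for illustration only */
struct src_stage {
        int filter_length;      /* 0 marks a conversion deleted from the table */
        int blk_in;             /* block size later used as a divisor */
};

static const struct src_stage deleted = { 0, 0 };       /* stripped table entry */
static const struct src_stage valid = { 48, 21 };

static int buffer_frames(const struct src_stage *s, int total)
{
        /* reject deleted entries first, as the patch does with -EINVAL */
        if (s->filter_length < 1)
                return -1;
        return total / s->blk_in;       /* safe only after the check above */
}

int main(void)
{
        printf("valid entry:   %d\n", buffer_frames(&valid, 168));
        printf("deleted entry: %d\n", buffer_frames(&deleted, 168));
        return 0;
}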
[Sound-open-firmware] [PATCH v4] Add support to replace stale stream/trace position updates.
by yan.wang@linux.intel.com 06 Dec '17

From: Yan Wang <yan.wang(a)linux.intel.com>

The host message queue can get long at times, so when a new IPC message such as the DMA trace host offset is pushed onto it, a previous IPC message of the same type may not have been sent yet. It is better to update the previous message of the same type than to send two messages: this both shortens the host message queue and delivers the latest information as soon as possible.

Implementation details:
1. Add a "replace" parameter to ipc_queue_host_message() to indicate whether to check for a duplicate message in the host message queue; the sender decides.
2. Add msg_find() to search for a duplicate message.
3. If the replace flag is set, search for and replace the duplicate message.
4. Enable the replace logic for the DMA trace host offset message.

Signed-off-by: Yan Wang <yan.wang(a)linux.intel.com>
---
 src/include/reef/ipc.h |  2 +-
 src/ipc/intel-ipc.c    | 45 ++++++++++++++++++++++++++++++++++-----------
 2 files changed, 35 insertions(+), 12 deletions(-)

diff --git a/src/include/reef/ipc.h b/src/include/reef/ipc.h
index fcab45b..bb814be 100644
--- a/src/include/reef/ipc.h
+++ b/src/include/reef/ipc.h
@@ -125,7 +125,7 @@ int ipc_stream_send_xrun(struct comp_dev *cdev,
 int ipc_queue_host_message(struct ipc *ipc, uint32_t header,
         void *tx_data, size_t tx_bytes, void *rx_data,
-        size_t rx_bytes, void (*cb)(void*, void*), void *cb_data);
+        size_t rx_bytes, void (*cb)(void*, void*), void *cb_data, uint32_t replace);
 int ipc_send_short_msg(uint32_t msg);

 void ipc_platform_do_cmd(struct ipc *ipc);
diff --git a/src/ipc/intel-ipc.c b/src/ipc/intel-ipc.c
index e13b541..391907e 100644
--- a/src/ipc/intel-ipc.c
+++ b/src/ipc/intel-ipc.c
@@ -367,7 +367,7 @@ int ipc_stream_send_position(struct comp_dev *cdev,
         posn->comp_id = cdev->comp.id;

         return ipc_queue_host_message(_ipc, posn->rhdr.hdr.cmd, posn,
-                sizeof(*posn), NULL, 0, NULL, NULL);
+                sizeof(*posn), NULL, 0, NULL, NULL, 1);
 }

 /* send stream position TODO: send compound message */
@@ -379,7 +379,7 @@ int ipc_stream_send_xrun(struct comp_dev *cdev,
         posn->comp_id = cdev->comp.id;

         return ipc_queue_host_message(_ipc, posn->rhdr.hdr.cmd, posn,
-                sizeof(*posn), NULL, 0, NULL, NULL);
+                sizeof(*posn), NULL, 0, NULL, NULL, 1);
 }

 static int ipc_stream_trigger(uint32_t header)
@@ -644,7 +644,7 @@ int ipc_dma_trace_send_position(void)
         posn.rhdr.hdr.size = sizeof(posn);

         return ipc_queue_host_message(_ipc, posn.rhdr.hdr.cmd, &posn,
-                sizeof(posn), NULL, 0, NULL, NULL);
+                sizeof(posn), NULL, 0, NULL, NULL, 1);
 }

 static int ipc_glb_debug_message(uint32_t header)
@@ -899,19 +899,40 @@ static inline struct ipc_msg *msg_get_empty(struct ipc *ipc)
         return msg;
 }

+static inline struct ipc_msg *msg_find(struct ipc *ipc, uint32_t header)
+{
+        struct list_item *plist;
+        struct ipc_msg *msg = NULL;
+
+        list_for_item(plist, &ipc->msg_list) {
+                msg = container_of(plist, struct ipc_msg, list);
+                if (msg->header == header)
+                        return msg;
+        }
+
+        return NULL;
+}
+
 int ipc_queue_host_message(struct ipc *ipc, uint32_t header,
         void *tx_data, size_t tx_bytes, void *rx_data,
-        size_t rx_bytes, void (*cb)(void*, void*), void *cb_data)
+        size_t rx_bytes, void (*cb)(void*, void*), void *cb_data, uint32_t replace)
 {
-        struct ipc_msg *msg;
-        uint32_t flags;
+        struct ipc_msg *msg = NULL;
+        uint32_t flags, found = 0;
         int ret = 0;

         spin_lock_irq(&ipc->lock, flags);

-        /* get a free message */
-        msg = msg_get_empty(ipc);
+        /* do we need to replace an existing message? */
+        if (replace)
+                msg = msg_find(ipc, header);
+
+        /* do we need to use a new empty message? */
+        if (msg)
+                found = 1;
+        else
+                msg = msg_get_empty(ipc);
+
         if (msg == NULL) {
                 trace_ipc_error("eQb");
                 ret = -EBUSY;
@@ -929,9 +950,11 @@ int ipc_queue_host_message(struct ipc *ipc, uint32_t header,
         if (tx_bytes > 0 && tx_bytes < SOF_IPC_MSG_MAX_SIZE)
                 rmemcpy(msg->tx_data, tx_data, tx_bytes);

-        /* now queue the message */
-        ipc->dsp_pending = 1;
-        list_item_append(&msg->list, &ipc->msg_list);
+        if (!found) {
+                /* now queue the message */
+                ipc->dsp_pending = 1;
+                list_item_append(&msg->list, &ipc->msg_list);
+        }

 out:
         spin_unlock_irq(&ipc->lock, flags);
--
2.7.4
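Stripped of the IPC specifics, the replace path is "find a pending message with the same header and overwrite its payload in place instead of appending a second one". A self-contained sketch of that idea on a plain singly linked list; the struct, names and head pointer are illustrative only, and the real code does the find-and-update under a spin lock:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct pending_msg {
        uint32_t header;
        uint8_t payload[64];
        struct pending_msg *next;
};

static struct pending_msg *queue;       /* head of the pending queue */

/* return a queued message with a matching header, or NULL */
static struct pending_msg *msg_find(uint32_t header)
{
        struct pending_msg *m;

        for (m = queue; m; m = m->next)
                if (m->header == header)
                        return m;
        return NULL;
}

/* queue a message; if replace is set and one with the same header is already
 * pending, update it in place so only the latest payload is sent */
static int queue_msg(uint32_t header, const void *data, size_t len, int replace)
{
        struct pending_msg *m = replace ? msg_find(header) : NULL;
        int found = m != NULL;

        if (!m) {
                m = calloc(1, sizeof(*m));
                if (!m)
                        return -1;
        }
        m->header = header;
        memcpy(m->payload, data,
               len < sizeof(m->payload) ? len : sizeof(m->payload));

        if (!found) {           /* only append when it was not already queued */
                m->next = queue;
                queue = m;
        }
        return 0;
}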
[Sound-open-firmware] [PATCH] configure.ac: add CONFIG_HOST_PTABLE flag for platforms which need to handle it
by Keyon Jie 06 Dec '17

We only need to handle host page tables on platforms where the firmware programs the DMA host buffer (address/size) itself; on other platforms the host driver programs these settings and does not pass in page tables. So add a CONFIG_HOST_PTABLE flag to configure this per platform; on Baytrail and Cherrytrail, CONFIG_HOST_PTABLE needs to be selected.

Signed-off-by: Keyon Jie <yang.jie(a)linux.intel.com>
---
 configure.ac        | 2 ++
 src/ipc/intel-ipc.c | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/configure.ac b/configure.ac
index 093d0b4..e437d06 100644
--- a/configure.ac
+++ b/configure.ac
@@ -82,6 +82,7 @@ case "$with_platform" in

                 AC_DEFINE([CONFIG_BAYTRAIL], [1], [Configure for Baytrail])
                 AC_DEFINE([CONFIG_DMA_TRACE], [1], [Configure DMA trace])
+                AC_DEFINE([CONFIG_HOST_PTABLE], [1], [Configure handling host page table])
         ;;
         cherrytrail*)
@@ -99,6 +100,7 @@ case "$with_platform" in

                 AC_DEFINE([CONFIG_CHERRYTRAIL], [1], [Configure for Cherrytrail])
                 AC_DEFINE([CONFIG_DMA_TRACE], [1], [Configure DMA trace])
+                AC_DEFINE([CONFIG_HOST_PTABLE], [1], [Configure handling host page table])
         ;;
         *)
                 AC_MSG_ERROR([Host platform not specified])
diff --git a/src/ipc/intel-ipc.c b/src/ipc/intel-ipc.c
index 4c310b6..e39951a 100644
--- a/src/ipc/intel-ipc.c
+++ b/src/ipc/intel-ipc.c
@@ -82,6 +82,7 @@ static inline struct sof_ipc_hdr *mailbox_validate(void)
         return hdr;
 }

+#ifdef CONFIG_HOST_PTABLE
 static void dma_complete(void *data, uint32_t type, struct dma_sg_elem *next)
 {
         struct intel_ipc_data *iipc = (struct intel_ipc_data *)data;
@@ -219,6 +220,7 @@ static int parse_page_descriptors(struct intel_ipc_data *iipc,

         return 0;
 }
+#endif

 /*
  * Stream IPC Operations.
@@ -227,7 +229,9 @@ static int parse_page_descriptors(struct intel_ipc_data *iipc,
 /* allocate a new stream */
 static int ipc_stream_pcm_params(uint32_t stream)
 {
+#ifdef CONFIG_HOST_PTABLE
         struct intel_ipc_data *iipc = ipc_get_drvdata(_ipc);
+#endif
         struct sof_ipc_pcm_params *pcm_params = _ipc->comp_data;
         struct sof_ipc_pcm_params_reply reply;
         struct ipc_comp_dev *pcm_dev;
@@ -255,6 +259,7 @@ static int ipc_stream_pcm_params(uint32_t stream)
         cd = pcm_dev->cd;
         cd->params = pcm_params->params;

+#ifdef CONFIG_HOST_PTABLE
         /* use DMA to read in compressed page table ringbuffer from host */
         err = get_page_descriptors(iipc, &pcm_params->params.buffer);
         if (err < 0) {
@@ -269,6 +274,7 @@ static int ipc_stream_pcm_params(uint32_t stream)
                 trace_ipc_error("eAP");
                 goto error;
         }
+#endif

         /* configure pipeline audio params */
         err = pipeline_params(pcm_dev->cd->pipeline, pcm_dev->cd, pcm_params);
--
2.11.0
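Each AC_DEFINE ends up as a preprocessor symbol in the generated config header, so the page-table path compiles away entirely on platforms that do not select it. A schematic illustration of the pattern; the function and file contents below are placeholders, not the firmware's own code:

/* config.h, generated by configure for Baytrail/Cherrytrail builds, would
 * contain: #define CONFIG_HOST_PTABLE 1
 */

#ifdef CONFIG_HOST_PTABLE
static int map_host_pages(void)
{
        /* only built where the firmware programs the host DMA buffer itself */
        return 0;
}
#endif

int stream_params(void)
{
#ifdef CONFIG_HOST_PTABLE
        /* page-table handling is compiled in only when selected */
        if (map_host_pages() < 0)
                return -1;
#endif
        return 0;
}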
[Sound-open-firmware] [PATCH] configure.ac: add CONFIG_DMA_TRACE flag for DMA trace feature
by Keyon Jie 06 Dec '17

We only support the DMA trace feature on Baytrail and Cherrytrail at the moment; if the CONFIG_DMA_TRACE flag is not defined, the traditional mailbox trace is used instead.

Signed-off-by: Keyon Jie <yang.jie(a)linux.intel.com>
---
 configure.ac        |  2 ++
 src/ipc/dma-copy.c  |  2 ++
 src/ipc/intel-ipc.c |  4 ++++
 src/lib/trace.c     | 58 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 66 insertions(+)

diff --git a/configure.ac b/configure.ac
index 8946c36..093d0b4 100644
--- a/configure.ac
+++ b/configure.ac
@@ -81,6 +81,7 @@ case "$with_platform" in
                 AC_SUBST(XTENSA_CORE)

                 AC_DEFINE([CONFIG_BAYTRAIL], [1], [Configure for Baytrail])
+                AC_DEFINE([CONFIG_DMA_TRACE], [1], [Configure DMA trace])
         ;;
         cherrytrail*)
@@ -97,6 +98,7 @@ case "$with_platform" in
                 AC_SUBST(XTENSA_CORE)

                 AC_DEFINE([CONFIG_CHERRYTRAIL], [1], [Configure for Cherrytrail])
+                AC_DEFINE([CONFIG_DMA_TRACE], [1], [Configure DMA trace])
         ;;
         *)
                 AC_MSG_ERROR([Host platform not specified])
diff --git a/src/ipc/dma-copy.c b/src/ipc/dma-copy.c
index 8565966..57d3e3d 100644
--- a/src/ipc/dma-copy.c
+++ b/src/ipc/dma-copy.c
@@ -77,7 +77,9 @@ static void dma_complete(void *data, uint32_t type, struct dma_sg_elem *next)
         if (type == DMA_IRQ_TYPE_LLIST)
                 wait_completed(comp);

+#if defined(CONFIG_DMA_TRACE)
         ipc_dma_trace_send_position();
+#endif

         next->size = DMA_RELOAD_END;
 }
diff --git a/src/ipc/intel-ipc.c b/src/ipc/intel-ipc.c
index e13b541..4c310b6 100644
--- a/src/ipc/intel-ipc.c
+++ b/src/ipc/intel-ipc.c
@@ -585,6 +585,7 @@ static int ipc_glb_pm_message(uint32_t header)
         }
 }

+#if defined(CONFIG_DMA_TRACE)
 /*
  * Debug IPC Operations.
  */
@@ -662,6 +663,7 @@ static int ipc_glb_debug_message(uint32_t header)
                 return -EINVAL;
         }
 }
+#endif

 /*
  * Topology IPC Operations.
@@ -877,8 +879,10 @@ int ipc_cmd(void)
                 return ipc_glb_stream_message(hdr->cmd);
         case iGS(SOF_IPC_GLB_DAI_MSG):
                 return ipc_glb_dai_message(hdr->cmd);
+#if defined(CONFIG_DMA_TRACE)
         case iGS(SOF_IPC_GLB_TRACE_MSG):
                 return ipc_glb_debug_message(hdr->cmd);
+#endif
         default:
                 trace_ipc_error("eGc");
                 trace_value(type);
diff --git a/src/lib/trace.c b/src/lib/trace.c
index eec93b5..bb81d47 100644
--- a/src/lib/trace.c
+++ b/src/lib/trace.c
@@ -111,6 +111,8 @@ void _trace_error_atomic(uint32_t event)
         dcache_writeback_region((void*)t, sizeof(uint64_t) * 2);
 }

+#if defined(CONFIG_DMA_TRACE)
+
 void _trace_event(uint32_t event)
 {
         uint64_t dt[2];
@@ -135,6 +137,62 @@ void _trace_event_atomic(uint32_t event)
         dtrace_event_atomic((const char*)dt, sizeof(uint64_t) * 2);
 }

+#else
+
+void _trace_event(uint32_t event)
+{
+        unsigned long flags;
+        uint64_t time, *t;
+
+        if (!trace.enable)
+                return;
+
+        time = platform_timer_get(platform_timer);
+
+        /* send event by mail box too. */
+        spin_lock_irq(&trace.lock, flags);
+
+        /* write timestamp and event to trace buffer */
+        t = (uint64_t *)(MAILBOX_TRACE_BASE + trace.pos);
+        trace.pos += (sizeof(uint64_t) << 1);
+
+        if (trace.pos > MAILBOX_TRACE_SIZE - sizeof(uint64_t) * 2)
+                trace.pos = 0;
+
+        spin_unlock_irq(&trace.lock, flags);
+
+        t[0] = time;
+        t[1] = event;
+
+        /* writeback trace data */
+        dcache_writeback_region((void *)t, sizeof(uint64_t) * 2);
+}
+
+void _trace_event_atomic(uint32_t event)
+{
+        uint64_t time, *t;
+
+        if (!trace.enable)
+                return;
+
+        time = platform_timer_get(platform_timer);
+
+        /* write timestamp and event to trace buffer */
+        t = (uint64_t *)(MAILBOX_TRACE_BASE + trace.pos);
+        trace.pos += (sizeof(uint64_t) << 1);
+
+        if (trace.pos > MAILBOX_TRACE_SIZE - sizeof(uint64_t) * 2)
+                trace.pos = 0;
+
+        t[0] = time;
+        t[1] = event;
+
+        /* writeback trace data */
+        dcache_writeback_region((void *)t, sizeof(uint64_t) * 2);
+}
+
+#endif
+
 void trace_off(void)
 {
         trace.enable = 0;
--
2.11.0
[Sound-open-firmware] [PATCH v2] Use atomic API without spin lock for trace_error().
by yan.wang@linux.intel.com 06 Dec '17

From: Yan Wang <yan.wang(a)linux.intel.com>

When trace_error() is used to save error information into the trace buffer, the firmware may not be in a normal state and some spin lock may still be held. It may therefore deadlock if trace_error() still uses the non-atomic API, which takes a spin lock.

Signed-off-by: Yan Wang <yan.wang(a)linux.intel.com>
---
 src/include/reef/trace.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/include/reef/trace.h b/src/include/reef/trace.h
index ef82d75..9497f98 100644
--- a/src/include/reef/trace.h
+++ b/src/include/reef/trace.h
@@ -128,7 +128,7 @@ void trace_init(struct reef * reef);
 /* error tracing */
 #if TRACEE
 #define trace_error(__c, __e) \
-        _trace_error(__c | (__e[0] << 16) | (__e[1] <<8) | __e[2])
+        _trace_error_atomic(__c | (__e[0] << 16) | (__e[1] <<8) | __e[2])
 #define trace_error_atomic(__c, __e) \
         _trace_error_atomic(__c | (__e[0] << 16) | (__e[1] <<8) | __e[2])
 #else
--
2.7.4
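The hazard is the classic one: an error path that tries to take a lock the failing code may already hold. A compact illustration with a plain flag standing in for the spin lock; none of this is the SOF trace implementation, it only shows why the atomic (lock-free) variant is safe in error context:

#include <stdio.h>

static int trace_lock_held;     /* stands in for the trace spin lock */

static void trace_error_locked(const char *msg)
{
        /* non-atomic variant: would spin forever if the lock is already held */
        if (trace_lock_held) {
                printf("DEADLOCK: lock already held while logging \"%s\"\n", msg);
                return;
        }
        trace_lock_held = 1;
        printf("trace: %s\n", msg);
        trace_lock_held = 0;
}

static void trace_error_atomic_like(const char *msg)
{
        /* atomic variant: writes without taking the lock, safe in error paths */
        printf("trace (atomic): %s\n", msg);
}

int main(void)
{
        trace_lock_held = 1;            /* error hit while the lock was held */
        trace_error_locked("eX1");      /* the hazard the patch removes */
        trace_error_atomic_like("eX1"); /* the behaviour after the patch */
        return 0;
}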
[Sound-open-firmware] [PATCH 1/2] byt: clk: fix 38.4 MHz CPU clock support
by Pierre-Louis Bossart 06 Dec '17

38.4 MHz is not available; replace it with 50 MHz as documented in the HAS.

Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart(a)linux.intel.com>
---
 src/platform/baytrail/clk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/platform/baytrail/clk.c b/src/platform/baytrail/clk.c
index ce40052..b972aed 100644
--- a/src/platform/baytrail/clk.c
+++ b/src/platform/baytrail/clk.c
@@ -69,7 +69,7 @@ static struct clk_pdata *clk_pdata;
 static const struct freq_table cpu_freq[] = {
         {25000000, 25, 0x0},
         {25000000, 25, 0x1},
-        {38400000, 50, 0x2},
+        {50000000, 50, 0x2},
         {50000000, 50, 0x3},        /* default */
         {100000000, 100, 0x4},
         {200000000, 200, 0x5},
--
2.14.1
[Sound-open-firmware] [PATCH] dw-dma: add handle for external layer 2 interrupt
by Keyon Jie 06 Dec '17

On some platforms (those with CONFIG_IRQ_MAP configured) the DW DMAC interrupts are mapped to external layer 2 interrupt numbers, with a different number for each channel. Add handling for this case:
1. register an interrupt handler for each channel;
2. each handler only needs to take care of its specific channel.

Signed-off-by: Keyon Jie <yang.jie(a)linux.intel.com>
---
 src/drivers/dw-dma.c   | 238 +++++++++++++++++++++++++++++++++++++------------
 src/include/reef/dma.h |   6 ++
 2 files changed, 187 insertions(+), 57 deletions(-)

diff --git a/src/drivers/dw-dma.c b/src/drivers/dw-dma.c
index b288049..fe229e1 100644
--- a/src/drivers/dw-dma.c
+++ b/src/drivers/dw-dma.c
@@ -711,6 +711,186 @@ static inline void dw_dma_chan_reload_next(struct dma *dma, int channel,
         dw_write(dma, DW_DMA_CHAN_EN, CHAN_ENABLE(channel));
 }

+static void dw_dma_setup(struct dma *dma)
+{
+        struct dw_drv_plat_data *dp = dma->plat_data.drv_plat_data;
+        int i;
+
+        /* we cannot config DMAC if DMAC has been already enabled by host */
+        if (dw_read(dma, DW_DMA_CFG) != 0)
+                dw_write(dma, DW_DMA_CFG, 0x0);
+
+        /* now check that it's 0 */
+        for (i = DW_DMA_CFG_TRIES; i > 0; i--) {
+                if (dw_read(dma, DW_DMA_CFG) == 0)
+                        goto found;
+        }
+        trace_dma_error("eDs");
+        return;
+
+found:
+        for (i = 0; i < DW_MAX_CHAN; i++)
+                dw_read(dma, DW_DMA_CHAN_EN);
+
+#ifdef HAVE_HDDA
+        /* enable HDDA before DMAC */
+        shim_write(SHIM_HMDC, SHIM_HMDC_HDDA_ALLCH);
+#endif
+
+        /* enable the DMA controller */
+        dw_write(dma, DW_DMA_CFG, 1);
+
+        /* mask all interrupts for all 8 channels */
+        dw_write(dma, DW_MASK_TFR, INT_MASK_ALL);
+        dw_write(dma, DW_MASK_BLOCK, INT_MASK_ALL);
+        dw_write(dma, DW_MASK_SRC_TRAN, INT_MASK_ALL);
+        dw_write(dma, DW_MASK_DST_TRAN, INT_MASK_ALL);
+        dw_write(dma, DW_MASK_ERR, INT_MASK_ALL);
+
+#ifdef DW_FIFO_PARTITION
+        /* TODO: we cannot config DMA FIFOs if DMAC has been already */
+        /* allocate FIFO partitions, 128 bytes for each ch */
+        dw_write(dma, DW_FIFO_PART1_LO, 0x100080);
+        dw_write(dma, DW_FIFO_PART1_HI, 0x100080);
+        dw_write(dma, DW_FIFO_PART0_HI, 0x100080);
+        dw_write(dma, DW_FIFO_PART0_LO, 0x100080 | (1 << 26));
+        dw_write(dma, DW_FIFO_PART0_LO, 0x100080);
+#endif
+
+        /* set channel priorities */
+        for (i = 0; i < DW_MAX_CHAN; i++) {
+#if defined CONFIG_BAYTRAIL || defined CONFIG_CHERRYTRAIL
+                dw_write(dma, DW_CTRL_HIGH(i),
+                        DW_CTLH_CLASS(dp->chan[i].class));
+#else
+                dw_write(dma, DW_CFG_LOW(i), DW_CFG_CLASS(dp->chan[i].class));
+#endif
+        }
+
+}
+
+#ifdef CONFIG_IRQ_MAP
+/* external layer 2 interrupt for dmac */
+static void dw_dma_irq_handler(void *data)
+{
+        struct dma_int *dma_int = (struct dma_int *)data;
+        struct dma *dma = dma_int->dma;
+        struct dma_pdata *p = dma_get_drvdata(dma);
+        struct dma_sg_elem next;
+        uint32_t status_tfr = 0, status_block = 0, status_err = 0, status_intr;
+        uint32_t mask;
+        int i = dma_int->channel;
+
+        status_intr = dw_read(dma, DW_INTR_STATUS);
+        if (!status_intr)
+                trace_dma_error("eDI");
+
+        trace_dma("irq");
+
+        /* get the source of our IRQ. */
+        status_block = dw_read(dma, DW_STATUS_BLOCK);
+        status_tfr = dw_read(dma, DW_STATUS_TFR);
+
+        /* TODO: handle errors, just clear them atm */
+        status_err = dw_read(dma, DW_STATUS_ERR);
+        if (status_err) {
+                trace_dma_error("eDi");
+                dw_write(dma, DW_CLEAR_ERR, status_err & i);
+        }
+
+        /* clear interrupts for channel*/
+        dw_write(dma, DW_CLEAR_BLOCK, status_block);
+        dw_write(dma, DW_CLEAR_TFR, status_tfr);
+
+        /* skip if channel is not running */
+        if (p->chan[i].status != COMP_STATE_ACTIVE) {
+                trace_dma_error("eDs");
+                return;
+        }
+
+        mask = 0x1 << i;
+
+        /* end of a transfer */
+        if ((status_tfr & mask) &&
+                (p->chan[i].cb_type & DMA_IRQ_TYPE_LLIST)) {
+                trace_value(status_tfr);
+
+                next.src = next.dest = DMA_RELOAD_LLI;
+                next.size = DMA_RELOAD_LLI; /* will reload lli by default */
+                if (p->chan[i].cb)
+                        p->chan[i].cb(p->chan[i].cb_data,
+                                DMA_IRQ_TYPE_LLIST, &next);
+
+                /* check for reload channel:
+                 * next.size is DMA_RELOAD_END, stop this dma copy;
+                 * next.size > 0 but not DMA_RELOAD_LLI, use next
+                 * element for next copy;
+                 * if we are waiting for pause, pause it;
+                 * otherwise, reload lli
+                 */
+                switch (next.size) {
+                case DMA_RELOAD_END:
+                        p->chan[i].status = COMP_STATE_PREPARE;
+                        break;
+                case DMA_RELOAD_LLI:
+                        /* reload lli, but let's check if it is paused */
+                        if (p->chan[i].status != COMP_STATE_PAUSED)
+                                dw_dma_chan_reload_lli(dma, i);
+                        break;
+                default:
+                        dw_dma_chan_reload_next(dma, i, &next);
+                        break;
+                }
+        }
+#if DW_USE_HW_LLI
+        /* end of a LLI block */
+        if (status_block & mask &&
+                p->chan[i].cb_type & DMA_IRQ_TYPE_BLOCK) {
+                p->chan[i].cb(p->chan[i].cb_data,
+                        DMA_IRQ_TYPE_BLOCK);
+        }
+#endif
+}
+
+static int dw_dma_probe(struct dma *dma)
+{
+        struct dma_int *dma_int[DW_MAX_CHAN];
+        struct dma_pdata *dw_pdata;
+        int i;
+
+        /* allocate private data */
+        dw_pdata = rzalloc(RZONE_SYS, RFLAGS_NONE, sizeof(*dw_pdata));
+        dma_set_drvdata(dma, dw_pdata);
+
+        spinlock_init(&dma->lock);
+
+        dw_dma_setup(dma);
+
+        /* init work */
+        for (i = 0; i < dma->plat_data.channels; i++) {
+                dw_pdata->chan[i].dma = dma;
+                dw_pdata->chan[i].channel = i;
+                dw_pdata->chan[i].status = COMP_STATE_INIT;
+
+                dma_int[i] = rzalloc(RZONE_SYS, RFLAGS_NONE,
+                        sizeof(struct dma_int));
+
+                dma_int[i]->dma = dma;
+                dma_int[i]->channel = i;
+                dma_int[i]->irq = dma->plat_data.irq +
+                        (i << REEF_IRQ_BIT_SHIFT);
+
+                /* register our IRQ handler */
+                interrupt_register(dma_int[i]->irq,
+                        dw_dma_irq_handler, dma_int[i]);
+                interrupt_enable(dma_int[i]->irq);
+
+        }
+
+        return 0;
+}
+
+#else
 /* this will probably be called at the end of every period copied */
 static void dw_dma_irq_handler(void *data)
 {
@@ -805,63 +985,6 @@ static void dw_dma_irq_handler(void *data)
         }
 }

-static void dw_dma_setup(struct dma *dma)
-{
-        struct dw_drv_plat_data *dp = dma->plat_data.drv_plat_data;
-        int i;
-
-        /* we cannot config DMAC if DMAC has been already enabled by host */
-        if (dw_read(dma, DW_DMA_CFG) != 0)
-                dw_write(dma, DW_DMA_CFG, 0x0);
-
-        /* now check that it's 0 */
-        for (i = DW_DMA_CFG_TRIES; i > 0; i--) {
-                if (dw_read(dma, DW_DMA_CFG) == 0)
-                        goto found;
-        }
-        trace_dma_error("eDs");
-        return;
-
-found:
-        for (i = 0; i < DW_MAX_CHAN; i++)
-                dw_read(dma, DW_DMA_CHAN_EN);
-
-#ifdef HAVE_HDDA
-        /* enable HDDA before DMAC */
-        shim_write(SHIM_HMDC, SHIM_HMDC_HDDA_ALLCH);
-#endif
-
-        /* enable the DMA controller */
-        dw_write(dma, DW_DMA_CFG, 1);
-
-        /* mask all interrupts for all 8 channels */
-        dw_write(dma, DW_MASK_TFR, INT_MASK_ALL);
-        dw_write(dma, DW_MASK_BLOCK, INT_MASK_ALL);
-        dw_write(dma, DW_MASK_SRC_TRAN, INT_MASK_ALL);
-        dw_write(dma, DW_MASK_DST_TRAN, INT_MASK_ALL);
-        dw_write(dma, DW_MASK_ERR, INT_MASK_ALL);
-
-#ifdef DW_FIFO_PARTITION
-        /* TODO: we cannot config DMA FIFOs if DMAC has been already */
-        /* allocate FIFO partitions, 128 bytes for each ch */
-        dw_write(dma, DW_FIFO_PART1_LO, 0x100080);
-        dw_write(dma, DW_FIFO_PART1_HI, 0x100080);
-        dw_write(dma, DW_FIFO_PART0_HI, 0x100080);
-        dw_write(dma, DW_FIFO_PART0_LO, 0x100080 | (1 << 26));
-        dw_write(dma, DW_FIFO_PART0_LO, 0x100080);
-#endif
-
-        /* set channel priorities */
-        for (i = 0; i < DW_MAX_CHAN; i++) {
-#if defined CONFIG_BAYTRAIL || defined CONFIG_CHERRYTRAIL
-                dw_write(dma, DW_CTRL_HIGH(i), DW_CTLH_CLASS(dp->chan[i].class));
-#else
-                dw_write(dma, DW_CFG_LOW(i), DW_CFG_CLASS(dp->chan[i].class));
-#endif
-        }
-
-}
-
 static int dw_dma_probe(struct dma *dma)
 {
         struct dma_pdata *dw_pdata;
@@ -888,6 +1011,7 @@ static int dw_dma_probe(struct dma *dma)

         return 0;
 }
+#endif

 const struct dma_ops dw_dma_ops = {
         .channel_get = dw_dma_channel_get,
diff --git a/src/include/reef/dma.h b/src/include/reef/dma.h
index 697e2c6..fc298e6 100644
--- a/src/include/reef/dma.h
+++ b/src/include/reef/dma.h
@@ -127,6 +127,12 @@ struct dma {
         void *private;
 };

+struct dma_int {
+        struct dma *dma;
+        uint32_t channel;
+        uint32_t irq;
+};
+
 struct dma *dma_get(int dmac_id);

 #define dma_set_drvdata(dma, data) \
--
2.11.0
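The core of the CONFIG_IRQ_MAP probe path is that each channel gets its own interrupt number and its own small context, so the handler that fires already knows which channel to service. A stripped-down sketch of that registration pattern with placeholder types and a stubbed interrupt API, not the real driver interfaces:

#include <stdint.h>
#include <stdlib.h>

#define NUM_CHAN 8

struct chan_ctx {
        int channel;
        uint32_t irq;
};

/* stand-in for the platform's interrupt registration call */
static void register_irq(uint32_t irq, void (*handler)(void *), void *arg)
{
        (void)irq; (void)handler; (void)arg;
}

static void chan_irq_handler(void *data)
{
        struct chan_ctx *ctx = data;

        /* only this channel's status needs handling here */
        (void)ctx->channel;
}

static int probe(uint32_t base_irq, int irq_stride_shift)
{
        int i;

        for (i = 0; i < NUM_CHAN; i++) {
                struct chan_ctx *ctx = calloc(1, sizeof(*ctx));

                if (!ctx)
                        return -1;
                ctx->channel = i;
                /* per-channel number: base + (channel << shift), as in the patch */
                ctx->irq = base_irq + ((uint32_t)i << irq_stride_shift);
                register_irq(ctx->irq, chan_irq_handler, ctx);
        }
        return 0;
}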
[Sound-open-firmware] [PATCH v3 2/2] Do not check trace buffer fullness while a
by yan.wang@linux.intel.com 05 Dec '17

From: Yan Wang <yan.wang(a)linux.intel.com>

The purpose of checking half fullness is to send trace data as soon as possible; it avoids the local DMA trace buffer filling up and unsent trace data being overwritten. If a DMA trace copy is already running, there is no need to check whether the local DMA trace buffer is half full.

Implementation details:
1. Add a flag to the DMA trace data structure.
2. Set/clear this flag in the trace_work() callback.
3. Check the flag in dtrace_event().

Signed-off-by: Yan Wang <yan.wang(a)linux.intel.com>
---
 src/include/reef/dma-trace.h |  1 +
 src/lib/dma-trace.c          | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/src/include/reef/dma-trace.h b/src/include/reef/dma-trace.h
index 641a4dc..21824f3 100644
--- a/src/include/reef/dma-trace.h
+++ b/src/include/reef/dma-trace.h
@@ -60,6 +60,7 @@ struct dma_trace_data {
         uint32_t host_size;
         struct work dmat_work;
         uint32_t enabled;
+        uint32_t copy_in_progress;
         spinlock_t lock;
 };

diff --git a/src/lib/dma-trace.c b/src/lib/dma-trace.c
index 763e955..9dfe7c7 100644
--- a/src/lib/dma-trace.c
+++ b/src/lib/dma-trace.c
@@ -56,6 +56,9 @@ static uint64_t trace_work(void *data, uint64_t delay)
         if (avail == 0)
                 return DMA_TRACE_US;

+        /* DMA trace copying is working */
+        d->copy_in_progress = 1;
+
         /* make sure we dont write more than buffer */
         if (avail > DMA_TRACE_LOCAL_SIZE)
                 avail = DMA_TRACE_LOCAL_SIZE;
@@ -100,7 +103,12 @@ static uint64_t trace_work(void *data, uint64_t delay)

 out:
         spin_lock_irq(&d->lock, flags);
+
         buffer->avail -= size;
+
+        /* DMA trace copying is done */
+        d->copy_in_progress = 0;
+
         spin_unlock_irq(&d->lock, flags);

         /* reschedule the trace copying work */
@@ -138,6 +146,7 @@ int dma_trace_init(struct dma_trace_data *d)
         buffer->avail = 0;
         d->host_offset = 0;
         d->enabled = 0;
+        d->copy_in_progress = 0;
         list_init(&d->config.elem_list);

         work_init(&d->dmat_work, trace_work, d, WORK_ASYNC);
@@ -226,6 +235,14 @@ void dtrace_event(const char *e, uint32_t length)

         spin_lock_irq(&trace_data->lock, flags);
         dtrace_add_event(e, length);
+
+        /* if DMA trace copying is working */
+        /* don't check if local buffer is half full */
+        if (trace_data->copy_in_progress) {
+                spin_unlock_irq(&trace_data->lock, flags);
+                return;
+        }
+
         spin_unlock_irq(&trace_data->lock, flags);

         /* schedule copy now if buffer > 50% full */
--
2.7.4
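The flag is simply a "copy is already draining the buffer" latch that the producer checks, under the same lock that protects the buffer, before deciding to kick an early copy. A minimal sketch of that producer-side decision with simplified fields and no locking or DMA shown (the 50% threshold is kept from the description above):

#include <stdint.h>

struct trace_buf {
        uint32_t avail;                 /* bytes waiting to be copied out */
        uint32_t size;                  /* total local buffer size */
        uint32_t copy_in_progress;      /* set by the copy worker, cleared when done */
};

/* returns 1 when the producer should schedule an immediate copy */
static int should_schedule_copy(const struct trace_buf *b)
{
        /* a copy is already draining the buffer: nothing to schedule */
        if (b->copy_in_progress)
                return 0;

        /* otherwise copy early once the local buffer is more than half full */
        return b->avail > b->size / 2;
}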
[Sound-open-firmware] [PATCH v2] byt: clk: Fix clock lookup table
by Liam Girdwood 05 Dec '17

The Baytrail clock lookup table has wrong MHz values for the XTAL entries. Fix them.

Signed-off-by: Liam Girdwood <liam.r.girdwood(a)linux.intel.com>
---
 src/platform/baytrail/clk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/platform/baytrail/clk.c b/src/platform/baytrail/clk.c
index eac64c3..ce40052 100644
--- a/src/platform/baytrail/clk.c
+++ b/src/platform/baytrail/clk.c
@@ -67,8 +67,8 @@ static struct clk_pdata *clk_pdata;
 #if defined CONFIG_BAYTRAIL
 /* increasing frequency order */
 static const struct freq_table cpu_freq[] = {
-        {19200000, 25, 0x0},
-        {19200000, 25, 0x1},
+        {25000000, 25, 0x0},
+        {25000000, 25, 0x1},
         {38400000, 50, 0x2},
         {50000000, 50, 0x3},        /* default */
         {100000000, 100, 0x4},
--
2.14.1
[Sound-open-firmware] [PATCH] byt: clk: Fix clock lookup table
by Liam Girdwood 05 Dec '17

The Baytrail clock lookup table has wrong ticks-per-MHz values for the 19.2 MHz entries. Fix them.

Signed-off-by: Liam Girdwood <liam.r.girdwood(a)linux.intel.com>
---
 src/platform/baytrail/clk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/platform/baytrail/clk.c b/src/platform/baytrail/clk.c
index eac64c3..b616eff 100644
--- a/src/platform/baytrail/clk.c
+++ b/src/platform/baytrail/clk.c
@@ -67,8 +67,8 @@ static struct clk_pdata *clk_pdata;
 #if defined CONFIG_BAYTRAIL
 /* increasing frequency order */
 static const struct freq_table cpu_freq[] = {
-        {19200000, 25, 0x0},
-        {19200000, 25, 0x1},
+        {19200000, 19, 0x0},
+        {19200000, 19, 0x1},
         {38400000, 50, 0x2},
         {50000000, 50, 0x3},        /* default */
         {100000000, 100, 0x4},
--
2.14.1
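If the middle column is read as timer ticks per microsecond, i.e. the clock rate in MHz (an interpretation of the commit message, not something the patch states), the corrected values follow directly: 19,200,000 Hz / 1,000,000 = 19.2, which truncates to 19, while 25 MHz and 50 MHz give exactly 25 and 50. A trivial standalone check of that arithmetic, not firmware code:

#include <stdio.h>

int main(void)
{
        const unsigned int freqs[] = { 19200000, 25000000, 38400000, 50000000 };
        unsigned int i;

        for (i = 0; i < sizeof(freqs) / sizeof(freqs[0]); i++)
                printf("%u Hz -> %u ticks per microsecond\n",
                       freqs[i], freqs[i] / 1000000);
        return 0;
}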