On Sat, Apr 18, 2015 at 06:34:07PM +0100, Mark Brown wrote:
On Fri, Apr 10, 2015 at 04:14:07PM +0800, Koro Chen wrote:
+Each external interface (called "IO" in this driver) is presented as a
+DAI to ASoC. An IO must be connected via the interconnect to a memif.
+The connection paths are configured through the device tree.
Why are these connection paths configured via device tree? I would expect that either there would be runtime configurability of these things (particularly if loopback configurations within the hardware are possible) or we'd just allocate memory interfaces to DAIs automatically as DAIs come into use.
There is a crossbar switch between the memory interfaces and the DAIs. Not every connection is possible, so not every memory interface can be used for every DAI. An algorithm that chooses a suitable memory interface automatically would have to be quite clever and complicated, and also SoC dependent (similar but different hardware is used on the MT8135 as well), so I thought offering a static configuration via the device tree is a good start. Should runtime configuration become possible later, the device tree settings could provide a good default.
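To make the static configuration concrete, a board could pin down one such path in its device tree. The following is only an illustrative sketch; the property name, cell layout and macro names are assumptions and not taken from the patch:

/* Sketch only: "mediatek,connections" and the MTK_AFE_* macros
 * are assumed names, not the actual binding under review. */
afe: audio-controller@11220000 {
	compatible = "mediatek,mt8173-afe-pcm";
	/* route the I2S output IO through the interconnect
	 * to the DL1 memif */
	mediatek,connections = <MTK_AFE_IO_I2S_OUT MTK_AFE_MEMIF_DL1>;
};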
+- mem-interface-playback:
+- mem-interface-capture: property of memif, format is: <memif irq use_sram>;
+                         memif: which memif to be used
+                                (defined in include/dt-bindings/sound/mtk-afe.h)
+                         irq: which irq to be used
+                                (defined in include/dt-bindings/sound/mtk-afe.h)
+                         use_sram: 1 is yes, 0 is no
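For illustration, memifs described with these properties might look like the nodes below; the macro names from include/dt-bindings/sound/mtk-afe.h are assumed here, since the header itself is not quoted in this mail:

/* Sketch: MTK_AFE_MEMIF_* and MTK_AFE_IRQ_* are assumed macro names */
memif-dl1 {
	/* DL1 memif, IRQ 1, use_sram = 1 (buffer placed in SRAM) */
	mem-interface-playback = <MTK_AFE_MEMIF_DL1 MTK_AFE_IRQ_1 1>;
};
memif-vul {
	/* VUL memif, IRQ 2, use_sram = 0 (buffer placed in DRAM) */
	mem-interface-capture = <MTK_AFE_MEMIF_VUL MTK_AFE_IRQ_2 0>;
};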
Again, this looks like stuff we should be able to figure out at runtime - the use of SRAM in particular looks like something we might want to change depending on the use case. Assuming it adds buffering, for a VoIP application we might not want to use SRAM, to minimize latency, but during music playback we might want to enable SRAM to minimize power consumption.
That's exactly the use case. What could such runtime configurability look like? sysfs? Or something based on the buffer sizes?
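A minimal sketch of the buffer-size idea, decided in hw_params, might look like this; the threshold and the mtk_afe_memif_use_* helpers are assumptions, only params_buffer_bytes() is existing ALSA API:

#include <sound/pcm.h>
#include <sound/pcm_params.h>

/* assumed cut-off: buffers at least this big go to SRAM */
#define MTK_AFE_SRAM_THRESHOLD	(16 * 1024)

/* hypothetical helpers that would point the memif at SRAM or DRAM */
static int mtk_afe_memif_use_sram(struct snd_pcm_substream *substream);
static int mtk_afe_memif_use_dram(struct snd_pcm_substream *substream);

static int mtk_afe_hw_params(struct snd_pcm_substream *substream,
			     struct snd_pcm_hw_params *params)
{
	unsigned int bytes = params_buffer_bytes(params);

	/*
	 * A small buffer implies a low-latency use case such as VoIP,
	 * so stay in DRAM; a large buffer implies deep buffering such
	 * as music playback, so move to SRAM to save power.
	 */
	if (bytes >= MTK_AFE_SRAM_THRESHOLD)
		return mtk_afe_memif_use_sram(substream);

	return mtk_afe_memif_use_dram(substream);
}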
Sascha