[alsa-devel] [PATCH 00/10] Add support for img AXD audio hardware decoder
This patch series adds the AXD ALSA Compress Offload SoC driver.
AXD is audio hardware based on the MIPS architecture that supports decoding, encoding, GEQ, resampling, mixing and synchronisation. At the moment only decoding support is added, in the hope of adding the rest of the functionality on top of that once this is accepted.
I divided the files into separate patches by functionality in the hope that it'll make the reviewing process easier. It is worth noting that a lot of the cmd interface helper functions in patch 7 are not used yet, but they will be as support for more functionality is added later.
At the moment this code has been tested on the Pistachio SoC using GStreamer patched with the code at this link:
https://bugzilla.gnome.org/show_bug.cgi?id=743192
Qais Yousef (10):
  irqchip: irq-mips-gic: export gic_send_ipi
  dt: add img,axd.txt device tree binding document
  ALSA: add AXD Audio Processing IP alsa driver
  ALSA: axd: add fw binary header manipulation files
  ALSA: axd: add buffers manipulation files
  ALSA: axd: add basic files for sending/receiving axd cmds
  ALSA: axd: add cmd interface helper functions
  ALSA: axd: add low level AXD platform setup files
  ALSA: axd: add alsa compress offload operations
  ALSA: axd: add Makefile
 .../devicetree/bindings/sound/img,axd.txt   |   34 +
 drivers/irqchip/irq-mips-gic.c              |    1 +
 sound/soc/Kconfig                           |    1 +
 sound/soc/Makefile                          |    1 +
 sound/soc/img/Kconfig                       |   11 +
 sound/soc/img/Makefile                      |    1 +
 sound/soc/img/axd/Makefile                  |   13 +
 sound/soc/img/axd/axd_alsa_ops.c            |  211 ++
 sound/soc/img/axd/axd_api.h                 |  649 ++++
 sound/soc/img/axd/axd_buffers.c             |  243 ++
 sound/soc/img/axd/axd_buffers.h             |   74 +
 sound/soc/img/axd/axd_cmds.c                |  102 +
 sound/soc/img/axd/axd_cmds.h                |  532 ++++
 sound/soc/img/axd/axd_cmds_config.c         | 1235 ++++++++
 sound/soc/img/axd/axd_cmds_decoder_config.c |  422 +++
 sound/soc/img/axd/axd_cmds_info.c           | 1249 ++++++++
 sound/soc/img/axd/axd_cmds_internal.c       | 3264 ++++++++++++++++++++
 sound/soc/img/axd/axd_cmds_internal.h       |  317 ++
 sound/soc/img/axd/axd_cmds_pipes.c          | 1387 +++++++++
 sound/soc/img/axd/axd_hdr.c                 |   64 +
 sound/soc/img/axd/axd_hdr.h                 |   24 +
 sound/soc/img/axd/axd_module.c              |  742 +++++
 sound/soc/img/axd/axd_module.h              |   83 +
 sound/soc/img/axd/axd_platform.h            |   35 +
 sound/soc/img/axd/axd_platform_mips.c       |  416 +++
 25 files changed, 11111 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/sound/img,axd.txt
 create mode 100644 sound/soc/img/Kconfig
 create mode 100644 sound/soc/img/Makefile
 create mode 100644 sound/soc/img/axd/Makefile
 create mode 100644 sound/soc/img/axd/axd_alsa_ops.c
 create mode 100644 sound/soc/img/axd/axd_api.h
 create mode 100644 sound/soc/img/axd/axd_buffers.c
 create mode 100644 sound/soc/img/axd/axd_buffers.h
 create mode 100644 sound/soc/img/axd/axd_cmds.c
 create mode 100644 sound/soc/img/axd/axd_cmds.h
 create mode 100644 sound/soc/img/axd/axd_cmds_config.c
 create mode 100644 sound/soc/img/axd/axd_cmds_decoder_config.c
 create mode 100644 sound/soc/img/axd/axd_cmds_info.c
 create mode 100644 sound/soc/img/axd/axd_cmds_internal.c
 create mode 100644 sound/soc/img/axd/axd_cmds_internal.h
 create mode 100644 sound/soc/img/axd/axd_cmds_pipes.c
 create mode 100644 sound/soc/img/axd/axd_hdr.c
 create mode 100644 sound/soc/img/axd/axd_hdr.h
 create mode 100644 sound/soc/img/axd/axd_module.c
 create mode 100644 sound/soc/img/axd/axd_module.h
 create mode 100644 sound/soc/img/axd/axd_platform.h
 create mode 100644 sound/soc/img/axd/axd_platform_mips.c
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ian Campbell <ijc+devicetree@hellion.org.uk>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: devicetree@vger.kernel.org
Cc: Liam Girdwood <lgirdwood@gmail.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Jaroslav Kysela <perex@perex.cz>
Cc: Takashi Iwai <tiwai@suse.com>
Some drivers might need to send IPIs to other cores, so export it. This will be used later by the AXD driver.
Signed-off-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
---
 drivers/irqchip/irq-mips-gic.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index ff4be0515a0d..fc6fd506cd7e 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -227,6 +227,7 @@ void gic_send_ipi(unsigned int intr)
 {
 	gic_write(GIC_REG(SHARED, GIC_SH_WEDGE), GIC_SH_WEDGE_SET(intr));
 }
+EXPORT_SYMBOL(gic_send_ipi);
 
 int gic_get_c0_compare_int(void)
 {
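For context, a minimal sketch of how a loadable module such as the AXD driver would then call the exported symbol; the interrupt number 36 is only an assumption taken from the DT examples later in this thread, and axd_kick_coprocessor() is a made-up name:

#include <linux/irqchip/mips-gic.h>

/* Kick the coprocessor over the GIC interrupt agreed on at firmware load time. */
static void axd_kick_coprocessor(void)
{
	gic_send_ipi(36);	/* 36 is illustrative, not a fixed value */
}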
On Mon, 24 Aug 2015, Qais Yousef wrote:
Some drivers might need to send IPIs to other cores, so export it.
Which IPIs do you need to send from a driver which are not exposed by the SMP functions already?
This will be used later by the AXD driver.
That smells fishy and it wants a proper explanation WHY and not just a sloppy statement that it will be used later. I can figure that out myself as exporting a function without using it does not make any sense.
Thanks,
tglx
On 08/24/2015 01:49 PM, Thomas Gleixner wrote:
On Mon, 24 Aug 2015, Qais Yousef wrote:
Some drivers might need to send IPIs to other cores, so export it.
Which IPIs do you need to send from a driver which are not exposed by the SMP functions already?
It's not an SMP IPI. We use GIC to exchange interrupts between AXD and the host system since AXD is another MIPS core in the cluster.
This will be used later by the AXD driver.
That smells fishy and it wants a proper explanation WHY and not just a sloppy statement that it will be used later. I can figure that out myself as exporting a function without using it does not make any sense.
Sorry for the terse explanation. As pointed out above, AXD uses the GIC to send and receive interrupts to/from the host core. Without this change I can't compile the driver as a module because the symbol is not exported.
Does this make things clearer?
Thanks, Qais
Thanks,
tglx
On 24/08/15 14:02, Qais Yousef wrote:
On 08/24/2015 01:49 PM, Thomas Gleixner wrote:
On Mon, 24 Aug 2015, Qais Yousef wrote:
Some drivers might need to send IPIs to other cores, so export it.
Which IPIs do you need to send from a driver which are not exposed by the SMP functions already?
It's not an SMP IPI. We use GIC to exchange interrupts between AXD and the host system since AXD is another MIPS core in the cluster.
So is this the case of another CPU in the system that is not under control of Linux, but that you need to signal anyway? How do you agree on the IPI number between the two systems?
This will be used later by the AXD driver.
That smells fishy and it wants a proper explanation WHY and not just a sloppy statement that it will be used later. I can figure that out myself as exporting a function without using it does not make any sense.
Sorry for the terse explanation. As pointed out above, AXD uses the GIC to send and receive interrupts to/from the host core. Without this change I can't compile the driver as a module because the symbol is not exported.
Does this make things clearer?
To me, it feels like this is yet another case of routing interrupts to another agent in the system, which is not a CPU under the kernel's control. There are at least two other platforms doing similar craziness (a Freescale platform, and at least one Nvidia).
I'd rather see something more "architected" than this blind export, or at least some level of filtering (the idea random drivers can access such a low-level function doesn't make me feel very good).
Thanks,
M.
On 08/24/2015 02:32 PM, Marc Zyngier wrote:
On 24/08/15 14:02, Qais Yousef wrote:
On 08/24/2015 01:49 PM, Thomas Gleixner wrote:
On Mon, 24 Aug 2015, Qais Yousef wrote:
Some drivers might need to send IPIs to other cores, so export it.
Which IPIs do you need to send from a driver which are not exposed by the SMP functions already?
It's not an SMP IPI. We use GIC to exchange interrupts between AXD and the host system since AXD is another MIPS core in the cluster.
So is this the case of another CPU in the system that is not under control of Linux, but that you need to signal anyway? How do you agree on the IPI number between the two systems?
When Linux loads the AXD firmware into memory, it places the GIC numbers to use at a specific offset there as part of the startup protocol. When AXD starts running, it will see these values and use them to send and receive interrupts.
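As an illustration of that startup protocol, a sketch of what the Linux side could look like; the offset, structure layout and function name are assumptions for illustration only, not the real AXD firmware ABI:

#include <linux/io.h>
#include <linux/types.h>

#define AXD_FW_GIC_IRQ_OFFSET	0x40	/* hypothetical offset inside the fw image */

struct axd_fw_irq_info {
	u32 host_to_axd_gic_irq;	/* GIC number the host raises to kick AXD */
	u32 axd_to_host_gic_irq;	/* GIC number AXD raises to kick the host */
};

/* Patch the agreed GIC numbers into the loaded firmware image. */
static void axd_fw_set_gic_irqs(void __iomem *fw_base, u32 to_axd, u32 to_host)
{
	struct axd_fw_irq_info __iomem *info = fw_base + AXD_FW_GIC_IRQ_OFFSET;

	writel(to_axd, &info->host_to_axd_gic_irq);
	writel(to_host, &info->axd_to_host_gic_irq);
}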
This will be used later by the AXD driver.
That smells fishy and it wants a proper explanation WHY and not just a sloppy statement that it will be used later. I can figure that out myself as exporting a function without using it does not make any sense.
Sorry for the terse explanation. As pointed out above, AXD uses the GIC to send and receive interrupts to/from the host core. Without this change I can't compile the driver as a module because the symbol is not exported.
Does this make things clearer?
To me, it feels like this is yet another case of routing interrupts to another agent in the system, which is not a CPU under the kernel's control. There are at least two other platforms doing similar craziness (a Freescale platform, and at least one Nvidia).
I'd rather see something more "architected" than this blind export, or at least some level of filtering (the idea random drivers can access such a low-level function doesn't make me feel very good).
I don't know how to architect this better or how to perform the filtering, but I'm happy to hear suggestions and try them out. Keep in mind that detecting GIC and writing your own gic_send_ipi() is very simple. I have done this when the driver was out of tree. So restricting it by not exporting it will not prevent someone from really accessing the functionality, it's just they have to do it their own way.
Thanks, Qais
Thanks,
M.
On Mon, 24 Aug 2015, Qais Yousef wrote:
On 08/24/2015 02:32 PM, Marc Zyngier wrote:
I'd rather see something more "architected" than this blind export, or at least some level of filtering (the idea random drivers can access such a low-level function doesn't make me feel very good).
I don't know how to architect this better or how to perform the filtering, but I'm happy to hear suggestions and try them out. Keep in mind that detecting GIC and writing your own gic_send_ipi() is very simple. I have done this when the driver was out of tree. So restricting it by not exporting it will not prevent someone from really accessing the functionality, it's just they have to do it their own way.
Keep in mind that we are not talking about out-of-tree hackery. We are talking about a kernel code submission, and I doubt that you will get away with GIC detection/fiddling buried in your driver code.
Keep in mind that just slapping an export to some random function is not much better than doing a GIC hack in the driver.
Marc's concerns about blindly exposing IPI functionality to drivers are well justified, and that kind of coprocessor stuff is not unique to your particular SoC. We're going to see such things more frequently in the not so distant future, so we had better think now about proper solutions to that problem.
There are a couple of issues to solve:
1) How is the IPI which is received by the coprocessor reserved in the system?
2) How is it associated to a particular driver?
3) How do we ensure that a driver cannot issue random IPIs and can only send the associated ones?
None of these issues are handled by your export.
So we need a core infrastructure which allows us to do that. The requirements are pretty clear from the above and Marc might have some further restrictions in mind.
Thanks,
tglx
On 08/24/2015 04:07 PM, Thomas Gleixner wrote:
On Mon, 24 Aug 2015, Qais Yousef wrote:
On 08/24/2015 02:32 PM, Marc Zyngier wrote:
I'd rather see something more "architected" than this blind export, or at least some level of filtering (the idea random drivers can access such a low-level function doesn't make me feel very good).
I don't know how to architect this better or how to perform the filtering, but I'm happy to hear suggestions and try them out. Keep in mind that detecting GIC and writing your own gic_send_ipi() is very simple. I have done this when the driver was out of tree. So restricting it by not exporting it will not prevent someone from really accessing the functionality, it's just they have to do it their own way.
Keep in mind that we are not talking about out-of-tree hackery. We are talking about a kernel code submission, and I doubt that you will get away with GIC detection/fiddling buried in your driver code.
Keep in mind that just slapping an export to some random function is not much better than doing a GIC hack in the driver.
Marc's concerns about blindly exposing IPI functionality to drivers are well justified, and that kind of coprocessor stuff is not unique to your particular SoC. We're going to see such things more frequently in the not so distant future, so we had better think now about proper solutions to that problem.
Sure I'm not trying to argue against that.
There are a couple of issues to solve:
How is the IPI which is received by the coprocessor reserved in the system?
How is it associated to a particular driver?
Shouldn't the 'interrupts' property in DT take care of these 2 questions? Maybe we can give it an alias name to make it clearer that this interrupt is requested for an external IPI.
- How do we ensure that a driver cannot issue random IPIs and can only send the associated ones?
If we get the irq number from DT then I'm not sure how feasible it is to implement a generic_send_ipi() function that takes this number to generate an IPI.
Do you think this approach would work?
None of these issues are handled by your export.
So we need a core infrastructure which allows us to do that. The requirements are pretty clear from the above and Marc might have some further restrictions in mind.
Another related issue I'm having is that I need to communicate these GIC irq numbers to the AXD core when it starts up. The logic is that these IPIs are not hardwired, and it's up to the system designer to allocate 2 free GIC irqs to be used for that purpose. At the moment I have my own DT property to take these numbers. Hopefully this link will explain the issue; see the question about the gic-irq property.
https://lkml.org/lkml/2015/8/24/459
From what I know there's no generic way for the driver to get the hw irq number from the Linux irq number, unless I missed something. Is it possible to add something to support this? Or maybe there's something that I failed to find?
Thanks, Qais
Thanks,
tglx
[adding Mark Rutland, as this is heading straight into uncharted DT territory]
On 24/08/15 17:39, Qais Yousef wrote:
On 08/24/2015 04:07 PM, Thomas Gleixner wrote:
On Mon, 24 Aug 2015, Qais Yousef wrote:
On 08/24/2015 02:32 PM, Marc Zyngier wrote:
I'd rather see something more "architected" than this blind export, or at least some level of filtering (the idea random drivers can access such a low-level function doesn't make me feel very good).
I don't know how to architect this better or how to perform the filtering, but I'm happy to hear suggestions and try them out. Keep in mind that detecting GIC and writing your own gic_send_ipi() is very simple. I have done this when the driver was out of tree. So restricting it by not exporting it will not prevent someone from really accessing the functionality, it's just they have to do it their own way.
Keep in mind that we are not talking about out-of-tree hackery. We are talking about a kernel code submission, and I doubt that you will get away with GIC detection/fiddling buried in your driver code.
Keep in mind that just slapping an export to some random function is not much better than doing a GIC hack in the driver.
Marc's concerns about blindly exposing IPI functionality to drivers are well justified, and that kind of coprocessor stuff is not unique to your particular SoC. We're going to see such things more frequently in the not so distant future, so we had better think now about proper solutions to that problem.
Sure I'm not trying to argue against that.
There are a couple of issues to solve:
How is the IPI which is received by the coprocessor reserved in the system?
How is it associated to a particular driver?
Shouldn't the 'interrupts' property in DT take care of these 2 questions? Maybe we can give it an alias name to make it clearer that this interrupt is requested for an external IPI.
The "interrupts" property has a rather different meaning, and isn't designed to hardcode IPIs. Also, this property describes an interrupt from a device to the CPU, not the other way around (I imagine you also have an interrupt coming from the AXD to the CPU, possibly using an IPI too).
We can deal with these issues, but that's not something we can improvise.
What I had in mind was something fairly generic:
- interrupt-source: something generating an interrupt
- interrupt-sink: something being targeted by an interrupt
You could then express things like:
intc: interrupt-controller@1000 {
	interrupt-controller;
};

mydevice@f0000000 {
	interrupt-source = <&intc INT_SPEC 2 &inttarg1 &inttarg1>;
};

inttarg1: mydevice@f1000000 {
	interrupt-sink = <&intc HWAFFINITY1>;
};

inttarg2: cpu@1 {
	interrupt-sink = <&intc HWAFFINITY2>;
};
You could also imagine having CPUs being both source and sink.
- How do we ensure that a driver cannot issue random IPIs and can only send the associated ones?
If we get the irq number from DT then I'm not sure how feasible it is to implement a generic_send_ipi() function that takes this number to generate an IPI.
Do you think this approach would work?
If you follow the above approach, it should be pretty easy to derive a source identifier and a sink identifier from the DT, and have the core code route one to the other and do the right thing.
The source identifier could also be used to describe an IPI in a fairly safe way (the target being fixed by DT, but the actual number used dynamically allocated by the kernel).
This is just a 10 minutes braindump, so feel free to throw rocks at it and to come up with a better solution! :-)
Thanks,
M.
On 08/24/2015 06:17 PM, Marc Zyngier wrote:
[adding Mark Rutland, as this is heading straight into uncharted DT territory]
On 24/08/15 17:39, Qais Yousef wrote:
On 08/24/2015 04:07 PM, Thomas Gleixner wrote:
On Mon, 24 Aug 2015, Qais Yousef wrote:
On 08/24/2015 02:32 PM, Marc Zyngier wrote:
I'd rather see something more "architected" than this blind export, or at least some level of filtering (the idea random drivers can access such a low-level function doesn't make me feel very good).
I don't know how to architect this better or how to perform the filtering, but I'm happy to hear suggestions and try them out. Keep in mind that detecting GIC and writing your own gic_send_ipi() is very simple. I have done this when the driver was out of tree. So restricting it by not exporting it will not prevent someone from really accessing the functionality, it's just they have to do it their own way.
Keep in mind that we are not talking about out-of-tree hackery. We are talking about a kernel code submission, and I doubt that you will get away with GIC detection/fiddling buried in your driver code.
Keep in mind that just slapping an export to some random function is not much better than doing a GIC hack in the driver.
Marc's concerns about blindly exposing IPI functionality to drivers are well justified, and that kind of coprocessor stuff is not unique to your particular SoC. We're going to see such things more frequently in the not so distant future, so we had better think now about proper solutions to that problem.
Sure I'm not trying to argue against that.
There are a couple of issues to solve:
How is the IPI which is received by the coprocessor reserved in the system?
How is it associated to a particular driver?
Shouldn't the 'interrupts' property in DT take care of these 2 questions? Maybe we can give it an alias name to make it clearer that this interrupt is requested for an external IPI.
The "interrupts" property has a rather different meaning, and isn't designed to hardcode IPIs. Also, this property describes an interrupt from a device to the CPU, not the other way around (I imagine you also have an interrupt coming from the AXD to the CPU, possibly using an IPI too).
Yes, we have an interrupt from AXD to the CPU. But the way I take care of the routing at the moment is that the CPU routes the interrupt it receives from AXD, and AXD routes the interrupt it receives from the CPU. This is useful because in the MIPS GIC the routing is done per hw thread on the core, so this gives each one the flexibility to choose what suits it best.
We can deal with these issues, but that's not something we can improvise.
What I had in mind was something fairly generic:
- interrupt-source: something generating an interrupt
- interrupt-sink: something being targeted by an interrupt
You could then express things like:
intc: interrupt-controller@1000 {
	interrupt-controller;
};

mydevice@f0000000 {
	interrupt-source = <&intc INT_SPEC 2 &inttarg1 &inttarg1>;
};
To make sure we're on the same page. INT_SPEC here refers to the arguments we pass to a standard interrupts property, right?
inttarg1: mydevice@f1000000 {
	interrupt-sink = <&intc HWAFFINITY1>;
};

inttarg2: cpu@1 {
	interrupt-sink = <&intc HWAFFINITY2>;
};
And HWAFFINITY here is the core (or hardware thread) this interrupt will be routed to?
So for my case where CPU is on core 0 and AXD is on core 1 my description will look like
cpu: cpu@0 {
	interrupt-source = <&gic GIC_SHARED 36 IRQ_TYPE_EDGE_RISING 1 &axd>;
	interrupt-sink = <&gic 0>;
};

axd: axd {
	interrupt-source = <&gic GIC_SHARED 37 IRQ_TYPE_EDGE_RISING 1 &cpu>;
	interrupt-sink = <&gic 1>;
};
If I didn't misunderstand you, the issue I see with this is that the information about which IRQ to use to send an IPI to AXD is not present in the AXD node. We will need to search the cpu node for something that is meant to be routed to axd or have some logic to implicitly infer it from interrupt-sink in axd node. Not convenient IMO.
Can we replace 'something' in the interrupt-source and interrupt-sink definitions with 'host' or 'CPU', or do we really care about creating IPIs between any 2 'things'?
Changing the definition will also make interrupt-sink a synonym/alias of the interrupts property. So the description will become:
axd: axd {
	interrupt-source = <&gic GIC_SHARED 36 IRQ_TYPE_EDGE_RISING>;	/* interrupt from CPU to AXD */
	interrupt-sink = <&gic GIC_SHARED 37 IRQ_TYPE_EDGE_RISING>;	/* interrupt from AXD to CPU */
};
But this assumes Linux won't take care of the routing. If we want Linux to take care of the routing, maybe something like this then?
axd: axd {
	interrupt-source = <&gic GIC_SHARED 36 IRQ_TYPE_EDGE_RISING HWAFFINITY1>;	/* interrupt from CPU to AXD@HWAFFINITY1 */
	interrupt-sink = <&gic GIC_SHARED 37 IRQ_TYPE_EDGE_RISING HWAFFINITY2>;	/* interrupt from AXD to CPU@HWAFFINITY2 */
};
I don't think it's necessary to specify HWAFFINITY2 for interrupt-sink, as Linux can use SMP affinity to move it around, but we can make it optional in case there's a need to hardcode it to a specific Linux core. Or maybe the driver can use the affinity hint.
You could also imagine having CPUs being both source and sink.
- How do we ensure that a driver cannot issue random IPIs and can only send the associated ones?
If we get the irq number from DT then I'm not sure how feasible it is to implement a generic_send_ipi() function that takes this number to generate an IPI.
Do you think this approach would work?
If you follow the above approach, it should be pretty easy to derive a source identifier and a sink identifier from the DT, and have the core code route one to the other and do the right thing.
Do you think it's better for Linux to take care of all the routing instead of each core doing its own routing? If Linux is to do the routing for the other core (even if optionally), what's the mechanism to do that? We can't use irq_set_affinity() because we want to map something that is not part of Linux. A new mapping function in struct irq_domain_ops maybe?
The source identifier could also be used to describe an IPI in a fairly safe way (the target being fixed by DT, but the actual number used dynamically allocated by the kernel).
To be dynamic, the interrupt controller must specify which interrupts are actually free to use. What if the DT doesn't describe all the hardware that is connected to the GIC, and Linux thinks an interrupt is free to use but it's actually connected to real hardware that no one told us about? Since this information will always have to be looked up, I think it's better to give the user the responsibility to explicitly specify something they know will work.
This is just a 10 minutes braindump, so feel free to throw rocks at it and to come up with a better solution! :-)
Thanks for that. My brain is too tied down to my use case to come up with something generic easily :-)
Any pointers on the best way to tie gic_send_ipi() to the driver/core code? The way it's currently tied to the core code is through the SMP IPI functions, which I don't think we can use. I'm thinking adding a function pointer in struct irq_chip would be the easiest approach, maybe?
Thanks, Qais
Thanks,
M.
On Wed, 26 Aug 2015, Qais Yousef wrote:
Can we replace 'something' in the interrupt-source and interrupt-sink definitions with 'host' or 'CPU', or do we really care about creating IPIs between any 2 'things'?
Changing the definition will also make interrupt-sink a synonym/alias of the interrupts property. So the description will become:
axd: axd {
	interrupt-source = <&gic GIC_SHARED 36 IRQ_TYPE_EDGE_RISING>;	/* interrupt from CPU to AXD */
	interrupt-sink = <&gic GIC_SHARED 37 IRQ_TYPE_EDGE_RISING>;	/* interrupt from AXD to CPU */
};
But this assumes Linux won't take care of the routing. If we want Linux to take care of the routing, maybe something like this then?
axd: axd {
	interrupt-source = <&gic GIC_SHARED 36 IRQ_TYPE_EDGE_RISING HWAFFINITY1>;	/* interrupt from CPU to AXD@HWAFFINITY1 */
	interrupt-sink = <&gic GIC_SHARED 37 IRQ_TYPE_EDGE_RISING HWAFFINITY2>;	/* interrupt from AXD to CPU@HWAFFINITY2 */
};
I don't think it's necessary to specify HWAFFINITY2 for interrupt-sink, as Linux can use SMP affinity to move it around, but we can make it optional in case there's a need to hardcode it to a specific Linux core. Or maybe the driver can use the affinity hint.
Wrong. You cannot move an IPI around with set_affinity. It's possible to send an IPI to more than one target CPU, but that has nothing to do with affinities.
Are you talking about IPIs or about general interrupts which have an affinity setting?
Any pointers on the best way to tie gic_send_ipi() to the driver/core code? The way it's currently tied to the core code is through the SMP IPI functions, which I don't think we can use. I'm thinking adding a function pointer in struct irq_chip would be the easiest approach, maybe?
That's the least of our worries. We need to get the high level interfaces and the devicetree mechanism straight before we talk about this kind of details.
Thanks,
tglx
On 08/26/2015 02:19 PM, Thomas Gleixner wrote:
On Wed, 26 Aug 2015, Qais Yousef wrote:
Can we replace 'something' in the interrupt-source and interrupt-sink definitions with 'host' or 'CPU', or do we really care about creating IPIs between any 2 'things'?
Changing the definition will also make interrupt-sink a synonym/alias of the interrupts property. So the description will become:
axd: axd {
	interrupt-source = <&gic GIC_SHARED 36 IRQ_TYPE_EDGE_RISING>;	/* interrupt from CPU to AXD */
	interrupt-sink = <&gic GIC_SHARED 37 IRQ_TYPE_EDGE_RISING>;	/* interrupt from AXD to CPU */
};
But this assumes Linux won't take care of the routing. If we want Linux to take care of the routing, maybe something like this then?
axd: axd {
	interrupt-source = <&gic GIC_SHARED 36 IRQ_TYPE_EDGE_RISING HWAFFINITY1>;	/* interrupt from CPU to AXD@HWAFFINITY1 */
	interrupt-sink = <&gic GIC_SHARED 37 IRQ_TYPE_EDGE_RISING HWAFFINITY2>;	/* interrupt from AXD to CPU@HWAFFINITY2 */
};
I don't think it's necessary to specify HWAFFINITY2 for interrupt-sink, as Linux can use SMP affinity to move it around, but we can make it optional in case there's a need to hardcode it to a specific Linux core. Or maybe the driver can use the affinity hint.
Wrong. You cannot move an IPI around with set_affinity. It's possible to send an IPI to more than one target CPU, but that has nothing to do with affinities.
Are you talking about IPIs or about general interrupts which have an affinity setting?
Maybe my view of the world is limited. I wrote this because the mechanism to route an IPI and set affinities is the same. So specifying which core or hardware thread the Linux CPU should route this IPI to is the same as setting the affinity, no? Linux will not move the IPI that is routed to the coprocessor core, just the IPI it will receive.
Also, the way I see it, this is an external interrupt whether it was asserted by a real signal or through the IPI mechanism, and it should be treated as such in terms of moving it inside Linux SMP, no? Again, maybe my view of the world is limited, but I can't see why migrating the interrupt would affect correctness unless there's a hardware limitation, like only core 0 being able to read info from AXD (which is where my suggestion of using the affinity hint above comes in, to accommodate such limitations).
When you say 'It is possible to send an IPI to more than one target CPU', is it a case we need to cater for? The way I was seeing this problem is communication between single Linux SMP and a single coprocessor unit. I didn't think of it as single to many. Even if the coprocessor is a cluster I'd expect it to act as a single unit like Linux SMP. And if it wanted to send 2 different interrupts it will need to use 2 different IPIs.
If I'm stating anything obvious above please bear with me. I'm just trying to be clear about my view of the world in case I'm missing something :-)
Any pointers on the best way to tie gic_send_ipi() to the driver/core code? The way it's currently tied to the core code is through the SMP IPI functions, which I don't think we can use. I'm thinking adding a function pointer in struct irq_chip would be the easiest approach, maybe?
That's the least of our worries. We need to get the high level interfaces and the devicetree mechanism straight before we talk about this kind of details.
Fair enough. The reason I asked is to help me start writing some test code but I'll wait.
Thanks, Qais
Thanks,
tglx
On Wed, 26 Aug 2015, Qais Yousef wrote:
On 08/26/2015 02:19 PM, Thomas Gleixner wrote:
Wrong. You cannot move an IPI around with set_affinity. It's possible to send an IPI to more than one target CPU, but that has nothing to do with affinities.
Are you talking about IPIs or about general interrupts which have an affinity setting?
Maybe my view of the world is limited. I wrote this because the mechanism to route an IPI and set affinities is the same.
That might be the case on your particular platform, but that's not generally true.
So specifying which core or hardware thread the Linux CPU should route this IPI to is the same as setting the affinity, no? Linux will not move the IPI that is routed to the coprocessor core, just the IPI it will receive.
Also, the way I see it, this is an external interrupt whether it was asserted by a real signal or through the IPI mechanism, and it should be treated as such in terms of moving it inside Linux SMP, no? Again, maybe my view of the world is limited, but I can't see why migrating the interrupt would affect correctness unless there's a hardware limitation, like only core 0 being able to read info from AXD (which is where my suggestion of using the affinity hint above comes in, to accommodate such limitations).
When you say 'It is possible to send an IPI to more than one target CPU', is it a case we need to cater for? The way I was seeing this problem is communication between single Linux SMP and a single coprocessor unit. I didn't think of it as single to many. Even if the coprocessor is a cluster I'd expect it to act as a single unit like Linux SMP. And if it wanted to send 2 different interrupts it will need to use 2 different IPIs.
You are confusing the terms.
IPI = Inter Processor Interrupt
As the name says that's an interrupt which goes from one cpu to another. So an IPI has a very clear target.
Whether the platform implements IPIs via general interrupts which are made affine to a particular cpu or some other specialized mechanism is completely irrelevant. An IPI is not subject to affinity settings, period.
So if you want to use an IPI then you need a target cpu for that IPI.
If you want something which can be affined to any cpu, then you need a general interrupt and not an IPI.
That's what I asked before and you still did not answer that question.
Are you talking about IPIs or about general interrupts which have an affinity setting?
Thanks,
tglx
On 08/26/2015 04:08 PM, Thomas Gleixner wrote:
On Wed, 26 Aug 2015, Qais Yousef wrote:
On 08/26/2015 02:19 PM, Thomas Gleixner wrote:
Wrong. You cannot move an IPI around with set_affinity. It's possible to send an IPI to more than one target CPU, but that has nothing to do with affinities.
Are you talking about IPIs or about general interrupts which have an affinity setting?
Maybe my view of the world is limited. I wrote this because the mechanism to route an IPI and set affinities is the same.
That might be the case on your particular platform, but that's not generally true.
So specifying which core or hardware thread the Linux CPU should route this IPI to is the same as setting the affinity, no? Linux will not move the IPI that is routed to the coprocessor core, just the IPI it will receive.
Also, the way I see it, this is an external interrupt whether it was asserted by a real signal or through the IPI mechanism, and it should be treated as such in terms of moving it inside Linux SMP, no? Again, maybe my view of the world is limited, but I can't see why migrating the interrupt would affect correctness unless there's a hardware limitation, like only core 0 being able to read info from AXD (which is where my suggestion of using the affinity hint above comes in, to accommodate such limitations).
When you say 'It is possible to send an IPI to more than one target CPU', is it a case we need to cater for? The way I was seeing this problem is communication between single Linux SMP and a single coprocessor unit. I didn't think of it as single to many. Even if the coprocessor is a cluster I'd expect it to act as a single unit like Linux SMP. And if it wanted to send 2 different interrupts it will need to use 2 different IPIs.
You are confusing the terms.
IPI = Inter Processor Interrupt
As the name says that's an interrupt which goes from one cpu to another. So an IPI has a very clear target.
OK understood. My interpretation of the processor here was the difference. I was viewing the whole linux cpus as one unit with regard to its coprocessors.
Whether the platform implements IPIs via general interrupts which are made affine to a particular cpu or some other specialized mechanism is completely irrelevant. An IPI is not subject to affinity settings, period.
So if you want to use an IPI then you need a target cpu for that IPI.
If you want something which can be affined to any cpu, then you need a general interrupt and not an IPI.
We are using IPIs to exchange interrupts. Affinity is not important to me.
Thanks, Qais
That's what I asked before and you still did not answer that question.
Are you talking about IPIs or about general interrupts which have an affinity setting?
Thanks,
tglx
On Wed, 26 Aug 2015, Qais Yousef wrote:
On 08/26/2015 04:08 PM, Thomas Gleixner wrote:
IPI = Inter Processor Interrupt
As the name says that's an interrupt which goes from one cpu to another. So an IPI has a very clear target.
OK understood. My interpretation of the processor here was the difference. I was viewing the whole linux cpus as one unit with regard to its coprocessors.
You can only view it this way if you talk about peripheral interrupts which are not used as per cpu interrupts and can be routed to a single cpu or a set of cpus via set_affinity.
Whether the platform implements IPIs via general interrupts which are made affine to a particular cpu or some other specialized mechanism is completely irrelevant. An IPI is not subject to affinity settings, period.
So if you want to use an IPI then you need a target cpu for that IPI.
If you want something which can be affined to any cpu, then you need a general interrupt and not an IPI.
We are using IPIs to exchange interrupts. Affinity is not important to me.
That's a bold statement. If you chose CPU x as the target for the interrupts received from the coprocessor, then you have pinned the processing for this stuff on to CPU x. So you limit the freedom of moving stuff around on the linux cpus.
And if your root irq controller supports sending normal device interrupts in the same or a similar way as it sends IPIs you can spare quite some extra handling on the linux side for receiving the coprocessor interrupt, i.e. you can use just the bog standard request_irq() mechanism and have the ability to set the affinity of that interrupt from user space so you can move it to the core on which your processing happens. Definitely simpler and more flexible, so I would go there if the hardware allows.
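For illustration, a minimal sketch of that bog standard receive path, assuming the coprocessor interrupt is described as an ordinary platform device interrupt; the handler and names below are made up:

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t axd_irq_handler(int irq, void *dev_id)
{
	/* Read the mailbox/command FIFO here and defer heavy work as needed. */
	return IRQ_HANDLED;
}

static int axd_request_host_irq(struct platform_device *pdev)
{
	int irq = platform_get_irq(pdev, 0);

	if (irq < 0)
		return irq;

	/* Affinity of this interrupt can then be set from user space via /proc/irq/. */
	return request_irq(irq, axd_irq_handler, 0, "axd", pdev);
}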
But back to the IPIs. We need infrastructure and DT support to:
1) reserve an IPI
2) send an IPI
3) request/free an IPI
#1 We have no infrastructure for that, but we definitely need one.
We can look at the IPI as a single linux irq number which is replicated on all cpu cores. The replication can happen in hardware or by software, but that depends on the underlying root irq controller. How that is implemented does not matter for the reservation.
The most flexible and platform independent solution would be to describe the IPI space as a separate irq domain. In most cases this would be a hierarchical domain stacked on the root irq domain:
[IPI-domain] --> [GIC-MIPS-domain]
on x86 this would be:
[IPI-domain] --> [vector-domain]
That needs some change how the IPIs which are used by the kernel (rescheduling, function call ..) are set up, but we get a proper management and collision avoidance that way. Depending on the platform we could actually remove the whole IPI compile time reservation and hand out IPIs at boot time on demand and dynamically.
So the reservation function would be something like:
unsigned int irq_reserve_ipi(const struct cpumask *dest, void *devid);
@dest contains the possible targets for the IPI. So for generic linux IPIs this would be cpu_possible_mask. For your coprocessor the target would be a cpumask with just the bit of the coprocessor core set. If you need to use an IPI for sending an interrupt from the coprocessor to a specific linux core then @dest will contain just that target cpu.
@devid is stored in the IPI domain for sanity checks during operation.
The function returns a linux irq number or 0 if allocation fails.
We need a complementary interface as well, so you can hand back the IPI to the core when the coprocessor is disabled:
void irq_destroy_ipi(unsigned int irq, void *devid);
To configure your coprocessor proper, we need a translation mechanism from the linux interrupt number to the magic value which needs to be written into the trigger register when the coprocessor wants to send an interrupt or an IPI.
int irq_get_irq_hwcfg(unsigned int irq, struct irq_hwcfg *cfg);
struct irq_hwcfg needs to be defined, but it might look like this:
{
	/* Generic fields */
	x;
	...
	union {
		mips_gic;
		...
	};
};
The actual hw specific value(s) need to be filled in from the irq domain specific code.
#2 We have no generic mechanism for that either.
Something like this is needed:
void irq_send_ipi(unsigned int irq, const struct cpumask *dest, void *devid);
@dest is for generic linux IPIs and can be NULL so the IPI is sent to the core(s) which have been handed in at reservation time
@devid is used to sanity check the driver call.
So that finally will call down via a irq chip callback into the code which sends the IPI.
#3 Now you get lucky, because we actually have an interface for this
request_percpu_irq()
free_percpu_irq()
disable_percpu_irq()
enable_percpu_irq()
Though there is a caveat. enable/disable_percpu_irq() must be called from the target cpu, but that should be a solvable problem.
And at the IPI-domain side we need sanity checks whether the cpu from which enable/disable is called is actually configured in the reservation mask.
There are a few other nasty details, but that's not important for the big picture.
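Pulling the pieces together, a driver-side sketch of how this proposed interface might be used. None of the irq_*_ipi()/irq_get_irq_hwcfg() calls exist in the kernel yet (they are only the proposal above), and the axd_* names are purely illustrative:

/* Proposed API only: irq_reserve_ipi(), irq_get_irq_hwcfg(), irq_send_ipi(), irq_destroy_ipi(). */
struct axd_ipi {
	unsigned int tx_irq;	/* IPI towards the coprocessor core */
	int coproc_cpu;		/* hw cpu the coprocessor runs on */
};

static int axd_setup_tx_ipi(struct axd_ipi *axd)
{
	struct irq_hwcfg cfg;
	int ret;

	/* Reserve an IPI that can only target the coprocessor core. */
	axd->tx_irq = irq_reserve_ipi(cpumask_of(axd->coproc_cpu), axd);
	if (!axd->tx_irq)
		return -EBUSY;

	/* Translate to the raw value the firmware needs for its trigger register. */
	ret = irq_get_irq_hwcfg(axd->tx_irq, &cfg);
	if (ret) {
		irq_destroy_ipi(axd->tx_irq, axd);
		return ret;
	}

	/* ... hand cfg over to the firmware as part of the startup protocol ... */
	return 0;
}

static void axd_kick(struct axd_ipi *axd)
{
	/* NULL mask: send to the core(s) given at reservation time. */
	irq_send_ipi(axd->tx_irq, NULL, axd);
}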
As I said above, I really would recommend to avoid that if possible because a bog standard device interrupt is way simpler to deal with.
That's certainly not the quick and dirty solution you are looking for, but exposing IPIs to drivers by anything else than a well thought out infrastructure is not going to happen.
Thanks,
tglx
On 2015/8/27 5:40, Thomas Gleixner wrote:
But back to the IPIs. We need infrastructure and DT support to:
reserve an IPI
send an IPI
request/free an IPI
#1 We have no infrastructure for that, but we definitely need one.
We can look at the IPI as a single linux irq number which is replicated on all cpu cores. The replication can happen in hardware or by software, but that depends on the underlying root irq controller. How that is implemented does not matter for the reservation.
The most flexible and platform independent solution would be to describe the IPI space as a separate irq domain. In most cases this would be a hierarchical domain stacked on the root irq domain:
[IPI-domain] --> [GIC-MIPS-domain]
on x86 this would be:
[IPI-domain] --> [vector-domain]
That needs some change how the IPIs which are used by the kernel (rescheduling, function call ..) are set up, but we get a proper management and collision avoidance that way. Depending on the platform we could actually remove the whole IPI compile time reservation and hand out IPIs at boot time on demand and dynamically.
Hi Thomas,
Good point :) That will make the code more clear.
Thanks!
Gerry
On 08/26/2015 10:40 PM, Thomas Gleixner wrote:
On Wed, 26 Aug 2015, Qais Yousef wrote:
On 08/26/2015 04:08 PM, Thomas Gleixner wrote:
IPI = Inter Processor Interrupt
As the name says that's an interrupt which goes from one cpu to another. So an IPI has a very clear target.
OK understood. My interpretation of the processor here was the difference. I was viewing the whole linux cpus as one unit with regard to its coprocessors.
You can only view it this way if you talk about peripheral interrupts which are not used as per cpu interrupts and can be routed to a single cpu or a set of cpus via set_affinity.
Whether the platform implements IPIs via general interrupts which are made affine to a particular cpu or some other specialized mechanism is completely irrelevant. An IPI is not subject to affinity settings, period.
So if you want to use an IPI then you need a target cpu for that IPI.
If you want something which can be affined to any cpu, then you need a general interrupt and not an IPI.
We are using IPIs to exchange interrupts. Affinity is not important to me.
That's a bold statement. If you chose CPU x as the target for the interrupts received from the coprocessor, then you have pinned the processing for this stuff on to CPU x. So you limit the freedom of moving stuff around on the linux cpus.
I said that because I thought you were telling me that if I expect my IPIs to be movable then I must use general interrupts. So what I was saying is that we use IPIs, and if it's against the rules for them to have affinity, we can live with that.
And if your root irq controller supports sending normal device interrupts in the same or a similar way as it sends IPIs you can spare quite some extra handling on the linux side for receiving the coprocessor interrupt, i.e. you can use just the bog standard request_irq() mechanism and have the ability to set the affinity of that interrupt from user space so you can move it to the core on which your processing happens. Definitely simpler and more flexible, so I would go there if the hardware allows.
That's what I was trying to say but words failed me to explain it clearly maybe :(
But back to the IPIs. We need infrastructure and DT support to:
reserve an IPI
send an IPI
request/free an IPI
#1 We have no infrastructure for that, but we definitely need one.
We can look at the IPI as a single linux irq number which is replicated on all cpu cores. The replication can happen in hardware or by software, but that depends on the underlying root irq controller. How that is implemented does not matter for the reservation.

The most flexible and platform independent solution would be to describe the IPI space as a separate irq domain. In most cases this would be a hierarchical domain stacked on the root irq domain:

[IPI-domain] --> [GIC-MIPS-domain]

on x86 this would be:

[IPI-domain] --> [vector-domain]

That needs some change how the IPIs which are used by the kernel (rescheduling, function call ..) are set up, but we get a proper management and collision avoidance that way. Depending on the platform we could actually remove the whole IPI compile time reservation and hand out IPIs at boot time on demand and dynamically.

So the reservation function would be something like:

unsigned int irq_reserve_ipi(const struct cpumask *dest, void *devid);

@dest contains the possible targets for the IPI. So for generic linux IPIs this would be cpu_possible_mask. For your coprocessor the target would be a cpumask with just the bit of the coprocessor core set. If you need to use an IPI for sending an interrupt from the coprocessor to a specific linux core then @dest will contain just that target cpu.

@devid is stored in the IPI domain for sanity checks during operation.

The function returns a linux irq number or 0 if allocation fails.

We need a complementary interface as well, so you can hand back the IPI to the core when the coprocessor is disabled:

void irq_destroy_ipi(unsigned int irq, void *devid);

To configure your coprocessor proper, we need a translation mechanism from the linux interrupt number to the magic value which needs to be written into the trigger register when the coprocessor wants to send an interrupt or an IPI.

int irq_get_irq_hwcfg(unsigned int irq, struct irq_hwcfg *cfg);

struct irq_hwcfg needs to be defined, but it might look like this:

{
	/* Generic fields */
	x;
	...
	union {
		mips_gic;
		...
	};
};
The actual hw specific value(s) need to be filled in from the irq domain specific code.
#2 We have no generic mechanism for that either.
Something like this is needed:

void irq_send_ipi(unsigned int irq, const struct cpumask *dest, void *devid);

@dest is for generic linux IPIs and can be NULL so the IPI is sent to the core(s) which have been handed in at reservation time

@devid is used to sanity check the driver call.

So that finally will call down via a irq chip callback into the code which sends the IPI.
#3 Now you get lucky, because we actually have an interface for this
request_percpu_irq()
free_percpu_irq()
disable_percpu_irq()
enable_percpu_irq()

Though there is a caveat. enable/disable_percpu_irq() must be called from the target cpu, but that should be a solvable problem.

And at the IPI-domain side we need sanity checks whether the cpu from which enable/disable is called is actually configured in the reservation mask.

There are a few other nasty details, but that's not important for the big picture.

As I said above, I really would recommend to avoid that if possible because a bog standard device interrupt is way simpler to deal with.
Agreed. Something for us to think about and consider. But even if not for AXD, this kind of mechanism is important for us for other reasons, so we probably want to see it through.
That's certainly not the quick and dirty solution you are looking for, but exposing IPIs to drivers by anything else than a well thought out infrastructure is not going to happen.
Thanks a lot for the detailed explanation. I wasn't looking for a quick and dirty solution but my view of the problem is much simpler than yours so my idea of a solution would look quick and dirty. I have a better appreciation of the problem now and a way to approach it :-)
From DT point of view are we OK with this form then
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupt-sink = <&intc INT_SPEC CPU_HWAFFINITY>;
};
and if the root controller sends a normal IPI in the same way as it sends normal device interrupts, then interrupt-sink can be a standard interrupts property (like in my case)
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupts = <INT_SPEC>;
};
Does this look right to you? Is there something else that needs to be covered still?
One more thing I can think of now is that the coprocessor will need the raw irq numbers that are picked by linux so that it can use them to trigger the IPI. Are we ok to add a function that returns this raw irq number (as opposed to linux irq number) directly from DT? The way this is communicated to the coprocessor will be platform specific.
Thanks, Qais
Thanks,
tglx
On Fri, 28 Aug 2015, Qais Yousef wrote:
Thanks a lot for the detailed explanation. I wasn't looking for a quick and dirty solution but my view of the problem is much simpler than yours so my idea of a solution would look quick and dirty. I have a better appreciation of the problem now and a way to approach it :-)
From DT point of view are we OK with this form then
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupt-sink = <&intc INT_SPEC CPU_HWAFFINITY>;
};
and if the root controller sends a normal IPI in the same way as it sends normal device interrupts, then interrupt-sink can be a standard interrupts property (like in my case)
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupts = <INT_SPEC>;
};
Does this look right to you? Is there something else that needs to be covered still?
I'm not a DT wizard. I leave that to the DT experts.
One more thing I can think of now is that the coprocessor will need the raw irq numbers that are picked by linux so that it can use them to trigger the IPI. Are we ok to add a function that returns this raw irq number (as opposed to linux irq number) directly from DT? The way this is communicated to the coprocessor will be platform specific.
Why do you want that to be hacked into DT?
To configure your coprocessor proper, we need a translation mechanism from the linux interrupt number to the magic value which needs to be written into the trigger register when the coprocessor wants to send an interrupt or an IPI.

int irq_get_irq_hwcfg(unsigned int irq, struct irq_hwcfg *cfg);

struct irq_hwcfg needs to be defined, but it might look like this:

{
	/* Generic fields */
	x;
	...
	union {
		mips_gic;
		...
	};
};
That function provides you the information which you have to hand over to your coprocessor firmware.
Thanks,
tglx
On 08/28/2015 03:22 PM, Thomas Gleixner wrote:
To configure your coprocessor proper, we need a translation mechanism from the linux interrupt number to the magic value which needs to be written into the trigger register when the coprocessor wants to send an interrupt or an IPI.

int irq_get_irq_hwcfg(unsigned int irq, struct irq_hwcfg *cfg);

struct irq_hwcfg needs to be defined, but it might look like this:

{
	/* Generic fields */
	x;
	...
	union {
		mips_gic;
		...
	};
};
That function provides you the information which you have to hand over to your coprocessor firmware.
Of course!
* me slapping myself on the back *
Thanks, Qais
On 08/28/2015 03:22 PM, Thomas Gleixner wrote:
On Fri, 28 Aug 2015, Qais Yousef wrote:
Thanks a lot for the detailed explanation. I wasn't looking for a quick and dirty solution but my view of the problem is much simpler than yours so my idea of a solution would look quick and dirty. I have a better appreciation of the problem now and a way to approach it :-)
From DT point of view are we OK with this form then
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupt-sink = <&intc INT_SPEC CPU_HWAFFINITY>;
};
and if the root controller sends a normal IPI in the same way as it sends normal device interrupts, then interrupt-sink can be a standard interrupts property (like in my case)
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupts = <INT_SPEC>;
};
Does this look right to you? Is there something else that needs to be covered still?
I'm not a DT wizard. I leave that to the DT experts.
Hi Marc Zyngier, Mark Rutland,
Any comments about the DT binding for the IPIs?
To recap, the proposal which is based on Marc Zyngier's is to use interrupt-source to represent an IPI from Linux CPU to a coprocessor and interrupt-sink to receive an IPI from coprocessor to Linux CPU. Hopefully the description above is self explanatory. Please let me know if you need more info. Thomas covered the routing, synthesising, and requesting parts in the core code. The remaining (high level) issue is how to describe the IPIs in DT.
Thanks, Qais
On 02/09/15 10:33, Qais Yousef wrote:
On 08/28/2015 03:22 PM, Thomas Gleixner wrote:
On Fri, 28 Aug 2015, Qais Yousef wrote:
Thanks a lot for the detailed explanation. I wasn't looking for a quick and dirty solution but my view of the problem is much simpler than yours so my idea of a solution would look quick and dirty. I have a better appreciation of the problem now and a way to approach it :-)
From DT point of view are we OK with this form then
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupt-sink = <&intc INT_SPEC CPU_HWAFFINITY>;
};
and if the root controller sends a normal IPI in the same way as it sends normal device interrupts, then interrupt-sink can be a standard interrupts property (like in my case)
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupts = <INT_SPEC>;
};
Does this look right to you? Is there something else that needs to be covered still?
I'm not a DT wizard. I leave that to the DT experts.
Hi Marc Zyngier, Mark Rutland,
Any comments about the DT binding for the IPIs?
To recap, the proposal which is based on Marc Zyngier's is to use interrupt-source to represent an IPI from Linux CPU to a coprocessor and interrupt-sink to receive an IPI from coprocessor to Linux CPU. Hopefully the description above is self explanatory. Please let me know if you need more info. Thomas covered the routing, synthesising, and requesting parts in the core code. The remaining (high level) issue is how to describe the IPIs in DT.
I'm definitely *not* a DT expert! ;-) My initial binding proposal was only for wired interrupts, not for IPIs. There are definitely some common aspects, except for one part:
Who decides on the IPI number? So far, we've avoided encoding IPI numbers in the DT just like we don't encode MSIs, because they are programmable things. My feeling is that we shouldn't put the IPI number in the DT because the rest of the kernel uses them as well and could decide to use this particular IPI number for its own use: *clash*.
The way I see it would be to have a pool of IPI numbers that the kernel requests for its own use first, leaving whatever remains to drivers.
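Purely to picture that pool idea (not an existing kernel interface): the kernel would claim its fixed IPIs in such a bitmap at boot, and drivers would allocate from whatever is left. Locking is omitted for brevity:

#include <linux/bitmap.h>
#include <linux/errno.h>

#define NR_HW_IPIS	8	/* assumed size of the hardware IPI space */

static DECLARE_BITMAP(ipi_pool, NR_HW_IPIS);

/* Claim the next free IPI number, or -ENOSPC if the pool is exhausted. */
static int ipi_pool_alloc(void)
{
	int nr = find_first_zero_bit(ipi_pool, NR_HW_IPIS);

	if (nr >= NR_HW_IPIS)
		return -ENOSPC;
	__set_bit(nr, ipi_pool);
	return nr;
}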
Mark (as *you* are the expert ;-), what do you think?
M.
On 09/02/2015 10:55 AM, Marc Zyngier wrote:
On 02/09/15 10:33, Qais Yousef wrote:
On 08/28/2015 03:22 PM, Thomas Gleixner wrote:
On Fri, 28 Aug 2015, Qais Yousef wrote:
Thanks a lot for the detailed explanation. I wasn't looking for a quick and dirty solution but my view of the problem is much simpler than yours so my idea of a solution would look quick and dirty. I have a better appreciation of the problem now and a way to approach it :-)
From DT point of view are we OK with this form then
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupt-sink = <&intc INT_SPEC CPU_HWAFFINITY>;
};
and if the root controller sends a normal IPI in the same way as it sends normal device interrupts, then interrupt-sink can be a standard interrupts property (like in my case)
coprocessor {
	interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
	interrupts = <INT_SPEC>;
};
Does this look right to you? Is there something else that needs to be covered still?
I'm not a DT wizard. I leave that to the DT experts.
Hi Marc Zyngier, Mark Rutland,
Any comments about the DT binding for the IPIs?
To recap, the proposal which is based on Marc Zyngier's is to use interrupt-source to represent an IPI from Linux CPU to a coprocessor and interrupt-sink to receive an IPI from coprocessor to Linux CPU. Hopefully the description above is self explanatory. Please let me know if you need more info. Thomas covered the routing, synthesising, and requesting parts in the core code. The remaining (high level) issue is how to describe the IPIs in DT.
I'm definitely *not* a DT expert! ;-) My initial binding proposal was only for wired interrupts, not for IPIs. There are definitely some common aspects, except for one part:
Who decides on the IPI number? So far, we've avoided encoding IPI numbers in the DT just like we don't encode MSIs, because they are programmable things. My feeling is that we shouldn't put the IPI number in the DT because the rest of the kernel uses them as well and could decide to use this particular IPI number for its own use: *clash*.
I think this is covered by Thomas's proposal to reserve IPIs. His thinking is to use a separate irq domain for IPIs and use irq_reserve_ipi() and irq_destroy_ipi() to get and release IPIs.
The way I see it would be to have a pool of IPI numbers that the kernel requests for its own use first, leaving whatever remains to drivers.
That's what Thomas thinks too and he covered this by using irq_reserve_ipi() and irq_destroy_ipi().
https://lkml.org/lkml/2015/8/26/713
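To make that concrete, a coprocessor driver using the proposed interface might end up doing something roughly like this. This is only a sketch: irq_reserve_ipi() and its return convention are taken loosely from the linked proposal and may differ in whatever gets merged; cop_request_ipi() is a made-up helper name.

	#include <linux/cpumask.h>
	#include <linux/device.h>
	#include <linux/interrupt.h>
	#include <linux/irqdomain.h>

	/* Reserve an IPI for the coprocessor and hook up its CPU-bound direction. */
	static int cop_request_ipi(struct device *dev, struct irq_domain *ipi_domain,
				   irq_handler_t handler, void *cookie)
	{
		int virq;

		/* let the core pick a free IPI instead of hard-coding one in DT */
		virq = irq_reserve_ipi(ipi_domain, cpumask_of(0));
		if (virq <= 0)
			return -ENOSPC;

		/* coprocessor -> CPU direction: handled like any other interrupt */
		return devm_request_irq(dev, virq, handler, 0, dev_name(dev), cookie);
	}

The point being that the IPI number never appears in DT; it falls out of the reservation call at runtime.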
It's worth noting, in light of this, that INT_SPEC should be optional: for hardware similar to mine there's not much to tell the controller if it's all dynamic, except where we want the IPI to be routed to; the INT_SPEC is implicitly defined by the fact that it's an IPI.
Thanks, Qais
Mark (as *you* are the expert ;-), what do you think?
M.
On 02/09/15 11:48, Qais Yousef wrote:
On 09/02/2015 10:55 AM, Marc Zyngier wrote:
On 02/09/15 10:33, Qais Yousef wrote:
On 08/28/2015 03:22 PM, Thomas Gleixner wrote:
On Fri, 28 Aug 2015, Qais Yousef wrote:
Thanks a lot for the detailed explanation. I wasn't looking for a quick and dirty solution but my view of the problem is much simpler than yours so my idea of a solution would look quick and dirty. I have a better appreciation of the problem now and a way to approach it :-)
From a DT point of view, are we OK with this form then

	coprocessor {
		interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
		interrupt-sink = <&intc INT_SPEC CPU_HWAFFINITY>;
	};

and, if the root controller sends a normal IPI the same way it sends normal device interrupts, then interrupt-sink can be a standard interrupts property (like in my case)

	coprocessor {
		interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
		interrupts = <INT_SPEC>;
	};
Does this look right to you? Is there something else that needs to be covered still?
I'm not a DT wizard. I leave that to the DT experts.
Hi Marc Zyngier, Mark Rutland,
Any comments about the DT binding for the IPIs?
To recap, the proposal, which is based on Marc Zyngier's, is to use interrupt-source to represent an IPI from a Linux CPU to a coprocessor and interrupt-sink to receive an IPI from the coprocessor to a Linux CPU. Hopefully the description above is self-explanatory. Please let me know if you need more info. Thomas covered the routing, synthesising, and requesting parts in the core code. The remaining (high-level) issue is how to describe the IPIs in DT.
I'm definitely *not* a DT expert! ;-) My initial binding proposal was only for wired interrupts, not for IPIs. There are definitely some common aspects, except for one part:
Who decides on the IPI number? So far, we've avoided encoding IPI numbers in the DT just like we don't encode MSIs, because they are programmable things. My feeling is that we shouldn't put the IPI number in the DT because the rest of the kernel uses them as well and could decide to use this particular IPI number for its own use: *clash*.
I think this is covered by Thomas's proposal to reserve IPIs. His thinking is to use a separate irq domain for IPIs and use irq_reserve_ipi() and irq_destroy_ipi() to get and release IPIs.
The way I see it would be to have a pool of IPI numbers that the kernel requests for its own use first, leaving whatever remains to drivers.
That's what Thomas thinks too and he covered this by using irq_reserve_ipi() and irq_destroy_ipi().
https://lkml.org/lkml/2015/8/26/713
Ah, I missed that, sorry for the noise. This looks very sensible.
It's worth noting, in light of this, that INT_SPEC should be optional: for hardware similar to mine there's not much to tell the controller if it's all dynamic, except where we want the IPI to be routed to; the INT_SPEC is implicitly defined by the fact that it's an IPI.
Well, I'd think that the INT_SPEC should say that it is an IPI, and I don't believe we should omit it. On the ARM GIC side, our interrupts are typed (type 0 is a normal wired interrupt, type 1 a per-cpu interrupt, and we could allocate type 2 to identify an IPI).
But we do need to identify it properly, as we should be able to cover both IPIs and normal wired interrupts.
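To make the typed-specifier idea concrete, an IPI specifier on the GIC could then look something like the following. This is purely illustrative: the type value 2 and the overall shape are hypothetical, carried over from the existing GIC convention of 0 = SPI and 1 = PPI.

	coprocessor {
		/* <type number flags>; 0 = SPI, 1 = PPI today, 2 = IPI would be new */
		interrupt-source = <&gic 2 0 IRQ_TYPE_EDGE_RISING>;
	};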
Thanks,
M.
On 09/02/2015 12:53 PM, Marc Zyngier wrote:
On 02/09/15 11:48, Qais Yousef wrote:
It's worth noting, in light of this, that INT_SPEC should be optional: for hardware similar to mine there's not much to tell the controller if it's all dynamic, except where we want the IPI to be routed to; the INT_SPEC is implicitly defined by the fact that it's an IPI.
Well, I'd think that the INT_SPEC should say that it is an IPI, and I don't believe we should omit it. On the ARM GIC side, our interrupts are typed (type 0 is a normal wired interrupt, type 1 a per-cpu interrupt, and we could allocate type 2 to identify an IPI).
I didn't mean to omit it completely, just to make it optional so that it's only specified if the intc needs this info. I'm assuming that INT_SPEC is interrupt-controller specific. If not, then ignore me :-)
But we do need to identify it properly, as we should be able to cover both IPIs and normal wired interrupts.
I'm a bit confused here. What do you mean by normal wired interrupts? I thought this DT binding was only to describe IPIs that need reserving and routing. What am I missing?
Thanks, Qais
On 02/09/15 14:25, Qais Yousef wrote:
On 09/02/2015 12:53 PM, Marc Zyngier wrote:
On 02/09/15 11:48, Qais Yousef wrote:
It's worth noting, in light of this, that INT_SPEC should be optional: for hardware similar to mine there's not much to tell the controller if it's all dynamic, except where we want the IPI to be routed to; the INT_SPEC is implicitly defined by the fact that it's an IPI.
Well, I'd think that the INT_SPEC should say that it is an IPI, and I don't believe we should omit it. On the ARM GIC side, our interrupts are typed (type 0 is a normal wired interrupt, type 1 a per-cpu interrupt, and we could allocate type 2 to identify an IPI).
I didn't mean to omit it completely, just to make it optional so that it's only specified if the intc needs this info. I'm assuming that INT_SPEC is interrupt-controller specific. If not, then ignore me :-)
It is, but I don't think it can really be made optional.
But we do need to identify it properly, as we should be able to cover both IPIs and normal wired interrupts.
I'm a bit confused here. What do you mean by normal wired interrupts? I thought this DT binding was only to describe IPIs that need reserving and routing. What am I missing?
Look at my initial proposal, and the way I was describing a device having an interrupt source, and two possible interrupt sinks, one being a CPU and the other being another device.
I'm looking at solving that case as well, possibly with the same infrastructure (the routing bit should be the same).
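For illustration only, the wired case being described might be drawn roughly like this with the properties discussed above. This is not an agreed binding, just the topology: one device raising an interrupt, with either a CPU or another device as the sink (DEV_HWAFFINITY is a made-up placeholder here).

	device_a {
		/* the interrupt this device raises */
		interrupt-source = <&intc INT_SPEC>;
	};

	device_b {
		/* consume device_a's interrupt instead of routing it to a CPU */
		interrupt-sink = <&intc INT_SPEC DEV_HWAFFINITY>;
	};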
Thanks,
M.
On Wed, Sep 02, 2015 at 10:55:20AM +0100, Marc Zyngier wrote:
On 02/09/15 10:33, Qais Yousef wrote:
On 08/28/2015 03:22 PM, Thomas Gleixner wrote:
On Fri, 28 Aug 2015, Qais Yousef wrote:
Thanks a lot for the detailed explanation. I wasn't looking for a quick and dirty solution but my view of the problem is much simpler than yours so my idea of a solution would look quick and dirty. I have a better appreciation of the problem now and a way to approach it :-)
From a DT point of view, are we OK with this form then

	coprocessor {
		interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
		interrupt-sink = <&intc INT_SPEC CPU_HWAFFINITY>;
	};

and, if the root controller sends a normal IPI the same way it sends normal device interrupts, then interrupt-sink can be a standard interrupts property (like in my case)

	coprocessor {
		interrupt-source = <&intc INT_SPEC COP_HWAFFINITY>;
		interrupts = <INT_SPEC>;
	};
Does this look right to you? Is there something else that needs to be covered still?
I'm not a DT wizard. I leave that to the DT experts.
Hi Marc Zyngier, Mark Rutland,
Any comments about the DT binding for the IPIs?
To recap, the proposal, which is based on Marc Zyngier's, is to use interrupt-source to represent an IPI from a Linux CPU to a coprocessor and interrupt-sink to receive an IPI from the coprocessor to a Linux CPU. Hopefully the description above is self-explanatory. Please let me know if you need more info. Thomas covered the routing, synthesising, and requesting parts in the core code. The remaining (high-level) issue is how to describe the IPIs in DT.
I'm definitely *not* a DT expert! ;-) My initial binding proposal was only for wired interrupts, not for IPIs. There are definitely some common aspects, except for one part:
Who decides on the IPI number? So far, we've avoided encoding IPI numbers in the DT just like we don't encode MSIs, because they are programmable things. My feeling is that we shouldn't put the IPI number in the DT because the rest of the kernel uses them as well and could decide to use this particular IPI number for its own use: *clash*.
Agree. The best way I've found to design DT bindings is to imagine providing the DT to something other than Linux. The DT should *only* be describing the hardware. As such, I think we should be describing the connection here, and leaving the assignment up to the OS.
thx,
Jason.
On Mon, 24 Aug 2015, Qais Yousef wrote:
On 08/24/2015 01:49 PM, Thomas Gleixner wrote:
On Mon, 24 Aug 2015, Qais Yousef wrote:
Some drivers might need to send IPIs to other cores, so export it.
Which IPIs do you need to send from a driver which are not exposed by the SMP functions already?
It's not an SMP IPI. We use GIC to exchange interrupts between AXD and the host system since AXD is another MIPS core in the cluster.
So that should have been in the changelog to begin with.
This will be used later by AXD driver.
That smells fishy and it wants a proper explanation WHY and not just a sloppy statement that it will be used later. I can figure that out myself as exporting a function without using it does not make any sense.
Sorry for the terse explanation. As pointed out above, AXD uses the GIC to send interrupts to and receive interrupts from the host core. Without this change I can't build the driver as a module because the symbol is not exported.
Really? Exporting it solves that problem then. That's interesting news for me.
Thanks,
tglx
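For context, the change being debated amounts to a one-line export in the MIPS GIC driver, roughly as below; the exact export macro used by the patch isn't quoted in this thread, and axd_gic_irq is just a placeholder name for the hwirq the AXD driver would kick.

	/* drivers/irqchip/irq-mips-gic.c -- the one-line addition (export flavour assumed) */
	EXPORT_SYMBOL(gic_send_ipi);

	/* which lets a module such as the AXD driver do: */
	gic_send_ipi(axd_gic_irq);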
On 08/24/2015 03:55 PM, Thomas Gleixner wrote:
On Mon, 24 Aug 2015, Qais Yousef wrote:
On 08/24/2015 01:49 PM, Thomas Gleixner wrote:
On Mon, 24 Aug 2015, Qais Yousef wrote:
Some drivers might need to send IPIs to other cores, so export it.
Which IPIs do you need to send from a driver which are not exposed by the SMP functions already?
It's not an SMP IPI. We use GIC to exchange interrupts between AXD and the host system since AXD is another MIPS core in the cluster.
So that should have been in the changelog to begin with.
OK sorry for the confusion. I'll amend the changelog and be more careful in the future.
Thanks, Qais
This will be used later by AXD driver.
That smells fishy and it wants a proper explanation WHY and not just a sloppy statement that it will be used later. I can figure that out myself as exporting a function without using it does not make any sense.
Sorry for the terse explanation. As pointed out above, AXD uses the GIC to send interrupts to and receive interrupts from the host core. Without this change I can't build the driver as a module because the symbol is not exported.
Really? Exporting it solves that problem then. That's interesting news for me.
Thanks,
tglx
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Rob Herring robh+dt@kernel.org Cc: Pawel Moll pawel.moll@arm.com Cc: Mark Rutland mark.rutland@arm.com Cc: Ian Campbell ijc+devicetree@hellion.org.uk Cc: Kumar Gala galak@codeaurora.org Cc: devicetree@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- .../devicetree/bindings/sound/img,axd.txt | 34 ++++++++++++++++++++++ 1 file changed, 34 insertions(+) create mode 100644 Documentation/devicetree/bindings/sound/img,axd.txt
diff --git a/Documentation/devicetree/bindings/sound/img,axd.txt b/Documentation/devicetree/bindings/sound/img,axd.txt
new file mode 100644
index 000000000000..6a8764a79d01
--- /dev/null
+++ b/Documentation/devicetree/bindings/sound/img,axd.txt
@@ -0,0 +1,34 @@
+* AXD Audio Processing IP Binding *
+
+Required properties:
+- compatible: "img,axd"
+- clocks: phandle for the clock that drives AXD.
+- interrupts: the GIC interrupt where AXD is connected
+- gic-irq: it takes two non-zero values, the first one is the host hwirq and
+  the second one is AXD's. Host's hwirq should match the value in
+  interrupts.
+
+Optional properties:
+- vpe: VPE number on which AXD should start. Must be provided if AXD is
+  running as a single VPE along Linux on the same core.
+  It can't be VPE0.
+  The VPE will be offlined before AXD is loaded.
+- inbuf-size: size of shared input buffers area. By default it's 0x7800 bytes.
+- outbuf-size: size of shared output buffers area. By default it's 0x3c000 bytes.
+
+
+Example:
+
+	axdclk: axdclk@400M {
+		#clock-cells = <0>;
+		compatible = "fixed-clock";
+		clock-frequency = <400000000>;
+	};
+
+	axd {
+		compatible = "img,axd";
+		clocks = <&axdclk>;
+		interrupts = <36 IRQ_TYPE_EDGE_RISING>;
+		gic-irq = <36 37>;
+		vpe = <1>;
+	};
On Mon, Aug 24, 2015 at 01:39:11PM +0100, Qais Yousef wrote:
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Rob Herring robh+dt@kernel.org Cc: Pawel Moll pawel.moll@arm.com Cc: Mark Rutland mark.rutland@arm.com Cc: Ian Campbell ijc+devicetree@hellion.org.uk Cc: Kumar Gala galak@codeaurora.org Cc: devicetree@vger.kernel.org Cc: linux-kernel@vger.kernel.org
.../devicetree/bindings/sound/img,axd.txt | 34 ++++++++++++++++++++++ 1 file changed, 34 insertions(+) create mode 100644 Documentation/devicetree/bindings/sound/img,axd.txt
diff --git a/Documentation/devicetree/bindings/sound/img,axd.txt b/Documentation/devicetree/bindings/sound/img,axd.txt new file mode 100644 index 000000000000..6a8764a79d01 --- /dev/null +++ b/Documentation/devicetree/bindings/sound/img,axd.txt @@ -0,0 +1,34 @@ +* AXD Audio Processing IP Binding *
+Required properties: +- compatible: "img,axd"
This sounds awfully generic. Is there not a more complete name?
+- clocks: phandle for the clock that drives AXD. +- interrupts: the GIC interrupt where AXD is connected +- gic-irq: it takes two non-zero values, the first one is the host hwirq and
the second one is AXD's. Host's hwirq should match the value in
interrupts.
I don't understand what this gic-irq property is for, and it generally doesn't look right.
Could you please describe what this is and why you think it is necessary?
+Optional properties: +- vpe: VPE number on which AXD should start. Must be provided if AXD is
running as a single VPE along Linux on the same core.
It can't be VPE0.
The VPE will be offlined before AXD is loaded.
Likewise, could you please elaborate on what this is?
What is a VPE number? What does it mean to start at that number?
+- inbuf-size: size of shared input buffers area. By default it's 0x7800 bytes. +- outbuf-size: size of shared output buffers area. By default it's 0x3c000 bytes.
Is this something the kernel dynamically allocates? Why does this need to be in the DT?
Thanks, Mark.
+Example:
	axdclk: axdclk@400M {
		#clock-cells = <0>;
		compatible = "fixed-clock";
		clock-frequency = <400000000>;
	};

	axd {
		compatible = "img,axd";
		clocks = <&axdclk>;
		interrupts = <36 IRQ_TYPE_EDGE_RISING>;
		gic-irq = <36 37>;
		vpe = <1>;
	};
-- 2.1.0
On 08/24/2015 02:26 PM, Mark Rutland wrote:
On Mon, Aug 24, 2015 at 01:39:11PM +0100, Qais Yousef wrote:
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Rob Herring robh+dt@kernel.org Cc: Pawel Moll pawel.moll@arm.com Cc: Mark Rutland mark.rutland@arm.com Cc: Ian Campbell ijc+devicetree@hellion.org.uk Cc: Kumar Gala galak@codeaurora.org Cc: devicetree@vger.kernel.org Cc: linux-kernel@vger.kernel.org
.../devicetree/bindings/sound/img,axd.txt | 34 ++++++++++++++++++++++ 1 file changed, 34 insertions(+) create mode 100644 Documentation/devicetree/bindings/sound/img,axd.txt
diff --git a/Documentation/devicetree/bindings/sound/img,axd.txt b/Documentation/devicetree/bindings/sound/img,axd.txt new file mode 100644 index 000000000000..6a8764a79d01 --- /dev/null +++ b/Documentation/devicetree/bindings/sound/img,axd.txt @@ -0,0 +1,34 @@ +* AXD Audio Processing IP Binding *
+Required properties: +- compatible: "img,axd"
This sounds awfully generic. Is there not a more complete name?
Shouldn't the img prefix help against this? We could certainly use another name, though, like "img,axd-audio-decoder" or something similar. I'll need to check.
+- clocks: phandle for the clock that drives AXD. +- interrupts: the GIC interrupt where AXD is connected +- gic-irq: it takes two non-zero values, the first one is the host hwirq and
the second one is AXD's. Host's hwirq should match the value in
interrupts.
I don't understand what this gic-irq property is for, and it generally doesn't look right.
Could you please describe what this is and why you think it is necessary?
AXD and the host cores exchange interrupts using the GIC interrupt controller. To configure the AXD firmware to send and listen on the correct GIC interrupts we need both values, hence this property.
If there's a way to reverse the irq mapping of the 'interrupts' property from driver code, we can get rid of that and have two interrupts properties instead. But as far as I can see there's no way to get the hw irq value from the mapped Linux irq, unless I missed something.
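For reference, the hw irq does appear to be recoverable from the mapped Linux irq via its irq_data, along the lines below; whether a driver should rely on this instead of a dedicated property is a separate question, and axd_virq_to_hwirq() is just a sketch of a helper, not existing driver code.

	#include <linux/irq.h>

	/* map a Linux virq back to the controller's hwirq number */
	static irq_hw_number_t axd_virq_to_hwirq(unsigned int virq)
	{
		struct irq_data *d = irq_get_irq_data(virq);

		return d ? irqd_to_hwirq(d) : 0;
	}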
+Optional properties: +- vpe: VPE number on which AXD should start. Must be provided if AXD is
running as a single VPE along Linux on the same core.
It can't be VPE0.
The VPE will be offlined before AXD is loaded.
Likewise, could you please elaborate on what this is?
What is a VPE number? What does it mean to start at that number?
VPE is the MIPS term for a hardware thread. Instead of being a completely separate core, AXD can run as a hardware thread (VPE) inside a Linux core. This number indicates which hardware thread to run AXD on.
+- inbuf-size: size of shared input buffers area. By default it's 0x7800 bytes. +- outbuf-size: size of shared output buffers area. By default it's 0x3c000 bytes.
Is this something the kernel dynamically allocates? Why does this need to be in the DT?
We use CMA to allocate this buffer area. These optional properties are there to give the user a chance to use larger or smaller buffers if they think they need to. The buffer area is shared between Linux and AXD to exchange data.
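For example, a board that wants bigger buffers would simply override the defaults in its node, along these lines (the sizes below are arbitrary):

	axd {
		compatible = "img,axd";
		clocks = <&axdclk>;
		interrupts = <36 IRQ_TYPE_EDGE_RISING>;
		gic-irq = <36 37>;
		inbuf-size = <0xf000>;
		outbuf-size = <0x78000>;
	};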
Thanks, Qais
Thanks, Mark.
+Example:
	axdclk: axdclk@400M {
		#clock-cells = <0>;
		compatible = "fixed-clock";
		clock-frequency = <400000000>;
	};

	axd {
		compatible = "img,axd";
		clocks = <&axdclk>;
		interrupts = <36 IRQ_TYPE_EDGE_RISING>;
		gic-irq = <36 37>;
		vpe = <1>;
	};
-- 2.1.0
AXD is an audio processing IP by Imagination Technologies that can decode multiple file formats and play them back. We use the ALSA Compress Offload API to expose our audio driver.
This patch adds defs and initialisation files.
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Liam Girdwood lgirdwood@gmail.com Cc: Mark Brown broonie@kernel.org Cc: Jaroslav Kysela perex@perex.cz Cc: Takashi Iwai tiwai@suse.com Cc: linux-kernel@vger.kernel.org --- sound/soc/img/axd/axd_api.h | 649 +++++++++++++++++++++++++++++++++++ sound/soc/img/axd/axd_module.c | 742 +++++++++++++++++++++++++++++++++++++++++ sound/soc/img/axd/axd_module.h | 83 +++++ 3 files changed, 1474 insertions(+) create mode 100644 sound/soc/img/axd/axd_api.h create mode 100644 sound/soc/img/axd/axd_module.c create mode 100644 sound/soc/img/axd/axd_module.h
diff --git a/sound/soc/img/axd/axd_api.h b/sound/soc/img/axd/axd_api.h new file mode 100644 index 000000000000..316b7bcf8626 --- /dev/null +++ b/sound/soc/img/axd/axd_api.h @@ -0,0 +1,649 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * Main API to the AXD for access from the host. + */ +#ifndef AXD_API_H_ +#define AXD_API_H_ + +#include <linux/types.h> + + +#define THREAD_COUNT 4 +#define AXD_MAX_PIPES 3 + + +#define AXD_DESCRIPTOR_READY_BIT 0x80000000 +#define AXD_DESCRIPTOR_INUSE_BIT 0x40000000 +#define AXD_DESCRIPTOR_EOS_BIT 0x20000000 +#define AXD_DESCRIPTOR_SIZE_MASK 0x0000FFFF + +struct axd_buffer_desc { + uint32_t status_size; + uint32_t data_ptr; + uint32_t pts_high; + uint32_t pts_low; +}; + +#define AXD_INPUT_DESCRIPTORS 10 +struct axd_input { + struct axd_buffer_desc descriptors[AXD_INPUT_DESCRIPTORS]; +}; + +#define AXD_OUTPUT_DESCRIPTORS 10 +struct axd_output { + struct axd_buffer_desc descriptors[AXD_OUTPUT_DESCRIPTORS]; +}; + +struct axd_ctrlbuf_item { + uint32_t reg; + uint32_t val; +}; + +/** + * struct axd_memory_map - axd memory mapped region + * @kick: kick register holds the type of kick to process + * @int_status: interrupt status register + * @int_mask: interrupt mask register + * @in_kick_count: array of number of input kicks to process + * @in_int_count: array of number of input interrupts to process + * @out_kick_count: array of number of output kicks to process + * @out_int_count: array of number of output interrupts to process + * @control_command: this register contains the command type to process + * @control_data: this register contains the command data to process + * @pc: starting pc value of each hardware thread + * @error: last error value + * @gic_irq: which gic irqs to use for host and axd in this format: + * host_gic_irq[31:16]:axd_gic_irq[15:0] + * @freq: count/compare clock frequency in MHz + * @input: array of struct axd_input which holds the descriptors + * @output: array of struct axd_output which holds the descriptors + * @ctrlbuf_size: size of control buffer used to group multiple + * configurations changes into a single request + * @ctrlbuf_ctrl: position of ctrlbuf requests + * @ctrlbuf: the actual control buffer used to group requests + * size of which is defined by the firmware + */ +struct axd_memory_map { + uint32_t kick; + uint32_t int_status; + uint32_t int_mask; + uint32_t in_kick_count[AXD_MAX_PIPES]; + uint32_t in_int_count[AXD_MAX_PIPES]; + uint32_t out_kick_count[AXD_MAX_PIPES]; + uint32_t out_int_count[AXD_MAX_PIPES]; + uint32_t control_command; + uint32_t control_data; + uint32_t pc[THREAD_COUNT]; + uint32_t error; + uint32_t gic_irq; + uint32_t freq; + uint32_t reserved01[0x04]; + struct axd_input input[AXD_MAX_PIPES]; + struct axd_output output[AXD_MAX_PIPES]; + uint32_t reserved02[40]; + uint32_t reserved03[12]; + uint32_t ctrlbuf_size; + uint32_t ctrlbuf_ctrl; + struct axd_ctrlbuf_item ctrlbuf[]; +}; + +#define AXD_ANY_KICK_BIT 0x80000000 +#define AXD_KICK_MASK 0x0000000F +#define AXD_KICK_CTRL_BIT 0x00000001 +#define AXD_KICK_DATA_IN_BIT 0x00000002 +#define 
AXD_KICK_DATA_OUT_BIT 0x00000004 + +#define AXD_INT_KICK_DONE 0x00000001 +#define AXD_INT_DATAIN 0x00000002 +#define AXD_INT_DATAOUT 0x00000004 +#define AXD_INT_CTRL 0x00000008 +#define AXD_INT_ERROR 0x00000010 + +enum axd_ctrl_cmd { + AXD_CTRL_CMD_NONE = 0, + AXD_CTRL_CMD_BUSY, + AXD_CTRL_CMD_READY, + AXD_CTRL_CMD_FLUSH, + AXD_CTRL_CMD_RESET_BD, + AXD_CTRL_CMD_RESET_PIPE, + AXD_CTRL_CMD_CTRLBUF_FLUSH, + AXD_CTRL_CMD_READ_REGISTER = 0x80000000, /* lower 16bits are address */ + AXD_CTRL_CMD_WRITE_REGISTER = 0xC0000000, /* lower 16bits are address */ +}; + +struct axd_hdr { + uint32_t axd_magic; + uint32_t hdr_size; + uint32_t thread_pc[THREAD_COUNT]; + uint32_t cmd_block_offset; + uint32_t cmd_block_size; + char build_str[64]; + uint32_t log_offset; +}; + +/* Register I/F */ +#define AXD_REG_VERSION 0x0000 +#define AXD_REG_CONFIG0 0x0004 +#define AXD_REG_CONFIG1 0x0008 +#define AXD_REG_CONFIG2 0x000C +#define AXD_REG_CONFIG3 0x0010 +#define AXD_REG_BUFFER_BASE 0x0014 +#define AXD_REG_DEBUG_MASK 0x0018 +/* 0x1c reserved */ +#define AXD_REG_INPUT0_CONTROL 0x0020 +#define AXD_REG_INPUT0_GAIN 0x0024 +#define AXD_REG_INPUT0_UPMIX 0x0028 +#define AXD_REG_INPUT1_CONTROL 0x0030 +#define AXD_REG_INPUT1_GAIN 0x0034 +#define AXD_REG_INPUT1_UPMIX 0x0038 +#define AXD_REG_INPUT2_CONTROL 0x0040 +#define AXD_REG_INPUT2_GAIN 0x0044 +#define AXD_REG_INPUT2_UPMIX 0x0048 +#define AXD_REG_INPUT0_MUTE 0x0050 +#define AXD_REG_INPUT1_MUTE 0x0054 +#define AXD_REG_INPUT2_MUTE 0x0058 +#define AXD_REG_MIXER_CONTROL 0x0080 +#define AXD_REG_EQ_CTRL_GAIN 0x0084 +#define AXD_REG_EQ_BAND0 0x0088 +#define AXD_REG_EQ_BAND1 0x008C +#define AXD_REG_EQ_BAND2 0x0090 +#define AXD_REG_EQ_BAND3 0x0094 +#define AXD_REG_EQ_BAND4 0x0098 +#define AXD_REG_MUX0 0x00B0 +#define AXD_REG_MUX1 0x00B4 +#define AXD_REG_MUX2 0x00B8 +#define AXD_REG_OUTPUT0_CONTROL 0x00D0 +#define AXD_REG_OUTPUT0_DOWNMIX 0x00D4 +#define AXD_REG_OUTPUT0_EQCTRL 0x00D8 +#define AXD_REG_OUTPUT0_EQBAND0 0x00DC +#define AXD_REG_OUTPUT0_EQBAND1 0x00E0 +#define AXD_REG_OUTPUT0_EQBAND2 0x00E4 +#define AXD_REG_OUTPUT0_EQBAND3 0x00E8 +#define AXD_REG_OUTPUT0_EQBAND4 0x00EC +#define AXD_REG_OUTPUT1_CONTROL 0x00F0 +#define AXD_REG_OUTPUT1_DOWNMIX 0x00F4 +#define AXD_REG_OUTPUT1_EQCTRL 0x00F8 +#define AXD_REG_OUTPUT1_EQBAND0 0x00FC +#define AXD_REG_OUTPUT1_EQBAND1 0x0100 +#define AXD_REG_OUTPUT1_EQBAND2 0x0104 +#define AXD_REG_OUTPUT1_EQBAND3 0x0108 +#define AXD_REG_OUTPUT1_EQBAND4 0x010C +#define AXD_REG_OUTPUT2_CONTROL 0x0110 +#define AXD_REG_OUTPUT2_DOWNMIX 0x0114 +#define AXD_REG_OUTPUT2_EQCTRL 0x0118 +#define AXD_REG_OUTPUT2_EQBAND0 0x011C +#define AXD_REG_OUTPUT2_EQBAND1 0x0120 +#define AXD_REG_OUTPUT2_EQBAND2 0x0124 +#define AXD_REG_OUTPUT2_EQBAND3 0x0128 +#define AXD_REG_OUTPUT2_EQBAND4 0x012c +#define AXD_REG_DEC0_AAC_VERSION 0x0200 +#define AXD_REG_DEC0_AAC_CHANNELS 0x0204 +#define AXD_REG_DEC0_AAC_PROFILE 0x0208 +#define AXD_REG_DEC0_AAC_STREAM_TYPE 0x020C +#define AXD_REG_DEC0_AAC_SAMPLERATE 0x0210 +#define AXD_REG_DEC1_AAC_VERSION 0x0220 +#define AXD_REG_DEC1_AAC_CHANNELS 0x0224 +#define AXD_REG_DEC1_AAC_PROFILE 0x0228 +#define AXD_REG_DEC1_AAC_STREAM_TYPE 0x022C +#define AXD_REG_DEC1_AAC_SAMPLERATE 0x0230 +#define AXD_REG_DEC2_AAC_VERSION 0x0240 +#define AXD_REG_DEC2_AAC_CHANNELS 0x0244 +#define AXD_REG_DEC2_AAC_PROFILE 0x0248 +#define AXD_REG_DEC2_AAC_STREAM_TYPE 0x024C +#define AXD_REG_DEC2_AAC_SAMPLERATE 0x0250 +#define AXD_REG_DEC0_COOK_FLAVOUR 0x0260 +#define AXD_REG_DEC1_COOK_FLAVOUR 0x0264 +#define AXD_REG_DEC2_COOK_FLAVOUR 0x0268 +#define 
AXD_REG_DEC0_FLAC_CHANNELS 0x0270 +#define AXD_REG_DEC0_FLAC_SAMPLERATE 0x0274 +#define AXD_REG_DEC0_FLAC_BITS_PER_SAMPLE 0x0278 +#define AXD_REG_DEC0_FLAC_MD5_CHECKING 0x027C +#define AXD_REG_DEC1_FLAC_CHANNELS 0x0280 +#define AXD_REG_DEC1_FLAC_SAMPLERATE 0x0284 +#define AXD_REG_DEC1_FLAC_BITS_PER_SAMPLE 0x0288 +#define AXD_REG_DEC1_FLAC_MD5_CHECKING 0x028C +#define AXD_REG_DEC2_FLAC_CHANNELS 0x0290 +#define AXD_REG_DEC2_FLAC_SAMPLERATE 0x0294 +#define AXD_REG_DEC2_FLAC_BITS_PER_SAMPLE 0x0298 +#define AXD_REG_DEC2_FLAC_MD5_CHECKING 0x029C +#define AXD_REG_DEC0_MPEG_CHANNELS 0x02A0 +#define AXD_REG_DEC0_MPEG_MLCHANNEL 0x02A4 +#define AXD_REG_DEC1_MPEG_CHANNELS 0x02A8 +#define AXD_REG_DEC1_MPEG_MLCHANNEL 0x02AC +#define AXD_REG_DEC2_MPEG_CHANNELS 0x02B0 +#define AXD_REG_DEC2_MPEG_MLCHANNEL 0x02B4 +#define AXD_REG_DEC0_WMA_PLAYER_OPT 0x02D0 +#define AXD_REG_DEC0_WMA_DRC_SETTING 0x02D4 +#define AXD_REG_DEC0_WMA_PEAK_AMP_REF 0x02D8 +#define AXD_REG_DEC0_WMA_RMS_AMP_REF 0x02DC +#define AXD_REG_DEC0_WMA_PEAK_AMP_TARGET 0x02E0 +#define AXD_REG_DEC0_WMA_RMS_AMP_TARGET 0x02E4 +#define AXD_REG_DEC0_WMA_PCM_VAL_BITS_PER_SAMPLE 0x02F4 +#define AXD_REG_DEC0_WMA_PCM_CONTAINER_SIZE 0x02F8 +#define AXD_REG_DEC0_WMA_WMA_FORMAT_TAG 0x02FC +#define AXD_REG_DEC0_WMA_WMA_CHANNELS 0x0300 +#define AXD_REG_DEC0_WMA_WMA_SAMPLES_PER_SEC 0x0304 +#define AXD_REG_DEC0_WMA_WMA_AVG_BYTES_PER_SEC 0x0308 +#define AXD_REG_DEC0_WMA_WMA_BLOCK_ALIGN 0x030C +#define AXD_REG_DEC0_WMA_WMA_VAL_BITS_PER_SAMPLE 0x0310 +#define AXD_REG_DEC0_WMA_WMA_CHANNEL_MASK 0x0314 +#define AXD_REG_DEC0_WMA_WMA_ENCODE_OPTS 0x0318 +#define AXD_REG_DEC1_WMA_PLAYER_OPT 0x0320 +#define AXD_REG_DEC1_WMA_DRC_SETTING 0x0324 +#define AXD_REG_DEC1_WMA_PEAK_AMP_REF 0x0328 +#define AXD_REG_DEC1_WMA_RMS_AMP_REF 0x032C +#define AXD_REG_DEC1_WMA_PEAK_AMP_TARGET 0x0330 +#define AXD_REG_DEC1_WMA_RMS_AMP_TARGET 0x0334 +#define AXD_REG_DEC1_WMA_PCM_VAL_BITS_PER_SAMPLE 0x0344 +#define AXD_REG_DEC1_WMA_PCM_CONTAINER_SIZE 0x0348 +#define AXD_REG_DEC1_WMA_WMA_FORMAT_TAG 0x034C +#define AXD_REG_DEC1_WMA_WMA_CHANNELS 0x0350 +#define AXD_REG_DEC1_WMA_WMA_SAMPLES_PER_SEC 0x0354 +#define AXD_REG_DEC1_WMA_WMA_AVG_BYTES_PER_SEC 0x0358 +#define AXD_REG_DEC1_WMA_WMA_BLOCK_ALIGN 0x035C +#define AXD_REG_DEC1_WMA_WMA_VAL_BITS_PER_SAMPLE 0x0360 +#define AXD_REG_DEC1_WMA_WMA_CHANNEL_MASK 0x0364 +#define AXD_REG_DEC1_WMA_WMA_ENCODE_OPTS 0x0368 +#define AXD_REG_DEC2_WMA_PLAYER_OPT 0x0370 +#define AXD_REG_DEC2_WMA_DRC_SETTING 0x0374 +#define AXD_REG_DEC2_WMA_PEAK_AMP_REF 0x0378 +#define AXD_REG_DEC2_WMA_RMS_AMP_REF 0x037C +#define AXD_REG_DEC2_WMA_PEAK_AMP_TARGET 0x0380 +#define AXD_REG_DEC2_WMA_RMS_AMP_TARGET 0x0384 +#define AXD_REG_DEC2_WMA_PCM_VAL_BITS_PER_SAMPLE 0x0394 +#define AXD_REG_DEC2_WMA_PCM_CONTAINER_SIZE 0x0398 +#define AXD_REG_DEC2_WMA_WMA_FORMAT_TAG 0x039C +#define AXD_REG_DEC2_WMA_WMA_CHANNELS 0x03A0 +#define AXD_REG_DEC2_WMA_WMA_SAMPLES_PER_SEC 0x03A4 +#define AXD_REG_DEC2_WMA_WMA_AVG_BYTES_PER_SEC 0x03A8 +#define AXD_REG_DEC2_WMA_WMA_BLOCK_ALIGN 0x03AC +#define AXD_REG_DEC2_WMA_WMA_VAL_BITS_PER_SAMPLE 0x03B0 +#define AXD_REG_DEC2_WMA_WMA_CHANNEL_MASK 0x03B4 +#define AXD_REG_DEC2_WMA_WMA_ENCODE_OPTS 0x03B8 +#define AXD_REG_PCMIN0_SAMPLE_RATE 0x3C0 +#define AXD_REG_PCMIN0_CHANNELS 0x3C4 +#define AXD_REG_PCMIN0_BITS_PER_SAMPLE 0x3C8 +#define AXD_REG_PCMIN0_JUSTIFICATION 0x3CC +#define AXD_REG_PCMIN1_SAMPLE_RATE 0x3D0 +#define AXD_REG_PCMIN1_CHANNELS 0x3D4 +#define AXD_REG_PCMIN1_BITS_PER_SAMPLE 0x3D8 +#define AXD_REG_PCMIN1_JUSTIFICATION 0x3DC +#define 
AXD_REG_PCMIN2_SAMPLE_RATE 0x3E0 +#define AXD_REG_PCMIN2_CHANNELS 0x3E4 +#define AXD_REG_PCMIN2_BITS_PER_SAMPLE 0x3E8 +#define AXD_REG_PCMIN2_JUSTIFICATION 0x3EC +#define AXD_REG_PCMOUT0_BITS_PER_SAMPLE 0x3F0 +#define AXD_REG_PCMOUT0_JUSTIFICATION 0x3F4 +#define AXD_REG_PCMOUT1_BITS_PER_SAMPLE 0x3F8 +#define AXD_REG_PCMOUT1_JUSTIFICATION 0x3FC +#define AXD_REG_PCMOUT2_BITS_PER_SAMPLE 0x400 +#define AXD_REG_PCMOUT2_JUSTIFICATION 0x404 +#define AXD_REG_DEC0_AC3_CHANNELS 0x410 +#define AXD_REG_DEC0_AC3_CHANNEL_ORDER 0x414 +#define AXD_REG_DEC0_AC3_MODE 0x418 +#define AXD_REG_DEC1_AC3_CHANNELS 0x420 +#define AXD_REG_DEC1_AC3_CHANNEL_ORDER 0x424 +#define AXD_REG_DEC1_AC3_MODE 0x428 +#define AXD_REG_DEC2_AC3_CHANNELS 0x430 +#define AXD_REG_DEC2_AC3_CHANNEL_ORDER 0x434 +#define AXD_REG_DEC2_AC3_MODE 0x438 +#define AXD_REG_DEC0_DDPLUS_CONFIG 0x440 +#define AXD_REG_DEC0_DDPLUS_CHANNEL_ORDER 0x444 +#define AXD_REG_DEC1_DDPLUS_CONFIG 0x448 +#define AXD_REG_DEC1_DDPLUS_CHANNEL_ORDER 0x44C +#define AXD_REG_DEC2_DDPLUS_CONFIG 0x450 +#define AXD_REG_DEC2_DDPLUS_CHANNEL_ORDER 0x454 +#define AXD_REG_EQ_OUT0_POWER_B0_C0_C3 0x460 +#define AXD_REG_EQ_OUT0_POWER_B0_C4_C7 0x464 +#define AXD_REG_EQ_OUT0_POWER_B1_C0_C3 0x468 +#define AXD_REG_EQ_OUT0_POWER_B1_C4_C7 0x46C +#define AXD_REG_EQ_OUT0_POWER_B2_C0_C3 0x470 +#define AXD_REG_EQ_OUT0_POWER_B2_C4_C7 0x474 +#define AXD_REG_EQ_OUT0_POWER_B3_C0_C3 0x478 +#define AXD_REG_EQ_OUT0_POWER_B3_C4_C7 0x47C +#define AXD_REG_EQ_OUT0_POWER_B4_C0_C3 0x480 +#define AXD_REG_EQ_OUT0_POWER_B4_C4_C7 0x484 +#define AXD_REG_EQ_OUT1_POWER_B0_C0_C3 0x488 +#define AXD_REG_EQ_OUT1_POWER_B0_C4_C7 0x48C +#define AXD_REG_EQ_OUT1_POWER_B1_C0_C3 0x490 +#define AXD_REG_EQ_OUT1_POWER_B1_C4_C7 0x494 +#define AXD_REG_EQ_OUT1_POWER_B2_C0_C3 0x498 +#define AXD_REG_EQ_OUT1_POWER_B2_C4_C7 0x49C +#define AXD_REG_EQ_OUT1_POWER_B3_C0_C3 0x4A0 +#define AXD_REG_EQ_OUT1_POWER_B3_C4_C7 0x4A4 +#define AXD_REG_EQ_OUT1_POWER_B4_C0_C3 0x4A8 +#define AXD_REG_EQ_OUT1_POWER_B4_C4_C7 0x4AC +#define AXD_REG_EQ_OUT2_POWER_B0_C0_C3 0x4B0 +#define AXD_REG_EQ_OUT2_POWER_B0_C4_C7 0x4B4 +#define AXD_REG_EQ_OUT2_POWER_B1_C0_C3 0x4B8 +#define AXD_REG_EQ_OUT2_POWER_B1_C4_C7 0x4BC +#define AXD_REG_EQ_OUT2_POWER_B2_C0_C3 0x4C0 +#define AXD_REG_EQ_OUT2_POWER_B2_C4_C7 0x4C4 +#define AXD_REG_EQ_OUT2_POWER_B3_C0_C3 0x4C8 +#define AXD_REG_EQ_OUT2_POWER_B3_C4_C7 0x4CC +#define AXD_REG_EQ_OUT2_POWER_B4_C0_C3 0x4D0 +#define AXD_REG_EQ_OUT2_POWER_B4_C4_C7 0x4D4 +#define AXD_REG_RESAMPLER0_FIN 0x4E0 +#define AXD_REG_RESAMPLER0_FOUT 0x4E4 +#define AXD_REG_RESAMPLER1_FIN 0x4E8 +#define AXD_REG_RESAMPLER1_FOUT 0x4EC +#define AXD_REG_RESAMPLER2_FIN 0x4F0 +#define AXD_REG_RESAMPLER2_FOUT 0x4f4 +#define AXD_REG_DEC0_ALAC_CHANNELS 0x500 +#define AXD_REG_DEC0_ALAC_DEPTH 0x504 +#define AXD_REG_DEC0_ALAC_SAMPLE_RATE 0x508 +#define AXD_REG_DEC0_ALAC_FRAME_LENGTH 0x50C +#define AXD_REG_DEC0_ALAC_MAX_FRAME_BYTES 0x510 +#define AXD_REG_DEC0_ALAC_AVG_BIT_RATE 0x514 +#define AXD_REG_DEC1_ALAC_CHANNELS 0x520 +#define AXD_REG_DEC1_ALAC_DEPTH 0x524 +#define AXD_REG_DEC1_ALAC_SAMPLE_RATE 0x528 +#define AXD_REG_DEC1_ALAC_FRAME_LENGTH 0x52C +#define AXD_REG_DEC1_ALAC_MAX_FRAME_BYTES 0x530 +#define AXD_REG_DEC1_ALAC_AVG_BIT_RATE 0x534 +#define AXD_REG_DEC2_ALAC_CHANNELS 0x540 +#define AXD_REG_DEC2_ALAC_DEPTH 0x544 +#define AXD_REG_DEC2_ALAC_SAMPLE_RATE 0x548 +#define AXD_REG_DEC2_ALAC_FRAME_LENGTH 0x54C +#define AXD_REG_DEC2_ALAC_MAX_FRAME_BYTES 0x550 +#define AXD_REG_DEC2_ALAC_AVG_BIT_RATE 0x554 +/* 0x558 to 0x55C reserved */ +#define 
AXD_REG_ENC0_FLAC_CHANNELS 0x560 +#define AXD_REG_ENC0_FLAC_BITS_PER_SAMPLE 0x564 +#define AXD_REG_ENC0_FLAC_SAMPLE_RATE 0x568 +#define AXD_REG_ENC0_FLAC_TOTAL_SAMPLES 0x56C +#define AXD_REG_ENC0_FLAC_DO_MID_SIDE_STEREO 0x570 +#define AXD_REG_ENC0_FLAC_LOOSE_MID_SIDE_STEREO 0x574 +#define AXD_REG_ENC0_FLAC_DO_EXHAUSTIVE_MODEL_SEARCH 0x578 +#define AXD_REG_ENC0_FLAC_MIN_RESIDUAL_PARTITION_ORDER 0x57C +#define AXD_REG_ENC0_FLAC_MAX_RESIDUAL_PARTITION_ORDER 0x580 +#define AXD_REG_ENC0_FLAC_BLOCK_SIZE 0x584 +#define AXD_REG_ENC0_FLAC_BYTE_COUNT 0x588 +#define AXD_REG_ENC0_FLAC_SAMPLE_COUNT 0x58C +#define AXD_REG_ENC0_FLAC_FRAME_COUNT 0x590 +#define AXD_REG_ENC0_FLAC_FRAME_BYTES 0x594 +/* 0x598 to 0x59C reserved */ +#define AXD_REG_ENC1_FLAC_CHANNELS 0x5A0 +#define AXD_REG_ENC1_FLAC_BITS_PER_SAMPLE 0x5A4 +#define AXD_REG_ENC1_FLAC_SAMPLE_RATE 0x5A8 +#define AXD_REG_ENC1_FLAC_TOTAL_SAMPLES 0x5AC +#define AXD_REG_ENC1_FLAC_DO_MID_SIDE_STEREO 0x5B0 +#define AXD_REG_ENC1_FLAC_LOOSE_MID_SIDE_STEREO 0x5B4 +#define AXD_REG_ENC1_FLAC_DO_EXHAUSTIVE_MODEL_SEARCH 0x5B8 +#define AXD_REG_ENC1_FLAC_MIN_RESIDUAL_PARTITION_ORDER 0x5BC +#define AXD_REG_ENC1_FLAC_MAX_RESIDUAL_PARTITION_ORDER 0x5C0 +#define AXD_REG_ENC1_FLAC_BLOCK_SIZE 0x5C4 +#define AXD_REG_ENC1_FLAC_BYTE_COUNT 0x5C8 +#define AXD_REG_ENC1_FLAC_SAMPLE_COUNT 0x5CC +#define AXD_REG_ENC1_FLAC_FRAME_COUNT 0x5D0 +#define AXD_REG_ENC1_FLAC_FRAME_BYTES 0x5D4 +/* 0x5D8 to 0x5DC reserved */ +#define AXD_REG_ENC2_FLAC_CHANNELS 0x5E0 +#define AXD_REG_ENC2_FLAC_BITS_PER_SAMPLE 0x5E4 +#define AXD_REG_ENC2_FLAC_SAMPLE_RATE 0x5E8 +#define AXD_REG_ENC2_FLAC_TOTAL_SAMPLES 0x5EC +#define AXD_REG_ENC2_FLAC_DO_MID_SIDE_STEREO 0x5F0 +#define AXD_REG_ENC2_FLAC_LOOSE_MID_SIDE_STEREO 0x5F4 +#define AXD_REG_ENC2_FLAC_DO_EXHAUSTIVE_MODEL_SEARCH 0x5F8 +#define AXD_REG_ENC2_FLAC_MIN_RESIDUAL_PARTITION_ORDER 0x5FC +#define AXD_REG_ENC2_FLAC_MAX_RESIDUAL_PARTITION_ORDER 0x600 +#define AXD_REG_ENC2_FLAC_BLOCK_SIZE 0x604 +#define AXD_REG_ENC2_FLAC_BYTE_COUNT 0x608 +#define AXD_REG_ENC2_FLAC_SAMPLE_COUNT 0x60C +#define AXD_REG_ENC2_FLAC_FRAME_COUNT 0x610 +#define AXD_REG_ENC2_FLAC_FRAME_BYTES 0x614 +/* 0x618 to 0x61C reserved */ +#define AXD_REG_ENC0_ALAC_CHANNELS 0x620 +#define AXD_REG_ENC0_ALAC_DEPTH 0x624 +#define AXD_REG_ENC0_ALAC_SAMPLE_RATE 0x628 +#define AXD_REG_ENC0_ALAC_FRAME_LENGTH 0x62C +#define AXD_REG_ENC0_ALAC_MAX_FRAME_BYTES 0x630 +#define AXD_REG_ENC0_ALAC_AVG_BIT_RATE 0x634 +#define AXD_REG_ENC0_ALAC_FAST_MODE 0x638 +/* 0x63C to 0x64C reserved */ +#define AXD_REG_ENC1_ALAC_CHANNELS 0x650 +#define AXD_REG_ENC1_ALAC_DEPTH 0x654 +#define AXD_REG_ENC1_ALAC_SAMPLE_RATE 0x658 +#define AXD_REG_ENC1_ALAC_FRAME_LENGTH 0x65C +#define AXD_REG_ENC1_ALAC_MAX_FRAME_BYTES 0x660 +#define AXD_REG_ENC1_ALAC_AVG_BIT_RATE 0x664 +#define AXD_REG_ENC1_ALAC_FAST_MODE 0x668 +/* 0x66C to 0x67C reserved */ +#define AXD_REG_ENC2_ALAC_CHANNELS 0x680 +#define AXD_REG_ENC2_ALAC_DEPTH 0x684 +#define AXD_REG_ENC2_ALAC_SAMPLE_RATE 0x688 +#define AXD_REG_ENC2_ALAC_FRAME_LENGTH 0x68C +#define AXD_REG_ENC2_ALAC_MAX_FRAME_BYTES 0x690 +#define AXD_REG_ENC2_ALAC_AVG_BIT_RATE 0x694 +#define AXD_REG_ENC2_ALAC_FAST_MODE 0x698 +/* 0x69C to 0x6AC reserved */ +#define AXD_REG_MS11_MODE 0x6B0 +#define AXD_REG_MS11_COMMON_CONFIG0 0x6B4 +#define AXD_REG_MS11_COMMON_CONFIG1 0x6B8 +#define AXD_REG_MS11_DDT_CONFIG0 0x6Bc +#define AXD_REG_MS11_DDC_CONFIG0 0x6C0 +#define AXD_REG_MS11_EXT_PCM_CONFIG0 0x6C4 +/* 0x6C8 and 0x6CC reserved */ +#define AXD_REG_OUTPUT0_DCPP_CONTROL 0x6D0 +#define 
AXD_REG_OUTPUT0_DCPP_CHANNEL_CONTROL 0x6D4 +#define AXD_REG_OUTPUT0_DCPP_BAND_CONTROL 0x6D8 +#define AXD_REG_OUTPUT0_DCPP_MAX_DELAY_SAMPLES 0x6DC +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_DELAY_SAMPLES 0x6E0 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_SHIFT 0x6E4 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_A0 0x6E8 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_A1 0x6EC +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_A2 0x6F0 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_B0 0x6F4 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_B1 0x6F8 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_SHIFT 0x6FC +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_A0 0x700 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_A1 0x704 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_A2 0x708 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_B0 0x70C +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_B1 0x710 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_OUTPUT_VOLUME 0x714 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_PASSTHROUGH_GAIN 0x718 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_INVERSE_PASSTHROUGH_GAIN 0x71C +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_GAIN 0x720 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_A0 0x724 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_A1 0x728 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_A2 0x72C +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_B0 0x730 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_B1 0x734 +#define AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_SHIFT 0x738 +#define AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_A0 0x73C +#define AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_A1 0x740 +#define AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_A2 0x744 +#define AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_B0 0x748 +#define AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_B1 0x74C +/* 0x750 to 0x764 reserved */ +#define AXD_REG_OUTPUT1_DCPP_CONTROL 0x768 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_CONTROL 0x76C +#define AXD_REG_OUTPUT1_DCPP_BAND_CONTROL 0x770 +#define AXD_REG_OUTPUT1_DCPP_MAX_DELAY_SAMPLES 0x774 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_DELAY_SAMPLES 0x778 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_SHIFT 0x77C +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_A0 0x780 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_A1 0x784 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_A2 0x788 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_B0 0x78C +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_B1 0x790 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_SHIFT 0x794 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_A0 0x798 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_A1 0x79C +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_A2 0x7A0 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_B0 0x7A4 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_B1 0x7A8 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_OUTPUT_VOLUME 0x7AC +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_PASSTHROUGH_GAIN 0x7B0 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_INVERSE_PASSTHROUGH_GAIN 0x7B4 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_GAIN 0x7B8 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_A0 0x7BC +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_A1 0x7C0 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_A2 0x7C4 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_B0 0x7C8 +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_B1 0x7CC +#define AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_SHIFT 0x7D0 +#define AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_A0 0x7D4 +#define AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_A1 0x7D8 +#define 
AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_A2 0x7DC +#define AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_B0 0x7E0 +#define AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_B1 0x7E4 +/* 0x7E8 to 0x7FC reserved */ +#define AXD_REG_OUTPUT2_DCPP_CONTROL 0x800 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_CONTROL 0x804 +#define AXD_REG_OUTPUT2_DCPP_BAND_CONTROL 0x808 +#define AXD_REG_OUTPUT2_DCPP_MAX_DELAY_SAMPLES 0x80C +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_DELAY_SAMPLES 0x810 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_SHIFT 0x814 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_A0 0x818 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_A1 0x81C +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_A2 0x820 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_B0 0x824 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_B1 0x828 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_SHIFT 0x82C +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_A0 0x830 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_A1 0x834 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_A2 0x838 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_B0 0x83C +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_B1 0x840 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_OUTPUT_VOLUME 0x844 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_PASSTHROUGH_GAIN 0x848 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_INVERSE_PASSTHROUGH_GAIN 0x84C +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_GAIN 0x850 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_A0 0x854 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_A1 0x858 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_A2 0x85C +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_B0 0x860 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_B1 0x864 +#define AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_SHIFT 0x868 +#define AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_A0 0x86C +#define AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_A1 0x870 +#define AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_A2 0x874 +#define AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_B0 0x878 +#define AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_B1 0x87C +/* 0x880 to 0x89C reserved */ +#define AXD_REG_DEC0_SBC_SAMPLE_RATE 0x8A0 +#define AXD_REG_DEC0_SBC_AUDIO_MODE 0x8A4 +#define AXD_REG_DEC0_SBC_BLOCKS 0x8A8 +#define AXD_REG_DEC0_SBC_SUBBANDS 0x8AC +#define AXD_REG_DEC0_SBC_BITPOOL 0x8B0 +#define AXD_REG_DEC0_SBC_ALLOCATION_MODE 0x8B4 +#define AXD_REG_DEC1_SBC_SAMPLE_RATE 0x8B8 +#define AXD_REG_DEC1_SBC_AUDIO_MODE 0x8BC +#define AXD_REG_DEC1_SBC_BLOCKS 0x8C0 +#define AXD_REG_DEC1_SBC_SUBBANDS 0x8C4 +#define AXD_REG_DEC1_SBC_BITPOOL 0x8C8 +#define AXD_REG_DEC1_SBC_ALLOCATION_MODE 0x8CC +#define AXD_REG_DEC2_SBC_SAMPLE_RATE 0x8D0 +#define AXD_REG_DEC2_SBC_AUDIO_MODE 0x8D4 +#define AXD_REG_DEC2_SBC_BLOCKS 0x8D8 +#define AXD_REG_DEC2_SBC_SUBBANDS 0x8DC +#define AXD_REG_DEC2_SBC_BITPOOL 0x8E0 +#define AXD_REG_DEC2_SBC_ALLOCATION_MODE 0x8E4 +/* 0x8E8 to 0x8EC reserved */ +#define AXD_REG_SYNC_MODE 0x8F0 +/* 0x8F4 to 0x8FC reserved */ +#define AXD_REG_INPUT0_BUFFER_OCCUPANCY 0x900 +#define AXD_REG_INPUT1_BUFFER_OCCUPANCY 0x904 +#define AXD_REG_INPUT2_BUFFER_OCCUPANCY 0x908 +/* 0x90C reserved */ +#define AXD_REG_OUTPUT0_EVENT 0x910 +#define AXD_REG_OUTPUT1_EVENT 0x914 +#define AXD_REG_OUTPUT2_EVENT 0x918 +/* 0x91C reserved */ + +/* Register masks */ +#define AXD_INCTRL_ENABLE_MASK 0x1 +#define AXD_INCTRL_ENABLE_SHIFT 31 +#define AXD_INCTRL_ENABLE_BITS \ + (AXD_INCTRL_ENABLE_MASK << AXD_INCTRL_ENABLE_SHIFT) +#define AXD_INCTRL_SOURCE_MASK 0x3 +#define AXD_INCTRL_SOURCE_SHIFT 8 +#define 
AXD_INCTRL_SOURCE_BITS \ + (AXD_INCTRL_SOURCE_MASK << AXD_INCTRL_SOURCE_SHIFT) +#define AXD_INCTRL_CODEC_MASK 0x7FF +#define AXD_INCTRL_CODEC_SHIFT 0 +#define AXD_INCTRL_CODEC_BITS \ + (AXD_INCTRL_CODEC_MASK << AXD_INCTRL_CODEC_SHIFT) + +#define AXD_OUTCTRL_ENABLE_MASK 0x1 +#define AXD_OUTCTRL_ENABLE_SHIFT 31 +#define AXD_OUTCTRL_ENABLE_BITS \ + (AXD_OUTCTRL_ENABLE_MASK << AXD_OUTCTRL_ENABLE_SHIFT) +#define AXD_OUTCTRL_SINK_MASK 0x3 +#define AXD_OUTCTRL_SINK_SHIFT 0 +#define AXD_OUTCTRL_SINK_BITS \ + (AXD_OUTCTRL_SINK_MASK << AXD_OUTCTRL_SINK_SHIFT) +#define AXD_OUTCTRL_CODEC_MASK 0xFF +#define AXD_OUTCTRL_CODEC_SHIFT 2 +#define AXD_OUTCTRL_CODEC_BITS \ + (AXD_OUTCTRL_CODEC_MASK << AXD_OUTCTRL_CODEC_SHIFT) + +#define AXD_EQCTRL_ENABLE_MASK 0x1 +#define AXD_EQCTRL_ENABLE_SHIFT 31 +#define AXD_EQCTRL_ENABLE_BITS \ + (AXD_EQCTRL_ENABLE_MASK << AXD_EQCTRL_ENABLE_SHIFT) +#define AXD_EQCTRL_GAIN_MASK 0x7F +#define AXD_EQCTRL_GAIN_SHIFT 0 +#define AXD_EQCTRL_GAIN_BITS \ + (AXD_EQCTRL_GAIN_MASK << AXD_EQCTRL_GAIN_SHIFT) + +#define AXD_EQBANDX_GAIN_MASK 0xFF +#define AXD_EQBANDX_GAIN_SHIFT 0 +#define AXD_EQBANDX_GAIN_BITS \ + (AXD_EQBANDX_GAIN_MASK << AXD_EQBANDX_GAIN_SHIFT) + +#define AXD_DCPP_CTRL_ENABLE_MASK 0x1 +#define AXD_DCPP_CTRL_ENABLE_SHIFT 31 +#define AXD_DCPP_CTRL_ENABLE_BITS \ + (AXD_DCPP_CTRL_ENABLE_MASK << AXD_DCPP_CTRL_ENABLE_SHIFT) +#define AXD_DCPP_CTRL_CHANNELS_MASK 0xF +#define AXD_DCPP_CTRL_CHANNELS_SHIFT 27 +#define AXD_DCPP_CTRL_CHANNELS_BITS \ + (AXD_DCPP_CTRL_CHANNELS_MASK << AXD_DCPP_CTRL_CHANNELS_SHIFT) +#define AXD_DCPP_CTRL_MODE_MASK 0x1 +#define AXD_DCPP_CTRL_MODE_SHIFT 26 +#define AXD_DCPP_CTRL_MODE_BITS \ + (AXD_DCPP_CTRL_MODE_MASK << AXD_DCPP_CTRL_MODE_SHIFT) +#define AXD_DCPP_CTRL_EQ_MODE_MASK 0x1 +#define AXD_DCPP_CTRL_EQ_MODE_SHIFT 25 +#define AXD_DCPP_CTRL_EQ_MODE_BITS \ + (AXD_DCPP_CTRL_EQ_MODE_MASK << AXD_DCPP_CTRL_EQ_MODE_SHIFT) +#define AXD_DCPP_CTRL_EQ_BANDS_MASK 0xFF +#define AXD_DCPP_CTRL_EQ_BANDS_SHIFT 17 +#define AXD_DCPP_CTRL_EQ_BANDS_BITS \ + (AXD_DCPP_CTRL_EQ_BANDS_MASK << AXD_DCPP_CTRL_EQ_BANDS_SHIFT) +#define AXD_DCPP_CTRL_SUBBAND_ENABLE_MASK 0x1 +#define AXD_DCPP_CTRL_SUBBAND_ENABLE_SHIFT 16 +#define AXD_DCPP_CTRL_SUBBAND_ENABLE_BITS \ + (AXD_DCPP_CTRL_SUBBAND_ENABLE_MASK << AXD_DCPP_CTRL_SUBBAND_ENABLE_SHIFT) +#define AXD_DCPP_CTRL_SUBBAND_CHANNEL_MASK_MASK 0xFF +#define AXD_DCPP_CTRL_SUBBAND_CHANNEL_MASK_SHIFT 8 +#define AXD_DCPP_CTRL_SUBBAND_CHANNEL_MASK_BITS \ + (AXD_DCPP_CTRL_SUBBAND_CHANNEL_MASK_MASK << AXD_DCPP_CTRL_SUBBAND_CHANNEL_MASK_SHIFT) +#define AXD_DCPP_CTRL_SUBBAND_EQ_BANDS_MASK 0xFF +#define AXD_DCPP_CTRL_SUBBAND_EQ_BANDS_SHIFT 0 +#define AXD_DCPP_CTRL_SUBBAND_EQ_BANDS_BITS \ + (AXD_DCPP_CTRL_SUBBAND_EQ_BANDS_MASK << AXD_DCPP_CTRL_SUBBAND_EQ_BANDS_SHIFT) + +#define AXD_DCPP_CHANNEL_CTRL_CHANNEL_MASK 0xFF +#define AXD_DCPP_CHANNEL_CTRL_CHANNEL_SHIFT 24 +#define AXD_DCPP_CHANNEL_CTRL_CHANNEL_BITS \ + (AXD_DCPP_CHANNEL_CTRL_CHANNEL_MASK << AXD_DCPP_CHANNEL_CTRL_CHANNEL_SHIFT) +#define AXD_DCPP_CHANNEL_CTRL_SUBBAND_MASK 0x1 +#define AXD_DCPP_CHANNEL_CTRL_SUBBAND_SHIFT 23 +#define AXD_DCPP_CHANNEL_CTRL_SUBBAND_BITS \ + (AXD_DCPP_CHANNEL_CTRL_SUBBAND_MASK << AXD_DCPP_CHANNEL_CTRL_SUBBAND_SHIFT) + +#endif /* AXD_API_H_ */ diff --git a/sound/soc/img/axd/axd_module.c b/sound/soc/img/axd/axd_module.c new file mode 100644 index 000000000000..b4929fc12292 --- /dev/null +++ b/sound/soc/img/axd/axd_module.c @@ -0,0 +1,742 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. 
+ * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD is a hardware IP that provides various audio processing capabilities for + * user applications, offloading the core on which the application is running + * and saving its valuable MIPS. + */ +#include <linux/clk.h> +#include <linux/delay.h> +#include <linux/dma-mapping.h> +#include <linux/firmware.h> +#include <linux/fs.h> +#include <linux/init.h> +#include <linux/io.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_platform.h> +#include <linux/platform_device.h> +#include <linux/sched.h> +#include <linux/slab.h> +#include <linux/types.h> +#include <linux/uaccess.h> +#include <linux/wait.h> +#include <sound/compress_driver.h> +#include <sound/core.h> +#include <sound/soc.h> + +/* this is required by MIPS ioremap_cachable() */ +#include <asm/pgtable.h> + +#include "axd_cmds.h" +#include "axd_cmds_internal.h" +#include "axd_hdr.h" +#include "axd_module.h" +#include "axd_platform.h" + +#define AXD_MGCNUM 0x66445841 /* AXDf */ +#define LZO_MGCNUM 0x4f5a4c89 /* .LZO */ + +#define DEFAULT_INBUF_SIZE 0x7800 +#define DEFAULT_OUTBUF_SIZE 0x3c000 + +#define AXD_LDFW_RETRIES 400 + +#define WATCHDOG_TIMEOUT (3*HZ) + +#define AXD_BASE_VADDR 0xD0000000 + +enum axd_devtype { + AXD_UNKNOWN = 0, + AXD_CTRL, + AXD_INPUT, + AXD_OUTPUT, +}; + +extern struct snd_compr_ops axd_compr_ops; + +static struct snd_soc_platform_driver axd_platform = { + .compr_ops = &axd_compr_ops, +}; + +static const struct snd_soc_dapm_widget widgets[] = { + SND_SOC_DAPM_AIF_IN("AXD IN", "AXD Playback", 0, SND_SOC_NOPM, 0, 0), +}; + +static const struct snd_soc_component_driver axd_component = { + .name = "AXD", + .dapm_widgets = widgets, + .num_dapm_widgets = ARRAY_SIZE(widgets), +}; + +static struct snd_soc_dai_driver axd_dai[] = { + { + .name = "AXD Playback", + .compress_dai = 1, + .playback = { + .stream_name = "AXD Playback", + .channels_min = 2, + .channels_max = 2, + .rates = SNDRV_PCM_RATE_48000, + .formats = SNDRV_PCM_FMTBIT_S32_LE, + }, + }, +}; + +#ifdef CONFIG_SND_SOC_IMG_AXD_DEBUGFS +static ssize_t axd_read_log(struct file *filep, + char __user *buff, size_t count, loff_t *offp) +{ + struct axd_dev *axd = filep->f_inode->i_private; + void __iomem *log_addr; + unsigned int log_size; + int ret; + + log_addr = axd->fw_base_m + axd_hdr_get_log_offset(); + log_size = ioread32(log_addr + 4); + + if (!axd->log_rbuf) { + /* + * first time we run, initialise + */ + dev_dbg(axd->dev, + "allocating %u bytes for log buffer\n", log_size); + axd->log_rbuf = devm_kzalloc(axd->dev, log_size, GFP_KERNEL); + if (!axd->log_rbuf) + return -ENOMEM; + } + + if (!*offp) { + unsigned int flags = axd_platform_lock(); + unsigned int log_offset = ioread32(log_addr); + unsigned int log_wrapped = ioread32(log_addr + 8); + char __iomem *log_buff = (char __iomem *)(log_addr + 12); + + /* new read from beginning, fill up our internal buffer */ + if (!log_wrapped) { + memcpy_fromio(axd->log_rbuf, log_buff, log_offset); + axd->log_rbuf_rem = log_offset; + } else { + char __iomem *pos = log_buff + log_offset; + unsigned int rem = log_size - log_offset; + + 
memcpy_fromio(axd->log_rbuf, pos, rem); + memcpy_fromio(axd->log_rbuf + rem, log_buff, log_offset); + axd->log_rbuf_rem = log_size; + } + axd_platform_unlock(flags); + } + + if (count > axd->log_rbuf_rem) + count = axd->log_rbuf_rem; + + ret = copy_to_user(buff, axd->log_rbuf + *offp, count); + if (ret < 0) + return ret; + + dev_dbg(axd->dev, "read %d bytes from %d\n", count, (int)*offp); + *offp += count; + axd->log_rbuf_rem -= count; + + return count; +} + +static ssize_t axd_read_mask(struct file *filep, + char __user *buff, size_t count, loff_t *offp) +{ + struct axd_dev *axd = filep->f_inode->i_private; + unsigned int mask; + char buffer[32]; + int ret; + + if (!*offp) { + axd_read_reg(&axd->cmd, AXD_REG_DEBUG_MASK, &mask); + + count = sprintf(buffer, "0x%08x\n", mask); + + ret = copy_to_user(buff, buffer, count); + if (ret < 0) + return ret; + + *offp += count; + return count; + } + + return 0; +} + +static ssize_t axd_write_mask(struct file *filep, + const char __user *buff, size_t count, loff_t *offp) +{ + struct axd_dev *axd = filep->f_inode->i_private; + unsigned int mask; + char buffer[32] = {}; + int ret; + + /* ensure we always have null at the end */ + ret = copy_from_user(buffer, buff, min(31u, count)); + if (ret < 0) + return ret; + + if (!kstrtouint(buffer, 0, &mask)) + axd_write_reg(&axd->cmd, AXD_REG_DEBUG_MASK, mask); + + return count; +} + +const struct file_operations dfslogfops = { + .read = axd_read_log, + .llseek = no_llseek, +}; + +const struct file_operations dfsmaskfops = { + .read = axd_read_mask, + .write = axd_write_mask, + .llseek = no_llseek, +}; + +static void axd_debugfs_create(struct axd_dev *axd) +{ + axd->debugfs = debugfs_create_dir(dev_name(axd->dev), NULL); + if (IS_ERR_OR_NULL(axd->debugfs)) { + dev_err(axd->dev, "failed to create debugfs node\n"); + return; + } + axd->dfslog = debugfs_create_file("log", S_IRUGO | S_IWUSR, + axd->debugfs, axd, &dfslogfops); + if (IS_ERR_OR_NULL(axd->dfslog)) + dev_err(axd->dev, "failed to create debugfs log file\n"); + axd->dfsmask = debugfs_create_file("mask", S_IRUGO | S_IWUSR, + axd->debugfs, axd, &dfsmaskfops); + if (IS_ERR_OR_NULL(axd->dfsmask)) + dev_err(axd->dev, "failed to create debugfs mask file\n"); + axd->dfswatchdog = debugfs_create_bool("watchdog", S_IRUGO | S_IWUSR, + axd->debugfs, &axd->cmd.watchdogenabled); + if (IS_ERR_OR_NULL(axd->dfswatchdog)) + dev_err(axd->dev, "failed to create debugfs watchdog file\n"); +} + +static void axd_debugfs_destroy(struct axd_dev *axd) +{ + debugfs_remove_recursive(axd->debugfs); +} +#else +#define axd_debugfs_create(x) +#define axd_debugfs_destroy(x) +#endif /* CONFIG_SND_SOC_IMG_AXD_DEBUGFS */ + +#ifdef CONFIG_CRYPTO_LZO +#include <linux/crypto.h> +static int decompress_fw(struct axd_dev *axd, const struct firmware *fw) +{ + struct crypto_comp *tfm; + unsigned int size; + char *cached_fw_base; + int ret; + + tfm = crypto_alloc_comp("lzo", 0, 0); + if (IS_ERR(tfm)) + return PTR_ERR(tfm); + + /* allocate bigger memory for uncompressed fw */ + dma_free_coherent(axd->dev, axd->fw_size, + axd->fw_base_m, axd->fw_base_p); + axd->fw_size = *(int *)(fw->data + 4); + axd->fw_base_m = dma_alloc_coherent(axd->dev, axd->fw_size, + &axd->fw_base_p, GFP_KERNEL); + if (!axd->fw_base_m) { + ret = -ENOMEM; + goto out; + } + + /* first 8 bytes contain lzo magic number and raw file size, skip them */ + size = axd->fw_size; + cached_fw_base = (char *)CAC_ADDR((int)axd->fw_base_m); + ret = crypto_comp_decompress(tfm, fw->data + 8, + fw->size - 8, cached_fw_base, &size); + if (ret) + 
dev_err(axd->dev, "Failed to decompress the firmware\n"); + + if (size != axd->fw_size) { + dev_err(axd->dev, "Uncompressed file size doesn't match reported file size\n"); + ret = -EINVAL; + } + +out: + crypto_free_comp(tfm); + return ret; +} +#else /* !CONFIG_CRYPTO_LZO */ +static int decompress_fw(struct axd_dev *axd, const struct firmware *fw) +{ + dev_err(axd->dev, "The firmware must be lzo decompressed first, compile driver again with CONFIG_CRYPTO_LZO enabled in kernel or do the decompression in user space.\n"); + return -EIO; +} +#endif /* CONFIG_CRYPTO_LZO */ + +static int copy_fw(struct axd_dev *axd, const struct firmware *fw) +{ + int mgcnum = *(int *)fw->data; + int cached_fw_base = CAC_ADDR((int)axd->fw_base_m); + + if (mgcnum != AXD_MGCNUM) { + if (mgcnum == LZO_MGCNUM) + return decompress_fw(axd, fw); + + dev_err(axd->dev, "Not a valid firmware binary.\n"); + return -EIO; + } + /* + * We copy through the cache, fw will do the necessary cache + * flushes and syncing at startup. + * Copying from uncached makes it more difficult for the + * firmware to keep the caches coherent with memory when it sets + * tlbs and start running. + */ + memcpy_toio((void *)cached_fw_base, fw->data, fw->size); + + /* TODO: do MD5 checksum verification */ + return 0; +} + +static void axd_free(struct axd_dev *axd) +{ + if (axd->buf_base_m) { + dma_free_noncoherent(axd->dev, axd->inbuf_size+axd->outbuf_size, + axd->buf_base_m, axd->buf_base_p); + axd->buf_base_m = NULL; + } + if (axd->fw_base_m) { + dma_free_coherent(axd->dev, axd->fw_size, + axd->fw_base_m, axd->fw_base_p); + axd->fw_base_m = NULL; + } +} + +static int axd_alloc(struct axd_dev *axd) +{ + /* do the allocation once, return immediately if fw_base_m is set */ + if (axd->fw_base_m) + return 0; + + axd->fw_base_m = dma_alloc_coherent(axd->dev, axd->fw_size, + &axd->fw_base_p, GFP_KERNEL); + if (!axd->fw_base_m) + return -ENOMEM; + + axd->buf_base_m = dma_alloc_noncoherent(axd->dev, + axd->inbuf_size+axd->outbuf_size, + &axd->buf_base_p, GFP_KERNEL); + if (!axd->buf_base_m) { + axd_free(axd); + return -ENOMEM; + } + return 0; +} + +static int axd_fw_start(struct axd_dev *axd) +{ + unsigned long t0_new_pc; + unsigned int num_threads = axd_platform_num_threads(); + struct axd_cmd *axd_cmd = &axd->cmd; + const struct firmware *fw; + int ret = 0, i; + unsigned int gic_irq; + + /* request the firmware */ + ret = request_firmware(&fw, "img/axd_firmware.bin", axd->dev); + if (ret) { + dev_err(axd->dev, "Failed to load firmware, check that firmware loading is setup correctly in userspace and kernel and that axd_firmware.bin is present in the FS\n"); + goto out; + } + + axd->fw_size = fw->size; + if (!axd->inbuf_size) + axd->inbuf_size = DEFAULT_INBUF_SIZE; + if (!axd->outbuf_size) + axd->outbuf_size = DEFAULT_OUTBUF_SIZE; + + ret = axd_alloc(axd); + if (ret) { + dev_err(axd->dev, "Failed to allocate memory for AXD f/w and buffers\n"); + release_firmware(fw); + goto out; + } + + dev_info(axd->dev, "Loading firmware at 0x%p ...\n", axd->fw_base_m); + + ret = copy_fw(axd, fw); + release_firmware(fw); + if (ret) + goto out; + + /* setup hdr and memmapped regs */ + axd_hdr_init((unsigned long)axd->fw_base_m); + /* initialize the cmd structure and the buffers */ + axd_cmd_init(axd_cmd, + axd_hdr_get_cmdblock_offset()+(unsigned long)axd->fw_base_m, + (unsigned long)axd->buf_base_m, axd->buf_base_p); + + /* + * Tell AXD the count/compare frequency and the IRQs it must use + */ + gic_irq = (axd->host_irq << 16) | axd->axd_irq; + iowrite32(gic_irq, 
&axd_cmd->message->gic_irq); + iowrite32(clk_get_rate(axd->clk)/2000000, &axd_cmd->message->freq); + + axd_platform_init(axd); + for (i = 0; i < num_threads; i++) { + ret = axd_cmd_set_pc(axd_cmd, i, axd_hdr_get_pc(i)); + if (ret == -1) { + dev_err(axd->dev, "Failed to set PC of T%d\n", i); + goto out; + } + } + /* setup and start master thread */ + t0_new_pc = axd_hdr_get_pc(0); + if (t0_new_pc == -1UL) { + ret = -1; + goto out; + } + t0_new_pc = (unsigned long) axd->fw_base_m + (t0_new_pc - AXD_BASE_VADDR); + axd_platform_set_pc(t0_new_pc); + ret = axd_platform_start(); + if (ret) + goto out; + + /* install the IRQ */ + ret = axd_cmd_install_irq(&axd->cmd, axd->irqnum); + if (ret) { + dev_err(axd->dev, "Failed to install IRQ %d, error %d\n", + axd->irqnum, ret); + goto out; + } + + for (i = 0; i < AXD_LDFW_RETRIES; i++) { + ret = axd_wait_ready(axd_cmd->message); + if (!ret) { + /* + * Let the firmware know the address of the buffer + * region + */ + ret = axd_write_reg(axd_cmd, + AXD_REG_BUFFER_BASE, axd->buf_base_p); + if (ret) { + dev_err(axd->dev, + "Failed to setup buffers base address\n"); + goto out; + } + return 0; + + } + } +out: + axd_free(axd); + return ret; +} + +static void axd_fw_stop(struct axd_dev *axd) +{ + axd_cmd_free_irq(&axd->cmd, axd->irqnum); + axd_platform_stop(); +} + +/* + * Stops the firmware, reload it, and start it back again to recover from a + * fatal error. + */ +static void axd_reset(struct work_struct *work) +{ + unsigned int major, minor, patch; + int i; + + struct axd_dev *axd = container_of(work, struct axd_dev, watchdogwork); + + + /* if we got a fatal error, don't reset if watchdog is disabled */ + if (unlikely(!axd->cmd.watchdogenabled)) + return; + + /* stop the watchdog timer until we restart */ + del_timer(&axd->watchdogtimer); + + if (!axd_get_flag(&axd->cmd.fw_stopped_flg)) { + /* ping the firmware by requesting its version info */ + axd_cmd_get_version(&axd->cmd, &major, &minor, &patch); + if (!major && !minor && !patch) { + dev_warn(axd->dev, "Firmware stopped responding...\n"); + axd_set_flag(&axd->cmd.fw_stopped_flg, 1); + } else { + goto out; + } + } + + axd_platform_print_regs(); + dev_warn(axd->dev, "Reloading AXD firmware...\n"); + + axd_fw_stop(axd); + + /* Signal to any active tasks first */ + for (i = 0; i < axd->num_inputs; i++) + axd_cmd_send_buffer_abort(&axd->cmd, i); + + for (i = 0; i < axd->num_outputs; i++) + axd_cmd_recv_buffer_abort(&axd->cmd, i); + + /* wake up any task sleeping on command response */ + wake_up(&axd->cmd.wait); + /* give chance to user land tasks to react to the crash */ + ssleep(2); + + axd_fw_start(axd); + + for (i = 0; i < axd->num_inputs; i++) + axd_cmd_inpipe_reset(&axd->cmd, i); + + for (i = 0; i < axd->num_outputs; i++) + axd_cmd_outpipe_reset(&axd->cmd, i); + + axd_set_flag(&axd->cmd.fw_stopped_flg, 0); +out: + axd->watchdogtimer.expires = jiffies + WATCHDOG_TIMEOUT; + add_timer(&axd->watchdogtimer); +} + +/* + * Schedule to perform a reset. + * We don't perform the reset directly because the request comes from atomic + * context, and resetting must be done from process context. + */ +void axd_schedule_reset(struct axd_cmd *cmd) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + + axd_set_flag(&axd->cmd.fw_stopped_flg, 1); + schedule_work(&axd->watchdogwork); +} + +/* + * Verifies that the firmware is still running by reading the version every few + * seconds. 
+ */ +static void axd_watchdog_timer(unsigned long arg) +{ + struct axd_dev *axd = (struct axd_dev *)arg; + + /* skip if watchdog is not enabled */ + if (unlikely(!axd->cmd.watchdogenabled)) + goto out; + + schedule_work(&axd->watchdogwork); + return; +out: + mod_timer(&axd->watchdogtimer, jiffies + WATCHDOG_TIMEOUT); +} + +static void axd_start_watchdog(struct axd_dev *axd) +{ + INIT_WORK(&axd->watchdogwork, axd_reset); + init_timer(&axd->watchdogtimer); + axd->watchdogtimer.function = axd_watchdog_timer; + axd->watchdogtimer.data = (unsigned long)axd; + axd->watchdogtimer.expires = jiffies + HZ; + add_timer(&axd->watchdogtimer); +} + +static void axd_stop_watchdog(struct axd_dev *axd) +{ + del_timer(&axd->watchdogtimer); +} + +static int axd_create(struct axd_dev *axd) +{ + int ret = 0, i = 0; + unsigned int major, minor, patch; + + axd_set_flag(&axd->timestamps_out_flg, 0); + + /* Setup and start the threads */ + ret = axd_fw_start(axd); + if (ret) { + dev_err(axd->dev, "Failed to start\n"); + return -EIO; + } + + /* + * Verify that the firmware is ready. In normal cases the firmware + * should start immediately, but to be more robust we do this + * verification and give the firmware a chance of 3 seconds to be ready + * otherwise we exit in failure. + */ + for (i = 0; i < AXD_LDFW_RETRIES; i++) { + axd_cmd_get_version(&axd->cmd, &major, &minor, &patch); + if (major || minor || patch) { + /* firmware is ready */ + break; + } + /* if we couldn't read the version after 3 tries, error */ + if (i == AXD_LDFW_RETRIES - 1) { + dev_err(axd->dev, "Failed to communicate with the firmware\n"); + ret = -EIO; + goto error; + } + /* wait for 10 ms for the firmware to start */ + msleep(10); + } + dev_info(axd->dev, "Running firmware version %u.%u.%u %s\n", + major, minor, patch, axd_hdr_get_build_str()); + + /* Get num of input/output pipes */ + ret = axd_cmd_get_num_pipes(&axd->cmd, + &axd->num_inputs, &axd->num_outputs); + if (ret) { + dev_err(axd->dev, "Failed to get numer of supported pipes\n"); + ret = -EIO; + goto error; + } + axd->cmd.num_inputs = axd->num_inputs; + axd->cmd.num_outputs = axd->num_outputs; + + /* Invalidate DCPP selector caches */ + for (i = 0; i < axd->cmd.num_outputs; i++) { + axd->cmd.dcpp_channel_ctrl_cache[i] = -1; + axd->cmd.dcpp_band_ctrl_cache[i] = -1; + } + + ret = snd_soc_register_platform(axd->dev, &axd_platform); + if (ret) { + dev_err(axd->dev, "Failed to register platform, %d\n", ret); + goto error; + } + + ret = snd_soc_register_component(axd->dev, &axd_component, axd_dai, ARRAY_SIZE(axd_dai)); + if (ret) { + snd_soc_unregister_platform(axd->dev); + dev_err(axd->dev, "Failed to register DAI, %d\n", ret); + goto error; + } + + axd_start_watchdog(axd); + axd_debugfs_create(axd); + + return 0; + +error: + axd_fw_stop(axd); + + return ret; +} + +static void axd_destroy(struct axd_dev *axd) +{ + axd_stop_watchdog(axd); + axd_fw_stop(axd); + axd_debugfs_destroy(axd); + snd_soc_unregister_component(axd->dev); + snd_soc_unregister_platform(axd->dev); +} + +static int axd_probe(struct platform_device *pdev) +{ + struct device_node *of_node = pdev->dev.of_node; + struct axd_dev *axd; + int ret; + u32 val[2] = {0, 0}; + + axd = devm_kzalloc(&pdev->dev, sizeof(struct axd_dev), GFP_KERNEL); + if (!axd) + return -ENOMEM; + + ret = platform_get_irq(pdev, 0); + if (ret < 0) { + dev_err(&pdev->dev, "Couldn't get parameter: 'irq'\n"); + return ret; + } + axd->irqnum = ret; + + ret = of_property_read_u32_array(of_node, "gic-irq", val, 2); + if (ret) { + dev_err(&pdev->dev, + 
"'gic-irq' parameter must be set\n"); + return ret; + } + axd->host_irq = val[0]; + axd->axd_irq = val[1]; + + ret = of_property_read_u32(of_node, "vpe", val); + if (ret) { + dev_err(&pdev->dev, "'vpe' parameter must be set\n"); + return ret; + } + + if (!val[0]) { + dev_err(&pdev->dev, "'vpe' parameter can't be 0\n"); + return -EINVAL; + } + axd->vpe = val[0]; + + axd->clk = devm_clk_get(&pdev->dev, NULL); + if (IS_ERR_OR_NULL(axd->clk)) { + dev_err(&pdev->dev, "Couldn't get parameter: 'clocks'\n"); + return PTR_ERR(axd->clk); + } + + of_property_read_u32(of_node, "inbuf-size", &axd->inbuf_size); + of_property_read_u32(of_node, "outbuf-size", &axd->outbuf_size); + + ret = clk_prepare_enable(axd->clk); + if (ret) { + dev_err(&pdev->dev, "Failed to enable the clock\n"); + return ret; + } + + axd->dev = &pdev->dev; + dev_set_drvdata(axd->dev, axd); + ret = axd_create(axd); + if (ret) { + clk_disable_unprepare(axd->clk); + return ret; + } + + return 0; +} + +static int axd_remove(struct platform_device *pdev) +{ + struct axd_dev *axd = dev_get_drvdata(&pdev->dev); + + clk_disable_unprepare(axd->clk); + axd_destroy(axd); + axd_free(axd); + + return 0; +} + +static const struct of_device_id axd_match[] = { + { .compatible = "img,axd" }, + {} +}; + +static struct platform_driver axd_driver = { + .driver = { + .name = "axd", + .of_match_table = axd_match, + }, + .probe = axd_probe, + .remove = axd_remove, +}; + +module_platform_driver(axd_driver); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Imagination Technologies Ltd."); +MODULE_DESCRIPTION("AXD Audio Processing IP Driver"); diff --git a/sound/soc/img/axd/axd_module.h b/sound/soc/img/axd/axd_module.h new file mode 100644 index 000000000000..8dbc20dff63f --- /dev/null +++ b/sound/soc/img/axd/axd_module.h @@ -0,0 +1,83 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD is a hardware IP that provides various audio decoding capabilities for + * user applications, offloading the core on which the application is running + * and saving its valuable MIPS. 
+ */ +#ifndef AXD_MODULE_H_ +#define AXD_MODULE_H_ +#include <linux/cdev.h> +#include <linux/clk.h> +#include <linux/debugfs.h> + +#include "axd_api.h" +#include "axd_cmds.h" + + +void axd_schedule_reset(struct axd_cmd *cmd); + + +/** + * struct axd_dev - axd device structure + * @dev: pointer to struct device from platform_device + * @num_inputs: number of inputs AXD hardware reported it can handle + * @num_outputs: number of outputs AXD hardware reported it provides + * @axd_cmd: axd_cmd structure + * @fw_base_m: pointer to mapped fw base address + * @fw_base_p: physical address of fw base + * @fw_size: size of reserved fw region + * @buf_base_m: pointer to mapped buffers base address + * @buf_base_p: physical address of buffers base + * @inbuf_size: size of reserved input buffers region + * @outbuf_size: size of reserved output buffers region + * @host_irq: gic irq of the host + * @axd_irq: gic irq of axd + * @irqnum: linux linear irq number for request_irq() + * @clk: pointer to clock structure for AXD + * @vpe: vpe number AXD is running on + * @watchdogtimer: software watchdogtimer to check if axd is alive + * @watchdogwork: the work to execute to check if firwmare is still alive + * and restart if it discovers the firmware stopped + * responding. + * @timestamps_out_flg: a flag that indicates whether we should pass output + * timestamps or not + */ +struct axd_dev { + struct device *dev; + int num_inputs; + int num_outputs; + struct axd_cmd cmd; + void __iomem *fw_base_m; + dma_addr_t fw_base_p; + unsigned int fw_size; + void __iomem *buf_base_m; + dma_addr_t buf_base_p; + unsigned int inbuf_size; + unsigned int outbuf_size; + int host_irq; + int axd_irq; + int irqnum; + struct clk *clk; + unsigned int vpe; + struct timer_list watchdogtimer; + struct work_struct watchdogwork; + int timestamps_out_flg; + /* debugfs related */ + struct dentry *debugfs; + struct dentry *dfslog; + struct dentry *dfsmask; + struct dentry *dfswatchdog; + char *log_rbuf; + int log_rbuf_rem; +}; +#endif /* AXD_MODULE_H_ */
On Mon, Aug 24, 2015 at 01:39:12PM +0100, Qais Yousef wrote:
+#define THREAD_COUNT 4
This is a very generic name that looks likely to collide with something else, please namespace.
+#define AXD_INPUT_DESCRIPTORS 10 +struct axd_input {
- struct axd_buffer_desc descriptors[AXD_INPUT_DESCRIPTORS];
+};
Where do these numbers come from? Are they hardware limits or something else?
+/* this is required by MIPS ioremap_cachable() */ +#include <asm/pgtable.h>
Don't work around this here, fix it in the relevant header.
+#define AXD_BASE_VADDR 0xD0000000
This sounds like something that is going to be platform dependent; should this be supplied from board configuration?
+extern struct snd_compr_ops axd_compr_ops;
Prototype shared definitions in headers not in C files please so we know the definition matches.
+static struct snd_soc_dai_driver axd_dai[] = {
- {
Why an array with only one entry?
- if (!*offp) {
unsigned int flags = axd_platform_lock();
unsigned int log_offset = ioread32(log_addr);
unsigned int log_wrapped = ioread32(log_addr + 8);
char __iomem *log_buff = (char __iomem *)(log_addr + 12);
/* new read from beginning, fill up our internal buffer */
if (!log_wrapped) {
memcpy_fromio(axd->log_rbuf, log_buff, log_offset);
axd->log_rbuf_rem = log_offset;
} else {
char __iomem *pos = log_buff + log_offset;
unsigned int rem = log_size - log_offset;
memcpy_fromio(axd->log_rbuf, pos, rem);
memcpy_fromio(axd->log_rbuf + rem, log_buff, log_offset);
axd->log_rbuf_rem = log_size;
}
axd_platform_unlock(flags);
I didn't see the lock being taken?
+static ssize_t axd_write_mask(struct file *filep,
const char __user *buff, size_t count, loff_t *offp)
+{
- struct axd_dev *axd = filep->f_inode->i_private;
- unsigned int mask;
- char buffer[32] = {};
- int ret;
- /* ensure we always have null at the end */
- ret = copy_from_user(buffer, buff, min(31u, count));
- if (ret < 0)
return ret;
- if (!kstrtouint(buffer, 0, &mask))
axd_write_reg(&axd->cmd, AXD_REG_DEBUG_MASK, mask);
What are we writing here? If we're going behind the driver's back on something that might confuse it, it's generally better to taint the kernel so we know dodgy stuff happened later on.
+static void axd_debugfs_create(struct axd_dev *axd) +{
- axd->debugfs = debugfs_create_dir(dev_name(axd->dev), NULL);
- if (IS_ERR_OR_NULL(axd->debugfs)) {
dev_err(axd->dev, "failed to create debugfs node\n");
return;
- }
It'd be nicer to create this under the relevant ASoC debugfs directory so it's easier to find.
+#ifdef CONFIG_CRYPTO_LZO +#include <linux/crypto.h>
This include should be with all the other includes, not down here.
- size = axd->fw_size;
- cached_fw_base = (char *)CAC_ADDR((int)axd->fw_base_m);
- ret = crypto_comp_decompress(tfm, fw->data + 8,
fw->size - 8, cached_fw_base, &size);
- if (ret)
dev_err(axd->dev, "Failed to decompress the firmware\n");
Print return codes if you get them.
- if (size != axd->fw_size) {
dev_err(axd->dev, "Uncompressed file size doesn't match reported file size\n");
ret = -EINVAL;
- }
Should we be checking this if the decompression failed?
+} +#else /* !CONFIG_CRYPTO_LZO */ +static int decompress_fw(struct axd_dev *axd, const struct firmware *fw)
Blank lines between things please.
+{
- dev_err(axd->dev, "The firmware must be lzo decompressed first, compile driver again with CONFIG_CRYPTO_LZO enabled in kernel or do the decompression in user space.\n");
Please split this up into a few prints for wrapping, similarly in several other places.
- return -EIO;
-ENOTSUPP.
return -EIO;
- }
- /*
More vertical blanks missing.
* We copy through the cache, fw will do the necessary cache
* flushes and syncing at startup.
* Copying from uncached makes it more difficult for the
* firmware to keep the caches coherent with memory when it sets
* tlbs and start running.
*/
- memcpy_toio((void *)cached_fw_base, fw->data, fw->size);
Why the cast here? I'm also not seeing where we handled the copying to I/O in the decompression case?
- dev_info(axd->dev, "Loading firmware at 0x%p ...\n", axd->fw_base_m);
This should be _dbg() at most, otherwise it's going to get noisy.
- t0_new_pc = (unsigned long) axd->fw_base_m + (t0_new_pc - AXD_BASE_VADDR);
Those casts look fishy...
- for (i = 0; i < AXD_LDFW_RETRIES; i++) {
ret = axd_wait_ready(axd_cmd->message);
if (!ret) {
/*
* Let the firmware know the address of the buffer
* region
*/
ret = axd_write_reg(axd_cmd,
AXD_REG_BUFFER_BASE, axd->buf_base_p);
if (ret) {
dev_err(axd->dev,
"Failed to setup buffers base address\n");
Again print errors please.
goto out;
}
return 0;
}
- }
I'm not seeing any diagnostics if we fall out of the retry loop here?
+static void axd_reset(struct work_struct *work) +{
- unsigned int major, minor, patch;
- int i;
- struct axd_dev *axd = container_of(work, struct axd_dev, watchdogwork);
- /* if we got a fatal error, don't reset if watchdog is disabled */
- if (unlikely(!axd->cmd.watchdogenabled))
return;
There's generally no need for unlikely() annotations outside of hot paths.
- /* stop the watchdog timer until we restart */
- del_timer(&axd->watchdogtimer);
I'd expect del_timer_sync() to make sure that the timer stopped.
- if (!axd_get_flag(&axd->cmd.fw_stopped_flg)) {
/* ping the firmware by requesting its version info */
axd_cmd_get_version(&axd->cmd, &major, &minor, &patch);
if (!major && !minor && !patch) {
dev_warn(axd->dev, "Firmware stopped responding...\n");
axd_set_flag(&axd->cmd.fw_stopped_flg, 1);
} else {
goto out;
}
- }
It might be useful to display the firmware version we loaded.
- axd_platform_print_regs();
- dev_warn(axd->dev, "Reloading AXD firmware...\n");
This is going to get noisy and isn't adding much.
- /* wake up any task sleeping on command response */
- wake_up(&axd->cmd.wait);
- /* give chance to user land tasks to react to the crash */
- ssleep(2);
This looks horribly racy, I'd expect us to be trashing and/or killing off any active work and resources here.
+static void axd_watchdog_timer(unsigned long arg) +{
- struct axd_dev *axd = (struct axd_dev *)arg;
- /* skip if watchdog is not enabled */
- if (unlikely(!axd->cmd.watchdogenabled))
goto out;
- schedule_work(&axd->watchdogwork);
- return;
+out:
- mod_timer(&axd->watchdogtimer, jiffies + WATCHDOG_TIMEOUT);
+}
So we have a timer that just schedules some work? Why not just schedule_delayed_work()?
- /*
* Verify that the firmware is ready. In normal cases the firmware
* should start immediately, but to be more robust we do this
* verification and give the firmware a chance of 3 seconds to be ready
* otherwise we exit in failure.
*/
- for (i = 0; i < AXD_LDFW_RETRIES; i++) {
axd_cmd_get_version(&axd->cmd, &major, &minor, &patch);
if (major || minor || patch) {
/* firmware is ready */
break;
}
/* if we couldn't read the version after 3 tries, error */
if (i == AXD_LDFW_RETRIES - 1) {
dev_err(axd->dev, "Failed to communicate with the firmware\n");
ret = -EIO;
goto error;
}
/* wait for 10 ms for the firmware to start */
msleep(10);
- }
- dev_info(axd->dev, "Running firmware version %u.%u.%u %s\n",
major, minor, patch, axd_hdr_get_build_str());
Why is this code not shared with the restart case?
- ret = of_property_read_u32_array(of_node, "gic-irq", val, 2);
- if (ret) {
dev_err(&pdev->dev,
"'gic-irq' parameter must be set\n");
return ret;
- }
This appears to have a DT binding but the binding is not documented. All new DT bindings must be documented. I'm concerned that some of the properties being read from DT may not be ideal here...
On 08/26/2015 07:37 PM, Mark Brown wrote:
On Mon, Aug 24, 2015 at 01:39:12PM +0100, Qais Yousef wrote:
+#define THREAD_COUNT 4
This is a very generic name that looks likely to collide with something else, please namespace.
OK.
+#define AXD_INPUT_DESCRIPTORS 10 +struct axd_input {
- struct axd_buffer_desc descriptors[AXD_INPUT_DESCRIPTORS];
+};
Where do these numbers come from? Are they hardware limits or something else?
These numbers are what the firmware is designed to work with. We had to set a limit, and 10 seemed a good one for our purposes. We don't expect to need to change this number.
+/* this is required by MIPS ioremap_cachable() */ +#include <asm/pgtable.h>
Don't work around this here, fix it in the relevant header.
Will do.
+#define AXD_BASE_VADDR 0xD0000000
This sounds like something that is going to be platform dependent; should this be supplied from board configuration?
I don't expect this to change. Can we add the configuration later if we hit the need to change it?
+extern struct snd_compr_ops axd_compr_ops;
Prototype shared definitions in headers not in C files please so we know the definition matches.
OK.
+static struct snd_soc_dai_driver axd_dai[] = {
- {
Why an array with only one entry?
Will fix it.
- if (!*offp) {
unsigned int flags = axd_platform_lock();
unsigned int log_offset = ioread32(log_addr);
unsigned int log_wrapped = ioread32(log_addr + 8);
char __iomem *log_buff = (char __iomem *)(log_addr + 12);
/* new read from beginning, fill up our internal buffer */
if (!log_wrapped) {
memcpy_fromio(axd->log_rbuf, log_buff, log_offset);
axd->log_rbuf_rem = log_offset;
} else {
char __iomem *pos = log_buff + log_offset;
unsigned int rem = log_size - log_offset;
memcpy_fromio(axd->log_rbuf, pos, rem);
memcpy_fromio(axd->log_rbuf + rem, log_buff, log_offset);
axd->log_rbuf_rem = log_size;
}
axd_platform_unlock(flags);
I didn't see the lock being taken?
The lock is the first line in the block (unsigned int flags = axd_platform_lock()). I'll tidy it up to make it more readable.
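Something like this rough (untested) sketch of the same code, so the lock call stands on its own line:

        if (!*offp) {
                unsigned int flags;
                unsigned int log_offset, log_wrapped;
                char __iomem *log_buff;

                flags = axd_platform_lock();
                log_offset = ioread32(log_addr);
                log_wrapped = ioread32(log_addr + 8);
                log_buff = (char __iomem *)(log_addr + 12);

                /* ... same copy logic as before ... */

                axd_platform_unlock(flags);
        }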
+static ssize_t axd_write_mask(struct file *filep,
const char __user *buff, size_t count, loff_t *offp)
+{
- struct axd_dev *axd = filep->f_inode->i_private;
- unsigned int mask;
- char buffer[32] = {};
- int ret;
- /* ensure we always have null at the end */
- ret = copy_from_user(buffer, buff, min(31u, count));
- if (ret < 0)
return ret;
- if (!kstrtouint(buffer, 0, &mask))
axd_write_reg(&axd->cmd, AXD_REG_DEBUG_MASK, mask);
What are we writing here? If we're going behind the driver's back on something that might confuse it, it's generally better to taint the kernel so we know dodgy stuff happened later on.
The debug mask causes the AXD firmware to provide more or less debug information. We are not going behind the driver's back.
+static void axd_debugfs_create(struct axd_dev *axd) +{
- axd->debugfs = debugfs_create_dir(dev_name(axd->dev), NULL);
- if (IS_ERR_OR_NULL(axd->debugfs)) {
dev_err(axd->dev, "failed to create debugfs node\n");
return;
- }
It'd be nicer to create this under the relevant ASoC debugfs directory so it's easier to find.
Sure. I'll try to find an example and follow what it does.
+#ifdef CONFIG_CRYPTO_LZO +#include <linux/crypto.h>
This include should be with all the other includes, not down here.
Was trying to reduce the ifdefery. Will fix.
- size = axd->fw_size;
- cached_fw_base = (char *)CAC_ADDR((int)axd->fw_base_m);
- ret = crypto_comp_decompress(tfm, fw->data + 8,
fw->size - 8, cached_fw_base, &size);
- if (ret)
dev_err(axd->dev, "Failed to decompress the firmware\n");
Print return codes if you get them.
Will do.
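E.g. something like:

        ret = crypto_comp_decompress(tfm, fw->data + 8,
                                fw->size - 8, cached_fw_base, &size);
        if (ret)
                dev_err(axd->dev,
                        "Failed to decompress the firmware: %d\n", ret);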
- if (size != axd->fw_size) {
dev_err(axd->dev, "Uncompressed file size doesn't match reported file size\n");
ret = -EINVAL;
- }
Should we be checking this if the decompression failed?
Nope. I'll fix it.
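i.e. something along these lines:

        if (!ret && size != axd->fw_size) {
                dev_err(axd->dev,
                        "Uncompressed size %u doesn't match reported size %u\n",
                        size, axd->fw_size);
                ret = -EINVAL;
        }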
+} +#else /* !CONFIG_CRYPTO_LZO */ +static int decompress_fw(struct axd_dev *axd, const struct firmware *fw)
Blank lines between things please.
OK.
+{
- dev_err(axd->dev, "The firmware must be lzo decompressed first, compile driver again with CONFIG_CRYPTO_LZO enabled in kernel or do the decompression in user space.\n");
Please split this up into a few prints for wrapping, similarly in several other places.
OK. I thought the convention for strings was to leave them as-is to allow grepping. I'll fix it.
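Something like the following sketch, keeping each displayed string whole:

        dev_err(axd->dev,
                "The firmware must be LZO decompressed first.\n");
        dev_err(axd->dev,
                "Enable CONFIG_CRYPTO_LZO in the kernel or decompress it in user space.\n");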
- return -EIO;
-ENOTSUPP.
OK.
return -EIO;
- }
- /*
More vertical blanks missing.
OK.
* We copy through the cache, fw will do the necessary cache
* flushes and syncing at startup.
* Copying from uncached makes it more difficult for the
* firmware to keep the caches coherent with memory when it sets
* tlbs and start running.
*/
- memcpy_toio((void *)cached_fw_base, fw->data, fw->size);
Why the cast here? I'm also not seeing where we handled the copying to I/O in the decompression case?
I couldn't avoid the cast. If cached_fw_base is 'void *' I'll get a warning when initialising cached_fw_base from CAC_ADDR(). So I have to cast either here or there; I chose here. If I pass axd->fw_base_m directly I hit the issue described in the commit message.
Good point. When decompressing crypto_comp_decompress() will write directly to the memory. It is safe but it doesn't go through the correct API. Not sure what I can do here.
- dev_info(axd->dev, "Loading firmware at 0x%p ...\n", axd->fw_base_m);
This should be _dbg() at most, otherwise it's going to get noisy.
- t0_new_pc = (unsigned long) axd->fw_base_m + (t0_new_pc - AXD_BASE_VADDR);
Those casts look fishy...
I am happy to try something else. axd->fw_base_m is of type void __iomem * but we want to do some arithmetic on it. Is there a better way to do it?
- for (i = 0; i < AXD_LDFW_RETRIES; i++) {
ret = axd_wait_ready(axd_cmd->message);
if (!ret) {
/*
* Let the firmware know the address of the buffer
* region
*/
ret = axd_write_reg(axd_cmd,
AXD_REG_BUFFER_BASE, axd->buf_base_p);
if (ret) {
dev_err(axd->dev,
"Failed to setup buffers base address\n");
Again print errors please.
goto out;
}
return 0;
}
- }
I'm not seeing any diagnostics if we fall out of the retry loop here?
Will add one.
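i.e. something like this just before the out: label (sketch):

        /* all retries exhausted without the firmware signalling ready */
        dev_err(axd->dev,
                "Firmware didn't become ready after %d retries\n",
                AXD_LDFW_RETRIES);
        ret = -ETIMEDOUT;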
+static void axd_reset(struct work_struct *work) +{
- unsigned int major, minor, patch;
- int i;
- struct axd_dev *axd = container_of(work, struct axd_dev, watchdogwork);
- /* if we got a fatal error, don't reset if watchdog is disabled */
- if (unlikely(!axd->cmd.watchdogenabled))
return;
There's generally no need for unlikely() annotations outside of hot paths.
OK.
- /* stop the watchdog timer until we restart */
- del_timer(&axd->watchdogtimer);
I'd expect del_timer_sync() to make sure that the timer stopped.
OK.
- if (!axd_get_flag(&axd->cmd.fw_stopped_flg)) {
/* ping the firmware by requesting its version info */
axd_cmd_get_version(&axd->cmd, &major, &minor, &patch);
if (!major && !minor && !patch) {
dev_warn(axd->dev, "Firmware stopped responding...\n");
axd_set_flag(&axd->cmd.fw_stopped_flg, 1);
} else {
goto out;
}
- }
It might be useful to display the firmware version we loaded.
OK.
- axd_platform_print_regs();
- dev_warn(axd->dev, "Reloading AXD firmware...\n");
This is going to get noisy and isn't adding much.
OK.
- /* wake up any task sleeping on command response */
- wake_up(&axd->cmd.wait);
- /* give chance to user land tasks to react to the crash */
- ssleep(2);
This looks horribly racy, I'd expect us to be trashing and/or killing off any active work and resources here.
OK. I was trying to play nicely by giving userland the chance to respond to the -ERESTART that would be sent when aborting any pending reads/writes.
Are you suggesting to send SIGKILL using force_sig()?
+static void axd_watchdog_timer(unsigned long arg) +{
- struct axd_dev *axd = (struct axd_dev *)arg;
- /* skip if watchdog is not enabled */
- if (unlikely(!axd->cmd.watchdogenabled))
goto out;
- schedule_work(&axd->watchdogwork);
- return;
+out:
- mod_timer(&axd->watchdogtimer, jiffies + WATCHDOG_TIMEOUT);
+}
So we have a timer that just schedules some work? Why not just schedule_delayed_work()?
Either it wasn't there when this was first written or it was missed. In either case, thanks for the suggestion; I'll change it.
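Roughly along these lines (untested sketch, assuming watchdogwork becomes a struct delayed_work and that WATCHDOG_TIMEOUT is already a jiffies delta):

        /* at start-up, replacing axd_start_watchdog() */
        INIT_DELAYED_WORK(&axd->watchdogwork, axd_reset);
        schedule_delayed_work(&axd->watchdogwork, WATCHDOG_TIMEOUT);

        /* and in axd_reset(), instead of re-arming the timer */
        struct axd_dev *axd = container_of(to_delayed_work(work),
                                           struct axd_dev, watchdogwork);
        ...
        schedule_delayed_work(&axd->watchdogwork, WATCHDOG_TIMEOUT);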
- /*
* Verify that the firmware is ready. In normal cases the firmware
* should start immediately, but to be more robust we do this
* verification and give the firmware a chance of 3 seconds to be ready
* otherwise we exit in failure.
*/
- for (i = 0; i < AXD_LDFW_RETRIES; i++) {
axd_cmd_get_version(&axd->cmd, &major, &minor, &patch);
if (major || minor || patch) {
/* firmware is ready */
break;
}
/* if we couldn't read the version after 3 tries, error */
if (i == AXD_LDFW_RETRIES - 1) {
dev_err(axd->dev, "Failed to communicate with the firmware\n");
ret = -EIO;
goto error;
}
/* wait for 10 ms for the firmware to start */
msleep(10);
- }
- dev_info(axd->dev, "Running firmware version %u.%u.%u %s\n",
major, minor, patch, axd_hdr_get_build_str());
Why is this code not shared with the restart case?
I didn't think it was necessary, but I see now that it would be better to move it inside axd_fw_start().
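Probably as a small helper (hypothetical name, rough sketch) called from both axd_create() and the reset path:

static int axd_wait_fw_version(struct axd_dev *axd, unsigned int *major,
                               unsigned int *minor, unsigned int *patch)
{
        int i;

        for (i = 0; i < AXD_LDFW_RETRIES; i++) {
                axd_cmd_get_version(&axd->cmd, major, minor, patch);
                if (*major || *minor || *patch)
                        return 0;
                /* wait 10 ms for the firmware to start */
                msleep(10);
        }

        return -EIO;
}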
- ret = of_property_read_u32_array(of_node, "gic-irq", val, 2);
- if (ret) {
dev_err(&pdev->dev,
"'gic-irq' parameter must be set\n");
return ret;
- }
This appears to have a DT binding but the binding is not documented. All new DT bindings must be documented. I'm concerned that some of the properties being read from DT may not be ideal here...
It is documented in a different patch. Sorry, I think I only added the DT maintainers to the CC for that patch and sent it to the ALSA list. I'll be more careful in the next series to include all the ALSA maintainers on all patches.
Yes, the DT binding will need to be enhanced. There's a separate discussion, generated by one of the patches in this series, about how IPIs should be defined in DT.
See this
https://lkml.org/lkml/2015/8/26/713
Again sorry for not explicitly adding you to the CC list for all the patches.
Thanks, Qais
On Thu, Aug 27, 2015 at 01:15:51PM +0100, Qais Yousef wrote:
On 08/26/2015 07:37 PM, Mark Brown wrote:
On Mon, Aug 24, 2015 at 01:39:12PM +0100, Qais Yousef wrote:
+#define AXD_INPUT_DESCRIPTORS 10 +struct axd_input {
- struct axd_buffer_desc descriptors[AXD_INPUT_DESCRIPTORS];
+};
Where do these numbers come from? Are they hardware limits or something else?
These numbers are what the firmware is designed to work with. We had to set a limit, and 10 seemed a good one for our purposes. We don't expect to need to change this number.
So we have hard coded numbers in the firmware that we need in the driver but we can't read those numbers back from the firmware. That's sad.
+#define AXD_BASE_VADDR 0xD0000000
This sounds like something that is going to be platform dependent; should this be supplied from board configuration?
I don't expect this to change. Can we add the configuration later if we hit the need to change it?
It should be trivial to make things configurable shouldn't it?
- if (!*offp) {
unsigned int flags = axd_platform_lock();
unsigned int log_offset = ioread32(log_addr);
unsigned int log_wrapped = ioread32(log_addr + 8);
char __iomem *log_buff = (char __iomem *)(log_addr + 12);
/* new read from beginning, fill up our internal buffer */
if (!log_wrapped) {
memcpy_fromio(axd->log_rbuf, log_buff, log_offset);
axd->log_rbuf_rem = log_offset;
} else {
char __iomem *pos = log_buff + log_offset;
unsigned int rem = log_size - log_offset;
memcpy_fromio(axd->log_rbuf, pos, rem);
memcpy_fromio(axd->log_rbuf + rem, log_buff, log_offset);
axd->log_rbuf_rem = log_size;
}
axd_platform_unlock(flags);
I didn't see the lock being taken?
The lock is the first line in the block (unsigned int flags = axd_platform_lock()). I'll tidy it up to make it more readable.
It's very bad practice to bury lock taking in with the variable declaration.
+#ifdef CONFIG_CRYPTO_LZO +#include <linux/crypto.h>
This include should be with all the other includes, not down here.
Was trying to reduce the ifdefery. Will fix.
You don't need any ifdefs for the include, you can just include the header.
+{
- dev_err(axd->dev, "The firmware must be lzo decompressed first, compile driver again with CONFIG_CRYPTO_LZO enabled in kernel or do the decompression in user space.\n");
Please split this up into a few prints for wrapping, similarly in several other places.
OK. I thought the convention for strings was to leave them as-is to allow grepping. I'll fix it.
You should keep strings that are displayed as a single string together but if you are splitting something in the output then that split won't hurt grepping in the source.
* We copy through the cache, fw will do the necessary cache
* flushes and syncing at startup.
* Copying from uncached makes it more difficult for the
* firmware to keep the caches coherent with memory when it sets
* tlbs and start running.
*/
- memcpy_toio((void *)cached_fw_base, fw->data, fw->size);
Why the cast here? I'm also not seeing where we handled the copying to I/O in the decompression case?
I couldn't avoid the cast. If cached_fw_base is 'void *' I'll get a warning when initialising cached_fw_base from CAC_ADDR().
Why do you get a warning from that? Perhaps the warnings are trying to tell us something...
Good point. When decompressing crypto_comp_decompress() will write directly to the memory. It is safe but it doesn't go through the correct API. Not sure what I can do here.
Uncompress to a buffer then write that buffer to the final destination?
- dev_info(axd->dev, "Loading firmware at 0x%p ...\n", axd->fw_base_m);
This should be _dbg() at most, otherwise it's going to get noisy.
- t0_new_pc = (unsigned long) axd->fw_base_m + (t0_new_pc - AXD_BASE_VADDR);
Those casts look fishy...
I am happy to try something else. axd->fw_base_m is of type void __iomem * but we want to do some arithmetic on it. Is there a better way to do it?
Pointer arithmetic or converting it to a number?
- /* wake up any task sleeping on command response */
- wake_up(&axd->cmd.wait);
- /* give chance to user land tasks to react to the crash */
- ssleep(2);
This looks horribly racy, I'd expect us to be trashing and/or killing off any active work and resources here.
OK. I was trying to play nicely by giving userland the chance to respond to the -ERESTART that would be sent when aborting any pending reads/writes.
Are you suggesting to send SIGKILL using force_sig()?
No, I'm suggesting tearing down the kernel side of any work and kicking errors back to userspace if it continues to interact with anything that was ongoing.
On 08/27/2015 04:32 PM, Mark Brown wrote:
On Thu, Aug 27, 2015 at 01:15:51PM +0100, Qais Yousef wrote:
On 08/26/2015 07:37 PM, Mark Brown wrote:
On Mon, Aug 24, 2015 at 01:39:12PM +0100, Qais Yousef wrote:
+#define AXD_INPUT_DESCRIPTORS 10 +struct axd_input {
- struct axd_buffer_desc descriptors[AXD_INPUT_DESCRIPTORS];
+};
Where do these numbers come from? Are they hardware limits or something else?
These numbers are what the firmware is designed to work with. We had to set a limit, and 10 seemed a good one for our purposes. We don't expect to need to change this number.
So we have hard coded numbers in the firmware that we need in the driver but we can't read those numbers back from the firmware. That's sad.
+#define AXD_BASE_VADDR 0xD0000000
This sounds like something that is going to be platform dependent; should this be supplied from board configuration?
I don't expect this to change. Can we add the configuration later if we hit the need to change it?
It should be trivial to make things configurable shouldn't it?
Yes, and I am all for configurability, but I don't think it makes sense here. AXD will always have its own MMU and will not share virtual address space, so the possibility of us wanting to move this somewhere else is really very thin. Also, I don't think this is the kind of detail we need to concern the user with. I'll see if I can make the binary header parsing more flexible so we can add more info like this (and the limit above) in the future and be more future-proof.
- if (!*offp) {
unsigned int flags = axd_platform_lock();
unsigned int log_offset = ioread32(log_addr);
unsigned int log_wrapped = ioread32(log_addr + 8);
char __iomem *log_buff = (char __iomem *)(log_addr + 12);
/* new read from beginning, fill up our internal buffer */
if (!log_wrapped) {
memcpy_fromio(axd->log_rbuf, log_buff, log_offset);
axd->log_rbuf_rem = log_offset;
} else {
char __iomem *pos = log_buff + log_offset;
unsigned int rem = log_size - log_offset;
memcpy_fromio(axd->log_rbuf, pos, rem);
memcpy_fromio(axd->log_rbuf + rem, log_buff, log_offset);
axd->log_rbuf_rem = log_size;
}
axd_platform_unlock(flags);
I didn't see the lock being taken?
The lock is the first line in the block (unsigned int flags = axd_platform_lock()). I'll tidy it up to make it more readable.
It's very bad practice to bury lock taking in with the variable declaration.
Yes. I'll fix it.
+#ifdef CONFIG_CRYPTO_LZO +#include <linux/crypto.h>
This include should be with all the other includes, not down here.
Was trying to reduce the ifdefery. Will fix.
You don't need any ifdefs for the include, you can just include the header.
+{
- dev_err(axd->dev, "The firmware must be lzo decompressed first, compile driver again with CONFIG_CRYPTO_LZO enabled in kernel or do the decompression in user space.\n");
Please split this up into a few prints for wrapping, similarly in several other places.
OK. I thought the convention for strings was to leave them as-is to allow grepping. I'll fix it.
You should keep strings that are displayed as a single string together but if you are splitting something in the output then that split won't hurt grepping in the source.
* We copy through the cache, fw will do the necessary cache
* flushes and syncing at startup.
* Copying from uncached makes it more difficult for the
* firmware to keep the caches coherent with memory when it sets
* tlbs and start running.
*/
- memcpy_toio((void *)cached_fw_base, fw->data, fw->size);
Why the cast here? I'm also not seeing where we handled the copying to I/O in the decompression case?
I couldn't avoid the cast. If cached_fw_base is 'void *' I'll get a warning when initialising cached_fw_base from CAC_ADDR().
Why do you get a warning from that? Perhaps the warnings are trying to tell us something...
Because we try to assign an int to a pointer, so the warning is 'makes pointer from integer without a cast'. To convert an address from uncached to cached we need to convert it to an integer, as on MIPS it's a case of adding or subtracting a value and then converting the result back to its original form. I'll see if I can find a better way to fix the coherency issue when we copy through uncached.
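For reference, making the integer round-trip explicit (and using unsigned long rather than int) would look something like this sketch of what the current casts amount to:

        /* uncached alias -> integer -> cached alias */
        unsigned long uncached = (unsigned long __force)axd->fw_base_m;

        cached_fw_base = (char *)CAC_ADDR(uncached);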
Good point. When decompressing crypto_comp_decompress() will write directly to the memory. It is safe but it doesn't go through the correct API. Not sure what I can do here.
Uncompress to a buffer then write that buffer to the final destination?
Yes, but the binary could be several MiB, so we can't get a temporary buffer that large. If the crypto API allows decompressing in steps we can use a small buffer and move the data iteratively. I'll have a look.
- dev_info(axd->dev, "Loading firmware at 0x%p ...\n", axd->fw_base_m);
This should be _dbg() at most, otherwise it's going to get noisy.
- t0_new_pc = (unsigned long) axd->fw_base_m + (t0_new_pc - AXD_BASE_VADDR);
Those casts look fishy...
I am happy to try something else. axd->fw_base_m is of type void __iomem * but we want to do some arithmetic on it. Is there a better way to do it?
Pointer arithmetic or converting it to a number?
We are just converting to a number.
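If it's really just a number conversion, maybe something like this sketch is a little clearer than the inline casts:

        unsigned long fw_base = (unsigned long __force)axd->fw_base_m;

        t0_new_pc = fw_base + (t0_new_pc - AXD_BASE_VADDR);
        axd_platform_set_pc(t0_new_pc);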
- /* wake up any task sleeping on command response */
- wake_up(&axd->cmd.wait);
- /* give chance to user land tasks to react to the crash */
- ssleep(2);
This looks horribly racy, I'd expect us to be trashing and/or killing off any active work and resources here.
OK. I was trying to play nicely by giving userland the chance to respond to the -ERESTART that would be sent when aborting any pending reads/writes. Are you suggesting to send SIGKILL using force_sig()?
No, I'm suggesting tearing down the kernel side of any work and kicking errors back to userspace if it continues to interact with anything that was ongoing.
OK. This is what we do (see my other email about abort). I'll have a think about a way to get rid of the ssleep(). Any ideas are welcome.
Thanks, Qais
On Fri, Aug 28, 2015 at 10:22:57AM +0100, Qais Yousef wrote:
On 08/27/2015 04:32 PM, Mark Brown wrote:
On Thu, Aug 27, 2015 at 01:15:51PM +0100, Qais Yousef wrote:
+#define AXD_BASE_VADDR 0xD0000000
This sounds like something that is going to be platform dependent; should this be supplied from board configuration?
I don't expect this to change. Can we add the configuration later if we hit the need to change it?
It should be trivial to make things configurable shouldn't it?
Yes, and I am all for configurability, but I don't think it makes sense here. AXD will always have its own MMU and will not share virtual address space, so the possibility of us wanting to move this somewhere else is really very thin. Also, I don't think this is the kind of detail we need to concern the user with. I'll see if I can make the binary header parsing more flexible so we can add more info like this (and the limit above) in the future and be more future-proof.
So this is a virtual address in the memory map of the DSP? That's not what I thought it was.
- memcpy_toio((void *)cached_fw_base, fw->data, fw->size);
Why the cast here? I'm also not seeing where we handled the copying to I/O in the decompression case?
I couldn't avoid the cast. If cached_fw_base is 'void *' I'll get a warning when initialising cached_fw_base from CAC_ADDR().
Why do you get a warning from that? Perhaps the warnings are trying to tell us something...
Because we try to assign an int to a pointer, so the warning is 'makes pointer from integer without a cast'. To convert an address from uncached to cached we need to convert it to an integer, as on MIPS it's a case of adding or subtracting a value and then converting the result back to its original form. I'll see if I can find a better way to fix the coherency issue when we copy through uncached.
Why can't you just use pointer arithmetic?
Good point. When decompressing crypto_comp_decompress() will write directly to the memory. It is safe but it doesn't go through the correct API. Not sure what I can do here.
Uncompress to a buffer then write that buffer to the final destination?
Yes, but the binary could be several MiB, so we can't get a temporary buffer that large. If the crypto API allows decompressing in steps we can use a small buffer and move the data iteratively. I'll have a look.
A few megabytes doesn't seem like that big an ask (it's not *nice* but it's doable with vmalloc()). Iteratively copying is nicer though.
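Something like this, as an untested sketch of the vmalloc() variant, which also lets the copy go through memcpy_toio() and drops the CAC_ADDR() dance entirely:

        u8 *tmp = vmalloc(axd->fw_size);

        if (!tmp) {
                ret = -ENOMEM;
                goto out;
        }

        size = axd->fw_size;
        ret = crypto_comp_decompress(tfm, fw->data + 8, fw->size - 8,
                                     tmp, &size);
        if (!ret)
                memcpy_toio(axd->fw_base_m, tmp, size);

        vfree(tmp);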
- /* wake up any task sleeping on command response */
- wake_up(&axd->cmd.wait);
- /* give chance to user land tasks to react to the crash */
- ssleep(2);
This looks horribly racy, I'd expect us to be trashing and/or killing off any active work and resources here.
OK. I was trying to play nicely by giving the chance to userland to repond to -ERESTART which would be sent from aborting any pending reads/writes. Are you suggesting to send SIGKILL using force_sig()?
No, I'm suggesting tearing down the kernel side of any work and kicking errors back to userspace if it continues to interact with anything that was ongoing.
OK. This is what we do (see my other email about abort). I'll have a think about a way to get rid of the ssleep(). Any ideas are welcome.
Just delete it?
These files provide functions to get information from the fw binary header.
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Liam Girdwood lgirdwood@gmail.com Cc: Mark Brown broonie@kernel.org Cc: Jaroslav Kysela perex@perex.cz Cc: Takashi Iwai tiwai@suse.com Cc: linux-kernel@vger.kernel.org --- sound/soc/img/axd/axd_hdr.c | 64 +++++++++++++++++++++++++++++++++++++++++++++ sound/soc/img/axd/axd_hdr.h | 24 +++++++++++++++++ 2 files changed, 88 insertions(+) create mode 100644 sound/soc/img/axd/axd_hdr.c create mode 100644 sound/soc/img/axd/axd_hdr.h
diff --git a/sound/soc/img/axd/axd_hdr.c b/sound/soc/img/axd/axd_hdr.c new file mode 100644 index 000000000000..7be3d11df120 --- /dev/null +++ b/sound/soc/img/axd/axd_hdr.c @@ -0,0 +1,64 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * Helper functions to parse AXD Header in the firmware binary. + */ +#include <linux/kernel.h> + +#include "axd_api.h" +#include "axd_hdr.h" + +static struct axd_hdr *hdr; + +static void dump_hdr(void) +{ + unsigned int offset = 0; + unsigned long address = (unsigned long)hdr; + + pr_debug("header <0x%08lX>:\n", address); + while (offset <= sizeof(*hdr)) { + pr_debug("0x%08X\t", *(unsigned int *)(address+offset)); + offset += 4; + if ((offset % (4*4)) == 0) + pr_debug("\n"); + } + pr_debug("\n"); +} + +void axd_hdr_init(unsigned long address) +{ + hdr = (struct axd_hdr *)address; + dump_hdr(); +} + +unsigned long axd_hdr_get_pc(unsigned int thread) +{ + if (thread >= THREAD_COUNT) + return -1; + return hdr->thread_pc[thread]; +} + +unsigned long axd_hdr_get_cmdblock_offset(void) +{ + pr_debug("cmdblock_offset = 0x%08X\n", hdr->cmd_block_offset); + return hdr->cmd_block_offset; +} + +char *axd_hdr_get_build_str(void) +{ + return hdr->build_str; +} + +unsigned long axd_hdr_get_log_offset(void) +{ + return hdr->log_offset; +} diff --git a/sound/soc/img/axd/axd_hdr.h b/sound/soc/img/axd/axd_hdr.h new file mode 100644 index 000000000000..dc0b1e3be5a2 --- /dev/null +++ b/sound/soc/img/axd/axd_hdr.h @@ -0,0 +1,24 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * Helper functions to parse AXD Header in the firmware binary + */ +#ifndef AXD_HDR_H_ +#define AXD_HDR_H_ + +void axd_hdr_init(unsigned long address); +unsigned long axd_hdr_get_pc(unsigned int thread); +unsigned long axd_hdr_get_cmdblock_offset(void); +char *axd_hdr_get_build_str(void); +unsigned long axd_hdr_get_log_offset(void); + +#endif /* AXD_HDR_H_ */
These files support initialising and managing access to the shared buffers area in memory that is used to exchange data between AXD and Linux.
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Liam Girdwood lgirdwood@gmail.com Cc: Mark Brown broonie@kernel.org Cc: Jaroslav Kysela perex@perex.cz Cc: Takashi Iwai tiwai@suse.com Cc: linux-kernel@vger.kernel.org --- sound/soc/img/axd/axd_buffers.c | 243 ++++++++++++++++++++++++++++++++++++++++ sound/soc/img/axd/axd_buffers.h | 74 ++++++++++++ 2 files changed, 317 insertions(+) create mode 100644 sound/soc/img/axd/axd_buffers.c create mode 100644 sound/soc/img/axd/axd_buffers.h
diff --git a/sound/soc/img/axd/axd_buffers.c b/sound/soc/img/axd/axd_buffers.c new file mode 100644 index 000000000000..891344a806f6 --- /dev/null +++ b/sound/soc/img/axd/axd_buffers.c @@ -0,0 +1,243 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD generic buffer management API. + */ +#include <linux/err.h> +#include <linux/slab.h> + +#include "axd_buffers.h" + +/** + * axd_buffer_init - sets up axd buffer as a pool of fixed sized buffers. + * @address: starting address of the buffer as set up in the system + * @total_size: total size of available buffer + * @element_size: size of each buffer element + * + * axd_buffer_t *buffer is a memory pool of size @element_size and starting at + * address @address and of @total_size size. + */ +static int bufferq_init(struct axd_bufferq *bufferq, const char *name, + char *address, unsigned int num_elements, + unsigned int element_size, unsigned int nonblock) +{ + int i; + char **queue; + unsigned int *size; + + strncpy(bufferq->name, name, 16); + bufferq->stride = element_size; + bufferq->max = num_elements; + bufferq->rd_idx = 0; + bufferq->wr_idx = 0; + bufferq->nonblock = nonblock; + queue = kcalloc(num_elements, sizeof(char *), GFP_KERNEL); + if (!queue) + return -ENOMEM; + bufferq->queue = queue; + size = kcalloc(num_elements, sizeof(unsigned int), GFP_KERNEL); + if (!size) { + kfree(queue); + bufferq->queue = NULL; + return -ENOMEM; + } + bufferq->size = size; + /* + * setup the queue with all available buffer addresses if the base + * address is passed. Set it up as emptry if base address is NULL. + */ + if (address) { + for (i = 0; i < num_elements; i++) { + queue[i] = address + (element_size * i); + size[i] = element_size; + } + sema_init(&bufferq->rd_sem, num_elements); + sema_init(&bufferq->wr_sem, 0); + } else { + for (i = 0; i < num_elements; i++) { + queue[i] = NULL; + size[i] = element_size; + } + sema_init(&bufferq->rd_sem, 0); + sema_init(&bufferq->wr_sem, num_elements); + } + spin_lock_init(&bufferq->q_rdlock); + spin_lock_init(&bufferq->q_wrlock); + pr_debug("Initialized %s of %d elements of size %d bytes\n", + name, num_elements, element_size); + pr_debug("Address of %s: 0x%08X\n", name, (unsigned int)bufferq); + return 0; +} + +int axd_bufferq_init(struct axd_bufferq *bufferq, const char *name, + char *address, unsigned int num_elements, + unsigned int element_size, unsigned int nonblock) +{ + return bufferq_init(bufferq, + name, address, num_elements, element_size, nonblock); +} + +int axd_bufferq_init_empty(struct axd_bufferq *bufferq, const char *name, + unsigned int num_elements, unsigned int element_size, + unsigned int nonblock) +{ + return bufferq_init(bufferq, + name, NULL, num_elements, element_size, nonblock); +} + +void axd_bufferq_clear(struct axd_bufferq *bufferq) +{ + kfree(bufferq->queue); + kfree(bufferq->size); + bufferq->queue = NULL; + bufferq->size = NULL; +} + +/** + * axd_buffer_take - returns a valid buffer pointer + * @buffer: the buffers pool to be accessed + * + * This function will go into interruptible sleep if the pool is empty. 
+ */ +char *axd_bufferq_take(struct axd_bufferq *bufferq, int *buf_size) +{ + char *buf; + int ret; + + if (!bufferq->queue) + return NULL; + + pr_debug("--(%s)-- taking new buffer\n", bufferq->name); + if (bufferq->nonblock) { + ret = down_trylock(&bufferq->rd_sem); + if (ret) + return ERR_PTR(-EAGAIN); + + } else { + ret = down_interruptible(&bufferq->rd_sem); + if (ret) + return ERR_PTR(-ERESTARTSYS); + if (bufferq->abort_take) { + bufferq->abort_take = 0; + return ERR_PTR(-ERESTARTSYS); + } + } + /* + * must ensure we have one access at a time to the queue and rd_idx + * to be preemption and SMP safe + * Sempahores will ensure that we will only read after a complete write + * has finished, so we will never read and write from the same location. + */ + spin_lock(&bufferq->q_rdlock); + buf = bufferq->queue[bufferq->rd_idx]; + if (buf_size) + *buf_size = bufferq->size[bufferq->rd_idx]; + bufferq->rd_idx++; + if (bufferq->rd_idx >= bufferq->max) + bufferq->rd_idx = 0; + spin_unlock(&bufferq->q_rdlock); + up(&bufferq->wr_sem); + pr_debug("--(%s)-- took buffer <0x%08X>\n", bufferq->name, + (unsigned int)buf); + return buf; +} + +/** + * axd_buffer_put - returns a buffer to the pool. + * @buffer: the buffers pool to be accessed + * @buf: the buffer to be returned. + * + * This function will go into interruptible sleep if the pool is full. + */ +int axd_bufferq_put(struct axd_bufferq *bufferq, char *buf, int buf_size) +{ + int ret; + + if (!bufferq->queue) + return 0; + + if (buf_size < 0) + buf_size = bufferq->stride; + + pr_debug("++(%s)++ returning buffer\n", bufferq->name); + if (bufferq->nonblock) { + ret = down_trylock(&bufferq->wr_sem); + if (ret) + return -EAGAIN; + + } else { + ret = down_interruptible(&bufferq->wr_sem); + if (ret) + return -ERESTARTSYS; + if (bufferq->abort_put) { + bufferq->abort_put = 0; + return -ERESTARTSYS; + } + } + /* + * must ensure we have one access at a time to the queue and wr_idx + * to be preemption and SMP safe. + * Semaphores will ensure that we only write after a complete read has + * finished, so we will never write and read from the same location. + */ + spin_lock(&bufferq->q_wrlock); + bufferq->queue[bufferq->wr_idx] = buf; + bufferq->size[bufferq->wr_idx] = buf_size; + bufferq->wr_idx++; + if (bufferq->wr_idx >= bufferq->max) + bufferq->wr_idx = 0; + spin_unlock(&bufferq->q_wrlock); + up(&bufferq->rd_sem); + pr_debug("++(%s)++ returned buffer <0x%08X>\n", bufferq->name, + (unsigned int)buf); + return 0; +} + +int axd_bufferq_is_full(struct axd_bufferq *bufferq) +{ + int ret; + /* + * if we can't put a buffer, then we're full. + */ + ret = down_trylock(&bufferq->wr_sem); + if (!ret) + up(&bufferq->wr_sem); + return ret; +} + +int axd_bufferq_is_empty(struct axd_bufferq *bufferq) +{ + int ret; + /* + * if we can't take more buffers, then its empty. + */ + ret = down_trylock(&bufferq->rd_sem); + if (!ret) + up(&bufferq->rd_sem); + return ret; +} + +void axd_bufferq_abort_take(struct axd_bufferq *bufferq) +{ + if (axd_bufferq_is_empty(bufferq)) { + bufferq->abort_take = 1; + up(&bufferq->rd_sem); + } +} + +void axd_bufferq_abort_put(struct axd_bufferq *bufferq) +{ + if (axd_bufferq_is_full(bufferq)) { + bufferq->abort_put = 1; + up(&bufferq->wr_sem); + } +} diff --git a/sound/soc/img/axd/axd_buffers.h b/sound/soc/img/axd/axd_buffers.h new file mode 100644 index 000000000000..c585044a8f1f --- /dev/null +++ b/sound/soc/img/axd/axd_buffers.h @@ -0,0 +1,74 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. 
+ * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD generic buffer management API. + */ +#ifndef AXD_BUFFERS_H_ +#define AXD_BUFFERS_H_ + +#include <linux/semaphore.h> +#include <linux/spinlock.h> + +/** + * struct axd_bufferq - axd buffer management structure + * @name: name of the buffer queue + * @stride: the space between buffers in memory + * @max: total number of buffers this queue can handle + * @rd_idx: read index of the circular buffer + * @wr_idx: write index of the circular buffer + * @rd_sem: semaphore to block when full + * @wr_sem: semaphore to block when empty + * @q_rdlock: smp critical section protection for reads + * @q_wrlock: smp critical section protection for writes + * @queue: array of pointers to buffer addresses + * @size: array of buffer's actual amount of data it has inside or it can + * store. + * @nonblock: return an error instead of block when empty/full + * @abort_take: abort any pending blocked take operation + * @abort_put: abort any pending blocked put operation + * + * axd_bufferq takes a contiguous memory region and divides it into smaller + * buffers regions of equal size and represents it as a queue. To avoid + * excessive locking it's done as a circular buffer queue. + */ +struct axd_bufferq { + char name[16]; + unsigned int stride; + unsigned int max; + unsigned int rd_idx; + unsigned int wr_idx; + struct semaphore rd_sem; + struct semaphore wr_sem; + spinlock_t q_rdlock; + spinlock_t q_wrlock; + char **queue; + unsigned int *size; + unsigned int nonblock; + unsigned int abort_take; + unsigned int abort_put; +}; + +int axd_bufferq_init(struct axd_bufferq *bufferq, const char *name, + char *address, unsigned int num_elements, + unsigned int element_size, unsigned int nonblock); +int axd_bufferq_init_empty(struct axd_bufferq *bufferq, const char *name, + unsigned int num_elements, unsigned int element_size, + unsigned int nonblock); +void axd_bufferq_clear(struct axd_bufferq *bufferq); +char *axd_bufferq_take(struct axd_bufferq *bufferq, int *buf_size); +int axd_bufferq_put(struct axd_bufferq *bufferq, char *buf, int buf_size); +int axd_bufferq_is_full(struct axd_bufferq *bufferq); +int axd_bufferq_is_empty(struct axd_bufferq *bufferq); +void axd_bufferq_abort_take(struct axd_bufferq *bufferq); +void axd_bufferq_abort_put(struct axd_bufferq *bufferq); + +#endif /* AXD_BUFFERS_H_ */
On Mon, Aug 24, 2015 at 01:39:14PM +0100, Qais Yousef wrote:
- /*
* must ensure we have one access at a time to the queue and rd_idx
* to be preemption and SMP safe
* Sempahores will ensure that we will only read after a complete write
* has finished, so we will never read and write from the same location.
*/
In what way will semaphores ensure that we will only read after a complete write?
- buf = bufferq->queue[bufferq->rd_idx];
So buffers are always retired in the same order that they are acquired?
+int axd_bufferq_put(struct axd_bufferq *bufferq, char *buf, int buf_size) +{
- int ret;
- if (!bufferq->queue)
return 0;
- if (buf_size < 0)
buf_size = bufferq->stride;
We've got strides as well? What is that?
+void axd_bufferq_abort_take(struct axd_bufferq *bufferq) +{
- if (axd_bufferq_is_empty(bufferq)) {
bufferq->abort_take = 1;
up(&bufferq->rd_sem);
- }
+}
+void axd_bufferq_abort_put(struct axd_bufferq *bufferq) +{
- if (axd_bufferq_is_full(bufferq)) {
bufferq->abort_put = 1;
up(&bufferq->wr_sem);
- }
+}
These look *incredibly* racy. Why are they here and why are they safe?
On 08/26/2015 07:43 PM, Mark Brown wrote:
On Mon, Aug 24, 2015 at 01:39:14PM +0100, Qais Yousef wrote:
- /*
* must ensure we have one access at a time to the queue and rd_idx
* to be preemption and SMP safe
* Sempahores will ensure that we will only read after a complete write
* has finished, so we will never read and write from the same location.
*/
In what way will semaphores ensure that we will only read after a complete write?
This comment needs fixing. What it is trying to say is that if we've reached this point in the code then we're certainly allowed to modify the buffer queue and {rd, wr}_idx, because we would have blocked on the semaphore otherwise if the queue were full/empty.
Should I just remove the reference to semaphores from the comment, or is it worth rephrasing it?
Would it be better to rename {rd, wr}_{idx, sem} to {take, put}_{idx, sem}?
- buf = bufferq->queue[bufferq->rd_idx];
So buffers are always retired in the same order that they are acquired?
I don't think I get you here. axd_bufferq_take() and axd_bufferq_put() could be called in any order.
What this code is trying to do is make a contiguous memory area behave as a ring buffer, and then make that ring buffer behave as a queue. We use the semaphore counts to control how many buffers are available to take/put. rd_idx and wr_idx should always point at the next location to take/put from/to.
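Condensed, the take side of that scheme looks like this (a sketch only, not the exact driver code; bufferq_take_sketch is just an illustrative name):

#include <linux/err.h>
#include "axd_buffers.h"

/* Sketch: rd_sem counts filled slots, wr_sem counts free slots, so a
 * successful down() guarantees the slot at rd_idx is ours to read.
 */
static char *bufferq_take_sketch(struct axd_bufferq *q)
{
	char *buf;

	if (down_interruptible(&q->rd_sem))	/* sleeps while the queue is empty */
		return ERR_PTR(-ERESTARTSYS);

	spin_lock(&q->q_rdlock);		/* serialise updates of rd_idx */
	buf = q->queue[q->rd_idx];
	q->rd_idx = (q->rd_idx + 1) % q->max;
	spin_unlock(&q->q_rdlock);

	up(&q->wr_sem);				/* one more free slot for put() */
	return buf;
}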
Does this help answer your question?
+int axd_bufferq_put(struct axd_bufferq *bufferq, char *buf, int buf_size) +{
- int ret;
- if (!bufferq->queue)
return 0;
- if (buf_size < 0)
buf_size = bufferq->stride;
We've got strides as well? What is that?
We break the contiguous buffer area allocated for us into smaller buffers, each of size stride (so consecutive buffers are separated by stride bytes).
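The init presumably just does something along these lines (sketch of the assumed layout, not the actual axd_bufferq_init()):

#include "axd_buffers.h"

/* Sketch (assumed layout): carve a contiguous region into num_elements
 * buffers of element_size bytes each; q->queue is assumed to be allocated.
 */
static void bufferq_carve_sketch(struct axd_bufferq *q, char *base,
				 unsigned int num_elements,
				 unsigned int element_size)
{
	unsigned int i;

	q->stride = element_size;
	q->max = num_elements;
	for (i = 0; i < num_elements; i++)
		q->queue[i] = base + i * q->stride;	/* buffer i */
}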
+void axd_bufferq_abort_take(struct axd_bufferq *bufferq) +{
- if (axd_bufferq_is_empty(bufferq)) {
bufferq->abort_take = 1;
up(&bufferq->rd_sem);
- }
+}
+void axd_bufferq_abort_put(struct axd_bufferq *bufferq) +{
- if (axd_bufferq_is_full(bufferq)) {
bufferq->abort_put = 1;
up(&bufferq->wr_sem);
- }
+}
These look *incredibly* racy. Why are they here and why are they safe?
If we want to restart the firmware we will need to abort any blocking reads or writes so that user space can react. I also needed that to implement nonblocking access in user space when this was a sysfs-based driver. It was important then to implement the OMX IL component correctly.
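For reference, the abort is just the flag/up() pairing from axd_buffers.c above, condensed (sketch; the function names here are illustrative):

#include <linux/err.h>
#include "axd_buffers.h"

/* The abort side wakes a sleeping taker without adding a buffer and sets a
 * flag so the woken taker knows to bail out.
 */
static void bufferq_abort_take_sketch(struct axd_bufferq *q)
{
	if (axd_bufferq_is_empty(q)) {	/* a taker may be blocked */
		q->abort_take = 1;	/* tell the woken taker to give up */
		up(&q->rd_sem);		/* wake it up */
	}
}

/* ...and the blocking take path checks the flag after waking up. */
static char *bufferq_take_abortable_sketch(struct axd_bufferq *q)
{
	if (down_interruptible(&q->rd_sem))
		return ERR_PTR(-ERESTARTSYS);
	if (q->abort_take) {		/* woken only to be aborted */
		q->abort_take = 0;
		return ERR_PTR(-ERESTARTSYS);
	}
	/* normal take path continues here (index handling omitted) */
	return q->queue[q->rd_idx];
}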
Do I need to support nonblocking reads and writes in ALSA? If I use SIGKILL when restarting, as you suggested in the other email, and nonblocking access is not important, then I can remove this.
I just looked at the code history: in the past I was sending SIGBUS to the user when we needed to restart, but then I opted for the abort approach as it allows the application to terminate gracefully (it gets EOF instead) and hides the need to restart the firmware in a better way. What do you think?
Thanks, Qais
On Thu, Aug 27, 2015 at 03:21:17PM +0100, Qais Yousef wrote:
On 08/26/2015 07:43 PM, Mark Brown wrote:
On Mon, Aug 24, 2015 at 01:39:14PM +0100, Qais Yousef wrote:
- /*
* must ensure we have one access at a time to the queue and rd_idx
* to be preemption and SMP safe
* Sempahores will ensure that we will only read after a complete write
* has finished, so we will never read and write from the same location.
*/
In what way will semaphores ensure that we will only read after a complete write?
This comment needs fixing. What it is trying to say is that if we've reached this point in the code then we're certainly allowed to modify the buffer queue and {rd, wr}_idx, because we would have blocked on the semaphore otherwise if the queue were full/empty.
Should I just remove the reference to semaphores from the comment, or is it worth rephrasing it?
Any comments need to be comprehensible.
Would it be better to rename {rd, wr}_{idx, sem} to {take, put}_{idx, sem}?
I'm not sure that helps to be honest, the main issue is that the scheme is fairly complex and unexplained.
- buf = bufferq->queue[bufferq->rd_idx];
So buffers are always retired in the same order that they are acquired?
I don't think I get you here. axd_bufferq_take() and axd_bufferq_put() could be called in any order.
Retiring buffers in the order they are acquired means that buffers are always freed in the same order they are acquired; you can't free one buffer before another that was acquired first.
What this code is trying to do is make a contiguous memory area behave as a ring buffer, and then make that ring buffer behave as a queue. We use the semaphore counts to control how many buffers are available to take/put. rd_idx and wr_idx should always point at the next location to take/put from/to.
Does this help answer your question?
No. Why are we doing this? Essentially all ALSA buffers are ring buffers handled in blocks, why does this one need this complex locking scheme?
+void axd_bufferq_abort_put(struct axd_bufferq *bufferq) +{
- if (axd_bufferq_is_full(bufferq)) {
bufferq->abort_put = 1;
up(&bufferq->wr_sem);
- }
+}
These look *incredibly* racy. Why are they here and why are they safe?
If we want to restart the firmware we will need to abort any blocking reads or writes so that user space can react. I also needed that to implement
I'm not questioning what the functions are doing; I'm questioning their implementation - it doesn't look like they are safe or reliable. They just set a flag, relying on something else to notice that the flag has been set and act appropriately before it goes on and corrupts data. That just screams concurrency issues.
nonblocking access in user space when this was a sysfs-based driver. It was important then to implement the OMX IL component correctly.
Nobody cares about OMX ILs in mainline or sysfs based interfaces.
Do I need to support nonblocking reads and writes in ALSA? If I use SIGKILL when restarting, as you suggested in the other email, and nonblocking access is not important, then I can remove this.
It would be better to support non-blocking access.
On 08/29/2015 10:47 AM, Mark Brown wrote:
On Thu, Aug 27, 2015 at 03:21:17PM +0100, Qais Yousef wrote:
On 08/26/2015 07:43 PM, Mark Brown wrote:
On Mon, Aug 24, 2015 at 01:39:14PM +0100, Qais Yousef wrote:
- /*
* must ensure we have one access at a time to the queue and rd_idx
* to be preemption and SMP safe
* Sempahores will ensure that we will only read after a complete write
* has finished, so we will never read and write from the same location.
*/
In what way will semaphores ensure that we will only read after a complete write?
This comment needs fixing. What it is trying to say is that if we've reached this point in the code then we're certainly allowed to modify the buffer queue and {rd, wr}_idx, because we would have blocked on the semaphore otherwise if the queue were full/empty. Should I just remove the reference to semaphores from the comment, or is it worth rephrasing it?
Any comments need to be comprehensible.
Would it be better to rename {rd, wr}_{idx, sem} to {take, put}_{idx, sem}?
I'm not sure that helps to be honest, the main issue is that the scheme is fairly complex and unexplained.
- buf = bufferq->queue[bufferq->rd_idx];
So buffers are always retired in the same order that they are acquired?
I don't think I get you here. axd_bufferq_take() and axd_bufferq_put() could be called in any order.
Retiring buffers in the order they are acquired means that buffers are always freed in the same order they are acquired; you can't free one buffer before another that was acquired first.
What this code is trying to do is make a contiguous memory area behave as a ring buffer, and then make that ring buffer behave as a queue. We use the semaphore counts to control how many buffers are available to take/put. rd_idx and wr_idx should always point at the next location to take/put from/to. Does this help answer your question?
No. Why are we doing this? Essentially all ALSA buffers are ring buffers handled in blocks, why does this one need this complex locking scheme?
There are two sides to this: the ALSA/driver interface and the driver/firmware one. The ALSA/driver interface is called from the ALSA ops, but the driver/firmware side is handled by the interrupt handler and workqueues. The code is trying to deal with this concurrency. Also, once AXD has consumed a buffer it sends back an interrupt to tell the driver it can reuse it; there's no guarantee that the buffers come back in the same order they were sent.
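To make the two sides concrete, the input path is roughly the following (a sketch only; desc_enqueue() and axd_datain_kick() are the static helpers in axd_cmds_pipes.c, and error paths that return the buffer to the pool are omitted):

#include <linux/err.h>
#include <linux/uaccess.h>
#include "axd_buffers.h"
#include "axd_cmds.h"

/* ALSA-ops side of an input pipe: take a free buffer (may sleep), fill it
 * from user space, hand it to the firmware and kick it.  The firmware gives
 * the buffer back later via axd_irq()/in_desc_workq(), and not necessarily
 * in the order it was sent.
 */
static int send_buffer_sketch(struct axd_pipe *pipe,
			      const char __user *ubuf, unsigned int size)
{
	char *buf = axd_bufferq_take(&pipe->desc_bufferq, NULL);

	if (IS_ERR_OR_NULL(buf))
		return -EAGAIN;		/* simplified error handling */

	if (copy_from_user(buf, ubuf, size))
		return -EFAULT;

	if (desc_enqueue(&pipe->desc_ctrl, buf, size, 0, pipe))
		return -EBUSY;		/* no free descriptor slot */

	axd_datain_kick(pipe);		/* completion arrives by interrupt */
	return 0;
}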
I hear you though. Let me see how I can simplify this :-)
+void axd_bufferq_abort_put(struct axd_bufferq *bufferq) +{
- if (axd_bufferq_is_full(bufferq)) {
bufferq->abort_put = 1;
up(&bufferq->wr_sem);
- }
+}
These look *incredibly* racy. Why are they here and why are they safe?
If we want to restart the firmware we will need to abort any blocking reads or writes so that user space can react. I also needed that to implement
I'm not questioning what the functions are doing; I'm questioning their implementation - it doesn't look like they are safe or reliable. They just set a flag, relying on something else to notice that the flag has been set and act appropriately before it goes on and corrupts data. That just screams concurrency issues.
OK. I'll see how I can rework the code to address all of your comments.
Thanks, Qais
nonblocking access in user space when this was a sysfs-based driver. It was important then to implement the OMX IL component correctly.
Nobody cares about OMX ILs in mainline or sysfs based interfaces.
Do I need to support nonblocking reads and writes in ALSA? If I use SIGKILL when restarting, as you suggested in the other email, and nonblocking access is not important, then I can remove this.
It would be better to support non-blocking access.
On Tue, Sep 01, 2015 at 11:00:42AM +0100, Qais Yousef wrote:
On 08/29/2015 10:47 AM, Mark Brown wrote:
Please delete unneeded context from replies; it makes it easier to find the new content you have added. Please also leave blank lines between paragraphs; it makes it much easier to read messages.
What this code is trying to do is make a contiguous memory area behave as a ring buffer, and then make that ring buffer behave as a queue. We use the semaphore counts to control how many buffers are available to take/put. rd_idx and wr_idx should always point at the next location to take/put from/to. Does this help answer your question?
No. Why are we doing this? Essentially all ALSA buffers are ring buffers handled in blocks, why does this one need this complex locking scheme?
There are two sides to this: the ALSA/driver interface and the driver/firmware one. The ALSA/driver interface is called from the ALSA ops, but the driver/firmware side is handled by the interrupt handler and workqueues. The code is trying to deal with this concurrency. Also, once AXD has consumed a buffer it sends back an interrupt
This is just the same as any other ALSA device...
to tell the driver it can reuse it; there's no guarantee that the buffers come back in the same order they were sent.
If that's the case I'm not sure the code is correct - it seemed to assume that the buffers were going to be retired in order.
On 09/03/2015 01:32 PM, Mark Brown wrote:
On Tue, Sep 01, 2015 at 11:00:42AM +0100, Qais Yousef wrote:
On 08/29/2015 10:47 AM, Mark Brown wrote:
Please delete unneeded context from replies; it makes it easier to find the new content you have added. Please also leave blank lines between paragraphs; it makes it much easier to read messages.
Sorry about that and for the delayed response. I added one more blank line now, hopefully this looks better.
to tell the driver it can reuse it; there's no guarantee that the buffers come back in the same order they were sent.
If that's the case I'm not sure the code is correct - it seemed to assume that the buffers were going to be retired in order.
What this code is trying to do is the same as what vring does. I'll move this driver to be rproc-based, which should hopefully make things simpler and trim all of this out, and I'll address your other review comments as well.
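By vring I mean the virtio-style scheme where the device reports completions as (descriptor id, length) pairs in a separate ring, so buffers can legitimately come back in any order. Illustrative layout only, not the actual virtio ABI:

#include <linux/types.h>

/* Essence of a vring-style completion ("used") ring: the driver posts
 * buffers by descriptor index, the device appends (id, len) entries here,
 * so completion order is independent of submission order.
 */
struct used_elem_sketch {
	u32 id;		/* index of the completed buffer descriptor */
	u32 len;	/* bytes written/consumed by the device */
};

struct used_ring_sketch {
	u16 flags;
	u16 idx;				/* bumped by the device */
	struct used_elem_sketch ring[];		/* entries, oldest first */
};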
I haven't written an rproc driver before; if there's anything that you think I need to be aware of when writing an rproc-based *ALSA* driver, I'd appreciate you pointing it out.
Thanks, Qais
On Mon, Sep 14, 2015 at 10:11:58AM +0100, Qais Yousef wrote:
I haven't written an rproc driver before; if there's anything that you think I need to be aware of when writing an rproc-based *ALSA* driver, I'd appreciate you pointing it out.
I'm not aware of anything; the biggest driver currently communicating with a DSP is the Intel one.
These files implement the important part of talking to AXD: sending and receiving data buffers.
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Liam Girdwood lgirdwood@gmail.com Cc: Mark Brown broonie@kernel.org Cc: Jaroslav Kysela perex@perex.cz Cc: Takashi Iwai tiwai@suse.com Cc: linux-kernel@vger.kernel.org --- sound/soc/img/axd/axd_cmds.c | 102 +++ sound/soc/img/axd/axd_cmds.h | 532 ++++++++++++++ sound/soc/img/axd/axd_cmds_pipes.c | 1387 ++++++++++++++++++++++++++++++++++++ 3 files changed, 2021 insertions(+) create mode 100644 sound/soc/img/axd/axd_cmds.c create mode 100644 sound/soc/img/axd/axd_cmds.h create mode 100644 sound/soc/img/axd/axd_cmds_pipes.c
diff --git a/sound/soc/img/axd/axd_cmds.c b/sound/soc/img/axd/axd_cmds.c new file mode 100644 index 000000000000..eb160f46489b --- /dev/null +++ b/sound/soc/img/axd/axd_cmds.c @@ -0,0 +1,102 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD Commands API - generic setup functions. + */ +#include "axd_api.h" +#include "axd_cmds.h" +#include "axd_cmds_internal.h" +#include "axd_module.h" + +static unsigned long __io_address; +static unsigned long __phys_address; + +void axd_cmd_init(struct axd_cmd *cmd, unsigned long cmd_address, + unsigned long io_address, unsigned long phys_address) +{ + int i; + + cmd->message = (struct axd_memory_map __iomem *)cmd_address; + mutex_init(&cmd->cm_lock); + init_waitqueue_head(&cmd->wait); + axd_set_flag(&cmd->response_flg, 0); + axd_set_flag(&cmd->fw_stopped_flg, 0); + for (i = 0; i < AXD_MAX_PIPES; i++) { + axd_cmd_inpipe_init(cmd, i); + axd_cmd_outpipe_init(cmd, i); + } + __io_address = io_address; + __phys_address = phys_address; + cmd->watchdogenabled = 1; + /* + * By default, always discard any pending buffers if an output device is + * closed before EOS is reached. + * This behaviour can be changed through kcontrol. If discard is disabled, + * then upon closing an output device before EOS is reached, it'll + * resume from where it stopped. + */ + axd_set_flag(&cmd->discard_flg, 1); + axd_set_flag(&cmd->ctrlbuf_active_flg, 0); +} + +int axd_cmd_set_pc(struct axd_cmd *cmd, unsigned int thread, unsigned long pc) +{ + if (thread >= THREAD_COUNT) + return -1; + iowrite32(pc, &cmd->message->pc[thread]); + return 0; +} + +unsigned long axd_cmd_get_datain_address(struct axd_cmd *cmd) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + + return (unsigned long) axd->buf_base_m; +} + +unsigned long axd_cmd_get_datain_size(struct axd_cmd *cmd) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + + return axd->inbuf_size; +} + +unsigned long axd_cmd_get_dataout_address(struct axd_cmd *cmd) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + + return ((unsigned long) axd->buf_base_m) + axd->inbuf_size; +} + +unsigned long axd_cmd_get_dataout_size(struct axd_cmd *cmd) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + + return axd->outbuf_size; +} + +/* + * The driver understands IO address, while f/w understands physical addresses. + * A couple of helper functions to aid in converting when exchanging buffers. + * + * NOTE: + * buf must NOT be NULL - we want this as fast as possible, so omit the check + * for NULLl + */ +inline char *axd_io_2_phys(const char *buf) +{ + return (char *)(buf - __io_address + __phys_address); +} +inline char *axd_phys_2_io(const char *buf) +{ + return (char *)(buf - __phys_address + __io_address); +} diff --git a/sound/soc/img/axd/axd_cmds.h b/sound/soc/img/axd/axd_cmds.h new file mode 100644 index 000000000000..d8f3db29eea3 --- /dev/null +++ b/sound/soc/img/axd/axd_cmds.h @@ -0,0 +1,532 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. 
+ * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD API commands Helper functions. + */ +#ifndef AXD_CMDS_H_ +#define AXD_CMDS_H_ + +#include <linux/interrupt.h> +#include <linux/io.h> +#include <linux/mutex.h> +#include <linux/semaphore.h> +#include <linux/spinlock.h> +#include <linux/wait.h> +#include "linux/workqueue.h" +#include <sound/compress_offload.h> +#include <sound/compress_params.h> + +#include "axd_api.h" +#include "axd_buffers.h" + +/** + * struct axd_desc_ctrl - axd desctriptors control structure + * @rd_idx: read index of next available descriptor + * @wr_idx: write index of empty slot ot return a descriptor to + * @rd_sem: semaphore to block when no more descriptors are available + * @wr_sem: semaphore to block when all descriptors are available + * @rd_lock: smp critical section protection for reads + * @wr_lock: smp critical section protection for writes + * @buf_desc: pointer to iomem where the descriptors are + * @num_desc: total number of descriptors provided by axd + * + * axd has a number of input and output descriptors to pass buffers around, this + * structure provides a mean for the driver to manage access to these + * descriptors. + */ +struct axd_desc_ctrl { + unsigned int rd_idx; + unsigned int wr_idx; + struct semaphore rd_sem; + struct semaphore wr_sem; + spinlock_t rd_lock; + spinlock_t wr_lock; + struct axd_buffer_desc __iomem *buf_desc; + unsigned int num_desc; +}; + +struct axd_cmd; + +/** + * struct axd_pipe - axd pipe management structure + * @work: work for top half of the interrupt + * @desc_ctrl: axd_desc_ctrl structure to manage this pipe's + * descriptors + * @desc_bufferq: buffer queue send through the descriptors + * @user_bufferq: buffer queue of buffers to be read by the user. only + * makes sense for an output pipe where the user doesn't + * have to read the returned buffers synchronously when we + * get an interrupt + * @cur_buf: pointer to the current user_bufferq being read + * @cur_buf_offset: offset of the current user_bufferq to start reading from + * @cur_buf_size: remaining size of data in current user_bufferq + * @discard_flg: a flag to indicate we should discard the remaining data + * if the user closed output node before reading all data, + * default to true. + * @enabled_flg: a flag indicates that this pipe is actively handling a + * stream + * @eos_flg: a flag indicates that eos was reached and we should do + * clean up as soon as possible + * @eos_mutex: for input pipes we need to protect against possible + * simulataneous sending of eos + * @intcount: number of interrupts received since last service. + * indicates the number of buffers services by axd. 
+ * used by top half workqueue to know how many interrupts + * it needs to service in one go + * @id: pipe number or id + * @tsk: the userland task that opened this pipe + * @buf_size: the size of the buffer this pipe is configured to use + * @current_ts_low: lower half of the 64-bit timestamp for current buffer + * @current_ts_high: top half of the 64-bit timestamp for current buffer + * @cmd: pointer to axd_cmd struct for quick access + * @buf_desc: pointer to axd_buffer_desc struct for quick access + * + * axd could provide a number of pipes each of which handles a separate stream. + * this structure manages descriptors, buffers and other control bits associated + * with each input/output pipe. + */ +struct axd_pipe { + struct work_struct work; + struct axd_desc_ctrl desc_ctrl; + struct axd_bufferq desc_bufferq; + struct axd_bufferq user_bufferq; + char *cur_buf; + unsigned int cur_buf_offset; + unsigned int cur_buf_size; + unsigned int discard_flg; + unsigned int enabled_flg; + unsigned int eos_flg; + struct mutex eos_mutex; + atomic_t intcount; + unsigned int id; + struct task_struct *tsk; + unsigned int buf_size; + u32 current_ts_low; + u32 current_ts_high; + struct axd_cmd *cmd; + struct axd_buffer_desc __iomem *buf_desc; +}; + +/** + * struct axd_cmd - axd command structure + * @message: iomapped axd massage region, see axd_memory_map struct + * @cm_lock: mutex to control access to the message region + * @wait: wait for ControlCommand response, or command completion + * @response_flg: condition variable to wait on for response + * @in_workq: array of workqueues for input pipes + * @out_workq: array of workqueues for output pipes + * @in_pipes: array of input axd_pipe structs + * @out_pipes: array of output axd_pipe structs + * @watchdogenabled: software watchdog switch + * @discard_flg: master flag to control whether to discard data when user + * closes output node + * @nonblock: operate in nonblocking mode + * @fw_stopped_flg: this flag indicates that software watchdog detected that + * the firmware is not responding + * @num_inputs: number of input pipes + * @num_outputs: number of output pipes + * @ctrlbuf_active_flg: this flag indicates ctrl buffer mechanism is in use + * @dcpp_channel_ctrl_cache: dcpp channel configuration cache + * @dcpp_band_ctrl_cache: dcpp band configuration cache + * + * manage the iomapped area to exchange messages/commands with axd + */ +struct axd_cmd { + struct axd_memory_map __iomem *message; + struct mutex cm_lock; + wait_queue_head_t wait; + unsigned int response_flg; + struct workqueue_struct *in_workq; + struct workqueue_struct *out_workq; + struct axd_pipe in_pipes[AXD_MAX_PIPES]; + struct axd_pipe out_pipes[AXD_MAX_PIPES]; + int watchdogenabled; + unsigned int discard_flg; + unsigned int nonblock; + unsigned int fw_stopped_flg; + int num_inputs; + int num_outputs; + unsigned int ctrlbuf_active_flg; + int dcpp_channel_ctrl_cache[AXD_MAX_PIPES]; + int dcpp_band_ctrl_cache[AXD_MAX_PIPES]; + unsigned int started_flg; +}; + +static inline void axd_set_flag(unsigned int *flag, unsigned int value) +{ + *flag = value; + smp_wmb(); /* guarantee smp ordering */ +} + +static inline unsigned int axd_get_flag(unsigned int *flag) +{ + smp_rmb(); /* guarantee smp ordering */ + return *flag; +} + +#define CMD_TIMEOUT (1*HZ) + +/* Generic setup API */ +void axd_cmd_init(struct axd_cmd *cmd, unsigned long cmd_address, + unsigned long io_address, unsigned long phys_address); +int axd_cmd_set_pc(struct axd_cmd *cmd, unsigned int thread, unsigned long pc); 
+unsigned long axd_cmd_get_datain_address(struct axd_cmd *cmd); +unsigned long axd_cmd_get_datain_size(struct axd_cmd *cmd); +unsigned long axd_cmd_get_dataout_address(struct axd_cmd *cmd); +unsigned long axd_cmd_get_dataout_size(struct axd_cmd *cmd); + +/* Info API */ +void axd_cmd_get_version(struct axd_cmd *cmd, int *major, + int *minor, int *patch); +int axd_cmd_get_num_pipes(struct axd_cmd *cmd, unsigned int *inpipes, + unsigned int *outpipes); +void axd_cmd_get_decoders(struct axd_cmd *cmd, struct snd_compr_caps *caps); +void axd_cmd_get_encoders(struct axd_cmd *cmd, struct snd_compr_caps *caps); +int axd_cmd_xbar_present(struct axd_cmd *cmd); +int axd_cmd_mixer_get_eqenabled(struct axd_cmd *cmd, unsigned int pipe); +void axd_cmd_mixer_get_eqmastergain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_mixer_get_eqband0gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_mixer_get_eqband1gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_mixer_get_eqband2gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_mixer_get_eqband3gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_mixer_get_eqband4gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +int axd_cmd_mixer_get_mux(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_input_get_enabled(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_input_get_source(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_input_get_codec(struct axd_cmd *cmd, unsigned int pipe); +void axd_cmd_input_get_gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_input_get_mute(struct axd_cmd *cmd, unsigned int pipe, + int *muted); +int axd_cmd_input_get_upmix(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_input_get_decoder_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec); +int axd_cmd_output_get_enabled(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_output_get_codec(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_output_get_sink(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_output_get_downmix(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_output_get_eqenabled(struct axd_cmd *cmd, unsigned int pipe); +void axd_cmd_output_get_eqmastergain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_output_get_eqband0gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_output_get_eqband1gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_output_get_eqband2gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_output_get_eqband3gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_output_get_eqband4gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain); +void axd_cmd_output_get_encoder_config(struct axd_cmd *cmd, unsigned int pipe, + char *config); +void axd_cmd_output_get_geq_power(struct axd_cmd *cmd, unsigned int pipe, + char *buf, int channel); +/* DCPP */ +int axd_cmd_output_dcpp_select_channel(struct axd_cmd *cmd, unsigned int pipe, + bool subband, unsigned int channel); +int axd_cmd_output_dcpp_select_band(struct axd_cmd *cmd, unsigned int pipe, + unsigned int band); + +unsigned int axd_cmd_output_get_dcpp_enabled(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_mode(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_channels(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_eq_mode(struct 
axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_eq_bands(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_max_delay_samples(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_channel_delay_samples(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_eq_output_volume( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_eq_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_eq_inverse_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_shift( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_shift( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_a0( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_a1( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_a2( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_b0( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_b1( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel); +unsigned int axd_cmd_output_get_dcpp_channel_eq_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_channel_eq_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_channel_eq_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_channel_eq_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_channel_eq_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_channel_eq_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_channel_eq_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_subband_eq_bands(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_enabled(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_input_channel_mask( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int 
axd_cmd_output_get_dcpp_subband_delay_samples(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_eq_output_volume( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_eq_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_eq_inverse_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_eq_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_subband_eq_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_subband_eq_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_subband_eq_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_subband_eq_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_subband_eq_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_subband_eq_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band); +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_a0( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_a1( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_a2( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_b0( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_b1( + struct axd_cmd *cmd, unsigned int pipe); +/* Config API */ +void axd_cmd_mixer_set_eqenabled(struct axd_cmd *cmd, unsigned int pipe, + int enable); +void axd_cmd_mixer_set_eqmastergain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_mixer_set_eqband0gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_mixer_set_eqband1gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_mixer_set_eqband2gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_mixer_set_eqband3gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_mixer_set_eqband4gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_mixer_set_mux(struct axd_cmd *cmd, unsigned int pipe, + int mux); +int axd_cmd_input_set_enabled(struct axd_cmd *cmd, unsigned int pipe, + int enable); +void axd_cmd_input_set_source(struct axd_cmd *cmd, unsigned int pipe, + int source); +int axd_cmd_input_set_codec(struct axd_cmd *cmd, unsigned int pipe, + int codec); +void axd_cmd_input_set_gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_input_set_mute(struct axd_cmd *cmd, unsigned int pipe, + int mute); +void axd_cmd_input_set_upmix(struct axd_cmd *cmd, unsigned int pipe, + int upmix); +int axd_cmd_input_set_decoder_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec); +int axd_cmd_output_set_enabled(struct axd_cmd *cmd, unsigned int pipe, + int enable); +int axd_cmd_output_set_codec(struct axd_cmd *cmd, unsigned int pipe, + int codec); +void axd_cmd_output_set_sink(struct axd_cmd *cmd, unsigned int pipe, + int sink); +void axd_cmd_output_set_downmix(struct axd_cmd *cmd, unsigned int pipe, + int downmix); +void axd_cmd_output_set_event(struct axd_cmd *cmd, unsigned int pipe, + int event); +void 
axd_cmd_output_set_eqenabled(struct axd_cmd *cmd, unsigned int pipe, + int enable); +void axd_cmd_output_set_eqmastergain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_output_set_eqband0gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_output_set_eqband1gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_output_set_eqband2gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_output_set_eqband3gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +void axd_cmd_output_set_eqband4gain(struct axd_cmd *cmd, unsigned int pipe, + int gain); +/* DCPP */ +int axd_cmd_output_set_dcpp_enabled(struct axd_cmd *cmd, unsigned int pipe, + int enable); +int axd_cmd_output_set_dcpp_mode(struct axd_cmd *cmd, unsigned int pipe, + unsigned int mode); +int axd_cmd_output_set_dcpp_eq_mode(struct axd_cmd *cmd, unsigned int pipe, + unsigned int mode); +int axd_cmd_output_set_dcpp_channel_delay_samples(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_output_volume(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_passthrough_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_inverse_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe, + unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_bass_shelf_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_bass_shelf_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_bass_shelf_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_bass_shelf_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_bass_shelf_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_bass_shelf_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_treble_shelf_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_treble_shelf_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_treble_shelf_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_treble_shelf_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_treble_shelf_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_treble_shelf_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, 
unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_channel_eq_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_subband_enabled(struct axd_cmd *cmd, + unsigned int pipe, int enable); +int axd_cmd_output_set_dcpp_subband_input_channel_mask(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data); +int axd_cmd_output_set_dcpp_subband_delay_samples(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_output_volume(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_passthrough_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_inverse_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_subband_eq_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data); +int axd_cmd_output_set_dcpp_subband_low_pass_filter_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data); +int axd_cmd_output_set_dcpp_subband_low_pass_filter_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data); +int axd_cmd_output_set_dcpp_subband_low_pass_filter_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data); +int axd_cmd_output_set_dcpp_subband_low_pass_filter_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data); +int axd_cmd_output_set_dcpp_subband_low_pass_filter_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data); +void axd_cmd_output_set_encoder_config(struct axd_cmd *cmd, unsigned int pipe, + const char *config); +unsigned int axd_cmd_info_get_resampler_fin(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_cmd_info_get_resampler_fout(struct axd_cmd *cmd, + unsigned int pipe); +void axd_cmd_info_set_resampler_fout(struct axd_cmd *cmd, unsigned int pipe, + unsigned int fout); +unsigned int axd_cmd_input_get_buffer_occupancy(struct axd_cmd *cmd, + unsigned int pipe); +void axd_cmd_input_set_buffer_occupancy(struct axd_cmd *cmd, unsigned int pipe, + unsigned int bo); + +/* Channel setup and access API */ +int axd_cmd_inpipe_start(struct axd_cmd *cmd, unsigned int pipe); +void axd_cmd_inpipe_stop(struct axd_cmd *cmd, unsigned int pipe); +void axd_cmd_inpipe_reset(struct 
axd_cmd *cmd, unsigned int pipe); +int axd_cmd_inpipe_active(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_outpipe_start(struct axd_cmd *cmd, unsigned int pipe); +void axd_cmd_outpipe_stop(struct axd_cmd *cmd, unsigned int pipe); +void axd_cmd_outpipe_reset(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_send_buffer(struct axd_cmd *cmd, unsigned int pipe, + const char __user *buf, unsigned int size); +void axd_cmd_send_buffer_abort(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_recv_buffer(struct axd_cmd *cmd, unsigned int pipe, + char __user *buf, unsigned int size); +void axd_cmd_recv_buffer_abort(struct axd_cmd *cmd, unsigned int pipe); +int axd_cmd_install_irq(struct axd_cmd *cmd, unsigned int irqnum); +void axd_cmd_free_irq(struct axd_cmd *cmd, unsigned int irqnum); +int axd_cmd_reset_pipe(struct axd_cmd *cmd, unsigned int pipe); + +/* generic helper function required in several places */ +char *axd_io_2_phys(const char *buf); +char *axd_phys_2_io(const char *buf); + +/* Register write buffer */ +int axd_flush_reg_buf(struct axd_cmd *cmd); + +#endif /* AXD_CMDS_H_ */ diff --git a/sound/soc/img/axd/axd_cmds_pipes.c b/sound/soc/img/axd/axd_cmds_pipes.c new file mode 100644 index 000000000000..db355b531f76 --- /dev/null +++ b/sound/soc/img/axd/axd_cmds_pipes.c @@ -0,0 +1,1387 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD Commands API - Pipes/Buffers Accessing functions. + */ +#include <linux/device.h> +#include <linux/err.h> +#include <linux/sched.h> +#include <linux/signal.h> +#include <linux/uaccess.h> +#include <linux/wait.h> + +#include "axd_api.h" +#include "axd_cmds.h" +#include "axd_cmds_internal.h" +#include "axd_hdr.h" +#include "axd_module.h" +#include "axd_platform.h" + +/* + * axd_pipe->eos_flg for output pipes is overloaded to mean two things: + * + * - EOS_REACHED: indicates that firmware has processed all input buffers + * including EOS but userland hasn't read them all yet. + * + * - EOF_REACHED: indicates that firmware sent EOS back to us AND userland read + * all the data till EOS. + */ +#define EOS_REACHED 1 +#define EOF_REACHED 2 + +/* + * axd_pipe->enabled_flg for output pipes is overloaded to mean two things: + * + * - PIPE_STARTED: indicates that pipe was opened but no buffers were passed. + * When stopping the pipes, we know that we don't need to discard anything if + * the discard_flg is set in cmd struct. Which allows us to terminate easily + * and quickly. + * + * - PIPE_RUNNING: indicates that pipe has processed some buffers, so we should + * discard if user terminates early (and discard_flg is set in cmd struct). 
+ */ +#define PIPE_STARTED 1 +#define PIPE_RUNNING 2 + +#ifdef AXD_DEBUG_DIAG +static unsigned int inSentCount[AXD_MAX_PIPES]; +static unsigned int inRecvCount[AXD_MAX_PIPES]; +static unsigned int outSentCount[AXD_MAX_PIPES]; +static unsigned int outRecvCount[AXD_MAX_PIPES]; +static unsigned int primeupCount[AXD_MAX_PIPES]; +static unsigned int read_size[AXD_MAX_PIPES]; +static unsigned int write_size[AXD_MAX_PIPES]; +static unsigned int recv_size[AXD_MAX_PIPES]; +#define debugdiag printk +#else +#define debugdiag(format, ...) +#endif + +static void axd_cmd_inpipe_clear(struct axd_cmd *cmd, unsigned int pipe); +static void axd_cmd_outpipe_clear(struct axd_cmd *cmd, unsigned int pipe); +static void axd_cmd_send_eos(struct axd_pipe *axd_pipe); +static void axd_output_prime_up(struct axd_pipe *axd_pipe); +static void axd_cmd_output_eos_reached(struct axd_cmd *cmd, unsigned int pipe); + +/* + * Send/Clear data{in, out} kicks. + * + * NOTE: + * Must acquire axd_platform_lock() before accessing kick and interrupt status + * registers as the AXD firmware might be accessing them at the same time. + */ +static inline void axd_datain_kick(struct axd_pipe *axd_pipe) +{ + unsigned long flags; + struct axd_memory_map __iomem *message = axd_pipe->cmd->message; + unsigned int pipe = axd_pipe->id; + unsigned int temp; + +#ifdef AXD_DEBUG_DIAG + inSentCount[pipe]++; +#endif + pr_debug("----> Send datain kick\n"); + flags = axd_platform_lock(); + temp = ioread32(&message->kick) | + AXD_ANY_KICK_BIT | AXD_KICK_DATA_IN_BIT; + iowrite32(temp, &message->kick); + temp = ioread32(&message->in_kick_count[pipe]) + 1; + iowrite32(temp, &message->in_kick_count[pipe]); + axd_platform_unlock(flags); + axd_platform_kick(); +} + +static inline void axd_dataout_kick(struct axd_pipe *axd_pipe) +{ + unsigned long flags; + struct axd_memory_map __iomem *message = axd_pipe->cmd->message; + unsigned int pipe = axd_pipe->id; + unsigned int temp; + +#ifdef AXD_DEBUG_DIAG + outSentCount[pipe]++; +#endif + pr_debug("----> Send dataout kick\n"); + flags = axd_platform_lock(); + temp = ioread32(&message->kick) | + AXD_ANY_KICK_BIT | AXD_KICK_DATA_OUT_BIT; + iowrite32(temp, &message->kick); + temp = ioread32(&message->out_kick_count[pipe]) + 1; + iowrite32(temp, &message->out_kick_count[pipe]); + axd_platform_unlock(flags); + axd_platform_kick(); +} + +/* Assumes axd_platform_lock() is already acquired before calling this */ +static inline int axd_datain_status_clear(struct axd_pipe *axd_pipe) +{ + struct axd_memory_map __iomem *message = axd_pipe->cmd->message; + unsigned int pipe = axd_pipe->id; + unsigned int intcount = ioread32(&message->in_int_count[pipe]); + + pr_debug("Clearing in_int_count[%u] = %u\n", pipe, intcount); + if (intcount == 0) + return -1; + atomic_add(intcount, &axd_pipe->intcount); + iowrite32(0, &message->in_int_count[pipe]); + return 0; +} + +/* Assumes axd_platform_lock() is already acquired before calling this */ +static inline int axd_dataout_status_clear(struct axd_pipe *axd_pipe) +{ + struct axd_memory_map __iomem *message = axd_pipe->cmd->message; + unsigned int pipe = axd_pipe->id; + unsigned int intcount = ioread32(&message->out_int_count[pipe]); + + pr_debug("Clearing out_int_count[%u] = %u\n", pipe, intcount); + if (intcount == 0) + return -1; + atomic_add(intcount, &axd_pipe->intcount); + iowrite32(0, &message->out_int_count[pipe]); + return 0; +} + +/* IRQ Handler */ +static irqreturn_t axd_irq(int irq, void *data) +{ + struct axd_cmd *cmd = data; + unsigned int int_status; + unsigned long flags; 
+ int i, ret; + + /* + * int_status is ioremapped() which means it could page fault. When axd + * is running on the same core as the host, holding lock2 would disable + * exception handling in that core which means a page fault would stuff + * host thread executing the driver. We do a double read here to ensure + * that we stall until the memory access is done before lock2 is + * acquired, hence ensuring that any page fault is handled outside lock2 + * region. + */ + int_status = ioread32(&cmd->message->int_status); + int_status = ioread32(&cmd->message->int_status); + + axd_platform_irq_ack(); + flags = axd_platform_lock(); + int_status = ioread32(&cmd->message->int_status); + iowrite32(0, &cmd->message->int_status); + + if (!int_status) + goto out; + + pr_debug("<---- Received int_status = 0x%08X\n", int_status); + if (int_status & AXD_INT_KICK_DONE) + pr_debug("<---- Received kick done interrupt\n"); + if (int_status & AXD_INT_DATAIN) { + pr_debug("<---- Received datain interrupt\n"); + for (i = 0; i < AXD_MAX_PIPES; i++) { + struct axd_pipe *axd_pipe = &cmd->in_pipes[i]; + + if (axd_get_flag(&axd_pipe->enabled_flg)) { + ret = axd_datain_status_clear(axd_pipe); + if (!ret) + queue_work(cmd->in_workq, &axd_pipe->work); + } + } + } + if (int_status & AXD_INT_DATAOUT) { + pr_debug("<---- Received dataout interrupt\n"); + for (i = 0; i < AXD_MAX_PIPES; i++) { + struct axd_pipe *axd_pipe = &cmd->out_pipes[i]; + + if (axd_get_flag(&axd_pipe->enabled_flg)) { + ret = axd_dataout_status_clear(axd_pipe); + if (!ret && !axd_get_flag(&axd_pipe->eos_flg)) + queue_work(cmd->out_workq, &axd_pipe->work); + } + } + } + if (int_status & AXD_INT_CTRL) { + pr_debug("<---- Received ctrl interrupt\n"); + axd_set_flag(&cmd->response_flg, 1); + wake_up(&cmd->wait); + } + if (int_status & AXD_INT_ERROR) { + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + int error = ioread32(&cmd->message->error); + + pr_debug("<---- Received error interrupt\n"); + switch (error) { + default: + case 0: + break; + case 1: + dev_err(axd->dev, "Fatal error received...\n"); + axd_schedule_reset(cmd); + break; + case 2: + dev_warn(axd->dev, "Failed to set last configuration command\n"); + break; + } + + iowrite32(0, &cmd->message->error); + } +out: + /* + * ensure all writes to uncached shared memory are visible to AXD + * before releasing axd_platform_lock() + */ + wmb(); + axd_platform_unlock(flags); + return IRQ_HANDLED; +} + +/* + * Initialize the drivers descriptors control structre. + * @desc_ctrl: the desc control structure to initialize. + * @buf_desc: pointer to the buffer descriptor to control. + * @num_desc: total number of descriptors inside @buf_desc. + */ +static void desc_init(struct axd_desc_ctrl *desc_ctrl, + struct axd_buffer_desc __iomem *buf_desc, unsigned int num_desc) +{ + /* Reset ctrl desc struct */ + desc_ctrl->rd_idx = 0; + desc_ctrl->wr_idx = 0; + sema_init(&desc_ctrl->rd_sem, num_desc); + sema_init(&desc_ctrl->wr_sem, 0); + spin_lock_init(&desc_ctrl->rd_lock); + spin_lock_init(&desc_ctrl->wr_lock); + desc_ctrl->buf_desc = buf_desc; + desc_ctrl->num_desc = num_desc; +} + +/* + * Prepare a descriptor to be sent to firmware. + * @desc_ctrl: the control structure of the descriptor. + * @buf: physical address of the buffer to enqueue. + * @size: size of the buffer. + * @last: non-zero of this is the last buffer ie: EOS. 
+ */ +static int desc_enqueue(struct axd_desc_ctrl *desc_ctrl, char *buf, + unsigned int size, int last, struct axd_pipe *chan) +{ + struct axd_buffer_desc __iomem *buf_desc = desc_ctrl->buf_desc; + unsigned int num_desc = desc_ctrl->num_desc; + unsigned int status_size = size | AXD_DESCRIPTOR_READY_BIT; + int ret; + + pr_debug("Enqueuing a descriptor, pipe[%u]... ", chan->id); + /* only proceed if we're not full */ + ret = down_trylock(&desc_ctrl->rd_sem); + if (ret) { + pr_debug("FAILED - full\n"); + return -1; + } + pr_debug("SUCCEEDED\n"); + + if (last) + status_size |= AXD_DESCRIPTOR_EOS_BIT; + + /* + * if we could lock the semaphore, then we're guaranteed that the + * current rd_idx is valid and ready to be used. So no need to verify + * that the status of the descriptor at rd_idx is valid. + */ + spin_lock(&desc_ctrl->rd_lock); + iowrite32(status_size, &buf_desc[desc_ctrl->rd_idx].status_size); + iowrite32((unsigned int)axd_io_2_phys(buf), + &buf_desc[desc_ctrl->rd_idx].data_ptr); + iowrite32(chan->current_ts_high, &buf_desc[desc_ctrl->rd_idx].pts_high); + iowrite32(chan->current_ts_low, &buf_desc[desc_ctrl->rd_idx].pts_low); + desc_ctrl->rd_idx++; + if (desc_ctrl->rd_idx >= num_desc) + desc_ctrl->rd_idx = 0; + spin_unlock(&desc_ctrl->rd_lock); + up(&desc_ctrl->wr_sem); /* we can dequeue 1 more item */ + return 0; +} + +/* + * Takes a buffer out of the descriptor queue. + * @desc_ctrl: the control structure of the descriptor. + * @size: sets it tot he size of the buffer returned if not NULL. + * @last: sets it to non-zero of this is the last buffer ie: EOS (if last is not + * NULL) + * + * On success, a valid pointer is received. NULL otherwise. + */ +static char *desc_dequeue(struct axd_desc_ctrl *desc_ctrl, unsigned int *size, + int *last, struct axd_pipe *chan, int is_out) +{ + struct axd_buffer_desc __iomem *buf_desc = desc_ctrl->buf_desc; + unsigned int num_desc = desc_ctrl->num_desc; + unsigned int status_size; + char *buf; + int ret; + + pr_debug("Dequeuing a descriptor, pipe[%u]... ", chan->id); + /* only proceed if we're not empty */ + ret = down_trylock(&desc_ctrl->wr_sem); + if (ret) { + pr_debug("FAILED - empty\n"); + return NULL; + } + spin_lock(&desc_ctrl->wr_lock); + status_size = ioread32(&buf_desc[desc_ctrl->wr_idx].status_size); + /* + * if ready and in_use bit are set, then the rest of the buffers are + * still owned by the AXD fw, we can't dequeue them then. exit. + */ + if ((status_size & AXD_DESCRIPTOR_INUSE_BIT) && + !(status_size & AXD_DESCRIPTOR_READY_BIT)) { + + pr_debug("SUCCEEDED\n"); + /* clear the in_use bit */ + iowrite32(status_size & ~AXD_DESCRIPTOR_INUSE_BIT, + &buf_desc[desc_ctrl->wr_idx].status_size); + + /* + * Return the pointer to the buffer and its size to caller. + * The caller might need to read it or return it to the pool. 
+ */ + buf = (char *)ioread32(&buf_desc[desc_ctrl->wr_idx].data_ptr); + if (size) + *size = status_size & AXD_DESCRIPTOR_SIZE_MASK; + if (last) + *last = status_size & AXD_DESCRIPTOR_EOS_BIT; + + if (is_out) { + /* update any timestamps if is use */ + chan->current_ts_high = + ioread32(&buf_desc[desc_ctrl->wr_idx].pts_high); + chan->current_ts_low = + ioread32(&buf_desc[desc_ctrl->wr_idx].pts_low); + } + + desc_ctrl->wr_idx++; + if (desc_ctrl->wr_idx >= num_desc) + desc_ctrl->wr_idx = 0; + + spin_unlock(&desc_ctrl->wr_lock); + up(&desc_ctrl->rd_sem); /* we can enqueue 1 more item */ + return axd_phys_2_io(buf); + } + pr_debug("FAILED - AXD holds the rest of the descriptors\n"); + /* + * failed due to busy buffer, return writer locks + * as we haven't dequeued + */ + spin_unlock(&desc_ctrl->wr_lock); + up(&desc_ctrl->wr_sem); + return NULL; +} + +/* + * This is the function executed by the workqueue to process return input + * pipes descriptors. + * Each pipe will have its own version of this function executed when datain + * interrupt is received. + */ +static void in_desc_workq(struct work_struct *work) +{ + struct axd_pipe *axd_pipe = container_of(work, struct axd_pipe, work); + struct axd_bufferq *desc_bufferq = &axd_pipe->desc_bufferq; + struct axd_desc_ctrl *desc_ctrl = &axd_pipe->desc_ctrl; + struct axd_cmd *cmd = axd_pipe->cmd; + unsigned int pipe = axd_pipe->id; + char *ret_buf; + int ret, last; + + pr_debug("*** Processing datain[%u] buffer ***\n", pipe); + do { /* we should have at least 1 desc to process */ + ret_buf = desc_dequeue(desc_ctrl, NULL, &last, axd_pipe, 0); + if (!ret_buf) + /* + * This could happen if an interrupt occurs while this + * work is already running, causing us to run twice in a + * row unnecessarily. Not harmful, so just return. + */ + return; +#ifdef AXD_DEBUG_DIAG + inRecvCount[pipe]++; +#endif + ret = axd_bufferq_put(desc_bufferq, ret_buf, -1); + if (ret) + return; + if (last) { + pr_debug("Received input[%u] EOS\n", pipe); + debugdiag("inSentCount[%u]= %u, inRecvCount[%u]= %u, write_size[%u]= %u\n", + pipe, inSentCount[pipe], + pipe, inRecvCount[pipe], + pipe, write_size[pipe]); + axd_cmd_inpipe_clear(cmd, pipe); + } + } while (!atomic_dec_and_test(&axd_pipe->intcount)); + + /* Do we need to send EOS? */ + if (axd_get_flag(&axd_pipe->eos_flg)) + axd_cmd_send_eos(axd_pipe); +} + +/* + * This is the function executed by the workqueue to process return output + * pipes descriptors. + * Each pipe will have its own version of this function executed when dataout + * interrupt is received. + */ +static void out_desc_workq(struct work_struct *work) +{ + struct axd_pipe *axd_pipe = container_of(work, struct axd_pipe, work); + struct axd_bufferq *desc_bufferq = &axd_pipe->desc_bufferq; + struct axd_bufferq *user_bufferq = &axd_pipe->user_bufferq; + struct axd_desc_ctrl *desc_ctrl = &axd_pipe->desc_ctrl; + char *ret_buf; + unsigned int buf_size; + int ret, last; + + pr_debug("*** Processing dataout[%u] buffer ***\n", axd_pipe->id); + do { /* we should have at least 1 desc to process */ + ret_buf = desc_dequeue(desc_ctrl, + &buf_size, &last, axd_pipe, 1); + if (!ret_buf || axd_get_flag(&axd_pipe->eos_flg)) { + /* + * This could happen if an interrupt occurs while this + * work is already running, causing us to run twice in a + * row unnecessarily. Not harmful, so just return. 
+ * + * OR if we prime up the output bufferq a tad too much + * we could end up with extra buffers after eos is + * reached, in this case we shouldn't process these + * extra buffers and just return. + */ + return; + } +#ifdef AXD_DEBUG_DIAG + outRecvCount[axd_pipe->id]++; + recv_size[axd_pipe->id] += buf_size; +#endif + if (likely(!axd_get_flag(&axd_pipe->discard_flg))) { + if (last) { + pr_debug("Received output[%u] EOS\n", + axd_pipe->id); + + axd_set_flag(&axd_pipe->eos_flg, EOS_REACHED); + } + ret = axd_bufferq_put(user_bufferq, ret_buf, buf_size); + if (ret) + return; + } else { /* drop all buffers until EOS is reached */ + if (last) { + pr_debug("Received output[%u] EOS - discard\n", + axd_pipe->id); + axd_set_flag(&axd_pipe->eos_flg, EOS_REACHED); + axd_cmd_output_eos_reached(axd_pipe->cmd, + axd_pipe->id); + return; + } + ret = axd_bufferq_put(desc_bufferq, ret_buf, -1); + if (ret) + return; + axd_output_prime_up(axd_pipe); + } + } while (!atomic_dec_and_test(&axd_pipe->intcount)); +} + +/* Send a stream flush command to firmware */ +static int axd_flush_input_stream(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct axd_memory_map __iomem *message = cmd->message; + struct mutex *cm_lock = &cmd->cm_lock; + int ret; + + mutex_lock(cm_lock); + if (axd_get_flag(&cmd->fw_stopped_flg)) { + mutex_unlock(cm_lock); + return -1; + } + axd_set_flag(&cmd->response_flg, 0); + iowrite32(AXD_CTRL_CMD_FLUSH, &message->control_command); + iowrite32(pipe, &message->control_data); + axd_ctrl_kick(message); + ret = wait_event_timeout(cmd->wait, + axd_get_flag(&cmd->response_flg) != 0, CMD_TIMEOUT); + mutex_unlock(cm_lock); + if (!ret) { + dev_warn(axd->dev, "[%d] failed to flush input stream\n", pipe); + return -1; + } + return 0; +} + +static int axd_flush_output_stream(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct axd_memory_map __iomem *message = cmd->message; + struct mutex *cm_lock = &cmd->cm_lock; + int ret; + + mutex_lock(cm_lock); + if (axd_get_flag(&cmd->fw_stopped_flg)) { + mutex_unlock(cm_lock); + return -1; + } + axd_set_flag(&cmd->response_flg, 0); + iowrite32(AXD_CTRL_CMD_FLUSH, &message->control_command); + iowrite32(pipe + AXD_MAX_PIPES, &message->control_data); + axd_ctrl_kick(message); + ret = wait_event_timeout(cmd->wait, + axd_get_flag(&cmd->response_flg) != 0, CMD_TIMEOUT); + mutex_unlock(cm_lock); + if (!ret) { + dev_warn(axd->dev, "[%d] failed to flush output stream\n", pipe); + return -1; + } + return 0; +} + +/* Send a reset buffer descriptor commands to firmware - input */ +static int axd_reset_input_bd(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct axd_memory_map __iomem *message = cmd->message; + struct mutex *cm_lock = &cmd->cm_lock; + int ret; + + mutex_lock(cm_lock); + if (axd_get_flag(&cmd->fw_stopped_flg)) { + mutex_unlock(cm_lock); + return -1; + } + axd_set_flag(&cmd->response_flg, 0); + iowrite32(AXD_CTRL_CMD_RESET_BD, &message->control_command); + iowrite32(pipe, &message->control_data); + axd_ctrl_kick(message); + ret = wait_event_timeout(cmd->wait, + axd_get_flag(&cmd->response_flg) != 0, CMD_TIMEOUT); + mutex_unlock(cm_lock); + if (!ret) { + dev_warn(axd->dev, "[%d] failed to reset input buffer descriptors\n", pipe); + return -1; + } + return 0; +} + +/* Send a reset buffer descriptor commands to firmware - output */ +static int axd_reset_output_bd(struct 
axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct axd_memory_map __iomem *message = cmd->message; + struct mutex *cm_lock = &cmd->cm_lock; + int ret; + + mutex_lock(cm_lock); + if (axd_get_flag(&cmd->fw_stopped_flg)) { + mutex_unlock(cm_lock); + return -1; + } + axd_set_flag(&cmd->response_flg, 0); + iowrite32(AXD_CTRL_CMD_RESET_BD, &message->control_command); + iowrite32(pipe + AXD_MAX_PIPES, &message->control_data); + axd_ctrl_kick(message); + ret = wait_event_timeout(cmd->wait, + axd_get_flag(&cmd->response_flg) != 0, CMD_TIMEOUT); + mutex_unlock(cm_lock); + if (!ret) { + dev_warn(axd->dev, "[%d] failed to reset output buffer descriptors\n", pipe); + return -1; + } + return 0; +} +/* Send a reset pipe command to the firmware */ +int axd_cmd_reset_pipe(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct axd_memory_map __iomem *message = cmd->message; + struct mutex *cm_lock = &cmd->cm_lock; + int ret; + + mutex_lock(cm_lock); + if (axd_get_flag(&cmd->fw_stopped_flg)) { + mutex_unlock(cm_lock); + return -1; + } + axd_set_flag(&cmd->response_flg, 0); + iowrite32(AXD_CTRL_CMD_RESET_PIPE, &message->control_command); + iowrite32(pipe, &message->control_data); + axd_ctrl_kick(message); + ret = wait_event_timeout(cmd->wait, + axd_get_flag(&cmd->response_flg) != 0, CMD_TIMEOUT); + mutex_unlock(cm_lock); + if (!ret) { + dev_warn(axd->dev, "failed to reset pipe%d", pipe); + return -1; + } + return 0; +} + +/* Sends a dummy buffer indicating EOS to a pipe */ +static void axd_cmd_send_eos(struct axd_pipe *axd_pipe) +{ + struct axd_dev *axd = container_of(axd_pipe->cmd, struct axd_dev, cmd); + struct axd_bufferq *desc_bufferq = &axd_pipe->desc_bufferq; + struct axd_desc_ctrl *desc_ctrl = &axd_pipe->desc_ctrl; + int ret; + char *p; + + mutex_lock(&axd_pipe->eos_mutex); + /* + * If eos is cleared, then a previous call successfully sent it, nothing + * to do then, so exit. + */ + if (!axd_get_flag(&axd_pipe->eos_flg)) + goto out; + + /* Only proceed if we know a buffer is available, don't block */ + if (axd_bufferq_is_empty(desc_bufferq)) + goto out; + p = axd_bufferq_take(desc_bufferq, NULL); + if (unlikely(IS_ERR(p))) + goto out; + ret = desc_enqueue(desc_ctrl, p, 0, 1, axd_pipe); + if (unlikely(ret)) { + /* shouldn't happen, print a warning */ + dev_warn(axd->dev, "[%d] Warning, failed to enqueue buffer\n", axd_pipe->id); + goto out; + } + /* enqueued successfully, inform the axd firmware */ + axd_datain_kick(axd_pipe); + pr_debug("Sent input[%u] EOS\n", axd_pipe->id); + /* + * clear if eos sent successfully + */ + axd_set_flag(&axd_pipe->eos_flg, 0); +out: + mutex_unlock(&axd_pipe->eos_mutex); +} + +/* + * Send as many buffers to the output pipe as possible. + * Keeping the firmware output buffer primed up prevents the firmware from + * getting deprived of buffers to fill. + */ +static void axd_output_prime_up(struct axd_pipe *axd_pipe) +{ + struct axd_bufferq *desc_bufferq = &axd_pipe->desc_bufferq; + struct axd_desc_ctrl *desc_ctrl = &axd_pipe->desc_ctrl; + unsigned int stride; + char *p; + int ret; + + /* + * Try not to send too much. Make sure to stop as soon as we receive + * EOS. 
+ */ + if (axd_get_flag(&axd_pipe->eos_flg)) + return; + + /* prime up the output buffer as much as we can */ + while (!axd_bufferq_is_empty(desc_bufferq)) { +#ifdef AXD_DEBUG_DIAG + primeupCount[axd_pipe->id]++; +#endif + p = axd_bufferq_take(desc_bufferq, &stride); + if (IS_ERR(p)) + break; + ret = desc_enqueue(desc_ctrl, p, stride, 0, axd_pipe); + if (ret) { + /* + * error, return the buffer to the pool + */ + axd_bufferq_put(desc_bufferq, p, -1); + break; + } + /* inform axd firmware */ + axd_dataout_kick(axd_pipe); + } +} + +/* Exported functions */ +/* Initialize the input pipe structure */ +void axd_cmd_inpipe_init(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->in_pipes[pipe]; + + axd_pipe->cmd = cmd; + axd_pipe->buf_desc = cmd->message->input[pipe].descriptors; + axd_pipe->id = pipe; + + axd_set_flag(&axd_pipe->enabled_flg, 0); + axd_set_flag(&axd_pipe->eos_flg, 0); + mutex_init(&axd_pipe->eos_mutex); + atomic_set(&axd_pipe->intcount, 0); + + /* default buffer size, could be changed through sysfs */ + axd_pipe->buf_size = 1024*2; +} + +/* Initialize the output pipe structure */ +void axd_cmd_outpipe_init(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + + axd_pipe->cmd = cmd; + axd_pipe->buf_desc = cmd->message->output[pipe].descriptors; + axd_pipe->id = pipe; + + axd_set_flag(&axd_pipe->discard_flg, 0); + axd_set_flag(&axd_pipe->enabled_flg, 0); + axd_set_flag(&axd_pipe->eos_flg, 0); + atomic_set(&axd_pipe->intcount, 0); + + /* default buffer size, could be changed through sysfs */ + axd_pipe->buf_size = 1024*16; +} + +/* Set up the IRQ handler and workqueues */ +int axd_cmd_install_irq(struct axd_cmd *cmd, unsigned int irqnum) +{ + int i; + + cmd->in_workq = create_workqueue("axd_din_q"); + if (!cmd->in_workq) + return -ENOMEM; + for (i = 0; i < AXD_MAX_PIPES; i++) + INIT_WORK(&cmd->in_pipes[i].work, in_desc_workq); + cmd->out_workq = create_workqueue("axd_dout_q"); + if (!cmd->out_workq) { + destroy_workqueue(cmd->in_workq); + return -ENOMEM; + } + for (i = 0; i < AXD_MAX_PIPES; i++) + INIT_WORK(&cmd->out_pipes[i].work, out_desc_workq); + iowrite32(AXD_INT_KICK_DONE, &cmd->message->int_mask); + return request_irq(irqnum, axd_irq, IRQF_NOBALANCING, "axd_irq", cmd); +} + +void axd_cmd_free_irq(struct axd_cmd *cmd, unsigned int irqnum) +{ + flush_workqueue(cmd->in_workq); + destroy_workqueue(cmd->in_workq); + flush_workqueue(cmd->out_workq); + destroy_workqueue(cmd->out_workq); + free_irq(irqnum, cmd); +} + +/* + * Calculate the starting address of input pipe's buffers based on the + * information provided in firmware's header + */ +static char *axd_inpipe_datain_address(struct axd_cmd *cmd, unsigned int pipe, + unsigned int *num_avail_buffers) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct axd_pipe *axd_pipe = &cmd->in_pipes[pipe]; + unsigned long base_address = axd_cmd_get_datain_address(cmd); + unsigned long total_size = axd_cmd_get_datain_size(cmd); + unsigned long num_desc, offset; + + /* + * Based on the defined axd_pipe->buf_size and number of input pipes + * supported by the firmware, we calculate the number of descriptors we + * need to use using this formula: + * + * axd_pipe->buf_size * num_desc = total_size / num_inputs + */ + num_desc = total_size / (cmd->num_inputs * axd_pipe->buf_size); + if (num_desc > AXD_INPUT_DESCRIPTORS) { + num_desc = AXD_INPUT_DESCRIPTORS; + } else if (num_desc == 0) { + dev_err(axd->dev, + "[%d] Error: input buffer element size is too 
large\n", pipe); + return NULL; + } + offset = (total_size / cmd->num_inputs) * pipe; + if (num_avail_buffers) + *num_avail_buffers = num_desc; + + return (char *)(base_address + offset); +} + +static int axd_cmd_inpipe_buffers_init(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->in_pipes[pipe]; + struct axd_desc_ctrl *desc_ctrl = &axd_pipe->desc_ctrl; + struct axd_buffer_desc __iomem *in_buf_desc = axd_pipe->buf_desc; + unsigned int num_avail_buffers; + char bufname[16]; + int ret; + + char *buf_address = axd_inpipe_datain_address(cmd, pipe, + &num_avail_buffers); + if (!buf_address) + return -EIO; + + /* initialize descriptors & control semaphores/locks */ + desc_init(desc_ctrl, in_buf_desc, AXD_INPUT_DESCRIPTORS); + + /* initialize buffers */ + sprintf(bufname, "in_bufferq[%u]", pipe); + ret = axd_bufferq_init(&axd_pipe->desc_bufferq, bufname, buf_address, + num_avail_buffers, axd_pipe->buf_size, cmd->nonblock); + return ret; +} + +/* prepare inpipe for processing data */ +static int axd_cmd_inpipe_prepare(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->in_pipes[pipe]; + int ret; + + ret = axd_cmd_inpipe_buffers_init(cmd, pipe); + if (ret) + return ret; + + atomic_set(&axd_pipe->intcount, 0); + axd_set_flag(&axd_pipe->enabled_flg, PIPE_STARTED); + if (axd_reset_input_bd(cmd, pipe)) + goto out; + if (axd_flush_input_stream(cmd, pipe)) + goto out; + if (axd_cmd_input_set_enabled(cmd, pipe, 1)) + goto out; + + /* Set PTS values for streams received without sync data */ + axd_pipe->current_ts_high = -1; + axd_pipe->current_ts_low = -1; + + return 0; +out: + axd_set_flag(&axd_pipe->enabled_flg, 0); + return -EIO; +} + +/* Start processing data on input pipe @pipe */ +int axd_cmd_inpipe_start(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->in_pipes[pipe]; + int ret; + + /* + * If enabled is locked, it means that the firmware is still busy + * processing buffers until EOS is reached. Tell to try again shortly. + */ + if (axd_get_flag(&axd_pipe->enabled_flg)) + return -EAGAIN; + + pr_debug("Starting input[%u]\n", pipe); + ret = axd_cmd_inpipe_prepare(cmd, pipe); + if (ret) + return ret; + axd_pipe->tsk = current; +#ifdef AXD_DEBUG_DIAG + inSentCount[pipe] = 0; + inRecvCount[pipe] = 0; + write_size[pipe] = 0; +#endif + return 0; +} + +/* Stop processing data on input pipe @pipe */ +void axd_cmd_inpipe_stop(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->in_pipes[pipe]; + + pr_debug("Stopping input[%u]\n", pipe); + /* + * If we haven't sent any data to the firmware, then clear ourselves + * immediately without having to send EOS which could never return. + */ + if (axd_get_flag(&axd_pipe->discard_flg)) { + /* + * Setting eos indicates that an eos buffer need to be sent. In + * some cases (ie: error occurs in the application), the buffer + * queue would be full and eos would fail to send. When an + * interrupt is received then and a buffer becomes free, we + * send eos buffer if the eos flag is set. 
+ */ + axd_set_flag(&axd_pipe->eos_flg, EOS_REACHED); + axd_cmd_send_eos(axd_pipe); + } else { + axd_cmd_inpipe_clear(cmd, pipe); + } + axd_pipe->tsk = NULL; +} + +/* clears input pipe so that it can be prepared to start again */ +static void axd_cmd_inpipe_clear(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->in_pipes[pipe]; + + /* disable input and clear buffers */ + axd_cmd_input_set_enabled(cmd, pipe, 0); + axd_bufferq_clear(&axd_pipe->desc_bufferq); + /* + * NOTE: disabling the enabled flag must be done at the end to make sure + * that the input device can't be opened again before everything else is + * cleared up properly. There was a race where setting enabled to 0 + * before clearing bufferq caused a crash as the device could be opened + * after the flag is disabled but before the bufferq is cleared so the + * bufferq would be setup then cleared again causing wrong memory access + * later when reading. + */ + axd_set_flag(&axd_pipe->enabled_flg, 0); + axd_set_flag(&axd_pipe->discard_flg, 0); +} + +/* Reset input pipe to starting state - for error recovery */ +void axd_cmd_inpipe_reset(struct axd_cmd *cmd, unsigned int pipe) +{ + axd_cmd_inpipe_clear(cmd, pipe); +} + +/* Is the input pipe active? */ +int axd_cmd_inpipe_active(struct axd_cmd *cmd, unsigned int pipe) +{ + int state = axd_get_flag(&cmd->in_pipes[pipe].enabled_flg); + return state == PIPE_STARTED || state == PIPE_RUNNING; +} + +/* + * Calculate the starting address of output pipe's buffers based on the + * information provided in firmware's header + */ +static char *axd_outpipe_dataout_address(struct axd_cmd *cmd, unsigned int pipe, + unsigned int *num_avail_buffers) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + unsigned long base_address = axd_cmd_get_dataout_address(cmd); + unsigned long total_size = axd_cmd_get_dataout_size(cmd); + unsigned long num_desc, offset; + + /* + * Based on the defined axd_pipe->buf_size and number of output pipes + * supported by the firmware, we calculate the number of descriptors we + * need to use using this formula: + * + * axd_pipe->buf_size * num_desc = total_size / num_outputs + */ + num_desc = total_size / (cmd->num_outputs * axd_pipe->buf_size); + if (num_desc > AXD_OUTPUT_DESCRIPTORS) { + num_desc = AXD_OUTPUT_DESCRIPTORS; + } else if (num_desc == 0) { + dev_err(axd->dev, "[%d] Error: output buffer element size is too large\n", pipe); + return NULL; + } + offset = (total_size / cmd->num_outputs) * pipe; + if (num_avail_buffers) + *num_avail_buffers = num_desc; + + return (char *)(base_address + offset); +} + +static int axd_cmd_outpipe_buffers_init(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + struct axd_desc_ctrl *desc_ctrl = &axd_pipe->desc_ctrl; + struct axd_buffer_desc __iomem *out_buf_desc = axd_pipe->buf_desc; + unsigned int num_avail_buffers; + char bufname[16]; + int ret; + + char *buf_address = axd_outpipe_dataout_address(cmd, pipe, + &num_avail_buffers); + if (!buf_address) + return -EIO; + + /* initialise descriptors & control semaphores/locks */ + desc_init(desc_ctrl, out_buf_desc, AXD_OUTPUT_DESCRIPTORS); + /* intialise buffers */ + sprintf(bufname, "out_bufferq[%u]", pipe); + ret = axd_bufferq_init(&axd_pipe->desc_bufferq, + bufname, buf_address, + num_avail_buffers, axd_pipe->buf_size, + cmd->nonblock); + if (ret) + return ret; + sprintf(bufname, "user_bufferq[%u]", pipe); + ret = 
axd_bufferq_init_empty(&axd_pipe->user_bufferq, + bufname, num_avail_buffers, + axd_pipe->buf_size, cmd->nonblock); + if (ret) { + axd_bufferq_clear(&axd_pipe->desc_bufferq); + return ret; + } + + return ret; +} + +/* prepare outpipe for processing data */ +static int axd_cmd_outpipe_prepare(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + int ret; + + ret = axd_cmd_outpipe_buffers_init(cmd, pipe); + if (ret) + return ret; + + atomic_set(&axd_pipe->intcount, 0); + axd_set_flag(&axd_pipe->enabled_flg, PIPE_STARTED); + axd_set_flag(&axd_pipe->eos_flg, 0); + if (axd_reset_output_bd(cmd, pipe)) + goto out; + if (axd_cmd_output_set_enabled(cmd, pipe, 1)) + goto out; + return 0; +out: + axd_set_flag(&axd_pipe->enabled_flg, 0); + axd_set_flag(&axd_pipe->eos_flg, EOF_REACHED); + return -EIO; +} + +/* Start processing data on output pipe @pipe */ +int axd_cmd_outpipe_start(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + int ret; + + pr_debug("Starting output[%u]\n", pipe); + /* + * Fully initialise only if enabled is unlocked. + * If enabled is locked, it means someone opened the device then + * closed it before reaching EOS. In this case, re-enable output to + * continue reading from where we stopped. + */ + if (!axd_get_flag(&axd_pipe->enabled_flg)) { + ret = axd_cmd_outpipe_prepare(cmd, pipe); + if (ret) + return ret; + } else if (axd_get_flag(&axd_pipe->discard_flg)) { + /* + * we're still discarding some data from a previous call to + * stop, tell the user to try again shortly + */ + return -EAGAIN; + } + axd_pipe->tsk = current; +#ifdef AXD_DEBUG_DIAG + outSentCount[pipe] = 0; + outRecvCount[pipe] = 0; + primeupCount[pipe] = 0; + read_size[pipe] = 0; + recv_size[pipe] = 0; +#endif + return 0; +} + +/* Stop processing data on output pipe @pipe */ +void axd_cmd_outpipe_stop(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + struct axd_bufferq *desc_bufferq = &axd_pipe->desc_bufferq; + struct axd_bufferq *user_bufferq = &axd_pipe->user_bufferq; + char *ret_buf; + + pr_debug("Stopping output[%u]\n", pipe); + axd_pipe->tsk = NULL; + if (axd_get_flag(&cmd->discard_flg) && + axd_get_flag(&axd_pipe->enabled_flg)) { + /* Is there anything to discard? */ + if (axd_get_flag(&axd_pipe->enabled_flg) == PIPE_STARTED) { + /* + * nothing to clear up too, just disable the input so + * we'd initialise ourselves properly again on next + * start. + */ + axd_set_flag(&axd_pipe->enabled_flg, 0); + return; + } + axd_set_flag(&axd_pipe->discard_flg, 1); + + if (axd_pipe->cur_buf) + axd_bufferq_put(desc_bufferq, axd_pipe->cur_buf, -1); + + while (!axd_bufferq_is_empty(user_bufferq)) { + ret_buf = axd_bufferq_take(user_bufferq, NULL); + axd_bufferq_put(desc_bufferq, ret_buf, -1); + } + + if (axd_get_flag(&axd_pipe->eos_flg) == EOS_REACHED) { + axd_cmd_output_eos_reached(cmd, pipe); + return; + } + + axd_output_prime_up(axd_pipe); + + } + +} + +static void axd_cmd_outpipe_clear(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + /* + * unlock enabled to fully intialise next time we're + * opened. 
+ */ + axd_flush_output_stream(cmd, pipe); + axd_bufferq_clear(&axd_pipe->desc_bufferq); + axd_bufferq_clear(&axd_pipe->user_bufferq); + axd_cmd_output_set_enabled(cmd, pipe, 0); + axd_set_flag(&axd_pipe->enabled_flg, 0); + axd_set_flag(&axd_pipe->discard_flg, 0); + axd_pipe->cur_buf = NULL; + axd_pipe->cur_buf_size = 0; + axd_pipe->cur_buf_offset = 0; +} + +/* Reset output pipe to starting state - for error recovery */ +void axd_cmd_outpipe_reset(struct axd_cmd *cmd, unsigned int pipe) +{ + axd_cmd_outpipe_clear(cmd, pipe); +} + +/* + * Send a buffer to input @pipe + * + * Returns number of bytes sent, or negative error number. + */ +int axd_cmd_send_buffer(struct axd_cmd *cmd, unsigned int pipe, + const char __user *buf, unsigned int size) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct axd_pipe *axd_pipe = &cmd->in_pipes[pipe]; + struct axd_bufferq *desc_bufferq = &axd_pipe->desc_bufferq; + struct axd_desc_ctrl *desc_ctrl = &axd_pipe->desc_ctrl; + unsigned int stride; + int ret = 0; + int written = 0; + int diff; + unsigned int cp_size; + char *p; + + /* + * Before if we had no data buffer sent to the firmware EOS flag was + * sent perfect through, but now we shouldn't send EOS flag if + * no data was sent to the firmware. We use the discard variable to + * flag if we need to send the EOS at stop or not. + * see axd_cmd_inpipe_stop() + * NOTE: discard_flg for input pipe is different than discard_flg for + * output pipe. + */ + if (unlikely(!axd_get_flag(&axd_pipe->discard_flg))) + axd_set_flag(&axd_pipe->discard_flg, 1); + + pr_debug("Writing %u bytes [%u]\n", size, pipe); + while (written < size) { + /* + * There's a one to one mapping between the desc buffers and the + * descriptors owned by the driver. If the descriptors are + * empty, we'll sleep in here and when we wake up/proceed we are + * guaranteed that we will enqueue a descriptor successfully + */ + p = axd_bufferq_take(desc_bufferq, &stride); + if (IS_ERR(p)) { + ret = PTR_ERR(p); + goto out; + } + diff = size - written; + cp_size = diff < stride ? diff : stride; + ret = copy_from_user(p, buf, cp_size); + if (ret) { + ret = -EFAULT; + goto out; + } + ret = desc_enqueue(desc_ctrl, p, cp_size, 0, axd_pipe); + if (unlikely(ret)) { + /* shouldn't happen, print a warning */ + dev_warn(axd->dev, "[%d] Warning, failed to enqueue buffer\n", pipe); + goto out; + } + /* enqueued successfully, inform the axd firmware */ + axd_datain_kick(axd_pipe); + written += cp_size; + buf += cp_size; + + /* + * A time-based stream frame with PTS might have to be split + * over multiple buffers. We should only provide the PTS for + * the first buffer. The rest should have the PTS invalidated. 
+ */ + axd->cmd.in_pipes[pipe].current_ts_high = -1; + axd->cmd.in_pipes[pipe].current_ts_low = -1; + } +out: + if (written) { +#ifdef AXD_DEBUG_DIAG + write_size[pipe] += written; +#endif + return written; + } + return ret; +} + +void axd_cmd_send_buffer_abort(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->in_pipes[pipe]; + struct axd_bufferq *desc_bufferq = &axd_pipe->desc_bufferq; + + if (axd_get_flag(&axd_pipe->enabled_flg)) + axd_bufferq_abort_take(desc_bufferq); +} + +static void axd_cmd_output_eos_reached(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + + /* display diag info only if chan is enabled */ + if (axd_get_flag(&axd_pipe->enabled_flg)) { + pr_debug("Output[%u] EOS reached\n", pipe); + debugdiag("outSentCount[%u]= %u, outRecvCount[%u]= %u, read_size[%u]= %u\n", + pipe, outSentCount[pipe], pipe, outRecvCount[pipe], + pipe, read_size[pipe]); + debugdiag("primeupCount[%u]= %u, recv_size[%u]= %u\n", + pipe, primeupCount[pipe], pipe, recv_size[pipe]); + + /* All buffers are read, clear them. */ + axd_cmd_outpipe_clear(cmd, pipe); + } +} + +/* + * Receive a buffer from output @pipe + * + * The logic in here is that buffers we can copy from are in user_bufferq which + * is filled when we get an interrupt that the axd firmware filled them up. + * desc_bufferq holds the buffers are yet to be serviced by the firmware. + * + * Returns number of bytes received, or negative error number. + */ +int axd_cmd_recv_buffer(struct axd_cmd *cmd, unsigned int pipe, + char __user *buf, unsigned int size) +{ + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + struct axd_bufferq *desc_bufferq = &axd_pipe->desc_bufferq; + struct axd_bufferq *user_bufferq = &axd_pipe->user_bufferq; + int ret = 0; + int read = 0; + int diff; + unsigned int cp_size; + unsigned int cur_buf_size, cur_buf_offset; + char *cur_buf = axd_pipe->cur_buf; + + if (axd_get_flag(&axd_pipe->eos_flg) == EOF_REACHED) { + axd_cmd_output_eos_reached(cmd, pipe); + return 0; + } + + axd_output_prime_up(axd_pipe); + + pr_debug("Reading %u bytes [%u]\n", size, pipe); + while (read < size) { + cur_buf_size = axd_pipe->cur_buf_size; + cur_buf_offset = axd_pipe->cur_buf_offset; + if (cur_buf_size) { + /* + * Current buffer points to the current user buffer + * we're holding and reading from. We keep hold into it + * until it is completely read. The logic is done in + * this way because the likelihood of this buffer to be + * larger than the read count is quite high if not the + * normal case everytime a read is issued. + */ + diff = size - read; + cp_size = diff < cur_buf_size ? diff : cur_buf_size; + ret = copy_to_user(buf, cur_buf+cur_buf_offset, + cp_size); + if (ret) + goto out; + read += cp_size; + buf += cp_size; + axd_pipe->cur_buf_offset += cp_size; + axd_pipe->cur_buf_size -= cp_size; +#ifdef AXD_DEBUG_DIAG + read_size[pipe] += cp_size; +#endif + } else { + /* + * Current user buffer is completely read, return it to + * the desc_bufferq and take another user buffer. + * Note that we will sleep on either putting or taking + * from the buffer if we're full/empty. ISR should + * fill our user buffer once more are available. + */ + if (cur_buf) { + ret = axd_bufferq_put(desc_bufferq, cur_buf, -1); + if (ret) + goto out; + if (axd_bufferq_is_empty(user_bufferq) && + axd_get_flag(&axd_pipe->eos_flg)) { + /* send EOF on next read */ + axd_set_flag(&axd_pipe->eos_flg, + EOF_REACHED); + /* + * Normally, we only need to clear up + * if read is 0. 
But, if the application + * is keeping track of where the stream + * ends, it might try to close the + * output pipe before the EOF is read. + * In this case, then the driver would + * lock up. Instead, we always clear up + * here to avoid this. + */ + axd_cmd_output_eos_reached(cmd, pipe); + goto out; + } + axd_output_prime_up(axd_pipe); + } + cur_buf = axd_bufferq_take(user_bufferq, &cp_size); + if (IS_ERR(cur_buf)) { + axd_pipe->cur_buf = NULL; + axd_pipe->cur_buf_offset = 0; + axd_pipe->cur_buf_size = 0; + /* + * if EOS is set and we get an error from + * bufferq_take then it is because we received a + * zero byte buffer with a EOS flag set (From + * the firmware), in this instance we just + * return EOF instead of the error code + * (ERESTARTSYS) + */ + if (axd_get_flag(&axd_pipe->eos_flg)) { + axd_set_flag(&axd_pipe->eos_flg, + EOF_REACHED); + ret = 0; + axd_cmd_output_eos_reached(cmd, pipe); + } else { + ret = PTR_ERR(cur_buf); + } + goto out; + } + axd_pipe->cur_buf_offset = 0; + axd_pipe->cur_buf_size = cp_size; + axd_pipe->cur_buf = cur_buf; + } + } +out: + if (read) { + axd_set_flag(&axd_pipe->enabled_flg, PIPE_RUNNING); + return read; + } + return ret; +} + +void axd_cmd_recv_buffer_abort(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_pipe *axd_pipe = &cmd->out_pipes[pipe]; + struct axd_bufferq *desc_bufferq = &axd_pipe->desc_bufferq; + struct axd_bufferq *user_bufferq = &axd_pipe->user_bufferq; + + if (axd_get_flag(&axd_pipe->enabled_flg)) { + axd_bufferq_abort_put(desc_bufferq); + axd_bufferq_abort_take(user_bufferq); + } +}
On Mon, Aug 24, 2015 at 01:39:15PM +0100, Qais Yousef wrote:
+int axd_cmd_set_pc(struct axd_cmd *cmd, unsigned int thread, unsigned long pc) +{
- if (thread >= THREAD_COUNT)
return -1;
Return sensible error codes please.
+unsigned long axd_cmd_get_datain_address(struct axd_cmd *cmd) +{
- struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd);
- return (unsigned long) axd->buf_base_m;
+}
What's going on with these casts?
+static inline void axd_set_flag(unsigned int *flag, unsigned int value) +{
- *flag = value;
- smp_wmb(); /* guarantee smp ordering */
+}
+static inline unsigned int axd_get_flag(unsigned int *flag) +{
- smp_rmb(); /* guarantee smp ordering */
- return *flag;
+}
Please use a normal locking construct rather than hand rolling something, or alternatively introduce new generic operations. The fact that you're hand rolling these things that have no driver specific content is really worrying in terms of their safety.
+/*
- axd_pipe->enabled_flg for output pipes is overloaded to mean two things:
- PIPE_STARTED: indicates that pipe was opened but no buffers were passed.
- When stopping the pipes, we know that we don't need to discard anything if
- the discard_flg is set in cmd struct. Which allows us to terminate easily
- and quickly.
- PIPE_RUNNING: indicates that pipe has processed some buffers, so we should
- discard if user terminates early (and discard_flg is set in cmd struct).
- */
+#define PIPE_STARTED 1 +#define PIPE_RUNNING 2
Why is the case with in place buffers not a simple zero iteration loop?
+#ifdef AXD_DEBUG_DIAG +static unsigned int inSentCount[AXD_MAX_PIPES]; +static unsigned int inRecvCount[AXD_MAX_PIPES]; +static unsigned int outSentCount[AXD_MAX_PIPES]; +static unsigned int outRecvCount[AXD_MAX_PIPES]; +static unsigned int primeupCount[AXD_MAX_PIPES]; +static unsigned int read_size[AXD_MAX_PIPES]; +static unsigned int write_size[AXD_MAX_PIPES]; +static unsigned int recv_size[AXD_MAX_PIPES];
No static globals and please follow the kernel coding style.
+static inline void axd_datain_kick(struct axd_pipe *axd_pipe) +{
- unsigned long flags;
- struct axd_memory_map __iomem *message = axd_pipe->cmd->message;
- unsigned int pipe = axd_pipe->id;
- unsigned int temp;
+#ifdef AXD_DEBUG_DIAG
- inSentCount[pipe]++;
+#endif
Define accessor macros for these and then define them to noops when not debugging rather than having #defines in the code.
+static irqreturn_t axd_irq(int irq, void *data) +{
- struct axd_cmd *cmd = data;
- unsigned int int_status;
- unsigned long flags;
- int i, ret;
- /*
* int_status is ioremapped() which means it could page fault. When axd
* is running on the same core as the host, holding lock2 would disable
* exception handling in that core which means a page fault would stuff
* host thread executing the driver. We do a double read here to ensure
* that we stall until the memory access is done before lock2 is
* acquired, hence ensuring that any page fault is handled outside lock2
* region.
- */
- int_status = ioread32(&cmd->message->int_status);
- int_status = ioread32(&cmd->message->int_status);
Eew.
- axd_platform_irq_ack();
When would this ever be called anywhere else? Just inline it (and it's better practice to only ack things we handle...).
- flags = axd_platform_lock();
- int_status = ioread32(&cmd->message->int_status);
- iowrite32(0, &cmd->message->int_status);
- if (!int_status)
goto out;
This should cause us to return IRQ_NONE.
- if (int_status & AXD_INT_ERROR) {
struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd);
int error = ioread32(&cmd->message->error);
pr_debug("<---- Received error interrupt\n");
switch (error) {
default:
case 0:
break;
We just ignore these?
case 2:
dev_warn(axd->dev, "Failed to set last configuration command\n");
break;
Does the configuration command notice?
- /*
* if we could lock the semaphore, then we're guaranteed that the
* current rd_idx is valid and ready to be used. So no need to verify
* that the status of the descriptor at rd_idx is valid.
*/
- spin_lock(&desc_ctrl->rd_lock);
It really feels like this locking is all complicated and fragile. I'm not entirely sure the optimisation is worth it - are we really sending compressed audio at such a high rate that it's worth having concurrency handling that's hard to think about?
+void axd_cmd_free_irq(struct axd_cmd *cmd, unsigned int irqnum) +{
- flush_workqueue(cmd->in_workq);
_sync()
- destroy_workqueue(cmd->in_workq);
- flush_workqueue(cmd->out_workq);
- destroy_workqueue(cmd->out_workq);
- free_irq(irqnum, cmd);
We're freeing the interrupts after we destroy the workqueue which means we could try to schedule new work after destruction.
- /*
* Based on the defined axd_pipe->buf_size and number of input pipes
* supported by the firmware, we calculate the number of descriptors we
* need to use using this formula:
*
* axd_pipe->buf_size * num_desc = total_size / num_inputs
*/
- num_desc = total_size / (cmd->num_inputs * axd_pipe->buf_size);
I'm not sure that was an especially tricky line of code to follow... am I missing something here?
I've stopped reviewing here mostly because it's the end of my day and this patch is 72K which is enormous for something that's not just lots of defines or whatever and actually needs reading in considerable detail given all the tricky concurrency stuff you're doing. Please split this code up into multiple patches for ease of review. For example all the queue management and allocation seems rather separate to the interrupt handling.
It also feels like there's room for pruning the code, perhaps sharing more of it between input and output paths and removing some layers of abstraction.
On 08/26/2015 08:16 PM, Mark Brown wrote:
On Mon, Aug 24, 2015 at 01:39:15PM +0100, Qais Yousef wrote:
+int axd_cmd_set_pc(struct axd_cmd *cmd, unsigned int thread, unsigned long pc) +{
- if (thread >= THREAD_COUNT)
return -1;
Return sensible error codes please.
OK.
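As a minimal sketch of what that could look like (assuming -EINVAL is the right code for an out-of-range thread index; the body is elided, not part of the patch):

int axd_cmd_set_pc(struct axd_cmd *cmd, unsigned int thread, unsigned long pc)
{
        if (thread >= THREAD_COUNT)
                return -EINVAL;

        /* ... program the thread's PC as the existing code does ... */
        return 0;
}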
+unsigned long axd_cmd_get_datain_address(struct axd_cmd *cmd) +{
- struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd);
- return (unsigned long) axd->buf_base_m;
+}
What's going on with these casts?
As with the other cases. buf_base_m is void __iomem * but we want to do some arithmetic to help AXD start up and understand where it needs to run. I agree they don't look nice and if I can avoid them I'd be happy to do so.
+static inline void axd_set_flag(unsigned int *flag, unsigned int value) +{
- *flag = value;
- smp_wmb(); /* guarantee smp ordering */
+}
+static inline unsigned int axd_get_flag(unsigned int *flag) +{
- smp_rmb(); /* guarantee smp ordering */
- return *flag;
+}
Please use a normal locking construct rather than hand rolling something, or alternatively introduce new generic operations. The fact that you're hand rolling these things that have no driver specific content is really worrying in terms of their safety.
I need to check atomic_ops.txt again but I think atomic_t is not always smp safe. I was definitely running on a version of the Meta architecture in the past where atomic_t wasn't always smp safe.
I'll check if the rules have changed or something new was introduced to deal with this.
+/*
- axd_pipe->enabled_flg for output pipes is overloaded to mean two things:
- PIPE_STARTED: indicates that pipe was opened but no buffers were passed.
- When stopping the pipes, we know that we don't need to discard anything if
- the discard_flg is set in cmd struct. Which allows us to terminate easily
- and quickly.
- PIPE_RUNNING: indicates that pipe has processed some buffers, so we should
- discard if user terminates early (and discard_flg is set in cmd struct).
- */
+#define PIPE_STARTED 1 +#define PIPE_RUNNING 2
Why is the case with in place buffers not a simple zero iteration loop?
This is important when AXD is not consuming the data through I2S but returning it to Linux. What we're trying to deal with here is that the firmware has processed some data and expects Linux to consume whatever it has sent back. We want to ensure that if the user suddenly stops consuming this data by closing the pipe, we drop anything we receive back from AXD; otherwise the workqueue would block indefinitely waiting for a user that has disappeared to consume it, causing a deadlock.
+#ifdef AXD_DEBUG_DIAG +static unsigned int inSentCount[AXD_MAX_PIPES]; +static unsigned int inRecvCount[AXD_MAX_PIPES]; +static unsigned int outSentCount[AXD_MAX_PIPES]; +static unsigned int outRecvCount[AXD_MAX_PIPES]; +static unsigned int primeupCount[AXD_MAX_PIPES]; +static unsigned int read_size[AXD_MAX_PIPES]; +static unsigned int write_size[AXD_MAX_PIPES]; +static unsigned int recv_size[AXD_MAX_PIPES];
No static globals and please follow the kernel coding style.
OK I'll fix.
+static inline void axd_datain_kick(struct axd_pipe *axd_pipe) +{
- unsigned long flags;
- struct axd_memory_map __iomem *message = axd_pipe->cmd->message;
- unsigned int pipe = axd_pipe->id;
- unsigned int temp;
+#ifdef AXD_DEBUG_DIAG
- inSentCount[pipe]++;
+#endif
Define accessor macros for these and then define them to noops when not debugging rather than having #defines in the code.
Yep sounds a better way to do it.
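For what it's worth, a minimal sketch of what such accessors could look like (the macro names are made up for illustration, not from the patch):

#ifdef AXD_DEBUG_DIAG
#define axd_diag_inc(counter, pipe)     ((counter)[pipe]++)
#define axd_diag_add(counter, pipe, n)  ((counter)[pipe] += (n))
#else
#define axd_diag_inc(counter, pipe)     do { } while (0)
#define axd_diag_add(counter, pipe, n)  do { } while (0)
#endif

/* call sites then lose the inline #ifdef clutter: */
axd_diag_inc(inSentCount, pipe);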
+static irqreturn_t axd_irq(int irq, void *data) +{
- struct axd_cmd *cmd = data;
- unsigned int int_status;
- unsigned long flags;
- int i, ret;
- /*
* int_status is ioremapped() which means it could page fault. When axd
* is running on the same core as the host, holding lock2 would disable
* exception handling in that core which means a page fault would stuff
* host thread executing the driver. We do a double read here to ensure
* that we stall until the memory access is done before lock2 is
* acquired, hence ensuring that any page fault is handled outside lock2
* region.
- */
- int_status = ioread32(&cmd->message->int_status);
- int_status = ioread32(&cmd->message->int_status);
Eew.
Luckily this is not a problem anymore. This must have slipped back in while preparing the patches for submission. I'll audit the code again to make sure this didn't happen somewhere else.
- axd_platform_irq_ack();
When would this ever be called anywhere else? Just inline it (and it's better practice to only ack things we handle...).
It wouldn't be called anywhere else, but its implementation could be platform specific, which is why it's abstracted. At the moment it does nothing now that we're using MIPS, but we shouldn't assume that this will always be the case. The main purpose of this function is to deassert the interrupt line if the way interrupts are wired for that platform requires it. In the past we were running on hardware where interrupts are sent through a special slave port and the interrupt had to be acked or deasserted.
- flags = axd_platform_lock();
- int_status = ioread32(&cmd->message->int_status);
- iowrite32(0, &cmd->message->int_status);
- if (!int_status)
goto out;
This should cause us to return IRQ_NONE.
I don't think it's necessary. It could happen that AXD sends a DATAIN interrupt and shortly afterwards a DATAOUT interrupt while the handler is already running, causing both interrupts to be handled in one go; the handler could then be called again only to find that there's nothing to do.
- if (int_status & AXD_INT_ERROR) {
struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd);
int error = ioread32(&cmd->message->error);
pr_debug("<---- Received error interrupt\n");
switch (error) {
default:
case 0:
break;
We just ignore these?
Case 0 doesn't indicate anything anymore. I can print a warning about an unexpected error code for the default case.
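As a sketch of what that could look like (the warning text is illustrative only; axd and error are as in the quoted code above):

switch (error) {
case 0:
        /* no longer indicates anything, nothing to do */
        break;
case 2:
        dev_warn(axd->dev, "Failed to set last configuration command\n");
        break;
default:
        dev_warn(axd->dev, "Unexpected error code from firmware: %d\n", error);
        break;
}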
case 2:
dev_warn(axd->dev, "Failed to set last configuration command\n");
break;
Does the configuration command notice?
Yes. When sending a configuration command we expect a response back that it was serviced (by setting response_flg in AXD_INT_CTRL); we time out if we don't get one and report an error to the caller.
This error code could mean other things as well so I might modify this message to be more descriptive.
- /*
* if we could lock the semaphore, then we're guaranteed that the
* current rd_idx is valid and ready to be used. So no need to verify
* that the status of the descriptor at rd_idx is valid.
*/
- spin_lock(&desc_ctrl->rd_lock);
It really feels like this locking is all complicated and fragile. I'm not entirely sure the optimisation is worth it - are we really sending compressed audio at such a high rate that it's worth having concurrency handling that's hard to think about?
This is similar to how the bufferq implementation works. What is the alternative to this? We do want this to be as fast as possible.
What is happening here is that the semaphore count is again controlling how many descriptors are available; if none are available it will cause the caller to block. If it succeeds and more than one descriptor is available, potentially more than one SMP user could reach the later point, so we hold the spinlock while modifying the shared buf_desc structure. The variable we're explicitly protecting is rd_idx.
Maybe my use of the semaphore count to keep track of how many descriptors are available and cause the caller to block is the confusing part? Would better comments help?
+void axd_cmd_free_irq(struct axd_cmd *cmd, unsigned int irqnum) +{
- flush_workqueue(cmd->in_workq);
_sync()
OK.
- destroy_workqueue(cmd->in_workq);
- flush_workqueue(cmd->out_workq);
- destroy_workqueue(cmd->out_workq);
- free_irq(irqnum, cmd);
We're freeing the interrupts after we destroy the workqueue which means we could try to schedule new work after destruction.
Right! I'll move it up.
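A sketch of the reordered teardown, assuming nothing else can queue work on these workqueues once the interrupt is released (destroy_workqueue() already drains pending work, so the explicit flushes can probably go too):

void axd_cmd_free_irq(struct axd_cmd *cmd, unsigned int irqnum)
{
        /* no new work can be scheduled once the interrupt is gone */
        free_irq(irqnum, cmd);

        /* destroy_workqueue() drains remaining work before tearing down */
        destroy_workqueue(cmd->in_workq);
        destroy_workqueue(cmd->out_workq);
}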
- /*
* Based on the defined axd_pipe->buf_size and number of input pipes
* supported by the firmware, we calculate the number of descriptors we
* need to use using this formula:
*
* axd_pipe->buf_size * num_desc = total_size / num_inputs
*/
- num_desc = total_size / (cmd->num_inputs * axd_pipe->buf_size);
I'm not sure that was an especially tricky line of code to follow... am I missing something here?
The driver receives a pointer to a contiguous buffer area that it needs to divide into buffers based on its size, the number of pipes in the system, and the desired buffer size.
We then calculate our buffer queue size or how many out of the available descriptors we need.
For example, if the total buffer area reserved for inputs is 10KiB and we have one input pipe and the desired buffer size is 1KiB, then we can use all 10 descriptors AXD provides. If we have two input pipes in the system, then each one will take 5KiB and we need 5 descriptors per pipe. It is equivalent to saying 'the size of input X's buffer queue is 5'.
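In terms of the line being discussed, that example works out like this (numbers are illustrative only):

/*
 * total_size = 10 KiB, buf_size = 1 KiB
 *   num_inputs = 1: num_desc = 10240 / (1 * 1024) = 10
 *   num_inputs = 2: num_desc = 10240 / (2 * 1024) = 5
 */
num_desc = total_size / (cmd->num_inputs * axd_pipe->buf_size);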
I've stopped reviewing here mostly because it's the end of my day and this patch is 72K which is enormous for something that's not just lots of defines or whatever and actually needs reading in considerable detail given all the tricky concurrency stuff you're doing. Please split this code up into multiple patches for ease of review. For example all the queue management and allocation seems rather separate to the interrupt handling.
Thanks a lot for your efforts so far. I'll try to split this into smaller chunks, though it really feels like it's all one entity; but 2K of code is quite a lot.
It also feels like there's room for pruning the code, perhaps sharing more of it between input and output paths and removing some layers of abstraction.
I'll look into that. If you have specific suggestions in mind I'd appreciate hearing them.
Many thanks, Qais
On Thu, Aug 27, 2015 at 04:40:09PM +0100, Qais Yousef wrote:
On 08/26/2015 08:16 PM, Mark Brown wrote:
On Mon, Aug 24, 2015 at 01:39:15PM +0100, Qais Yousef wrote:
+unsigned long axd_cmd_get_datain_address(struct axd_cmd *cmd) +{
- struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd);
- return (unsigned long) axd->buf_base_m;
+}
What's going on with these casts?
As with the other cases. buf_base_m is void __iomem * but we want to do some arithmetic to help AXD start up and understand where it needs to run. I agree they don't look nice and if I can avoid them I'd be happy to do so.
C supports pointer arithmetic...
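For instance, a sketch of how the cast could be avoided by keeping the __iomem annotation and letting callers add byte offsets directly (the changed return type is an assumption for illustration, and it relies on the kernel's void pointer arithmetic extension):

void __iomem *axd_cmd_get_datain_address(struct axd_cmd *cmd)
{
        struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd);

        return axd->buf_base_m;
}

/* a caller can then do the arithmetic on the __iomem pointer: */
void __iomem *buf = axd_cmd_get_datain_address(cmd) + offset;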
+static inline void axd_set_flag(unsigned int *flag, unsigned int value) +{
- *flag = value;
- smp_wmb(); /* guarantee smp ordering */
+}
+static inline unsigned int axd_get_flag(unsigned int *flag) +{
- smp_rmb(); /* guarantee smp ordering */
- return *flag;
+}
Please use a normal locking construct rather than hand rolling something, or alternatively introduce new generic operations. The fact that you're hand rolling these things that have no driver specific content is really worrying in terms of their safety.
I need to check atomic_ops.txt again but I think atomic_t is not always smp safe. I was definitely running on a version of the Meta architecture in the past where atomic_t wasn't always smp safe.
I'll check if the rules have changed or something new was introduced to deal with this.
It is true that when using atomic_t on multiprocessor systems you still need memory barriers but that doesn't mean atomics bring no benefit. But that's not really the point here - the point is the more general one that the whole idea of open coding memory barrier concurrency constructs doesn't look great. It makes the code much more complex and error prone compared to using normal locking and other concurrency constructs (which Linux has a rich set of).
If we really need performance then it can make sense but I'm not seeing anything here that appears to motivate this. In general all the concurrency code looks much more complex than I would expect.
+#define PIPE_STARTED 1 +#define PIPE_RUNNING 2
Why is the case with in place buffers not a simple zero iteration loop?
This is important when AXD is not consuming the data through I2S but returning it to Linux. What we're trying to deal with here is that the firmware has processed some data and expects Linux to consume whatever it has sent back. We want to ensure that if the user suddenly stops consuming this data by closing the pipe, we drop anything we receive back from AXD; otherwise the workqueue would block indefinitely waiting for a user that has disappeared to consume it, causing a deadlock.
That doesn't seem to address the question...
- axd_platform_irq_ack();
When would this ever be called anywhere else? Just inline it (and it's better practice to only ack things we handle...).
It wouldn't be called anywhere else, but its implementation could be platform specific, which is why it's abstracted. At the moment it does nothing now that we're using MIPS, but we shouldn't assume that this will always be the case. The main purpose of this function is to deassert the interrupt line if the way interrupts are wired for that platform requires it. In the past we were running on hardware where interrupts are sent through a special slave port and the interrupt had to be acked or deasserted.
This sounds like something that should be in the interrupt controller implementation not the leaf driver, just remove this unless you're actually abstracting something.
- flags = axd_platform_lock();
- int_status = ioread32(&cmd->message->int_status);
- iowrite32(0, &cmd->message->int_status);
- if (!int_status)
goto out;
This should cause us to return IRQ_NONE.
I don't think it's necessary. It could happen that AXD sends a DATAIN interrupt and shortly afterwards a DATAOUT interrupt while the handler is already running, causing both interrupts to be handled in one go; the handler could then be called again only to find that there's nothing to do.
Please implement your interrupt handler properly so that the genirq error handling code can work and it is robust against things going wrong in future. It's not like it's a huge amount of complex code.
- if (int_status & AXD_INT_ERROR) {
struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd);
int error = ioread32(&cmd->message->error);
pr_debug("<---- Received error interrupt\n");
switch (error) {
default:
case 0:
break;
We just ignore these?
Case 0 doesn't indicate anything anymore. I can print a warning about an unexpected error code for the default case.
That's more what I'd expect, yes.
- /*
* if we could lock the semaphore, then we're guaranteed that the
* current rd_idx is valid and ready to be used. So no need to verify
* that the status of the descriptor at rd_idx is valid.
*/
- spin_lock(&desc_ctrl->rd_lock);
It really feels like this locking is all complicated and fragile. I'm not entirely sure the optimisation is worth it - are we really sending compressed audio at such a high rate that it's worth having concurrency handling that's hard to think about?
This is similar to how the bufferq implementation works. What is the alternative to this? We do want this to be as fast as possible.
Why not just reuse the bufferq implementation if that's what you want to use? More generally most audio ring buffers just keep track of the last place read or written and don't bother with semaphores (why do we even need to block?). It's not just the semaphore you're using here but also some non-atomic variables accessed with memory barriers and mutexes all scattered over a very large block of code. It is far too much effort to reason about what the locking scheme is supposed to be here to determine if it is safe, and that's not going to get any easier when reviewing future changes.
Trying to make something overly optimised at the expense of comprehensibility and maintainability is not good, if there is a pressing performance reason then by all means but that needs to be something concrete not just a statement that we want things to run faster.
Maybe my use of the semaphore count to keep track of how many descriptors are available and cause the caller to block is the confusing part? Would better comments help?
Documentation would help somewhat but I really think this is far too complicated for what it's trying to do. As far as I can tell all this is doing is simple FIFO type tracking of where the last write and last read in the buffer were (which is what most audio drivers do with their data buffers). That should be something that can be done with something more like just a single lock.
Based on some of your other comments I think this may have been overengineered for some other APIs you were implementing but with the ALSA API it should be possible to dramatically simplify it.
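For illustration, a sketch of the kind of single-lock FIFO index tracking being suggested; the structure and function names here are hypothetical, not from the driver:

struct axd_desc_fifo {
        spinlock_t lock;
        unsigned int rd_idx;    /* next descriptor to hand to the firmware */
        unsigned int wr_idx;    /* next descriptor to reclaim from the firmware */
        unsigned int count;     /* descriptors currently owned by the firmware */
        unsigned int num_desc;
};

static int axd_desc_fifo_push(struct axd_desc_fifo *fifo)
{
        unsigned long flags;
        int ret = -ENOSPC;

        spin_lock_irqsave(&fifo->lock, flags);
        if (fifo->count < fifo->num_desc) {
                /* caller fills buf_desc[fifo->rd_idx] before kicking the fw */
                fifo->rd_idx = (fifo->rd_idx + 1) % fifo->num_desc;
                fifo->count++;
                ret = 0;
        }
        spin_unlock_irqrestore(&fifo->lock, flags);
        return ret;
}

Nothing blocks here; a full or empty ring is reported to the caller, which keeps all the concurrency reasoning in one place.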
- /*
* Based on the defined axd_pipe->buf_size and number of input pipes
* supported by the firmware, we calculate the number of descriptors we
* need to use using this formula:
*
* axd_pipe->buf_size * num_desc = total_size / num_inputs
*/
- num_desc = total_size / (cmd->num_inputs * axd_pipe->buf_size);
I'm not sure that was an especially tricky line of code to follow... am I missing something here?
The driver receives a pointer to a contiguous buffer area that it needs to divide into buffers based on its size, the number of pipes in the system, and the desired buffer size.
That is what I think the code is doing but apparently it's sufficiently complex that it needs a five line comment including a rewritten version of the equation?
On 08/29/2015 11:18 AM, Mark Brown wrote:
On Thu, Aug 27, 2015 at 04:40:09PM +0100, Qais Yousef wrote:
On 08/26/2015 08:16 PM, Mark Brown wrote:
On Mon, Aug 24, 2015 at 01:39:15PM +0100, Qais Yousef wrote:
+unsigned long axd_cmd_get_datain_address(struct axd_cmd *cmd) +{
- struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd);
- return (unsigned long) axd->buf_base_m;
+}
What's going on with these casts?
As with the other cases. buf_base_m is void __iomem * but we want to do some arithmetic to help AXD start up and understand where it needs to run. I agree they don't look nice and if I can avoid them I'd be happy to do so.
C supports pointer arithmetic...
+static inline void axd_set_flag(unsigned int *flag, unsigned int value) +{
- *flag = value;
- smp_wmb(); /* guarantee smp ordering */
+} +static inline unsigned int axd_get_flag(unsigned int *flag) +{
- smp_rmb(); /* guarantee smp ordering */
- return *flag;
+}
Please use a normal locking construct rather than hand rolling something, or alternatively introduce new generic operations. The fact that you're hand rolling these things that have no driver specific content is really worrying in terms of their safety.
I need to check atomic_ops.txt again but I think atomic_t is not always smp safe. I was definitely running on a version of the Meta architecture in the past where atomic_t wasn't always smp safe. I'll check if the rules have changed or something new was introduced to deal with this.
It is true that when using atomic_t on multiprocessor systems you still need memory barriers but that doesn't mean atomics bring no benefit. But that's not really the point here - the point is the more general one that the whole idea of open coding memory barrier concurrency constructs doesn't look great. It makes the code much more complex and error prone compared to using normal locking and other concurrency constructs (which Linux has a rich set of).
If we really need performance then it can make sense but I'm not seeing anything here that appears to motivate this. In general all the concurrency code looks much more complex than I would expect.
OK I'll improve on this.
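If the flags really only need store/load ordering rather than mutual exclusion, one option (a sketch of a possible direction, not a claim about what the driver actually requires) is to use the kernel's existing acquire/release helpers instead of open-coded barriers:

static inline void axd_set_flag(unsigned int *flag, unsigned int value)
{
        /* store with release semantics, pairs with the acquire below */
        smp_store_release(flag, value);
}

static inline unsigned int axd_get_flag(unsigned int *flag)
{
        /* load with acquire semantics */
        return smp_load_acquire(flag);
}

That at least names the ordering being relied on; where the flags are really protecting state transitions, a plain spinlock or mutex would be simpler still.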
+#define PIPE_STARTED 1 +#define PIPE_RUNNING 2
Why is the case with in place buffers not a simple zero iteration loop?
This is important when AXD is not consuming the data through I2S but returning it to Linux. What we're trying to deal with here is that the firmware has processed some data and expects Linux to consume whatever it has sent back. We want to ensure that if the user suddenly stops consuming this data by closing the pipe, we drop anything we receive back from AXD; otherwise the workqueue would block indefinitely waiting for a user that has disappeared to consume it, causing a deadlock.
That doesn't seem to address the question...
I'm sorry I don't understand your question then. Can you rephrase it please?
- axd_platform_irq_ack();
When would this ever be called anywhere else? Just inline it (and it's better practice to only ack things we handle...).
It wouldn't be called anywhere else, but its implementation could be platform specific, which is why it's abstracted. At the moment it does nothing now that we're using MIPS, but we shouldn't assume that this will always be the case. The main purpose of this function is to deassert the interrupt line if the way interrupts are wired for that platform requires it. In the past we were running on hardware where interrupts are sent through a special slave port and the interrupt had to be acked or deasserted.
This sounds like something that should be in the interrupt controller implementation not the leaf driver, just remove this unless you're actually abstracting something.
We're actually abstracting something. This mechanism might not be part of an interrupt controller that is understood by Linux. At least I had this case in the past where the interrupt generated by AXD had to be acked by writing to a special memory address. We don't have a current user for it now though, so it makes sense to remove it, and if a similar user comes up in the future we can sort it out then.
- flags = axd_platform_lock();
- int_status = ioread32(&cmd->message->int_status);
- iowrite32(0, &cmd->message->int_status);
- if (!int_status)
goto out;
This should cause us to return IRQ_NONE.
I don't think it's necessary. It could happen that AXD sends a DATAIN interrupt and shortly afterwards a DATAOUT interrupt while the handler is already running, causing both interrupts to be handled in one go; the handler could then be called again only to find that there's nothing to do.
Please implement your interrupt handler properly so that the genirq error handling code can work and it is robust against things going wrong in future. It's not like it's a huge amount of complex code.
OK. I thought that since int_status being 0 is something we expect, we didn't need to treat it as a failure; it's just a special case where there's no work to do. I'll change it to return IRQ_NONE if that's what you think is more appropriate.
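A sketch of the shape that change could take (the platform locking and per-bit handling from the original handler are elided here):

static irqreturn_t axd_irq(int irq, void *data)
{
        struct axd_cmd *cmd = data;
        unsigned int int_status;

        int_status = ioread32(&cmd->message->int_status);
        iowrite32(0, &cmd->message->int_status);
        if (!int_status)
                return IRQ_NONE;        /* nothing for us, let genirq account for it */

        /* ... handle AXD_INT_ERROR / datain / dataout bits as before ... */

        return IRQ_HANDLED;
}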
- if (int_status & AXD_INT_ERROR) {
struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd);
int error = ioread32(&cmd->message->error);
pr_debug("<---- Received error interrupt\n");
switch (error) {
default:
case 0:
break;
We just ignore these?
Case 0 doesn't indicate anything anymore. I can print a warning about an unexpected error code for the default case.
That's more what I'd expect, yes.
- /*
* if we could lock the semaphore, then we're guaranteed that the
* current rd_idx is valid and ready to be used. So no need to verify
* that the status of the descriptor at rd_idx is valid.
*/
- spin_lock(&desc_ctrl->rd_lock);
It really feels like this locking is all complicated and fragile. I'm not entirely sure the optimisation is worth it - are we really sending compressed audio at such a high rate that it's worth having concurrency handling that's hard to think about?
This is similar to how the bufferq implementation works. What is the other alternative to this? We do want this to be as fast as possible.
Why not just reuse the bufferq implementation if that's what you want to use? More generally most audio ring buffers just keep track of the last place read or written and don't bother with semaphores (why do we even need to block?). It's not just the semaphore you're using here but also some non-atomic variables accessed with memory barriers and mutexes all scattered over a very large block of code. It is far too much effort to reason about what the locking scheme is supposed to be here to determine if it is safe, and that's not going to get any easier when reviewing future changes.
We need to block because, in the past at least, the driver could work in blocking mode: it would return the buffers back to Linux and there was no guarantee that the reader and writer would run at the same rate. The worst-case assumption was that the writer and the reader could be two different apps, for example one app feeding data from the network into AXD to be encoded and another app reading the encoded data back to store it on disk. And since AXD supports multiple pipelines, more than one of these operations could be happening at the same time.
Again, I hear you, and I'll work on refactoring the code to make it simpler and easier to read; hopefully I can get rid of some of the complexity.
Trying to make something overly optimised at the expense of comprehensibility and maintainability is not good. If there is a pressing performance reason then by all means, but that needs to be something concrete, not just a statement that we want things to run faster.
Maybe my use of the semaphore count to keep track of how many descriptors are available and to make the caller block is the confusing part? Would better comments help?
Documentation would help somewhat but I really think this is far too complicated for what it's trying to do. As far as I can tell all this is doing is simple FIFO type tracking of where the last write and last read in the buffer were (which is what most audio drivers do with their data buffers). That should be something that can be done with something more like just a single lock.
Based on some of your other comments I think this may have been overengineered for some other APIs you were implementing but with the ALSA API it should be possible to dramatically simplify it.
I'll work on addressing all of your comments and hopefully the result will be something much simpler.
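As a starting point I'm thinking of something along the lines of the sketch below; all the names are illustrative and none of this is the existing driver code. One slot is kept empty so the full and empty states stay distinguishable:

#include <linux/spinlock.h>

/* Minimal single-lock descriptor ring (illustrative names throughout). */
struct axd_desc_fifo {
	spinlock_t lock;
	unsigned int rd_idx;	/* next descriptor to hand to the reader */
	unsigned int wr_idx;	/* next descriptor to hand to the writer */
	unsigned int num_desc;	/* ring size */
};

/* Claim the next descriptor to write, or return -1 if the ring is full. */
static int axd_fifo_claim_write(struct axd_desc_fifo *f)
{
	int idx = -1;

	spin_lock(&f->lock);
	if ((f->wr_idx + 1) % f->num_desc != f->rd_idx) {
		idx = f->wr_idx;
		f->wr_idx = (f->wr_idx + 1) % f->num_desc;
	}
	spin_unlock(&f->lock);

	return idx;
}

/* Claim the next descriptor to read, or return -1 if the ring is empty. */
static int axd_fifo_claim_read(struct axd_desc_fifo *f)
{
	int idx = -1;

	spin_lock(&f->lock);
	if (f->rd_idx != f->wr_idx) {
		idx = f->rd_idx;
		f->rd_idx = (f->rd_idx + 1) % f->num_desc;
	}
	spin_unlock(&f->lock);

	return idx;
}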
Many thanks, Qais
- /*
* Based on the defined axd_pipe->buf_size and number of input pipes
* supported by the firmware, we calculate the number of descriptors we
* need to use using this formula:
*
* axd_pipe->buf_size * num_desc = total_size / num_inputs
*/
- num_desc = total_size / (cmd->num_inputs * axd_pipe->buf_size);
I'm not sure that was an especially tricky line of code to follow... am I missing something here?
The driver receives a pointer to a contiguous buffer area that it needs to divide into buffers based on its size, the number of pipes in the system, and the desired buffer size.
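For concreteness, with purely illustrative numbers: a 64 KiB contiguous area shared by 4 input pipes with a 4 KiB buf_size gives num_desc = 65536 / (4 * 4096) = 4 descriptors per pipe.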
That is what I think the code is doing but apparently it's sufficiently complex that it needs a five line comment including a rewritten version of the equation?
On Tue, Sep 01, 2015 at 11:46:19AM +0100, Qais Yousef wrote:
On 08/29/2015 11:18 AM, Mark Brown wrote:
On Thu, Aug 27, 2015 at 04:40:09PM +0100, Qais Yousef wrote:
Again, please delete unneeded contexts and leave blanks between paragraphs (I notice you've even been removing the blank lines from quoted material).
+#define PIPE_STARTED 1
+#define PIPE_RUNNING 2
Why is the case with in place buffers not a simple zero iteration loop?
This is important when AXD is not consuming the data through I2S but returning it to Linux. The case we're trying to handle is: the firmware has processed some data and expects Linux to consume whatever it has sent back. We want to ensure that if the user suddenly stops consuming this data by closing the pipe, we drop anything we receive back from AXD; otherwise the workqueue would block indefinitely waiting for a user that has disappeared to consume it, causing a deadlock.
That doesn't seem to address the question...
I'm sorry I don't understand your question then. Can you rephrase it please?
Why don't we just always try to consume any buffers that are in flight?
Why not just reuse the bufferq implementation if that's what you want to use? More generally most audio ring buffers just keep track of the last place read or written and don't bother with semaphores (why do we even need to block?). It's not just the semaphore you're using here but also some non-atomic variables accessed with memory barriers and mutexes all scattered over a very large block of code. It is far too much effort to reason about what the locking scheme is supposed to be here to determine if it is safe, and that's not going to get any easier when reviewing future changes.
We need to block because, in the past at least, the driver could work in blocking mode: it would return the buffers back to Linux and there was no guarantee that the reader and writer would run at the same rate. The worst-case assumption was that the writer and the reader could be two different apps, for example one app feeding data from the network into AXD to be encoded and another app reading the encoded data back to store it on disk. And since AXD supports multiple pipelines, more than one of these operations could be happening at the same time.
I can't really connect the above with a need to block, sorry... I'm going to assume this was something to do with old non-standard external interfaces.
AXD has a lot of registers. These files contain helper functions to access these registers in a readable way.
Signed-off-by: Qais Yousef qais.yousef@imgtec.com
Cc: Liam Girdwood lgirdwood@gmail.com
Cc: Mark Brown broonie@kernel.org
Cc: Jaroslav Kysela perex@perex.cz
Cc: Takashi Iwai tiwai@suse.com
Cc: linux-kernel@vger.kernel.org
---
 sound/soc/img/axd/axd_cmds_config.c         | 1235 ++++++++++
 sound/soc/img/axd/axd_cmds_decoder_config.c |  422 ++++
 sound/soc/img/axd/axd_cmds_info.c           | 1249 ++++++++++
 sound/soc/img/axd/axd_cmds_internal.c       | 3264 +++++++++++++++++++++++++++
 sound/soc/img/axd/axd_cmds_internal.h       |  317 ++
 sound/soc/img/axd/axd_cmds_pipes.c          |    4 +-
 6 files changed, 6489 insertions(+), 2 deletions(-)
 create mode 100644 sound/soc/img/axd/axd_cmds_config.c
 create mode 100644 sound/soc/img/axd/axd_cmds_decoder_config.c
 create mode 100644 sound/soc/img/axd/axd_cmds_info.c
 create mode 100644 sound/soc/img/axd/axd_cmds_internal.c
 create mode 100644 sound/soc/img/axd/axd_cmds_internal.h
diff --git a/sound/soc/img/axd/axd_cmds_config.c b/sound/soc/img/axd/axd_cmds_config.c new file mode 100644 index 000000000000..17366a6a40eb --- /dev/null +++ b/sound/soc/img/axd/axd_cmds_config.c @@ -0,0 +1,1235 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD Commands API - Configuration functions. + */ +#include "axd_cmds.h" +#include "axd_cmds_internal.h" + + +/* + * Enable/Disable Mixer EQ. + * @pipe: pipe number. + * @enable: + * Enable = !0 + * Disable = 0 + */ +void axd_cmd_mixer_set_eqenabled(struct axd_cmd *cmd, unsigned int pipe, + int enable) +{ + unsigned int reg = AXD_REG_EQ_CTRL_GAIN; + unsigned int control; + + if (axd_read_reg(cmd, reg, &control)) + return; + + if (enable) + control |= AXD_EQCTRL_ENABLE_BITS; + else + control &= ~AXD_EQCTRL_ENABLE_BITS; + axd_write_reg(cmd, reg, control); +} + +/* + * Set the Master gain of the EQ of the Mixer + * @pipe: pipe number. + * @gain: 0-99 gain value + */ +void axd_cmd_mixer_set_eqmastergain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + unsigned int reg = AXD_REG_EQ_CTRL_GAIN; + unsigned int control; + + if (unlikely(gain > 99 || gain < 0)) + return; + + if (axd_read_reg(cmd, reg, &control)) + return; + + gain = (gain << AXD_EQCTRL_GAIN_SHIFT) & AXD_EQCTRL_GAIN_BITS; + control &= ~AXD_EQCTRL_GAIN_BITS; + control |= gain; + axd_write_reg(cmd, reg, control); +} + +/* + * Set the gain of the EQ Band0 of the Mixer + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. + */ +void axd_cmd_mixer_set_eqband0gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, AXD_REG_EQ_BAND0, gain); +} + +/* + * Set the gain of the EQ Band1 of the Mixer + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. + */ +void axd_cmd_mixer_set_eqband1gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, AXD_REG_EQ_BAND1, gain); +} + +/* + * Set the gain of the EQ Band2 of Mixer + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. + */ +void axd_cmd_mixer_set_eqband2gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, AXD_REG_EQ_BAND2, gain); +} + +/* + * Set the gain of the EQ Band3 of the Mixer + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. + */ +void axd_cmd_mixer_set_eqband3gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, AXD_REG_EQ_BAND3, gain); +} + +/* + * Set the gain of the EQ Band4 of the Mixer + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. 
+ */ +void axd_cmd_mixer_set_eqband4gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, AXD_REG_EQ_BAND4, gain); +} + +/* + * Select Mixer's Mux output + * @pipe: pipe number. + * @mux: + * Mix = 0 + * Input 0 = 1 + * Input 1 = 2 + * Input 2 = 3 + * Input 3 = 4 + */ +void axd_cmd_mixer_set_mux(struct axd_cmd *cmd, unsigned int pipe, + int mux) +{ + unsigned int reg = axd_get_mixer_mux_reg(cmd, pipe); + + if (unlikely(mux > 4 || mux < 0)) + return; + axd_write_reg(cmd, reg, mux); +} + +/* + * Enable/Disable input. + * @pipe: pipe number. + * @enable: + * Enable = !0 + * Disable = 0 + */ +int axd_cmd_input_set_enabled(struct axd_cmd *cmd, unsigned int pipe, + int enable) +{ + unsigned int reg = axd_get_input_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return -1; + + if (axd_read_reg(cmd, reg, &control)) + return -1; + + if (enable) + control |= AXD_INCTRL_ENABLE_BITS; + else + control &= ~AXD_INCTRL_ENABLE_BITS; + if (axd_write_reg(cmd, reg, control)) + return -1; + return 0; +} + +/* + * Set the source of the input pipe. + * @pipe: pipe number. + * @source: + * Pipe = 0 + * Aux = 1 + */ +void axd_cmd_input_set_source(struct axd_cmd *cmd, unsigned int pipe, + int source) +{ + unsigned int reg = axd_get_input_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg || source > 1 || source < 0)) + return; + if (axd_read_reg(cmd, reg, &control)) + return; + source = (source << AXD_INCTRL_SOURCE_SHIFT) & AXD_INCTRL_SOURCE_BITS; + control &= ~AXD_INCTRL_SOURCE_BITS; + control |= source; + axd_write_reg(cmd, reg, control); +} + +/* + * Set the codec of the input pipe. + * @pipe: pipe number. + * @codec: + * PCM Pass Through = 0 + * MPEG (2/3) = 1 + * Dolby AC3 = 2 + * AAC = 3 + * Ogg Vorbis = 4 + * FLAC = 5 + * Cook = 6 + * WMA = 7 + * DDPlus = 8 + * DTS = 9 Unsupported + * DTS-HD = 10 Unsupported + * ALAC = 11 + * SBC = 13 + */ +int axd_cmd_input_set_codec(struct axd_cmd *cmd, unsigned int pipe, + int codec) +{ + unsigned int reg = axd_get_input_control_reg(cmd, pipe); + unsigned int control, config1; + + /* make sure it's a valid value */ + if (unlikely(!reg || codec > 13 || codec < 0 || + codec == 9 || codec == 10)) + return -1; + + /* make sure the firmware supports it */ + if (axd_read_reg(cmd, AXD_REG_CONFIG1, &config1)) + return -1; + if (!(config1 & BIT(codec))) + return -1; + + if (axd_read_reg(cmd, reg, &control)) + return -1; + codec = (codec << AXD_INCTRL_CODEC_SHIFT) & AXD_INCTRL_CODEC_BITS; + control &= ~AXD_INCTRL_CODEC_BITS; + control |= codec; + axd_write_reg(cmd, reg, control); + + return 0; +} + +/* + * Set the gain of the input pipe. + * @pipe: pipe number. + * @gain: Signed 32 bit 2'compliment gain value. + * Gain Cut or Boost in 0.25dB increment. ie: 4 = 1dB. + */ +void axd_cmd_input_set_gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + unsigned int reg = axd_get_input_gain_reg(cmd, pipe); + + if (unlikely(!reg)) + return; + axd_write_reg(cmd, reg, gain); +} + +/* + * Mute/Unmute the input pipe. + * @pipe: pipe number. + * @mute: 0 = OFF + * !0 = ON + */ +void axd_cmd_input_set_mute(struct axd_cmd *cmd, unsigned int pipe, + int mute) +{ + unsigned int reg = axd_get_input_mute_reg(cmd, pipe); + + if (unlikely(!reg)) + return; + axd_write_reg(cmd, reg, mute); +} + +/* + * Send event for output pipe. + * @pipe: pipe number. 
+ * @event: + * Pause = 0 + * Resume = 1 + */ +void axd_cmd_output_set_event(struct axd_cmd *cmd, unsigned int pipe, + int event) +{ + unsigned int reg = axd_get_output_event_reg(cmd, pipe); + + if (unlikely(!reg)) + return; + axd_write_reg(cmd, reg, event); +} + +/* + * Set the upmix of the input pipe. + * @pipe: pipe number. + * @upmix: + * Pass through = 0 + * Simple 5.1 = 1 + * Dolby Pro Logic 2 = 2 + */ +void axd_cmd_input_set_upmix(struct axd_cmd *cmd, unsigned int pipe, + int upmix) +{ + unsigned int reg = axd_get_input_upmix_reg(cmd, pipe); + + if (unlikely(!reg || upmix > 2 || upmix < 0)) + return; + axd_write_reg(cmd, reg, upmix); +} + +/* Set the buffer occupancy value of @pipe. */ +void axd_cmd_input_set_buffer_occupancy(struct axd_cmd *cmd, unsigned int pipe, + unsigned int bo) +{ + unsigned int reg = axd_get_input_buffer_occupancy_reg(cmd, pipe); + + axd_write_reg(cmd, reg, bo); +} + +/* + * Enable/Disable output. + * @pipe: pipe number. + * @enable: + * Enable = !0 + * Disable = 0 + */ +int axd_cmd_output_set_enabled(struct axd_cmd *cmd, unsigned int pipe, + int enable) +{ + unsigned int reg = axd_get_output_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return -1; + if (axd_read_reg(cmd, reg, &control)) + return -1; + if (enable) + control |= AXD_OUTCTRL_ENABLE_BITS; + else + control &= ~AXD_OUTCTRL_ENABLE_BITS; + if (axd_write_reg(cmd, reg, control)) + return -1; + return 0; +} + +/* + * Set the codec of the output pipe. + * @pipe: pipe number. + * @codec: + * PCM Pass Through = 0 + * MPEG (2/3) = 1 Unsupported + * Dolby AC3 = 2 Unsupported + * AAC = 3 Unsupported + * Ogg Vorbis = 4 Unsupported + * FLAC = 5 + * Cook = 6 Unsupported + * WMA = 7 Unsupported + * DDPlus = 8 Unsupported + * DTS = 9 Unsupported + * DTS-HD = 10 Unsupported + * ALAC = 11 + * SBC = 13 Unsupported + */ +int axd_cmd_output_set_codec(struct axd_cmd *cmd, unsigned int pipe, + int codec) +{ + unsigned int reg = axd_get_output_control_reg(cmd, pipe); + unsigned int control, config2; + + /* make sure it's a valid value */ + if (unlikely(!reg || !(codec == 0 || codec == 5 || codec == 11))) + return -1; + + /* make sure the firmware supports it */ + if (axd_read_reg(cmd, AXD_REG_CONFIG2, &config2)) + return -1; + if (!(config2 & BIT(codec))) + return -1; + + if (axd_read_reg(cmd, reg, &control)) + return -1; + codec = (codec << AXD_OUTCTRL_CODEC_SHIFT) & AXD_OUTCTRL_CODEC_BITS; + control &= ~AXD_OUTCTRL_CODEC_BITS; + control |= codec; + axd_write_reg(cmd, reg, control); + + return 0; +} + +/* + * Set the sink of the output pipe. + * @pipe: pipe number. + * @source: + * Pipe = 0 + */ +void axd_cmd_output_set_sink(struct axd_cmd *cmd, unsigned int pipe, + int sink) +{ + unsigned int reg = axd_get_output_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg || (sink < 0 && sink > 3))) + return; + if (axd_read_reg(cmd, reg, &control)) + return; + sink = (sink << AXD_OUTCTRL_SINK_SHIFT) & AXD_OUTCTRL_SINK_BITS; + control &= ~AXD_OUTCTRL_SINK_BITS; + control |= sink; + axd_write_reg(cmd, reg, control); +} + +/* + * Set the downmix of the output pipe. + * @pipe: pipe number. + * @downmix: + * Pass through = 0 + * 5.1 = 1 + * 2.0 = 2 + */ +void axd_cmd_output_set_downmix(struct axd_cmd *cmd, unsigned int pipe, + int downmix) +{ + unsigned int reg = axd_get_output_downmix_reg(cmd, pipe); + + if (unlikely(!reg || downmix > 2 || downmix < 0)) + return; + axd_write_reg(cmd, reg, downmix); +} + +/* + * Enable/Disable output EQ. + * @pipe: pipe number. 
+ * @enable: + * Enable = !0 + * Disable = 0 + */ +void axd_cmd_output_set_eqenabled(struct axd_cmd *cmd, unsigned int pipe, + int enable) +{ + unsigned int reg = axd_get_output_eqcontrol_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return; + if (axd_read_reg(cmd, reg, &control)) + return; + + if (enable) + control |= AXD_EQCTRL_ENABLE_BITS; + else + control &= ~AXD_EQCTRL_ENABLE_BITS; + axd_write_reg(cmd, reg, control); +} + +/* + * Set the Master gain of the EQ of output pipe. + * @pipe: pipe number. + * @gain: 0-99 gain value + */ +void axd_cmd_output_set_eqmastergain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + unsigned int reg = axd_get_output_eqcontrol_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg || gain > 99 || gain < 0)) + return; + if (axd_read_reg(cmd, reg, &control)) + return; + + gain = (gain << AXD_EQCTRL_GAIN_SHIFT) & AXD_EQCTRL_GAIN_BITS; + control &= ~AXD_EQCTRL_GAIN_BITS; + control |= gain; + axd_write_reg(cmd, reg, control); +} + +/* + * Set the gain of the EQ Band0 of output pipe. + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. + */ +void axd_cmd_output_set_eqband0gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + unsigned int reg = axd_get_output_eqband0_reg(cmd, pipe); + + if (unlikely(!reg)) + return; + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, reg, gain); +} + +/* + * Set the gain of the EQ Band1 of output pipe. + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. + */ +void axd_cmd_output_set_eqband1gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + unsigned int reg = axd_get_output_eqband1_reg(cmd, pipe); + + if (unlikely(!reg)) + return; + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, reg, gain); +} + +/* + * Set the gain of the EQ Band2 of output pipe. + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. + */ +void axd_cmd_output_set_eqband2gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + unsigned int reg = axd_get_output_eqband2_reg(cmd, pipe); + + if (unlikely(!reg)) + return; + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, reg, gain); +} + +/* + * Set the gain of the EQ Band3 of output pipe. + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. + */ +void axd_cmd_output_set_eqband3gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + unsigned int reg = axd_get_output_eqband3_reg(cmd, pipe); + + if (unlikely(!reg)) + return; + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, reg, gain); +} + +/* + * Set the gain of the EQ Band4 of output pipe. + * @pipe: pipe number. + * @gain: Signed 8 bit 2'compliment gain value. 
+ */ +void axd_cmd_output_set_eqband4gain(struct axd_cmd *cmd, unsigned int pipe, + int gain) +{ + unsigned int reg = axd_get_output_eqband4_reg(cmd, pipe); + + if (unlikely(!reg)) + return; + gain = (gain << AXD_EQBANDX_GAIN_SHIFT) & AXD_EQBANDX_GAIN_BITS; + axd_write_reg(cmd, reg, gain); +} + +/* DCPP */ + +int axd_cmd_output_set_dcpp_enabled(struct axd_cmd *cmd, unsigned int pipe, + int enable) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_control_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + if (axd_read_reg(cmd, reg, &control)) + return -1; + + if (enable) + control |= AXD_DCPP_CTRL_ENABLE_BITS; + else + control &= ~AXD_DCPP_CTRL_ENABLE_BITS; + + return axd_write_reg_buf(cmd, reg, control); +} + +int axd_cmd_output_set_dcpp_mode(struct axd_cmd *cmd, unsigned int pipe, + unsigned int mode) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_control_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + if (axd_read_reg(cmd, reg, &control)) + return -1; + + /* Conditionally mask in mode bit */ + control ^= ((control ^ (mode << AXD_DCPP_CTRL_MODE_SHIFT)) + & AXD_DCPP_CTRL_MODE_BITS); + + return axd_write_reg_buf(cmd, reg, control); +} + +int axd_cmd_output_set_dcpp_eq_mode(struct axd_cmd *cmd, unsigned int pipe, + unsigned int mode) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_control_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + if (axd_read_reg(cmd, reg, &control)) + return -1; + + /* Conditionally mask in mode bit */ + control ^= ((control ^ (mode << AXD_DCPP_CTRL_EQ_MODE_SHIFT)) + & AXD_DCPP_CTRL_EQ_MODE_BITS); + + return axd_write_reg_buf(cmd, reg, control); +} + +int axd_cmd_output_set_dcpp_channel_delay_samples(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_delay_samples_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_output_volume(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_eq_output_volume_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_passthrough_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_eq_passthrough_gain_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_inverse_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe, + unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_eq_inverse_passthrough_gain_reg(cmd, + pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_bass_shelf_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_shift_reg(cmd, 
pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_bass_shelf_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_a0_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_bass_shelf_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_a1_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_bass_shelf_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_a2_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_bass_shelf_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_b0_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_bass_shelf_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_b1_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_treble_shelf_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_shift_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_treble_shelf_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_a0_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_treble_shelf_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_a1_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_treble_shelf_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_a2_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int 
axd_cmd_output_set_dcpp_channel_treble_shelf_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_b0_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_treble_shelf_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_b1_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_gain_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a0_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a1_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a2_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_b0_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_b1_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_channel_eq_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, + unsigned int band, unsigned int data) +{ + unsigned int 
reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_shift_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_enabled(struct axd_cmd *cmd, + unsigned int pipe, int enable) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_control_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + if (axd_read_reg(cmd, reg, &control)) + return -1; + + if (enable) + control |= AXD_DCPP_CTRL_SUBBAND_ENABLE_BITS; + else + control &= ~AXD_DCPP_CTRL_SUBBAND_ENABLE_BITS; + + return axd_write_reg_buf(cmd, reg, enable); +} + +int axd_cmd_output_set_dcpp_subband_delay_samples(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + + reg = axd_get_output_dcpp_channel_delay_samples_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_input_channel_mask(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_control_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + if (axd_read_reg(cmd, reg, &control)) + return -1; + + control &= ~AXD_DCPP_CTRL_SUBBAND_CHANNEL_MASK_BITS; + control |= data << AXD_DCPP_CTRL_SUBBAND_CHANNEL_MASK_SHIFT; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_output_volume(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + + reg = axd_get_output_dcpp_channel_eq_output_volume_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_passthrough_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + + reg = axd_get_output_dcpp_channel_eq_passthrough_gain_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_inverse_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + + reg = axd_get_output_dcpp_channel_eq_inverse_passthrough_gain_reg(cmd, + pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_low_pass_filter_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data) +{ + unsigned int reg; + + reg = axd_get_output_dcpp_subband_low_pass_filter_a0_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_low_pass_filter_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data) +{ + unsigned int reg; + + reg = axd_get_output_dcpp_subband_low_pass_filter_a1_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_low_pass_filter_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data) +{ + unsigned int reg; + + reg = axd_get_output_dcpp_subband_low_pass_filter_a2_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return 
axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_low_pass_filter_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data) +{ + unsigned int reg; + + reg = axd_get_output_dcpp_subband_low_pass_filter_b0_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_low_pass_filter_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int data) +{ + unsigned int reg; + + reg = axd_get_output_dcpp_subband_low_pass_filter_b1_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_gain_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a0_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a1_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a2_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_b0_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_b1_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} + +int axd_cmd_output_set_dcpp_subband_eq_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band, unsigned int data) +{ + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_shift_reg(cmd, pipe); + + if (unlikely(!reg)) + return -1; + + return axd_write_reg_buf(cmd, reg, data); +} diff --git a/sound/soc/img/axd/axd_cmds_decoder_config.c 
b/sound/soc/img/axd/axd_cmds_decoder_config.c new file mode 100644 index 000000000000..3b4e15da724c --- /dev/null +++ b/sound/soc/img/axd/axd_cmds_decoder_config.c @@ -0,0 +1,422 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD Commands API - Decoder Configuration functions + */ +#include <sound/compress_params.h> + +#include "axd_cmds.h" +#include "axd_cmds_internal.h" + +/** PCM PASSTHROUGH (input) Config **/ +static int get_pcm_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + unsigned int data; + + reg = axd_get_decoder_pcm_samplerate_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + codec->sample_rate = data; + + reg = axd_get_decoder_pcm_channels_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + codec->ch_in = data; + codec->ch_out = data; + + return 0; +} + +static int set_pcm_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + unsigned int bitspersample; + + switch (codec->sample_rate) { + case 16000: + case 32000: + case 44100: + case 48000: + case 64000: + case 96000: + break; + default: + return -1; + } + reg = axd_get_decoder_pcm_samplerate_reg(cmd, pipe); + axd_write_reg(cmd, reg, codec->sample_rate); + + if (unlikely(codec->ch_in > 8)) + return -1; + reg = axd_get_decoder_pcm_channels_reg(cmd, pipe); + axd_write_reg(cmd, reg, codec->ch_in); + + bitspersample = codec->bit_rate / codec->sample_rate; + switch (bitspersample) { + case 8: + case 16: + case 24: + case 32: + break; + default: + return -1; + } + reg = axd_get_decoder_pcm_bitspersample_reg(cmd, pipe); + axd_write_reg(cmd, reg, bitspersample); + + return 0; +} + +/** MPEG (2/3) Config **/ +static int get_mpeg_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + unsigned int data; + + reg = axd_get_decoder_mpeg_numchannels_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + codec->ch_in = data; + codec->ch_out = data; + + return 0; +} + +static int set_mpeg_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + + if (unlikely(codec->ch_in > 2)) + return -1; + reg = axd_get_decoder_mpeg_numchannels_reg(cmd, pipe); + axd_write_reg(cmd, reg, codec->ch_in); + + return 0; +} + +/** AAC Config **/ +static int get_aac_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + unsigned int data; + + reg = axd_get_decoder_aac_version_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + /* do something */ + + reg = axd_get_decoder_aac_channels_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + codec->ch_in = data; + codec->ch_out = data; + + reg = axd_get_decoder_aac_profile_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + /* do something */ + + reg = axd_get_decoder_aac_streamtype_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + /* do something */ + + reg = axd_get_decoder_aac_samplerate_reg(cmd, pipe); + if 
(axd_read_reg(cmd, reg, &data)) + return -1; + codec->sample_rate = data; + + return 0; +} + +static int set_aac_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + + /* + * AXD AAC decoer version is not the same as AAC version. + * + * 0 -> MPEG2 LC + * 1 -> MPEG4 LC + * 2 -> MPEG4 HE + * 3 -> MPEG4 DAB+ + * + * 2 can actually decoder both MPEG4 LC and MPEG2 LC, so let's choose + * it always. + * + * 3 is actually MPEG4 HE but it can only decode MPEG4 HE. + */ + reg = axd_get_decoder_aac_version_reg(cmd, pipe); + axd_write_reg(cmd, reg, 2); + + reg = axd_get_decoder_aac_channels_reg(cmd, pipe); + axd_write_reg(cmd, reg, codec->ch_in); + + /* + * AXD supports the following profiles or modes in ALSA jargon + * + * 0 -> Main Profile (MP) + * 1 -> Low Complexity (LC) + * 2 -> Scalable Sample Rate (SSR) + * + * Not sure how to get that from snd_codec, so set it always to LC for + * now. + */ + reg = axd_get_decoder_aac_profile_reg(cmd, pipe); + axd_write_reg(cmd, reg, 1); + + /* + * AXD supports the following stream types or formats + * + * 0 -> Auto Detect + * 1 -> ADTS + * 2 -> ADIF + * 3 -> RAW + */ + reg = axd_get_decoder_aac_streamtype_reg(cmd, pipe); + switch (codec->format) { + case SND_AUDIOSTREAMFORMAT_MP2ADTS: + case SND_AUDIOSTREAMFORMAT_MP4ADTS: + axd_write_reg(cmd, reg, 1); + break; + case SND_AUDIOSTREAMFORMAT_ADIF: + axd_write_reg(cmd, reg, 2); + break; + case SND_AUDIOSTREAMFORMAT_RAW: + axd_write_reg(cmd, reg, 3); + break; + default: + /* should we set it to auto detect and see if we can play it? */ + return -1; + } + + reg = axd_get_decoder_aac_samplerate_reg(cmd, pipe); + axd_write_reg(cmd, reg, codec->sample_rate); + + return 0; +} + +/** FLAC Config **/ +static int get_flac_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + unsigned int data; + + reg = axd_get_decoder_flac_channels_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + codec->ch_in = data; + codec->ch_out = data; + + reg = axd_get_decoder_flac_samplerate_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + codec->sample_rate = data; + + reg = axd_get_decoder_flac_bitspersample_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + codec->bit_rate = data * codec->sample_rate; + + reg = axd_get_decoder_flac_md5checking_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + /* do something */ + + return 0; +} + +static int set_flac_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + + if (unlikely(codec->ch_in > 0x7)) + return -1; + reg = axd_get_decoder_flac_channels_reg(cmd, pipe); + axd_write_reg(cmd, reg, codec->ch_in); + + if (unlikely(codec->sample_rate > 0xFFFFF)) + return -1; + reg = axd_get_decoder_flac_samplerate_reg(cmd, pipe); + axd_write_reg(cmd, reg, codec->sample_rate); + + return 0; +} + +/** WMA Config **/ +static int get_wma_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + unsigned int data; + + reg = axd_get_decoder_wma_playeropt_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_drcsetting_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_peakampref_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_rmsampref_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = 
axd_get_decoder_wma_peakamptarget_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_rmsamptarget_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_pcmvalidbitspersample_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_pcmcontainersize_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_wmaformattag_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_wmanumchannels_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_wmasamplespersec_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_wmaaveragebytespersec_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_wmablockalign_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_wmavalidbitspersample_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_wmachannelmask_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + reg = axd_get_decoder_wma_wmaencodeoptions_reg(cmd, pipe); + if (axd_read_reg(cmd, reg, &data)) + return -1; + + return 0; +} + +static int set_wma_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + unsigned int reg; + + reg = axd_get_decoder_wma_wmanumchannels_reg(cmd, pipe); + axd_write_reg(cmd, reg, codec->ch_in); + + reg = axd_get_decoder_wma_wmasamplespersec_reg(cmd, pipe); + axd_write_reg(cmd, reg, codec->sample_rate); + + return 0; +} + +int axd_cmd_input_get_decoder_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + int ret; + + switch (codec->id) { + case SND_AUDIOCODEC_PCM: + ret = axd_cmd_input_set_codec(cmd, pipe, 0); + if (ret) + break; + return get_pcm_params(cmd, pipe, codec); + case SND_AUDIOCODEC_MP3: + ret = axd_cmd_input_set_codec(cmd, pipe, 1); + if (ret) + break; + return get_mpeg_params(cmd, pipe, codec); + case SND_AUDIOCODEC_AAC: + ret = axd_cmd_input_set_codec(cmd, pipe, 3); + if (ret) + break; + return get_aac_params(cmd, pipe, codec); + case SND_AUDIOCODEC_WMA: + ret = axd_cmd_input_set_codec(cmd, pipe, 7); + if (ret) + break; + return get_wma_params(cmd, pipe, codec); + case SND_AUDIOCODEC_FLAC: + ret = axd_cmd_input_set_codec(cmd, pipe, 5); + if (ret) + break; + return get_flac_params(cmd, pipe, codec); + } + + return -1; +} + +int axd_cmd_input_set_decoder_params(struct axd_cmd *cmd, unsigned int pipe, + struct snd_codec *codec) +{ + int ret; + + switch (codec->id) { + case SND_AUDIOCODEC_PCM: + ret = axd_cmd_input_set_codec(cmd, pipe, 0); + if (ret) + break; + return set_pcm_params(cmd, pipe, codec); + case SND_AUDIOCODEC_MP3: + ret = axd_cmd_input_set_codec(cmd, pipe, 1); + if (ret) + break; + return set_mpeg_params(cmd, pipe, codec); + case SND_AUDIOCODEC_AAC: + ret = axd_cmd_input_set_codec(cmd, pipe, 3); + if (ret) + break; + return set_aac_params(cmd, pipe, codec); + case SND_AUDIOCODEC_WMA: + ret = axd_cmd_input_set_codec(cmd, pipe, 7); + if (ret) + break; + return set_wma_params(cmd, pipe, codec); + case SND_AUDIOCODEC_FLAC: + ret = axd_cmd_input_set_codec(cmd, pipe, 5); + if (ret) + break; + return set_flac_params(cmd, pipe, codec); + } + + return -1; +} diff --git a/sound/soc/img/axd/axd_cmds_info.c b/sound/soc/img/axd/axd_cmds_info.c new file mode 100644 index 
000000000000..9347cf7ff7cd --- /dev/null +++ b/sound/soc/img/axd/axd_cmds_info.c @@ -0,0 +1,1249 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * AXD Commands API - Info functions. + */ +#include <linux/bitops.h> +#include <sound/compress_offload.h> + +#include "axd_cmds.h" +#include "axd_cmds_internal.h" + +/* Fills @caps with the list of codecs as set in the bit map @bits */ +static void parse_codec_by_bit(unsigned int bits, struct snd_compr_caps *caps) +{ + if (bits & BIT(0)) { + caps->codecs[caps->num_codecs] = SND_AUDIOCODEC_PCM; + caps->num_codecs++; + } + if (bits & BIT(1)) { + caps->codecs[caps->num_codecs] = SND_AUDIOCODEC_MP3; + caps->num_codecs++; + } + if (bits & BIT(3)) { + caps->codecs[caps->num_codecs] = SND_AUDIOCODEC_AAC; + caps->num_codecs++; + } + if (bits & BIT(4)) { + caps->codecs[caps->num_codecs] = SND_AUDIOCODEC_VORBIS; + caps->num_codecs++; + } + if (bits & BIT(5)) { + caps->codecs[caps->num_codecs] = SND_AUDIOCODEC_FLAC; + caps->num_codecs++; + } + if (bits & BIT(7)) { + caps->codecs[caps->num_codecs] = SND_AUDIOCODEC_WMA; + caps->num_codecs++; + } +} + +/* Info API */ +/* Sets the major and minor numbers of the currently running AXD firmware */ +void axd_cmd_get_version(struct axd_cmd *cmd, + int *major, int *minor, int *patch) +{ + unsigned int version; + + axd_read_reg(cmd, AXD_REG_VERSION, &version); + if (unlikely(!major || !minor)) + return; + *major = (version >> 22); /* top 10 bits */ + *minor = (version >> 12) & 0x3FF; /* middle 10 bits */ + *patch = version & 0xFFF; /* bottom 12 bits */ +} + +/* Sets the number of supported in/out pipes */ +int axd_cmd_get_num_pipes(struct axd_cmd *cmd, + unsigned int *inpipes, unsigned int *outpipes) +{ + unsigned int config0; + int ret; + + ret = axd_read_reg(cmd, AXD_REG_CONFIG0, &config0); + if (unlikely(!inpipes || !outpipes)) + return -1; + if (ret) + return -1; + *inpipes = config0 >> 16; + *inpipes &= 0xFF; + *outpipes = config0 & 0xFF; + return 0; +} + +/* Fills @codecs with a list of supported codecs */ +void axd_cmd_get_decoders(struct axd_cmd *cmd, struct snd_compr_caps *caps) +{ + unsigned int config1; + + axd_read_reg(cmd, AXD_REG_CONFIG1, &config1); + if (unlikely(!caps)) + return; + parse_codec_by_bit(config1, caps); +} + +/* Fills @codecs with a list of supported codecs */ +void axd_cmd_get_encoders(struct axd_cmd *cmd, struct snd_compr_caps *caps) +{ + unsigned int config2; + + axd_read_reg(cmd, AXD_REG_CONFIG2, &config2); + if (unlikely(!caps)) + return; + parse_codec_by_bit(config2, caps); +} + +/* Returns non-zero if Mix/Xbar is present. Zero otherwise. */ +int axd_cmd_xbar_present(struct axd_cmd *cmd) +{ + unsigned int temp; + + axd_read_reg(cmd, AXD_REG_CONFIG3, &temp); + return temp & 0x1; +} + +/* Returns non-zero if mixer EQ is enabled. Zero otherwise. 
*/ +int axd_cmd_mixer_get_eqenabled(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int control; + + axd_read_reg(cmd, AXD_REG_EQ_CTRL_GAIN, &control); + return (control & AXD_EQCTRL_ENABLE_BITS) >> AXD_EQCTRL_ENABLE_SHIFT; +} + +/* Sets @gain to the currently set output EQ Master gain value */ +void axd_cmd_mixer_get_eqmastergain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int control; + + axd_read_reg(cmd, AXD_REG_EQ_CTRL_GAIN, &control); + *gain = (control & AXD_EQCTRL_GAIN_BITS) >> AXD_EQCTRL_GAIN_SHIFT; +} + +/* Sets @gain to the currently set output EQ Band0 gain value */ +void axd_cmd_mixer_get_eqband0gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int temp; + + axd_read_reg(cmd, AXD_REG_EQ_BAND0, &temp); + *gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> AXD_EQBANDX_GAIN_SHIFT; +} + +/* Sets @gain to the currently set output EQ Band1 gain value */ +void axd_cmd_mixer_get_eqband1gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int temp; + + axd_read_reg(cmd, AXD_REG_EQ_BAND1, &temp); + *gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> AXD_EQBANDX_GAIN_SHIFT; +} + +/* Sets @gain to the currently set output EQ Band2 gain value */ +void axd_cmd_mixer_get_eqband2gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int temp; + + axd_read_reg(cmd, AXD_REG_EQ_BAND2, &temp); + *gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> AXD_EQBANDX_GAIN_SHIFT; +} + +/* Sets @gain to the currently set output EQ Band3 gain value */ +void axd_cmd_mixer_get_eqband3gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int temp; + + axd_read_reg(cmd, AXD_REG_EQ_BAND3, &temp); + *gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> AXD_EQBANDX_GAIN_SHIFT; +} + +/* Sets @gain to the currently set output EQ Band4 gain value */ +void axd_cmd_mixer_get_eqband4gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int temp; + + axd_read_reg(cmd, AXD_REG_EQ_BAND4, &temp); + *gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> AXD_EQBANDX_GAIN_SHIFT; +} + +/* + * Returns to the currently selected mux output @pipe of mixer + * + * 0 = mixer + * 1 = input 0 + * 2 = input 1 + * 3 = input 2 + * 4 = input 3 + */ +int axd_cmd_mixer_get_mux(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_mixer_mux_reg(cmd, pipe); + unsigned int setting; + + if (unlikely(!reg)) + return -1; + axd_read_reg(cmd, reg, &setting); + return setting; +} + +/* Returns non-zero of input @pipe is enabled. Zero otherwise. 
*/ +int axd_cmd_input_get_enabled(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_input_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return (control & AXD_INCTRL_ENABLE_BITS) >> AXD_INCTRL_ENABLE_SHIFT; +} + +/* + * Returns the currently selected source of input @pipe + * + * 0 = pipe (buffer) mode + * 1 = aux input + */ +int axd_cmd_input_get_source(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_input_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return -1; + axd_read_reg(cmd, reg, &control); + return (control & AXD_INCTRL_SOURCE_BITS) >> AXD_INCTRL_SOURCE_SHIFT; +} + +/* Returns the currently selected codec of input @pipe */ +int axd_cmd_input_get_codec(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int codec_num = axd_get_input_codec_number(cmd, pipe); + + switch (codec_num) { + case 0: + return SND_AUDIOCODEC_PCM; + case 1: + return SND_AUDIOCODEC_MP3; + case 3: + return SND_AUDIOCODEC_AAC; + case 4: + return SND_AUDIOCODEC_VORBIS; + case 5: + return SND_AUDIOCODEC_FLAC; + case 7: + return SND_AUDIOCODEC_WMA; + default: + return -EINVAL; + } +} + +/* Sets @gain to the currently set input gain value */ +void axd_cmd_input_get_gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int reg = axd_get_input_gain_reg(cmd, pipe); + + if (unlikely(!reg || !gain)) + return; + axd_read_reg(cmd, reg, gain); +} + +/* Sets @gain to the currently set input gain value */ +void axd_cmd_input_get_mute(struct axd_cmd *cmd, unsigned int pipe, + int *muted) +{ + unsigned int reg = axd_get_input_gain_reg(cmd, pipe); + + if (unlikely(!reg || !muted)) + return; + axd_read_reg(cmd, reg, muted); +} + +/* + * Returns the currently selected upmix setting of input @pipe + * + * 0 = Pass Through + * 1 = Simple 5.1 + * 2 = Dolby Pro Logic 2 + */ +int axd_cmd_input_get_upmix(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_input_upmix_reg(cmd, pipe); + unsigned int setting; + + if (unlikely(!reg)) + return -1; + axd_read_reg(cmd, reg, &setting); + return setting; +} + +/* Returns the buffer occupancy value of @pipe. */ +unsigned int axd_cmd_input_get_buffer_occupancy(struct axd_cmd *cmd, + unsigned int pipe) +{ + unsigned int bo; + unsigned int reg = axd_get_input_buffer_occupancy_reg(cmd, pipe); + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &bo); + return bo; +} + +/* Returns non-zero of output @pipe is enabled. Zero otherwise. 
*/ +int axd_cmd_output_get_enabled(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_output_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return (control & AXD_OUTCTRL_ENABLE_BITS) >> AXD_OUTCTRL_ENABLE_SHIFT; +} + +/* Returns the currently selected codec of output @pipe */ +int axd_cmd_output_get_codec(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int codec_num = axd_get_output_codec_number(cmd, pipe); + + switch (codec_num) { + case 0: + return SND_AUDIOCODEC_PCM; + case 1: + return SND_AUDIOCODEC_MP3; + case 3: + return SND_AUDIOCODEC_AAC; + case 4: + return SND_AUDIOCODEC_VORBIS; + case 5: + return SND_AUDIOCODEC_FLAC; + case 7: + return SND_AUDIOCODEC_WMA; + default: + return -EINVAL; + } +} + +/* + * Returns the currently selected sink of output @pipe + * + * 0 = pipe (buffer) mode + * 1 = i2s output + */ +int axd_cmd_output_get_sink(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_output_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return -1; + axd_read_reg(cmd, reg, &control); + return (control & AXD_OUTCTRL_SINK_BITS) >> AXD_OUTCTRL_SINK_SHIFT; +} + +/* + * Returns the currently selected downmix setting of output @pipe + * + * 0 = pass through + * 1 = 5.1 + * 2 = 2.0 + */ +int axd_cmd_output_get_downmix(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_output_downmix_reg(cmd, pipe); + unsigned int setting; + + if (unlikely(!reg)) + return -1; + axd_read_reg(cmd, reg, &setting); + return setting; +} + +/* Returns non-zero of output @pipe EQ is enabled. Zero otherwise. */ +int axd_cmd_output_get_eqenabled(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_output_eqcontrol_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return (control & AXD_EQCTRL_ENABLE_BITS) >> AXD_EQCTRL_ENABLE_SHIFT; +} + +/* Sets @gain to the currently set output EQ Master gain value */ +void axd_cmd_output_get_eqmastergain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int reg = axd_get_output_eqcontrol_reg(cmd, pipe); + unsigned int temp; + + if (unlikely(!reg || !gain)) + return; + axd_read_reg(cmd, reg, &temp); + *gain = ((int)temp & AXD_EQCTRL_GAIN_BITS) >> AXD_EQCTRL_GAIN_SHIFT; +} + +/* Sets @gain to the currently set output EQ Band0 gain value */ +void axd_cmd_output_get_eqband0gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int reg = axd_get_output_eqband0_reg(cmd, pipe); + unsigned int temp; + + if (unlikely(!reg || !gain)) + return; + axd_read_reg(cmd, reg, &temp); + *gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> AXD_EQBANDX_GAIN_SHIFT; +} + +/* Sets @gain to the currently set output EQ Band1 gain value */ +void axd_cmd_output_get_eqband1gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int reg = axd_get_output_eqband1_reg(cmd, pipe); + unsigned int temp; + + if (unlikely(!reg || !gain)) + return; + axd_read_reg(cmd, reg, &temp); + *gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> AXD_EQBANDX_GAIN_SHIFT; +} + +/* Sets @gain to the currently set output EQ Band2 gain value */ +void axd_cmd_output_get_eqband2gain(struct axd_cmd *cmd, unsigned int pipe, + int *gain) +{ + unsigned int reg = axd_get_output_eqband2_reg(cmd, pipe); + unsigned int temp; + + if (unlikely(!reg || !gain)) + return; + axd_read_reg(cmd, reg, &temp); + *gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> 
AXD_EQBANDX_GAIN_SHIFT;
+}
+
+/* Sets @gain to the currently set output EQ Band3 gain value */
+void axd_cmd_output_get_eqband3gain(struct axd_cmd *cmd, unsigned int pipe,
+								int *gain)
+{
+	unsigned int reg = axd_get_output_eqband3_reg(cmd, pipe);
+	unsigned int temp;
+
+	if (unlikely(!reg || !gain))
+		return;
+	axd_read_reg(cmd, reg, &temp);
+	*gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> AXD_EQBANDX_GAIN_SHIFT;
+}
+
+/* Sets @gain to the currently set output EQ Band4 gain value */
+void axd_cmd_output_get_eqband4gain(struct axd_cmd *cmd, unsigned int pipe,
+								int *gain)
+{
+	unsigned int reg = axd_get_output_eqband4_reg(cmd, pipe);
+	unsigned int temp;
+
+	if (unlikely(!reg || !gain))
+		return;
+	axd_read_reg(cmd, reg, &temp);
+	*gain = ((int)temp & AXD_EQBANDX_GAIN_BITS) >> AXD_EQBANDX_GAIN_SHIFT;
+}
+
+void axd_cmd_output_get_geq_power(struct axd_cmd *cmd, unsigned int pipe,
+						char *buf, int channel)
+{
+	u32 data[5];
+	int i;
+
+	if (channel < 4) {
+		for (i = 0; i < 5; i++) {
+			u32 reg = axd_get_output_eq_power_reg_ch0_3(cmd,
+								pipe, i);
+
+			if (unlikely(!reg))
+				return;
+
+			if (axd_read_reg(cmd, reg, &data[i]))
+				return;
+		}
+
+		sprintf(buf, "%d, %d, %d, %d, %d\n",
+			(data[0] >> (channel * 8)) & 0xFF,
+			(data[1] >> (channel * 8)) & 0xFF,
+			(data[2] >> (channel * 8)) & 0xFF,
+			(data[3] >> (channel * 8)) & 0xFF,
+			(data[4] >> (channel * 8)) & 0xFF);
+
+	} else {
+		for (i = 0; i < 5; i++) {
+			u32 reg = axd_get_output_eq_power_reg_ch4_7(cmd,
+								pipe, i);
+
+			if (unlikely(!reg))
+				return;
+
+			if (axd_read_reg(cmd, reg, &data[i]))
+				return;
+		}
+
+		sprintf(buf, "%d, %d, %d, %d, %d\n",
+			(data[0] >> ((channel - 4) * 8)) & 0xFF,
+			(data[1] >> ((channel - 4) * 8)) & 0xFF,
+			(data[2] >> ((channel - 4) * 8)) & 0xFF,
+			(data[3] >> ((channel - 4) * 8)) & 0xFF,
+			(data[4] >> ((channel - 4) * 8)) & 0xFF);
+	}
+}
+
+unsigned int axd_cmd_info_get_resampler_fin(struct axd_cmd *cmd,
+						unsigned int pipe)
+{
+	unsigned int temp;
+	unsigned int reg = axd_get_resample_fin_reg(cmd, pipe);
+
+	axd_read_reg(cmd, reg, &temp);
+
+	return temp;
+}
+
+unsigned int axd_cmd_info_get_resampler_fout(struct axd_cmd *cmd,
+						unsigned int pipe)
+{
+	unsigned int temp;
+	unsigned int reg = axd_get_resample_fout_reg(cmd, pipe);
+
+	axd_read_reg(cmd, reg, &temp);
+
+	return temp;
+}
+
+void axd_cmd_info_set_resampler_fout(struct axd_cmd *cmd,
+					unsigned int pipe, unsigned int fout)
+{
+	unsigned int reg = axd_get_resample_fout_reg(cmd, pipe);
+
+	axd_write_reg(cmd, reg, fout);
+}
+
+unsigned int axd_cmd_output_get_dcpp_enabled(struct axd_cmd *cmd,
+						unsigned int pipe)
+{
+	unsigned int reg = axd_get_output_dcpp_control_reg(cmd, pipe);
+	unsigned int control;
+
+	if (unlikely(!reg))
+		return 0;
+	axd_read_reg(cmd, reg, &control);
+	return (control & AXD_DCPP_CTRL_ENABLE_BITS) >>
+					AXD_DCPP_CTRL_ENABLE_SHIFT;
+}
+
+unsigned int axd_cmd_output_get_dcpp_mode(struct axd_cmd *cmd,
+						unsigned int pipe)
+{
+	unsigned int reg = axd_get_output_dcpp_control_reg(cmd, pipe);
+	unsigned int control;
+
+	if (unlikely(!reg))
+		return 0;
+	axd_read_reg(cmd, reg, &control);
+	return (control & AXD_DCPP_CTRL_MODE_BITS) >> AXD_DCPP_CTRL_MODE_SHIFT;
+}
+
+unsigned int axd_cmd_output_get_dcpp_channels(struct axd_cmd *cmd,
+						unsigned int pipe)
+{
+	unsigned int reg = axd_get_output_dcpp_control_reg(cmd, pipe);
+	unsigned int control;
+
+	if (unlikely(!reg))
+		return 0;
+	axd_read_reg(cmd, reg, &control);
+	return (control & AXD_DCPP_CTRL_CHANNELS_BITS) >>
+					AXD_DCPP_CTRL_CHANNELS_SHIFT;
+}
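+
+/*
+ * Note on the per-channel/per-band DCPP getters further below (illustrative
+ * sketch only, not extra driver code): the channel and band parameters are
+ * reached through shared registers, so the wanted channel (and, where
+ * relevant, band) is selected first and the shared register is read
+ * afterwards, roughly:
+ *
+ *	axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel);
+ *	axd_cmd_output_dcpp_select_band(cmd, pipe, band);
+ *	reg = axd_get_output_dcpp_channel_eq_gain_reg(cmd, pipe);
+ *	axd_read_reg(cmd, reg, &control);
+ *
+ * The select helpers cache the last selector written for each pipe, so
+ * repeated selections do not generate redundant register writes.
+ */
+
+unsigned int axd_cmd_output_get_dcpp_eq_mode(struct axd_cmd *cmd, +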
unsigned int pipe) +{ + unsigned int reg = axd_get_output_dcpp_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return (control & AXD_DCPP_CTRL_EQ_MODE_BITS) >> + AXD_DCPP_CTRL_EQ_MODE_SHIFT; +} + +unsigned int axd_cmd_output_get_dcpp_eq_bands(struct axd_cmd *cmd, + unsigned int pipe) +{ + unsigned int reg = axd_get_output_dcpp_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return (control & AXD_DCPP_CTRL_EQ_BANDS_BITS) >> + AXD_DCPP_CTRL_EQ_BANDS_SHIFT; +} + +unsigned int axd_cmd_output_get_dcpp_max_delay_samples(struct axd_cmd *cmd, + unsigned int pipe) +{ + unsigned int reg = axd_get_output_dcpp_max_delay_samples_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_delay_samples(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_delay_samples_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_output_volume( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_eq_output_volume_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_eq_passthrough_gain_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_inverse_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_eq_inverse_passthrough_gain_reg(cmd, + pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_shift( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_shift_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_a0_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + 
axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_a1_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_a2_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_b0_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_bass_shelf_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_bass_shelf_b1_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_shift( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_shift_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_a0( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_a0_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_a1( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_a1_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_a2( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_a2_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_b0( + struct axd_cmd *cmd, unsigned int pipe, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_b0_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_treble_shelf_b1( + struct axd_cmd *cmd, unsigned int pipe, 
unsigned int channel) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + + reg = axd_get_output_dcpp_channel_treble_shelf_b1_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_gain_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a0_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a1_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a2_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_b0_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_b1_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_channel_eq_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int channel, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, false, channel); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_shift_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_bands(struct 
axd_cmd *cmd, + unsigned int pipe) +{ + unsigned int reg = axd_get_output_dcpp_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return (control & AXD_DCPP_CTRL_SUBBAND_EQ_BANDS_BITS) + >> AXD_DCPP_CTRL_SUBBAND_EQ_BANDS_SHIFT; +} + +unsigned int axd_cmd_output_get_dcpp_subband_enabled(struct axd_cmd *cmd, + unsigned int pipe) +{ + unsigned int reg = axd_get_output_dcpp_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return (control & AXD_DCPP_CTRL_SUBBAND_ENABLE_BITS) + >> AXD_DCPP_CTRL_SUBBAND_ENABLE_SHIFT; +} + +unsigned int axd_cmd_output_get_dcpp_subband_input_channel_mask( + struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_output_dcpp_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return (control & AXD_DCPP_CTRL_SUBBAND_CHANNEL_MASK_BITS) + >> AXD_DCPP_CTRL_SUBBAND_CHANNEL_MASK_SHIFT; +} + +unsigned int axd_cmd_output_get_dcpp_subband_delay_samples(struct axd_cmd *cmd, + unsigned int pipe) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + + reg = axd_get_output_dcpp_channel_delay_samples_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_output_volume( + struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + + reg = axd_get_output_dcpp_channel_eq_output_volume_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + + reg = axd_get_output_dcpp_channel_eq_passthrough_gain_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_inverse_passthrough_gain( + struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + + reg = axd_get_output_dcpp_channel_eq_inverse_passthrough_gain_reg(cmd, + pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_gain(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_gain_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_a0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a0_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_a1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band) +{ + 
unsigned int control; + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a1_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_a2(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_a2_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_b0(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band) +{ + unsigned int control; + unsigned int reg; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_b0_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_b1(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_b1_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_eq_shift(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band) +{ + unsigned int reg; + unsigned int control; + + axd_cmd_output_dcpp_select_channel(cmd, pipe, true, 0); + axd_cmd_output_dcpp_select_band(cmd, pipe, band); + + reg = axd_get_output_dcpp_channel_eq_shift_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_a0( + struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_subband_low_pass_filter_a0_reg(cmd, pipe); + + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_a1( + struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_subband_low_pass_filter_a1_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_a2( + struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_subband_low_pass_filter_a2_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_b0( + struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_subband_low_pass_filter_b0_reg(cmd, pipe); + if (unlikely(!reg)) + return 0; + axd_read_reg(cmd, reg, &control); + return control; +} + +unsigned int axd_cmd_output_get_dcpp_subband_low_pass_filter_b1( + struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg; + unsigned int control; + + reg = axd_get_output_dcpp_subband_low_pass_filter_b1_reg(cmd, pipe); + if (unlikely(!reg)) + 
return 0; + axd_read_reg(cmd, reg, &control); + return control; +} diff --git a/sound/soc/img/axd/axd_cmds_internal.c b/sound/soc/img/axd/axd_cmds_internal.c new file mode 100644 index 000000000000..6a1e532f748d --- /dev/null +++ b/sound/soc/img/axd/axd_cmds_internal.c @@ -0,0 +1,3264 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * Common functionality required by other axd_cmds_*.c files. + */ +#include <linux/delay.h> +#include <linux/device.h> +#include <linux/err.h> +#include <linux/io.h> +#include <linux/sched.h> + +#include "axd_cmds_internal.h" +#include "axd_module.h" +#include "axd_platform.h" + +#define WRONG_PIPE_STR "Wrong pipe number: %d\n" +#define WRONG_BAND_STR "Wrong band number: %d\n" + +/* + * Send/Clear control kick. + * + * NOTE: + * Must acquire axd_platform_lock() before accessing kick and interrupt status + * registers as the AXD firmware might be accessing them at the same time. + */ +inline void axd_ctrl_kick(struct axd_memory_map __iomem *message) +{ + unsigned long flags; + unsigned int temp; + + flags = axd_platform_lock(); + temp = ioread32(&message->kick) | AXD_ANY_KICK_BIT | AXD_KICK_CTRL_BIT; + iowrite32(temp, &message->kick); + axd_platform_unlock(flags); + axd_platform_kick(); +} +inline void axd_kick_status_clear(struct axd_memory_map __iomem *message) +{ + unsigned long flags; + unsigned int temp; + + flags = axd_platform_lock(); + temp = ioread32(&message->int_status) & ~AXD_INT_KICK_DONE; + iowrite32(temp, &message->int_status); + axd_platform_unlock(flags); +} +/* + * Wait until axd is ready again. Must be called while cm_lock is held. + */ +int axd_wait_ready(struct axd_memory_map __iomem *message) +{ +#define BUSYWAIT_TIME 1 +#define BUSYWAIT_TIMEOUT 100 + unsigned int timeout = 0; + + while (ioread32(&message->control_command) != AXD_CTRL_CMD_READY) { + mdelay(BUSYWAIT_TIME); + timeout += BUSYWAIT_TIME; + if (timeout == BUSYWAIT_TIMEOUT) + return -1; + } + return 0; +} + +/* + * Reads a register from the MemoryMapped register interface. + * @cmd: pointer to initialized struct axd_cmd. + * @reg: the register address to be accessed. + */ +int axd_read_reg(struct axd_cmd *cmd, unsigned int reg, unsigned int *data) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct axd_memory_map __iomem *message = cmd->message; + struct mutex *cm_lock = &cmd->cm_lock; + int ret; + + mutex_lock(cm_lock); + if (axd_get_flag(&cmd->fw_stopped_flg)) { + mutex_unlock(cm_lock); + return -1; + } + axd_set_flag(&cmd->response_flg, 0); + iowrite32(AXD_CTRL_CMD_READ_REGISTER | reg, &message->control_command); + axd_ctrl_kick(message); + ret = wait_event_timeout(cmd->wait, + axd_get_flag(&cmd->response_flg) != 0, CMD_TIMEOUT); + *data = ioread32(&message->control_data); + mutex_unlock(cm_lock); + if (!ret) { + dev_warn(axd->dev, "failed to read reg 0x%04X\n", reg); + *data = 0; + return -1; + } + return 0; +} + +/* + * Writes control data to the MemoryMapped control interface. + * We assume that cm_lock is held before this function is called. + * @cmd: pointer to initialized struct axd_cmd. 
+ * @ctrl_command: the control command to write. + * @ctrl_data: the control value to write. + */ +int axd_write_ctrl(struct axd_cmd *cmd, unsigned int ctrl_command, + unsigned int ctrl_data) +{ + struct axd_memory_map __iomem *message = cmd->message; + int ret; + + axd_set_flag(&cmd->response_flg, 0); + iowrite32(ctrl_data, &message->control_data); + iowrite32(ctrl_command, &message->control_command); + axd_ctrl_kick(message); + ret = wait_event_timeout(cmd->wait, + axd_get_flag(&cmd->response_flg) != 0, CMD_TIMEOUT); + return ret; +} + +/* + * Writes value to a register int the MemoryMapped register interface. + * @cmd: pointer to initialized struct axd_cmd. + * @reg: the register address to be accessed. + * @value: the new value to write. + */ +int axd_write_reg(struct axd_cmd *cmd, unsigned int reg, unsigned int value) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct mutex *cm_lock = &cmd->cm_lock; + int ret; + + mutex_lock(cm_lock); + if (axd_get_flag(&cmd->fw_stopped_flg)) { + mutex_unlock(cm_lock); + return -1; + } + ret = axd_write_ctrl(cmd, AXD_CTRL_CMD_WRITE_REGISTER | reg, value); + mutex_unlock(cm_lock); + if (!ret) { + dev_warn(axd->dev, "failed to write reg 0x%04X\n", reg); + return -1; + } + + return 0; +} + +int axd_write_reg_buf(struct axd_cmd *cmd, unsigned int reg, unsigned int value) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct mutex *cm_lock = &cmd->cm_lock; + struct axd_ctrlbuf_item __iomem *buf; + unsigned int ctrlbuf_ctrl = ioread32(&cmd->message->ctrlbuf_ctrl); + unsigned int ctrlbuf_size = ioread32(&cmd->message->ctrlbuf_size); + unsigned int temp; + + if (!axd_get_flag(&cmd->ctrlbuf_active_flg)) { + /* If the ctrlbuf isn't active, fall back to simple reg write */ + return axd_write_reg(cmd, reg, value); + } + + mutex_lock(cm_lock); + if (axd_get_flag(&cmd->fw_stopped_flg)) { + mutex_unlock(cm_lock); + return -1; + } + + if (ctrlbuf_ctrl >= ctrlbuf_size) { + mutex_unlock(cm_lock); + dev_err(axd->dev, "Could not write to ctrlbuf: full\n"); + return -1; + } + + buf = &cmd->message->ctrlbuf[ctrlbuf_ctrl]; + + iowrite32(AXD_CTRL_CMD_WRITE_REGISTER | reg, &buf->reg); + iowrite32(value, &buf->val); + + temp = ioread32(&cmd->message->ctrlbuf_ctrl) + 1; + iowrite32(temp, &cmd->message->ctrlbuf_ctrl); + + mutex_unlock(cm_lock); + + return 0; +} + +int axd_flush_reg_buf(struct axd_cmd *cmd) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + struct mutex *cm_lock = &cmd->cm_lock; + int ret; + + mutex_lock(cm_lock); + if (axd_get_flag(&cmd->fw_stopped_flg)) { + mutex_unlock(cm_lock); + return -1; + } + + if (ioread32(&cmd->message->ctrlbuf_ctrl) == 0) { + mutex_unlock(cm_lock); + dev_warn(axd->dev, "Tried to flush empty ctrlbuf\n"); + return -1; + } + + ret = axd_write_ctrl(cmd, AXD_CTRL_CMD_CTRLBUF_FLUSH, 0); + if (!ret) { + /* Drop buffer and ignore any response */ + iowrite32(0, &cmd->message->ctrlbuf_ctrl); + + mutex_unlock(cm_lock); + dev_err(axd->dev, "Could not write control command to flush buffer"); + return -EIO; + } + + /* Ignore any response */ + iowrite32(0, &cmd->message->ctrlbuf_ctrl); + + mutex_unlock(cm_lock); + + return 0; +} + +/* Returns the address of the correct mixer mux register for @pipe */ +unsigned int axd_get_mixer_mux_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_MUX0; + break; + case 1: + reg = AXD_REG_MUX1; + break; + case 2: + reg = 
AXD_REG_MUX2; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the number of the currently set input codec */ +unsigned int axd_get_input_codec_number(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_input_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return -1; + axd_read_reg(cmd, reg, &control); + return (control & AXD_INCTRL_CODEC_BITS) >> AXD_INCTRL_CODEC_SHIFT; +} + +/* Returns the address of the correct input control register for @pipe */ +unsigned int axd_get_input_control_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_INPUT0_CONTROL; + break; + case 1: + reg = AXD_REG_INPUT1_CONTROL; + break; + case 2: + reg = AXD_REG_INPUT2_CONTROL; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the correct input gain register for @pipe */ +unsigned int axd_get_input_gain_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_INPUT0_GAIN; + break; + case 1: + reg = AXD_REG_INPUT1_GAIN; + break; + case 2: + reg = AXD_REG_INPUT2_GAIN; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the correct input mute register for @pipe */ +unsigned int axd_get_input_mute_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_INPUT0_MUTE; + break; + case 1: + reg = AXD_REG_INPUT1_MUTE; + break; + case 2: + reg = AXD_REG_INPUT2_MUTE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the correct input UpMix register for @pipe */ +unsigned int axd_get_input_upmix_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_INPUT0_UPMIX; + break; + case 1: + reg = AXD_REG_INPUT1_UPMIX; + break; + case 2: + reg = AXD_REG_INPUT2_UPMIX; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the correct input bufer occupancy reg for @pipe */ +unsigned int axd_get_input_buffer_occupancy_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_INPUT0_BUFFER_OCCUPANCY; + break; + case 1: + reg = AXD_REG_INPUT1_BUFFER_OCCUPANCY; + break; + case 2: + reg = AXD_REG_INPUT2_BUFFER_OCCUPANCY; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the number of the currently set output codec */ +unsigned int axd_get_output_codec_number(struct axd_cmd *cmd, unsigned int pipe) +{ + unsigned int reg = axd_get_output_control_reg(cmd, pipe); + unsigned int control; + + if (unlikely(!reg)) + return -1; + axd_read_reg(cmd, reg, &control); + return (control & AXD_OUTCTRL_CODEC_BITS) >> AXD_OUTCTRL_CODEC_SHIFT; +} + +/* Returns the address of the correct output control register for @pipe */ +unsigned int axd_get_output_control_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct 
axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_CONTROL; + break; + case 1: + reg = AXD_REG_OUTPUT1_CONTROL; + break; + case 2: + reg = AXD_REG_OUTPUT2_CONTROL; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the correct output DownMix register for @pipe */ +unsigned int axd_get_output_downmix_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DOWNMIX; + break; + case 1: + reg = AXD_REG_OUTPUT1_DOWNMIX; + break; + case 2: + reg = AXD_REG_OUTPUT2_DOWNMIX; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the correct output event register for @pipe */ +unsigned int axd_get_output_event_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_EVENT; + break; + case 1: + reg = AXD_REG_OUTPUT1_EVENT; + break; + case 2: + reg = AXD_REG_OUTPUT2_EVENT; + break; + default: + dev_err(axd->dev, "Unsupported output pipe number %d\n", pipe); + return 0; + } + return reg; +} + +/* + * Returns the address of the output EQ Ctrl / Master Gain register for + * @pipe + */ +unsigned int axd_get_output_eqcontrol_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_EQCTRL; + break; + case 1: + reg = AXD_REG_OUTPUT1_EQCTRL; + break; + case 2: + reg = AXD_REG_OUTPUT2_EQCTRL; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the output EQ Band0 register for @pipe*/ +unsigned int axd_get_output_eqband0_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_EQBAND0; + break; + case 1: + reg = AXD_REG_OUTPUT1_EQBAND0; + break; + case 2: + reg = AXD_REG_OUTPUT2_EQBAND0; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the output EQ Band1 register for @pipe*/ +unsigned int axd_get_output_eqband1_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_EQBAND1; + break; + case 1: + reg = AXD_REG_OUTPUT1_EQBAND1; + break; + case 2: + reg = AXD_REG_OUTPUT2_EQBAND1; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the output EQ Band2 register for @pipe*/ +unsigned int axd_get_output_eqband2_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_EQBAND2; + break; + case 1: + reg = AXD_REG_OUTPUT1_EQBAND2; + break; + case 2: + reg = AXD_REG_OUTPUT2_EQBAND2; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the output EQ Band3 register for @pipe*/ +unsigned int axd_get_output_eqband3_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + 
struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_EQBAND3; + break; + case 1: + reg = AXD_REG_OUTPUT1_EQBAND3; + break; + case 2: + reg = AXD_REG_OUTPUT2_EQBAND3; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the output EQ Band4 register for @pipe*/ +unsigned int axd_get_output_eqband4_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_EQBAND4; + break; + case 1: + reg = AXD_REG_OUTPUT1_EQBAND4; + break; + case 2: + reg = AXD_REG_OUTPUT2_EQBAND4; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* DCPP */ + +int axd_cmd_output_dcpp_select_channel(struct axd_cmd *cmd, unsigned int pipe, + bool subband, unsigned int channel) +{ + unsigned int reg; + unsigned int control; + int ret; + + reg = axd_get_output_dcpp_channel_ctrl_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + /* Generate channel selector */ + control = 0; + + if (subband) + control = AXD_DCPP_CHANNEL_CTRL_SUBBAND_BITS; + else + control = channel << AXD_DCPP_CHANNEL_CTRL_CHANNEL_SHIFT; + + /* Compare with last channel selector */ + if (control == cmd->dcpp_channel_ctrl_cache[pipe]) { + ret = 0; + } else { + ret = axd_write_reg_buf(cmd, reg, control); + cmd->dcpp_channel_ctrl_cache[pipe] = control; + } + + return ret; +} + +int axd_cmd_output_dcpp_select_band(struct axd_cmd *cmd, unsigned int pipe, + unsigned int band) +{ + unsigned int reg; + unsigned int control; + int ret; + + reg = axd_get_output_dcpp_band_ctrl_reg(cmd, pipe); + if (unlikely(!reg)) + return -1; + + /* Generate band selector */ + control = band; + + /* Compare with last band selector */ + if (control == cmd->dcpp_band_ctrl_cache[pipe]) { + ret = 0; + } else { + ret = axd_write_reg_buf(cmd, reg, control); + cmd->dcpp_band_ctrl_cache[pipe] = control; + } + + return ret; +} + +unsigned int axd_get_output_dcpp_control_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CONTROL; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CONTROL; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CONTROL; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_max_delay_samples_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_MAX_DELAY_SAMPLES; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_MAX_DELAY_SAMPLES; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_MAX_DELAY_SAMPLES; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_ctrl_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg = 0; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_CONTROL; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_CONTROL; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_CONTROL; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + } + return reg; +} + +unsigned int 
axd_get_output_dcpp_band_ctrl_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg = 0; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_BAND_CONTROL; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_BAND_CONTROL; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_BAND_CONTROL; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_delay_samples_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_DELAY_SAMPLES; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_DELAY_SAMPLES; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_DELAY_SAMPLES; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_output_volume_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_OUTPUT_VOLUME; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_OUTPUT_VOLUME; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_OUTPUT_VOLUME; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_passthrough_gain_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_PASSTHROUGH_GAIN; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_PASSTHROUGH_GAIN; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_PASSTHROUGH_GAIN; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_inverse_passthrough_gain_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_INVERSE_PASSTHROUGH_GAIN; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_INVERSE_PASSTHROUGH_GAIN; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_INVERSE_PASSTHROUGH_GAIN; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_bass_shelf_shift_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_SHIFT; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_SHIFT; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_SHIFT; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_bass_shelf_a0_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_A0; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_A0; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_A0; + break; + default: + 
dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_bass_shelf_a1_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_A1; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_A1; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_A1; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_bass_shelf_a2_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_A2; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_A2; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_A2; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_bass_shelf_b0_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_B0; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_B0; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_B0; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_bass_shelf_b1_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_BASS_SHELF_B1; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_BASS_SHELF_B1; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_BASS_SHELF_B1; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_treble_shelf_shift_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_SHIFT; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_SHIFT; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_SHIFT; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_treble_shelf_a0_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_A0; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_A0; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_A0; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_treble_shelf_a1_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_A1; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_A1; + break; + case 2: + reg = 
AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_A1; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_treble_shelf_a2_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_A2; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_A2; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_A2; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_treble_shelf_b0_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_B0; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_B0; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_B0; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_treble_shelf_b1_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_TREBLE_SHELF_B1; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_TREBLE_SHELF_B1; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_TREBLE_SHELF_B1; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_gain_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_GAIN; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_GAIN; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_GAIN; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_a0_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_A0; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_A0; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_A0; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_a1_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_A1; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_A1; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_A1; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_a2_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_A2; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_A2; + break; + case 2: + reg = 
AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_A2; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_b0_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_B0; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_B0; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_B0; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_b1_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_B1; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_B1; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_B1; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_channel_eq_shift_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_CHANNEL_EQ_BAND_SHIFT; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_CHANNEL_EQ_BAND_SHIFT; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_CHANNEL_EQ_BAND_SHIFT; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_subband_low_pass_filter_a0_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg = 0; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_A0; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_A0; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_A0; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_subband_low_pass_filter_a1_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg = 0; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_A1; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_A1; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_A1; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_subband_low_pass_filter_a2_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg = 0; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_A2; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_A2; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_A2; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_subband_low_pass_filter_b0_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg = 0; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_B0; + break; + case 1: + reg = 
AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_B0; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_B0; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_dcpp_subband_low_pass_filter_b1_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg = 0; + + switch (pipe) { + case 0: + reg = AXD_REG_OUTPUT0_DCPP_SUBBAND_LOW_PASS_FILTER_B1; + break; + case 1: + reg = AXD_REG_OUTPUT1_DCPP_SUBBAND_LOW_PASS_FILTER_B1; + break; + case 2: + reg = AXD_REG_OUTPUT2_DCPP_SUBBAND_LOW_PASS_FILTER_B1; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the aac version for decoder at @pipe*/ +unsigned int axd_get_decoder_aac_version_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_AAC_VERSION; + break; + case 1: + reg = AXD_REG_DEC1_AAC_VERSION; + break; + case 2: + reg = AXD_REG_DEC2_AAC_VERSION; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the aac pipes for decoder at @pipe*/ +unsigned int axd_get_decoder_aac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_AAC_CHANNELS; + break; + case 1: + reg = AXD_REG_DEC1_AAC_CHANNELS; + break; + case 2: + reg = AXD_REG_DEC2_AAC_CHANNELS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the aac profile for decoder at @pipe*/ +unsigned int axd_get_decoder_aac_profile_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_AAC_PROFILE; + break; + case 1: + reg = AXD_REG_DEC1_AAC_PROFILE; + break; + case 2: + reg = AXD_REG_DEC2_AAC_PROFILE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the aac stream type for decoder at @pipe*/ +unsigned int axd_get_decoder_aac_streamtype_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_AAC_STREAM_TYPE; + break; + case 1: + reg = AXD_REG_DEC1_AAC_STREAM_TYPE; + break; + case 2: + reg = AXD_REG_DEC2_AAC_STREAM_TYPE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the aac stream type for decoder at @pipe*/ +unsigned int axd_get_decoder_aac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_AAC_SAMPLERATE; + break; + case 1: + reg = AXD_REG_DEC1_AAC_SAMPLERATE; + break; + case 2: + reg = AXD_REG_DEC2_AAC_SAMPLERATE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_decoder_ac3_channels_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = 
AXD_REG_DEC0_AC3_CHANNELS; + break; + case 1: + reg = AXD_REG_DEC1_AC3_CHANNELS; + break; + case 2: + reg = AXD_REG_DEC2_AC3_CHANNELS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_ac3_channel_order_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_AC3_CHANNEL_ORDER; + break; + case 1: + reg = AXD_REG_DEC1_AC3_CHANNEL_ORDER; + break; + case 2: + reg = AXD_REG_DEC2_AC3_CHANNEL_ORDER; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_ac3_mode_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_AC3_MODE; + break; + case 1: + reg = AXD_REG_DEC1_AC3_MODE; + break; + case 2: + reg = AXD_REG_DEC2_AC3_MODE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the cook flavour for decoder at @pipe*/ +unsigned int axd_get_decoder_cook_flavour_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_COOK_FLAVOUR; + break; + case 1: + reg = AXD_REG_DEC1_COOK_FLAVOUR; + break; + case 2: + reg = AXD_REG_DEC2_COOK_FLAVOUR; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the flac pipes for decoder at @pipe*/ +unsigned int axd_get_decoder_flac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_FLAC_CHANNELS; + break; + case 1: + reg = AXD_REG_DEC1_FLAC_CHANNELS; + break; + case 2: + reg = AXD_REG_DEC2_FLAC_CHANNELS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the flac sample rate for decoder at @pipe*/ +unsigned int axd_get_decoder_flac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_FLAC_SAMPLERATE; + break; + case 1: + reg = AXD_REG_DEC1_FLAC_SAMPLERATE; + break; + case 2: + reg = AXD_REG_DEC2_FLAC_SAMPLERATE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the flac bits per sample for decoder at @pipe*/ +unsigned int axd_get_decoder_flac_bitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_FLAC_BITS_PER_SAMPLE; + break; + case 1: + reg = AXD_REG_DEC1_FLAC_BITS_PER_SAMPLE; + break; + case 2: + reg = AXD_REG_DEC2_FLAC_BITS_PER_SAMPLE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the flac md5 checking for decoder at @pipe*/ +unsigned int axd_get_decoder_flac_md5checking_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = 
AXD_REG_DEC0_FLAC_MD5_CHECKING; + break; + case 1: + reg = AXD_REG_DEC1_FLAC_MD5_CHECKING; + break; + case 2: + reg = AXD_REG_DEC2_FLAC_MD5_CHECKING; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the mpeg num pipes for decoder at @pipe*/ +unsigned int axd_get_decoder_mpeg_numchannels_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_MPEG_CHANNELS; + break; + case 1: + reg = AXD_REG_DEC1_MPEG_CHANNELS; + break; + case 2: + reg = AXD_REG_DEC2_MPEG_CHANNELS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the mpeg mlchannel for decoder at @pipe*/ +unsigned int axd_get_decoder_mpeg_mlchannel_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_MPEG_MLCHANNEL; + break; + case 1: + reg = AXD_REG_DEC1_MPEG_MLCHANNEL; + break; + case 2: + reg = AXD_REG_DEC2_MPEG_MLCHANNEL; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma player opt for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_playeropt_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_PLAYER_OPT; + break; + case 1: + reg = AXD_REG_DEC1_WMA_PLAYER_OPT; + break; + case 2: + reg = AXD_REG_DEC2_WMA_PLAYER_OPT; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma player drc setting for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_drcsetting_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_DRC_SETTING; + break; + case 1: + reg = AXD_REG_DEC1_WMA_DRC_SETTING; + break; + case 2: + reg = AXD_REG_DEC2_WMA_DRC_SETTING; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma player peak ref for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_peakampref_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_PEAK_AMP_REF; + break; + case 1: + reg = AXD_REG_DEC1_WMA_PEAK_AMP_REF; + break; + case 2: + reg = AXD_REG_DEC2_WMA_PEAK_AMP_REF; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma player rms ref for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_rmsampref_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_RMS_AMP_REF; + break; + case 1: + reg = AXD_REG_DEC1_WMA_RMS_AMP_REF; + break; + case 2: + reg = AXD_REG_DEC2_WMA_RMS_AMP_REF; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma player peak target for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_peakamptarget_reg(struct 
axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_PEAK_AMP_TARGET; + break; + case 1: + reg = AXD_REG_DEC1_WMA_PEAK_AMP_TARGET; + break; + case 2: + reg = AXD_REG_DEC2_WMA_PEAK_AMP_TARGET; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma player rms target for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_rmsamptarget_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_RMS_AMP_TARGET; + break; + case 1: + reg = AXD_REG_DEC1_WMA_RMS_AMP_TARGET; + break; + case 2: + reg = AXD_REG_DEC2_WMA_RMS_AMP_TARGET; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma pcm valid bits for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_pcmvalidbitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_PCM_VAL_BITS_PER_SAMPLE; + break; + case 1: + reg = AXD_REG_DEC1_WMA_PCM_VAL_BITS_PER_SAMPLE; + break; + case 2: + reg = AXD_REG_DEC2_WMA_PCM_VAL_BITS_PER_SAMPLE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma pcm container size for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_pcmcontainersize_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_PCM_CONTAINER_SIZE; + break; + case 1: + reg = AXD_REG_DEC1_WMA_PCM_CONTAINER_SIZE; + break; + case 2: + reg = AXD_REG_DEC2_WMA_PCM_CONTAINER_SIZE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma format tag for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_wmaformattag_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_WMA_FORMAT_TAG; + break; + case 1: + reg = AXD_REG_DEC1_WMA_WMA_FORMAT_TAG; + break; + case 2: + reg = AXD_REG_DEC2_WMA_WMA_FORMAT_TAG; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma format num channels for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_wmanumchannels_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_WMA_CHANNELS; + break; + case 1: + reg = AXD_REG_DEC1_WMA_WMA_CHANNELS; + break; + case 2: + reg = AXD_REG_DEC2_WMA_WMA_CHANNELS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma format sample/s for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_wmasamplespersec_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_WMA_SAMPLES_PER_SEC; + break; + case 1: + reg = 
AXD_REG_DEC1_WMA_WMA_SAMPLES_PER_SEC; + break; + case 2: + reg = AXD_REG_DEC2_WMA_WMA_SAMPLES_PER_SEC; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* + * Returns the address of the wma format average bytes per sample for decoder + * at @pipe + */ +unsigned int axd_get_decoder_wma_wmaaveragebytespersec_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_WMA_AVG_BYTES_PER_SEC; + break; + case 1: + reg = AXD_REG_DEC1_WMA_WMA_AVG_BYTES_PER_SEC; + break; + case 2: + reg = AXD_REG_DEC2_WMA_WMA_AVG_BYTES_PER_SEC; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma format block align for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_wmablockalign_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_WMA_BLOCK_ALIGN; + break; + case 1: + reg = AXD_REG_DEC1_WMA_WMA_BLOCK_ALIGN; + break; + case 2: + reg = AXD_REG_DEC2_WMA_WMA_BLOCK_ALIGN; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma format valid bits for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_wmavalidbitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_WMA_VAL_BITS_PER_SAMPLE; + break; + case 1: + reg = AXD_REG_DEC1_WMA_WMA_VAL_BITS_PER_SAMPLE; + break; + case 2: + reg = AXD_REG_DEC2_WMA_WMA_VAL_BITS_PER_SAMPLE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma format pipe mask for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_wmachannelmask_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_WMA_CHANNEL_MASK; + break; + case 1: + reg = AXD_REG_DEC1_WMA_WMA_CHANNEL_MASK; + break; + case 2: + reg = AXD_REG_DEC2_WMA_WMA_CHANNEL_MASK; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the wma format encode options for decoder at @pipe*/ +unsigned int axd_get_decoder_wma_wmaencodeoptions_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_WMA_WMA_ENCODE_OPTS; + break; + case 1: + reg = AXD_REG_DEC1_WMA_WMA_ENCODE_OPTS; + break; + case 2: + reg = AXD_REG_DEC2_WMA_WMA_ENCODE_OPTS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the pcm samplerate reg for decoder at @pipe*/ +unsigned int axd_get_decoder_pcm_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_PCMIN0_SAMPLE_RATE; + break; + case 1: + reg = AXD_REG_PCMIN1_SAMPLE_RATE; + break; + case 2: + reg = AXD_REG_PCMIN2_SAMPLE_RATE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return 
reg; +} + +/* Returns the address of the pcm channels reg for decoder at @pipe*/ +unsigned int axd_get_decoder_pcm_channels_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_PCMIN0_CHANNELS; + break; + case 1: + reg = AXD_REG_PCMIN1_CHANNELS; + break; + case 2: + reg = AXD_REG_PCMIN2_CHANNELS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the pcm bitspersample reg for decoder at @pipe*/ +unsigned int axd_get_decoder_pcm_bitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_PCMIN0_BITS_PER_SAMPLE; + break; + case 1: + reg = AXD_REG_PCMIN1_BITS_PER_SAMPLE; + break; + case 2: + reg = AXD_REG_PCMIN2_BITS_PER_SAMPLE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +/* Returns the address of the pcm justification reg for decoder at @pipe*/ +unsigned int axd_get_decoder_pcm_justification_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_PCMIN0_JUSTIFICATION; + break; + case 1: + reg = AXD_REG_PCMIN1_JUSTIFICATION; + break; + case 2: + reg = AXD_REG_PCMIN2_JUSTIFICATION; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_ddplus_config_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_DDPLUS_CONFIG; + break; + case 1: + reg = AXD_REG_DEC1_DDPLUS_CONFIG; + break; + case 2: + reg = AXD_REG_DEC2_DDPLUS_CONFIG; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_decoder_ddplus_channel_order_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_DDPLUS_CHANNEL_ORDER; + break; + case 1: + reg = AXD_REG_DEC1_DDPLUS_CHANNEL_ORDER; + break; + case 2: + reg = AXD_REG_DEC2_DDPLUS_CHANNEL_ORDER; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_alac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_ALAC_CHANNELS; + break; + case 1: + reg = AXD_REG_DEC1_ALAC_CHANNELS; + break; + case 2: + reg = AXD_REG_DEC2_ALAC_CHANNELS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_decoder_alac_depth_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_ALAC_DEPTH; + break; + case 1: + reg = AXD_REG_DEC1_ALAC_DEPTH; + break; + case 2: + reg = AXD_REG_DEC2_ALAC_DEPTH; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_decoder_alac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, 
struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_ALAC_SAMPLE_RATE; + break; + case 1: + reg = AXD_REG_DEC1_ALAC_SAMPLE_RATE; + break; + case 2: + reg = AXD_REG_DEC2_ALAC_SAMPLE_RATE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_decoder_alac_framelength_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_ALAC_FRAME_LENGTH; + break; + case 1: + reg = AXD_REG_DEC1_ALAC_FRAME_LENGTH; + break; + case 2: + reg = AXD_REG_DEC2_ALAC_FRAME_LENGTH; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_alac_maxframebytes_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_ALAC_MAX_FRAME_BYTES; + break; + case 1: + reg = AXD_REG_DEC1_ALAC_MAX_FRAME_BYTES; + break; + case 2: + reg = AXD_REG_DEC2_ALAC_MAX_FRAME_BYTES; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_decoder_alac_avgbitrate_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_ALAC_AVG_BIT_RATE; + break; + case 1: + reg = AXD_REG_DEC1_ALAC_AVG_BIT_RATE; + break; + case 2: + reg = AXD_REG_DEC2_ALAC_AVG_BIT_RATE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_sbc_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_SBC_SAMPLE_RATE; + break; + case 1: + reg = AXD_REG_DEC1_SBC_SAMPLE_RATE; + break; + case 2: + reg = AXD_REG_DEC2_SBC_SAMPLE_RATE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_sbc_audiomode_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_SBC_AUDIO_MODE; + break; + case 1: + reg = AXD_REG_DEC1_SBC_AUDIO_MODE; + break; + case 2: + reg = AXD_REG_DEC2_SBC_AUDIO_MODE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_sbc_blocks_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_SBC_BLOCKS; + break; + case 1: + reg = AXD_REG_DEC1_SBC_BLOCKS; + break; + case 2: + reg = AXD_REG_DEC2_SBC_BLOCKS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_sbc_subbands_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_SBC_SUBBANDS; + break; + case 1: + reg = AXD_REG_DEC1_SBC_SUBBANDS; + break; + case 2: + reg = AXD_REG_DEC2_SBC_SUBBANDS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int 
axd_get_decoder_sbc_bitpool_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_SBC_BITPOOL; + break; + case 1: + reg = AXD_REG_DEC1_SBC_BITPOOL; + break; + case 2: + reg = AXD_REG_DEC2_SBC_BITPOOL; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_decoder_sbc_allocationmode_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_DEC0_SBC_ALLOCATION_MODE; + break; + case 1: + reg = AXD_REG_DEC1_SBC_ALLOCATION_MODE; + break; + case 2: + reg = AXD_REG_DEC2_SBC_ALLOCATION_MODE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + + +unsigned int axd_get_decoder_ms11_mode_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + return AXD_REG_MS11_MODE; +} + +unsigned int axd_get_decoder_ms11_common_config0_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + return AXD_REG_MS11_COMMON_CONFIG0; +} + +unsigned int axd_get_decoder_ms11_common_config1_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + return AXD_REG_MS11_COMMON_CONFIG1; +} + +unsigned int axd_get_decoder_ms11_ddt_config0_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + return AXD_REG_MS11_DDT_CONFIG0; +} + +unsigned int axd_get_decoder_ms11_ddc_config0_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + return AXD_REG_MS11_DDC_CONFIG0; +} + +unsigned int axd_get_decoder_ms11_ext_pcm_config0_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + return AXD_REG_MS11_EXT_PCM_CONFIG0; +} + +/* Returns the address of the pcm bitspersample reg for output at @pipe*/ +unsigned int axd_get_encoder_pcm_bitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_PCMOUT0_BITS_PER_SAMPLE; + break; + case 1: + reg = AXD_REG_PCMOUT1_BITS_PER_SAMPLE; + break; + case 2: + reg = AXD_REG_PCMOUT2_BITS_PER_SAMPLE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_encoder_flac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_CHANNELS; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_CHANNELS; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_CHANNELS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_bitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_BITS_PER_SAMPLE; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_BITS_PER_SAMPLE; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_BITS_PER_SAMPLE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_SAMPLE_RATE; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_SAMPLE_RATE; + break; + case 2: + reg = 
AXD_REG_ENC2_FLAC_SAMPLE_RATE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_totalsamples_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_TOTAL_SAMPLES; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_TOTAL_SAMPLES; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_TOTAL_SAMPLES; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_domidsidestereo_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_DO_MID_SIDE_STEREO; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_DO_MID_SIDE_STEREO; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_DO_MID_SIDE_STEREO; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_loosemidsidestereo_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_LOOSE_MID_SIDE_STEREO; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_LOOSE_MID_SIDE_STEREO; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_LOOSE_MID_SIDE_STEREO; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_doexhaustivemodelsearch_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_DO_EXHAUSTIVE_MODEL_SEARCH; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_DO_EXHAUSTIVE_MODEL_SEARCH; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_DO_EXHAUSTIVE_MODEL_SEARCH; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_minresidualpartitionorder_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_MIN_RESIDUAL_PARTITION_ORDER; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_MIN_RESIDUAL_PARTITION_ORDER; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_MIN_RESIDUAL_PARTITION_ORDER; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_maxresidualpartitionorder_reg( + struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_MAX_RESIDUAL_PARTITION_ORDER; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_MAX_RESIDUAL_PARTITION_ORDER; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_MAX_RESIDUAL_PARTITION_ORDER; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_blocksize_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_BLOCK_SIZE; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_BLOCK_SIZE; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_BLOCK_SIZE; + break; + default: + dev_err(axd->dev, 
WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_bytecount_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_BYTE_COUNT; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_BYTE_COUNT; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_BYTE_COUNT; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_samplecount_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_SAMPLE_COUNT; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_SAMPLE_COUNT; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_SAMPLE_COUNT; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_framecount_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_FRAME_COUNT; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_FRAME_COUNT; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_FRAME_COUNT; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_flac_framebytes_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_FLAC_FRAME_BYTES; + break; + case 1: + reg = AXD_REG_ENC1_FLAC_FRAME_BYTES; + break; + case 2: + reg = AXD_REG_ENC2_FLAC_FRAME_BYTES; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_encoder_alac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_ALAC_CHANNELS; + break; + case 1: + reg = AXD_REG_ENC1_ALAC_CHANNELS; + break; + case 2: + reg = AXD_REG_ENC2_ALAC_CHANNELS; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_alac_depth_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_ALAC_DEPTH; + break; + case 1: + reg = AXD_REG_ENC1_ALAC_DEPTH; + break; + case 2: + reg = AXD_REG_ENC2_ALAC_DEPTH; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_alac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_ALAC_SAMPLE_RATE; + break; + case 1: + reg = AXD_REG_ENC1_ALAC_SAMPLE_RATE; + break; + case 2: + reg = AXD_REG_ENC2_ALAC_SAMPLE_RATE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_alac_framelength_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_ALAC_FRAME_LENGTH; + break; + case 1: + reg = AXD_REG_ENC1_ALAC_FRAME_LENGTH; + 
break; + case 2: + reg = AXD_REG_ENC2_ALAC_FRAME_LENGTH; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_alac_maxframebytes_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_ALAC_MAX_FRAME_BYTES; + break; + case 1: + reg = AXD_REG_ENC1_ALAC_MAX_FRAME_BYTES; + break; + case 2: + reg = AXD_REG_ENC2_ALAC_MAX_FRAME_BYTES; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_alac_avgbitrate_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_ALAC_AVG_BIT_RATE; + break; + case 1: + reg = AXD_REG_ENC1_ALAC_AVG_BIT_RATE; + break; + case 2: + reg = AXD_REG_ENC2_ALAC_AVG_BIT_RATE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} +unsigned int axd_get_encoder_alac_fastmode_reg(struct axd_cmd *cmd, + unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_ENC0_ALAC_FAST_MODE; + break; + case 1: + reg = AXD_REG_ENC1_ALAC_FAST_MODE; + break; + case 2: + reg = AXD_REG_ENC2_ALAC_FAST_MODE; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_output_eq_power_reg_ch0_3(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg = 0; + + if (pipe == 0) { + switch (band) { + case 0: + reg = AXD_REG_EQ_OUT0_POWER_B0_C0_C3; + break; + case 1: + reg = AXD_REG_EQ_OUT0_POWER_B1_C0_C3; + break; + case 2: + reg = AXD_REG_EQ_OUT0_POWER_B2_C0_C3; + break; + case 3: + reg = AXD_REG_EQ_OUT0_POWER_B3_C0_C3; + break; + case 4: + reg = AXD_REG_EQ_OUT0_POWER_B4_C0_C3; + break; + default: + dev_err(axd->dev, WRONG_BAND_STR, band); + return 0; + } + } else if (pipe == 1) { + switch (band) { + case 0: + reg = AXD_REG_EQ_OUT1_POWER_B0_C0_C3; + break; + case 1: + reg = AXD_REG_EQ_OUT1_POWER_B1_C0_C3; + break; + case 2: + reg = AXD_REG_EQ_OUT1_POWER_B2_C0_C3; + break; + case 3: + reg = AXD_REG_EQ_OUT1_POWER_B3_C0_C3; + break; + case 4: + reg = AXD_REG_EQ_OUT1_POWER_B4_C0_C3; + break; + default: + dev_err(axd->dev, WRONG_BAND_STR, band); + return 0; + } + } else if (pipe == 2) { + switch (band) { + case 0: + reg = AXD_REG_EQ_OUT2_POWER_B0_C0_C3; + break; + case 1: + reg = AXD_REG_EQ_OUT2_POWER_B1_C0_C3; + break; + case 2: + reg = AXD_REG_EQ_OUT2_POWER_B2_C0_C3; + break; + case 3: + reg = AXD_REG_EQ_OUT2_POWER_B3_C0_C3; + break; + case 4: + reg = AXD_REG_EQ_OUT2_POWER_B4_C0_C3; + break; + default: + dev_err(axd->dev, WRONG_BAND_STR, band); + return 0; + } + } + return reg; +} + +unsigned int axd_get_output_eq_power_reg_ch4_7(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg = 0; + + if (pipe == 0) { + switch (band) { + case 0: + reg = AXD_REG_EQ_OUT0_POWER_B0_C4_C7; + break; + case 1: + reg = AXD_REG_EQ_OUT0_POWER_B1_C4_C7; + break; + case 2: + reg = AXD_REG_EQ_OUT0_POWER_B2_C4_C7; + break; + case 3: + reg = AXD_REG_EQ_OUT0_POWER_B3_C4_C7; + break; + case 4: + reg = AXD_REG_EQ_OUT0_POWER_B4_C4_C7; + break; + default: + dev_err(axd->dev, 
WRONG_BAND_STR, band); + return 0; + } + } else if (pipe == 1) { + switch (band) { + case 0: + reg = AXD_REG_EQ_OUT1_POWER_B0_C4_C7; + break; + case 1: + reg = AXD_REG_EQ_OUT1_POWER_B1_C4_C7; + break; + case 2: + reg = AXD_REG_EQ_OUT1_POWER_B2_C4_C7; + break; + case 3: + reg = AXD_REG_EQ_OUT1_POWER_B3_C4_C7; + break; + case 4: + reg = AXD_REG_EQ_OUT1_POWER_B4_C4_C7; + break; + default: + dev_err(axd->dev, WRONG_BAND_STR, band); + return 0; + } + } else if (pipe == 2) { + switch (band) { + case 0: + reg = AXD_REG_EQ_OUT2_POWER_B0_C4_C7; + break; + case 1: + reg = AXD_REG_EQ_OUT2_POWER_B1_C4_C7; + break; + case 2: + reg = AXD_REG_EQ_OUT2_POWER_B2_C4_C7; + break; + case 3: + reg = AXD_REG_EQ_OUT2_POWER_B3_C4_C7; + break; + case 4: + reg = AXD_REG_EQ_OUT2_POWER_B4_C4_C7; + break; + default: + dev_err(axd->dev, WRONG_BAND_STR, band); + return 0; + } + } + return reg; +} + +unsigned int axd_get_resample_fin_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_RESAMPLER0_FIN; + break; + case 1: + reg = AXD_REG_RESAMPLER1_FIN; + break; + case 2: + reg = AXD_REG_RESAMPLER2_FIN; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} + +unsigned int axd_get_resample_fout_reg(struct axd_cmd *cmd, unsigned int pipe) +{ + struct axd_dev *axd = container_of(cmd, struct axd_dev, cmd); + unsigned int reg; + + switch (pipe) { + case 0: + reg = AXD_REG_RESAMPLER0_FOUT; + break; + case 1: + reg = AXD_REG_RESAMPLER1_FOUT; + break; + case 2: + reg = AXD_REG_RESAMPLER2_FOUT; + break; + default: + dev_err(axd->dev, WRONG_PIPE_STR, pipe); + return 0; + } + return reg; +} diff --git a/sound/soc/img/axd/axd_cmds_internal.h b/sound/soc/img/axd/axd_cmds_internal.h new file mode 100644 index 000000000000..683fb4cedd73 --- /dev/null +++ b/sound/soc/img/axd/axd_cmds_internal.h @@ -0,0 +1,317 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * Common functionality required by other axd_cmds_*.c files. 
+ */ +#ifndef AXD_CMDS_INTERNAL_H_ +#define AXD_CMDS_INTERNAL_H_ + +#include <linux/io.h> + +#include "axd_cmds.h" + +#define CMP_PARAM(str, param) \ + !strncmp((str), (param "="), sizeof(param "=")-1) + +#define PARAM_VALUE(str, param) \ + PARAM_VALUE_WITH_END(str, param, NULL) + +#define PARAM_VALUE_WITH_END(str, param, end) \ + simple_strtol((str)+sizeof(param "=")-1, end, 0) + +#define PARAM_VALUE_ADV(str, param) \ + PARAM_VALUE_WITH_END(str, param, &str) + +void axd_ctrl_kick(struct axd_memory_map __iomem *message); +void axd_kick_status_clear(struct axd_memory_map __iomem *message); +int axd_wait_ready(struct axd_memory_map __iomem *message); + +int axd_write_ctrl(struct axd_cmd *cmd, unsigned int ctrl_command, + unsigned int ctrl_data); + +int axd_read_reg(struct axd_cmd *cmd, unsigned int reg, unsigned int *data); +int axd_write_reg(struct axd_cmd *cmd, unsigned int reg, unsigned int value); + +int axd_write_reg_buf(struct axd_cmd *cmd, unsigned int reg, + unsigned int value); + +unsigned int axd_get_mixer_mux_reg(struct axd_cmd *cmd, unsigned int pipe); + +unsigned int axd_get_input_codec_number(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_input_control_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_input_gain_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_input_mute_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_input_upmix_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_input_buffer_occupancy_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_output_codec_number(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_control_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_downmix_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_event_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_eqcontrol_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_eqband0_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_eqband1_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_eqband2_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_eqband3_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_eqband4_reg(struct axd_cmd *cmd, unsigned int pipe); + +unsigned int axd_get_output_eq_power_reg_ch0_3(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band); +unsigned int axd_get_output_eq_power_reg_ch4_7(struct axd_cmd *cmd, + unsigned int pipe, unsigned int band); + +unsigned int axd_get_output_dcpp_control_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_max_delay_samples_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_ctrl_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_band_ctrl_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_delay_samples_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_output_volume_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_passthrough_gain_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_inverse_passthrough_gain_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_bass_shelf_shift_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int 
axd_get_output_dcpp_channel_bass_shelf_a0_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_bass_shelf_a1_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_bass_shelf_a2_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_bass_shelf_b0_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_bass_shelf_b1_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_treble_shelf_shift_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_treble_shelf_a0_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_treble_shelf_a1_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_treble_shelf_a2_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_treble_shelf_b0_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_treble_shelf_b1_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_gain_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_a0_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_a1_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_a2_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_b0_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_b1_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_channel_eq_shift_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_subband_input_select_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_output_dcpp_subband_low_pass_filter_a0_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_subband_low_pass_filter_a1_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_subband_low_pass_filter_a2_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_subband_low_pass_filter_b0_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_output_dcpp_subband_low_pass_filter_b1_reg( + struct axd_cmd *cmd, unsigned int pipe); + +unsigned int axd_get_decoder_aac_version_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_aac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_aac_profile_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_aac_streamtype_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_aac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_ac3_channels_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_ac3_channel_order_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_ac3_mode_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_cook_flavour_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_flac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_flac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_flac_bitspersample_reg(struct axd_cmd *cmd, + unsigned int 
pipe); +unsigned int axd_get_decoder_flac_md5checking_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_mpeg_numchannels_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_mpeg_mlchannel_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_wma_playeropt_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_drcsetting_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_peakampref_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_rmsampref_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_peakamptarget_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_rmsamptarget_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_wma_pcmvalidbitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_pcmcontainersize_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_wma_wmaformattag_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_wmanumchannels_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_wmasamplespersec_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_wmaaveragebytespersec_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_wmablockalign_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_wmavalidbitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_wmachannelmask_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_wma_wmaencodeoptions_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_pcm_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_pcm_channels_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_pcm_bitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_pcm_justification_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_ddplus_config_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_ddplus_channel_order_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_alac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_alac_depth_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_alac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_alac_framelength_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_alac_maxframebytes_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_alac_avgbitrate_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_sbc_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_sbc_audiomode_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_sbc_blocks_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_sbc_subbands_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_sbc_bitpool_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_sbc_allocationmode_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_decoder_ms11_mode_reg(struct 
axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_ms11_common_config0_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_ms11_common_config1_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_ms11_ddt_config0_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_ms11_ddc_config0_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_decoder_ms11_ext_pcm_config0_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_encoder_pcm_bitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_encoder_flac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_bitspersample_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_totalsamples_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_domidsidestereo_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_loosemidsidestereo_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_doexhaustivemodelsearch_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_encoder_flac_minresidualpartitionorder_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_encoder_flac_maxresidualpartitionorder_reg( + struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_encoder_flac_blocksize_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_bytecount_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_samplecount_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_framecount_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_flac_framebytes_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_encoder_alac_channels_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_alac_depth_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_alac_samplerate_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_alac_framelength_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_alac_maxframebytes_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_alac_avgbitrate_reg(struct axd_cmd *cmd, + unsigned int pipe); +unsigned int axd_get_encoder_alac_fastmode_reg(struct axd_cmd *cmd, + unsigned int pipe); + +unsigned int axd_get_resample_fin_reg(struct axd_cmd *cmd, unsigned int pipe); +unsigned int axd_get_resample_fout_reg(struct axd_cmd *cmd, unsigned int pipe); + +void axd_cmd_inpipe_init(struct axd_cmd *cmd, unsigned int pipe); +void axd_cmd_outpipe_init(struct axd_cmd *cmd, unsigned int pipe); + +#endif /* AXD_CMDS_INTERNAL_H_ */ diff --git a/sound/soc/img/axd/axd_cmds_pipes.c b/sound/soc/img/axd/axd_cmds_pipes.c index db355b531f76..64d5f170483b 100644 --- a/sound/soc/img/axd/axd_cmds_pipes.c +++ b/sound/soc/img/axd/axd_cmds_pipes.c @@ -722,7 +722,7 @@ void axd_cmd_inpipe_init(struct axd_cmd *cmd, unsigned int pipe) mutex_init(&axd_pipe->eos_mutex); atomic_set(&axd_pipe->intcount, 0);
- /* default buffer size, could be changed through sysfs */ + /* default buffer size, could be changed through kcontrol */ axd_pipe->buf_size = 1024*2; }
@@ -740,7 +740,7 @@ void axd_cmd_outpipe_init(struct axd_cmd *cmd, unsigned int pipe) axd_set_flag(&axd_pipe->eos_flg, 0); atomic_set(&axd_pipe->intcount, 0);
- /* default buffer size, could be changed through sysfs */ + /* default buffer size, could be changed through kcontrol */ axd_pipe->buf_size = 1024*16; }
At the moment AXD runs on MIPS cores only. These files provide the basic functionality needed to prepare the AXD firmware to bootstrap itself, and to perform the low level interrupt/kick handling when AXD is initialised from a MIPS core.
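The axd_platform.h header added here also exports a lock/unlock pair that guards the memory mapped register area shared between the host and the AXD firmware. As a rough sketch of the intended host-side usage (illustrative only; the surrounding code is hypothetical and not part of this patch):

        unsigned long flags;

        /* keep AXD and the other CPUs away from the shared registers */
        flags = axd_platform_lock();

        /* ... read/modify the shared memory mapped registers here ... */

        /* restore the state that axd_platform_lock() returned */
        axd_platform_unlock(flags);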
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Liam Girdwood lgirdwood@gmail.com Cc: Mark Brown broonie@kernel.org Cc: Jaroslav Kysela perex@perex.cz Cc: Takashi Iwai tiwai@suse.com Cc: linux-kernel@vger.kernel.org --- sound/soc/img/axd/axd_platform.h | 35 +++ sound/soc/img/axd/axd_platform_mips.c | 416 ++++++++++++++++++++++++++++++++++ 2 files changed, 451 insertions(+) create mode 100644 sound/soc/img/axd/axd_platform.h create mode 100644 sound/soc/img/axd/axd_platform_mips.c
diff --git a/sound/soc/img/axd/axd_platform.h b/sound/soc/img/axd/axd_platform.h new file mode 100644 index 000000000000..f9cc3c308a4a --- /dev/null +++ b/sound/soc/img/axd/axd_platform.h @@ -0,0 +1,35 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * Platform Specific helper functions. + */ +#ifndef AXD_PLATFORM_H_ +#define AXD_PLATFORM_H_ +#include "axd_module.h" + +void axd_platform_init(struct axd_dev *axd); +void axd_platform_set_pc(unsigned long pc); +int axd_platform_start(void); +void axd_platform_stop(void); +unsigned int axd_platform_num_threads(void); +void axd_platform_kick(void); +void axd_platform_irq_ack(void); +void axd_platform_print_regs(void); + +/* + * protect against simultaneous access to shared memory mapped registers area + * between axd and the host + */ +unsigned long axd_platform_lock(void); +void axd_platform_unlock(unsigned long flags); + +#endif /* AXD_PLATFORM_H_ */ diff --git a/sound/soc/img/axd/axd_platform_mips.c b/sound/soc/img/axd/axd_platform_mips.c new file mode 100644 index 000000000000..ac1cf5eb8a64 --- /dev/null +++ b/sound/soc/img/axd/axd_platform_mips.c @@ -0,0 +1,416 @@ +/* + * Copyright (C) 2011-2015 Imagination Technologies Ltd. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * This file implements running AXD as a single VPE along side linux on the same + * core. 
+ */ +#include <linux/cpu.h> +#include <linux/device.h> +#include <linux/io.h> +#include <linux/irqchip/mips-gic.h> +#include <linux/spinlock.h> + +#include <asm/cpu-features.h> +#include <asm/hazards.h> +#include <asm/mipsregs.h> +#include <asm/mipsmtregs.h> +#include <asm/tlbmisc.h> + +#include "axd_module.h" +#include "axd_platform.h" + + +static unsigned int axd_irqnum; +static unsigned int axd_irq; +static unsigned int axd_vpe; +static spinlock_t lock; +static unsigned long smpirqflags; + + +static void _axd_platform_init(void *info) +{ + unsigned int val; + unsigned long irqflags; + unsigned long mtflags; + + /* + * make sure nothing else on this vpe or another vpe can try to modify + * any of the shared registers below + */ + local_irq_save(irqflags); + mtflags = dvpe(); + + /* EVP = 0, VPC = 1 */ + val = read_c0_mvpcontrol(); + val &= ~MVPCONTROL_EVP; + val |= MVPCONTROL_VPC; + write_c0_mvpcontrol(val); + instruction_hazard(); + + /* prepare TC for setting up */ + settc(axd_vpe); + write_tc_c0_tchalt(1); + + /* make sure no interrupts are pending and exceptions bits are clear */ + write_vpe_c0_cause(0); + write_vpe_c0_status(0); + + /* bind TC to VPE */ + val = read_tc_c0_tcbind(); + val |= (axd_vpe << TCBIND_CURTC_SHIFT) | (axd_vpe << TCBIND_CURVPE_SHIFT); + write_tc_c0_tcbind(val); + + /* VPA = 1, MVP = 1 */ + val = read_vpe_c0_vpeconf0(); + val |= VPECONF0_MVP; + val |= VPECONF0_VPA; + write_vpe_c0_vpeconf0(val); + + /* A = 1, IXMT = 0 */ + val = read_tc_c0_tcstatus(); + val &= ~TCSTATUS_IXMT; + val |= TCSTATUS_A; + write_tc_c0_tcstatus(val); + + /* TE = 1 */ + val = read_vpe_c0_vpecontrol(); + val |= VPECONTROL_TE; + write_vpe_c0_vpecontrol(val); + + /* EVP = 1, VPC = 0 */ + val = read_c0_mvpcontrol(); + val |= MVPCONTROL_EVP; + val &= ~MVPCONTROL_VPC; + write_c0_mvpcontrol(val); + instruction_hazard(); + + evpe(mtflags); + local_irq_restore(irqflags); +} + +void axd_platform_init(struct axd_dev *axd) +{ + struct cpumask cpumask; + + axd_irqnum = axd->irqnum; + axd_irq = axd->axd_irq; + axd_vpe = axd->vpe; + spin_lock_init(&lock); + + /* + * ensure axd irq runs on cpu 0 only as it's the only one that can use + * MT to communicate with AXD + */ + cpumask_clear(&cpumask); + cpumask_set_cpu(0, &cpumask); + irq_set_affinity_hint(axd_irqnum, &cpumask); + +#ifdef CONFIG_HOTPLUG_CPU + /* + * offline the cpu before we do anything + * it's best effort here since the cpu could already be offline, hence + * we ignore the return value. 
+	 */
+	cpu_down(axd_vpe);
+#endif
+
+	if (smp_processor_id() != 0) {
+		/* only cpu 0 can start AXD, so send it a message to do so */
+		smp_call_function_single(0, &_axd_platform_init, NULL, 1);
+		return;
+	}
+
+	_axd_platform_init(NULL);
+}
+
+static void _reset(void *info)
+{
+	unsigned int val;
+	unsigned long irqflags;
+	unsigned long mtflags;
+
+	local_irq_save(irqflags);
+	mtflags = dvpe();
+
+	settc(axd_vpe);
+	/* first halt the AXD TC */
+	write_tc_c0_tchalt(1);
+
+	/* clear EXL and ERL from TCSTATUS */
+	val = read_c0_tcstatus();
+	val &= ~(ST0_EXL | ST0_ERL);
+	write_c0_tcstatus(val);
+
+	evpe(mtflags);
+	local_irq_restore(irqflags);
+}
+
+static void reset(void)
+{
+	if (smp_processor_id() != 0) {
+		/* only cpu 0 can reset AXD, so send it a message to do so */
+		smp_call_function_single(0, &_reset, NULL, 1);
+		return;
+	}
+
+	_reset(NULL);
+}
+
+static void _axd_platform_set_pc(void *info)
+{
+	unsigned long irqflags;
+	unsigned long mtflags;
+	unsigned long pc = *(unsigned long *)info;
+
+	local_irq_save(irqflags);
+	mtflags = dvpe();
+
+	settc(axd_vpe);
+	write_tc_c0_tcrestart(pc);
+
+	evpe(mtflags);
+	local_irq_restore(irqflags);
+}
+
+void axd_platform_set_pc(unsigned long pc)
+{
+	if (smp_processor_id() != 0) {
+		/* only cpu 0 can set AXD PC, so send it a message to do so */
+		smp_call_function_single(0, &_axd_platform_set_pc, &pc, 1);
+		return;
+	}
+
+	_axd_platform_set_pc(&pc);
+}
+
+static void thread_control(int start)
+{
+	unsigned long irqflags;
+	unsigned long mtflags;
+
+	local_irq_save(irqflags);
+	mtflags = dvpe();
+
+	settc(axd_vpe);
+	/* start/stop the AXD TC */
+	write_tc_c0_tchalt(!start);
+
+	evpe(mtflags);
+	local_irq_restore(irqflags);
+}
+
+static void _axd_platform_start(void *info)
+{
+	reset();
+	thread_control(1);
+}
+
+int axd_platform_start(void)
+{
+	if (smp_processor_id() != 0) {
+		/* only cpu 0 can start AXD, so send it a message to do so */
+		smp_call_function_single(0, &_axd_platform_start, NULL, 1);
+		return 0;
+	}
+
+	_axd_platform_start(NULL);
+
+	return 0;
+}
+
+static void _axd_platform_stop(void *info)
+{
+	thread_control(0);
+}
+
+void axd_platform_stop(void)
+{
+	if (smp_processor_id() != 0) {
+		/* only cpu 0 can stop AXD, so send it a message to do so */
+		smp_call_function_single(0, &_axd_platform_stop, NULL, 1);
+		return;
+	}
+
+	_axd_platform_stop(NULL);
+}
+
+unsigned int axd_platform_num_threads(void)
+{
+	return 1;
+}
+
+static void _axd_platform_kick_sw1(void *info)
+{
+	unsigned int val;
+	unsigned long irqflags;
+	unsigned long mtflags;
+
+	local_irq_save(irqflags);
+	mtflags = dvpe();
+
+	settc(axd_vpe);
+	val = read_vpe_c0_cause();
+	val |= CAUSEF_IP1;
+	write_vpe_c0_cause(val);
+
+	evpe(mtflags);
+	local_irq_restore(irqflags);
+}
+
+void axd_platform_kick(void)
+{
+	/*
+	 * ensure all writes to shared uncached memory are visible to AXD
+	 * before sending the interrupt
+	 */
+	wmb();
+
+	if (axd_irq) {
+		gic_send_ipi(axd_irq);
+		return;
+	}
+
+	/* fall back to sending the interrupt at SW1 */
+	if (smp_processor_id() != 0) {
+		/* only cpu 0 can send AXD SW1, so send it a message to do so */
+		smp_call_function_single(0, &_axd_platform_kick_sw1, NULL, 1);
+		return;
+	}
+
+	_axd_platform_kick_sw1(NULL);
+}
+
+static void axd_smp_platform_lock(void *info)
+{
+	unsigned long *flags = info;
+
+	/*
+	 * prevent the AXD irq handler from accessing the lock while another
+	 * processor holds it
+	 */
+	disable_irq(axd_irqnum);
+	*flags = dvpe();
+}
+
+inline unsigned long axd_platform_lock(void)
+{
+	unsigned long irqflags;
+
+	if (smp_processor_id() != 0) {
+		/* only cpu 0 can lock AXD out, so send it a message to do so */
+		unsigned long flags;
+
+		spin_lock(&lock); /* serialise lock access from other smp cpus */
+		smp_call_function_single(0, &axd_smp_platform_lock, &flags, 1);
+		return flags;
+	}
+
+	/*
+	 * If we're not servicing the AXD irq, then another task is trying to
+	 * acquire the lock. In that case poll with spin_trylock() so that
+	 * cpu0 keeps running and can service the other cpus' requests.
+	 */
+	if (!in_interrupt())
+		while (!spin_trylock(&lock))
+			cpu_relax();
+
+	/* prevent other cpus from acquiring the lock while we hold it */
+	local_irq_save(irqflags);
+	smpirqflags = irqflags;
+	return dvpe();
+}
+
+static void axd_smp_platform_unlock(void *info)
+{
+	unsigned long *flags = info;
+
+	evpe(*flags);
+	enable_irq(axd_irqnum);
+}
+
+inline void axd_platform_unlock(unsigned long flags)
+{
+	if (smp_processor_id() != 0) {
+		smp_call_function_single(0, &axd_smp_platform_unlock, &flags, 1);
+		spin_unlock(&lock);
+		return;
+	}
+	evpe(flags);
+	local_irq_restore(smpirqflags);
+	if (!in_interrupt())
+		spin_unlock(&lock);
+}
+
+inline void axd_platform_irq_ack(void)
+{
+}
+
+static void print_regs(unsigned int thread)
+{
+	unsigned long irqflags;
+	unsigned long mtflags;
+
+	local_irq_save(irqflags);
+	mtflags = dvpe();
+
+	settc(thread);
+	pr_err("PC:\t\t0x%08lX\n", read_tc_c0_tcrestart());
+	pr_err("STATUS:\t\t0x%08lX\n", read_vpe_c0_status());
+	pr_err("CAUSE:\t\t0x%08lX\n", read_vpe_c0_cause());
+	pr_err("EPC:\t\t0x%08lX\n", read_vpe_c0_epc());
+	pr_err("EBASE:\t\t0x%08lX\n", read_vpe_c0_ebase());
+	pr_err("BADVADDR:\t0x%08lX\n", read_vpe_c0_badvaddr());
+	pr_err("CONFIG:\t\t0x%08lX\n", read_vpe_c0_config());
+	pr_err("MVPCONTROL:\t0x%08X\n", read_c0_mvpcontrol());
+	pr_err("VPECONTROL:\t0x%08lX\n", read_vpe_c0_vpecontrol());
+	pr_err("VPECONF0:\t0x%08lX\n", read_vpe_c0_vpeconf0());
+	pr_err("TCBIND:\t\t0x%08lX\n", read_tc_c0_tcbind());
+	pr_err("TCSTATUS:\t0x%08lX\n", read_tc_c0_tcstatus());
+	pr_err("TCHALT:\t\t0x%08lX\n", read_tc_c0_tchalt());
+	pr_err("\n");
+	pr_err("$0: 0x%08lX\tat: 0x%08lX\tv0: 0x%08lX\tv1: 0x%08lX\n",
+		mftgpr(0), mftgpr(1), mftgpr(2), mftgpr(3));
+	pr_err("a0: 0x%08lX\ta1: 0x%08lX\ta2: 0x%08lX\ta3: 0x%08lX\n",
+		mftgpr(4), mftgpr(5), mftgpr(6), mftgpr(7));
+	pr_err("t0: 0x%08lX\tt1: 0x%08lX\tt2: 0x%08lX\tt3: 0x%08lX\n",
+		mftgpr(8), mftgpr(9), mftgpr(10), mftgpr(11));
+	pr_err("t4: 0x%08lX\tt5: 0x%08lX\tt6: 0x%08lX\tt7: 0x%08lX\n",
+		mftgpr(12), mftgpr(13), mftgpr(14), mftgpr(15));
+	pr_err("s0: 0x%08lX\ts1: 0x%08lX\ts2: 0x%08lX\ts3: 0x%08lX\n",
+		mftgpr(16), mftgpr(17), mftgpr(18), mftgpr(19));
+	pr_err("s4: 0x%08lX\ts5: 0x%08lX\ts6: 0x%08lX\ts7: 0x%08lX\n",
+		mftgpr(20), mftgpr(21), mftgpr(22), mftgpr(23));
+	pr_err("t8: 0x%08lX\tt9: 0x%08lX\tk0: 0x%08lX\tk1: 0x%08lX\n",
+		mftgpr(24), mftgpr(25), mftgpr(26), mftgpr(27));
+	pr_err("gp: 0x%08lX\tsp: 0x%08lX\ts8: 0x%08lX\tra: 0x%08lX\n",
+		mftgpr(28), mftgpr(29), mftgpr(30), mftgpr(31));
+
+	evpe(mtflags);
+	local_irq_restore(irqflags);
+}
+
+static void _axd_platform_print_regs(void *info)
+{
+	pr_err("VPE%d regs dump\n", axd_vpe);
+	print_regs(axd_vpe);
+}
+
+void axd_platform_print_regs(void)
+{
+	if (smp_processor_id() != 0) {
+		/* only cpu 0 can read AXD regs, so send it a message to do so */
+		smp_call_function_single(0, &_axd_platform_print_regs, NULL, 1);
+		return;
+	}
+
+	_axd_platform_print_regs(NULL);
+}
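For reference, a minimal sketch of how a caller elsewhere in the driver would be expected to pair these primitives: take the lock around any access to the shared message area, release it, then kick AXD. The helper name, the msg_reg argument and the assumption that axd_platform.h declares these entry points are illustrative only, not taken from this patch.

#include <linux/io.h>

#include "axd_platform.h"	/* assumed to declare axd_platform_lock/unlock/kick */

/* hypothetical helper, not part of this series */
static u32 axd_read_message_word(void __iomem *msg_reg)
{
	unsigned long mtflags;
	u32 val;

	/* shut the AXD VPE (and the irq handler on other cpus) out */
	mtflags = axd_platform_lock();

	val = readl(msg_reg);

	/* re-enable the AXD VPE and release the lock */
	axd_platform_unlock(mtflags);

	/* tell AXD the message has been consumed */
	axd_platform_kick();

	return val;
}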
Add the implementation of the ALSA Compress Offload operations. At the moment only playback is supported.
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Liam Girdwood lgirdwood@gmail.com Cc: Mark Brown broonie@kernel.org Cc: Jaroslav Kysela perex@perex.cz Cc: Takashi Iwai tiwai@suse.com Cc: linux-kernel@vger.kernel.org --- sound/soc/img/axd/axd_alsa_ops.c | 211 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 211 insertions(+) create mode 100644 sound/soc/img/axd/axd_alsa_ops.c
diff --git a/sound/soc/img/axd/axd_alsa_ops.c b/sound/soc/img/axd/axd_alsa_ops.c
new file mode 100644
index 000000000000..91e17119b306
--- /dev/null
+++ b/sound/soc/img/axd/axd_alsa_ops.c
@@ -0,0 +1,211 @@
+/*
+ * Copyright (C) 2015 Imagination Technologies Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version
+ * 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * AXD ALSA Compressed ops
+ */
+#include <sound/compress_driver.h>
+#include <sound/soc.h>
+
+#include "axd_cmds.h"
+#include "axd_module.h"
+
+static struct axd_dev *get_axd_from_cstream(struct snd_compr_stream *cstream)
+{
+	struct snd_soc_pcm_runtime *rtd = cstream->private_data;
+	return snd_soc_platform_get_drvdata(rtd->platform);
+}
+
+static int copied_total;
+
+static int axd_compr_open(struct snd_compr_stream *cstream)
+{
+	struct axd_dev *axd = get_axd_from_cstream(cstream);
+
+	axd_cmd_output_set_sink(&axd->cmd, 0, 1);
+	return axd_cmd_inpipe_start(&axd->cmd, 0);
+}
+
+static int axd_compr_free(struct snd_compr_stream *cstream)
+{
+	struct axd_dev *axd = get_axd_from_cstream(cstream);
+
+	axd_cmd_inpipe_stop(&axd->cmd, 0);
+	copied_total = 0;
+
+	return 0;
+}
+
+static int axd_compr_set_params(struct snd_compr_stream *cstream,
+				struct snd_compr_params *params)
+{
+	int ret;
+	struct axd_dev *axd = get_axd_from_cstream(cstream);
+
+	ret = axd_cmd_input_set_decoder_params(&axd->cmd, 0, &params->codec);
+	if (ret)
+		return -EINVAL;
+	return 0;
+}
+
+static int axd_compr_get_params(struct snd_compr_stream *cstream,
+				struct snd_codec *params)
+{
+	int ret;
+	struct axd_dev *axd = get_axd_from_cstream(cstream);
+
+	ret = axd_cmd_input_get_decoder_params(&axd->cmd, 0, params);
+	if (ret)
+		return -EIO;
+	return 0;
+}
+
+static int axd_compr_trigger(struct snd_compr_stream *cstream, int cmd)
+{
+	struct axd_dev *axd = get_axd_from_cstream(cstream);
+
+	if (cmd == SND_COMPR_TRIGGER_PARTIAL_DRAIN ||
+	    cmd == SND_COMPR_TRIGGER_DRAIN) {
+		/* stop to send EOS which will cause the stream to be drained */
+		axd_cmd_inpipe_stop(&axd->cmd, 0);
+
+		/*
+		 * start again, repeating if EAGAIN is returned meaning we're
+		 * being drained
+		 */
+		while (axd_cmd_inpipe_start(&axd->cmd, 0) == -EAGAIN)
+			cpu_relax();
+
+		copied_total = 0;
+	}
+	return 0;
+}
+
+static int axd_compr_pointer(struct snd_compr_stream *cstream,
+				struct snd_compr_tstamp *tstamp)
+{
+	tstamp->copied_total = copied_total;
+	return 0;
+}
+
+static int axd_compr_copy(struct snd_compr_stream *cstream, char __user *buf,
+				size_t count)
+{
+	struct axd_dev *axd = get_axd_from_cstream(cstream);
+	int ret;
+
+	ret = axd_cmd_send_buffer(&axd->cmd, 0, buf, count);
+	if (ret < 0) {
+		dev_err(axd->dev, "failed to write buffer %d\n", ret);
+		return ret;
+	}
+	copied_total += ret;
+
+	return ret;
+}
+
+static int axd_compr_get_caps(struct snd_compr_stream *cstream,
+				struct snd_compr_caps *caps)
+{
+	struct axd_dev *axd = get_axd_from_cstream(cstream);
+
+	caps->min_fragment_size = 1024 * 2;
+	caps->max_fragment_size = 1024 * 2;
+	caps->min_fragments = 1;
+	caps->max_fragments = 5;
+
+	axd_cmd_get_decoders(&axd->cmd, caps);
+
+	return 0;
+}
+
+static int axd_compr_get_codec_caps(struct snd_compr_stream *cstream,
+				struct snd_compr_codec_caps *codec)
+{
+	switch (codec->codec) {
+	case SND_AUDIOCODEC_PCM:
+		codec->num_descriptors = 1;
+		codec->descriptor[0].max_ch = 2;
+		codec->descriptor[0].sample_rates[0] = 96000;
+		codec->descriptor[0].sample_rates[1] = 64000;
+		codec->descriptor[0].sample_rates[2] = 48000;
+		codec->descriptor[0].sample_rates[3] = 44100;
+		codec->descriptor[0].sample_rates[4] = 32000;
+		codec->descriptor[0].sample_rates[5] = 16000;
+		codec->descriptor[0].sample_rates[6] = 8000;
+		codec->descriptor[0].num_sample_rates = 7;
+		codec->descriptor[0].num_bitrates = 0;
+		codec->descriptor[0].profiles = 0;
+		codec->descriptor[0].modes = 0;
+		codec->descriptor[0].formats = 0;
+		break;
+	case SND_AUDIOCODEC_MP3:
+		codec->num_descriptors = 1;
+		codec->descriptor[0].max_ch = 2;
+		codec->descriptor[0].num_sample_rates = 0;
+		codec->descriptor[0].num_bitrates = 0;
+		codec->descriptor[0].profiles = 0;
+		codec->descriptor[0].modes = 0;
+		codec->descriptor[0].formats = 0;
+		break;
+	case SND_AUDIOCODEC_AAC:
+		codec->num_descriptors = 1;
+		codec->descriptor[0].max_ch = 6;
+		codec->descriptor[0].num_sample_rates = 0;
+		codec->descriptor[0].num_bitrates = 0;
+		codec->descriptor[0].profiles = 0;
+		codec->descriptor[0].modes = SND_AUDIOMODE_AAC_MAIN |
+			SND_AUDIOMODE_AAC_LC | SND_AUDIOMODE_AAC_SSR;
+		codec->descriptor[0].formats = SND_AUDIOSTREAMFORMAT_MP2ADTS |
+			SND_AUDIOSTREAMFORMAT_MP4ADTS | SND_AUDIOSTREAMFORMAT_ADIF |
+			SND_AUDIOSTREAMFORMAT_RAW;
+		break;
+	case SND_AUDIOCODEC_VORBIS:
+		codec->num_descriptors = 0;
+		break;
+	case SND_AUDIOCODEC_FLAC:
+		codec->num_descriptors = 1;
+		codec->descriptor[0].max_ch = 6;
+		codec->descriptor[0].num_sample_rates = 0;
+		codec->descriptor[0].num_bitrates = 0;
+		codec->descriptor[0].profiles = 0;
+		codec->descriptor[0].modes = 0;
+		codec->descriptor[0].formats = SND_AUDIOSTREAMFORMAT_FLAC;
+		break;
+	case SND_AUDIOCODEC_WMA:
+		codec->num_descriptors = 1;
+		codec->descriptor[0].max_ch = 6;
+		codec->descriptor[0].num_sample_rates = 0;
+		codec->descriptor[0].num_bitrates = 0;
+		codec->descriptor[0].profiles = SND_AUDIOPROFILE_WMA7 |
+			SND_AUDIOPROFILE_WMA8 | SND_AUDIOPROFILE_WMA9 |
+			SND_AUDIOPROFILE_WMA10;
+		codec->descriptor[0].modes = 0;
+		codec->descriptor[0].formats = SND_AUDIOSTREAMFORMAT_WMA_NOASF_HDR;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+struct snd_compr_ops axd_compr_ops = {
+	.open = axd_compr_open,
+	.free = axd_compr_free,
+	.set_params = axd_compr_set_params,
+	.get_params = axd_compr_get_params,
+	.trigger = axd_compr_trigger,
+	.pointer = axd_compr_pointer,
+	.copy = axd_compr_copy,
+	.get_caps = axd_compr_get_caps,
+	.get_codec_caps = axd_compr_get_codec_caps
+};
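The axd_compr_ops table is intentionally non-static because the registration is expected to happen elsewhere in the driver (axd_module.c, which is not part of this patch). As a rough sketch only, this is how such a table is typically wired into an ASoC platform driver of this era; the structure and function names below are illustrative assumptions, not the actual axd_module.c code:

#include <linux/device.h>
#include <sound/compress_driver.h>
#include <sound/soc.h>

#include "axd_module.h"		/* struct axd_dev */

extern struct snd_compr_ops axd_compr_ops;

/* illustrative only: the real registration lives in axd_module.c */
static struct snd_soc_platform_driver axd_soc_platform = {
	.compr_ops = &axd_compr_ops,
};

static int axd_example_register(struct device *dev, struct axd_dev *axd)
{
	int ret;

	ret = snd_soc_register_platform(dev, &axd_soc_platform);
	if (ret)
		return ret;

	/* this is what get_axd_from_cstream() above reads back via rtd->platform */
	dev_set_drvdata(dev, axd);

	return 0;
}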
Now that all the necessary files are in place, allow AXD to be selected through Kconfig and compiled.
Signed-off-by: Qais Yousef qais.yousef@imgtec.com Cc: Liam Girdwood lgirdwood@gmail.com Cc: Mark Brown broonie@kernel.org Cc: Jaroslav Kysela perex@perex.cz Cc: Takashi Iwai tiwai@suse.com Cc: linux-kernel@vger.kernel.org --- sound/soc/Kconfig | 1 + sound/soc/Makefile | 1 + sound/soc/img/Kconfig | 11 +++++++++++ sound/soc/img/Makefile | 1 + sound/soc/img/axd/Makefile | 13 +++++++++++++ 5 files changed, 27 insertions(+) create mode 100644 sound/soc/img/Kconfig create mode 100644 sound/soc/img/Makefile create mode 100644 sound/soc/img/axd/Makefile
diff --git a/sound/soc/Kconfig b/sound/soc/Kconfig
index 2ae9619443d1..8f29af1d397e 100644
--- a/sound/soc/Kconfig
+++ b/sound/soc/Kconfig
@@ -44,6 +44,7 @@ source "sound/soc/jz4740/Kconfig"
 source "sound/soc/nuc900/Kconfig"
 source "sound/soc/omap/Kconfig"
 source "sound/soc/kirkwood/Kconfig"
+source "sound/soc/img/Kconfig"
 source "sound/soc/intel/Kconfig"
 source "sound/soc/mediatek/Kconfig"
 source "sound/soc/mxs/Kconfig"
diff --git a/sound/soc/Makefile b/sound/soc/Makefile
index e189903fabf4..c6a1c04b8e39 100644
--- a/sound/soc/Makefile
+++ b/sound/soc/Makefile
@@ -23,6 +23,7 @@ obj-$(CONFIG_SND_SOC) += davinci/
 obj-$(CONFIG_SND_SOC) += dwc/
 obj-$(CONFIG_SND_SOC) += fsl/
 obj-$(CONFIG_SND_SOC) += jz4740/
+obj-$(CONFIG_SND_SOC) += img/
 obj-$(CONFIG_SND_SOC) += intel/
 obj-$(CONFIG_SND_SOC) += mediatek/
 obj-$(CONFIG_SND_SOC) += mxs/
diff --git a/sound/soc/img/Kconfig b/sound/soc/img/Kconfig
new file mode 100644
index 000000000000..5a089b7d4929
--- /dev/null
+++ b/sound/soc/img/Kconfig
@@ -0,0 +1,11 @@
+config SND_SOC_IMG_AXD
+	tristate "Imagination AXD Audio Processing IP"
+	depends on MIPS && COMMON_CLK && CMA
+	---help---
+	  Say Y or M here if you want to add support for the AXD Audio Processing IP.
+
+config SND_SOC_IMG_AXD_DEBUGFS
+	bool "AXD debugfs support"
+	depends on SND_SOC_IMG_AXD && DEBUG_FS
+	---help---
+	  Say Y if you want to create AXD debugfs nodes.
diff --git a/sound/soc/img/Makefile b/sound/soc/img/Makefile
new file mode 100644
index 000000000000..189abf5d927c
--- /dev/null
+++ b/sound/soc/img/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_SND_SOC_IMG_AXD) += axd/
diff --git a/sound/soc/img/axd/Makefile b/sound/soc/img/axd/Makefile
new file mode 100644
index 000000000000..cfa1f412bf19
--- /dev/null
+++ b/sound/soc/img/axd/Makefile
@@ -0,0 +1,13 @@
+obj-$(CONFIG_SND_SOC_IMG_AXD) := axd.o
+
+axd-objs = axd_alsa_ops.o \
+	axd_buffers.o \
+	axd_cmds.o \
+	axd_cmds_config.o \
+	axd_cmds_decoder_config.o \
+	axd_cmds_info.o \
+	axd_cmds_internal.o \
+	axd_cmds_pipes.o \
+	axd_hdr.o \
+	axd_module.o \
+	axd_platform_$(ARCH).o
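Since the Makefile links in axd_platform_$(ARCH).o, any architecture port is expected to provide the same small set of entry points the MIPS file above implements. For orientation only, a sketch of that contract as it can be read off the MIPS implementation; the real axd_platform.h in this series may declare more or differ in detail:

/* illustrative prototypes only, derived from axd_platform_mips.c above */
int axd_platform_start(void);
void axd_platform_stop(void);
void axd_platform_set_pc(unsigned long pc);
unsigned int axd_platform_num_threads(void);
void axd_platform_kick(void);
unsigned long axd_platform_lock(void);
void axd_platform_unlock(unsigned long flags);
void axd_platform_irq_ack(void);
void axd_platform_print_regs(void);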
On Mon, Aug 24, 2015 at 01:39:09PM +0100, Qais Yousef wrote:
> Qais Yousef (10): irqchip: irq-mips-gic: export gic_send_ipi dt: add img,axd.txt device tree binding document ALSA: add AXD Audio Processing IP alsa driver ALSA: axd: add fw binary header manipulation files ALSA: axd: add buffers manipulation files ALSA: axd: add basic files for sending/receiving axd cmds ALSA: axd: add cmd interface helper functions ALSA: axd: add low level AXD platform setup files ALSA: axd: add alsa compress offload operations ALSA: axd: add Makefile
Please try to use subject lines matching the style for the subsystem; I very nearly deleted this unread because it looks like an ALSA patch series, not an ASoC one.
On 08/26/2015 07:04 PM, Mark Brown wrote:
> On Mon, Aug 24, 2015 at 01:39:09PM +0100, Qais Yousef wrote:
>> Qais Yousef (10): irqchip: irq-mips-gic: export gic_send_ipi dt: add img,axd.txt device tree binding document ALSA: add AXD Audio Processing IP alsa driver ALSA: axd: add fw binary header manipulation files ALSA: axd: add buffers manipulation files ALSA: axd: add basic files for sending/receiving axd cmds ALSA: axd: add cmd interface helper functions ALSA: axd: add low level AXD platform setup files ALSA: axd: add alsa compress offload operations ALSA: axd: add Makefile
> Please try to use subject lines matching the style for the subsystem; I very nearly deleted this unread because it looks like an ALSA patch series, not an ASoC one.
OK sorry about that. I'll fix this in the next series.
Thanks, Qais
participants (7): Jason Cooper, Jiang Liu, Marc Zyngier, Mark Brown, Mark Rutland, Qais Yousef, Thomas Gleixner