[alsa-devel] Ordering in soc_pcm_hw_params()
soc_pcm_hw_params() sets the parameters in this order: link, codec, cpu, platform
This is tripping me up when switching between the 44100 and 48000 rate families. If the previous song was 44100 then the codec has sysclk set to the 44100 family.
Now I play a 48000-family song. The codec gets the new hardware params and errors out because sysclk is still in the 44100 family.
I have code in the cpu dai's hw_params() which will switch the sysclk, but it never gets to run because the codec's hw_params() has already errored out.
Shouldn't this order be: platform, link, cpu, codec?
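For context, the sequencing in question looks roughly like this (a simplified sketch of the 3.x-era soc_pcm_hw_params() from sound/soc/soc-pcm.c, with locking, symmetry checks and error unwinding stripped out):

    static int soc_pcm_hw_params(struct snd_pcm_substream *substream,
                                 struct snd_pcm_hw_params *params)
    {
        struct snd_soc_pcm_runtime *rtd = substream->private_data;
        int ret;

        /* 1. machine driver (dai_link) runs first to coordinate the card */
        if (rtd->dai_link->ops && rtd->dai_link->ops->hw_params) {
            ret = rtd->dai_link->ops->hw_params(substream, params);
            if (ret < 0)
                return ret;
        }

        /* 2. codec DAI - where the stale-sysclk error described above fires */
        if (rtd->codec_dai->driver->ops->hw_params) {
            ret = rtd->codec_dai->driver->ops->hw_params(substream, params,
                                                         rtd->codec_dai);
            if (ret < 0)
                return ret;
        }

        /* 3. cpu DAI - the sysclk-switching code that never gets to run */
        if (rtd->cpu_dai->driver->ops->hw_params) {
            ret = rtd->cpu_dai->driver->ops->hw_params(substream, params,
                                                       rtd->cpu_dai);
            if (ret < 0)
                return ret;
        }

        /* 4. platform (DMA) */
        if (rtd->platform->driver->ops && rtd->platform->driver->ops->hw_params)
            return rtd->platform->driver->ops->hw_params(substream, params);

        return 0;
    }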
On Mon, Aug 11, 2014 at 09:35:51AM -0400, jonsmirl@gmail.com wrote:
I have code in the cpu dai's hw_params() which will switch the sysclk, but it never gets to run because the codec's hw_params() has already errored out.
Shouldn't this order be: platform, link, cpu, codec?
No, the machine driver runs first so it can do any coordination needed between the other devices. Attempting to fiddle about with the ordering is never going to be robust, someone else will always want a different ordering at some point.
On Mon, Aug 11, 2014 at 11:33 AM, Mark Brown broonie@kernel.org wrote:
On Mon, Aug 11, 2014 at 09:35:51AM -0400, jonsmirl@gmail.com wrote:
I have code in the cpu dai's hw_params() which will switch the sysclk, but it never gets to run because the codec's hw_params() has already errored out.
Shouldn't this order be: platform, link, cpu, codec?
No, the machine driver runs first so it can do any coordination needed between the other devices. Attempting to fiddle about with the ordering is never going to be robust, someone else will always want a different ordering at some point.
I don't see how to fix this without making a machine driver that simply tells the cpu-dai to change the MCLK output from 22.5792 MHz to 24.576 MHz earlier in the hw_params chain. But that's putting a generic piece of code into the machine driver.
Shouldn't codec always be the last in the chain? What would be a case where you don't want it at the end of the chain?
On Mon, Aug 11, 2014 at 12:09 PM, jonsmirl@gmail.com jonsmirl@gmail.com wrote:
On Mon, Aug 11, 2014 at 11:33 AM, Mark Brown broonie@kernel.org wrote:
On Mon, Aug 11, 2014 at 09:35:51AM -0400, jonsmirl@gmail.com wrote:
I have code in the cpu dai's hw_params() which will switch the sysclk, but it never gets to run because the codec's hw_params() has already errored out.
Shouldn't this order be: platform, link, cpu, codec?
No, the machine driver runs first so it can do any coordination needed between the other devices. Attempting to fiddle about with the ordering is never going to be robust, someone else will always want a different ordering at some point.
I don't see how to fix this without making a machine driver that simply tells the cpu-dai to change the MCLK output from 22.5792 MHz to 24.576 MHz earlier in the hw_params chain. But that's putting a generic piece of code into the machine driver.
Shouldn't codec always be the last in the chain? What would be a case where you don't want it at the end of the chain?
Plan B. I added code in the codec's hw_params function to first call the cpu's hw_params function. The cpu's hw_params will run twice but that doesn't hurt anything.
    /* give the cpu_dai a chance to change sysclk first if needed */
    ret = rtd->cpu_dai->driver->ops->hw_params(substream, params,
                                               rtd->cpu_dai);
-- Jon Smirl jonsmirl@gmail.com
On Mon, Aug 11, 2014 at 12:09:43PM -0400, jonsmirl@gmail.com wrote:
On Mon, Aug 11, 2014 at 11:33 AM, Mark Brown broonie@kernel.org wrote:
No, the machine driver runs first so it can do any coordination needed between the other devices. Attempting to fiddle about with the ordering is never going to be robust, someone else will always want a different ordering at some point.
I don't see how to fix this without making a machine driver that simply tells the cpu-dai to change the MCLK output from 22.5792 MHz to 24.576 MHz earlier in the hw_params chain. But that's putting a generic piece of code into the machine driver.
That's absolutely fine, the clocking arrangements and sequencing requirements for achieving it are in general going to depend on the machine. The machine may have requirements which mean that it doesn't want to use a clock configuration it's physically possible for it to generate, or it may want to do more extensive reparenting depending on use case.
For things like this it is generally good practice for the machine driver to disable all clocks (set their rates to zero) when disabling if it wants to support reconfiguration.
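Concretely, that advice might look like this in a machine driver (a minimal sketch, assuming a clk_id of 0 and a CPU-mastered MCLK; snd_soc_dai_set_sysclk() is the real ASoC helper, everything else is illustrative):

    #include <sound/soc.h>

    static int my_card_hw_free(struct snd_pcm_substream *substream)
    {
        struct snd_soc_pcm_runtime *rtd = substream->private_data;

        /* drop rates to zero so the next hw_params can reconfigure freely */
        snd_soc_dai_set_sysclk(rtd->codec_dai, 0, 0, SND_SOC_CLOCK_IN);
        snd_soc_dai_set_sysclk(rtd->cpu_dai, 0, 0, SND_SOC_CLOCK_OUT);
        return 0;
    }

    static struct snd_soc_ops my_card_ops = {
        .hw_free = my_card_hw_free,
    };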
Shouldn't codec always be the last in the chain? What would be a case where you don't want it at the end of the chain?
How are you deciding what the ordering for "the chain" should be in the first place? It seems like you're just assuming that it's what you want to set for your system, which in turn appears to be derived from what you're using as clock master; it's very common for the CODEC to be the clock master in a system.
One could equally argue here that the master shouldn't allow itself to have the clock rate reprogrammed when it has something deriving a clock from it (after all if that thing is running it might be disrupted).
On Mon, Aug 11, 2014 at 1:11 PM, Mark Brown broonie@kernel.org wrote:
On Mon, Aug 11, 2014 at 12:09:43PM -0400, jonsmirl@gmail.com wrote:
On Mon, Aug 11, 2014 at 11:33 AM, Mark Brown broonie@kernel.org wrote:
No, the machine driver runs first so it can do any coordination needed between the other devices. Attempting to fiddle about with the ordering is never going to be robust, someone else will always want a different ordering at some point.
I don't see how to fix this without making a machine driver that simply tells the cpu-dai to change the MCLK output from 22.5792 MHz to 24.576 MHz earlier in the hw_params chain. But that's putting a generic piece of code into the machine driver.
That's absolutely fine, the clocking arrangements and sequencing requirements for achieving it are in general going to depend on the machine. The machine may have requirements which mean that it doesn't want to use a clock configuration it's physically possible for it to generate, or it may want to do more extensive reparenting depending on use case.
For things like this it is generally good practice for the machine driver to disable all clocks (set their rates to zero) when disabling if it wants to support reconfiguration.
Shouldn't codec always be the last in the chain? What would be a case where you don't want it at the end of the chain?
How are you deciding what the ordering for "the chain" should be in the first place? It seems like you're just assuming that it's what you want to set for your system, which in turn appears to be derived from what you're using as clock master; it's very common for the CODEC to be the clock master in a system.
One could equally argue here that the master shouldn't allow itself to have the clock rate reprogrammed when it has something deriving a clock from it (after all if that thing is running it might be disrupted).
We probably need something like EPROBE_DEFER, as used in the driver subsystem, to allow the various pieces to sort this out.
On Mon, Aug 11, 2014 at 02:00:22PM -0400, jonsmirl@gmail.com wrote:
On Mon, Aug 11, 2014 at 1:11 PM, Mark Brown broonie@kernel.org wrote:
That's absolutely fine, the clocking arrangements and sequencing requirements for achieving it are in general going to depend on the machine. The machine may have requirements which mean that it doesn't want to use a clock configuration it's physically possible for it to generate, or it may want to do more extensive reparenting depending on use case.
We probably need something like EPROBE_DEFER, as used in the driver subsystem, to allow the various pieces to sort this out.
That really doesn't solve the problem - consider reparenting for example.
On Mon, Aug 11, 2014 at 2:19 PM, Mark Brown broonie@kernel.org wrote:
On Mon, Aug 11, 2014 at 02:00:22PM -0400, jonsmirl@gmail.com wrote:
On Mon, Aug 11, 2014 at 1:11 PM, Mark Brown broonie@kernel.org wrote:
That's absolutely fine, the clocking arrangements and sequencing requirements for achieving it are in general going to depend on the machine. The machine may have requirements which mean that it doesn't want to use a clock configuration it's physically possible for it to generate, or it may want to do more extensive reparenting depending on use case.
We probably need something like EPROBE_DEFER, as used in the driver subsystem, to allow the various pieces to sort this out.
That really doesn't solve the problem - consider reparenting for example.
Maybe it is time to get rid of set_sysclk and use the clk infrastructure? With the clk framework the order is known by walking the clk dependency chain.
In my case both sgtl5000 and Allwinner I2S are using clk, but there is no mechanism for telling ASoC to use this chain to figure out the ordering.
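For illustration, walking that chain with the common clock API is cheap (a hypothetical sketch; clk_get_parent() and clk_get_rate() are real clk API calls, the dump helper itself is invented):

    #include <linux/clk.h>
    #include <linux/printk.h>

    /* walk from a leaf clock up to the root, printing each rate */
    static void dump_clk_chain(struct clk *clk)
    {
        while (clk) {
            pr_info("clk at %lu Hz\n", clk_get_rate(clk));
            clk = clk_get_parent(clk);
        }
    }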
On Mon, Aug 11, 2014 at 02:24:31PM -0400, jonsmirl@gmail.com wrote:
On Mon, Aug 11, 2014 at 2:19 PM, Mark Brown broonie@kernel.org wrote:
That really doesn't solve the problem - consider reparenting for example.
Maybe it is time to get rid of set_sysclk and use the clk infrastructure? With the clk framework the order is known by walking the clk dependency chain.
In my case both sgtl5000 and Allwinner I2S are using clk, but there is no mechanism for telling ASoC to use this chain to figure out the ordering.
The clock API is not available on all architectures and it *still* doesn't solve the parenting problems I've repeatedly mentioned.
On Mon, Aug 11, 2014 at 2:26 PM, Mark Brown broonie@kernel.org wrote:
On Mon, Aug 11, 2014 at 02:24:31PM -0400, jonsmirl@gmail.com wrote:
On Mon, Aug 11, 2014 at 2:19 PM, Mark Brown broonie@kernel.org wrote:
That really doesn't solve the problem - consider reparenting for example.
Maybe it is time to get rid of set_sysclk and use the clk infrastructure? With the clk framework the order is known by walking the clk dependency chain.
In my case both sgtl5000 and Allwinner I2S are using clk, but there is no mechanism for telling ASoC to use this chain to figure out the ordering.
The clock API is not available on all architectures and it *still* doesn't solve the parenting problems I've repeatedly mentioned.
OK, I thought the reparenting support in the clock system was enough. I was thinking that the clock chain could be run on each call, which would dynamically control the ordering. So when you change the parent, the clock chain changes and then the ALSA order would change to follow.
On Mon, Aug 11, 2014 at 1:11 PM, Mark Brown broonie@kernel.org wrote:
On Mon, Aug 11, 2014 at 12:09:43PM -0400, jonsmirl@gmail.com wrote:
On Mon, Aug 11, 2014 at 11:33 AM, Mark Brown broonie@kernel.org wrote:
No, the machine driver runs first so it can do any coordination needed between the other devices. Attempting to fiddle about with the ordering is never going to be robust, someone else will always want a different ordering at some point.
I don't see how to fix this without making a machine driver that simply tells the cpu-dai to change the MCLK output from 22.5792 MHz to 24.576 MHz earlier in the hw_params chain. But that's putting a generic piece of code into the machine driver.
That's absolutely fine, the clocking arrangements and sequencing requirements for achieving it are in general going to depend on the machine. The machine may have requirements which mean that it doesn't want to use a clock configuration it's physically possible for it to generate, or it may want to do more extensive reparenting depending on use case.
For things like this it is generally good practice for the machine driver to disable all clocks (set their rates to zero) when disabling if it wants to support reconfiguration.
Shouldn't codec always be the last in the chain? What would be a case where you don't want it at the end of the chain?
How are you deciding what the ordering for "the chain" should be in the first place? It seems like you're just assuming that it's what you want to set for your system, which in turn appears to be derived from what you're using as clock master; it's very common for the CODEC to be the clock master in a system.
One could equally argue here that the master shouldn't allow itself to have the clock rate reprogrammed when it has something deriving a clock from it (after all if that thing is running it might be disrupted).
Also, I do have to do a codec set_sysclk when I change the rate, to notify the sgtl5000 codec that it has been changed.
The TI codecs are nicer in this regard. They have an external crystal and can then auto-lock onto whatever combo of MCLK/SCLK/LRCLK is being sent to them. My boards with the TI codec aren't ready yet so I'm playing around trying to get this sgtl5000 working. AFAIK Wolfson still doesn't have anything comparable to the TAS5716. We run two 20W channels through it and then an external 40W FET for the sub.
On 08/11/2014 03:35 PM, jonsmirl@gmail.com wrote:
soc_pcm_hw_params() sets the parameters in this order: link, codec, cpu, platform
This is tripping me up when switching between the 44100 and 48000 rate families. If the previous song was 44100 then the codec has sysclk set to the 44100 family.
Now I play a 48000-family song. The codec gets the new hardware params and errors out because sysclk is still in the 44100 family.
I have code in the cpu dai's hw_params() which will switch the sysclk, but it never gets to run because the codec's hw_params() has already errored out.
Shouldn't this order be: platform, link, cpu, codec?
What I don't understand is why does the CPU DAI driver change the sysclk for the CODEC DAI driver? What I'd expect is that the machine driver sets the sysclk for both the CODEC and the CPU DAI. In this case everything should work fine.
- Lars
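The arrangement Lars describes would look something like this in a machine driver's hw_params (a sketch; the clk_id of 0 and the fixed 512*fs policy are illustrative assumptions, snd_soc_dai_set_sysclk() and params_rate() are the real helpers):

    #include <sound/soc.h>
    #include <sound/pcm_params.h>

    static int my_card_hw_params(struct snd_pcm_substream *substream,
                                 struct snd_pcm_hw_params *params)
    {
        struct snd_soc_pcm_runtime *rtd = substream->private_data;
        unsigned int mclk = 512 * params_rate(params); /* 22.5792M or 24.576M */
        int ret;

        /* the machine driver coordinates: CPU generates MCLK, codec consumes it */
        ret = snd_soc_dai_set_sysclk(rtd->cpu_dai, 0, mclk, SND_SOC_CLOCK_OUT);
        if (ret < 0)
            return ret;

        return snd_soc_dai_set_sysclk(rtd->codec_dai, 0, mclk, SND_SOC_CLOCK_IN);
    }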
On Tue, Aug 12, 2014 at 4:42 AM, Lars-Peter Clausen lars@metafoo.de wrote:
On 08/11/2014 03:35 PM, jonsmirl@gmail.com wrote:
soc_pcm_hw_params() sets the parameters in this order: link, codec, cpu, platform
This is tripping me up when switching between the 44100 and 48000 rate families. If the previous song was 44100 then the codec has sysclk set to the 44100 family.
Now I play a 48000-family song. The codec gets the new hardware params and errors out because sysclk is still in the 44100 family.
I have code in the cpu dai's hw_params() which will switch the sysclk, but it never gets to run because the codec's hw_params() has already errored out.
Shouldn't this order be: platform, link, cpu, codec?
What I don't understand is why does the CPU DAI driver change the sysclk for the CODEC DAI driver? What I'd expect is that the machine driver sets the sysclk for both the CODEC and the CPU DAI. In this case everything should work fine.
The machine driver is simple-card. It is not smart enough to change the sysclk between 22.5792 MHz and 24.576 MHz depending on what music is being played.
On Tue, Aug 12, 2014 at 7:45 AM, jonsmirl@gmail.com jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 4:42 AM, Lars-Peter Clausen lars@metafoo.de wrote:
On 08/11/2014 03:35 PM, jonsmirl@gmail.com wrote:
soc_pcm_hw_params() sets the parameters in this order: link, codec, cpu, platform
This is tripping me up when switching between the 44100 and 48000 rate families. If the previous song was 44100 then the codec has sysclk set to the 44100 family.
Now I play a 48000-family song. The codec gets the new hardware params and errors out because sysclk is still in the 44100 family.
I have code in the cpu dai's hw_params() which will switch the sysclk, but it never gets to run because the codec's hw_params() has already errored out.
Shouldn't this order be: platform, link, cpu, codec?
What I don't understand is why does the CPU DAI driver change the sysclk for the CODEC DAI driver? What I'd expect is that the machine driver sets the sysclk for both the CODEC and the CPU DAI. In this case everything should work fine.
The machine driver is simple-card. It is not smart enough to change the sysclk between 22.5792 MHz and 24.576 MHz depending on what music is being played.
Having said that, should we make simple-card smart enough to do that?
-- Jon Smirl jonsmirl@gmail.com
On 08/12/2014 01:46 PM, jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 7:45 AM, jonsmirl@gmail.com jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 4:42 AM, Lars-Peter Clausen lars@metafoo.de wrote:
On 08/11/2014 03:35 PM, jonsmirl@gmail.com wrote:
soc_pcm_hw_params() sets the parameters in this order: link, codec, cpu, platform
This is tripping me up when switching between the 44100 and 48000 rate families. If the previous song was 44100 then the codec has sysclk set to the 44100 family.
Now I play a 48000-family song. The codec gets the new hardware params and errors out because sysclk is still in the 44100 family.
I have code in the cpu dai's hw_params() which will switch the sysclk, but it never gets to run because the codec's hw_params() has already errored out.
Shouldn't this order be: platform, link, cpu, codec?
What I don't understand is why does the CPU DAI driver change the sysclk for the CODEC DAI driver? What I'd expect is that the machine driver sets the sysclk for both the CODEC and the CPU DAI. In this case everything should work fine.
The machine driver is simple-card. It is not smart enough to change the sysclk between 22.5792 MHz and 24.576 MHz depending on what music is being played.
Having said that, should we make simple-card smart enough to do that?
Probably yes. The CPU DAI driver is definitely the wrong place to change the CODEC DAI sysclk.
On Tue, Aug 12, 2014 at 07:46:58AM -0400, jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 7:45 AM, jonsmirl@gmail.com jonsmirl@gmail.com wrote:
The machine driver is simple-card. It is not smart enough to change the sysclk between 22.5792 MHz and 24.576 MHz depending on what music is being played.
Having said that, should we make simple-card smart enough to do that?
Yes, as has been said several times now it's the responsibility of the machine driver to coordinate clocking in the card so if you want to use simple-card that means you need to add the feature to simple-card.
On Tue, Aug 12, 2014 at 7:57 AM, Mark Brown broonie@kernel.org wrote:
On Tue, Aug 12, 2014 at 07:46:58AM -0400, jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 7:45 AM, jonsmirl@gmail.com jonsmirl@gmail.com wrote:
The machine driver is simple-card. It is not smart enough to change the sysclk between 22.5792 MHz and 24.576 MHz depending on what music is being played.
Having said that, should we make simple-card smart enough to do that?
Yes, as has been said several times now it's the responsibility of the machine driver to coordinate clocking in the card so if you want to use
I hear this, but I'm conflicted about putting a generic capability into the machine driver.
simple-card that means you need to add the feature to simple-card.
Some questions then....
1) Are sysclk and mclk the same thing? Inside of simple-card they seem to be but I can possibly see arguments that they aren't.
2) Should sysclk for the cpu-dai and codec-dai be set independently? In simple-card they can be set to conflicting values on the two nodes in the DTS. Should sysclk be a single property for the machine?
3) If sysclk is defined by a phandle to a clock, how should clock accuracy be handled? For example, in my CPU the best clock that can be made is about 0.01% less than what it should be. If you take that frequency number and start doing integer division on it you won't get the expected results because of truncation effects. Dividing by 44100 gives 511.95, which truncates to 511 instead of 512 (see the sketch at the end of this message).
4) If the sysclk is going to change based on the music, should it just flip between 22.5792 MHz and 24.576 MHz, or should it use the FS multiplier in the simple DTS and always be set to FS times the sample rate? For example, with FS=512, 44100 is set to 22.5792 MHz and 8000 is set to 4.096 MHz.
5) What should the syntax for this look like in simple?
6) Longer term, set_sysclk could be replaced by using phandles for the clock and then clk notifiers in the components to listen for it being changed. That would remove this problem of having multiple sysclks in the system and the DTS syntax for identifying them.
So in my case I expose a MCLK clk. sgtl5000 codec references that clock with a phandle. As the cpu-dai looks at the music it changes the rate on MCLK. Codec gets notified of that change (if it is listening) and can act on it.
For my TI codecs the MCLK code in the cpu-dai is still there and changing the MCLK, but the TI codec won't have a reference to it in the DTS. The TI codecs can auto-lock onto whatever they are fed, so they don't need to listen to the MCLK changes. The cpu-dai knows that no one is listening to MCLK and doesn't turn the pin on (saving power).
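To make the truncation in question 3 concrete (hypothetical numbers, assuming the CPU's nearest achievable rate is about 0.01% below the ideal 22579200 Hz):

    #include <linux/kernel.h>   /* DIV_ROUND_CLOSEST */

    static unsigned int fs_from_mclk(unsigned long mclk)
    {
        /* e.g. mclk = 22576942 (~0.01% below the ideal 22579200):    */
        /*   plain truncation: 22576942 / 44100               == 511  */
        /*   nearest rounding: DIV_ROUND_CLOSEST(mclk, 44100) == 512  */
        return DIV_ROUND_CLOSEST(mclk, 44100);
    }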
On Tue, Aug 12, 2014 at 08:45:22AM -0400, jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 7:57 AM, Mark Brown broonie@kernel.org wrote:
Yes, as has been said several times now it's the responsibility of the machine driver to coordinate clocking in the card so if you want to use
I hear this, but I'm conflicted about putting a generic capability into the machine driver.
It's a generic machine driver so having generic code in there seems reasonable.
- Are sysclk and mclk the same thing? Inside of simple-card they seem to be but I can possibly see arguments that they aren't.
Probably in the context of systems simple enough to use simple-card.
- Should sysclk for the cpu-dai and codec-dai be set independently? In simple-card they can be set to conflicting values on the two nodes in the DTS. Should sysclk be a single property for the machine?
No, clocks might be at different rates for different devices. One device might divide down the clock to the other.
- If sysclk is defined by a phandle to a clock, how should clock accuracy be handled? For example, in my CPU the best clock that can be made is about 0.01% less than what it should be. If you take that frequency number and start doing integer division on it you won't get the expected results because of truncation effects. Dividing by 44100 gives 511.95, which truncates to 511 instead of 512.
I'm not sure it's worth worrying about systems that can't generate accurate clocks at this point, but perhaps someone will think of something sensible to do.
- If the sysclk is going to change based on the music, should it just flip between 22.5792 MHz and 24.576 MHz, or should it use the FS multiplier in the simple DTS and always be set to FS times the sample rate? For example, with FS=512, 44100 is set to 22.5792 MHz and 8000 is set to 4.096 MHz.
If a fixed multiplier is set I'd expect to see it observed; obviously a device may be constrained in what it can actually use though and a fixed multiplier might not be the best configuration.
- What should the syntax for this look like in simple?
It really depends what the "this" is; try to think of something tasteful.
- Longer term, set_sysclk could be replaced by using phandles for the clock and then clk notifiers in the components to listen for it being changed. That would remove this problem of having multiple sysclks in the system and the DTS syntax for identifying them.
So in my case I expose a MCLK clk. sgtl5000 codec references that clock with a phandle. As the cpu-dai looks at the music it changes the rate on MCLK. Codec gets notified of that change (if it is listening) and can act on it.
OK, but that's not meaningfully different to just having the machine driver call set_sysclk() and only works if the best way to change the clock rate is to change the clocking inside the CPU which isn't going to be the case for all systems. That's the hard part, we have to be able to scale up to handling it.
On Tue, Aug 12, 2014 at 2:20 PM, Mark Brown broonie@kernel.org wrote:
On Tue, Aug 12, 2014 at 08:45:22AM -0400, jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 7:57 AM, Mark Brown broonie@kernel.org wrote:
Yes, as has been said several times now it's the responsibility of the machine driver to coordinate clocking in the card so if you want to use
I hear this, but I'm conflicted about putting a generic capability into the machine driver.
It's a generic machine driver so having generic code in there seems reasonable.
- Are sysclk and mclk the same thing? Inside of simple-card they seem to be but I can possibly see arguments that they aren't.
Probably in the context of systems simple enough to use simple-card.
- Should sysclk for the cpu-dai and codec-dai be set independently? In simple-card they can be set to conflicting values on the two nodes in the DTS. Should sysclk be a single property for the machine?
No, clocks might be at different rates for different devices. One device might divide down the clock to the other.
What do you think about adding fields for min/max allowed sysclk to snd_soc_dai_driver? In my case the SOC can run the sysclk at 100 MHz, but the attached codec can only handle 27 MHz.
Then in simple-card when it is computing the 512fs clock it would use the min/max sysclk numbers to clip its range. Alternatively, I could add some range-checking code to the set_sysclk implementation to error out on out-of-range rates, then do some searching around until no one errors.
sgtl5000 can do 96 kHz, but only at 256fs. I'm pumping 49 MHz into it right now and it works, but that clearly violates the datasheet.
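A sketch of that range check in a codec driver's set_sysclk callback (the 4 MHz floor and 27 MHz ceiling are the figures from this thread; all the my_codec_* names are hypothetical):

    #include <sound/soc.h>

    #define MY_SYSCLK_MIN  4000000   /* assumed floor, per the discussion */
    #define MY_SYSCLK_MAX 27000000   /* codec datasheet ceiling           */

    static int my_codec_set_dai_sysclk(struct snd_soc_dai *dai, int clk_id,
                                       unsigned int freq, int dir)
    {
        /* reject rates the part cannot actually lock to */
        if (freq < MY_SYSCLK_MIN || freq > MY_SYSCLK_MAX)
            return -EINVAL;

        /* ...store freq for later use in hw_params... */
        return 0;
    }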
- If sysclk is defined by a phandle to a clock, how should clock accuracy be handled? For example, in my CPU the best clock that can be made is about 0.01% less than what it should be. If you take that frequency number and start doing integer division on it you won't get the expected results because of truncation effects. Dividing by 44100 gives 511.95, which truncates to 511 instead of 512.
I'm not sure it's worth worrying about systems that can't generate accurate clocks at this point, but perhaps someone will think of something sensible to do.
- If the sysclk is going to change based on the music, should it just flip between 22.5792 MHz and 24.576 MHz, or should it use the FS multiplier in the simple DTS and always be set to FS times the sample rate? For example, with FS=512, 44100 is set to 22.5792 MHz and 8000 is set to 4.096 MHz.
If a fixed multiplier is set I'd expect to see it observed; obviously a device may be constrained in what it can actually use though and a fixed multiplier might not be the best configuration.
- What should the syntax for this look like in simple?
It really depends what the "this" is; try to think of something tasteful.
- Longer term, set_sysclk could be replaced by using phandles for the clock and then clk notifiers in the components to listen for it being changed. That would remove this problem of having multiple sysclks in the system and the DTS syntax for identifying them.
So in my case I expose a MCLK clk. sgtl5000 codec references that clock with a phandle. As the cpu-dai looks at the music it changes the rate on MCLK. Codec gets notified of that change (if it is listening) and can act on it.
OK, but that's not meaningfully different to just having the machine driver call set_sysclk() and only works if the best way to change the clock rate is to change the clocking inside the CPU which isn't going to be the case for all systems. That's the hard part, we have to be able to scale up to handling it.
On Wed, Aug 13, 2014 at 08:25:22AM -0400, jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 2:20 PM, Mark Brown broonie@kernel.org wrote:
- Should sysclk for the cpu-dai and codec-dai be set independently? In simple-card they can be set to conflicting values on the two nodes in the DTS. Should sysclk be a single property for the machine?
No, clocks might be at different rates for different devices. One device might divide down the clock to the other.
What do you think about adding fields for min/max allowed sysclk to snd_soc_dai_driver? In my case the SOC can run the sysclk at 100 MHz, but the attached codec can only handle 27 MHz.
If we're going to do constraints they should be done properly, so they need to be able to represent specific numbers too. It's probably a clock API problem; independently implementing it seems redundant. Doing simple things in simple-card for the common cases makes sense while the clock API isn't something we can rely on, but equally we don't want to be doing huge amounts, and of course simple-card is just a subset of what people are doing.
On Wed, Aug 13, 2014 at 12:35 PM, Mark Brown broonie@kernel.org wrote:
On Wed, Aug 13, 2014 at 08:25:22AM -0400, jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 2:20 PM, Mark Brown broonie@kernel.org wrote:
- Should sysclk for the cpu-dai and codec-dai be set independently? In simple-card they can be set to conflicting values on the two nodes in the DTS. Should sysclk be a single property for the machine?
No, clocks might be at different rates for different devices. One device might divide down the clock to the other.
What do you think about adding fields for min/max allowed sysclk to snd_soc_dai_driver? In my case the SOC can run the sysclk at 100 MHz, but the attached codec can only handle 27 MHz.
If we're going to do constraints they should be done properly, so they need to be able to represent specific numbers too. It's probably a clock API problem; independently implementing it seems redundant. Doing simple things in simple-card for the common cases makes sense while the clock API isn't something we can rely on, but equally we don't want to be doing huge amounts, and of course simple-card is just a subset of what people are doing.
Right now I could make the set_sysclk implementations return -EINVAL if the clock is out of range. Then add some logic to simple-card to try again with a different FS multiplier. Or at least I can print some error message to give a clue as to why the song won't play.
What ASoC platforms haven't implemented the clock API? Using the clock API, simple-card would have a phandle to the clock master. Then it could set the proposed rate into it. Codec and CPU would have clk_notifier implemented so that they could reject the proposed change based on their constraints. Simple could then adjust the FS multiple until both devices are happy.
    /* from the clock API (include/linux/clk.h) */
    struct clk_notifier_data {
        struct clk *clk;
        unsigned long old_rate;
        unsigned long new_rate;
    };
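The notifier hookup being alluded to would look something like this on the codec side (a sketch using the real clk notifier API; the 27 MHz limit and the my_codec_* names are assumptions):

    #include <linux/clk.h>
    #include <linux/notifier.h>

    static int my_codec_clk_notify(struct notifier_block *nb,
                                   unsigned long event, void *data)
    {
        struct clk_notifier_data *cnd = data;

        /* veto proposed MCLK rates the codec cannot handle */
        if (event == PRE_RATE_CHANGE && cnd->new_rate > 27000000)
            return notifier_from_errno(-EINVAL);

        return NOTIFY_OK;
    }

    static struct notifier_block my_codec_clk_nb = {
        .notifier_call = my_codec_clk_notify,
    };

    /* at probe, after mclk = devm_clk_get(dev, "mclk"): */
    static int my_codec_register_mclk_notifier(struct clk *mclk)
    {
        return clk_notifier_register(mclk, &my_codec_clk_nb);
    }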
On Wed, Aug 13, 2014 at 01:00:02PM -0400, jonsmirl@gmail.com wrote:
Right now I could make the set_sysclk implementations return -EINVAL if the clock is out of range. Then add some logic to simple-card to try again with a different FS multipler. Or at least I can print some error message to give a clue as to why the song won't play.
Well, error checking would be good.
What ASoC platforms haven't implemented the clock API? Using the clock
There are still some ARM platforms that don't implement the common clock API yet (or make it optional); Blackfin, MIPS, some PowerPC, SH and x86 don't enable it for most platforms, and when I've tried sending patches the maintainers didn't bother replying. Life would be a lot easier if the common clock API were available by default, but the infrastructure for asm-generic leaves something to be desired too.
API, simple-card would have a phandle to the clock master. Then it could set the proposed rate into it. Codec and CPU would have clk_notifier implemented so that they could reject the proposed change based on their constraints. Simple could then adjust the FS multiple until both devices are happy.
Even for fairly simple cards the solution is often more complex than this - the master clock might not be directly connected to one or both of the devices and there may be programmability for the dividers between them. It therefore seems more likely to be useful to set the rate we actually want rather than some parent rate; that at least gives the best information to the code trying to do the resolution.
On Wed, Aug 13, 2014 at 1:24 PM, Mark Brown broonie@kernel.org wrote:
On Wed, Aug 13, 2014 at 01:00:02PM -0400, jonsmirl@gmail.com wrote:
Right now I could make the set_sysclk implementations return -EINVAL if the clock is out of range. Then add some logic to simple-card to try again with a different FS multiplier. Or at least I can print some error message to give a clue as to why the song won't play.
Well, error checking would be good.
What ASoC platforms haven't implemented the clock API? Using the clock
There are still some ARM platforms that don't implement the common clock API yet (or make it optional); Blackfin, MIPS, some PowerPC, SH and x86 don't enable it for most platforms, and when I've tried sending patches the maintainers didn't bother replying. Life would be a lot easier if the common clock API were available by default, but the infrastructure for asm-generic leaves something to be desired too.
API, simple-card would have a phandle to the clock master. Then it could set the proposed rate into it. Codec and CPU would have clk_notifier implemented so that they could reject the proposed change based on their constraints. Simple could then adjust the FS multiple until both devices are happy.
Even for fairly simple cards the solution is often more complex than this - the master clock might not be directly connected to one or both of the devices and there may be programmability for the dividers between them. It therefore seems more likely to be useful to set the rate we actually want rather than some parent rate; that at least gives the best information to the code trying to do the resolution.
Simple already supports the case of having the clocks connected to two different places.
I'm trying to get the case where there is one clock using a semi-fixed FS rate, but that FS rate is subject to the high/low constraints of the cpu/codec.
In simple I can set "simple-audio-card,mclk-fs = <512>" but depending on what music is being played it may violate the clock constraints on the hardware. In my case the CPU happily generated 49 MHz and fed it into a chip only capable of 27 MHz. But this codec needs the 512 fs for low frame rate music. I think the floor on the clock is 4 MHz.
Another solution would be something like "simple-audio-card,sysclk = <22579200 24576000>". That would make both of these chips happy.
On 08/13/2014 07:00 PM, jonsmirl@gmail.com wrote:
On Wed, Aug 13, 2014 at 12:35 PM, Mark Brown broonie@kernel.org wrote:
On Wed, Aug 13, 2014 at 08:25:22AM -0400, jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 2:20 PM, Mark Brown broonie@kernel.org wrote:
- Should sysclk for the cpu-dai and codec-dai be set independently? In simple-card they can be set to conflicting values on the two nodes in the DTS. Should sysclk be a single property for the machine?
No, clocks might be at different rates for different devices. One device might divide down the clock to the other.
What do you think about adding fields for min/max allowed sysclk to snd_soc_dai_driver? In my case the SOC can run the sysclk at 100 MHz, but the attached codec can only handle 27 MHz.
If we're going to do constraints they should be done properly, so they need to be able to represent specific numbers too. It's probably a clock API problem; independently implementing it seems redundant. Doing simple things in simple-card for the common cases makes sense while the clock API isn't something we can rely on, but equally we don't want to be doing huge amounts, and of course simple-card is just a subset of what people are doing.
Right now I could make the set_sysclk implementations return -EINVAL if the clock is out of range. Then add some logic to simple-card to try again with a different FS multiplier. Or at least I can print some error message to give a clue as to why the song won't play.
The driver should definitely reject sysclk rates it cannot support. I think this idea is what will give you the best results, at least short term. It is quite simple to implement yet effective at solving the problem, and it does not add new DT ABI that we need to support forever. It could later be replaced with a more sophisticated fs rate enumeration scheme.
- Lars
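A minimal version of that retry loop in a simple-card-style hw_params might read as follows (a sketch; the FS candidate table and the clk_id of 0 are assumptions, snd_soc_dai_set_sysclk() is the real helper):

    #include <sound/soc.h>

    static const unsigned int fs_candidates[] = { 512, 384, 256, 128 };

    static int try_set_mclk(struct snd_soc_pcm_runtime *rtd, unsigned int rate)
    {
        int i, ret = -EINVAL;

        /* walk the FS multipliers until both DAIs accept the resulting MCLK */
        for (i = 0; i < ARRAY_SIZE(fs_candidates); i++) {
            unsigned int mclk = fs_candidates[i] * rate;

            ret = snd_soc_dai_set_sysclk(rtd->codec_dai, 0, mclk,
                                         SND_SOC_CLOCK_IN);
            if (ret < 0)
                continue;       /* codec rejected this rate, try the next FS */

            ret = snd_soc_dai_set_sysclk(rtd->cpu_dai, 0, mclk,
                                         SND_SOC_CLOCK_OUT);
            if (ret == 0)
                return 0;
        }

        dev_err(rtd->dev, "no usable FS multiplier for %u Hz\n", rate);
        return ret;
    }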