[alsa-devel] [PATCH] ASoC: rt5659: Add mclk controls
The codec driver should control the mclk, so this patch adds that support.
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
---
 Documentation/devicetree/bindings/sound/rt5659.txt |  3 +++
 sound/soc/codecs/rt5659.c                          | 24 ++++++++++++++++++++++
 sound/soc/codecs/rt5659.h                          |  1 +
 3 files changed, 28 insertions(+)

diff --git a/Documentation/devicetree/bindings/sound/rt5659.txt b/Documentation/devicetree/bindings/sound/rt5659.txt
index 5f79e7f..1766e05 100644
--- a/Documentation/devicetree/bindings/sound/rt5659.txt
+++ b/Documentation/devicetree/bindings/sound/rt5659.txt
@@ -12,6 +12,9 @@ Required properties:
 
 Optional properties:
 
+- clocks: The phandle of the master clock to the CODEC
+- clock-names: Should be "mclk"
+
 - realtek,in1-differential
 - realtek,in3-differential
 - realtek,in4-differential
diff --git a/sound/soc/codecs/rt5659.c b/sound/soc/codecs/rt5659.c
index 1b30914..d9ead96 100644
--- a/sound/soc/codecs/rt5659.c
+++ b/sound/soc/codecs/rt5659.c
@@ -9,6 +9,7 @@
  * published by the Free Software Foundation.
  */
 
+#include <linux/clk.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>
 #include <linux/init.h>
@@ -3565,7 +3566,9 @@ static int rt5659_set_bclk_ratio(struct snd_soc_dai *dai, unsigned int ratio)
 static int rt5659_set_bias_level(struct snd_soc_codec *codec,
 			enum snd_soc_bias_level level)
 {
+	struct snd_soc_dapm_context *dapm = snd_soc_codec_get_dapm(codec);
 	struct rt5659_priv *rt5659 = snd_soc_codec_get_drvdata(codec);
+	int ret;
 
 	switch (level) {
 	case SND_SOC_BIAS_PREPARE:
@@ -3582,6 +3585,17 @@ static int rt5659_set_bias_level(struct snd_soc_codec *codec,
 			RT5659_PWR_FV1 | RT5659_PWR_FV2);
 		break;
 
+	case SND_SOC_BIAS_STANDBY:
+		if (dapm->bias_level == SND_SOC_BIAS_OFF) {
+			ret = clk_prepare_enable(rt5659->mclk);
+			if (ret) {
+				dev_err(codec->dev,
+					"failed to enable MCLK: %d\n", ret);
+				return ret;
+			}
+		}
+		break;
+
 	case SND_SOC_BIAS_OFF:
 		regmap_update_bits(rt5659->regmap, RT5659_PWR_DIG_1,
 			RT5659_PWR_LDO, 0);
@@ -3591,6 +3605,7 @@ static int rt5659_set_bias_level(struct snd_soc_codec *codec,
 			RT5659_PWR_MB | RT5659_PWR_VREF2);
 		regmap_update_bits(rt5659->regmap, RT5659_DIG_MISC,
 			RT5659_DIG_GATE_CTRL, 0);
+		clk_disable_unprepare(rt5659->mclk);
 		break;
 
 	default:
@@ -4020,6 +4035,15 @@ static int rt5659_i2c_probe(struct i2c_client *i2c,
 
 	regmap_write(rt5659->regmap, RT5659_RESET, 0);
 
+	/* Check if MCLK provided */
+	rt5659->mclk = devm_clk_get(&i2c->dev, "mclk");
+	if (IS_ERR(rt5659->mclk)) {
+		if (PTR_ERR(rt5659->mclk) == -EPROBE_DEFER)
+			return -EPROBE_DEFER;
+		/* Otherwise mark the mclk pointer to NULL */
+		rt5659->mclk = NULL;
+	}
+
 	rt5659_calibrate(rt5659);
 
 	/* line in diff mode*/
diff --git a/sound/soc/codecs/rt5659.h b/sound/soc/codecs/rt5659.h
index d31c9e5..d69b0eb 100644
--- a/sound/soc/codecs/rt5659.h
+++ b/sound/soc/codecs/rt5659.h
@@ -1796,6 +1796,7 @@ struct rt5659_priv {
 	struct gpio_desc *gpiod_reset;
 	struct snd_soc_jack *hs_jack;
 	struct delayed_work jack_detect_work;
+	struct clk *mclk;
 
 	int sysclk;
 	int sysclk_src;
On Wed, Jul 27, 2016 at 04:02:48PM -0700, Nicolin Chen wrote:
The codec driver should control the mclk, so this patch adds that support.
	/* Check if MCLK provided */
	rt5659->mclk = devm_clk_get(&i2c->dev, "mclk");
	if (IS_ERR(rt5659->mclk)) {
		if (PTR_ERR(rt5659->mclk) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		/* Otherwise mark the mclk pointer to NULL */
		rt5659->mclk = NULL;
	}
This device seems to be used on x86 systems so we'll need to ensure that they register clocks for this. They really should set this up using quirks keyed off DMI information or similar so it's hidden from other systems.
On Thu, Jul 28, 2016 at 04:57:32PM +0100, Mark Brown wrote:
The codec driver should control the mclk, so this patch adds that support.
	/* Check if MCLK provided */
	rt5659->mclk = devm_clk_get(&i2c->dev, "mclk");
	if (IS_ERR(rt5659->mclk)) {
		if (PTR_ERR(rt5659->mclk) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		/* Otherwise mark the mclk pointer to NULL */
		rt5659->mclk = NULL;
	}
This device seems to be used on x86 systems so we'll need to ensure that they register clocks for this. They really should set this up using quirks keyed off DMI information or similar so it's hidden from other systems.
Hmm.. the change defines this mclk as an optional property, so I'm not sure how it would affect x86 systems. (I would love to refine it to make it impactless.) Since it's literally hidden, is there any way that I can manually ensure it? Or should I just wait for an ack from the x86 developers?
Thanks Nicolin
On Thu, Jul 28, 2016 at 11:14:20AM -0700, Nicolin Chen wrote:
On Thu, Jul 28, 2016 at 04:57:32PM +0100, Mark Brown wrote:
This device seems to be used on x86 systems so we'll need to ensure that they register clocks for this. They really should set this up using quirks keyed off DMI information or similar so it's hidden from other systems.
Hmm.. the change defines this mclk as an optional property, so I'm not sure how it would affect x86 systems. (I would love to refine
Ah, clk_prepare_enable() silently ignores NULL as an argument? It is a bit messy to do things that way but might be the most practical thing.
it to make it impactless.) Since it's literally hidden, is there any way that I can manually ensure it? Or should I just wait for an ack from the x86 developers?
Hopefully they'll speak up - we're in the merge window anyway so it'll be just over a week before this can go into -next.
On Thu, Jul 28, 2016 at 07:55:10PM +0100, Mark Brown wrote:
This device seems to be used on x86 systems so we'll need to ensure that they register clocks for this. They really should set this up using quirks keyed off DMI information or similar so it's hidden from other systems.
Hmm.. the change defines this mclk as an optional property, so I'm not sure how it would affect x86 systems. (I would love to refine
Ah, clk_prepare_enable() silently ignores NULL as an argument? It is a bit messy to do things that way but might be the most practical thing.
At least the one in drivers/clk/clk.c does:
int clk_prepare(struct clk *clk)
{
	int ret;

	if (!clk)
		return 0;
...
int clk_enable(struct clk *clk)
{
	unsigned long flags;
	int ret;

	if (!clk)
		return 0;
Hopefully they'll speak up - we're in the merge window anyway so it'll be just over a week before this can go into -next.
I see. Let's wait then.
Thanks Nicolin
On Thu, Jul 28, 2016 at 07:55:10PM +0100, Mark Brown wrote:
On Thu, Jul 28, 2016 at 11:14:20AM -0700, Nicolin Chen wrote:
On Thu, Jul 28, 2016 at 04:57:32PM +0100, Mark Brown wrote:
This device seems to be used on x86 systems so we'll need to ensure that they register clocks for this. They really should set this up using quirks keyed off DMI information or similar so it's hidden from other systems.
Hmm.. the change defines this mclk as an optional property, so I'm not sure how it would affect x86 systems. (I would love to refine
Ah, clk_prepare_enable() silently ignores NULL as an argument? It is a bit messy to do things that way but might be the most practical thing.
it to make it impactless.) Since it's literally hidden, is there any way that I can manually ensure it? Or should I just wait for an ack from the x86 developers?
Hopefully they'll speak up - we're in the merge window anyway so it'll be just over a week before this can go into -next.
Yeah, I am not aware of any plan to have clks on x86. For audio we are not going to use them much; ACPI and the controller with firmware do the job.
I have added Darren; he oversees platform things, so he might know if clocks will be there in the future.
Thanks
On Fri, Jul 29, 2016 at 09:45:21PM +0530, Vinod Koul wrote:
Yeah, I am not aware of any plan to have clks on x86. For audio we are not going to use them much; ACPI and the controller with firmware do the job.
I have added Darren; he oversees platform things, so he might know if clocks will be there in the future.
Not having controllable clocks is fine but as we've discussed before you're going to need to represent the clocks that are there with fixed clocks instantiated through whatever you use to enumerate boards. We don't want to have to special case x86 all the time in CODEC drivers.
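As a rough illustration of that idea, a board-enumeration or quirk path could register the always-on mclk as a fixed-rate clock. The sketch below uses made-up names and an assumed 19.2 MHz rate; it is not taken from any real board file.

/*
 * Sketch only: a quirk/board path on a clock-less platform registering an
 * always-on mclk as a fixed-rate clock, so CODEC drivers can keep using
 * the clk API unchanged. The device name, clock name and rate are all
 * illustrative assumptions.
 */
#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/clkdev.h>
#include <linux/err.h>

static int example_register_board_mclk(void)
{
	struct clk *clk;

	/* Assumed 19.2 MHz always-running clock feeding the CODEC */
	clk = clk_register_fixed_rate(NULL, "board-mclk", NULL, 0, 19200000);
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	/* Let the CODEC find it via devm_clk_get(dev, "mclk") */
	return clk_register_clkdev(clk, "mclk", "i2c-10EC5659:00");
}

With something like this in place the CODEC driver itself needs no x86-specific code at all.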
On Fri, Jul 29, 2016 at 05:39:33PM +0100, Mark Brown wrote:
On Fri, Jul 29, 2016 at 09:45:21PM +0530, Vinod Koul wrote:
Yeah, I am not aware of any plan to have clks on x86. For audio we are not going to use them much; ACPI and the controller with firmware do the job.
I have added Darren; he oversees platform things, so he might know if clocks will be there in the future.
Not having controllable clocks is fine but as we've discussed before you're going to need to represent the clocks that are there with fixed clocks instantiated through whatever you use to enumerate boards. We don't want to have to special case x86 all the time in CODEC drivers.
Right, I don't disagree on that. But we are not there yet!
Pierre managed to get that working on BYT; on CHT, no such luck. Further down, the DSP firmware is managing it and it's not even exposed to software :(
But yes, I can try to get the controller registered as a clock provider and set things up that way. Let me see what can be done here. We can discuss this at the uConf; I will add that as a topic :)
Thanks
On 7/29/16 11:39 AM, Mark Brown wrote:
On Fri, Jul 29, 2016 at 09:45:21PM +0530, Vinod Koul wrote:
Yeah, I am not aware of any plan to have clks on x86. For audio we are not going to use them much; ACPI and the controller with firmware do the job.
I have added Darren; he oversees platform things, so he might know if clocks will be there in the future.
Not having controllable clocks is fine but as we've discussed before you're going to need to represent the clocks that are there with fixed clocks instantiated through whatever you use to enumerate boards. We don't want to have to special case x86 all the time in CODEC drivers.
Without going into a debate on x86 v. the clock API or the merits of a patch that has already been applied, I am pretty confused on who's supposed to manage the mclk between the machine and codec driver.
If you go back to this specific patch for the rt5659 audio codec, the simplified code reads as:
static int rt5659_set_bias_level()
[snip]
	case SND_SOC_BIAS_STANDBY:
		if (dapm->bias_level == SND_SOC_BIAS_OFF) {
			ret = clk_prepare_enable(rt5659->mclk);
So on a DAPM transition the clock is enabled. Fine. What's not clear here is that the codec driver doesn't know what rates are supported by the SOC/chipset. The machine driver is typically the one doing calls such as
ret = snd_soc_dai_set_pll(codec_dai, 0, RT5640_PLL1_S_MCLK, 19200000, params_rate(params) * 512);
It's as if this patch assumes the mclk has a single rate? If this were a generic solution, then the codec driver would need to set the rates as well as its internal PLL/divider settings.
In addition, the machine driver can implement a platform clock control with things like
static const struct snd_soc_dapm_route byt_rt5640_audio_map[] = {
	{"Headphone", NULL, "Platform Clock"},
	{"Headset Mic", NULL, "Platform Clock"},
	{"Internal Mic", NULL, "Platform Clock"},
	{"Speaker", NULL, "Platform Clock"},
static int platform_clock_control(struct snd_soc_dapm_widget *w,
				  struct snd_kcontrol *k, int event)

	if (SND_SOC_DAPM_EVENT_ON(event)) {
		ret = clk_prepare_enable(priv->mclk);
		ret = snd_soc_dai_set_sysclk(codec_dai, RT5640_SCLK_S_PLL1,
					     48000 * 512, SND_SOC_CLOCK_IN);
	} else {
		/*
		 * Set codec clock source to internal clock before
		 * turning off the platform clock. Codec needs clock
		 * for Jack detection and button press
		 */
		ret = snd_soc_dai_set_sysclk(codec_dai, RT5640_SCLK_S_RCCLK,
					     0, SND_SOC_CLOCK_IN);
		clk_disable_unprepare(priv->mclk);
So the summary is that we have two ways of doing the same thing (turning the mclk on when it's needed), and I wonder if doing this in the codec is really the right solution? Again, this is not a question about the merits of the clk API/framework but whether we can have a single point of control instead of two pieces of code doing the same thing in two drivers. If I am missing something, I am all ears.
On Wed, Aug 10, 2016 at 08:57:28AM -0500, Pierre-Louis Bossart wrote:
Without going into a debate on x86 v. the clock API or the merits of a patch that has already been applied, I am pretty confused on who's supposed to manage the mclk between the machine and codec driver.
So on a DAPM transition the clock is enabled. Fine. What's not clear here is that the codec driver doesn't know what rates are supported by the SOC/chipset. The machine driver is typically the one doing calls such as
This should really be propagated through the clock tree by the clock API rather than open coded; for a lot of things it'll just boil down to a clk_set_rate() at the edge of the clock tree. Any constraints should also be applied through the clock API, though in a lot of cases the devices are simple enough that it should be a fairly mechanical process.
So the summary is that we have two ways of doing the same thing (turning the mclk on when it's needed), and I wonder if doing this in the codec is really the right solution? Again, this is not a question about the merits of the clk API/framework but whether we can have a single point of control instead of two pieces of code doing the same thing in two drivers. If I am missing something, I am all ears.
We've got two ways of doing this at the minute partly because historically things have been open coded in the machine drivers due to the lack of a clock API. Now that we have one we can use, we should be using it consistently to set rates. Where the machine driver needs to do things dynamically it really ought to be able to express the constraints it's trying to set through the clock API; if we can't do the things we need, we should improve the clock API. This will mean that we don't have to reinvent the wheel when we're doing things with clocks, we have consistent interfaces to all parts of the clock tree, and other bits of the system will get reuse from anything we've learned about implementation.
The CODEC clearly has *some* idea of what's going on here, and especially for simpler CODECs the code to drive the clocking should be fairly easy to generalize as there are few options. From a clock API point of view the CODEC really ought to be the one requesting the clocks that go into it, though there's nothing that says it has to only use its own information to do that.
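To make that concrete, here is a minimal sketch (hypothetical machine-driver names, and an assumed 512*fs mclk ratio) of a hw_params callback that lets the clock API carry the rate instead of open-coding the clock handling:

/*
 * Sketch only, hypothetical names: a machine-driver hw_params that pushes
 * the requested rate through the clock API and then tells the CODEC DAI
 * what it is being fed.
 */
#include <linux/clk.h>
#include <sound/pcm_params.h>
#include <sound/soc.h>

struct example_card_priv {
	struct clk *mclk;	/* obtained with devm_clk_get() at probe time */
};

static int example_hw_params(struct snd_pcm_substream *substream,
			     struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	struct example_card_priv *priv = snd_soc_card_get_drvdata(rtd->card);
	unsigned long mclk_rate = params_rate(params) * 512; /* assumed ratio */
	int ret;

	/* Let the clock framework propagate the request up the clock tree */
	ret = clk_set_rate(priv->mclk, mclk_rate);
	if (ret)
		return ret;

	/* Clock id 0 is a placeholder for the CODEC's MCLK input */
	return snd_soc_dai_set_sysclk(rtd->codec_dai, 0, mclk_rate,
				      SND_SOC_CLOCK_IN);
}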
On 8/10/16 12:06 PM, Mark Brown wrote:
On Wed, Aug 10, 2016 at 08:57:28AM -0500, Pierre-Louis Bossart wrote:
Without going into a debate on x86 v. the clock API or the merits of a patch that has already been applied, I am pretty confused on who's supposed to manage the mclk between the machine and codec driver.
So on a DAPM transition the clock is enabled. Fine. What's not clear here is that the codec driver doesn't know what rates are supported by the SOC/chipset. The machine driver is typically the one doing calls such as
This should really be propagated through the clock tree by the clock API rather than open coded; for a lot of things it'll just boil down to a clk_set_rate() at the edge of the clock tree. Any constraints should also be applied through the clock API, though in a lot of cases the devices are simple enough that it should be a fairly mechanical process.
So the summary is that we have two ways of doing the same thing (turning the mclk on when it's needed), and I wonder if doing this in the codec is really the right solution? Again, this is not a question about the merits of the clk API/framework but whether we can have a single point of control instead of two pieces of code doing the same thing in two drivers. If I am missing something, I am all ears.
We've got two ways of doing this at the minute partly because historically things have been open coded in the machine drivers due to the lack of a clock API. Now that we have one we can use, we should be using it consistently to set rates. Where the machine driver needs to do things dynamically it really ought to be able to express the constraints it's trying to set through the clock API; if we can't do the things we need, we should improve the clock API. This will mean that we don't have to reinvent the wheel when we're doing things with clocks, we have consistent interfaces to all parts of the clock tree, and other bits of the system will get reuse from anything we've learned about implementation.
If we want to be consistent, then we need to have a framework that handles both the SOC clock sources and the codec internal clock tree (including dividers and switches). I wonder if what you are hinting at is the codec driver modeling its internal PLL/clock tree with the clock API? If we have the clock API requesting the mclk only, and the rest of the codec configuration is done by the machine driver, there is no real progress I can see.
The CODEC clearly has *some* idea of what's going on here, and especially for simpler CODECs the code to drive the clocking should be fairly easy to generalize as there are few options. From a clock API point of view the CODEC really ought to be the one requesting the clocks that go into it, though there's nothing that says it has to only use its own information to do that.
I don't get the last part: how would the codec use information it doesn't own or have access to? At any rate, I am only trying to define the problem statement, probably something to talk about at the Audio Miniconference.
On Wed, Aug 10, 2016 at 12:31:28PM -0500, Pierre-Louis Bossart wrote:
If we want to be consistent, then we need to have a framework that handles both the SOC clock sources and the codec internal clock tree (including dividers and switches). I wonder if what you are hinting at is the codec driver modeling its internal PLL/clock tree with the clock API?
I'm not just hinting at that, I've openly stated it quite a few times now! :P For the simpler CODECs it's kind of marginal if you need to bother but for anything more complex (even things with PLLs) it seems like the way forwards.
If we have the clock API requesting the mclk only, and the rest of the codec configuration is done by the machine driver, there is no real progress I can see.
It's still useful in that we can consistently manage the clocks external to the CODEC that way, especially when looking at systems where it is useful to dynamically start and stop the clocks.
The CODEC clearly has *some* idea of what's going on here, and especially for simpler CODECs the code to drive the clocking should be fairly easy to generalize as there's few options. From a clock API point of view the CODEC really ought to be the one requesting the clocks that go into it, though there's nothing that says it has to only use its own information to do that.
I don't get the last part: how would the codec use information it doesn't own or have access to?
I'd expect that a clock consumer should (like a regulator consumer can) be able to figure out what constraints there are on the rates it has set. Those could be fixed restrictions but it'd be good to also have ways for other actors in the system to change things at runtime.
At any rate, I am only trying to define the problem statement, probably something to talk about at the Audio Miniconference.
Yup. I really should chase to see if that actually got accepted or not...
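As a small sketch of that last point (names and rates are illustrative only), a consumer can already ask the framework what it would actually get before committing to a rate:

/*
 * Sketch: probing the constraints the clock tree imposes by asking what
 * rate would actually be delivered before setting it.
 */
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/errno.h>

static int example_pick_mclk_rate(struct device *dev, struct clk *mclk)
{
	unsigned long wanted = 48000 * 256;	/* example: 256 * fs */
	long rounded;

	rounded = clk_round_rate(mclk, wanted);
	if (rounded <= 0)
		return rounded ? (int)rounded : -EINVAL;

	if ((unsigned long)rounded != wanted)
		dev_info(dev, "mclk limited to %ld Hz\n", rounded);

	return clk_set_rate(mclk, rounded);
}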
On 8/10/16 12:52 PM, Mark Brown wrote:
On Wed, Aug 10, 2016 at 12:31:28PM -0500, Pierre-Louis Bossart wrote:
If we want to be consistent, then we need to have a framework that handles both the SOC clock sources and the codec internal clock tree (including dividers and switches). I wonder if what you are hinting at is the codec driver modeling its internal PLL/clock tree with the clock API?
I'm not just hinting at that, I've openly stated it quite a few times now! :P For the simpler CODECs it's kind of marginal if you need to bother but for anything more complex (even things with PLLs) it seems like the way forwards.
Interesting, thanks for the clarification. I must admit I missed this concept completely, and I haven't seen any codec vendors working in this direction so far. Ironically, the x86 part may be the most straightforward...
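In rough outline, such modeling might start from something like the sketch below; the names are made up, and the pass-through recalc_rate stands in for real divider/PLL handling:

/*
 * Sketch only: exposing a CODEC-internal clock output through the common
 * clock framework so the rest of the system sees it like any other clock.
 */
#include <linux/clk-provider.h>
#include <linux/err.h>

struct example_codec_clk {
	struct clk_hw hw;
};

static unsigned long example_recalc_rate(struct clk_hw *hw,
					 unsigned long parent_rate)
{
	/* A real driver would read its divider/PLL registers here */
	return parent_rate;
}

static const struct clk_ops example_codec_clk_ops = {
	.recalc_rate = example_recalc_rate,
};

static int example_register_codec_clk(struct device *dev,
				      struct example_codec_clk *cclk,
				      const char *parent_name)
{
	struct clk_init_data init = {
		.name		= "codec-sysclk",	/* made-up name */
		.ops		= &example_codec_clk_ops,
		.parent_names	= &parent_name,		/* typically "mclk" */
		.num_parents	= 1,
	};
	struct clk *clk;

	cclk->hw.init = &init;
	clk = devm_clk_register(dev, &cclk->hw);

	return PTR_ERR_OR_ZERO(clk);
}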
On Wed, Aug 10, 2016 at 04:59:03PM -0500, Pierre-Louis Bossart wrote:
On 8/10/16 12:52 PM, Mark Brown wrote:
I'm not just hinting at that, I've openly stated it quite a few times now! :P For the simpler CODECs it's kind of marginal if you need to bother but for anything more complex (even things with PLLs) it seems like the way forwards.
Interesting, thanks for the clarification. I must admit I missed this concept completely, and I haven't seen any codec vendors working in this direction so far. Ironically, the x86 part may be the most straightforward...
Architectures like x86 that are unable or unwilling to enable the use of the clock API are a blocker here; drivers can't make use of the API if it's not available. There are more architectures than x86, but it's the only one with any active ASoC use that I'm aware of, and unfortunately the maintainers didn't react positively to attempts to enable the clock API in the past. I just sent a patch to try to fix it from the clock API side, so let's hope that's a bit more helpful.
	/* Check if MCLK provided */
	rt5659->mclk = devm_clk_get(&i2c->dev, "mclk");
	if (IS_ERR(rt5659->mclk)) {
		if (PTR_ERR(rt5659->mclk) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
The correct thing to do here is to check if != -ENOENT and then return the error code. Otherwise you silently ignore errors if a clock was specified, but there was an error requesting it.
On Thu, Jul 28, 2016 at 10:40:44PM +0200, Lars-Peter Clausen wrote:
	/* Check if MCLK provided */
	rt5659->mclk = devm_clk_get(&i2c->dev, "mclk");
	if (IS_ERR(rt5659->mclk)) {
		if (PTR_ERR(rt5659->mclk) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
The correct thing to do here is to check if != -ENOENT and then return the error code. Otherwise you silently ignore errors if a clock was specified, but there was an error requesting it.
Oh.. Thanks for the input. I will refine it in v2.
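For reference, a sketch of the shape that refined check might take (not the actual v2, just what Lars-Peter describes, with a hypothetical helper name):

/*
 * Sketch of the refined probe-time handling: treat only "no clock
 * specified" (-ENOENT) as an optional MCLK, and propagate every other
 * error, including -EPROBE_DEFER.
 */
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/i2c.h>
#include "rt5659.h"

static int example_get_optional_mclk(struct i2c_client *i2c,
				     struct rt5659_priv *rt5659)
{
	rt5659->mclk = devm_clk_get(&i2c->dev, "mclk");
	if (IS_ERR(rt5659->mclk)) {
		int ret = PTR_ERR(rt5659->mclk);

		if (ret != -ENOENT)
			return ret;	/* includes -EPROBE_DEFER */

		/* No MCLK specified: clk calls become no-ops on NULL */
		rt5659->mclk = NULL;
	}

	return 0;
}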
participants (5): Lars-Peter Clausen, Mark Brown, Nicolin Chen, Pierre-Louis Bossart, Vinod Koul