[alsa-devel] [RFC 00/17] clk: Add per-controller locks to fix deadlocks
Hi,
This is an RFC, so please do not apply it, except perhaps patch #1, which is harmless.
Introduction
============
The patchset brings a new entity: a clock controller representing a hardware block. The clock controller comes with its own prepare lock, which is then used in many places. The idea is to fix the deadlock mentioned in commit 10ff4c5239a1 ("i2c: exynos5: Fix possible ABBA deadlock by keeping I2C clock prepared") and commit 34e81ad5f0b6 ("i2c: s3c2410: fix ABBA deadlock by keeping clock prepared").
Disclaimer
==========
Request for comments, so:
1. Only exynos_defconfig builds,
2. A lot of FIXME/TODO notes remain,
3. Checkpatch not run, lines not aligned,
4. Other (non-Exynos) drivers not converted,
5. Probably not yet bisectable,
6. Locking became quite complex.

The previous single lock was simple: inefficient and deadlock-prone, but simple. Because the clock hierarchy spans controllers, the new locking became quite complicated. I don't like it but...
Details
=======
In the case of Exynos-based boards, the deadlock occurs between the clock framework's prepare_lock and regmap-i2c's lock:

    CPU #0:                                  CPU #1:
      lock(regmap)                             s2mps11-clk: clk_prepare_lock()
      i2c-exynos: clk_prepare_lock() - wait    lock(regmap) - wait
The clk_prepare_lock() calls on the two CPUs come from different clock drivers and components:
1. The I2C clock is part of the SoC clock block and is required by the i2c-s3c2410/i2c-exynos5 driver,
2. The S2MPS11 clock is a separate device; however, its driver uses an I2C regmap.
The deadlock was always reported by lockdep and actually happened in about 20% of boots of an Odroid XU3 with multi_v7 defconfig. A workaround was implemented by removing the prepare/unprepare calls from I2C transfers. However, these are just workarounds... which can be reverted after applying this patchset.
Additionally, Marek Szyprowski's work on domains/clocks/pinctrl exposed the deadlock again in a different configuration.
Comments, as usual, are welcome.
Best regards, Krzysztof
Krzysztof Kozlowski (17):
  clk: bcm2835: Rename clk_register to avoid name conflict
  clk: Add clock controller to fine-grain the prepare lock
  clk: s2mps11: Switch to new clock controller API
  clk: samsung: Allocate a clock controller in context
  clk: fixed-rate: Switch to new clock controller API
  clk: gate: Switch to new clock controller API
  clk: mux: Switch to new clock controller API
  clk: fixed-factor: Switch to new clock controller API
  clk: divider: Switch to new clock controller API
  clk: composite: Switch to new clock controller API
  clk: gpio: Switch to new clock controller API
  ASoC: samsung: Switch to new clock controller API
  clk: samsung: audss: Switch to new clock controller API
  clk: samsung: clkout: Switch to new clock controller API
  clk: Use per-controller locking
  Revert "i2c: exynos5: Fix possible ABBA deadlock by keeping I2C clock prepared"
  Revert "i2c: s3c2410: fix ABBA deadlock by keeping clock prepared"
 drivers/clk/bcm/clk-bcm2835.c           |   8 +-
 drivers/clk/clk-composite.c             |   8 +-
 drivers/clk/clk-divider.c               |  10 +-
 drivers/clk/clk-fixed-factor.c          |  11 +-
 drivers/clk/clk-fixed-rate.c            |  28 +-
 drivers/clk/clk-fractional-divider.c    |   5 +-
 drivers/clk/clk-gate.c                  |   8 +-
 drivers/clk/clk-gpio.c                  |  29 +-
 drivers/clk/clk-mux.c                   |  32 ++-
 drivers/clk/clk-s2mps11.c               |  10 +-
 drivers/clk/clk.c                       | 456 +++++++++++++++++++++++++++-----
 drivers/clk/samsung/clk-exynos-audss.c  |  30 ++-
 drivers/clk/samsung/clk-exynos-clkout.c |  11 +-
 drivers/clk/samsung/clk.c               |  25 +-
 drivers/clk/samsung/clk.h               |   1 +
 drivers/i2c/busses/i2c-exynos5.c        |  24 +-
 drivers/i2c/busses/i2c-s3c2410.c        |  23 +-
 include/linux/clk-provider.h            |  88 ++++--
 include/linux/clk.h                     |   1 +
 sound/soc/samsung/i2s.c                 |  13 +-
 20 files changed, 612 insertions(+), 209 deletions(-)
During the clk_register_*() API refactoring, macros will be used to hide and narrow the scope of changes, which leads to a name conflict with the clk_register member found in the bcm2835 clk driver.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
---
 drivers/clk/bcm/clk-bcm2835.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
index 7a7970865c2d..b77a8efbfce7 100644
--- a/drivers/clk/bcm/clk-bcm2835.c
+++ b/drivers/clk/bcm/clk-bcm2835.c
@@ -1262,12 +1262,12 @@ static struct clk *bcm2835_register_gate(struct bcm2835_cprman *cprman,
 typedef struct clk *(*bcm2835_clk_register)(struct bcm2835_cprman *cprman,
					     const void *data);
 struct bcm2835_clk_desc {
-	bcm2835_clk_register clk_register;
+	bcm2835_clk_register bcm2835_clk_register;
	const void *data;
 };

 /* assignment helper macros for different clock types */
-#define _REGISTER(f, ...) { .clk_register = (bcm2835_clk_register)f, \
+#define _REGISTER(f, ...) { .bcm2835_clk_register = (bcm2835_clk_register)f, \
			    .data = __VA_ARGS__ }
 #define REGISTER_PLL(...)	_REGISTER(&bcm2835_register_pll, \
					  &(struct bcm2835_pll_data) \
@@ -1825,8 +1825,8 @@ static int bcm2835_clk_probe(struct platform_device *pdev)

	for (i = 0; i < asize; i++) {
		desc = &clk_desc_array[i];
-		if (desc->clk_register && desc->data)
-			clks[i] = desc->clk_register(cprman, desc->data);
+		if (desc->bcm2835_clk_register && desc->data)
+			clks[i] = desc->bcm2835_clk_register(cprman, desc->data);
	}

	return of_clk_add_provider(dev->of_node, of_clk_src_onecell_get,
Add a new entity, the clock controller, so the global clock prepare lock can be fine-grained per controller. The controller is an abstract way of representing a hardware block. It overlaps a little with the clock provider, so there is potential for merging them.

The clock hierarchy might span many controllers, so add the necessary locking primitives for locking children, parents, or everything.

Add a global controller for drivers not yet converted to the new API. It will be removed once everything uses a per-device/per-driver clock controller.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
---
 drivers/clk/clk.c            | 300 +++++++++++++++++++++++++++++++++++++++++--
 include/linux/clk-provider.h |  25 +++-
 include/linux/clk.h          |   1 +
 3 files changed, 310 insertions(+), 16 deletions(-)
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 238b989bf778..ee1cedfbaa29 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -35,6 +35,7 @@ static struct task_struct *enable_owner;
 static int prepare_refcnt;
 static int enable_refcnt;

+static LIST_HEAD(clk_ctrl_list);
 static HLIST_HEAD(clk_root_list);
 static HLIST_HEAD(clk_orphan_list);
 static LIST_HEAD(clk_notifier_list);
@@ -46,6 +47,7 @@ struct clk_core {
	const struct clk_ops *ops;
	struct clk_hw *hw;
	struct module *owner;
+	struct clk_ctrl *ctrl;
	struct clk_core *parent;
	const char **parent_names;
	struct clk_core **parents;
@@ -87,6 +89,24 @@ struct clk {
	struct hlist_node clks_node;
 };

+struct clk_ctrl {
+	struct device *dev; /* Needed? */
+	struct mutex prepare_lock;
+	struct task_struct *prepare_owner;
+	int prepare_refcnt;
+	struct list_head node;
+};
+
+/*
+ * As a temporary solution, register all clocks which pass NULL as clock
+ * controller under this one. This should be removed after converting
+ * all users to the new clock-controller-aware API.
+ */
+static struct clk_ctrl global_ctrl = {
+	.prepare_lock = __MUTEX_INITIALIZER(global_ctrl.prepare_lock),
+	.node = LIST_HEAD_INIT(global_ctrl.node),
+};
+
 /***           locking             ***/
 static void clk_prepare_lock(void)
 {
@@ -148,6 +168,228 @@ static void clk_enable_unlock(unsigned long flags)
	spin_unlock_irqrestore(&enable_lock, flags);
 }

+static void clk_ctrl_prepare_lock(struct clk_ctrl *ctrl)
+{
+	if (!ctrl)
+		return;
+
+	if (!mutex_trylock(&ctrl->prepare_lock)) {
+		if (ctrl->prepare_owner == current) {
+			ctrl->prepare_refcnt++;
+			return;
+		}
+		mutex_lock(&ctrl->prepare_lock);
+	}
+	WARN_ON_ONCE(ctrl->prepare_owner != NULL);
+	WARN_ON_ONCE(ctrl->prepare_refcnt != 0);
+	ctrl->prepare_owner = current;
+	ctrl->prepare_refcnt = 1;
+}
+
+static void clk_ctrl_prepare_unlock(struct clk_ctrl *ctrl)
+{
+	if (!ctrl)
+		return;
+
+	WARN_ON_ONCE(ctrl->prepare_owner != current);
+	WARN_ON_ONCE(ctrl->prepare_refcnt == 0);
+
+	if (--ctrl->prepare_refcnt)
+		return;
+	ctrl->prepare_owner = NULL;
+	mutex_unlock(&ctrl->prepare_lock);
+}
+
+static void clk_prepare_lock_ctrl(struct clk_core *core)
+{
+	if (!core)
+		return;
+
+	clk_ctrl_prepare_lock(core->ctrl);
+}
+
+static void clk_prepare_unlock_ctrl(struct clk_core *core)
+{
+	if (!core)
+		return;
+
+	clk_ctrl_prepare_unlock(core->ctrl);
+}
+
+static void clk_prepare_lock_parents_locked(struct clk_core *core)
+{
+	struct clk_ctrl *prev = NULL;
+
+	// lockdep_assert_held(&prepare_lock); // tmp comment?
+
+	if (!core)
+		return;
+
+	do {
+		if (core->ctrl != prev) {
+			clk_ctrl_prepare_lock(core->ctrl);
+			prev = core->ctrl;
+		}
+	} while ((core = core->parent));
+}
+
+static void clk_prepare_lock_parents(struct clk_core *core)
+{
+	if (!core)
+		return;
+
+	clk_prepare_lock();
+	clk_prepare_lock_parents_locked(core);
+	clk_prepare_unlock();
+}
+
+static void clk_prepare_unlock_parents_recur(struct clk_core *core,
+					     struct clk_ctrl *prev)
+{
+	if (!core)
+		return;
+
+	clk_prepare_unlock_parents_recur(core->parent, core->ctrl);
+	if (core->ctrl != prev)
+		clk_ctrl_prepare_unlock(core->ctrl);
+}
+
+static void clk_prepare_unlock_parents(struct clk_core *core)
+{
+	if (!core)
+		return;
+
+	clk_prepare_unlock_parents_recur(core, NULL);
+}
+
+/* FIXME: important note - will skip first lock */
+static void clk_prepare_lock_children_locked(struct clk_core *core)
+{
+	struct clk_core *child;
+
+	lockdep_assert_held(&prepare_lock);
+
+	if (!core)
+		return;
+
+	hlist_for_each_entry(child, &core->children, child_node) {
+		clk_prepare_lock_children_locked(child);
+
+		/* No need to double lock the same controller */
+		if (child->ctrl != core->ctrl)
+			clk_ctrl_prepare_lock(child->ctrl);
+	}
+}
+
+static void clk_prepare_lock_children(struct clk_core *core)
+{
+	if (!core)
+		return;
+
+	clk_prepare_lock();
+	clk_prepare_lock_children_locked(core);
+	/* Initial lock because children recursion skipped the first one */
+	clk_ctrl_prepare_lock(core->ctrl);
+}
+
+static void clk_prepare_unlock_children_locked(struct clk_core *core)
+{
+	struct clk_core *child;
+
+	if (!core)
+		return;
+
+	hlist_for_each_entry(child, &core->children, child_node) {
+		/* No need to double unlock the same controller */
+		if (child->ctrl != core->ctrl)
+			clk_ctrl_prepare_unlock(child->ctrl);
+
+		clk_prepare_unlock_children_locked(child);
+	}
+}
+
+static void clk_prepare_unlock_children(struct clk_core *core)
+{
+	if (!core)
+		return;
+
+	/* Unlock the initial controller, skipped in children recursion */
+	clk_ctrl_prepare_unlock(core->ctrl);
+	clk_prepare_unlock_children_locked(core);
+	clk_prepare_unlock();
+}
+
+/* Locks prepare lock, children and parents */
+static void clk_prepare_lock_tree(struct clk_core *core)
+{
+	if (!core)
+		return;
+
+	clk_prepare_lock();
+	clk_prepare_lock_children_locked(core);
+	/* Children recursion skipped locking the first one */
+	clk_ctrl_prepare_lock(core->ctrl);
+	clk_prepare_lock_parents_locked(core);
+}
+
+static void clk_prepare_unlock_tree(struct clk_core *core)
+{
+	if (!core)
+		return;
+
+	clk_prepare_unlock_parents(core);
+	/* Unlock the initial controller, skipped in children recursion */
+	clk_ctrl_prepare_unlock(core->ctrl);
+	clk_prepare_unlock_children_locked(core);
+	clk_prepare_unlock();
+}
+
+/*
+ * Unlocks the controller hierarchy (children and parents) but going from
+ * the old parent. Used in case of reparenting.
+ * If (core->parent == old_parent), this is equal to clk_prepare_unlock_tree().
+ */
+static void clk_prepare_unlock_oldtree(struct clk_core *core,
+				       struct clk_core *old_parent)
+{
+	if (!core)
+		return;
+
+	clk_prepare_unlock_parents(old_parent);
+	/*
+	 * Lock parents was called on 'core', but we unlock starting from
+	 * 'old_parent'. At the same time locking did not lock the same
+	 * controller twice, but this check will be skipped for 'core'.
+	 */
+	if (old_parent->ctrl != core->ctrl)
+		clk_ctrl_prepare_unlock(core->ctrl);
+
+	/* Unlock the initial controller, skipped in children recursion */
+	clk_ctrl_prepare_unlock(core->ctrl);
+	clk_prepare_unlock_children_locked(core);
+	clk_prepare_unlock();
+}
+
+/* Locks everything */
+/* FIXME: order of locking, it does not follow child-parent */
+static void clk_prepare_lock_all(void)
+{
+	struct clk_ctrl *ctrl;
+
+	clk_prepare_lock();
+	list_for_each_entry(ctrl, &clk_ctrl_list, node)
+		clk_ctrl_prepare_lock(ctrl);
+}
+
+static void clk_prepare_unlock_all(void)
+{
+	struct clk_ctrl *ctrl;
+
+	list_for_each_entry(ctrl, &clk_ctrl_list, node)
+		clk_ctrl_prepare_unlock(ctrl);
+	clk_prepare_unlock();
+}
+
 static bool clk_core_is_prepared(struct clk_core *core)
 {
	/*
@@ -2526,6 +2768,34 @@ void __clk_free_clk(struct clk *clk)
	kfree(clk);
 }
+struct clk_ctrl *clk_ctrl_register(struct device *dev)
+{
+	struct clk_ctrl *ctrl;
+
+	ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
+	if (!ctrl)
+		return ERR_PTR(-ENOMEM);
+
+	mutex_init(&ctrl->prepare_lock);
+
+	clk_prepare_lock();
+	list_add(&ctrl->node, &clk_ctrl_list);
+	clk_prepare_unlock();
+
+	return ctrl;
+}
+EXPORT_SYMBOL_GPL(clk_ctrl_register);
+
+void clk_ctrl_unregister(struct clk_ctrl *ctrl)
+{
+	clk_prepare_lock();
+	list_del(&ctrl->node);
+	clk_prepare_unlock();
+
+	kfree(ctrl);
+}
+EXPORT_SYMBOL_GPL(clk_ctrl_unregister);
+
 /**
  * clk_register - allocate a new clock, register it and return an opaque cookie
  * @dev: device that is registering this clock
@@ -2537,7 +2807,8 @@ void __clk_free_clk(struct clk *clk)
  * rest of the clock API. In the event of an error clk_register will return an
  * error code; drivers must test for an error code after calling clk_register.
  */
-struct clk *clk_register(struct device *dev, struct clk_hw *hw)
+struct clk *clk_register_with_ctrl(struct device *dev, struct clk_ctrl *ctrl,
+				   struct clk_hw *hw)
 {
	int i, ret;
	struct clk_core *core;
@@ -2561,6 +2832,10 @@ struct clk *clk_register(struct device *dev, struct clk_hw *hw)
	core->num_parents = hw->init->num_parents;
	core->min_rate = 0;
	core->max_rate = ULONG_MAX;
+	if (ctrl)
+		core->ctrl = ctrl;
+	else
+		core->ctrl = &global_ctrl;
	hw->core = core;

	/* allocate local copy in case parent_names is __initdata */
@@ -2619,7 +2894,7 @@ fail_name:
 fail_out:
	return ERR_PTR(ret);
 }
-EXPORT_SYMBOL_GPL(clk_register);
+EXPORT_SYMBOL_GPL(clk_register_with_ctrl);

 /**
  * clk_hw_register - register a clk_hw and return an error code
@@ -2631,11 +2906,12 @@ EXPORT_SYMBOL_GPL(clk_register);
  * less than zero indicating failure. Drivers must test for an error code after
  * calling clk_hw_register().
  */
-int clk_hw_register(struct device *dev, struct clk_hw *hw)
+int clk_hw_register_with_ctrl(struct device *dev, struct clk_ctrl *ctrl,
+			      struct clk_hw *hw)
 {
-	return PTR_ERR_OR_ZERO(clk_register(dev, hw));
+	return PTR_ERR_OR_ZERO(clk_register_with_ctrl(dev, ctrl, hw));
 }
-EXPORT_SYMBOL_GPL(clk_hw_register);
+EXPORT_SYMBOL_GPL(clk_hw_register_with_ctrl);

 /* Free memory allocated for a clock. */
 static void __clk_release(struct kref *ref)
@@ -2644,6 +2920,7 @@ static void __clk_release(struct kref *ref)
	int i = core->num_parents;

	lockdep_assert_held(&prepare_lock);
+	// lockdep_assert_not_held(&core->ctrl->prepare_lock); // TODO?

	kfree(core->parents);
	while (--i >= 0)
@@ -2767,7 +3044,7 @@ static void devm_clk_hw_release(struct device *dev, void *res)
  * automatically clk_unregister()ed on driver detach. See clk_register() for
  * more information.
  */
-struct clk *devm_clk_register(struct device *dev, struct clk_hw *hw)
+struct clk *devm_clk_register_with_ctrl(struct device *dev,
+					struct clk_ctrl *ctrl,
+					struct clk_hw *hw)
 {
	struct clk *clk;
	struct clk **clkp;
@@ -2776,7 +3053,7 @@ struct clk *devm_clk_register(struct device *dev, struct clk_hw *hw)
	if (!clkp)
		return ERR_PTR(-ENOMEM);

-	clk = clk_register(dev, hw);
+	clk = clk_register_with_ctrl(dev, ctrl, hw);
	if (!IS_ERR(clk)) {
		*clkp = clk;
		devres_add(dev, clkp);
@@ -2786,7 +3063,7 @@ struct clk *devm_clk_register(struct device *dev, struct clk_hw *hw)

	return clk;
 }
-EXPORT_SYMBOL_GPL(devm_clk_register);
+EXPORT_SYMBOL_GPL(devm_clk_register_with_ctrl);

 /**
  * devm_clk_hw_register - resource managed clk_hw_register()
@@ -2797,7 +3074,8 @@ EXPORT_SYMBOL_GPL(devm_clk_register);
  * automatically clk_hw_unregister()ed on driver detach. See clk_hw_register()
  * for more information.
  */
-int devm_clk_hw_register(struct device *dev, struct clk_hw *hw)
+int devm_clk_hw_register_with_ctrl(struct device *dev, struct clk_ctrl *ctrl,
+				   struct clk_hw *hw)
 {
	struct clk_hw **hwp;
	int ret;
@@ -2806,7 +3084,7 @@ int devm_clk_hw_register(struct device *dev, struct clk_hw *hw)
	if (!hwp)
		return -ENOMEM;

-	ret = clk_hw_register(dev, hw);
+	ret = clk_hw_register_with_ctrl(dev, ctrl, hw);
	if (!ret) {
		*hwp = hw;
		devres_add(dev, hwp);
@@ -2816,7 +3094,7 @@ int devm_clk_hw_register(struct device *dev, struct clk_hw *hw)

	return ret;
 }
-EXPORT_SYMBOL_GPL(devm_clk_hw_register);
+EXPORT_SYMBOL_GPL(devm_clk_hw_register_with_ctrl);

 static int devm_clk_match(struct device *dev, void *res, void *data)
 {
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index a39c0c530778..3589f164ff94 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -39,6 +39,7 @@
 struct clk;
 struct clk_hw;
 struct clk_core;
+struct clk_ctrl;
 struct dentry;

 /**
@@ -703,6 +704,8 @@ struct clk_hw *clk_hw_register_gpio_mux(struct device *dev, const char *name,
		bool active_low, unsigned long flags);
 void clk_hw_unregister_gpio_mux(struct clk_hw *hw);

+struct clk_ctrl *clk_ctrl_register(struct device *dev);
+void clk_ctrl_unregister(struct clk_ctrl *ctrl);
+
 /**
  * clk_register - allocate a new clock, register it and return an opaque cookie
  * @dev: device that is registering this clock
@@ -714,11 +717,23 @@ void clk_hw_unregister_gpio_mux(struct clk_hw *hw);
  * rest of the clock API. In the event of an error clk_register will return an
  * error code; drivers must test for an error code after calling clk_register.
  */
-struct clk *clk_register(struct device *dev, struct clk_hw *hw);
-struct clk *devm_clk_register(struct device *dev, struct clk_hw *hw);
-
-int __must_check clk_hw_register(struct device *dev, struct clk_hw *hw);
-int __must_check devm_clk_hw_register(struct device *dev, struct clk_hw *hw);
+struct clk *clk_register_with_ctrl(struct device *dev, struct clk_ctrl *ctrl,
+				   struct clk_hw *hw);
+struct clk *devm_clk_register_with_ctrl(struct device *dev,
+					struct clk_ctrl *ctrl,
+					struct clk_hw *hw);
+
+#define clk_register(dev, hw) clk_register_with_ctrl(dev, NULL, hw)
+#define devm_clk_register(dev, hw) devm_clk_register_with_ctrl(dev, NULL, hw)
+
+int __must_check clk_hw_register_with_ctrl(struct device *dev,
+					   struct clk_ctrl *ctrl,
+					   struct clk_hw *hw);
+int __must_check devm_clk_hw_register_with_ctrl(struct device *dev,
+						struct clk_ctrl *ctrl,
+						struct clk_hw *hw);
+
+#define clk_hw_register(dev, hw) clk_hw_register_with_ctrl(dev, NULL, hw)
+#define devm_clk_hw_register(dev, hw) devm_clk_hw_register_with_ctrl(dev, NULL, hw)

 void clk_unregister(struct clk *clk);
 void devm_clk_unregister(struct device *dev, struct clk *clk);
diff --git a/include/linux/clk.h b/include/linux/clk.h
index 123c02788807..8f751d1eb1df 100644
--- a/include/linux/clk.h
+++ b/include/linux/clk.h
@@ -19,6 +19,7 @@
 struct device;

 struct clk;
+struct clk_ctrl;

 /**
  * DOC: clk notifier callback types
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
---
 drivers/clk/clk-s2mps11.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/clk/clk-s2mps11.c b/drivers/clk/clk-s2mps11.c
index fbaa84a33c46..881f1e226867 100644
--- a/drivers/clk/clk-s2mps11.c
+++ b/drivers/clk/clk-s2mps11.c
@@ -41,6 +41,7 @@ struct s2mps11_clk {
	struct clk_hw hw;
	struct clk *clk;
	struct clk_lookup *lookup;
+	struct clk_ctrl *clk_ctrl;
	u32 mask;
	unsigned int reg;
 };
@@ -176,6 +177,10 @@ static int s2mps11_clk_probe(struct platform_device *pdev)
	if (IS_ERR(s2mps11_clks->clk_np))
		return PTR_ERR(s2mps11_clks->clk_np);

+	s2mps11_clks->clk_ctrl = clk_ctrl_register(&pdev->dev);
+	if (IS_ERR(s2mps11_clks->clk_ctrl))
+		return PTR_ERR(s2mps11_clks->clk_ctrl); // FIXME: use devm-like
+
	for (i = 0; i < S2MPS11_CLKS_NUM; i++) {
		if (i == S2MPS11_CLK_CP && hwid == S2MPS14X)
			continue; /* Skip clocks not present in some devices */
@@ -184,7 +189,8 @@ static int s2mps11_clk_probe(struct platform_device *pdev)
		s2mps11_clks[i].mask = 1 << i;
		s2mps11_clks[i].reg = s2mps11_reg;

-		s2mps11_clks[i].clk = devm_clk_register(&pdev->dev,
+		s2mps11_clks[i].clk = devm_clk_register_with_ctrl(&pdev->dev,
+				s2mps11_clks->clk_ctrl,
				&s2mps11_clks[i].hw);
		if (IS_ERR(s2mps11_clks[i].clk)) {
			dev_err(&pdev->dev, "Fail to register : %s\n",
@@ -233,6 +239,8 @@ static int s2mps11_clk_remove(struct platform_device *pdev)
			clkdev_drop(s2mps11_clks[i].lookup);
	}

+	clk_ctrl_unregister(s2mps11_clks->clk_ctrl);
+
	return 0;
 }
Allocate a clock controller and store it in context so it will be passed later for creating clocks.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
---
 drivers/clk/samsung/clk.c | 4 ++++
 drivers/clk/samsung/clk.h | 1 +
 2 files changed, 5 insertions(+)
diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c
index b7d87d6db9dc..fdeb35a48d3a 100644
--- a/drivers/clk/samsung/clk.c
+++ b/drivers/clk/samsung/clk.c
@@ -67,6 +67,10 @@ struct samsung_clk_provider *__init samsung_clk_init(struct device_node *np,
	if (!ctx)
		panic("could not allocate clock provider context.\n");

+	ctx->clk_ctrl = clk_ctrl_register(NULL);
+	if (IS_ERR(ctx->clk_ctrl))
+		panic("could not allocate clock provider controller.\n");
+
	clk_table = kcalloc(nr_clks, sizeof(struct clk *), GFP_KERNEL);
	if (!clk_table)
		panic("could not allocate clock lookup table\n");
diff --git a/drivers/clk/samsung/clk.h b/drivers/clk/samsung/clk.h
index da3bdebabf1e..cb0ef6266b6d 100644
--- a/drivers/clk/samsung/clk.h
+++ b/drivers/clk/samsung/clk.h
@@ -28,6 +28,7 @@ struct samsung_clk_provider {
	void __iomem *reg_base;
	struct clk_onecell_data clk_data;
	spinlock_t lock;
+	struct clk_ctrl *clk_ctrl;
 };
/**
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
---
 drivers/clk/clk-fixed-rate.c | 28 ++++++++++++++++++----------
 drivers/clk/samsung/clk.c    |  2 +-
 include/linux/clk-provider.h | 10 ++++++++--
 3 files changed, 27 insertions(+), 13 deletions(-)
diff --git a/drivers/clk/clk-fixed-rate.c b/drivers/clk/clk-fixed-rate.c
index 2edb39342a02..0a5c4f2105da 100644
--- a/drivers/clk/clk-fixed-rate.c
+++ b/drivers/clk/clk-fixed-rate.c
@@ -55,6 +55,7 @@ EXPORT_SYMBOL_GPL(clk_fixed_rate_ops);
  * @fixed_accuracy: non-adjustable clock rate
  */
 struct clk_hw *clk_hw_register_fixed_rate_with_accuracy(struct device *dev,
+		struct clk_ctrl *ctrl, // FIXME: re-align
		const char *name, const char *parent_name, unsigned long flags,
		unsigned long fixed_rate, unsigned long fixed_accuracy)
 {
@@ -81,7 +82,7 @@ struct clk_hw *clk_hw_register_fixed_rate_with_accuracy(struct device *dev,

	/* register the clock */
	hw = &fixed->hw;
-	ret = clk_hw_register(dev, hw);
+	ret = clk_hw_register_with_ctrl(dev, ctrl, hw);
	if (ret) {
		kfree(fixed);
		hw = ERR_PTR(ret);
@@ -92,13 +93,14 @@ struct clk_hw *clk_hw_register_fixed_rate_with_accuracy(struct device *dev,
 EXPORT_SYMBOL_GPL(clk_hw_register_fixed_rate_with_accuracy);

 struct clk *clk_register_fixed_rate_with_accuracy(struct device *dev,
+		struct clk_ctrl *ctrl, // FIXME: re-align
		const char *name, const char *parent_name, unsigned long flags,
		unsigned long fixed_rate, unsigned long fixed_accuracy)
 {
	struct clk_hw *hw;

-	hw = clk_hw_register_fixed_rate_with_accuracy(dev, name, parent_name,
-			flags, fixed_rate, fixed_accuracy);
+	hw = clk_hw_register_fixed_rate_with_accuracy(dev, ctrl, name,
+			parent_name, flags, fixed_rate, fixed_accuracy);
	if (IS_ERR(hw))
		return ERR_CAST(hw);
	return hw->clk;
@@ -114,21 +116,27 @@ EXPORT_SYMBOL_GPL(clk_register_fixed_rate_with_accuracy);
  * @flags: framework-specific flags
  * @fixed_rate: non-adjustable clock rate
  */
-struct clk_hw *clk_hw_register_fixed_rate(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_fixed_rate(struct device *dev,
+		struct clk_ctrl *ctrl, // FIXME: re-align
+		const char *name,
		const char *parent_name, unsigned long flags,
		unsigned long fixed_rate)
 {
-	return clk_hw_register_fixed_rate_with_accuracy(dev, name, parent_name,
-			flags, fixed_rate, 0);
+	return clk_hw_register_fixed_rate_with_accuracy(dev, ctrl, name,
+							parent_name, flags,
+							fixed_rate, 0);
 }
 EXPORT_SYMBOL_GPL(clk_hw_register_fixed_rate);

-struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+struct clk *clk_register_fixed_rate(struct device *dev,
+		struct clk_ctrl *ctrl, // FIXME: re-align
+		const char *name,
		const char *parent_name, unsigned long flags,
		unsigned long fixed_rate)
 {
-	return clk_register_fixed_rate_with_accuracy(dev, name, parent_name,
-						     flags, fixed_rate, 0);
+	return clk_register_fixed_rate_with_accuracy(dev, ctrl, name,
+						     parent_name, flags,
+						     fixed_rate, 0);
 }
 EXPORT_SYMBOL_GPL(clk_register_fixed_rate);

@@ -174,7 +182,7 @@ void of_fixed_clk_setup(struct device_node *node)

	of_property_read_string(node, "clock-output-names", &clk_name);

-	clk = clk_register_fixed_rate_with_accuracy(NULL, clk_name, NULL,
+	clk = clk_register_fixed_rate_with_accuracy(NULL, NULL, clk_name, NULL,
						    0, rate, accuracy);
	if (!IS_ERR(clk))
		of_clk_add_provider(node, of_clk_src_simple_get, clk);
diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c
index fdeb35a48d3a..cb1e4398d6e5 100644
--- a/drivers/clk/samsung/clk.c
+++ b/drivers/clk/samsung/clk.c
@@ -147,7 +147,7 @@ void __init samsung_clk_register_fixed_rate(struct samsung_clk_provider *ctx,
	unsigned int idx, ret;

	for (idx = 0; idx < nr_clk; idx++, list++) {
-		clk = clk_register_fixed_rate(NULL, list->name,
+		clk = clk_register_fixed_rate(NULL, ctx->clk_ctrl, list->name,
			list->parent_name, list->flags, list->fixed_rate);
		if (IS_ERR(clk)) {
			pr_err("%s: failed to register clock %s\n", __func__,
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index 3589f164ff94..2ee4e337e784 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -283,17 +283,23 @@ struct clk_fixed_rate {
 #define to_clk_fixed_rate(_hw) container_of(_hw, struct clk_fixed_rate, hw)

 extern const struct clk_ops clk_fixed_rate_ops;
-struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
+struct clk *clk_register_fixed_rate(struct device *dev,
+		struct clk_ctrl *ctrl, // FIXME: re-align
+		const char *name,
		const char *parent_name, unsigned long flags,
		unsigned long fixed_rate);
-struct clk_hw *clk_hw_register_fixed_rate(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_fixed_rate(struct device *dev,
+		struct clk_ctrl *ctrl, // FIXME: re-align
+		const char *name,
		const char *parent_name, unsigned long flags,
		unsigned long fixed_rate);
 struct clk *clk_register_fixed_rate_with_accuracy(struct device *dev,
+		struct clk_ctrl *ctrl, // FIXME: re-align
		const char *name, const char *parent_name, unsigned long flags,
		unsigned long fixed_rate, unsigned long fixed_accuracy);
 void clk_unregister_fixed_rate(struct clk *clk);
 struct clk_hw *clk_hw_register_fixed_rate_with_accuracy(struct device *dev,
+		struct clk_ctrl *ctrl, // FIXME: re-align
		const char *name, const char *parent_name, unsigned long flags,
		unsigned long fixed_rate, unsigned long fixed_accuracy);
 void clk_hw_unregister_fixed_rate(struct clk_hw *hw);
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
---
 drivers/clk/clk-gate.c       |  8 +++++---
 drivers/clk/samsung/clk.c    |  5 +++--
 include/linux/clk-provider.h |  6 ++++--
 3 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/drivers/clk/clk-gate.c b/drivers/clk/clk-gate.c
index 4e691e35483a..5eea0457778d 100644
--- a/drivers/clk/clk-gate.c
+++ b/drivers/clk/clk-gate.c
@@ -120,7 +120,8 @@ EXPORT_SYMBOL_GPL(clk_gate_ops);
  * @clk_gate_flags: gate-specific flags for this clock
  * @lock: shared register lock for this clock
  */
-struct clk_hw *clk_hw_register_gate(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_gate(struct device *dev, struct clk_ctrl *ctrl,
+		const char *name,
		const char *parent_name, unsigned long flags,
		void __iomem *reg, u8 bit_idx,
		u8 clk_gate_flags, spinlock_t *lock)
@@ -166,14 +167,15 @@ struct clk_hw *clk_hw_register_gate(struct device *dev, const char *name,
 }
 EXPORT_SYMBOL_GPL(clk_hw_register_gate);

-struct clk *clk_register_gate(struct device *dev, const char *name,
+struct clk *clk_register_gate(struct device *dev, struct clk_ctrl *ctrl,
+		const char *name,
		const char *parent_name, unsigned long flags,
		void __iomem *reg, u8 bit_idx,
		u8 clk_gate_flags, spinlock_t *lock)
 {
	struct clk_hw *hw;

-	hw = clk_hw_register_gate(dev, name, parent_name, flags, reg,
+	hw = clk_hw_register_gate(dev, ctrl, name, parent_name, flags, reg,
				  bit_idx, clk_gate_flags, lock);
	if (IS_ERR(hw))
		return ERR_CAST(hw);
diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c
index cb1e4398d6e5..79332011f258 100644
--- a/drivers/clk/samsung/clk.c
+++ b/drivers/clk/samsung/clk.c
@@ -268,8 +268,9 @@ void __init samsung_clk_register_gate(struct samsung_clk_provider *ctx,
	unsigned int idx, ret;

	for (idx = 0; idx < nr_clk; idx++, list++) {
-		clk = clk_register_gate(NULL, list->name, list->parent_name,
-				list->flags, ctx->reg_base + list->offset,
+		clk = clk_register_gate(NULL, ctx->clk_ctrl, list->name,
+				list->parent_name, list->flags,
+				ctx->reg_base + list->offset,
				list->bit_idx, list->gate_flags, &ctx->lock);
		if (IS_ERR(clk)) {
			pr_err("%s: failed to register clock %s\n", __func__,
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index 2ee4e337e784..289290655283 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -340,11 +340,13 @@ struct clk_gate {
 #define CLK_GATE_HIWORD_MASK	BIT(1)

 extern const struct clk_ops clk_gate_ops;
-struct clk *clk_register_gate(struct device *dev, const char *name,
+struct clk *clk_register_gate(struct device *dev, struct clk_ctrl *ctrl,
+		const char *name,
		const char *parent_name, unsigned long flags,
		void __iomem *reg, u8 bit_idx,
		u8 clk_gate_flags, spinlock_t *lock);
-struct clk_hw *clk_hw_register_gate(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_gate(struct device *dev, struct clk_ctrl *ctrl,
+		const char *name,
		const char *parent_name, unsigned long flags,
		void __iomem *reg, u8 bit_idx,
		u8 clk_gate_flags, spinlock_t *lock);
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
---
 drivers/clk/clk-mux.c        | 32 ++++++++++++++++++--------------
 drivers/clk/samsung/clk.c    |  4 ++--
 include/linux/clk-provider.h | 12 ++++++++----
 3 files changed, 28 insertions(+), 20 deletions(-)
diff --git a/drivers/clk/clk-mux.c b/drivers/clk/clk-mux.c
index 16a3d5717f4e..a85bbef1992b 100644
--- a/drivers/clk/clk-mux.c
+++ b/drivers/clk/clk-mux.c
@@ -113,7 +113,8 @@ const struct clk_ops clk_mux_ro_ops = {
 };
 EXPORT_SYMBOL_GPL(clk_mux_ro_ops);

-struct clk_hw *clk_hw_register_mux_table(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_mux_table(struct device *dev,
+		struct clk_ctrl *ctrl, const char *name,
		const char * const *parent_names, u8 num_parents,
		unsigned long flags,
		void __iomem *reg, u8 shift, u32 mask,
@@ -159,7 +160,7 @@ struct clk_hw *clk_hw_register_mux_table(struct device *dev, const char *name,
	mux->hw.init = &init;

	hw = &mux->hw;
-	ret = clk_hw_register(dev, hw);
+	ret = clk_hw_register_with_ctrl(dev, ctrl, hw);
	if (ret) {
		kfree(mux);
		hw = ERR_PTR(ret);
@@ -169,7 +170,8 @@ struct clk_hw *clk_hw_register_mux_table(struct device *dev, const char *name,
 }
 EXPORT_SYMBOL_GPL(clk_hw_register_mux_table);

-struct clk *clk_register_mux_table(struct device *dev, const char *name,
+struct clk *clk_register_mux_table(struct device *dev, struct clk_ctrl *ctrl,
+		const char *name,
		const char * const *parent_names, u8 num_parents,
		unsigned long flags,
		void __iomem *reg, u8 shift, u32 mask,
@@ -177,16 +179,17 @@ struct clk *clk_register_mux_table(struct device *dev, const char *name,
 {
	struct clk_hw *hw;

-	hw = clk_hw_register_mux_table(dev, name, parent_names, num_parents,
-				       flags, reg, shift, mask, clk_mux_flags,
-				       table, lock);
+	hw = clk_hw_register_mux_table(dev, ctrl, name, parent_names,
+				       num_parents, flags, reg, shift, mask,
+				       clk_mux_flags, table, lock);
	if (IS_ERR(hw))
		return ERR_CAST(hw);
	return hw->clk;
 }
 EXPORT_SYMBOL_GPL(clk_register_mux_table);

-struct clk *clk_register_mux(struct device *dev, const char *name,
+struct clk *clk_register_mux(struct device *dev, struct clk_ctrl *ctrl,
+		const char *name,
		const char * const *parent_names, u8 num_parents,
		unsigned long flags,
		void __iomem *reg, u8 shift, u8 width,
@@ -194,13 +197,14 @@ struct clk *clk_register_mux(struct device *dev, const char *name,
 {
	u32 mask = BIT(width) - 1;

-	return clk_register_mux_table(dev, name, parent_names, num_parents,
-				      flags, reg, shift, mask, clk_mux_flags,
-				      NULL, lock);
+	return clk_register_mux_table(dev, ctrl, name, parent_names,
+				      num_parents, flags, reg, shift, mask,
+				      clk_mux_flags, NULL, lock);
 }
 EXPORT_SYMBOL_GPL(clk_register_mux);

-struct clk_hw *clk_hw_register_mux(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_mux(struct device *dev, struct clk_ctrl *ctrl,
+		const char *name,
		const char * const *parent_names, u8 num_parents,
		unsigned long flags,
		void __iomem *reg, u8 shift, u8 width,
@@ -208,9 +212,9 @@ struct clk_hw *clk_hw_register_mux(struct device *dev, const char *name,
 {
	u32 mask = BIT(width) - 1;

-	return clk_hw_register_mux_table(dev, name, parent_names, num_parents,
-					 flags, reg, shift, mask, clk_mux_flags,
-					 NULL, lock);
+	return clk_hw_register_mux_table(dev, ctrl, name, parent_names,
+					 num_parents, flags, reg, shift, mask,
+					 clk_mux_flags, NULL, lock);
 }
 EXPORT_SYMBOL_GPL(clk_hw_register_mux);

diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c
index 79332011f258..1a296bbabd47 100644
--- a/drivers/clk/samsung/clk.c
+++ b/drivers/clk/samsung/clk.c
@@ -197,8 +197,8 @@ void __init samsung_clk_register_mux(struct samsung_clk_provider *ctx,
	unsigned int idx, ret;
for (idx = 0; idx < nr_clk; idx++, list++) { - clk = clk_register_mux(NULL, list->name, list->parent_names, - list->num_parents, list->flags, + clk = clk_register_mux(NULL, ctx->clk_ctrl, list->name, + list->parent_names, list->num_parents, list->flags, ctx->reg_base + list->offset, list->shift, list->width, list->mux_flags, &ctx->lock); if (IS_ERR(clk)) { diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h index 289290655283..f5f062d267a9 100644 --- a/include/linux/clk-provider.h +++ b/include/linux/clk-provider.h @@ -493,23 +493,27 @@ struct clk_mux { extern const struct clk_ops clk_mux_ops; extern const struct clk_ops clk_mux_ro_ops;
-struct clk *clk_register_mux(struct device *dev, const char *name, +struct clk *clk_register_mux(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, u8 num_parents, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_mux_flags, spinlock_t *lock); -struct clk_hw *clk_hw_register_mux(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_mux(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, u8 num_parents, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_mux_flags, spinlock_t *lock);
-struct clk *clk_register_mux_table(struct device *dev, const char *name, +struct clk *clk_register_mux_table(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, u8 num_parents, unsigned long flags, void __iomem *reg, u8 shift, u32 mask, u8 clk_mux_flags, u32 *table, spinlock_t *lock); -struct clk_hw *clk_hw_register_mux_table(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_mux_table(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char * const *parent_names, u8 num_parents, unsigned long flags, void __iomem *reg, u8 shift, u32 mask,
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski k.kozlowski@samsung.com --- drivers/clk/clk-fixed-factor.c | 11 +++++++---- drivers/clk/samsung/clk.c | 2 +- include/linux/clk-provider.h | 4 +++- 3 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/drivers/clk/clk-fixed-factor.c b/drivers/clk/clk-fixed-factor.c index 4db3be214077..921a99b6e421 100644 --- a/drivers/clk/clk-fixed-factor.c +++ b/drivers/clk/clk-fixed-factor.c @@ -69,6 +69,7 @@ const struct clk_ops clk_fixed_factor_ops = { EXPORT_SYMBOL_GPL(clk_fixed_factor_ops);
struct clk_hw *clk_hw_register_fixed_factor(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, unsigned int mult, unsigned int div) { @@ -103,13 +104,14 @@ struct clk_hw *clk_hw_register_fixed_factor(struct device *dev, } EXPORT_SYMBOL_GPL(clk_hw_register_fixed_factor);
-struct clk *clk_register_fixed_factor(struct device *dev, const char *name, +struct clk *clk_register_fixed_factor(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, unsigned int mult, unsigned int div) { struct clk_hw *hw;
- hw = clk_hw_register_fixed_factor(dev, name, parent_name, flags, mult, + hw = clk_hw_register_fixed_factor(dev, ctrl, name, parent_name, flags, mult, div); if (IS_ERR(hw)) return ERR_CAST(hw); @@ -176,8 +178,9 @@ void __init of_fixed_factor_clk_setup(struct device_node *node) if (of_match_node(set_rate_parent_matches, node)) flags |= CLK_SET_RATE_PARENT;
- clk = clk_register_fixed_factor(NULL, clk_name, parent_name, flags, - mult, div); + /* TODO: convert to clk_ctrl */ + clk = clk_register_fixed_factor(NULL, NULL, clk_name, parent_name, + flags, mult, div); if (!IS_ERR(clk)) of_clk_add_provider(node, of_clk_src_simple_get, clk); } diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c index 1a296bbabd47..7bfd895781c5 100644 --- a/drivers/clk/samsung/clk.c +++ b/drivers/clk/samsung/clk.c @@ -176,7 +176,7 @@ void __init samsung_clk_register_fixed_factor(struct samsung_clk_provider *ctx, unsigned int idx;
for (idx = 0; idx < nr_clk; idx++, list++) { - clk = clk_register_fixed_factor(NULL, list->name, + clk = clk_register_fixed_factor(NULL, ctx->clk_ctrl, list->name, list->parent_name, list->flags, list->mult, list->div); if (IS_ERR(clk)) { pr_err("%s: failed to register clock %s\n", __func__, diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h index f5f062d267a9..26171815948e 100644 --- a/include/linux/clk-provider.h +++ b/include/linux/clk-provider.h @@ -545,11 +545,13 @@ struct clk_fixed_factor { #define to_clk_fixed_factor(_hw) container_of(_hw, struct clk_fixed_factor, hw)
extern const struct clk_ops clk_fixed_factor_ops; -struct clk *clk_register_fixed_factor(struct device *dev, const char *name, +struct clk *clk_register_fixed_factor(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, unsigned int mult, unsigned int div); void clk_unregister_fixed_factor(struct clk *clk); struct clk_hw *clk_hw_register_fixed_factor(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, unsigned int mult, unsigned int div); void clk_hw_unregister_fixed_factor(struct clk_hw *hw);
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski k.kozlowski@samsung.com --- drivers/clk/clk-divider.c | 10 +++++++--- drivers/clk/clk-fractional-divider.c | 5 +++-- drivers/clk/samsung/clk.c | 8 ++++---- include/linux/clk-provider.h | 13 +++++++++---- 4 files changed, 23 insertions(+), 13 deletions(-)
diff --git a/drivers/clk/clk-divider.c b/drivers/clk/clk-divider.c index a0f55bc1ad3d..8e9e89f8c5a8 100644 --- a/drivers/clk/clk-divider.c +++ b/drivers/clk/clk-divider.c @@ -490,7 +490,8 @@ static struct clk_hw *_register_divider(struct device *dev, const char *name, * @clk_divider_flags: divider-specific flags for this clock * @lock: shared register lock for this clock */ -struct clk *clk_register_divider(struct device *dev, const char *name, +struct clk *clk_register_divider(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_divider_flags, spinlock_t *lock) @@ -517,7 +518,8 @@ EXPORT_SYMBOL_GPL(clk_register_divider); * @clk_divider_flags: divider-specific flags for this clock * @lock: shared register lock for this clock */ -struct clk_hw *clk_hw_register_divider(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_divider(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_divider_flags, spinlock_t *lock) @@ -541,7 +543,8 @@ EXPORT_SYMBOL_GPL(clk_hw_register_divider); * @table: array of divider/value pairs ending with a div set to 0 * @lock: shared register lock for this clock */ -struct clk *clk_register_divider_table(struct device *dev, const char *name, +struct clk *clk_register_divider_table(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_divider_flags, const struct clk_div_table *table, @@ -572,6 +575,7 @@ EXPORT_SYMBOL_GPL(clk_register_divider_table); * @lock: shared register lock for this clock */ struct clk_hw *clk_hw_register_divider_table(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_divider_flags, const struct 
clk_div_table *table, diff --git a/drivers/clk/clk-fractional-divider.c b/drivers/clk/clk-fractional-divider.c index aab904618eb6..1b712a424fd4 100644 --- a/drivers/clk/clk-fractional-divider.c +++ b/drivers/clk/clk-fractional-divider.c @@ -117,6 +117,7 @@ const struct clk_ops clk_fractional_divider_ops = { EXPORT_SYMBOL_GPL(clk_fractional_divider_ops);
struct clk_hw *clk_hw_register_fractional_divider(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 mshift, u8 mwidth, u8 nshift, u8 nwidth, u8 clk_divider_flags, spinlock_t *lock) @@ -158,14 +159,14 @@ struct clk_hw *clk_hw_register_fractional_divider(struct device *dev, } EXPORT_SYMBOL_GPL(clk_hw_register_fractional_divider);
-struct clk *clk_register_fractional_divider(struct device *dev, +struct clk *clk_register_fractional_divider(struct device *dev, struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 mshift, u8 mwidth, u8 nshift, u8 nwidth, u8 clk_divider_flags, spinlock_t *lock) { struct clk_hw *hw;
- hw = clk_hw_register_fractional_divider(dev, name, parent_name, flags, + hw = clk_hw_register_fractional_divider(dev, ctrl, name, parent_name, flags, reg, mshift, mwidth, nshift, nwidth, clk_divider_flags, lock); if (IS_ERR(hw)) diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c index 7bfd895781c5..9a8068d4da77 100644 --- a/drivers/clk/samsung/clk.c +++ b/drivers/clk/samsung/clk.c @@ -230,14 +230,14 @@ void __init samsung_clk_register_div(struct samsung_clk_provider *ctx,
for (idx = 0; idx < nr_clk; idx++, list++) { if (list->table) - clk = clk_register_divider_table(NULL, list->name, - list->parent_name, list->flags, + clk = clk_register_divider_table(NULL, ctx->clk_ctrl, + list->name, list->parent_name, list->flags, ctx->reg_base + list->offset, list->shift, list->width, list->div_flags, list->table, &ctx->lock); else - clk = clk_register_divider(NULL, list->name, - list->parent_name, list->flags, + clk = clk_register_divider(NULL, ctx->clk_ctrl, + list->name, list->parent_name, list->flags, ctx->reg_base + list->offset, list->shift, list->width, list->div_flags, &ctx->lock); if (IS_ERR(clk)) { diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h index 26171815948e..cfb3aa3912c5 100644 --- a/include/linux/clk-provider.h +++ b/include/linux/clk-provider.h @@ -428,20 +428,23 @@ int divider_get_val(unsigned long rate, unsigned long parent_rate, const struct clk_div_table *table, u8 width, unsigned long flags);
-struct clk *clk_register_divider(struct device *dev, const char *name, - const char *parent_name, unsigned long flags, +struct clk *clk_register_divider(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_divider_flags, spinlock_t *lock); -struct clk_hw *clk_hw_register_divider(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_divider(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_divider_flags, spinlock_t *lock); -struct clk *clk_register_divider_table(struct device *dev, const char *name, +struct clk *clk_register_divider_table(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_divider_flags, const struct clk_div_table *table, spinlock_t *lock); struct clk_hw *clk_hw_register_divider_table(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 shift, u8 width, u8 clk_divider_flags, const struct clk_div_table *table, @@ -586,10 +589,12 @@ struct clk_fractional_divider {
extern const struct clk_ops clk_fractional_divider_ops; struct clk *clk_register_fractional_divider(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 mshift, u8 mwidth, u8 nshift, u8 nwidth, u8 clk_divider_flags, spinlock_t *lock); struct clk_hw *clk_hw_register_fractional_divider(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char *parent_name, unsigned long flags, void __iomem *reg, u8 mshift, u8 mwidth, u8 nshift, u8 nwidth, u8 clk_divider_flags, spinlock_t *lock);
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski k.kozlowski@samsung.com --- drivers/clk/clk-composite.c | 8 +++++--- include/linux/clk-provider.h | 6 ++++-- 2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/drivers/clk/clk-composite.c b/drivers/clk/clk-composite.c index 00269de2f390..8ad5c2c4a329 100644 --- a/drivers/clk/clk-composite.c +++ b/drivers/clk/clk-composite.c @@ -211,7 +211,8 @@ static void clk_composite_disable(struct clk_hw *hw) gate_ops->disable(gate_hw); }
-struct clk_hw *clk_hw_register_composite(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_composite(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, int num_parents, struct clk_hw *mux_hw, const struct clk_ops *mux_ops, struct clk_hw *rate_hw, const struct clk_ops *rate_ops, @@ -324,7 +325,8 @@ err: return hw; }
-struct clk *clk_register_composite(struct device *dev, const char *name, +struct clk *clk_register_composite(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, int num_parents, struct clk_hw *mux_hw, const struct clk_ops *mux_ops, struct clk_hw *rate_hw, const struct clk_ops *rate_ops, @@ -333,7 +335,7 @@ struct clk *clk_register_composite(struct device *dev, const char *name, { struct clk_hw *hw;
- hw = clk_hw_register_composite(dev, name, parent_names, num_parents, + hw = clk_hw_register_composite(dev, ctrl, name, parent_names, num_parents, mux_hw, mux_ops, rate_hw, rate_ops, gate_hw, gate_ops, flags); if (IS_ERR(hw)) diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h index cfb3aa3912c5..2bd0a8cb7a9c 100644 --- a/include/linux/clk-provider.h +++ b/include/linux/clk-provider.h @@ -663,14 +663,16 @@ struct clk_composite {
#define to_clk_composite(_hw) container_of(_hw, struct clk_composite, hw)
-struct clk *clk_register_composite(struct device *dev, const char *name, +struct clk *clk_register_composite(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, int num_parents, struct clk_hw *mux_hw, const struct clk_ops *mux_ops, struct clk_hw *rate_hw, const struct clk_ops *rate_ops, struct clk_hw *gate_hw, const struct clk_ops *gate_ops, unsigned long flags); void clk_unregister_composite(struct clk *clk); -struct clk_hw *clk_hw_register_composite(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_composite(struct device *dev, + struct clk_ctrl *ctrl, const char *name, const char * const *parent_names, int num_parents, struct clk_hw *mux_hw, const struct clk_ops *mux_ops, struct clk_hw *rate_hw, const struct clk_ops *rate_ops,
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski k.kozlowski@samsung.com --- drivers/clk/clk-gpio.c | 29 +++++++++++++++++------------ include/linux/clk-provider.h | 12 ++++++++---- 2 files changed, 25 insertions(+), 16 deletions(-)
diff --git a/drivers/clk/clk-gpio.c b/drivers/clk/clk-gpio.c index 86b245746a6b..8ebd520c1aa0 100644 --- a/drivers/clk/clk-gpio.c +++ b/drivers/clk/clk-gpio.c @@ -94,7 +94,8 @@ const struct clk_ops clk_gpio_mux_ops = { }; EXPORT_SYMBOL_GPL(clk_gpio_mux_ops);
-static struct clk_hw *clk_register_gpio(struct device *dev, const char *name, +static struct clk_hw *clk_register_gpio(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, u8 num_parents, unsigned gpio, bool active_low, unsigned long flags, const struct clk_ops *clk_gpio_ops) @@ -168,25 +169,27 @@ static struct clk_hw *clk_register_gpio(struct device *dev, const char *name, * @active_low: true if gpio should be set to 0 to enable clock * @flags: clock flags */ -struct clk_hw *clk_hw_register_gpio_gate(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_gpio_gate(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char *parent_name, unsigned gpio, bool active_low, unsigned long flags) { - return clk_register_gpio(dev, name, + return clk_register_gpio(dev, ctrl, name, (parent_name ? &parent_name : NULL), (parent_name ? 1 : 0), gpio, active_low, flags, &clk_gpio_gate_ops); } EXPORT_SYMBOL_GPL(clk_hw_register_gpio_gate);
-struct clk *clk_register_gpio_gate(struct device *dev, const char *name, +struct clk *clk_register_gpio_gate(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char *parent_name, unsigned gpio, bool active_low, unsigned long flags) { struct clk_hw *hw;
- hw = clk_hw_register_gpio_gate(dev, name, parent_name, gpio, active_low, - flags); + hw = clk_hw_register_gpio_gate(dev, ctrl, name, parent_name, gpio, + active_low, flags); if (IS_ERR(hw)) return ERR_CAST(hw); return hw->clk; @@ -203,7 +206,8 @@ EXPORT_SYMBOL_GPL(clk_register_gpio_gate); * @active_low: true if gpio should be set to 0 to enable clock * @flags: clock flags */ -struct clk_hw *clk_hw_register_gpio_mux(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_gpio_mux(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, u8 num_parents, unsigned gpio, bool active_low, unsigned long flags) { @@ -212,18 +216,19 @@ struct clk_hw *clk_hw_register_gpio_mux(struct device *dev, const char *name, return ERR_PTR(-EINVAL); }
- return clk_register_gpio(dev, name, parent_names, num_parents, + return clk_register_gpio(dev, ctrl, name, parent_names, num_parents, gpio, active_low, flags, &clk_gpio_mux_ops); } EXPORT_SYMBOL_GPL(clk_hw_register_gpio_mux);
-struct clk *clk_register_gpio_mux(struct device *dev, const char *name, +struct clk *clk_register_gpio_mux(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, u8 num_parents, unsigned gpio, bool active_low, unsigned long flags) { struct clk_hw *hw;
- hw = clk_hw_register_gpio_mux(dev, name, parent_names, num_parents, + hw = clk_hw_register_gpio_mux(dev, ctrl, name, parent_names, num_parents, gpio, active_low, flags); if (IS_ERR(hw)) return ERR_CAST(hw); @@ -271,10 +276,10 @@ static int gpio_clk_driver_probe(struct platform_device *pdev) active_low = of_flags & OF_GPIO_ACTIVE_LOW;
if (is_mux) - clk = clk_register_gpio_mux(&pdev->dev, node->name, + clk = clk_register_gpio_mux(&pdev->dev, NULL, node->name, parent_names, num_parents, gpio, active_low, 0); else - clk = clk_register_gpio_gate(&pdev->dev, node->name, + clk = clk_register_gpio_gate(&pdev->dev, NULL, node->name, parent_names ? parent_names[0] : NULL, gpio, active_low, 0); if (IS_ERR(clk)) diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h index 2bd0a8cb7a9c..c39760cba9ac 100644 --- a/include/linux/clk-provider.h +++ b/include/linux/clk-provider.h @@ -698,10 +698,12 @@ struct clk_gpio { #define to_clk_gpio(_hw) container_of(_hw, struct clk_gpio, hw)
extern const struct clk_ops clk_gpio_gate_ops; -struct clk *clk_register_gpio_gate(struct device *dev, const char *name, +struct clk *clk_register_gpio_gate(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char *parent_name, unsigned gpio, bool active_low, unsigned long flags); -struct clk_hw *clk_hw_register_gpio_gate(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_gpio_gate(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char *parent_name, unsigned gpio, bool active_low, unsigned long flags); void clk_hw_unregister_gpio_gate(struct clk_hw *hw); @@ -717,10 +719,12 @@ void clk_hw_unregister_gpio_gate(struct clk_hw *hw); */
extern const struct clk_ops clk_gpio_mux_ops; -struct clk *clk_register_gpio_mux(struct device *dev, const char *name, +struct clk *clk_register_gpio_mux(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, u8 num_parents, unsigned gpio, bool active_low, unsigned long flags); -struct clk_hw *clk_hw_register_gpio_mux(struct device *dev, const char *name, +struct clk_hw *clk_hw_register_gpio_mux(struct device *dev, struct clk_ctrl *ctrl, + const char *name, const char * const *parent_names, u8 num_parents, unsigned gpio, bool active_low, unsigned long flags); void clk_hw_unregister_gpio_mux(struct clk_hw *hw);
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski k.kozlowski@samsung.com --- sound/soc/samsung/i2s.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/sound/soc/samsung/i2s.c b/sound/soc/samsung/i2s.c index fa3ff03d97d5..1ec90daa4df4 100644 --- a/sound/soc/samsung/i2s.c +++ b/sound/soc/samsung/i2s.c @@ -104,6 +104,7 @@ struct i2s_dai { /* Below fields are only valid if this is the primary FIFO */ struct clk *clk_table[3]; struct clk_onecell_data clk_data; + struct clk_ctrl *clk_ctrl; };
/* Lock for cross i/f checks */ @@ -1148,6 +1149,7 @@ static void i2s_unregister_clock_provider(struct platform_device *pdev)
of_clk_del_provider(pdev->dev.of_node); i2s_unregister_clocks(i2s); + clk_ctrl_unregister(i2s->clk_ctrl); }
static int i2s_register_clock_provider(struct platform_device *pdev) @@ -1164,6 +1166,10 @@ static int i2s_register_clock_provider(struct platform_device *pdev) if (!of_find_property(dev->of_node, "#clock-cells", NULL)) return 0;
+ i2s->clk_ctrl = clk_ctrl_register(dev); + if (IS_ERR(i2s->clk_ctrl)) + return PTR_ERR(i2s->clk_ctrl); + /* Get the RCLKSRC mux clock parent clock names */ for (i = 0; i < ARRAY_SIZE(p_names); i++) { rclksrc = clk_get(dev, clk_name[i]); @@ -1179,13 +1185,14 @@ static int i2s_register_clock_provider(struct platform_device *pdev) writel(val | PSR_PSREN, i2s->addr + I2SPSR);
i2s->clk_table[CLK_I2S_RCLK_SRC] = clk_register_mux(NULL, + i2s->clk_ctrl, "i2s_rclksrc", p_names, ARRAY_SIZE(p_names), CLK_SET_RATE_NO_REPARENT | CLK_SET_RATE_PARENT, i2s->addr + I2SMOD, reg_info->rclksrc_off, 1, 0, i2s->lock);
i2s->clk_table[CLK_I2S_RCLK_PSR] = clk_register_divider(NULL, - "i2s_presc", "i2s_rclksrc", + i2s->clk_ctrl, "i2s_presc", "i2s_rclksrc", CLK_SET_RATE_PARENT, i2s->addr + I2SPSR, 8, 6, 0, i2s->lock);
@@ -1195,8 +1202,8 @@ static int i2s_register_clock_provider(struct platform_device *pdev) of_property_read_string_index(dev->of_node, "clock-output-names", 0, &clk_name[0]);
- i2s->clk_table[CLK_I2S_CDCLK] = clk_register_gate(NULL, clk_name[0], - p_names[0], CLK_SET_RATE_PARENT, + i2s->clk_table[CLK_I2S_CDCLK] = clk_register_gate(NULL, i2s->clk_ctrl, + clk_name[0], p_names[0], CLK_SET_RATE_PARENT, i2s->addr + I2SMOD, reg_info->cdclkcon_off, CLK_GATE_SET_TO_DISABLE, i2s->lock);
On Tue, Aug 16, 2016 at 03:35:09PM +0200, Krzysztof Kozlowski wrote:
Allocate a clock controller and use new clk_register_with_ctrl() API.
Acked-by: Mark Brown broonie@kernel.org
i2s_unregister_clocks(i2s);
+	clk_ctrl_unregister(i2s->clk_ctrl);
devm?
On Tue, Aug 16, 2016 at 03:13:45PM +0100, Mark Brown wrote:
On Tue, Aug 16, 2016 at 03:35:09PM +0200, Krzysztof Kozlowski wrote:
Allocate a clock controller and use new clk_register_with_ctrl() API.
Acked-by: Mark Brown broonie@kernel.org
i2s_unregister_clocks(i2s);
+	clk_ctrl_unregister(i2s->clk_ctrl);
devm?
Right, that would make sense. Thanks for pointing this out.
Best regards, Krzysztof
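Mark's "devm?" suggestion would drop the explicit clk_ctrl_unregister() call from the unregister path entirely. A minimal userspace sketch of the idea, assuming a devm_add_action()-style hook (the devres machinery is stubbed out here, and devm_clk_ctrl_register() is a hypothetical helper modeled on this RFC's clk_ctrl API, not an existing kernel function):

```c
/* Model of the devm idea: register a cleanup action when the controller
 * is created, and let device teardown run it, instead of calling
 * clk_ctrl_unregister() by hand in the driver's remove path.
 * All names here are illustrative stubs. */
#include <assert.h>
#include <stdlib.h>

struct clk_ctrl { int registered; };

/* Tiny devres model: a stack of (action, data) pairs run in reverse. */
#define MAX_ACTIONS 8
struct device {
	void (*action[MAX_ACTIONS])(void *);
	void *data[MAX_ACTIONS];
	int n;
};

static int devm_add_action(struct device *dev, void (*a)(void *), void *d)
{
	if (dev->n == MAX_ACTIONS)
		return -1;
	dev->action[dev->n] = a;
	dev->data[dev->n] = d;
	dev->n++;
	return 0;
}

static void device_teardown(struct device *dev)
{
	while (dev->n > 0) {		/* reverse order, like real devres */
		dev->n--;
		dev->action[dev->n](dev->data[dev->n]);
	}
}

static void clk_ctrl_unregister(void *ctrl)
{
	((struct clk_ctrl *)ctrl)->registered = 0;
}

static struct clk_ctrl *devm_clk_ctrl_register(struct device *dev)
{
	struct clk_ctrl *ctrl = malloc(sizeof(*ctrl));

	if (!ctrl)
		return NULL;
	ctrl->registered = 1;
	if (devm_add_action(dev, clk_ctrl_unregister, ctrl)) {
		free(ctrl);
		return NULL;
	}
	return ctrl;	/* no manual unregister needed in remove() */
}
```

With this shape, i2s_unregister_clock_provider() would not need to touch the controller at all; the real kernel equivalent of devm_add_action() is devm_add_action_or_reset().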
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski k.kozlowski@samsung.com --- drivers/clk/samsung/clk-exynos-audss.c | 30 +++++++++++++++++++----------- 1 file changed, 19 insertions(+), 11 deletions(-)
diff --git a/drivers/clk/samsung/clk-exynos-audss.c b/drivers/clk/samsung/clk-exynos-audss.c index bdf8b971f332..c13e6ab6f3b6 100644 --- a/drivers/clk/samsung/clk-exynos-audss.c +++ b/drivers/clk/samsung/clk-exynos-audss.c @@ -30,6 +30,7 @@ static DEFINE_SPINLOCK(lock); static struct clk **clk_table; static void __iomem *reg_base; static struct clk_onecell_data clk_data; +static struct clk_ctrl *clk_ctrl; /* * On Exynos5420 this will be a clock which has to be enabled before any * access to audss registers. Typically a child of EPLL. @@ -135,6 +136,10 @@ static int exynos_audss_clk_probe(struct platform_device *pdev) if (!clk_table) return -ENOMEM;
+ clk_ctrl = clk_ctrl_register(&pdev->dev); /* FIXME: Use devm-like, leaks now on error path */ + if (IS_ERR(clk_ctrl)) + return PTR_ERR(clk_ctrl); + clk_data.clks = clk_table; if (variant == TYPE_EXYNOS5420) clk_data.clk_num = EXYNOS_AUDSS_MAX_CLKS; @@ -159,7 +164,7 @@ static int exynos_audss_clk_probe(struct platform_device *pdev) } } } - clk_table[EXYNOS_MOUT_AUDSS] = clk_register_mux(NULL, "mout_audss", + clk_table[EXYNOS_MOUT_AUDSS] = clk_register_mux(NULL, clk_ctrl, "mout_audss", mout_audss_p, ARRAY_SIZE(mout_audss_p), CLK_SET_RATE_NO_REPARENT, reg_base + ASS_CLK_SRC, 0, 1, 0, &lock); @@ -170,48 +175,48 @@ static int exynos_audss_clk_probe(struct platform_device *pdev) mout_i2s_p[1] = __clk_get_name(cdclk); if (!IS_ERR(sclk_audio)) mout_i2s_p[2] = __clk_get_name(sclk_audio); - clk_table[EXYNOS_MOUT_I2S] = clk_register_mux(NULL, "mout_i2s", + clk_table[EXYNOS_MOUT_I2S] = clk_register_mux(NULL, clk_ctrl, "mout_i2s", mout_i2s_p, ARRAY_SIZE(mout_i2s_p), CLK_SET_RATE_NO_REPARENT, reg_base + ASS_CLK_SRC, 2, 2, 0, &lock);
- clk_table[EXYNOS_DOUT_SRP] = clk_register_divider(NULL, "dout_srp", + clk_table[EXYNOS_DOUT_SRP] = clk_register_divider(NULL, clk_ctrl, "dout_srp", "mout_audss", 0, reg_base + ASS_CLK_DIV, 0, 4, 0, &lock);
- clk_table[EXYNOS_DOUT_AUD_BUS] = clk_register_divider(NULL, + clk_table[EXYNOS_DOUT_AUD_BUS] = clk_register_divider(NULL, clk_ctrl, "dout_aud_bus", "dout_srp", 0, reg_base + ASS_CLK_DIV, 4, 4, 0, &lock);
- clk_table[EXYNOS_DOUT_I2S] = clk_register_divider(NULL, "dout_i2s", + clk_table[EXYNOS_DOUT_I2S] = clk_register_divider(NULL, clk_ctrl, "dout_i2s", "mout_i2s", 0, reg_base + ASS_CLK_DIV, 8, 4, 0, &lock);
- clk_table[EXYNOS_SRP_CLK] = clk_register_gate(NULL, "srp_clk", + clk_table[EXYNOS_SRP_CLK] = clk_register_gate(NULL, clk_ctrl, "srp_clk", "dout_srp", CLK_SET_RATE_PARENT, reg_base + ASS_CLK_GATE, 0, 0, &lock);
- clk_table[EXYNOS_I2S_BUS] = clk_register_gate(NULL, "i2s_bus", + clk_table[EXYNOS_I2S_BUS] = clk_register_gate(NULL, clk_ctrl, "i2s_bus", "dout_aud_bus", CLK_SET_RATE_PARENT, reg_base + ASS_CLK_GATE, 2, 0, &lock);
- clk_table[EXYNOS_SCLK_I2S] = clk_register_gate(NULL, "sclk_i2s", + clk_table[EXYNOS_SCLK_I2S] = clk_register_gate(NULL, clk_ctrl, "sclk_i2s", "dout_i2s", CLK_SET_RATE_PARENT, reg_base + ASS_CLK_GATE, 3, 0, &lock);
- clk_table[EXYNOS_PCM_BUS] = clk_register_gate(NULL, "pcm_bus", + clk_table[EXYNOS_PCM_BUS] = clk_register_gate(NULL, clk_ctrl, "pcm_bus", "sclk_pcm", CLK_SET_RATE_PARENT, reg_base + ASS_CLK_GATE, 4, 0, &lock);
sclk_pcm_in = devm_clk_get(&pdev->dev, "sclk_pcm_in"); if (!IS_ERR(sclk_pcm_in)) sclk_pcm_p = __clk_get_name(sclk_pcm_in); - clk_table[EXYNOS_SCLK_PCM] = clk_register_gate(NULL, "sclk_pcm", + clk_table[EXYNOS_SCLK_PCM] = clk_register_gate(NULL, clk_ctrl, "sclk_pcm", sclk_pcm_p, CLK_SET_RATE_PARENT, reg_base + ASS_CLK_GATE, 5, 0, &lock);
if (variant == TYPE_EXYNOS5420) { - clk_table[EXYNOS_ADMA] = clk_register_gate(NULL, "adma", + clk_table[EXYNOS_ADMA] = clk_register_gate(NULL, clk_ctrl, "adma", "dout_srp", CLK_SET_RATE_PARENT, reg_base + ASS_CLK_GATE, 9, 0, &lock); } @@ -261,6 +266,9 @@ static int exynos_audss_clk_remove(struct platform_device *pdev) if (!IS_ERR(epll)) clk_disable_unprepare(epll);
+ clk_ctrl_unregister(clk_ctrl); + clk_ctrl = NULL; + return 0; }
Allocate a clock controller and use new clk_register_with_ctrl() API.
Signed-off-by: Krzysztof Kozlowski k.kozlowski@samsung.com --- drivers/clk/samsung/clk-exynos-clkout.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/clk/samsung/clk-exynos-clkout.c b/drivers/clk/samsung/clk-exynos-clkout.c index 96fab6cfb202..9c57a623e741 100644 --- a/drivers/clk/samsung/clk-exynos-clkout.c +++ b/drivers/clk/samsung/clk-exynos-clkout.c @@ -26,6 +26,7 @@ #define EXYNOS5_CLKOUT_MUX_MASK 0x1f
struct exynos_clkout { + struct clk_ctrl *clk_ctrl; struct clk_gate gate; struct clk_mux mux; spinlock_t slock; @@ -66,6 +67,10 @@ static void __init exynos_clkout_init(struct device_node *node, u32 mux_mask) if (!clkout) return;
+ clkout->clk_ctrl = clk_ctrl_register(NULL); + if (IS_ERR(clkout->clk_ctrl)) + goto free_clkout; + spin_lock_init(&clkout->slock);
parent_count = 0; @@ -84,7 +89,7 @@ static void __init exynos_clkout_init(struct device_node *node, u32 mux_mask) }
if (!parent_count) - goto free_clkout; + goto free_ctrl;
clkout->reg = of_iomap(node, 0); if (!clkout->reg) @@ -100,7 +105,7 @@ static void __init exynos_clkout_init(struct device_node *node, u32 mux_mask) clkout->mux.shift = EXYNOS_CLKOUT_MUX_SHIFT; clkout->mux.lock = &clkout->slock;
- clkout->clk_table[0] = clk_register_composite(NULL, "clkout", + clkout->clk_table[0] = clk_register_composite(NULL, clkout->clk_ctrl, "clkout", parent_names, parent_count, &clkout->mux.hw, &clk_mux_ops, NULL, NULL, &clkout->gate.hw, &clk_gate_ops, CLK_SET_RATE_PARENT @@ -126,6 +131,8 @@ clks_put: for (i = 0; i < EXYNOS_CLKOUT_PARENTS; ++i) if (!IS_ERR(parents[i])) clk_put(parents[i]); +free_ctrl: + clk_ctrl_unregister(clkout->clk_ctrl); free_clkout: kfree(clkout);
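The interesting part of the clkout patch is the error unwind: because clk_ctrl_register() now happens before the parent lookup, a later failure must jump to the new free_ctrl label so the controller is unregistered before the containing allocation is freed, in reverse acquisition order. A runnable userspace model of that control flow (clk_ctrl_register()/clk_ctrl_unregister() are this RFC's API, stubbed out here; parent_count selects the path for testing):

```c
/* Model of the goto-based unwind in exynos_clkout_init(): each label
 * releases exactly the resources acquired before the jump, stacked in
 * reverse order. */
#include <assert.h>
#include <stdlib.h>

static int ctrl_live;	/* tracks the controller stub's state */

static int clk_ctrl_register(void)    { ctrl_live = 1; return 0; }
static void clk_ctrl_unregister(void) { ctrl_live = 0; }

/* parent_count == 0 models the "no usable parents" error path. */
static int clkout_init(int parent_count)
{
	void *clkout = malloc(16);

	if (!clkout)
		return -1;

	if (clk_ctrl_register())
		goto free_clkout;	/* controller not live yet */

	if (!parent_count)
		goto free_ctrl;		/* controller IS live: undo it first */

	/* success: clkout and the controller stay live (the model leaks
	 * the allocation on purpose, as the real code keeps it for the
	 * lifetime of the clock provider) */
	return 0;

free_ctrl:
	clk_ctrl_unregister();
free_clkout:
	free(clkout);
	return -1;
}
```

Jumping to free_clkout instead of free_ctrl after a post-registration failure would leak the controller, which is exactly the bug class the patch's label change avoids.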
Replace the global prepare lock with a more fine-grained solution: per-controller locks. Unfortunately, the global lock is still present and used on some paths, but at least the prepare path could be simplified.
This directly removes the deadlocks mentioned in: 1. commit 10ff4c5239a1 ("i2c: exynos5: Fix possible ABBA deadlock by keeping I2C clock prepared"), 2. commit 34e81ad5f0b6 ("i2c: s3c2410: fix ABBA deadlock by keeping clock prepared").
Unfortunately, the locking got much more complicated. The following locking design was chosen:
1. For prepare/unprepare paths: lock only the clock's controller and its parents' controllers.
2. For recalc-rate paths: lock the global lock, the controller and its children.
3. For reparent paths: lock the entire tree, both up and down (children and parents), plus the global lock.
Whenever the clock hierarchy is traversed, controllers are always locked from children to parents.
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
---
 drivers/clk/clk.c | 156 ++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 100 insertions(+), 56 deletions(-)
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index ee1cedfbaa29..37113f7cef64 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -710,11 +710,11 @@ EXPORT_SYMBOL_GPL(__clk_mux_determine_rate_closest);
 
 static void clk_core_unprepare(struct clk_core *core)
 {
-	lockdep_assert_held(&prepare_lock);
-
 	if (!core)
 		return;
 
+	lockdep_assert_held(&core->ctrl->prepare_lock);
+
 	if (WARN_ON(core->prepare_count == 0))
 		return;
 
@@ -737,9 +737,13 @@ static void clk_core_unprepare(struct clk_core *core)
 
 static void clk_core_unprepare_lock(struct clk_core *core)
 {
-	clk_prepare_lock();
+	/*
+	 * TODO: optimize, no need to lock parent controllers
+	 * if prepare_count > 1
+	 */
+	clk_prepare_lock_parents_locked(core);
 	clk_core_unprepare(core);
-	clk_prepare_unlock();
+	clk_prepare_unlock_parents(core);
 }
 
 /**
@@ -766,11 +770,11 @@ static int clk_core_prepare(struct clk_core *core)
 {
 	int ret = 0;
 
-	lockdep_assert_held(&prepare_lock);
-
 	if (!core)
 		return 0;
 
+	lockdep_assert_held(&core->ctrl->prepare_lock);
+
 	if (core->prepare_count == 0) {
 		ret = clk_core_prepare(core->parent);
 		if (ret)
@@ -798,9 +802,13 @@ static int clk_core_prepare_lock(struct clk_core *core)
 {
 	int ret;
 
-	clk_prepare_lock();
+	/*
+	 * TODO: optimize, no need to lock parent controllers
+	 * if prepare_count >= 1 already
+	 */
+	clk_prepare_lock_parents_locked(core);
 	ret = clk_core_prepare(core);
-	clk_prepare_unlock();
+	clk_prepare_unlock_parents(core);
 
 	return ret;
 }
@@ -1055,7 +1063,7 @@ static int clk_disable_unused(void)
 		return 0;
 	}
-	clk_prepare_lock();
+	clk_prepare_lock_all();
 
 	hlist_for_each_entry(core, &clk_root_list, child_node)
 		clk_disable_unused_subtree(core);
@@ -1069,7 +1077,7 @@ static int clk_disable_unused(void)
 	hlist_for_each_entry(core, &clk_orphan_list, child_node)
 		clk_unprepare_unused_subtree(core);
 
-	clk_prepare_unlock();
+	clk_prepare_unlock_all();
 
 	return 0;
 }
@@ -1081,11 +1089,11 @@ static int clk_core_round_rate_nolock(struct clk_core *core,
 	struct clk_core *parent;
 	long rate;
 
-	lockdep_assert_held(&prepare_lock);
-
 	if (!core)
 		return 0;
 
+	lockdep_assert_held(&core->ctrl->prepare_lock);
+
 	parent = core->parent;
 	if (parent) {
 		req->best_parent_hw = parent->hw;
@@ -1164,13 +1172,13 @@ long clk_round_rate(struct clk *clk, unsigned long rate)
 	if (!clk)
 		return 0;
 
-	clk_prepare_lock();
+	clk_prepare_lock_parents(clk->core);
 
 	clk_core_get_boundaries(clk->core, &req.min_rate, &req.max_rate);
 	req.rate = rate;
 
 	ret = clk_core_round_rate_nolock(clk->core, &req);
-	clk_prepare_unlock();
+	clk_prepare_unlock_parents(clk->core);
 
 	if (ret)
 		return ret;
@@ -1228,7 +1236,7 @@ static void __clk_recalc_accuracies(struct clk_core *core)
 	unsigned long parent_accuracy = 0;
 	struct clk_core *child;
-	lockdep_assert_held(&prepare_lock);
+	lockdep_assert_held(&core->ctrl->prepare_lock);
 
 	if (core->parent)
 		parent_accuracy = core->parent->accuracy;
@@ -1247,12 +1255,12 @@ static long clk_core_get_accuracy(struct clk_core *core)
 {
 	unsigned long accuracy;
 
-	clk_prepare_lock();
+	clk_prepare_lock_children(core);
 	if (core && (core->flags & CLK_GET_ACCURACY_NOCACHE))
 		__clk_recalc_accuracies(core);
 
 	accuracy = __clk_get_accuracy(core);
-	clk_prepare_unlock();
+	clk_prepare_unlock_children(core);
 
 	return accuracy;
 }
@@ -1301,7 +1309,7 @@ static void __clk_recalc_rates(struct clk_core *core, unsigned long msg)
 	unsigned long parent_rate = 0;
 	struct clk_core *child;
 
-	lockdep_assert_held(&prepare_lock);
+	lockdep_assert_held(&core->ctrl->prepare_lock);
 
 	old_rate = core->rate;
 
@@ -1325,13 +1333,14 @@ static unsigned long clk_core_get_rate(struct clk_core *core)
 {
 	unsigned long rate;
 
-	clk_prepare_lock();
+	clk_prepare_lock_tree(core);
 
 	if (core && (core->flags & CLK_GET_RATE_NOCACHE))
 		__clk_recalc_rates(core, 0);
 
 	rate = clk_core_get_rate_nolock(core);
-	clk_prepare_unlock();
+
+	clk_prepare_unlock_tree(core);
 
 	return rate;
 }
@@ -1525,7 +1534,7 @@ static int __clk_speculate_rates(struct clk_core *core,
 	unsigned long new_rate;
 	int ret = NOTIFY_DONE;
-	lockdep_assert_held(&prepare_lock);
+	lockdep_assert_held(&core->ctrl->prepare_lock);
 
 	new_rate = clk_recalc(core, parent_rate);
 
@@ -1867,11 +1876,11 @@ int clk_set_rate(struct clk *clk, unsigned long rate)
 		return 0;
 
 	/* prevent racing with updates to the clock topology */
-	clk_prepare_lock();
+	clk_prepare_lock_tree(clk->core);
 
 	ret = clk_core_set_rate_nolock(clk->core, rate);
 
-	clk_prepare_unlock();
+	clk_prepare_unlock_tree(clk->core);
 
 	return ret;
 }
@@ -1899,7 +1908,7 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
 		return -EINVAL;
 	}
 
-	clk_prepare_lock();
+	clk_prepare_lock_parents(clk->core);
 
 	if (min != clk->min_rate || max != clk->max_rate) {
 		clk->min_rate = min;
@@ -1907,7 +1916,7 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
 		ret = clk_core_set_rate_nolock(clk->core, clk->core->req_rate);
 	}
 
-	clk_prepare_unlock();
+	clk_prepare_unlock_parents(clk->core);
 
 	return ret;
 }
@@ -1958,10 +1967,13 @@ struct clk *clk_get_parent(struct clk *clk)
 	if (!clk)
 		return NULL;
 
-	clk_prepare_lock();
-	/* TODO: Create a per-user clk and change callers to call clk_put */
+	clk_prepare_lock_ctrl(clk->core);
+	/*
+	 * TODO: Create a per-user clk and change callers to call clk_put.
+	 * Switch to tree locking then.
+	 */
 	parent = !clk->core->parent ? NULL : clk->core->parent->hw->clk;
-	clk_prepare_unlock();
+	clk_prepare_unlock_ctrl(clk->core);
 
 	return parent;
 }
@@ -1981,8 +1993,11 @@ static void clk_core_reparent(struct clk_core *core,
 					  struct clk_core *new_parent)
 {
 	clk_reparent(core, new_parent);
+
+	clk_prepare_lock_children(core);
 	__clk_recalc_accuracies(core);
 	__clk_recalc_rates(core, POST_RATE_CHANGE);
+	clk_prepare_unlock_children(core);
 }
 void clk_hw_reparent(struct clk_hw *hw, struct clk_hw *new_parent)
@@ -2032,26 +2047,31 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)
 	int ret = 0;
 	int p_index = 0;
 	unsigned long p_rate = 0;
+	struct clk_core *old_parent;
 
 	if (!core)
 		return 0;
 
 	/* prevent racing with updates to the clock topology */
-	clk_prepare_lock();
+	clk_prepare_lock_tree(core);
 
+	old_parent = core->parent;
 	if (core->parent == parent)
-		goto out;
+		goto unlock_tree;
+
+	if (parent && (parent->ctrl != core->ctrl))
+		clk_ctrl_prepare_lock(parent->ctrl);
 
 	/* verify ops for for multi-parent clks */
 	if ((core->num_parents > 1) && (!core->ops->set_parent)) {
 		ret = -ENOSYS;
-		goto out;
+		goto unlock_new_parent;
 	}
 
 	/* check that we are allowed to re-parent if the clock is in use */
 	if ((core->flags & CLK_SET_PARENT_GATE) && core->prepare_count) {
 		ret = -EBUSY;
-		goto out;
+		goto unlock_new_parent;
 	}
 
 	/* try finding the new parent index */
@@ -2061,7 +2081,7 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)
 			pr_debug("%s: clk %s can not be parent of clk %s\n",
 					__func__, parent->name, core->name);
 			ret = p_index;
-			goto out;
+			goto unlock_new_parent;
 		}
 		p_rate = parent->rate;
 	}
@@ -2071,7 +2091,7 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)
 
 	/* abort if a driver objects */
 	if (ret & NOTIFY_STOP_MASK)
-		goto out;
+		goto unlock_new_parent;
 
 	/* do the re-parent */
 	ret = __clk_set_parent(core, parent, p_index);
@@ -2084,8 +2104,13 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)
 		__clk_recalc_accuracies(core);
 	}
 
-out:
-	clk_prepare_unlock();
+
+unlock_new_parent:
+	if (parent && (parent->ctrl != core->ctrl))
+		clk_ctrl_prepare_unlock(parent->ctrl);
+
+unlock_tree:
+	clk_prepare_unlock_oldtree(core, old_parent);
 
 	return ret;
 }
@@ -2148,7 +2173,7 @@ int clk_set_phase(struct clk *clk, int degrees)
 	if (degrees < 0)
 		degrees += 360;
-	clk_prepare_lock();
+	clk_prepare_lock_parents(clk->core);
 
 	/* bail early if nothing to do */
 	if (degrees == clk->core->phase)
@@ -2165,7 +2190,7 @@ int clk_set_phase(struct clk *clk, int degrees)
 	clk->core->phase = degrees;
 
 out:
-	clk_prepare_unlock();
+	clk_prepare_unlock_parents(clk->core);
 
 	return ret;
 }
@@ -2175,9 +2200,9 @@ static int clk_core_get_phase(struct clk_core *core)
 {
 	int ret;
 
-	clk_prepare_lock();
+	clk_prepare_lock_parents(core);
 	ret = core->phase;
-	clk_prepare_unlock();
+	clk_prepare_unlock_parents(core);
 
 	return ret;
 }
@@ -2280,13 +2305,13 @@ static int clk_summary_show(struct seq_file *s, void *data)
 	seq_puts(s, "   clock  enable_cnt  prepare_cnt  rate  accuracy  phase\n");
 	seq_puts(s, "----------------------------------------------------------------------------------------\n");
 
-	clk_prepare_lock();
+	clk_prepare_lock_all();
 
 	for (; *lists; lists++)
 		hlist_for_each_entry(c, *lists, child_node)
 			clk_summary_show_subtree(s, c, 0);
 
-	clk_prepare_unlock();
+	clk_prepare_unlock_all();
 
 	return 0;
 }
@@ -2343,7 +2368,7 @@ static int clk_dump(struct seq_file *s, void *data)
 
 	seq_printf(s, "{");
 
-	clk_prepare_lock();
+	clk_prepare_lock_all();
 
 	for (; *lists; lists++) {
 		hlist_for_each_entry(c, *lists, child_node) {
@@ -2354,7 +2379,7 @@ static int clk_dump(struct seq_file *s, void *data)
 		}
 	}
 
-	clk_prepare_unlock();
+	clk_prepare_unlock_all();
 
 	seq_puts(s, "}\n");
 	return 0;
@@ -2572,7 +2597,7 @@ static int __clk_core_init(struct clk_core *core)
 	if (!core)
 		return -EINVAL;
-	clk_prepare_lock();
+	clk_prepare_lock_ctrl(core);
 
 	/* check to see if a clock with this name is already registered */
 	if (clk_core_lookup(core->name)) {
@@ -2696,10 +2721,12 @@ static int __clk_core_init(struct clk_core *core)
 		 * redundant call to the .set_rate op, if it exists
 		 */
 		if (parent) {
+			clk_prepare_lock_children(orphan);
 			__clk_set_parent_before(orphan, parent);
 			__clk_set_parent_after(orphan, parent, NULL);
 			__clk_recalc_accuracies(orphan);
 			__clk_recalc_rates(orphan, 0);
+			clk_prepare_unlock_children(orphan);
 		}
 	}
 
@@ -2726,7 +2753,7 @@ static int __clk_core_init(struct clk_core *core)
 
 	kref_init(&core->ref);
 out:
-	clk_prepare_unlock();
+	clk_prepare_unlock_ctrl(core);
 
 	if (!ret)
 		clk_debug_register(core);
@@ -2752,18 +2779,19 @@ struct clk *__clk_create_clk(struct clk_hw *hw, const char *dev_id,
 	clk->con_id = con_id;
 	clk->max_rate = ULONG_MAX;
 
-	clk_prepare_lock();
+	/* FIXME: Check if core->ctrl is always initialized here (in all paths) */
+	clk_prepare_lock_ctrl(clk->core);
 	hlist_add_head(&clk->clks_node, &hw->core->clks);
-	clk_prepare_unlock();
+	clk_prepare_unlock_ctrl(clk->core);
 
 	return clk;
 }
 
 void __clk_free_clk(struct clk *clk)
 {
-	clk_prepare_lock();
+	clk_prepare_lock_ctrl(clk->core);
 	hlist_del(&clk->clks_node);
-	clk_prepare_unlock();
+	clk_prepare_unlock_ctrl(clk->core);
 
 	kfree(clk);
 }
@@ -2979,12 +3007,18 @@ void clk_unregister(struct clk *clk)
 	clk_debug_unregister(clk->core);
 
+	/*
+	 * Take prepare lock again because controller lock
+	 * will have to be released before final clk_release.
+	 */
 	clk_prepare_lock();
+	clk_prepare_lock_parents(clk->core);
 
 	if (clk->core->ops == &clk_nodrv_ops) {
 		pr_err("%s: unregistered clock: %s\n", __func__,
 		       clk->core->name);
-		goto unlock;
+		clk_prepare_unlock_parents(clk->core);
+		goto half_unlock;
 	}
 	/*
 	 * Assign empty clock ops for consumers that might still hold
@@ -3009,8 +3043,10 @@ void clk_unregister(struct clk *clk)
 	if (clk->core->prepare_count)
 		pr_warn("%s: unregistering prepared clock: %s\n",
 					__func__, clk->core->name);
+	clk_prepare_unlock_parents(clk->core);
 	kref_put(&clk->core->ref, __clk_release);
-unlock:
+
+half_unlock:
 	clk_prepare_unlock();
 }
 EXPORT_SYMBOL_GPL(clk_unregister);
@@ -3166,7 +3202,12 @@ void __clk_put(struct clk *clk)
 	if (!clk || WARN_ON_ONCE(IS_ERR(clk)))
 		return;
 
+	/*
+	 * Take prepare lock again because controller lock
+	 * will have to be released before final clk_release.
+	 */
 	clk_prepare_lock();
+	clk_prepare_lock_parents(clk->core);
 
 	hlist_del(&clk->clks_node);
 	if (clk->min_rate > clk->core->req_rate ||
@@ -3174,6 +3215,9 @@ void __clk_put(struct clk *clk)
 		clk_core_set_rate_nolock(clk->core, clk->core->req_rate);
 
 	owner = clk->core->owner;
+
+	clk_prepare_unlock_parents(clk->core);
+
 	kref_put(&clk->core->ref, __clk_release);
 
 	clk_prepare_unlock();
@@ -3213,7 +3257,7 @@ int clk_notifier_register(struct clk *clk, struct notifier_block *nb)
 	if (!clk || !nb)
 		return -EINVAL;
-	clk_prepare_lock();
+	clk_prepare_lock_ctrl(clk->core);
 
 	/* search the list of notifiers for this clk */
 	list_for_each_entry(cn, &clk_notifier_list, node)
@@ -3237,7 +3281,7 @@ int clk_notifier_register(struct clk *clk, struct notifier_block *nb)
 	clk->core->notifier_count++;
 
 out:
-	clk_prepare_unlock();
+	clk_prepare_unlock_ctrl(clk->core);
 
 	return ret;
 }
@@ -3262,7 +3306,7 @@ int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb)
 	if (!clk || !nb)
 		return -EINVAL;
 
-	clk_prepare_lock();
+	clk_prepare_lock_ctrl(clk->core);
 
 	list_for_each_entry(cn, &clk_notifier_list, node)
 		if (cn->clk == clk)
@@ -3284,7 +3328,7 @@ int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb)
 		ret = -ENOENT;
 	}
 
-	clk_prepare_unlock();
+	clk_prepare_unlock_ctrl(clk->core);
 
 	return ret;
 }
This reverts commit 10ff4c5239a137abfc896ec73ef3d15a0f86a16a.
---
 drivers/i2c/busses/i2c-exynos5.c | 24 +++++------------------
 1 file changed, 5 insertions(+), 19 deletions(-)

diff --git a/drivers/i2c/busses/i2c-exynos5.c b/drivers/i2c/busses/i2c-exynos5.c
index c0e3ada02876..8710052eeb6b 100644
--- a/drivers/i2c/busses/i2c-exynos5.c
+++ b/drivers/i2c/busses/i2c-exynos5.c
@@ -671,9 +671,7 @@ static int exynos5_i2c_xfer(struct i2c_adapter *adap,
 		return -EIO;
 	}
 
-	ret = clk_enable(i2c->clk);
-	if (ret)
-		return ret;
+	clk_prepare_enable(i2c->clk);
 
 	for (i = 0; i < num; i++, msgs++) {
 		stop = (i == num - 1);
@@ -697,7 +695,7 @@ static int exynos5_i2c_xfer(struct i2c_adapter *adap,
 	}
 
 out:
-	clk_disable(i2c->clk);
+	clk_disable_unprepare(i2c->clk);
 	return ret;
 }
 
@@ -749,9 +747,7 @@ static int exynos5_i2c_probe(struct platform_device *pdev)
 		return -ENOENT;
 	}
 
-	ret = clk_prepare_enable(i2c->clk);
-	if (ret)
-		return ret;
+	clk_prepare_enable(i2c->clk);
 
 	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	i2c->regs = devm_ioremap_resource(&pdev->dev, mem);
@@ -803,10 +799,6 @@ static int exynos5_i2c_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, i2c);
 
-	clk_disable(i2c->clk);
-
-	return 0;
-
 err_clk:
 	clk_disable_unprepare(i2c->clk);
 	return ret;
@@ -818,8 +810,6 @@ static int exynos5_i2c_remove(struct platform_device *pdev)
 	i2c_del_adapter(&i2c->adap);
 
-	clk_unprepare(i2c->clk);
-
 	return 0;
 }
 
@@ -831,8 +821,6 @@ static int exynos5_i2c_suspend_noirq(struct device *dev)
 
 	i2c->suspended = 1;
 
-	clk_unprepare(i2c->clk);
-
 	return 0;
 }
 
@@ -842,9 +830,7 @@ static int exynos5_i2c_resume_noirq(struct device *dev)
 	struct exynos5_i2c *i2c = platform_get_drvdata(pdev);
 	int ret = 0;
 
-	ret = clk_prepare_enable(i2c->clk);
-	if (ret)
-		return ret;
+	clk_prepare_enable(i2c->clk);
 
 	ret = exynos5_hsi2c_clock_setup(i2c);
 	if (ret) {
@@ -853,7 +839,7 @@ static int exynos5_i2c_resume_noirq(struct device *dev)
 	}
 
 	exynos5_i2c_init(i2c);
-	clk_disable(i2c->clk);
+	clk_disable_unprepare(i2c->clk);
 	i2c->suspended = 0;
 
 	return 0;
This reverts commit 34e81ad5f0b60007c95995eb7803da7e00c6c611.

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>

Conflicts:
	drivers/i2c/busses/i2c-s3c2410.c
---
 drivers/i2c/busses/i2c-s3c2410.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
index 38dc1cacfd8b..571250415617 100644
--- a/drivers/i2c/busses/i2c-s3c2410.c
+++ b/drivers/i2c/busses/i2c-s3c2410.c
@@ -771,16 +771,14 @@ static int s3c24xx_i2c_xfer(struct i2c_adapter *adap,
 	int retry;
 	int ret;
-	ret = clk_enable(i2c->clk);
-	if (ret)
-		return ret;
+	clk_prepare_enable(i2c->clk);
 
 	for (retry = 0; retry < adap->retries; retry++) {
 
 		ret = s3c24xx_i2c_doxfer(i2c, msgs, num);
 
 		if (ret != -EAGAIN) {
-			clk_disable(i2c->clk);
+			clk_disable_unprepare(i2c->clk);
 			return ret;
 		}
 
@@ -789,7 +787,7 @@ static int s3c24xx_i2c_xfer(struct i2c_adapter *adap,
 		udelay(100);
 	}
 
-	clk_disable(i2c->clk);
+	clk_disable_unprepare(i2c->clk);
 	return -EREMOTEIO;
 }
@@ -1165,7 +1163,7 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
 	}
 
 	ret = s3c24xx_i2c_init(i2c);
-	clk_disable(i2c->clk);
+	clk_disable_unprepare(i2c->clk);
 	if (ret != 0) {
 		dev_err(&pdev->dev, "I2C controller init failed\n");
 		clk_unprepare(i2c->clk);
@@ -1180,7 +1178,6 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
 	i2c->irq = ret = platform_get_irq(pdev, 0);
 	if (ret <= 0) {
 		dev_err(&pdev->dev, "cannot find IRQ\n");
-		clk_unprepare(i2c->clk);
 		return ret;
 	}
 
@@ -1188,7 +1185,6 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
 				  0, dev_name(&pdev->dev), i2c);
 		if (ret != 0) {
 			dev_err(&pdev->dev, "cannot claim IRQ %d\n", i2c->irq);
-			clk_unprepare(i2c->clk);
 			return ret;
 		}
 	}
@@ -1196,7 +1192,6 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
 	ret = s3c24xx_i2c_register_cpufreq(i2c);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "failed to register cpufreq notifier\n");
-		clk_unprepare(i2c->clk);
 		return ret;
 	}
 
@@ -1218,7 +1213,6 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
 		dev_err(&pdev->dev, "failed to add bus to i2c core\n");
 		pm_runtime_disable(&pdev->dev);
 		s3c24xx_i2c_deregister_cpufreq(i2c);
-		clk_unprepare(i2c->clk);
 		return ret;
 	}
 
@@ -1230,8 +1224,6 @@ static int s3c24xx_i2c_remove(struct platform_device *pdev)
 {
 	struct s3c24xx_i2c *i2c = platform_get_drvdata(pdev);
 
-	clk_unprepare(i2c->clk);
-
 	pm_runtime_disable(&pdev->dev);
 
 	s3c24xx_i2c_deregister_cpufreq(i2c);
@@ -1262,16 +1254,13 @@ static int s3c24xx_i2c_resume_noirq(struct device *dev)
 {
 	struct platform_device *pdev = to_platform_device(dev);
 	struct s3c24xx_i2c *i2c = platform_get_drvdata(pdev);
-	int ret;
 
 	if (!IS_ERR(i2c->sysreg))
 		regmap_write(i2c->sysreg, EXYNOS5_SYS_I2C_CFG, i2c->sys_i2c_cfg);
 
-	ret = clk_enable(i2c->clk);
-	if (ret)
-		return ret;
+	clk_prepare_enable(i2c->clk);
 	s3c24xx_i2c_init(i2c);
-	clk_disable(i2c->clk);
+	clk_disable_unprepare(i2c->clk);
 	i2c->suspended = 0;
 
 	return 0;
On 08/16/2016 03:34 PM, Krzysztof Kozlowski wrote:
Damn, I forgot to describe the overall idea. :) It is mentioned in patch 15 but probably not many will have enough patience to reach it.
The locking idea
================
Each clock controller, representing a hardware block, contains its own prepare lock which protects the clocks inside that controller. The hierarchy itself is protected by the global lock.
In the prepare path, the global prepare lock is no longer taken. This is the direct solution for the deadlock.
The clock hierarchy also imposes a hierarchy between controllers, so when a prepare happens, the parents have to be locked as well.
The following locking design was chosen:
1. For the prepare/unprepare paths: lock only the clock controller and its parents.
2. For the recalc-rate paths: lock the global lock, the controller and its children.
3. For the reparent paths: lock the entire tree up and down (children and parents) and the global lock as well.

Whenever the clock hierarchy is traversed, controllers are always locked from children to parents.
Best regards, Krzysztof
On Tue, Aug 16, 2016 at 03:51:10PM +0200, Krzysztof Kozlowski wrote:
I have been playing with these patches on my Arndale board and they certainly do seem to resolve the interaction issues I have been seeing between SPI and the clock framework, which is awesome and lets me sensibly add clock framework support to our codec drivers. I will keep investigating and, for what it's worth, have a more detailed look through the code.
Thanks, Charles
On Fri, Aug 19, 2016 at 03:46:40PM +0100, Charles Keepax wrote:
On Tue, Aug 16, 2016 at 03:51:10PM +0200, Krzysztof Kozlowski wrote:
On 08/16/2016 03:34 PM, Krzysztof Kozlowski wrote:
Hi,
RFC, please, do not apply, maybe except patch #1 which is harmless.
Introduction
The patchset brings new entity: clock controller representing a hardware block. The clock controller comes with its own prepare lock which is used then in many places. The idea is to fix the deadlock mentioned in commit 10ff4c5239a1 ("i2c: exynos5: Fix possible ABBA deadlock by keeping I2C clock prepared") and commit 34e81ad5f0b6 ("i2c: s3c2410: fix ABBA deadlock by keeping clock prepared").
Damn, I forgot to describe the overall idea. :) It is mentioned in patch 15 but probably not many will have enough of patience to reach it.
The locking idea
Clock controllers representing different hardware blocks, will contain its own prepare lock which protects the clocks inside controller. The hierarchy itself is protected by global lock.
In prepare path, the global prepare lock is removed. This is direct solution for the deadlock.
Clock hierarchy imposes also hierarchy between controllers so when a prepare happens, also parents have to be locked.
Following locking design was chosen:
- For prepare/unprepare paths: lock only clock controller and its parents.
- For recalc rates paths: lock global lock, the controller and its children.
- For reparent paths: lock entire tree up down (children and parents) and the global lock as well.
In each case of traversing the clock hierarchy, the locking of controllers is always from children to parents.
Best regards, Krzysztof
I have been playing with these patches on my Arndale board and they certainly do seem to resolve the interaction issues I have been seeing between SPI and the clock framework, which is awesome and lets me sensibly add clock framework support to our codec drivers. I will keep investigating and, for what it's worth, have a more detailed look through the code.
I am really happy to hear it!
Together with other Samsung folks from Poland, we really spent a lot of time figuring out all the locking cases, possible scenarios and new issues which could come out of it.
I am glad that it also solves other people's cases, not only ours!
Certainly there are a lot of things to improve in the patchset. Probably merging the new "clock controller" entity into the clock provider makes sense.
Best regards, Krzysztof
Hello Krzysztof,
On 08/16/2016 09:34 AM, Krzysztof Kozlowski wrote:
I'm not familiar enough with the common clock framework to do a proper review of this patch-set, but I've tested it on both an Exynos5800 Peach Pi Chromebook and an Exynos5422 Odroid XU4 board and I didn't find any clock regressions.
Also, I confirmed that the possible deadlock in the Odroid XU4 that was fixed by the reverted commit 10ff4c5239a1 doesn't happen anymore with your patches.
Tested-by: Javier Martinez Canillas <javier@osg.samsung.com>
Best regards,
On Fri, Aug 19, 2016 at 03:31:08PM -0400, Javier Martinez Canillas wrote:
I'm not familiar enough with the common clock framework to do a proper review of this patch-set, but I've tested both on an Exynos5800 Peach Pi Chromebook and an Exynos5422 Odroid XU4 board and I didn't find any clock regressions.
Also, I confirmed that the possible deadlock in the Odroid XU4 that was fixed by reverted commit 10ff4c5239a1 doesn't happen anymore with your patches.
Tested-by: Javier Martinez Canillas <javier@osg.samsung.com>
Thanks for testing, I appreciate that!
Best regards, Krzysztof
On 08/16, Krzysztof Kozlowski wrote:
Details
=======
In the case of Exynos-based boards, the deadlock occurs between the clock's prepare_lock and the regmap-i2c lock:
CPU #0:                                  CPU #1:
lock(regmap)                             s2mps11-clk: clk_prepare_lock()
i2c-exynos: clk_prepare_lock() - wait    lock(regmap) - wait
The clk_prepare_lock() calls on the two CPUs come from different clock drivers and components:
- The I2C clock is part of the SoC block and is required by the i2c-s3c2410/i2c-exynos5 drivers,
- The S2MPS11 clock is a separate device whose driver, however, uses an I2C regmap.
The deadlock was reported by lockdep (always) and actually happened in 20% of boots of Odroid XU3 with multi_v7_defconfig. A workaround was implemented by removing the prepare/unprepare calls from I2C transfers. However, these are just workarounds... which can be reverted after applying this patchset.

Additionally, Marek Szyprowski's work on domains/clocks/pinctrl exposed the deadlock again in a different configuration.
Per-controller locks seem to be a misnomer. These patches look more like per-clk_register() locks. The clk registrant can decide how fine grained to make the prepare locking scheme. They can decide to have one lock per clk, one lock for the entire tree, or one lock per arbitrary subtree. I'd prefer to drop the controller concept unless it's really done that way, by attaching to the OF clk provider or device pointer.
Given that, it seems that conceptually these patches allow clk registrants to carve out a set of clks that they want to be treated as an atomic unit with regards to the prepare lock. TL;DR trees of per-process recursive prepare mutex locks where the locks cover one or more clks.
The locks are recursive per-process to simplify the problem where we need to acquire the same lock multiple times while traversing a tree that shares the same lock, right? Otherwise we would need to track which locks we've already grabbed and refcount and we wouldn't be able to re-enter the framework from within the clk ops of the same atomic unit.
As long as we can guarantee that we always take the locks in the same order we'll be fine. The burden to ensure that would be placed on the clk registrants though, and verifying that will be left up to humans? That seems fragile and error prone. We can always say that we take the global lock and then traverse the children up to the roots, but we can't be sure that during the tree traversal we don't take locks in different order because of how the clk registrant has structured things.
It would be great if the different clk APIs were listed, with the order of how locks are taken BTW. And really overall just more information about how the locking scheme works. I had to read this patch series quite a few times to come to (hopefully the right) conclusions.
So I'm not very fond of this design because the locking scheme is pretty much out of the hands of the framework and can be easily broken. I'm biased of course, because I'd prefer we go with my wwmutex design of per-clk locks[1]. Taking locks in any order works fine there, and we resolve quite a few long standing locking problems that we have while improving scalability. The problem there is that we don't get the recursive mutex design (maybe that's a benefit!). Once a clk_op reenters the framework with consumer APIs and tries to grab the same lock we deadlock. This is why I've been slowly splitting consumers from providers so we can easily identify these cases. If we had something like coordinated clk rate switching, we could get rid of clk_ops reentering the framework and avoid this problem (and we really do need to do that).
The one thing I do like about this patch series though is the gradual movement of providers to the new locking scheme. Maybe if we combined that idea with per-clk wwmutex locks we could have the best of both worlds and fix reentrancy later when we need it? There's wwmutex_trylock() that we could use. Perhaps always pass a NULL context if some clk flag isn't set (CLK_USE_PER_CLK_LOCK). Plus we could assign the same wwmutex all over the place when that flag isn't set and allocate a unique wwmutex at clk_register time otherwise. I haven't thought about it deeply, so there may be some glaring problem that I'm missing like when we have half the clks in a tree with the flag set or something. This is where driving home from the office for 45 minutes helps.
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/251039.html
Hi Stephen,
Krzysztof has left Samsung, but we would like to continue this task, because the ABBA deadlocks related to the global prepare lock become more and more problematic for us to work around.
On 2016-09-09 02:24, Stephen Boyd wrote:
On 08/16, Krzysztof Kozlowski wrote:
RFC, please, do not apply, maybe except patch #1 which is harmless.
Introduction
The patchset brings new entity: clock controller representing a hardware block. The clock controller comes with its own prepare lock which is used then in many places. The idea is to fix the deadlock mentioned in commit 10ff4c5239a1 ("i2c: exynos5: Fix possible ABBA deadlock by keeping I2C clock prepared") and commit 34e81ad5f0b6 ("i2c: s3c2410: fix ABBA deadlock by keeping clock prepared").
Disclaimer
Request for comments, so:
- Only exynos_defconfig builds,
- A lot of FIXME/TODO notes remain,
- Checkpatch not run, lines not aligned,
- Other (non-exynos) drivers not converted,
- Probably not yet bisectable,
- Locking became quite complex. The previous single lock was simple: inefficient and deadlock-prone, but simple. Because the clock hierarchy spans controllers, the new locking became quite complicated. I don't like it, but...
Details
On Exynos-based boards, the deadlock occurs between the clock framework's prepare_lock and regmap-i2c's lock:
    CPU #0 (i2c-exynos):            CPU #1 (s2mps11-clk):
    --------------------            ---------------------
    lock(regmap)
                                    clk_prepare_lock()
    clk_prepare_lock() - wait
                                    lock(regmap) - wait
The clk_prepare_lock() calls on both CPUs come from different clock drivers and components:
- the I2C clock is part of the SoC clock block and is required by the i2c-s3c2410/i2c-exynos5 driver,
- the S2MPS11 clock is a separate device, whose driver however uses an I2C regmap.
The deadlock was reported by lockdep (always) and was actually hit in 20% of boots of Odroid XU3 with multi_v7 defconfig. A workaround was implemented by removing the prepare/unprepare calls from I2C transfers. However, these are just workarounds... which can be reverted after applying this patchset.
Additionally, Marek Szyprowski's work on domains/clocks/pinctrl exposed the deadlock again in a different configuration.
Per-controller locks seems to be a misnomer. These patches look more like per-clk_register() locks. The clk registrant can decide how fine grained to make the prepare locking scheme. They can decide to have one lock per clk, one lock for the entire tree, or one lock per arbitrary subtree. I'd prefer to drop the controller concept unless it's really done that way, by attaching to the OF clk provider or device pointer.
This was not phrased well; the idea was to let the provider/controller decide which clocks will use a common lock. Removing the concept of a clock controller and moving the lock to clk_provider will probably be a much better approach.
Given that, it seems that conceptually these patches allow clk registrants to carve out a set of clks that they want to be treated as an atomic unit with regards to the prepare lock. TL;DR trees of per-process recursive prepare mutex locks where the locks cover one or more clks.
The locks are recursive per-process to simplify the problem where we need to acquire the same lock multiple times while traversing a tree that shares the same lock, right? Otherwise we would need to track which locks we've already grabbed and refcount and we wouldn't be able to re-enter the framework from within the clk ops of the same atomic unit.
As long as we can guarantee that we always take the locks in the same order we'll be fine. The burden to ensure that would be placed on the clk registrants though, and verifying that will be left up to humans? That seems fragile and error prone. We can always say that we take the global lock and then traverse the children up to the roots, but we can't be sure that during the tree traversal we don't take locks in different order because of how the clk registrant has structured things.
It would be great if the different clk APIs were listed, with the order of how locks are taken BTW. And really overall just more information about how the locking scheme works. I had to read this patch series quite a few times to come to (hopefully the right) conclusions.
Right, such a change requires much more documentation, especially since it took a significant amount of time to figure out all(?) the possible use cases and locking schemes.
So I'm not very fond of this design because the locking scheme is pretty much out of the hands of the framework and can be easily broken.
Well, switching from a single global lock to more granular locking is always a good direction. Please note that the global lock will sooner or later become a serious bottleneck if one wants to do more aggressive power management and clock gating.
I'm biased of course, because I'd prefer we go with my wwmutex design of per-clk locks[1]. Taking locks in any order works fine there, and we resolve quite a few long standing locking problems that we have while improving scalability. The problem there is that we don't get the recursive mutex design (maybe that's a benefit!).
Do you have any plan to continue working on your approach? A per-clk wwmutex looks like overkill at first glance, but it is probably the only working solution if you want to get rid of recursive locks. I'm still not really convinced that we need a wwmutex here, especially if it is possible to guarantee the same order of locking operations inside the clock core. This requires a bit of cooperation from clock providers (with proper documentation and a list of DOs/DON'Ts it shouldn't be that hard).
Once a clk_op reenters the framework with consumer APIs and tries to grab the same lock we deadlock. This is why I've been slowly splitting consumers from providers so we can easily identify these cases. If we had something like coordinated clk rate switching, we could get rid of clk_ops reentering the framework and avoid this problem (and we really do need to do that).
I'm not sure it really makes sense to split consumers and providers. You will get recursive calls into the clk core via consumer calls anyway if you are implementing an I2C clock for which the I2C bus driver does its own clock gating (the I2C controller uses the consumer clk API).
The one thing I do like about this patch series though is the gradual movement of providers to the new locking scheme. Maybe if we combined that idea with per-clk wwmutex locks we could have the best of both worlds and fix reentrancy later when we need it? There's wwmutex_trylock() that we could use. Perhaps always pass a NULL context if some clk flag isn't set (CLK_USE_PER_CLK_LOCK). Plus we could assign the same wwmutex all over the place when that flag isn't set and allocate a unique wwmutex at clk_register time otherwise. I haven't thought about it deeply, so there may be some glaring problem that I'm missing like when we have half the clks in a tree with the flag set or something. This is where driving home from the office for 45 minutes helps.
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/251039.html
Best regards
On 11/04, Marek Szyprowski wrote:
Hi Stephen,
Krzysztof has left Samsung, but we would like to continue this task, because the ABBA deadlocks related to the global prepare lock are becoming more and more problematic for us to work around.
Hmm. Ok. Thanks for the info.
On 2016-09-09 02:24, Stephen Boyd wrote:
So I'm not very fond of this design because the locking scheme is pretty much out of the hands of the framework and can be easily broken.
Well, switching from a single global lock to more granular locking is always a good direction. Please note that the global lock will sooner or later become a serious bottleneck if one wants to do more aggressive power management and clock gating.
I'm not so sure switching from a global lock to a more granular lock is _always_ a great idea. Sometimes simpler code is better, even if it doesn't scale to a million clk nodes. The largest systems I've seen only have clocks in the hundreds, and a majority of those aren't rate changing in parallel, so it's not like we're suffering from VFS type scalability problems here with tens of thousands of inodes.
That isn't to say I don't agree there's a scalability problem here, but I'd like to actually see numbers to prove that there's some sort of scalability problem before making drastic changes.
I'm biased of course, because I'd prefer we go with my wwmutex design of per-clk locks[1]. Taking locks in any order works fine there, and we resolve quite a few long standing locking problems that we have while improving scalability. The problem there is that we don't get the recursive mutex design (maybe that's a benefit!).
Do you have any plan to continue working on your approach? A per-clk wwmutex looks like overkill at first glance, but it is probably the only working solution if you want to get rid of recursive locks. I'm still not really convinced that we need a wwmutex here, especially if it is possible to guarantee the same order of locking operations inside the clock core. This requires a bit of cooperation from clock providers (with proper documentation and a list of DOs/DON'Ts it shouldn't be that hard).
So far I haven't gotten around to resurrecting the wwmutex stuff. If you have interest in doing it that's great. Having a locking scheme with rules of DO/DON'T sounds brittle to me, unless it can be automated to find problems. I know that I'm not going to spend time policing that.
Once a clk_op reenters the framework with consumer APIs and tries to grab the same lock we deadlock. This is why I've been slowly splitting consumers from providers so we can easily identify these cases. If we had something like coordinated clk rate switching, we could get rid of clk_ops reentering the framework and avoid this problem (and we really do need to do that).
I'm not sure it really makes sense to split consumers and providers. You will get recursive calls into the clk core via consumer calls anyway if you are implementing an I2C clock for which the I2C bus driver does its own clock gating (the I2C controller uses the consumer clk API).
I suppose this is a different topic. Regardless of the recursive call or not, we can easily see that a clk consumer is also a clk provider and just knowing that is useful. Once we know that, we can look to see if they're calling clk consumer APIs from their provider callbacks which is not desired because it makes it impossible to get rid of the recursive lock design. If the lock is per-clock, then recursion doesn't happen when the provider is also a consumer. If it does, that's bad and lockdep should tell us.