On Tue, May 07, 2013 at 03:06:39PM -0500, Matt Sealey wrote:
On Mon, May 6, 2013 at 5:06 PM, Mark Brown <broonie@kernel.org> wrote:
Just for reference the supplies on the Efika MX are VDDD 1.2V, VDDDA 1.65V and VDDIO 2.775V respectively. For the sake of a couple 0.1uF capacitors you're not really saving much money in the lowest cost configuration, and you're doubling to tripling power consumption. If 15mW makes a difference to you (especially if you're supplying the above from rails connected to other devices and you're concerned about current limiting etc.) it makes sense to go for the lower power one.
Well, some applications just don't care about power consumption.
It is hard to imagine an i.MX board without 1.2V, 1.6V and 3.3V from a PMIC to supply the codec, so letting VDDD float and using the internal LDO seems like an exercise in selling the codec to people who aren't using other Freescale parts.
Even if there's physically a supply at the appropriate voltage people can have concerns about running noisy digital stuff near their analogue, the on-chip regulators aren't purely about component cost. Not sure that this is a consideration here but...
I think most of the weird problems stem from the LDO configuration itself - Linux assumes it's not set up, when, if VDDD isn't supplied, it already is. Add weird sequencing of supplies by boards on warm reboot (either not bringing any supplies down at all or not bringing them down in a reasonable order), plus enabling the LDO *regardless* at revision 0x11 or above - which on our boards just turns a 1.2V supply into a better-filtered 1.2V supply and makes the chip run a little hotter.
Is it actually having a positive impact on performance?
I am curious about the version check more than anything. Either the internal LDO is required to filter poor supplies on some boards (which ones?) or revisions at or above 0x11 have some DIFFERENT digital supply requirement - I seem to remember the reference manual stating 1.2V was a suitable input supply, but the minimum is 1.1V in the latest RM. Maybe running it at 1.2V causes some problem. It's not documented why that decision would be made - no errata.
That sounds like tolerances, doesn't it? Spec of 1.2V.
So maybe we don't disable the cache, or it gets ported to regmap, but the order should be: enable the regulators in a non-cached way, then fill the cache AFTERWARDS - the code at ToT mainline doesn't fix the cache up.
I'm sorry, this makes no sense to me. If we know the physical defaults for the device and we can get the device into that state then we should have no need to read from the cache. If we don't know what they are then they shouldn't be listed.
In the case where VDDD isn't supplied when the other two supplies are brought up, the LDO is enabled, so if the register controlling the LDO configuration is in the cache... that might explain a lot.
I still don't understand this at all. Is the power-on value for the register variable?
I think the point that you lost me was your assumption that the regulator enable needs to be done without the cache. What makes you believe that this is the case?
Because that's what Freescale did in their code, as per their comments - if only because they needed to do this kind of setup before the cache/defaults were populated. I assume they populate the cache from the chip at probe because the defaults change per revision, and the presence or absence of VDDD may change some of the registers in the cache.
We just have no idea why they're doing what they're doing; this is all guesswork. I've not seen anything that indicates that anyone who's sent any code has any understanding of what they're changing.
It may also be on boards like i.MX6 Sabrelite that the "defaults" are not the "defaults" over a warm reboot, since the chip never powers off (just "optionally" loses clock depending on whether CLKO is reconfigured or not at some point) and probably retains everything you ever set.
This is why most drivers do a hardware reset of the device on gaining control of it, in order to ensure that the hardware is in a known state. I see this driver doesn't do that except through the regulators, which isn't terribly clever, though it's not clear that the chip has suitable support.
We want to do two possible things...
- At probe, dump some known good defaults into the chip and use those as a cache
- At probe, read out all the registers, assuming they're already known good defaults, into a cache
Clearly the first option is more sane since otherwise you've no idea if the chip is configured sensibly.
The intent being in the first case to make damn sure the codec is in a well known state, and in the second case JUST to reduce i2c writes further down the line. Freescale seem to like the second one.
This isn't the sole function of the register cache, it's also used to implement most of suspend and resume and to allow the device to be powered off when idle but still present controls.
Mainline is doing the first, although it's definitely possible those defaults are not the true defaults if they were derived from a board like the Sabrelite, where a warm reboot might retain all the state from previous boots. The provenance of that list of defaults is basically unknown, and I recall Freescale updating them in the BSP a lot.
I'd *really* hope that the defaults come from the chip datasheet. If they don't that's extremely disappointing.
So the problem here might be some quirky board designs, a random implementation of an erratum workaround with no documentation, and a bunch of register default changes nobody ever explained.
My impression based on what I'm seeing here, including things like your comments about lots of register default changes, is that people working on the driver haven't had much understanding of what they're doing and have got confused as a result. Sounds like a good first step is getting the register defaults sane...