On Thu, Jun 06, 2013 at 04:31:59PM +0200, Daniel Mack wrote:
On 06.06.2013 16:24, Mark Brown wrote:
Usually you just set the bit clock to be whatever the minimum clock needed for the data is - there are helpers in soc-utils.c to get the number - or the next highest sensible rate if there's a division problem.
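Something like this (an untested sketch - the clk_id of 0 and the SND_SOC_CLOCK_OUT direction are assumptions about the board):

#include <sound/pcm_params.h>
#include <sound/soc.h>

static int board_hw_params(struct snd_pcm_substream *substream,
			   struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	int bclk;

	/* frame size (slots * slot width) * sample rate, from soc-utils.c */
	bclk = snd_soc_params_to_bclk(params);
	if (bclk < 0)
		return bclk;

	/* hand the minimum usable bit clock to the CPU DAI */
	return snd_soc_dai_set_sysclk(rtd->cpu_dai, 0, bclk,
				      SND_SOC_CLOCK_OUT);
}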
Hmm, but in case the codec is slave to all clocks, it must have a way to determine what the bit clock rate (or the ratio to MCLK, respectively) is. It can't just _set_ it. Which detail am I missing?
If nothing else you're missing what happens if the driver for the device generating the clock decides to change the rate for some reason.
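Once the clock is visible through the common clock framework a consumer can at least find out about that - roughly like this (untested; whether a given platform actually exposes its audio clocks this way is platform-specific):

#include <linux/clk.h>
#include <linux/notifier.h>

static int mclk_rate_change(struct notifier_block *nb,
			    unsigned long event, void *data)
{
	struct clk_notifier_data *cnd = data;

	/* the framework reports the old and new rates after a change */
	if (event == POST_RATE_CHANGE)
		pr_info("MCLK changed from %lu to %lu Hz\n",
			cnd->old_rate, cnd->new_rate);

	return NOTIFY_OK;
}

static struct notifier_block mclk_nb = {
	.notifier_call = mclk_rate_change,
};

/* somewhere in probe(): clk_notifier_register(mclk, &mclk_nb); */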
It'd be rather unusual for something to care what the bit clock rate was if it wasn't generating it - generally it's just shifting data in with it, so as long as the requisite number of edges appear it's fine. Do you really have devices for which this is a problem, and are you sure they're not actually looking for the sample size?
Yes, and thus you start to see the problems doing this sort of stuff generically. There's often also a whole bunch of different ways the clocking can be set up.
But there's always a MCLK, a BCLK and a LRCLK. And thus, there are always ratios between them. It might even make sense to let the core inform the codec drivers, instead of relying on the machine code.
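To illustrate the arithmetic (purely made-up numbers: a 48 kHz stream with two 32-bit slots and a 12.288 MHz MCLK; CODEC_BCLK_DIV is a hypothetical placeholder, since the div_id argument to snd_soc_dai_set_clkdiv() is codec-specific):

#include <sound/soc.h>

#define MCLK_RATE	12288000
#define LRCLK_RATE	48000			/* LRCLK == sample rate */
#define BCLK_RATE	(LRCLK_RATE * 2 * 32)	/* 2 slots * 32 bits = 3.072 MHz */
#define CODEC_BCLK_DIV	0			/* hypothetical codec-specific div_id */

static int board_set_ratios(struct snd_soc_dai *codec_dai)
{
	/* MCLK/BCLK = 4 and BCLK/LRCLK = 64 on this imaginary board */
	return snd_soc_dai_set_clkdiv(codec_dai, CODEC_BCLK_DIV,
				      MCLK_RATE / BCLK_RATE);
}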
There generally will be, but knowing what they should be and who should provide them is a different game - and of course they're frequently shared between multiple interfaces too, or there may be constraints from elsewhere. I'm not sure that specifying the rates without also being able to specify the sources is generally useful, and things that are purely digital may not have or need an MCLK at all (CPUs don't tend to care too much when they're in slave mode, for example).
LRCLK is fixed by the sample rate so that just comes down from the application layer.
I guess what I'm saying is that it'd be nice but it falls over far too quickly when I start thinking about a general implementation. I think long term we want to move all the clocking stuff into the clock API since otherwise you end up reimplementing that. Right now we're a bit stuck because the clock API isn't usefully generic yet; too many platforms either have a custom one or don't enable the common one.
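For a codec driver the end state would look something like this sketch (the clock name "mclk" and the 256*fs rate for 48 kHz are assumptions about the binding and board, not anything a driver can rely on today):

#include <linux/clk.h>
#include <linux/err.h>

static int codec_probe_clocks(struct device *dev)
{
	struct clk *mclk;
	int ret;

	/* get MCLK as a struct clk instead of a raw rate from the machine driver */
	mclk = devm_clk_get(dev, "mclk");
	if (IS_ERR(mclk))
		return PTR_ERR(mclk);

	/* 256 * 48 kHz; purely illustrative */
	ret = clk_set_rate(mclk, 12288000);
	if (ret)
		return ret;

	return clk_prepare_enable(mclk);
}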