On 06.06.2013 16:24, Mark Brown wrote:
On Thu, Jun 06, 2013 at 04:09:52PM +0200, Daniel Mack wrote:
On 06.06.2013 15:56, Mark Brown wrote:
I don't think this is a terribly sensible idea; as soon as you start relying on these dividers in machine code you're going to run into drivers that just don't implement them, either because of the hardware or because they can figure things out by themselves, and...
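For reference, the pattern under discussion looks roughly like this in a machine driver's hw_params hook. This is only a sketch: MYCODEC_BCLK_DIV and the divider value 8 are placeholders standing in for whatever a particular codec driver happens to define.

#include <sound/soc.h>

/* Placeholder divider ID: real codec drivers define their own private IDs,
 * which is exactly what ties this machine code to one specific driver. */
#define MYCODEC_BCLK_DIV	1

static int mymachine_hw_params(struct snd_pcm_substream *substream,
			       struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	struct snd_soc_dai *codec_dai = rtd->codec_dai;

	/* The divider value only means something to this one codec; a
	 * driver without such a divider, or one that works the rate out
	 * by itself, will just return -EINVAL here. */
	return snd_soc_dai_set_clkdiv(codec_dai, MYCODEC_BCLK_DIV, 8);
}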
I do see that we have a way to propagate the sysclk, but how would you determine the bit clock rate from a codec driver?
Usually you just set the bit clock to be whatever the minimum clock needed for the data is - there are helpers in soc-utils.c to get the number - or the next highest sensible rate if there's a division problem.
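The soc-utils.c helpers referred to here include snd_soc_params_to_bclk(), which returns the minimum bit clock for a given hw_params. A sketch of using it from a machine driver follows; passing the result to the CPU DAI via snd_soc_dai_set_sysclk() with clk_id 0 is just one illustrative choice, since both the index and the direction are driver-specific.

#include <sound/soc.h>

static int mymachine_hw_params(struct snd_pcm_substream *substream,
			       struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
	int bclk;

	/* rate * channels * sample size: the lowest BCLK that carries the data */
	bclk = snd_soc_params_to_bclk(params);
	if (bclk < 0)
		return bclk;

	/* One possible way to program it: ask the CPU DAI to generate
	 * (at least) that bit clock. */
	return snd_soc_dai_set_sysclk(cpu_dai, 0, bclk, SND_SOC_CLOCK_OUT);
}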
Hmm, but if the codec is slave to all clocks, it must have a way to determine what the bit clock rate (or its ratio to MCLK) is. It can't just _set_ it. What detail am I missing?
Also, the same problem with freely definable indices applies to .set_sysclk(): not all drivers expect the actual MCLK rate here.
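To make the index problem concrete: the clk_id argument of snd_soc_dai_set_sysclk() is interpreted per driver, so the meaning of both the index and the rate below depends entirely on the codec. The MYCODEC_* constants are placeholders for such driver-private IDs.

#include <sound/soc.h>

/* Each codec driver assigns its own meaning to clk_id, and some expect a
 * PLL/FLL output rate rather than the raw MCLK. */
#define MYCODEC_CLK_MCLK	0
#define MYCODEC_CLK_PLL		1

static int mymachine_set_codec_clock(struct snd_soc_dai *codec_dai,
				     unsigned int rate)
{
	/* Whether "rate" is the crystal on the board or an already
	 * multiplied PLL output depends on how this particular codec
	 * driver interprets MYCODEC_CLK_MCLK - the freely definable
	 * index problem. */
	return snd_soc_dai_set_sysclk(codec_dai, MYCODEC_CLK_MCLK, rate,
				      SND_SOC_CLOCK_IN);
}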
Yes, and thus you start to see the problems doing this sort of stuff generically. There's often also a whole bunch of different ways the clocking can be set up, which can make a material difference to the quality of the output and may require some system-specific taste to choose.
But there's always an MCLK, a BCLK and an LRCLK, and thus there are always ratios between them. It might even make sense to let the core inform the codec drivers, instead of relying on the machine code.
I see your point, but no solution yet :)
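Just to make the ratio bookkeeping concrete (not a solution to the driver-specific index problem): a sketch of how those numbers could be derived. Only snd_soc_params_to_bclk() and params_rate() are existing API; the helper itself and the idea of the core pushing these values to codec drivers are assumptions.

#include <linux/errno.h>
#include <sound/soc.h>
#include <sound/pcm_params.h>

/* Hypothetical helper: derive the MCLK/BCLK and BCLK/LRCLK ratios from
 * hw_params plus the MCLK rate the machine code already knows. */
static int mymachine_clock_ratios(struct snd_pcm_hw_params *params,
				  unsigned int mclk,
				  unsigned int *mclk_bclk_ratio,
				  unsigned int *bclk_lrclk_ratio)
{
	int bclk = snd_soc_params_to_bclk(params);	/* minimum BCLK */
	unsigned int lrclk = params_rate(params);	/* LRCLK == frame rate */

	if (bclk <= 0)
		return bclk ? bclk : -EINVAL;

	*mclk_bclk_ratio = mclk / bclk;
	*bclk_lrclk_ratio = bclk / lrclk;	/* bits per frame */

	return 0;
}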
Daniel