[alsa-devel] Generic clock divider indices
Hi Mark, Liam,
We're using generic ASoC platform code for some audio devices; it is agnostic of the actual codecs it deals with, as all the information comes from DT nodes.
That works very well so far, but I'm now facing a case where I need to pass special clock divider information down to the codec driver. The fact that the divider IDs can be defined arbitrarily by each driver leads to the unfortunate situation that I have to make the platform code aware of the codec drivers again.
Would it make sense to define some commonly used dividers in the core? If they start at 0x80000000, they won't collide with existing ones, and we could simply add them as alternative case statements to existing drivers.
Some commonly used ones I can think of are
MCLK / BCLK
MCLK / LRCLK
BCLK / LRCLK
That way, platform code could just pass the values down to the codec drivers, whether or not they know what to do with them.
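Roughly something like this - the SND_SOC_CLKDIV_* names and the machine-level hw_params below are made up just to illustrate the idea; snd_soc_dai_set_clkdiv() is the existing call:

#include <sound/soc.h>

/*
 * Purely illustrative: generic divider IDs, offset so they can't clash
 * with the codec-specific IDs that drivers already define today.
 */
#define SND_SOC_CLKDIV_MCLK_BCLK	0x80000000
#define SND_SOC_CLKDIV_MCLK_LRCLK	0x80000001
#define SND_SOC_CLKDIV_BCLK_LRCLK	0x80000002

/* Codec-agnostic machine code could then simply pass the ratio down. */
static int generic_hw_params(struct snd_pcm_substream *substream,
			     struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;

	/* e.g. 64 BCLK cycles per LRCLK frame, taken from the DT node */
	return snd_soc_dai_set_clkdiv(rtd->codec_dai,
				      SND_SOC_CLKDIV_BCLK_LRCLK, 64);
}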
What do you think?
Daniel
On Thu, Jun 06, 2013 at 03:44:04PM +0200, Daniel Mack wrote:
Would it make sense to define some commonly used dividers in the core? If they start at 0x80000000, they won't collide with existing ones, and we could simply add them as alternative case statements to existing drivers.
I don't think this is a terribly sensible idea; as soon as you start relying on these dividers in machine code you're going to run into drivers that just don't implement them either due to hardware or due to them being able to figure things out by themselves and...
Some commonly used ones I can think of are
MCLK / BCLK
MCLK / LRCLK
BCLK / LRCLK
That way, platform code could just pass the values down to the codec drivers, whether or not they know what to do with them.
...frankly the above all look like dividers that the drivers should just be figuring out all by themselves anyway without bothering the machine driver. If that's the sort of stuff you're having to do it seems more generally useful to work on making the relevant drivers require less manual control by the machine driver.
On 06.06.2013 15:56, Mark Brown wrote:
On Thu, Jun 06, 2013 at 03:44:04PM +0200, Daniel Mack wrote:
Would it make sense to define some commonly used dividers in the core? If they start at 0x80000000, they won't collide with existing ones, and we could simply add them as alternative case statements to existing drivers.
I don't think this is a terribly sensible idea; as soon as you start relying on these dividers in machine code you're going to run into drivers that just don't implement them either due to hardware or due to them being able to figure things out by themselves and...
I do see that we have a way to propagate the sysclk, but how would you determine the bit clock rate from a codec driver?
Also, the same problem with freely definable indices is true for .set_sysclk() as well. Not all drivers expect the actual MCLK rate here.
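Just to illustrate what I mean - the codec constants here are invented, but snd_soc_dai_set_sysclk() is the real call:

/*
 * Invented constants: in reality every codec driver defines its own,
 * mutually incompatible clk_id values for .set_sysclk().
 */
#define CODEC_A_SYSCLK_MCLK	0	/* codec A: "rate of the MCLK pin" */
#define CODEC_B_SYSCLK_PLL	2	/* codec B: "which PLL output to use" */

/*
 * The same physical 12.288 MHz master clock needs a different clk_id
 * (and possibly a different meaning of "freq") per codec, so generic
 * machine code can't make this call without knowing the driver.
 */
ret = snd_soc_dai_set_sysclk(codec_dai, CODEC_A_SYSCLK_MCLK,
			     12288000, SND_SOC_CLOCK_IN);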
Daniel
On Thu, Jun 06, 2013 at 04:09:52PM +0200, Daniel Mack wrote:
On 06.06.2013 15:56, Mark Brown wrote:
I don't think this is a terribly sensible idea; as soon as you start relying on these dividers in machine code you're going to run into drivers that just don't implement them either due to hardware or due to them being able to figure things out by themselves and...
I do see that we have a way to propagate the sysclk, but how would you determine the bit clock rate from a codec driver?
Usually you just set the bit clock to whatever the minimum clock needed for the data is - there are helpers in soc-utils.c to get the number - or the next highest sensible rate if there's a division problem.
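To be concrete, something along these lines in the DAI driver's hw_params - snd_soc_params_to_bclk() is one of the soc-utils.c helpers, the function name and the rest of the sketch are just for illustration:

static int example_dai_hw_params(struct snd_pcm_substream *substream,
				 struct snd_pcm_hw_params *params,
				 struct snd_soc_dai *dai)
{
	/* Minimum BCLK that can carry the data: sample rate * frame size */
	int bclk = snd_soc_params_to_bclk(params);

	if (bclk < 0)
		return bclk;

	/*
	 * From here the driver picks the smallest divider of its sysclk
	 * that yields a rate >= bclk and programs it, without any help
	 * from the machine driver.
	 */
	return 0;
}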
Also, the same problem with freely definable indices is true for .set_sysclk() as well. Not all drivers expect the actual MCLK rate here.
Yes, and thus you start to see the problems doing this sort of stuff generically. There's often also a whole bunch of different ways the clocking can be set up which can make a material difference to the quality of the output and may require some system specific taste to choose.
The fact that the clock API isn't usefully generic is not a help here of course.
On 06.06.2013 16:24, Mark Brown wrote:
On Thu, Jun 06, 2013 at 04:09:52PM +0200, Daniel Mack wrote:
On 06.06.2013 15:56, Mark Brown wrote:
I don't think this is a terribly sensible idea; as soon as you start relying on these dividers in machine code you're going to run into drivers that just don't implement them either due to hardware or due to them being able to figure things out by themselves and...
I do see that we have a way to propagate the sysclk, but how would you determine the bit clock rate from a codec driver?
Usually you just set the bit clock to whatever the minimum clock needed for the data is - there are helpers in soc-utils.c to get the number - or the next highest sensible rate if there's a division problem.
Hmm, but if the codec is slave to all clocks, it must have a way to determine what the bit clock rate (or its ratio to MCLK) is. It can't just _set_ it. Which detail am I missing?
Also, the same problem with freely definable indices is true for .set_sysclk() as well. Not all drivers expect the actual MCLK rate here.
Yes, and thus you start to see the problems doing this sort of stuff generically. There's often also a whole bunch of different ways the clocking can be set up
But there's always an MCLK, a BCLK and an LRCLK, and thus there are always ratios between them. It might even make sense to let the core inform the codec drivers, instead of relying on the machine code.
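Something along these lines maybe - entirely hypothetical, just to sketch the idea:

/*
 * Hypothetical: the core knows the sample rate and the sysclk that was
 * set, so it could compute and pass the ratios down itself, e.g. via a
 * new dai_ops callback.
 */
struct snd_soc_clk_ratios {
	unsigned int mclk_lrclk;	/* e.g. 256 */
	unsigned int bclk_lrclk;	/* e.g. 64 */
};

int (*set_clk_ratios)(struct snd_soc_dai *dai,
		      const struct snd_soc_clk_ratios *ratios);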
which can make a material difference to the quality of the output and may require some system specific taste to choose.
I see your point, but no solution yet :)
Daniel
On Thu, Jun 06, 2013 at 04:31:59PM +0200, Daniel Mack wrote:
On 06.06.2013 16:24, Mark Brown wrote:
Usually you just set the bit clock to whatever the minimum clock needed for the data is - there are helpers in soc-utils.c to get the number - or the next highest sensible rate if there's a division problem.
Hmm, but if the codec is slave to all clocks, it must have a way to determine what the bit clock rate (or its ratio to MCLK) is. It can't just _set_ it. Which detail am I missing?
If nothing else you're missing what happens if the driver for the device generating the clock decides to change the rate for some reason.
It'd be rather unusual for something to care what the bit clock rate was if it wasn't generating it - generally it's just shifting data in with it, so as long as the requisite number of edges appear it's fine. Do you really have devices for which this is a problem, and are you sure they're not actually looking for the sample size?
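If it really is the sample size they're after, that's already derivable from hw_params alone with the other soc-utils.c helper, for example:

/*
 * Bits per LRCLK frame, straight from hw_params - no knowledge of the
 * actual BCLK rate required.
 */
int frame_size = snd_soc_params_to_frame_size(params);

if (frame_size < 0)
	return frame_size;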
Yes, and thus you start to see the problems doing this sort of stuff generically. There's often also a whole bunch of different ways the clocking can be set up
But there's always an MCLK, a BCLK and an LRCLK, and thus there are always ratios between them. It might even make sense to let the core inform the codec drivers, instead of relying on the machine code.
There generally will be, but knowing what they should be and who should provide them is a different game - and of course they're frequently shared between multiple interfaces too, or there may be constraints from elsewhere. I'm not sure that specifying the rates without also being able to specify the sources is generally useful, and things that are purely digital may not have or need an MCLK at all (CPUs don't tend to care too much when they're in slave mode for example).
LRCLK is fixed by the sample rate so that just comes down from the application layer.
I guess what I'm saying is that it'd be nice but it falls over far too quickly when I start thinking about a general implementation. I think long term we want to move all the clocking stuff into the clock API since otherwise you end up reimplementing that. Right now we're a bit stuck because the clock API isn't usefully generic yet, too many platforms either have a custom one or don't enable the common one.
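For what it's worth, once the common clock API is usable everywhere, a codec driver could handle its own MCLK along these lines - "mclk" as the clock name is just an assumption about how a binding would describe it, and dev/params are assumed to be in scope in the driver:

#include <linux/clk.h>

/* Sketch only: claim and configure the master clock from the codec
 * driver itself instead of having the machine driver pass rates down.
 */
struct clk *mclk = devm_clk_get(dev, "mclk");

if (IS_ERR(mclk))
	return PTR_ERR(mclk);

/* e.g. a 256fs master clock, derived from the sample rate */
clk_set_rate(mclk, 256 * params_rate(params));
return clk_prepare_enable(mclk);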