[alsa-devel] Ordering in soc_pcm_hw_params()

jonsmirl at gmail.com jonsmirl at gmail.com
Wed Aug 13 14:25:22 CEST 2014


On Tue, Aug 12, 2014 at 2:20 PM, Mark Brown <broonie at kernel.org> wrote:
> On Tue, Aug 12, 2014 at 08:45:22AM -0400, jonsmirl at gmail.com wrote:
>> On Tue, Aug 12, 2014 at 7:57 AM, Mark Brown <broonie at kernel.org> wrote:
>
>> > Yes, as has been said several times now it's the responsibility of the
>> > machine driver to coordinate clocking in the card so if you want to use
>
>> I hear this, but I'm conflicted about putting a generic capability
>> into the machine driver.
>
> It's a generic machine driver so having generic code in there seems
> reasonable.
>
>> 1) Are sysclk and mclk the same thing? Inside of simple-card they seem
>> to be, but I can see arguments that they aren't.
>
> Probably in the context of systems simple enough to use simple-card.
>
>> 2) Should sysclk for the cpu-dai and codec-dai be set independently?
>> In simple-card they can be set to conflicting values on the two nodes
>> in the DTS. Should sysclk be a single property for the machine?
>
> No, clocks might be at different rates for different devices.  One
> device might divide down the clock to the other.

What do you think about adding fields for min/max allowed sysclk to
snd_soc_dai_driver? In my case the SoC can run the sysclk at 100MHz,
but the attached codec can only handle 27MHz.
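
Roughly what I have in mind is sketched below. These fields do not exist
in struct snd_soc_dai_driver today and the names are made up; it's only
meant to show the shape of the proposal:

/* hypothetical additions to struct snd_soc_dai_driver (include/sound/soc-dai.h) */
struct snd_soc_dai_driver {
        /* ... existing fields (name, ops, playback, capture, ...) ... */

        /* allowed sysclk range in Hz, 0 meaning "no limit" */
        unsigned int sysclk_min;
        unsigned int sysclk_max;
};

/* the sgtl5000 DAI could then advertise the 27MHz ceiling mentioned above */
static struct snd_soc_dai_driver sgtl5000_dai = {
        .name           = "sgtl5000",
        /* ... */
        .sysclk_max     = 27000000,
};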

Then in simple-card, when it is computing the 512fs clock, it would use
the min/max sysclk numbers to clip its range. Alternatively, I could
add some range-checking code to the set_sysclk implementations to
error out on out-of-range rates, and then have simple-card search
around until nothing errors.
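
In simple-card that clipping could look something like the sketch below.
It assumes the hypothetical sysclk_min/sysclk_max fields from above and
roughly follows the shape of the existing asoc_simple_card_hw_params();
treat it as an illustration, not a patch:

/* needs <sound/soc.h> and <sound/pcm_params.h> */
static int asoc_simple_card_hw_params(struct snd_pcm_substream *substream,
                                      struct snd_pcm_hw_params *params)
{
        struct snd_soc_pcm_runtime *rtd = substream->private_data;
        struct snd_soc_dai *codec_dai = rtd->codec_dai;
        struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
        unsigned int mclk;
        int ret;

        /* mclk-fs from the DTS, 512 in my case */
        mclk = 512 * params_rate(params);

        /* clip to what the codec DAI says it can accept */
        if (codec_dai->driver->sysclk_max &&
            mclk > codec_dai->driver->sysclk_max)
                mclk = codec_dai->driver->sysclk_max;

        ret = snd_soc_dai_set_sysclk(codec_dai, 0, mclk, SND_SOC_CLOCK_IN);
        if (ret && ret != -ENOTSUPP)
                return ret;

        ret = snd_soc_dai_set_sysclk(cpu_dai, 0, mclk, SND_SOC_CLOCK_OUT);
        if (ret && ret != -ENOTSUPP)
                return ret;

        return 0;
}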

sgtl5000 can do 96kHz, but only at 256fs. I'm pumping 49MHz into it
right now and it works, but that clearly violates the datasheet.
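
(The arithmetic: at 512fs, 512 * 96000 = 49.152MHz, well past the 27MHz
limit mentioned above, while 256fs gives 256 * 96000 = 24.576MHz and
stays in spec.)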

>
>> 3) If sysclk is defined by a phandle to a clock, how should clock
>> accuracy be handled? For example in my CPU the best clock that can be
>> made is about 0.01% less than what it should be. If you take that
>> frequency number and start doing integer division on it you won't get
>> the expected results because of truncation effects. Dividing by 44100
>> gives 511.95, which truncates to 511 instead of 512.
>
> I'm not sure it's worth worrying about systems that can't generate
> accurate clocks at this point, but perhaps someone will think of
> something sensible to do.
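
For what it's worth, the truncation itself is easy to avoid with rounded
division; a small sketch using the existing DIV_ROUND_CLOSEST() helper:

#include <linux/kernel.h>       /* DIV_ROUND_CLOSEST() */

/* derive the fs multiplier from an mclk that is ~0.01% low */
static unsigned int fs_from_mclk(unsigned long mclk, unsigned int rate)
{
        /*
         * Plain division truncates: 22576942 / 44100 = 511, even though
         * the intended multiplier is 512.  Rounding to nearest gives 512.
         */
        return DIV_ROUND_CLOSEST(mclk, rate);
}

That only fixes deriving the multiplier, not generating an accurate clock,
so it doesn't answer the larger accuracy question.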
>
>> 4) If the sysclk is going to change based on the music, should it just
>> flip between 22.5792/24.576MHz, or should it use the FS multiplier in the
>> simple-card DTS and always be set to FS*rate? For example, FS=512 means
>> 44100 is set to 22.5792MHz and 8000 is set to 4.096MHz.
>
> If a fixed multiplier is set I'd expect to see it observed; obviously a
> device may be constrained in what it can actually use though and a fixed
> multiplier might not be the best configuration.
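
(Concretely, that is also where a minimum limit would bite: with FS=512 an
8000Hz stream gives 512 * 8000 = 4.096MHz, which could easily be below a
codec's lowest usable sysclk, so a fixed multiplier can't always be
honoured.)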
>
>> 5) What should the syntax for this look like in simple?
>
> It really depends on what the "this" is; try to think of something
> tasteful.
>
>> 6) Longer term set-sysclk could be replaced by using phandles for the
>> clock and then clk notifiers in the components to listen for it being
>> changed. That would remove this problem of having multiple sysclks in
>> the system and the DTS syntax for identifying them.
>
>> So in my case I expose an MCLK clk. The sgtl5000 codec references that
>> clock with a phandle. As the cpu-dai adapts to the music it changes the
>> rate on MCLK. The codec gets notified of that change (if it is listening)
>> and can act on it.
>
> OK, but that's not meaningfully different from just having the machine
> driver call set_sysclk(), and it only works if the best way to change the
> clock rate is to change the clocking inside the CPU, which isn't going
> to be the case for all systems.  That's the hard part; we have to be
> able to scale up to handling it.
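
To make the notifier idea in (6) concrete: clk_notifier_register() and the
clk_notifier_data/POST_RATE_CHANGE pieces below are existing clk framework
API, while the sgtl5000 hook itself is only a sketch of what the codec side
might look like:

#include <linux/clk.h>
#include <linux/notifier.h>
#include <linux/printk.h>

/* codec-side reaction to the CPU/machine side reclocking MCLK */
static int sgtl5000_mclk_notify(struct notifier_block *nb,
                                unsigned long event, void *data)
{
        struct clk_notifier_data *cnd = data;

        if (event == POST_RATE_CHANGE) {
                /* reprogram the codec's dividers/PLL for cnd->new_rate here */
                pr_debug("MCLK changed %lu -> %lu\n",
                         cnd->old_rate, cnd->new_rate);
        }

        return NOTIFY_OK;
}

static struct notifier_block sgtl5000_mclk_nb = {
        .notifier_call = sgtl5000_mclk_notify,
};

/* in probe, after getting the MCLK phandle with devm_clk_get(): */
/*      clk_notifier_register(mclk, &sgtl5000_mclk_nb);          */

Whether that is actually better than set_sysclk() is exactly the question
raised above.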



-- 
Jon Smirl
jonsmirl at gmail.com

