On 08/13/2014 07:00 PM, jonsmirl@gmail.com wrote:
On Wed, Aug 13, 2014 at 12:35 PM, Mark Brown <broonie@kernel.org> wrote:
On Wed, Aug 13, 2014 at 08:25:22AM -0400, jonsmirl@gmail.com wrote:
On Tue, Aug 12, 2014 at 2:20 PM, Mark Brown <broonie@kernel.org> wrote:
- Should sysclk for the cpu-dai and codec-dai be set independently?
In simple-card they can be set to conflicting values on the two nodes in the DTS. Should sysclk be a single property for the machine?
No, clocks might be at different rates for different devices. One device might divide the clock down for the other.
What do you think about adding fields for the minimum/maximum allowed sysclk to snd_soc_dai_driver? In my case the SoC can run the sysclk at 100 MHz, but the attached codec can only handle 27 MHz.
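Purely to illustrate the shape of that proposal: the limits struct, field names, and values below are hypothetical, and no such min/max fields exist in the mainline struct snd_soc_dai_driver; a card driver would pick a sysclk inside the intersection of the two ranges.

#include <linux/kernel.h>
#include <linux/types.h>

/*
 * Hypothetical sketch only: per-DAI sysclk limits of the kind being
 * proposed. These are NOT fields of struct snd_soc_dai_driver.
 */
struct example_dai_sysclk_limits {
	unsigned int sysclk_min;	/* lowest usable sysclk, in Hz */
	unsigned int sysclk_max;	/* highest usable sysclk, in Hz */
};

/* SoC side can generate up to 100 MHz, the codec only accepts up to 27 MHz */
static const struct example_dai_sysclk_limits example_cpu_limits = {
	.sysclk_min = 256000,		/* illustrative lower bound */
	.sysclk_max = 100000000,
};

static const struct example_dai_sysclk_limits example_codec_limits = {
	.sysclk_min = 256000,		/* illustrative lower bound */
	.sysclk_max = 27000000,
};

/* True if freq lies inside the intersection of both DAIs' ranges */
static bool example_sysclk_ok(unsigned int freq,
			      const struct example_dai_sysclk_limits *a,
			      const struct example_dai_sysclk_limits *b)
{
	return freq >= max(a->sysclk_min, b->sysclk_min) &&
	       freq <= min(a->sysclk_max, b->sysclk_max);
}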
If we're going to do constraints they should be done properly, so we need to be able to represent specific numbers too. It's probably a clock API problem; implementing it independently seems redundant. Doing simple things in simple-card for the common cases makes sense while the clock API isn't something we can rely on, but equally we don't want to be doing huge amounts there, and of course simple-card covers just a subset of what people are doing.
Right now I could make the set_sysclk implementations return -EINVAL if the clock is out of range. Then add some logic to simple-card to try again with a different FS multiplier. Or at least print an error message to give a clue as to why the song won't play.
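As a rough sketch of that rejection on the codec side: the driver name and the 27 MHz limit are made up for illustration, and only the snd_soc_dai_ops .set_sysclk callback signature and the -EINVAL convention come from the ASoC API itself.

#include <linux/device.h>
#include <linux/errno.h>
#include <sound/soc.h>
#include <sound/soc-dai.h>

/* Illustrative limit: the example codec's datasheet caps sysclk at 27 MHz */
#define EXAMPLE_CODEC_SYSCLK_MAX	27000000

static int example_codec_set_dai_sysclk(struct snd_soc_dai *dai,
					int clk_id, unsigned int freq, int dir)
{
	/* Reject rates the part cannot handle so the caller can try another */
	if (freq == 0 || freq > EXAMPLE_CODEC_SYSCLK_MAX) {
		dev_err(dai->dev, "unsupported sysclk %u Hz (max %u Hz)\n",
			freq, EXAMPLE_CODEC_SYSCLK_MAX);
		return -EINVAL;
	}

	/* A real driver would record freq here for use in hw_params() */
	return 0;
}

static const struct snd_soc_dai_ops example_codec_dai_ops = {
	.set_sysclk	= example_codec_set_dai_sysclk,
};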
The driver should definitely reject sysclk rates it cannot support. I think this idea is what will give you the best results, at least in the short term. It's quite simple to implement, yet effective at solving the problem, and it does not add new DT ABI that we'd need to support forever. It could later be replaced with a more sophisticated fs rate enumeration scheme.
- Lars
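For reference, a minimal sketch of the fallback idea discussed above, as it might sit in a machine or simple-card style driver's hw_params(): the multiplier table, helper names, and the clk_id of 0 are invented for illustration, while snd_soc_dai_set_sysclk() and its error return are the real ASoC interface.

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <sound/pcm.h>
#include <sound/pcm_params.h>
#include <sound/soc.h>
#include <sound/soc-dai.h>

/* Candidate fs multipliers, tried from highest to lowest (illustrative list) */
static const unsigned int example_fs_mults[] = { 512, 384, 256, 128 };

/*
 * Try sysclk = rate * mult for each candidate multiplier on both DAIs and
 * keep the first value that neither driver rejects.
 */
static int example_card_pick_sysclk(struct snd_soc_dai *cpu_dai,
				    struct snd_soc_dai *codec_dai,
				    unsigned int rate)
{
	unsigned int i, sysclk;
	int ret = -EINVAL;

	for (i = 0; i < ARRAY_SIZE(example_fs_mults); i++) {
		sysclk = rate * example_fs_mults[i];

		ret = snd_soc_dai_set_sysclk(cpu_dai, 0, sysclk,
					     SND_SOC_CLOCK_OUT);
		if (ret < 0)
			continue;	/* out of range for the SoC side */

		ret = snd_soc_dai_set_sysclk(codec_dai, 0, sysclk,
					     SND_SOC_CLOCK_IN);
		if (ret == 0)
			return 0;	/* both sides accepted this rate */
	}

	dev_err(codec_dai->dev, "no usable sysclk for %u Hz sample rate\n",
		rate);
	return ret;
}

static int example_card_hw_params(struct snd_pcm_substream *substream,
				  struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;

	/* cpu_dai/codec_dai fields as in the 2014-era snd_soc_pcm_runtime */
	return example_card_pick_sysclk(rtd->cpu_dai, rtd->codec_dai,
					params_rate(params));
}

With this shape, an unplayable configuration fails at hw_params() with a logged reason instead of silently producing no audio, and the fs-multiplier table can later be replaced by proper clock API constraints without touching DT.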