On Wed, Sep 13, 2017 at 10:02:20AM +0200, Arnaud Mouiche wrote:
Could you please give me a few examples of how you set set_sysclk() and set_tdm_slot() with the current driver? The idea here is to figure out a way to calculate the bclk in hw_params() without getting set_sysclk() involved any more.
Here is one, where bclk = 4 * 16 * fs is expected.
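A representative machine-driver call for this setup would look something like the following (a minimal sketch; the hook name and surrounding function are my assumptions, not code from my tree):

    #include <sound/soc.h>

    static int my_card_hw_params(struct snd_pcm_substream *substream,
                                 struct snd_pcm_hw_params *params)
    {
            struct snd_soc_pcm_runtime *rtd = substream->private_data;

            /* 4 slots of 16 bits each, all active: bclk = 4 * 16 * fs */
            return snd_soc_dai_set_tdm_slot(rtd->cpu_dai, 0xf, 0xf, 4, 16);
    }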
In another setup, there are 8 x 16-bit slots, whatever the number of active channels is. In this case, bclk = 128 * fs. The number of slots is completely arbitrary; some slots can even be reserved for codec-to-codec communication that Linux is not involved in.
In summary, bclk = sample rate * slots * slot_width.
I will update my patch soon.
Unfortunately, it looks like a workaround to me. I understand that leaving set_sysclk() out there to override the bit clock is convenient, but it is not a standard ALSA design and may eventually introduce new problems like the one we have today.
I agree. I'm not conservative at all on this question. But I don't see a way to remove set_sysclk() without breaking current TDM users, at least those whose code isn't upstreamed.
Which TDM case would be broken by this removal? The only impact that I can see is that the ASoC core now returns -ENOTSUPP for a set_sysclk() call, which is something a dai-link driver should have handled anyway.
All the information provided through snd_soc_dai_set_tdm_slot(cpu_dai, mask, mask, slots, width) should be enough. In this case, for TDM users,

bclk = slots * width * fs (where slots != channels)

will cover 99% of the cases, and the remaining 1% will concern people who need to hack the kernel so extensively that they won't care about the set_sysclk() removal.
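With that, hw_params() can derive the bit clock from the cached TDM parameters alone. A minimal sketch, assuming set_tdm_slot() stores its arguments in hypothetical ssi->slots / ssi->slot_width fields (the params_*() helpers are from <sound/pcm_params.h>):

    /* Fall back to the stream's channel count and physical sample
     * width when no TDM layout has been configured.
     */
    static unsigned long fsl_ssi_calc_bclk(struct fsl_ssi_private *ssi,
                                           struct snd_pcm_hw_params *params)
    {
            unsigned int slots = ssi->slots ?: params_channels(params);
            unsigned int width = ssi->slot_width ?:
                                 params_physical_width(params);

            /* bclk = fs * slots * slot_width */
            return params_rate(params) * slots * width;
    }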
A patch from those people will always be welcome.
- fsl-asoc-card.c : *something will break, since the return code of snd_soc_dai_set_sysclk() is checked*
I've already submitted a patch to ignore all ENOTSUPP return codes.
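The shape of that change is roughly the following (a sketch of the idea with placeholder clk_id/freq/dir arguments, not the exact hunk):

    ret = snd_soc_dai_set_sysclk(rtd->cpu_dai, clk_id, freq, dir);
    /* -ENOTSUPP now only means the DAI has no set_sysclk() handler */
    if (ret && ret != -ENOTSUPP) {
            dev_err(dev, "failed to set cpu dai sysclk: %d\n", ret);
            return ret;
    }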