On Fri, Jul 17, 2020 at 01:16:42PM +0200, Arnaud Ferraris wrote:
Hi Nic,
On 02/07/2020 at 20:42, Nicolin Chen wrote:
Hi Arnaud,
On Thu, Jul 02, 2020 at 04:22:31PM +0200, Arnaud Ferraris wrote:
The current ASRC driver hardcodes the input and output clocks used for sample rate conversions. In order to allow greater flexibility and to cover more use cases, it would be preferable to select the clocks using device-tree properties.
We recently merged a change that auto-selects internal clocks based on sample rates as the first option -- ideal ratio mode is now the fallback. Please refer to: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?...
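For reference, the idea of that change boils down to something like the following simplified, standalone sketch (not the actual driver code; the clock table contents, the names and the DIV_MAX limit here are illustrative assumptions only):

#include <stdio.h>

/*
 * Illustrative clock table and selection policy: walk the table and
 * pick the first clock whose rate is an exact multiple of the sample
 * rate and whose required divider fits the assumed hardware range;
 * otherwise fall back to ideal ratio mode.
 */
static const struct {
	const char *name;
	unsigned long rate;	/* Hz */
} clk_tbl[] = {
	{ "SSI1", 512000 },
	{ "SSI2", 1024000 },
};

#define CLK_TBL_LEN (sizeof(clk_tbl) / sizeof(clk_tbl[0]))
#define DIV_MAX 1024	/* assumed maximum prescaler * divider */

static int select_clk(unsigned long sample_rate)
{
	for (unsigned long i = 0; i < CLK_TBL_LEN; i++)
		if (clk_tbl[i].rate % sample_rate == 0 &&
		    clk_tbl[i].rate / sample_rate <= DIV_MAX)
			return (int)i;

	return -1;	/* no exact match: use ideal ratio mode */
}

int main(void)
{
	int in = select_clk(8000);

	printf("8 kHz input -> %s\n",
	       in < 0 ? "ideal ratio mode" : clk_tbl[in].name);
	return 0;
}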
While working on fixing the automatic clock selection (see my v3), I came across another potential issue, which would be better explained with an example:
- Input has sample rate 8kHz and uses clock SSI1 with rate 512kHz
- Output has sample rate 16kHz and uses clock SSI2 with rate 1024kHz
Let's say my v3 patch is merged: the selected input clock will then be SSI1 and the selected output clock SSI2. In that case, it's all good, as the driver will calculate the dividers correctly, as shown in the sketch below.
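To make the arithmetic explicit, here is a tiny standalone sketch of that computation, assuming divider = clock rate / sample rate (the asrc_div() helper is made up for illustration, not a driver function):

#include <stdio.h>

/* divider = clock rate / sample rate (integer division) */
static unsigned long asrc_div(unsigned long clk_hz, unsigned long fs_hz)
{
	return clk_hz / fs_hz;
}

int main(void)
{
	/* input on SSI1 @ 512 kHz, output on SSI2 @ 1024 kHz */
	printf("input divider:  %lu\n", asrc_div(512000, 8000));	/* 64 */
	printf("output divider: %lu\n", asrc_div(1024000, 16000));	/* 64 */
	return 0;
}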
Now, suppose a similar board has the input wired to SSI2 and output to SSI1, meaning we're now in the following case:
- Input has sample rate 8kHz and uses clock SSI2 with rate 512kHz
- Output has sample rate 16kHz and uses clock SSI1 with rate 1024kHz
(the same result is achieved during capture with the initial example setup, as input and output properties are then swapped)
In that case, the selected clocks will still be SSI1 for input (just because it appears first in the clock table), and SSI2 for output, meaning the calculated dividers will be:
- input: 1024 / 8 => 128 (should be 512 / 8 = 64, using the clock actually wired to the input)
- output: 512 / 16 => 32 (should be 1024 / 16 = 64 here too)
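The same illustrative sketch applied to the swapped wiring shows where the 128 and 32 come from (again assuming divider = clock rate / sample rate; asrc_div() is a made-up helper, not a driver function):

#include <stdio.h>

/*
 * Swapped wiring: the driver still pairs SSI1 (now 1024 kHz) with the
 * 8 kHz input and SSI2 (now 512 kHz) with the 16 kHz output, so both
 * dividers come out wrong.
 */
static unsigned long asrc_div(unsigned long clk_hz, unsigned long fs_hz)
{
	return clk_hz / fs_hz;
}

int main(void)
{
	printf("input divider:  %lu (should be 512000 / 8000 = 64)\n",
	       asrc_div(1024000, 8000));	/* 128 */
	printf("output divider: %lu (should be 1024000 / 16000 = 64)\n",
	       asrc_div(512000, 16000));	/* 32 */
	return 0;
}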
I don't get the 32, 128 and 64 parts. Could you please elaborate a bit? What you said sounds to me like the driver calculates the wrong dividers?