Hi ALSA ML
In soc_pcm_apply_msb(), part (A) finds the max sig_bits among the Codec DAIs, part (B) finds the max sig_bits among the CPU DAIs, and each result is applied to the substream via soc_pcm_set_msb() at (X) and (Y).
	static void soc_pcm_apply_msb(struct snd_pcm_substream *substream)
	{
		...
 ^		for_each_rtd_codec_dais(rtd, i, codec_dai) {
(A)			...
 v			bits = max(pcm_codec->sig_bits, bits);
		}

 ^		for_each_rtd_cpu_dais(rtd, i, cpu_dai) {
(B)			...
 v			cpu_bits = max(pcm_cpu->sig_bits, cpu_bits);
		}

(X)		soc_pcm_set_msb(substream, bits);
(Y)		soc_pcm_set_msb(substream, cpu_bits);
	}
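For reference, soc_pcm_set_msb() called at (X)/(Y) just applies the msbits hw constraint and ignores 0 bits. This is quoted from my reading of current soc-pcm.c, so please correct me if it is stale:

	static void soc_pcm_set_msb(struct snd_pcm_substream *substream, int bits)
	{
		int ret;

		/* sig_bits == 0 means "no constraint from this DAI" */
		if (!bits)
			return;

		ret = snd_pcm_hw_constraint_msbits(substream->runtime, 0, 0, bits);
		if (ret != 0)
			dev_warn(substream->pcm->card->dev,
				 "ASoC: Failed to set MSB %d: %d\n", bits, ret);
	}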
I wonder, do we really need both (X) and (Y) ? I think we can merge (A) and (B) (= find the max sig_bits across both Codec and CPU DAIs) and call soc_pcm_set_msb() once, but am I misunderstanding something ?
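For example, below is a completely untested sketch of the merge I have in mind. It assumes for_each_rtd_dais() (which iterates over both CPU and Codec DAIs) and snd_soc_dai_get_pcm_stream() can be used here as-is:

	static void soc_pcm_apply_msb(struct snd_pcm_substream *substream)
	{
		struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
		struct snd_soc_pcm_stream *pcm_stream;
		struct snd_soc_dai *dai;
		int stream = substream->stream;
		unsigned int bits = 0;
		int i;

		/* (A) + (B): find max sig_bits over all CPU and Codec DAIs */
		for_each_rtd_dais(rtd, i, dai) {
			pcm_stream = snd_soc_dai_get_pcm_stream(dai, stream);
			if (pcm_stream->sig_bits == 0) {
				bits = 0;
				break;
			}
			bits = max(pcm_stream->sig_bits, bits);
		}

		/* (X) + (Y): apply the constraint only once */
		soc_pcm_set_msb(substream, bits);
	}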
There have been many patches around this code; below are the main ones. The 1st patch in the list already contains both the (X) and (Y) calls.
new 19bdcc7aeed4169820be6a683c422fc06d030136 ("ASoC: Add multiple CPU DAI support for PCM ops")
57be92066f68e63bd4a72a65d45c3407c0cb552a ("ASoC: soc-pcm: cleanup soc_pcm_apply_msb()")
c8dd1fec47d0b1875f292c40bed381b343e38b40 ("ASoC: pcm: Refactor soc_pcm_apply_msb for multicodecs")
old 58ba9b25454fe9b6ded804f69cb7ed4500b685fc ("ASoC: Allow drivers to specify how many bits are significant on a DAI")
Thank you for your help !!
Best regards
---
Kuninori Morimoto