Question about soc_pcm_apply_msb()
Hi ALSA ML
At soc_pcm_apply_msb(), part (A) finds the max sig_bits for the Codec DAIs, part (B) finds the max sig_bits for the CPU DAIs, and each value is set on the substream via soc_pcm_set_msb() at (X) and (Y).
    static void soc_pcm_apply_msb()
    {
        ...
(A)     for_each_rtd_codec_dais(rtd, i, codec_dai) {
            ...
            bits = max(pcm_codec->sig_bits, bits);
        }

(B)     for_each_rtd_cpu_dais(rtd, i, cpu_dai) {
            ...
            cpu_bits = max(pcm_cpu->sig_bits, cpu_bits);
        }

(X)     soc_pcm_set_msb(substream, bits);
(Y)     soc_pcm_set_msb(substream, cpu_bits);
    }
I wonder whether we really need both (X) and (Y). I think we can merge (A) and (B) (= find the Codec/CPU max sig_bits together) and call soc_pcm_set_msb() once, but am I misunderstanding something?
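Something like below is my rough image (untested, just to show the idea being asked about):

    for_each_rtd_codec_dais(rtd, i, codec_dai) {
        ...
        bits = max(pcm_codec->sig_bits, bits);
    }
    for_each_rtd_cpu_dais(rtd, i, cpu_dai) {
        ...
        bits = max(pcm_cpu->sig_bits, bits);
    }

    soc_pcm_set_msb(substream, bits);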
We have had many patches in this area; the main ones are listed below, newest first. The first one listed has both the (X) and (Y) calls.
new 19bdcc7aeed4169820be6a683c422fc06d030136 ("ASoC: Add multiple CPU DAI support for PCM ops")
57be92066f68e63bd4a72a65d45c3407c0cb552a ("ASoC: soc-pcm: cleanup soc_pcm_apply_msb()")
c8dd1fec47d0b1875f292c40bed381b343e38b40 ("ASoC: pcm: Refactor soc_pcm_apply_msb for multicodecs")
old 58ba9b25454fe9b6ded804f69cb7ed4500b685fc ("ASoC: Allow drivers to specify how many bits are significant on a DAI")
Thank you for your help !!
Best regards
---
Kuninori Morimoto
On 12/16/21 7:53 AM, Kuninori Morimoto wrote:
> I wonder whether we really need both (X) and (Y). I think we can merge (A) and (B) (= find the Codec/CPU max sig_bits together) and call soc_pcm_set_msb() once, but am I misunderstanding something?
We need both. Or alternatively you could write soc_pcm_set_msb(substream, min(bits, cpu_bits)).
What this does is it computes the maximum msb bits from both the CPU side and the CODEC side and then sets the msb bits reported to userspace to the minimum of the two.
The largest number of MSBs we'll see on the CODEC side is the max() and the largest number of MSBs we'll see on the CPU side is the max(). And the number of MSBs that the application will be able to see is the smaller of the two.
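To make that concrete, here is a small stand-alone sketch (plain userspace C, not the kernel code; the helper names are made up for illustration) of how applying the CODEC maximum and then the CPU maximum one after the other leaves the smaller of the two, which is in effect what the two soc_pcm_set_msb() calls do through the msbits constraint:

/*
 * Stand-alone demo (not kernel code) of how two msbits constraints compose.
 * Each soc_pcm_set_msb() call ends up, in effect, doing
 * "msbits = min_not_zero(msbits, new_bits)", so applying the CODEC maximum
 * and then the CPU maximum leaves the smaller of the two -- the value
 * userspace ends up seeing.
 */
#include <stdio.h>

/* like the kernel's min_not_zero(): a zero value means "not set yet" */
static unsigned int min_not_zero(unsigned int a, unsigned int b)
{
	if (a == 0)
		return b;
	if (b == 0)
		return a;
	return a < b ? a : b;
}

/* one constraint application; bits == 0 means "driver did not report sig_bits" */
static unsigned int apply_msb(unsigned int msbits, unsigned int bits)
{
	if (bits == 0)
		return msbits;
	return min_not_zero(msbits, bits);
}

int main(void)
{
	unsigned int codec_bits = 24;	/* (A): max sig_bits over all CODEC DAIs */
	unsigned int cpu_bits   = 32;	/* (B): max sig_bits over all CPU DAIs   */
	unsigned int msbits     = 0;	/* nothing applied yet */

	msbits = apply_msb(msbits, codec_bits);	/* (X) */
	msbits = apply_msb(msbits, cpu_bits);	/* (Y) */

	/* prints 24: the minimum of the CODEC max and the CPU max */
	printf("msbits reported to userspace: %u\n", msbits);

	return 0;
}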
Hi Lars-Peter
Thank you for your feedback
>> I wonder whether we really need both (X) and (Y). I think we can merge (A) and (B) (= find the Codec/CPU max sig_bits together) and call soc_pcm_set_msb() once, but am I misunderstanding something?
>
> We need both. Or alternatively you could write soc_pcm_set_msb(substream, min(bits, cpu_bits)).
>
> What this does is it computes the maximum msb bits from both the CPU side and the CODEC side and then sets the msb bits reported to userspace to the minimum of the two.
>
> The largest number of MSBs we'll see on the CODEC side is the max() and the largest number of MSBs we'll see on the CPU side is the max(). And the number of MSBs that the application will be able to see is the smaller of the two.
Oh, yes. Thank you for explaining the details. I think snd_pcm_hw_rule_msbits() was the point.
Best regards
---
Kuninori Morimoto