[RFC PATCH v2 5/5] ASoC: apple: Add macaudio machine driver

Martin Povišer povik+lin at cutebit.org
Mon Jun 6 23:33:25 CEST 2022


> On 6. 6. 2022, at 23:22, Pierre-Louis Bossart <pierre-louis.bossart at linux.intel.com> wrote:
> 
> On 6/6/22 15:46, Martin Povišer wrote:
>> (I am having trouble delivering mail to linux.intel.com, so I reply to the list
>> and CC at least...)
>> 
>>> On 6. 6. 2022, at 22:02, Pierre-Louis Bossart <pierre-louis.bossart at linux.intel.com> wrote:
>>> 
>>> 
>>>> + * Virtual FE/BE Playback Topology
>>>> + * -------------------------------
>>>> + *
>>>> + * The platform driver has independent frontend and backend DAIs with the
>>>> + * option of routing backends to any of the frontends. The platform
>>>> + * driver configures the routing based on DPCM couplings in ASoC runtime
>>>> + * structures, which ASoC in turn determines from DAPM paths. But the
>>>> + * platform driver doesn't supply the relevant DAPM paths, leaving that to
>>>> + * the machine driver to fill in. The filled-in virtual topology can be
>>>> + * anything as long as a particular backend isn't connected to more than one
>>>> + * frontend at any given time. (The limitation is due to the unsupported case
>>>> + * of reparenting of live BEs.)
>>>> + *
>>>> + * The DAPM routing that this machine-level driver makes up has two use-cases
>>>> + * in mind:
>>>> + *
>>>> + * - Using a single PCM for playback such that it conditionally sinks to either
>>>> + *   speakers or headphones based on the plug-in state of the headphones jack.
>>>> + *   All the while making the switch transparent to userspace. This has the
>>>> + *   drawback of requiring a sample stream suited for both speakers and
>>>> + *   headphones, which is hard to come by on machines where tailored DSP for
>>>> + *   speakers in userspace is desirable or required.
>>>> + *
>>>> + * - Driving the headphones and speakers from distinct PCMs, having userspace
>>>> + *   bridge the difference and apply different signal processing to the two.
>>>> + *
>>>> + * In the end the topology supplied by this driver looks like this:
>>>> + *
>>>> + *  PCMs (frontends)                   I2S Port Groups (backends)
>>>> + *  ────────────────                   ──────────────────────────
>>>> + *
>>>> + *  ┌──────────┐       ┌───────────────► ┌─────┐     ┌──────────┐
>>>> + *  │ Primary  ├───────┤                 │ Mux │ ──► │ Speakers │
>>>> + *  └──────────┘       │    ┌──────────► └─────┘     └──────────┘
>>>> + *                ┌─── │ ───┘             ▲
>>>> + *  ┌──────────┐  │    │                  │
>>>> + *  │Secondary ├──┘    │     ┌────────────┴┐
>>>> + *  └──────────┘       ├────►│Plug-in Demux│
>>>> + *                     │     └────────────┬┘
>>>> + *                     │                  │
>>>> + *                     │                  ▼
>>>> + *                     │                 ┌─────┐     ┌──────────┐
>>>> + *                     └───────────────► │ Mux │ ──► │Headphones│
>>>> + *                                       └─────┘     └──────────┘
>>>> + */
>>> 
>>> In Patch2, the 'clusters' are described as front-ends, with I2S as
>>> back-ends. Here the PCMs are described as front-ends, but there's no
>>> mention of clusters. Either one of the two descriptions is outdated, or
>>> there's something missing to help reconcile the two pieces of information?
>> 
>> Both descriptions should be in sync. Maybe I don’t know the proper
>> terminology. In both cases the frontend is meant to be the actual I2S
>> transceiver unit, and backend the I2S port on the SoC’s periphery,
>> which can be routed to any of the transceiver units. (Multiple ports can
>> be routed to the same unit, which means the ports will have the same
>> clocks and data line -- that's a configuration we need to support to
>> drive some of the speaker arrays, hence the backend/frontend
>> distinction).
>> 
>> Maybe I am using 'PCM' in a confusing way here? What I meant is a
>> subdevice that’s visible from userspace, because I have seen it used
>> that way in the ALSA codebase.
> 
> Yes, I think most people familiar with DPCM would take the 'PCM
> frontend' as some sort of generic DMA transfer from system memory, while
> the 'back end' is more the actual serial link. In your case, the
> front-end is already very low-level and tied to I2S. I think
> that's fine, it's just that using different terms for 'cluster' and
> 'PCM' in different patches could lead to confusions.

OK, will take this into account then.

>>>> +static int macaudio_get_runtime_mclk_fs(struct snd_pcm_substream *substream)
>>>> +{
>>>> +	struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
>>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(rtd->card);
>>>> +	struct snd_soc_dpcm *dpcm;
>>>> +
>>>> +	/*
>>>> +	 * If this is an FE, look it up in link_props directly.
>>>> +	 * If this is a BE, look it up via the respective FE.
>>>> +	 */
>>>> +	if (!rtd->dai_link->no_pcm)
>>>> +		return ma->link_props[rtd->dai_link->id].mclk_fs;
>>>> +
>>>> +	for_each_dpcm_fe(rtd, substream->stream, dpcm) {
>>>> +		int fe_id = dpcm->fe->dai_link->id;
>>>> +
>>>> +		return ma->link_props[fe_id].mclk_fs;
>>>> +	}
>>> 
>>> I am not sure what the concept of mclk would mean for a front-end? This
>>> is typically very I2S-specific, i.e. a back-end property, no?
>> 
>> Right, that’s a result of the terminology confusion above; I hope I have
>> cleared it up. The frontend already decides the clocks and data
>> serialization, hence mclk/fs is a frontend property here.
> 
> What confuses me in this code is that it looks possible that the front-
> and back-end could have different mclk values? I think a comment is
> missing that the values are identical, it's just that there's a
> different way to access it depending on the link type?

Well, that’s exactly what I am trying to convey with the comment at the
top of macaudio_get_runtime_mclk_fs: the value is the same either way,
only the lookup path differs by link type. Maybe I should make that
more obvious.

>>>> +static int macaudio_be_init(struct snd_soc_pcm_runtime *rtd)
>>>> +{
>>>> +	struct snd_soc_card *card = rtd->card;
>>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(card);
>>>> +	struct macaudio_link_props *props = &ma->link_props[rtd->dai_link->id];
>>>> +	struct snd_soc_dai *dai;
>>>> +	int i, ret;
>>>> +
>>>> +	ret = macaudio_be_assign_tdm(rtd);
>>>> +	if (ret < 0)
>>>> +		return ret;
>>>> +
>>>> +	if (props->is_headphones) {
>>>> +		for_each_rtd_codec_dais(rtd, i, dai)
>>>> +			snd_soc_component_set_jack(dai->component, &ma->jack, NULL);
>>>> +	}
>>> 
>>> this is weird, set_jack() is invoked by the ASoC core. You shouldn't
>>> need to do this?
>> 
>> That’s interesting. Where would it be invoked? How does ASoC know which codec
>> it attaches to?
> 
> sorry, my comment was partly invalid.
> 
> set_jack() is invoked in the machine driver indeed, what I found strange
> is that you may have different codecs handling the jack? What is the
> purpose of that loop?

There’s no need for the loop; there’s only a single codec. OK, I will remove
the loop to make it less confusing.

>>>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>>>> +				void *data);
>>>> +
>>>> +static struct notifier_block macaudio_jack_nb = {
>>>> +	.notifier_call = macaudio_jack_event,
>>>> +};
>>> 
>>> why is this needed? we have already many ways of dealing with the jack
>>> events (dare I say too many ways?).
>> 
>> Because I want to update the DAPM paths based on the jack status,
>> specifically I want to set macaudio_plugin_demux. I don’t know how
>> else it could be done.
> 
> I don't know either but I have never seen notifier blocks being used. I
> would think there are already ways to do this with DAPM events.
> 
> 
>>>> +static int macaudio_jack_event(struct notifier_block *nb, unsigned long event,
>>>> +				void *data)
>>>> +{
>>>> +	struct snd_soc_jack *jack = data;
>>>> +	struct macaudio_snd_data *ma = snd_soc_card_get_drvdata(jack->card);
>>>> +
>>>> +	ma->jack_plugin_state = !!event;
>>>> +
>>>> +	if (!ma->plugin_demux_kcontrol)
>>>> +		return 0;
>>>> +
>>>> +	snd_soc_dapm_mux_update_power(&ma->card.dapm, ma->plugin_demux_kcontrol,
>>>> +				      ma->jack_plugin_state,
>>>> +				      (struct soc_enum *) &macaudio_plugin_demux_enum, NULL);
>>> 
>>> the term 'plugin' can be understood in many ways by different audio
>>> folks. 'plugin' is usually the term used for processing libraries (VST,
>>> LADSPA, etc). I think here you meant 'jack control'?
>> 
>> So ‘jack control’ would be understood as the jack plugged/unplugged status?
> 
> The 'Headphone Jack' or 'Headset Mic Jack' kcontrols typically track the
> status. Other terms are 'jack detection'. "plugin" is not a very common
> term here.

OK


