Re: [alsa-devel] [PATCH 2/2] ASoC: codecs: Add da7218 codec driver
On Thu, Nov 05, 2015 at 10:43:19AM +0000, Adam Thomson wrote:
+/* ALC */
+static void da7218_alc_calib(struct snd_soc_codec *codec)
+{
- struct da7218_priv *da7218 = snd_soc_codec_get_drvdata(codec);
- u8 calib_ctrl;
- int i = 0;
- bool calibrated = false;
- /* Bypass cache so it saves current settings */
- regcache_cache_bypass(da7218->regmap, true);
What ensures that nothing else is running at the same time this is?
+static int da7218_mic_lvl_det_sw_put(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
+{
Why is this a user visible control?
- /* Default all mixers off */
- snd_soc_write(codec, DA7218_DROUTING_OUTDAI_1L, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTDAI_1R, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTDAI_2L, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTDAI_2R, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTFILT_1L, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTFILT_1R, 0);
- snd_soc_write(codec, DA7218_DROUTING_ST_OUTFILT_1L, 0);
- snd_soc_write(codec, DA7218_DROUTING_ST_OUTFILT_1R, 0);
We generally just use the device defaults, why change them?
On November 5, 2015 15:28, Mark Brown wrote:
+/* ALC */
+static void da7218_alc_calib(struct snd_soc_codec *codec)
+{
- struct da7218_priv *da7218 = snd_soc_codec_get_drvdata(codec);
- u8 calib_ctrl;
- int i = 0;
- bool calibrated = false;
- /* Bypass cache so it saves current settings */
- regcache_cache_bypass(da7218->regmap, true);
What ensures that nothing else is running at the same time this is?
That's a fair point. Originally I was saving the state of the registers and then reinstating them at the end, which worked fine, but I then tried to be clever and tidy things up by bypassing the cache instead. I'll revert to the previous method.
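What I had originally, and will go back to, was roughly along these lines (just a sketch; the register names here are illustrative rather than the exact defines used in the patch):

static void da7218_alc_calib(struct snd_soc_codec *codec)
{
	u8 mic_1_ctrl, mic_2_ctrl;

	/* Save the current settings of the registers calibration touches */
	mic_1_ctrl = snd_soc_read(codec, DA7218_MIC_1_CTRL);
	mic_2_ctrl = snd_soc_read(codec, DA7218_MIC_2_CTRL);

	/* ... mute inputs, enable the capture path, run auto calibration ... */

	/* Restore the saved settings through the normal (cached) write path */
	snd_soc_write(codec, DA7218_MIC_1_CTRL, mic_1_ctrl);
	snd_soc_write(codec, DA7218_MIC_2_CTRL, mic_2_ctrl);
}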
+static int da7218_mic_lvl_det_sw_put(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
+{
Why is this a user visible control?
I can envisage in a system you may want to choose which capture channels can trigger level detection (if any), and this may change depending on the use-case at the time, so having it as a control makes sense to me.
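For illustration, the put handler for such a control essentially just builds a per-channel enable mask over the input filters, something like the sketch below (DA7218_IN_FILTER_NUM and the mic_lvl_det_en field are placeholder names, not necessarily those used in the patch):

static int da7218_mic_lvl_det_sw_put(struct snd_kcontrol *kcontrol,
				     struct snd_ctl_elem_value *ucontrol)
{
	struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
	struct da7218_priv *da7218 = snd_soc_codec_get_drvdata(codec);
	u8 lvl_det_en = 0;
	int i;

	/* One enable bit per input filter channel (1L/1R/2L/2R) */
	for (i = 0; i < DA7218_IN_FILTER_NUM; ++i)
		if (ucontrol->value.integer.value[i])
			lvl_det_en |= BIT(i);

	/* Store the mask; it is applied when the relevant path is powered */
	da7218->mic_lvl_det_en = lvl_det_en;

	return 0;
}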
- /* Default all mixers off */
- snd_soc_write(codec, DA7218_DROUTING_OUTDAI_1L, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTDAI_1R, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTDAI_2L, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTDAI_2R, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTFILT_1L, 0);
- snd_soc_write(codec, DA7218_DROUTING_OUTFILT_1R, 0);
- snd_soc_write(codec, DA7218_DROUTING_ST_OUTFILT_1L, 0);
- snd_soc_write(codec, DA7218_DROUTING_ST_OUTFILT_1R, 0);
We generally just use the device defaults, why change them?
I figured it made more sense to have the device start with audio routes disabled but I can remove this as it's really not essential.
On Fri, Nov 06, 2015 at 11:11:38AM +0000, Opensource [Adam Thomson] wrote:
On November 5, 2015 15:28, Mark Brown wrote:
+static int da7218_mic_lvl_det_sw_put(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
+{
Why is this a user visible control?
I can envisage in a system you may want to choose which capture channels can trigger level detection (if any), and this may change depending on the use-case at the time, so having it as a control makes sense to me.
What is a "capture channel" here?
On November 6, 2015 11:22, Mark Brown wrote:
+static int da7218_mic_lvl_det_sw_put(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
+{
Why is this a user visible control?
I can envisage in a system you may want to choose which capture channels can trigger level detection (if any), and this may change depending on the use-case at the time, so having it as a control makes sense to me.
What is a "capture channel" here?
Input filters 1L/R and 2L/R, which are fed from either Mic1(ADC1) or DMic1L/R and Mic2(ADC2) or DMic2L/R.
On Fri, Nov 06, 2015 at 11:53:00AM +0000, Opensource [Adam Thomson] wrote:
On November 6, 2015 11:22, Mark Brown wrote:
I can envisage in a system you may want to choose which capture channels can trigger level detection (if any), and this may change depending on the use-case at the time, so having it as a control makes sense to me.
What is a "capture channel" here?
Input filters 1L/R and 2L/R, which are fed from either Mic1(ADC1) or DMic1L/R and Mic2(ADC2) or DMic2L/R.
Hang on, is this just recording a DC value with the ADC and then looking at that?
On November 6, 2015 11:55, Mark Brown wrote:
I can envisage in a system you may want to choose which capture channels can trigger level detection (if any), and this may change depending on the use-case at the time, so having it as a control makes sense to me.
What is a "capture channel" here?
Input filters 1L/R and 2L/R, which are fed from either Mic1(ADC1) or DMic1L/R and Mic2(ADC2) or DMic2L/R.
Hang on, is this just recording a DC value with the ADC and then looking at that?
The RMS of the Mic signal is taken and compared to the trigger level set. If it's above that level then an IRQ is raised.
On Fri, Nov 06, 2015 at 01:17:28PM +0000, Opensource [Adam Thomson] wrote:
On November 6, 2015 11:55, Mark Brown wrote:
Hang on, is this just recording a DC value with the ADC and then looking at that?
The RMS of the Mic signal is taken and compared to the trigger level set. If it's above that level then an IRQ is raised.
What I'm trying to figure out here is if this depends on the audio routing at runtime or if it's got dedicated configuration?
On November 8, 2015 10:34, Mark Brown wrote:
Hang on, is this just recording a DC value with the ADC and then looking at that?
The RMS of the Mic signal is taken and compared to the trigger level set. If it's above that level then an IRQ is raised.
What I'm trying to figure out here is if this depends on the audio routing at runtime or if it's got dedicated configuration?
This feature is available for any/all mics connected. Which mics are enabled is a runtime configuration of routing, so to me it makes sense also that we can configure which channel triggers an event, based on our scenario at that time.
On Mon, Nov 09, 2015 at 12:28:39PM +0000, Opensource [Adam Thomson] wrote:
On November 8, 2015 10:34, Mark Brown wrote:
What I'm trying to figure out here is if this depends on the audio routing at runtime or if it's got dedicated configuration?
This feature is available for any/all mics connected. Which mics are enabled is a runtime configuration of routing, so to me it makes sense also that we can configure which channel triggers an event, based on our scenario at that time.
The general userspace expectation is that the detection is always active and consistent rather than varying at runtime - runtime variability might be a bit surprising to it, and even then variability in what is detected based on other settings is a bit surprising. If the hardware is that limited I guess it's about all that can be done, but I'm still not clear what the use cases are for configuring the levels (as opposed to the routing).
On November 9, 2015 14:02, Mark Brown wrote:
What I'm trying to figure out here is if this depends on the audio routing at runtime or if it's got dedicated configuration?
This feature is available for any/all mics connected. Which mics are enabled is a runtime configuration of routing, so to me it makes sense also that we can configure which channel triggers an event, based on our scenario at that time.
The general userspace expectation is that the detection is always active and consistent rather than varying at runtime - runtime variability might be a bit surprising to it, and even then variability in what is detected based on other settings is a bit surprising. If the hardware is that limited I guess it's about all that can be done, but I'm still not clear what the use cases are for configuring the levels (as opposed to the routing).
How about the example of always-on voice in Android, which can be enabled and disabled depending on user settings, and where routing will vary depending on which mic is in use at the time? For the levelling, is it not plausible that a user could configure the level based on their current environment? With moderately loud background noise your threshold would want to be higher, but in a quiet environment the likelihood is you would want to lower that threshold.
On Tue, Nov 10, 2015 at 01:55:30PM +0000, Opensource [Adam Thomson] wrote:
On November 9, 2015 14:02, Mark Brown wrote:
The general userspace expectation is that the detection is always active and consistent rather than varying at runtime - runtime variability might be a bit surprising to it, and even then variability in what is detected based on other settings is a bit surprising. If the hardware is that limited I guess it's about all that can be done, but I'm still not clear what the use cases are for configuring the levels (as opposed to the routing).
How about the example of always-on voice in Android, which can be enabled and disabled depending on user settings, and where routing will vary depending on which mic is in use at the time? For the levelling, is it not plausible that a user could configure the level based on their current environment? With moderately loud background noise your threshold would want to be higher, but in a quiet environment the likelihood is you would want to lower that threshold.
So this *isn't* a normal mic detection feature? What's the userspace interface for reporting then?
On November 10, 2015 14:15, Mark Brown wrote:
The general userspace expectation is that the detection is always active and consistent rather than varying at runtime - runtime variability might be a bit surprising to it, and even then variability in what is detected based on other settings is a bit surprising. If the hardware is that limited I guess it's about all that can be done, but I'm still not clear what the use cases are for configuring the levels (as opposed to the routing).
How about the example of always-on voice in Android, which can be enabled and disabled depending on user settings, and where routing will vary depending on which mic is in use at the time? For the levelling, is it not plausible that a user could configure the level based on their current environment? With moderately loud background noise your threshold would want to be higher, but in a quiet environment the likelihood is you would want to lower that threshold.
So this *isn't* a normal mic detection feature? What's the userspace interface for reporting then?
By mic detection, do you mean you thought this was to detect whether a mic was present or not? It's to detect the noise level on a mic and raise an event if the captured sound is above a specific threshold level. Apologies if that wasn't clear.
In the driver code I'm using KEY_VOICECOMMAND, and simulating a press and release of this key, to indicate to user-space. This seemed like the obvious choice for this feature to me, although I'd happily get your opinion on this.
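Concretely, the reporting side amounts to a simulated press/release pair from the IRQ thread, roughly as below (a sketch only; the handler and input_dev field names are indicative rather than the actual ones in the driver):

static irqreturn_t da7218_irq_thread(int irq, void *data)
{
	struct da7218_priv *da7218 = data;

	/* Level detect fired: report a press then a release to user-space */
	input_report_key(da7218->input_dev, KEY_VOICECOMMAND, 1);
	input_sync(da7218->input_dev);
	input_report_key(da7218->input_dev, KEY_VOICECOMMAND, 0);
	input_sync(da7218->input_dev);

	return IRQ_HANDLED;
}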
On Tue, Nov 10, 2015 at 02:24:13PM +0000, Opensource [Adam Thomson] wrote:
On November 10, 2015 14:15, Mark Brown wrote:
So this *isn't* a normal mic detection feature? What's the userspace interface for reporting then?
By mic detection you thought this was to detect if a mic was present or not?
That and button detection.
It's to detect the noise level on a mic and raise an event if the captured sound is above a specific threshold level. Apologies if that wasn't clear.
In the driver code I'm using KEY_VOICECOMMAND, and simulating a press and release of this key, to indicate to user-space. This seemed like the obvious choice for this feature to me, although I'd happily get your opinion on this.
That seems like a particularly unfortunate choice given that VOICECOMMAND is used in the standard Google headset mapping (see ts3a227e for an example, that's a device specifically aimed at providing accessory detection in Chromebooks). There's also been some pushback against using the input devices due to the difficulty in enabling apps to access input devices - ALSA controls were preferred instead but that's less helpful for tinyalsa. Perhaps that can be added relatively easily, or a uevent or something.
Not sure what the best way forward here is, the other implementations of this that I'm aware of do more of the detection in offload and present streams of detected audio to userspace via normal capture.
I would at least suggest moving this into a separate patch and doing the integration separately.
On November 10, 2015 15:45, Mark Brown wrote:
It's to detect the noise level on a mic and raise an event if the captured sound is above a specific threshold level. Apologies if that wasn't clear.
In the driver code I'm using KEY_VOICECOMMAND, and simulating a press and release of this key, to indicate to user-space. This seemed like the obvious choice for this feature to me, although I'd happily get your opinion on this.
That seems like a particularly unfortunate choice given that VOICECOMMAND is used in the standard Google headset mapping (see ts3a227e for an example, that's a device specifically aimed at providing accessory detection in Chromebooks). There's also been some pushback against using the input devices due to the difficulty in enabling apps to access input devices - ALSA controls were preferred instead but that's less helpful for tinyalsa. Perhaps that can be added relatively easily, or a uevent or something.
I chose VOICECOMMAND as I thought this kind of feature might offer the same kind of use as the physical button, but if this is only for Google headset use then fair enough.
Not sure what the best way forward here is, the other implementations of this that I'm aware of do more of the detection in offload and present streams of detected audio to userspace via normal capture.
Yes, this is far simpler, and any voice processing or capture is not handled by the codec. It's just an indication of above-threshold noise level at the mic. For the implementations you know of, how are those events indicated to user-space?
I would at least suggest moving this into a separate patch and doing the integration separately.
Are you happy for me to leave the actual controls for this feature in, without the user-space reporting side? Otherwise it's a pain to strip that out and then reinstate it later. The event can be masked off until the user-space reporting is added in a subsequent patch.
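Masking it off until then would just be a case of keeping the relevant event mask bit set, along these lines (the register and bit names here are illustrative only, not necessarily the real DA7218 defines):

	/* Keep the level detect event masked until user-space reporting is wired up */
	snd_soc_update_bits(codec, DA7218_EVENT_MASK,
			    DA7218_LVL_DET_EVENT_MSK_MASK,
			    DA7218_LVL_DET_EVENT_MSK_MASK);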
On Tue, Nov 10, 2015 at 04:21:04PM +0000, Opensource [Adam Thomson] wrote:
On November 10, 2015 15:45, Mark Brown wrote:
That seems like a particularly unfortunate choice given that VOICECOMMAND is used in the standard Google headset mapping (see ts3a227e for an example, that's a device specifically aimed at providing accessory detection in Chromebooks). There's also been some pushback against using the input devices due to the difficulty in enabling apps to access input devices - ALSA controls were preferred instead but that's less helpful for tinyalsa. Perhaps that can be added relatively easily, or a uevent or something.
I chose VOICECOMMAND as I thought this kind of feature might offer the same kind of use as the physical button, but if this is only for Google headset use then fair enough.
No, that's a generic button, but the point is that the expected workflow from userspace is going to be different if the user presses a button to initiate a voice command compared to if they use an activation phrase.
Not sure what the best way forward here is, the other implementations of this that I'm aware of do more of the detection in offload and present streams of detected audio to userspace via normal capture.
Yes, this is far simpler, and any voice processing or capture is not handled by the codec. It's just an indication of above-threshold noise level at the mic. For the implementations you know of, how are those events indicated to user-space?
I'm not aware of any implementations that just do the activity detection. I've seen hardware with it but nobody using it in software.
I would at least suggest moving this into a separate patch and doing the integration separately.
Are you happy for me to leave the actual controls for this feature in, without the user-space reporting side? Otherwise it's a pain to strip that out and then reinstate it later. The event can be masked off until the user-space reporting is added in a subsequent patch.
Possibly, let's see what the code looks like.