[alsa-devel] [Minutes] ELCE Audio mini conf.
I've written up the minutes here below (including Vinod's minutes after I left for my flight). Please do feel free to correct or add any missing items.
If you intend to reply and discuss a topic in detail then it would probably be helpful to include the topic in the $SUBJECT (so that it's easy to track multiple topics that may arise from the minutes).
Jaroslav, I have some PDF slides from a few delegates. Can I post them onto the Wiki? This failed the last time I tried, after the Edinburgh audio conference (the Wiki could not upload PDFs??).
Soundwire: Pierre
=================
- Slides to be posted.
- Shadow registers required for regmap. Mark: need to make sure we can commit changes, plus rules for writing. Codec drivers can be reused since the register mapping should be similar to I2C regmaps.
- Some devices won't support runtime clock changes. Need compatible devices on the bus.
- Each device has a unique ID set by HW. Device enumeration can be auto-discovered. Can detect a device but not its capabilities.
- Device capabilities can come from hard-coded driver data or ACPI/DT data. Windows may use .inf files for capabilities.
- Define a bus API for the master "controller".
- Grouped triggers can be used to start duplex streams for simultaneous capture and playback.
- A DAI is the physical connection; logical links need to be worked out with the routing config. Mark: redefining of ports/slots at runtime, i.e. leave handling of supported port configurations up to userspace.
Energy efficiency: Alexander (minutes also supplied by Alexander)
==================================================================
Power consumption of a SandyBridge-based Sony laptop was measured with a workload that does only audio playback. To ensure a reproducible workload, the whole system was constructed by a script using debootstrap and placed into an initramfs. Various versions of PulseAudio were compared to CRAS and to the use of the ALSA hw device, at latencies of 7 ms, 28 ms and 448 ms.

Even in the best case (ALSA), going from 448 ms to 28 ms means using an extra 300 mW. For PulseAudio, the overhead of using low latency is even higher, presumably because of a chatty protocol. In version 7 there was some work (srbchannel) aimed at reducing this overhead, but there is still more to do. There was a suggestion to use perf to analyze the problem further.

It should be noted that latencies as high as 448 ms are not usable without the ability to rewind, which PulseAudio possesses (at the cost of more complex, and actually incorrect, code) and CRAS doesn't. So the CRAS decision to never do rewinds costs it 300 mW (448 ms latency is usable only for benchmarking purposes). Note: at least two people in the audience misunderstood this statement as originally formulated (thinko?) and took it as "the PulseAudio decision to do rewinds costs it 300 mW", which is of course wrong. The cost of rewinds is in code complexity, real user-reported bugs and "how to test this code path" questions from developers, which, in the opinion of Alexander Patrakov, may outweigh the gains in energy efficiency (Arun Raghavan disagrees).

A proposal was made by P.-L. Bossart to introduce a hw param that the application would set if it intends to never do rewinds, with the implication that a SkyLake driver could use this information for codec-side power saving (by prefetching audio data and powering the link off). Arun Raghavan (presumably due to the above misunderstanding) disagreed and instead proposed to make the device non-rewindable if this means power savings. Alexander is in favor of the hw param, because it would be useful for CRAS but harmful for the current PulseAudio (it would force abandoning high latency). A future version of PulseAudio could, though, use a separate DSP stream for a music application (if there is only one), and, because a new application never appears and the hardware volume is precise enough, there would never be a need to rewind. This, though, would require eliminating support for rewinds from the client API (the last two parameters of pa_stream_write()), which can be done by ignoring these parameters.
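For reference, a minimal sketch (not from the discussion; the callback wiring is illustrative only) of what "ignoring these parameters" would amount to on the client side. The last two arguments of pa_stream_write() are the ones that allow a client to rewrite audio it has already queued; a client that never rewinds always does a plain append:

#include <pulse/pulseaudio.h>

/* Write callback for a client that never rewinds: it only ever
 * appends, so the offset/seek parameters of pa_stream_write() are
 * effectively unused (0 / PA_SEEK_RELATIVE). */
static void write_cb(pa_stream *s, size_t nbytes, void *userdata)
{
    void *buf;

    /* Let PulseAudio hand us a buffer of up to nbytes bytes. */
    if (pa_stream_begin_write(s, &buf, &nbytes) < 0)
        return;

    /* ... fill buf with nbytes bytes of audio here ... */

    /* offset = 0, PA_SEEK_RELATIVE: plain append, no rewind. */
    pa_stream_write(s, buf, nbytes, NULL, 0, PA_SEEK_RELATIVE);
}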
Documentation: Takashi
======================
- Patches that introduce new functionality should also include documentation.
- A caretaker is required for the docs. Who?
- Upload materials to a central location; ask kernel.org.
- Charles and Rakesh volunteered to write some docs.
Alsa-lib release schedule: Vinod
================================
- Request for more frequent releases; release in line with every 1-2 kernel releases.
- Integrate tinycompress into the ALSA release.
Testing/QA: Liam, Takashi, Mark, Pierre
=======================================
- BAT tool: now upstream; WiP on review comments, docs and new features.
- Mark is doing a brute-force tool for kcontrol testing. Kcontrol status can be watched via debugFS. Multiple threads.
- Takashi: need unit testing of codec drivers. Some codec vendors have tools for this, but something generic would be good. A QA unit test could use topology readback to check the topology in userspace.
- Pierre: latency testing; a USB device toggles a GPIO (to indicate stream start) and audio latency is measured from that point. Available in the 01.org latency tools. To be extended to BAT.
HDA/gfx: relevant people not in room
====================================
- Pierre: hotplug of DP over USB-C required. Needs input from GFX folks.
BATCH flag for USB: Arun
========================
- The flag does not correspond to reality; let's deprecate it. No users.
- Dylan: need to know the transfer size for CRAS (uses extra samples for buffering).
- The BATCH flag means period-size transfers; applications that use the new granularity API can ignore the BATCH flag. Pierre: to implement.
Splitting out controls: Takashi
===============================
- Restricted access. Consensus to restrict access to some controls due to the possibility of breaking HW at the kernel level, e.g. prevent feeding the digital mic into the HP amp to prevent the speaker overheating.
- Some clamping APIs already exist, but these should be extended to include DAPM.
Simple card: ACPI support. Vinod
================================
- Clock drivers required, alongside helper functions to help initialise DAI links etc.
- Vinod to publish the spec as an RFC in Q4.
- Doubts about using DMI names for differentiation.
Topology: Liam, Mengdong
========================
- Proposals to use debugFS/ioctl/media controller for exporting topology data to userspace. There are drawbacks with all three approaches, but they are aimed at different end users, i.e. debugFS is aimed at kernel audio developers/integrators/testers and media at userspace developers and end users. Will probably support both debugFS (or another kernel file) and the media API. Mark to discuss with the media team in Seoul. The ASoC internals that gather data for debugFS and the media API will be the same code, so nothing blocks initial development.
Tiny-fication: Keyon, Liam
==========================
- IoT devices have tight memory footprints, so audio kernel and userspace can be very limited.
- Userspace is fine with tinyalsa.
- Keyon has currently been working on some IoT device options to disable ALSA functions (e.g. a code saving of 13k), for refinement, but some IoT options can break existing userspace.
- Suggestion is to research the tinycompress API for general PCM usage.
- Look at removing DAPM strings (will need a topology dump tool to automate this, though).
- Some structures can also be shrunk.
Android configuration consolidation
===================================
- Different configuration formats: XML, UCM. This tool takes Android XML and converts it to UCM.
- Uses a Sony Xperia; generates a UCM file from the XML.
- Android: tries to converge to a HAL, but diverges quickly. Quirks for the modem etc. can't be represented well.
- Arun will send the link to his work.
HDA Restructuring: Vinod & Rakesh
=================================
- Discussed HW and SW overview.
- Future HDA work and codec conversion discussion <don't have detailed notes here as I was talking>.
DPMST: Mengdong
===============
- One DP port can be used to transmit multiple independent streams.
- There are three display pipelines in HW, so at most 3 streams can be supported across all ports.
- Need to modify the representation to have the stream per device rather than per pin.
- When a monitor is moved it can become incompatible, so the representation should be modified - the display side should tell us about the unplug/plug on a move, and the sound server reacts to it.
- This notification goes through the component framework.
- Various options were discussed. Needs more offline discussion on the approach to solve this issue.
ALSA Core Challenges
====================
- ALSA core locking is complicated, and the core code is quite difficult to understand.
- PCM linking makes things complex.
- Add documentation for locks.
- Controls can be hidden from UI tools through the card iface.
- DPCM's hidden PCMs should not be shown to userspace; hide them.
Multi-channel ASoC controls
===========================
- Channel maps don't make sense for ASoC, as ASoC paths don't have channel information.
- Conclusion: add channel maps on the HDMI edge.
- We can add multi-channel controls to ASoC.
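For reference, the channel maps discussed above are what userspace already sees through the existing alsa-lib chmap API; a minimal sketch of querying them (the "hw:0,3" device name is only an assumed typical HDMI PCM, not from the discussion):

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_chmap_query_t **maps;
    int i;

    /* Open a playback PCM; hw:0,3 is a common HDMI device name. */
    if (snd_pcm_open(&pcm, "hw:0,3", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* Ask the driver for all channel maps this PCM can offer. */
    maps = snd_pcm_query_chmaps(pcm);
    if (maps) {
        for (i = 0; maps[i]; i++) {
            char buf[128];
            snd_pcm_chmap_print(&maps[i]->map, sizeof(buf), buf);
            printf("%s\n", buf);
        }
        snd_pcm_free_chmaps(maps);
    }
    snd_pcm_close(pcm);
    return 0;
}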
Liam
On 12.10.2015 at 15:49, Liam Girdwood wrote:
I've written up the minutes here below (including Vinod's minutes after I left for my flight). Please do feel free to correct or add any missing items.
Thank you very much for this work.
Jaroslav
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls due
to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls due
to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to a state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
On 2015-10-13 15:07, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some
controls due to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
I agree: either we don't expose dangerous controls at all, or we expose them but do some restraining on the codec driver side, e.g. changing a dangerous setting to a safe one. We *cannot* be sure that 'root' knows the details, and we shouldn't just leave the risk to him. :)
~Keyon
On 10/13/15 2:07 AM, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls
due to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
There are just too many variables linked to acoustic, mechanical and thermal design that can't be handled with a simple rule at the kernel level or even in BIOS/firmware. There were quite a few people in the room who voiced their opinion that handling 'dangerous' controls is an exercise for the integrator, not something that can be handled with a one-size-fits-all fix. 'Userspace' is also a vague definition; most audio servers will use profiles that avoid bad configurations. It's not clear to me that we have to protect against a user setting random values with alsamixer.
On 2015-10-13 16:55, Pierre-Louis Bossart wrote:
On 10/13/15 2:07 AM, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls
Note: There is NOT consensus.
due to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
There are just too many variables linked to acoustic, mechanical and thermal design that just can't be handled with a simple rule at the kernel level or even BIOS/firmware. There were quite a few people in the room who voiced their opinion that handling 'dangerous' controls was an exercise for the integrator, not something that can be handled with a one-size-fits-all fix.
If userspace can make complex decisions to avoid damage, then so can the kernel. The integrator can just write that logic into the kernel instead of writing it into userspace.
'userspace' is also a vague definition, most audio servers will use profiles that will avoid using bad configurations. It's not clear to me that we have to protect against a user setting random values with alsamixer.
Whether the devices are phones, laptops, or everything in between, people are replacing their original software with other software. Surely those people are messing around with mixer controls, and surely they don't want their devices damaged.
These people might replace the kernel as well, of course, but are less likely to do so than to replace userspace; and if they do, they are less likely to mess around with a particular audio driver, especially if that driver has a big warning sign saying "changing this value might fry your speaker" (which, IMO, would be appropriate).
On 10/13/15 10:56 AM, David Henningsson wrote:
On 2015-10-13 16:55, Pierre-Louis Bossart wrote:
On 10/13/15 2:07 AM, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls
Note: There is NOT consensus.
due to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
There are just too many variables linked to acoustic, mechanical and thermal design that just can't be handled with a simple rule at the kernel level or even BIOS/firmware. There were quite a few people in the room who voiced their opinion that handling 'dangerous' controls was an exercise for the integrator, not something that can be handled with a one-size-fits-all fix.
If userspace can make complex decisions to avoid damage, then so can the kernel. The integrator can just write that logic into the kernel instead of writing it into userspace.
That seems to go against everything we've done over the last few years. And it's not even feasible to put kernel-level safeguards with the new topology tool, since the kcontrols are instantiated at run time based on a firmware description.
'userspace' is also a vague definition, most audio servers will use profiles that will avoid using bad configurations. It's not clear to me that we have to protect against a user setting random values with alsamixer.
Whether the devices are phones, laptops, or everything in between, people are replacing their original software with other software. Surely those people are messing around with mixer controls, and surely they don't want their devices damaged.
These people might replace the kernel as well, of course, but are less likely to do so than to replace userspace; and if they do, they are less likely to mess around with a particular audio driver, especially if that driver has a big warning sign saying "changing this value might fry your speaker" (which, IMO, would be appropriate).
It's a valid case, but I would argue that if you install new software you should use profiles that have been shared or used by others. If you want to protect against the limited knowledge of an individual user 'messing around with mixer controls' then this is going too far.
On 2015-10-13 18:08, Pierre-Louis Bossart wrote:
On 10/13/15 10:56 AM, David Henningsson wrote:
On 2015-10-13 16:55, Pierre-Louis Bossart wrote:
On 10/13/15 2:07 AM, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls
Note: There is NOT consensus.
due to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
There are just too many variables linked to acoustic, mechanical and thermal design that just can't be handled with a simple rule at the kernel level or even BIOS/firmware. There were quite a few people in the room who voiced their opinion that handling 'dangerous' controls was an exercise for the integrator, not something that can be handled with a one-size-fits-all fix.
If userspace can make complex decisions to avoid damage, then so can the kernel. The integrator can just write that logic into the kernel instead of writing it into userspace.
That seems to go against everything we've done over the last few years.
Alsactl, a tool that I believe runs by default on boot of almost every Linux distro, sets mixer controls to (pretty high) values based solely on their names. It does so without user interaction, and it runs as root.
Likewise, alsamixer is one of the first tools you go to when you have problems with your sound not working. Ubuntu, Debian, Arch and more have guides on their wikis of how to use it.
Hence I'd say that allowing mixer controls to destroy your hardware is to go against everything done over *more* than the last few years.
Also, if simply booting up a Linux distro from a USB stick (or however you root your device) has even the slightest chance of destroying your hardware, that'd be a pretty big punch against getting people to try it out. :-(
And if that wasn't enough, it's also an attack vector for malicious apps.
And it's not even feasible to put kernel-level safeguards with the new topology tool, since the kcontrols are instantiated at run time based on a firmware description.
Again, not buying it. A topology tool must not create kcontrols that could destroy the machine.
'userspace' is also a vague definition, most audio servers will use profiles that will avoid using bad configurations. It's not clear to me that we have to protect against a user setting random values with alsamixer.
Whether the devices are phones, laptops, or everything in between, people are replacing their original software with other software. Surely those people are messing around with mixer controls, and surely they don't want their devices damaged.
These people might replace the kernel as well, of course, but are less likely to do so than to replace userspace; and if they do, they are less likely to mess around with a particular audio driver, especially if that driver has a big warning sign saying "changing this value might fry your speaker" (which, IMO, would be appropriate).
It's a valid case but I would argue that if you install new software you should use profiles that have been shared or used by others. If you want to protect against the limited knowledge of an individual user 'messing around mixer controls' then this is going too far.
We actively enable and advocate that people with limited knowledge can 'mess around mixer controls'. That's why we have an alsamixer application in the first place, and teach people how to use it.
We actively enable and advocate that people with limited knowledge can 'mess around mixer controls'. That's why we have an alsamixer application in the first place, and teach people how to use it.
What you are describing is the traditional approach where the number of controls is limited: a couple of switches here and a set of volume controls there. With new devices having mixers all over the place, be it in codecs or DSPs, it's not uncommon to have several hundred controls. There is no way users will be able to find out on their own what values they should use, and it would be misleading to think developers are able to identify all lethal combinations of settings. We've also moved all these control settings from kernel to userspace to avoid hardcoding values that are platform-specific. Bottom line: we have to move to profiles, stop guessing values based on control names and avoid letting users poke random values into alsamixer. This just doesn't scale any more. Thinking that the alsamixer command line remains the default user-facing interface moving forward is just not right; it's a developer tool.
On Fri, 2015-10-16 at 09:49 -0500, Pierre-Louis Bossart wrote:
We actively enable and advocate that people with limited knowledge can 'mess around mixer controls'. That's why we have an alsamixer application in the first place, and teach people how to use it.
What you are describing is the traditional approach where the number of controls is limited, a couple of switches here and a set of volume controls there. With new devices having mixers all over the place, be it in codecs or DSPs, it's not uncommon to have several hundred controls. There is no way users will be able to find out on their own what values they should use and it would be misleading to think developers are able to identify all lethal combinations of settings. We've also moved all these control settings from kernel to userspace to avoid hardcoding values that are platform specific. Bottom line we have to move to profiles, stop guessing values based on control names or avoid letting users poke random values in alsamixer. This just doesn't scale any more. thinking that the alsamixer command-line remains the default user-facing interface moving forward is just not right, it's a developer tool.
I believe that you are misunderstanding David's point. Yes, there can be a large number of controls (the Wolfson WM8280 has over 400 controls, some Cirrus Logic codecs have nearly 900). The point was not whether the users should understand the meaning of all these controls but that they should be able to "play around and see what happens" without any risk of bricking their hardware. Regardless of whether what they are doing is meaningful or whether it's really feasible to set hundreds of controls correctly from the command line, they shouldn't damage the hardware. Not even root should have the ability to actually damage the hardware.
On Fri, Oct 16, 2015 at 04:24:02PM +0100, Richard Fitzgerald wrote:
On Fri, 2015-10-16 at 09:49 -0500, Pierre-Louis Bossart wrote:
Bottom line we have to move to profiles, stop guessing values based on control names or avoid letting users poke random values in alsamixer. This just doesn't scale any more. thinking that the alsamixer command-line remains the default user-facing interface moving forward is just not right, it's a developer tool.
I believe that you are misunderstanding David's point. Yes, there can be a large number of controls (the Wolfson WM8280 has over 400 controls, some Cirrus Logic codecs have nearly 900). The point was not whether the users should understand the meaning of all these controls but that they should be able to "play around and see what happens" without any risk of bricking their hardware. Regardless of whether what they are doing is meaningful or whether it's really feasible to set hundreds of controls correctly from the command line, they shouldn't damage the hardware. Not even root should have the ability to actually damage the hardware.
No, the point here is that it's unrealistically difficult to prevent hardware damage on all systems without substantial work - even at the simple level of constraining the maximum gain through the system, we currently lack the information to do that (never mind the capacity to do the calculations), and getting the limits into the kernel would involve carrying the machine-specific calibration data around, since it's not in the firmware of the relevant systems.
We can't even think about the possibility of doing this without getting the controls into the topology and once we can do that we would need to work out what we're trying to protect against and how.
On Fri, 16 Oct 2015 16:49:25 +0200, Pierre-Louis Bossart wrote:
We actively enable and advocate that people with limited knowledge can 'mess around mixer controls'. That's why we have an alsamixer application in the first place, and teach people how to use it.
What you are describing is the traditional approach where the number of controls is limited, a couple of switches here and a set of volume controls there. With new devices having mixers all over the place, be it in codecs or DSPs, it's not uncommon to have several hundred controls. There is no way users will be able to find out on their own what values they should use and it would be misleading to think developers are able to identify all lethal combinations of settings. We've also moved all these control settings from kernel to userspace to avoid hardcoding values that are platform specific.
Right, and this is the problem. The system integration information isn't the thing a typical user would need to handle.
OK, we can leave them in user space. It's more flexible, yes. But it's a bad design if a *normal* user is allowed to, and needs to, handle all of these. A normal user needs (and should be allowed) only limited presets; your car won't let you access hundreds of knobs, e.g. a child on the rear seat suddenly triggering the turbo-jump button in combination with a back-fire while driving on a highway :)
Bottom line we have to move to profiles, stop guessing values based on control names or avoid letting users poke random values in alsamixer. This just doesn't scale any more. thinking that the alsamixer command-line remains the default user-facing interface moving forward is just not right, it's a developer tool.
Well, it doesn't matter whether it's alsamixer or whatever program. The point is that *any* user program might screw things up easily, even if it's not intentional.
For Android it wasn't a big issue so far, just because only a few people touch the audio setup manually. But now the formerly embedded things are migrating more to the desktop scene, and nowadays it's pretty normal that a user just runs normal PA and ALSA apps on them, like on a PC. This will happen more and more in the coming years.
So, yes, we have profiles to manage the setups in user-space. This is very good, scalable, per se. The problem is, however, rather how to harden this management.
I still think that the driver can give the first-level filter or permission isolation. It should be doable also in topology f/w, in theory.
Then we can think of more hardening on the user-space side on top of it. For example, running a sound (or UCM) daemon as a privileged user, and letting it alone manage the sensitive sound setup while a normal user is allowed to adjust only the limited presets given by the profile.
Just my $0.02, currently floating on my fuzzy head in bed.
Takashi
On Tue, 2015-10-13 at 17:56 +0200, David Henningsson wrote:
On 2015-10-13 16:55, Pierre-Louis Bossart wrote:
On 10/13/15 2:07 AM, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls
Note: There is NOT consensus.
Sorry, this should be "control values" instead of "controls". I guess my typing speed was not fast enough to keep up :(
My understanding at the time was we could use the existing clamp API to clamp kcontrol values (in machine drivers) to reasonable/safe values.
I did hear many objections to setting some kcontrols as RO for certain users and RW for other users.
Liam
On Tue, 2015-10-13 at 09:07 +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls due
to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
I agree with you in principle that if it can break the hardware then either it shouldn't be exposed to user-side at all, or it should be checked by the kernel/driver to prevent bad settings.
However, what about this sort of scenario: some codec has a speaker volume range of 0..100, all of which are valid and safe. Manufacturer X makes a device with an inadequate speaker that can be damaged with volume settings above 80. How is that protected? There's nothing wrong with the codec driver. There's no software at all for a speaker - it's just a speaker. Where do we put a hard limit of 80 on a codec control for one specific device? If it was my codec driver I don't want to have to put a workaround for one specific device because manufacturer X chose the wrong type of speaker. Or do we not care about the "stupid manufacturer" cases and we're only interested in protecting the device the control directly applies to - in this example it's a codec control so it mustn't damage the codec but we don't care if poor hardware design means it could damage other hardware connected to the codec.
On Fri, 16 Oct 2015 17:35:30 +0200, Richard Fitzgerald wrote:
On Tue, 2015-10-13 at 09:07 +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls due
to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
I agree with you in principle that if it can break the hardware then either it shouldn't be exposed to user-side at all, or it should be checked by the kernel/driver to prevent bad settings.
However, what about this sort of scenario: some codec has a speaker volume range of 0..100, all of which are valid and safe. Manufacturer X makes a device with an inadequate speaker that can be damaged with volume settings above 80. How is that protected? There's nothing wrong with the codec driver. There's no software at all for a speaker - it's just a speaker. Where do we put a hard limit of 80 on a codec control for one specific device? If it was my codec driver I don't want to have to put a workaround for one specific device because manufacturer X chose the wrong type of speaker. Or do we not care about the "stupid manufacturer" cases and we're only interested in protecting the device the control directly applies to - in this example it's a codec control so it mustn't damage the codec but we don't care if poor hardware design means it could damage other hardware connected to the codec.
There is the snd_soc_limit_volume() function to override the volume range from a machine driver for exactly such a purpose. This was what was suggested in the meeting.
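As an illustration, a minimal sketch (not from the meeting: the control name, the limit of 80 and the init-callback wiring are assumptions, and the first argument of snd_soc_limit_volume() has been a codec pointer on older kernels and a card pointer on newer ones, so check your kernel's soc.h):

#include <sound/soc.h>

/*
 * Hypothetical machine-driver DAI link init callback: the codec
 * allows 0..100, but this board's speaker is only safe up to 80
 * (the scenario from the thread), so clamp the control here.
 */
static int example_board_dai_init(struct snd_soc_pcm_runtime *rtd)
{
	return snd_soc_limit_volume(rtd->card,
				    "Speaker Playback Volume", 80);
}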
Takashi
On Fri, 2015-10-16 at 18:00 +0200, Takashi Iwai wrote:
On Fri, 16 Oct 2015 17:35:30 +0200, Richard Fitzgerald wrote:
On Tue, 2015-10-13 at 09:07 +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls due
to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
I agree with you in principle that if it can break the hardware then either it shouldn't be exposed to user-side at all, or it should be checked by the kernel/driver to prevent bad settings.
However, what about this sort of scenario: some codec has a speaker volume range of 0..100, all of which are valid and safe. Manufacturer X makes a device with an inadequate speaker that can be damaged with volume settings above 80. How is that protected? There's nothing wrong with the codec driver. There's no software at all for a speaker - it's just a speaker. Where do we put a hard limit of 80 on a codec control for one specific device? If it was my codec driver I don't want to have to put a workaround for one specific device because manufacturer X chose the wrong type of speaker. Or do we not care about the "stupid manufacturer" cases and we're only interested in protecting the device the control directly applies to - in this example it's a codec control so it mustn't damage the codec but we don't care if poor hardware design means it could damage other hardware connected to the codec.
There is snd_soc_limit_volume() function to override the volume range from a machine driver for such a purpose. This was what was suggested in the meeting.
Takashi
OK, I didn't know that but I do now, so that wasn't a good example. But how about something more complex. Let's say it was a set of coefficient values for a filter. That's not a simple range check, it would need specialized code to understand whether the coefficients were safe.
Really my point was that if all hardware was completely isolated from other hardware you can error-check controls. But when you start hooking up bits of hardware to other bits of hardware, it becomes more complex defining what is safe, and who is responsible for checking that it is safe, and where the knowledge about how to check it's safe should live.
That said, I'm not a fan of the "unless we can fix everything we shouldn't fix anything" attitude. Fixing something is always better than fixing nothing. So the fact that combining real hardware can introduce new types of unsafe settings isn't an argument against error-checking control values.
On Fri, 16 Oct 2015 18:31:54 +0200, Richard Fitzgerald wrote:
On Fri, 2015-10-16 at 18:00 +0200, Takashi Iwai wrote:
On Fri, 16 Oct 2015 17:35:30 +0200, Richard Fitzgerald wrote:
On Tue, 2015-10-13 at 09:07 +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls due
to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
I agree with you in principle that if it can break the hardware then either it shouldn't be exposed to user-side at all, or it should be checked by the kernel/driver to prevent bad settings.
However, what about this sort of scenario: some codec has a speaker volume range of 0..100, all of which are valid and safe. Manufacturer X makes a device with an inadequate speaker that can be damaged with volume settings above 80. How is that protected? There's nothing wrong with the codec driver. There's no software at all for a speaker - it's just a speaker. Where do we put a hard limit of 80 on a codec control for one specific device? If it was my codec driver I don't want to have to put a workaround for one specific device because manufacturer X chose the wrong type of speaker. Or do we not care about the "stupid manufacturer" cases and we're only interested in protecting the device the control directly applies to - in this example it's a codec control so it mustn't damage the codec but we don't care if poor hardware design means it could damage other hardware connected to the codec.
There is snd_soc_limit_volume() function to override the volume range from a machine driver for such a purpose. This was what was suggested in the meeting.
Takashi
OK, I didn't know that but I do now, so that wasn't a good example. But how about something more complex. Let's say it was a set of coefficient values for a filter. That's not a simple range check, it would need specialized code to understand whether the coefficients were safe.
Really my point was that if all hardware was completely isolated from other hardware you can error-check controls. But when you start hooking up bits of hardware to other bits of hardware, it becomes more complex defining what is safe, and who is responsible for checking that it is safe, and where the knowledge about how to check it's safe should live.
That said, I'm not a fan of the "unless we can fix everything we shouldn't fix anything" attitude. Fixing something is always better than fixing nothing. So the fact that combining real hardware can introduce new types of unsafe settings isn't an argument against error-checking control values.
Sure, systems will get more complex in the future and more dynamic via f/w. It's impossible to cover everything statically in each driver. As I mentioned in another mail, we should think of hardening at multiple levels.
Takashi
On 10/16/15 11:00 AM, Takashi Iwai wrote:
On Fri, 16 Oct 2015 17:35:30 +0200, Richard Fitzgerald wrote:
On Tue, 2015-10-13 at 09:07 +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls due
to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
I agree with you in principle that if it can break the hardware then either it shouldn't be exposed to user-side at all, or it should be checked by the kernel/driver to prevent bad settings.
However, what about this sort of scenario: some codec has a speaker volume range of 0..100, all of which are valid and safe. Manufacturer X makes a device with an inadequate speaker that can be damaged with volume settings above 80. How is that protected? There's nothing wrong with the codec driver. There's no software at all for a speaker - it's just a speaker. Where do we put a hard limit of 80 on a codec control for one specific device? If it was my codec driver I don't want to have to put a workaround for one specific device because manufacturer X chose the wrong type of speaker. Or do we not care about the "stupid manufacturer" cases and we're only interested in protecting the device the control directly applies to - in this example it's a codec control so it mustn't damage the codec but we don't care if poor hardware design means it could damage other hardware connected to the codec.
There is snd_soc_limit_volume() function to override the volume range from a machine driver for such a purpose. This was what was suggested in the meeting.
To say that a configuration is 'safe' requires a breadth of information from thermal, acoustic and mechanical design that is just not available to kernel contributors who work in parallel on different building blocks and different configurations. Adding a safeguard in the machine driver is not practical: it's not uncommon for manufacturers to swap out speakers to save a couple of cents on a specific production batch, and a value set in stone in a driver would not work for all those different batches. So yes, everyone should try and make sure that there are no 'dangerous' controls at their individual level, but there is no way to protect hardware integrity in all cases if users punch in values in alsamixer.
On Sat, 17 Oct 2015 17:54:09 +0200, Pierre-Louis Bossart wrote:
On 10/16/15 11:00 AM, Takashi Iwai wrote:
On Fri, 16 Oct 2015 17:35:30 +0200, Richard Fitzgerald wrote:
On Tue, 2015-10-13 at 09:07 +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
I've written up the minutes here below
Thanks!
Splitting out controls: Takashi
- Restricted access. Consensus to restrict access to some controls due
to possibility of breaking HW at kernel level. i.e. prevent feeding digital Mic into HP amp to prevent speaker over heating.
I'd like that. rt5631. Avoiding at the moment by removing the controls.
IIRC, the debate was over "do not expose dangerous controls to userspace at all" vs "expose dangerous controls controls only to root".
I'm strongly voting for "do not expose to userspace at all".
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
If BIOS is buggy and cannot protect the machine from being physically damaged, then we need to work around that in the kernel. Otherwise there is a bug in the kernel.
And if the kernel is buggy, we should fix the kernel. Period. :-)
I agree with you in principle that if it can break the hardware then either it shouldn't be exposed to user-side at all, or it should be checked by the kernel/driver to prevent bad settings.
However, what about this sort of scenario: some codec has a speaker volume range of 0..100, all of which are valid and safe. Manufacturer X makes a device with an inadequate speaker that can be damaged with volume settings above 80. How is that protected? There's nothing wrong with the codec driver. There's no software at all for a speaker - it's just a speaker. Where do we put a hard limit of 80 on a codec control for one specific device? If it was my codec driver I don't want to have to put a workaround for one specific device because manufacturer X chose the wrong type of speaker. Or do we not care about the "stupid manufacturer" cases and we're only interested in protecting the device the control directly applies to - in this example it's a codec control so it mustn't damage the codec but we don't care if poor hardware design means it could damage other hardware connected to the codec.
There is snd_soc_limit_volume() function to override the volume range from a machine driver for such a purpose. This was what was suggested in the meeting.
To say that a configuration is 'safe' requires a breadth of information from thermal, acoustic and mechanical design that is just not available to kernel contributors who work in parallel on different building blocks and different configurations. Adding a safeguard in the machine driver is not practical: it's not uncommon for manufacturers to swap out speakers to save a couple of cents on a specific production batch and a value set in stone in a driver would not work for all those different batches. So yes everyone should try and make sure that there are no 'dangerous' controls at their individual level but there is no way to protect hardware integrity in all cases if users punch-in values in alsamixer.
The question is *which* user. If it's a system user for a daemon or a management tool, it's fine. But if it's a normal user, it's bad. My original proposal (the separation of access levels) came from this POV.
I won't say that we can always save the world. But there is certainly room for improvement, for a little bit more safety than we have now. At least, if the hardware manufacturer or system integrator already knows the dangerous part, we should provide some easy way to paper over it.
Takashi
On Sat, 17 Oct 2015, Takashi Iwai wrote:
I won't say that we can always save the world. But there is certainly a room for improvement for a little bit more safety than now. At least, if hardware manufacturer or system integrator already knows the dangerous part, we should provide some easy way to paper over it.
Is there any form of statistics on more precisely what type of failures in this area occur in the real world? Speakers have been mentioned (and I think were the original issue that started the discussion) and are a clear case. One could imagine output stages being overdriven (even with no load) in some way, or mismatched signal levels which could cause an opamp to fail for instance. But have any such cases been reported? It would seem very hard to diagnose for one thing, unless a hardware manufacturer explicitly noted it in some documentation (which I suspect would be very unlikely for say a motherboard or sound card manufacturer).
My point is really that it might be a good idea to get some idea of the potential failure modes before introducing a mechanism to protect us from them.
/Ricard
On Sun, Oct 18, 2015 at 08:41:42AM +0200, Ricard Wanderlof wrote:
Is there any form of statistics on more precisely what type of failures in this area occur in the real world? Speakers have been mentioned (and I think were the original issue that started the discussion) and are a clear case. One could imagine output stages being overdriven (even with no load) in some way, or mismatched signal levels which could cause an opamp to fail for instance. But have any such cases been reported? It would seem very hard to diagnose for one thing, unless a hardware manufacturer explicitly noted it in some documentation (which I suspect would be very unlikely for say a motherboard or sound card manufacturer).
Speaker related issues are overwhelmingly the most common thing here, either straight burnout due to overdriving or mechanical damage caused by vibrations.
17.10.2015 20:54, Pierre-Louis Bossart wrote:
To say that a configuration is 'safe' requires a breadth of information from thermal, acoustic and mechanical design that is just not available to kernel contributors who work in parallel on different building blocks and different configurations. Adding a safeguard in the machine driver is not practical: it's not uncommon for manufacturers to swap out speakers to save a couple of cents on a specific production batch and a value set in stone in a driver would not work for all those different batches.
So, here is a strawman counterproposal.
Design some safe and hardware-agnostic form of bytecode. This bytecode should be able to access mixer controls by name or by index, get proposed values, and decide, using arithmetical and logical operations, whether the proposal is safe. Load this bytecode as firmware, just like we load various overrides in snd-hda-intel.
I.e. the mechanism remains in the kernel, but the policy becomes easily swappable by the board manufacturer.
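A minimal sketch of what that could look like (entirely hypothetical; no such mechanism exists in ALSA today, and the opcode set, blob format and function names are invented purely for illustration):

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

enum policy_op {
	OP_PUSH_CONST,   /* push the next word from the program        */
	OP_PUSH_CTL,     /* push proposed value of control <next word> */
	OP_LE,           /* pop b, pop a, push (a <= b)                */
	OP_AND,          /* pop b, pop a, push (a && b)                */
	OP_END           /* top of stack is the verdict: nonzero = ok  */
};

/* Fixed, auditable evaluator in the kernel; the board vendor would
 * ship only the program blob.  Returns true if the proposed control
 * values are allowed. */
static bool policy_check(const int32_t *prog, size_t len,
			 const int32_t *proposed, size_t nctls)
{
	int32_t stack[16];
	size_t sp = 0, pc = 0;

	while (pc < len) {
		switch (prog[pc++]) {
		case OP_PUSH_CONST:
			if (sp >= 16 || pc >= len) return false;
			stack[sp++] = prog[pc++];
			break;
		case OP_PUSH_CTL:
			if (sp >= 16 || pc >= len) return false;
			if ((size_t)prog[pc] >= nctls) return false;
			stack[sp++] = proposed[prog[pc++]];
			break;
		case OP_LE:
			if (sp < 2) return false;
			stack[sp - 2] = stack[sp - 2] <= stack[sp - 1];
			sp--;
			break;
		case OP_AND:
			if (sp < 2) return false;
			stack[sp - 2] = stack[sp - 2] && stack[sp - 1];
			sp--;
			break;
		case OP_END:
			return sp == 1 && stack[0] != 0;
		default:
			return false;   /* unknown opcode: refuse */
		}
	}
	return false;
}

/* Example blob: allow only if control 0 (say, "Speaker Volume") <= 80,
 * i.e. policy_check(example_prog, 6, proposed, nctls). */
static const int32_t example_prog[] = {
	OP_PUSH_CTL, 0, OP_PUSH_CONST, 80, OP_LE, OP_END
};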
On Fri, Oct 16, 2015 at 04:35:30PM +0100, Richard Fitzgerald wrote:
However, what about this sort of scenario: some codec has a speaker volume range of 0..100, all of which are valid and safe. Manufacturer X makes a device with an inadequate speaker that can be damaged with volume settings above 80. How is that protected? There's nothing wrong with the codec driver. There's no software at all for a speaker - it's just a speaker. Where do we put a hard limit of 80 on a codec control for one specific device? If it was my codec driver I don't want to have to put a workaround for one specific device because manufacturer X chose the wrong type of speaker. Or do we not care about the "stupid manufacturer" cases and we're only interested in protecting the device the control directly applies to - in this example it's a codec control so it mustn't damage the codec but we don't care if poor hardware design means it could damage other hardware connected to the codec.
This is what machine drivers are for - providing system specific integration.
On Tue, Oct 13, 2015 at 09:07:20AM +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
James, please don't drop CCs (this is the convention for kernel lists).
I personally believe that if the physical hardware can be set to state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in BIOS.
This is just not possible for most systems; there is no BIOS, only a bootloader which hands off control to the kernel and stops running at that point.
On 2015-10-30 03:36, Mark Brown wrote:
On Tue, Oct 13, 2015 at 09:07:20AM +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
James, please don't drop CCs (this is the convention for kernel lists).
I personally believe that if the physical hardware can be set to a state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in the BIOS.
This is just not possible for most systems; there is no BIOS, only a bootloader which hands off control to the kernel and stops running at that point.
The BIOS can poke the hardware, set registers in such ways that volumes are limited. Registers that the kernel never touches. From my knowledge, this is more common than not on modern laptops with HDA codecs. A bootloader could potentially do the same - but in many embedded setups I suspect it makes just as much sense to do that during driver initialization instead.
Anyhow, we both agree on the fact that sometimes there is no hardware and no BIOS that protect us against hardware failure.
We currently have some kind of mechanism that protects the end user from unintentionally destroying the hardware. What we're arguing about is whether that mechanism should be in userspace (e.g. by having the GUI talk to UCM instead of amixer directly), or in the kernel.
I'm advocating having it as close to the hardware as possible - i.e. in the kernel, because that protects additional classes of users from unintentionally destroying the hardware, as well as making it harder for malicious apps that intentionally want to do it.
There is no calculation that userspace can do, that the kernel can't do almost as easily, so I don't buy the argument that doing things in the kernel would be "unrealistically difficult". However, it seems that we have done (IMO) such a gross mis-design that changing this isn't possible overnight. But can we then try to move in the right direction, instead of moving in the wrong one?
On Fri, Oct 30, 2015 at 09:36:39AM +0100, David Henningsson wrote:
On 2015-10-30 03:36, Mark Brown wrote:
On Tue, Oct 13, 2015 at 09:07:20AM +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
James, please don't drop CCs (this is the convention for kernel lists).
I personally believe that if the physical hardware can be set to a state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in the BIOS.
This is just not possible for most systems; there is no BIOS, only a bootloader which hands off control to the kernel and stops running at that point.
The BIOS can poke the hardware, set registers in such ways that volumes are limited. Registers that the kernel never touches.
I'm not sure how that would work. The BIOS developers and the kernel developers would have to agree on what registers are to be touched by the BIOS only. In my case, with the rt5631, the registers could not be easily isolated in that fashion. And if the kernel hit the reset bit in one of the registers, whatever the BIOS had set would be lost.
Oh, and power management. The codec can be powered down by suspend, with the BIOS uninvolved in resume.
From my knowledge, this is more common than not on modern laptops with HDA codecs. A bootloader could potentially do the same - but in many embedded setups I suspect it makes just as much sense to do that during driver initialization instead.
Anyhow, we both agree on the fact that sometimes there is no hardware and no BIOS that protect us against hardware failure.
We currently have some kind of mechanism that protects the end user from unintentionally destroying the hardware. What we're arguing about is whether that mechanism should be in userspace (e.g. by having the GUI talk to UCM instead of amixer directly), or in the kernel.
I'm advocating having it as close to the hardware as possible - i.e. in the kernel, because that protects additional classes of users from unintentionally destroying the hardware, as well as making it harder for malicious apps that intentionally want to do it.
There is no calculation that userspace can do, that the kernel can't do almost as easily, so I don't buy the argument that doing things in the kernel would be "unrealistically difficult". However, it seems that we have done (IMO) such a gross mis-design that changing this isn't possible overnight. But can we then try to move in the right direction, instead of moving in the wrong one?
On 2015-10-30 09:53, James Cameron wrote:
On Fri, Oct 30, 2015 at 09:36:39AM +0100, David Henningsson wrote:
On 2015-10-30 03:36, Mark Brown wrote:
On Tue, Oct 13, 2015 at 09:07:20AM +0200, David Henningsson wrote:
On 2015-10-12 22:59, James Cameron wrote:
James, please don't drop CCs (this is the convention for kernel lists).
I personally believe that if the physical hardware can be set to a state where it's bricked, the hardware itself is buggy.
If the hardware is buggy, this should be worked around in BIOS or whatever firmware is present on the machine. Otherwise there is a bug in the BIOS.
This is just not possible for most systems; there is no BIOS, only a bootloader which hands off control to the kernel and stops running at that point.
The BIOS can poke the hardware, set registers in such ways that volumes are limited. Registers that the kernel never touches.
I'm not sure how that would work. The BIOS developers and the kernel developers would have to agree on what registers are to be touched by the BIOS only. In my case, with the rt5631, the registers could not be easily isolated in that fashion. And if the kernel hit the reset bit in one of the registers, whatever the BIOS had set would be lost.
Oh, and power management. The codec can be powered down by suspend, with the BIOS uninvolved in resume.
The way it has come to work on HDA is that the codec vendor has a number of "secret" registers, which they tell only the BIOS developers about and which (to our frustration) they are quite reluctant to disclose to us.
And certainly the BIOS is involved in setting these registers on S3 resume, too.
One could imagine having ACPI calls for putting a codec in suspend/resume, but as I said, in many cases (such as yours) this just makes things more complicated. It's better dealt with in the kernel codec driver instead.
From my knowledge, this is more common than not on modern laptops with HDA codecs. A bootloader could potentially do the same - but in many embedded setups I suspect it makes just as much sense to do that during driver initialization instead.
On Fri, Oct 30, 2015 at 10:04:30AM +0100, David Henningsson wrote:
On 2015-10-30 09:53, James Cameron wrote:
On Fri, Oct 30, 2015 at 09:36:39AM +0100, David Henningsson wrote:
The BIOS can poke the hardware, set registers in such ways that volumes are limited. Registers that the kernel never touches.
I'm not sure how that would work. The BIOS developers and the kernel developers would have to agree on what registers are to be touched by the BIOS only. In my case, with the rt5631, the registers could not be easily isolated in that fashion. And if the kernel hit the reset bit in one of the registers, whatever the BIOS had set would be lost.
Oh, and power management. The codec can be powered down by suspend, with the BIOS uninvolved in resume.
The way it has come to work on HDA is that the codec vendor has a number of "secret" registers, which they tell only the BIOS developers about and which (to our frustration) they are quite reluctant to disclose to us.
And certainly the BIOS is involved in setting these registers on S3 resume, too.
This is all extremely specific to how HDA works on x86 systems with UEFI and ACPI; it's not at all applicable generally.
One could imagine having ACPI calls for putting a codec in suspend/resume, but as I said, in many cases (such as yours) this just makes things more complicated. It's better dealt with in the kernel codec driver instead.
This then gets us back to the problem of having to have very system-specific tuning in drivers somehow, which is challenging to deploy.
Hi,
(If I were in UTC±02:00, I would have joined the meeting...)
There are some interesting issues in the minutes. May I request some more explanation about them?
BATCH flag for USB: Arun.
- Flag does not correspond to reality; let's deprecate it. No users.
- Dylan: need to know transfer size for CRAS (uses extra samples for buffering).
- BATCH flag means period-size transfers; applications that use the new granularity API can ignore the batch flag. Pierre: to implement.
The first item says 'BATCH flag should be deprecated', while the BLOCK_TRANSFER flag is not mentioned. Was there any discussion about the differences between these two flags?
I think the APIs assume that the number of PCM frames in one transfer is the same as the number of PCM frames in one period of the buffer. Must PCM device driver developers always satisfy this principle? Or do the APIs allow them to implement such differences and present the proper value to userspace?
ALSA Core Challenges:
- ALSA Core locking is complicated. Core code is quite difficult to understand.
- PCM linking makes things complex.
- Add documentation for locks.
- Controls can be hidden in UI tools through iface_cards.
- DPCM hidden PCMs should not be shown in usermode; hide them from usermode.
The third item mentions 'iface_cards', but there's no such structure in kernel or userspace. What is it, and what are the 'UI tools'? Does it mean producing some GUI widget?
Sorry for just asking my questions even though I wasn't a participant in the meeting...
Regards
Takashi Sakamoto
13.10.2015 19:09, Takashi Sakamoto wrote:
Hi,
(If I were in UTC±02:00, I would have joined the meeting...)
There are some interesting issues in the minutes. May I request some more explanation about them?
BATCH flag for USB: Arun.
- Flag does not correspond to reality; let's deprecate it. No users.
- Dylan: need to know transfer size for CRAS (uses extra samples for buffering).
- BATCH flag means period-size transfers; applications that use the new granularity API can ignore the batch flag. Pierre: to implement.
The first item says 'BATCH flag should be deprecated', while the BLOCK_TRANSFER flag is not mentioned. Was there any discussion about the differences between these two flags?
There is no conspiracy here, just inaccurately written minutes. The proposal to deprecate was about the BLOCK_TRANSFER flag (because currently it means absolutely nothing and has no users), although I support deprecation of the BATCH flag too. Please see this email that explains the intended meaning of the flags:
http://mailman.alsa-project.org/pipermail/alsa-devel/2015-June/093750.html
I think the APIs assume that the number of PCM frames in one transfer is the same as the number of PCM frames in one period of the buffer. Must PCM device driver developers always satisfy this principle? Or do the APIs allow them to implement such differences and present the proper value to userspace?
The intention is to provide driver developers the means to express the situation with the transfer size and pointer granularity in a useful manner, and two boolean flags just don't cut it. So some new API is needed.
Hi Alexander,
On Oct 13 2015 23:44, Alexander E. Patrakov wrote:
13.10.2015 19:09, Takashi Sakamoto wrote:
Hi,
(If I were in UTC±02:00, I would have joined the meeting...)
There are some interesting issues in the minutes. May I request some more explanation about them?
BATCH flag for USB: Arun.
- Flag does not correspond to reality; let's deprecate it. No users.
- Dylan: need to know transfer size for CRAS (uses extra samples for buffering).
- BATCH flag means period-size transfers; applications that use the new granularity API can ignore the batch flag. Pierre: to implement.
The first item says 'BATCH flag should be deprecated', while the BLOCK_TRANSFER flag is not mentioned. Was there any discussion about the differences between these two flags?
There is no conspiracy here, just inaccurately written minutes. The proposal to deprecate was about the BLOCK_TRANSFER flag (because currently it means absolutely nothing and has no users), although I support deprecation of the BATCH flag too.
In my understanding, these two flags relate to calculating the number of PCM frames transferred within a period. Still, mentioning only one of them seemed a bit strange to me, so I asked. I have no interest in the others.
Please see this email that explains the intended meaning of the flags:
http://mailman.alsa-project.org/pipermail/alsa-devel/2015-June/093750.html
Anyway, I think changing the interfaces with userspace without enough documentation is a bad approach.
I think the APIs assume that the number of PCM frames in one transfer is the same as the number of PCM frames in one period of the buffer. Must PCM device driver developers always satisfy this principle? Or do the APIs allow them to implement such differences and present the proper value to userspace?
The intention is to provide driver developers the means to express the situation with the transfer size and pointer granularity in a useful manner, and two boolean flags just don't cut it. So some new API is needed.
Ditto.
Regards
Takashi Sakamoto
On 10/13/15 9:09 AM, Takashi Sakamoto wrote:
Hi,
(If I were in UTC±02:00, I would have joined the meeting...)
There are some interesting issues in the minutes. May I request some more explanation about them?
BATCH flag for USB: Arun.
- Flag does not correspond to reality; let's deprecate it. No users.
- Dylan: need to know transfer size for CRAS (uses extra samples for buffering).
- BATCH flag means period-size transfers; applications that use the new granularity API can ignore the batch flag. Pierre: to implement.
The first item says 'BATCH flag should be deprecated', while the BLOCK_TRANSFER flag is not mentioned. Was there any discussion about the differences between these two flags?
I think the APIs assume that the number of PCM frames in one transfer is the same as the number of PCM frames in one period of the buffer. Must PCM device driver developers always satisfy this principle? Or do the APIs allow them to implement such differences and present the proper value to userspace?
The idea was to rely on a new hw_params field, tentatively called max_burst, to report how much data is pulled/pushed at once by the hardware. If the driver indicated that it handles a complete period, this would provide the same functionality as the BATCH flag, which could then be deprecated. I will try to post patches soon.
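For reference, this is roughly how userspace can already inspect the two existing flags through alsa-lib; the tentatively named max_burst field is not shown because it only exists as a proposal at this point:

#include <stdio.h>
#include <alsa/asoundlib.h>

/* Print the coarse transfer-granularity hints for an already-open PCM. */
static void show_transfer_hints(snd_pcm_t *pcm)
{
	snd_pcm_hw_params_t *params;

	snd_pcm_hw_params_alloca(&params);
	if (snd_pcm_hw_params_current(pcm, params) < 0)
		return;

	/* BATCH: per the minutes, granularity of roughly one period. */
	printf("batch: %d\n", snd_pcm_hw_params_is_batch(params));
	/* BLOCK_TRANSFER: hardware moves data in blocks, not per frame. */
	printf("block transfer: %d\n",
	       snd_pcm_hw_params_is_block_transfer(params));
}

A max_burst value in frames would let an application like CRAS size its extra buffering precisely instead of guessing from these two booleans.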
ALSA Core Challenges:
- ALSA Core locking is complicated. Core code is quite difficult to understand.
- PCM linking makes things complex.
- Add documentation for locks.
- Controls can be hidden in UI tools through iface_cards.
- DPCM hidden PCMs should not be shown in usermode; hide them from usermode.
The third item mentions 'iface_cards', but there's no such structure in kernel or userspace. What is it, and what are the 'UI tools'? Does it mean producing some GUI widget?
The idea was that if there is a new interface used instead of 'mixer', then the new controls would not be made visible to UI tools that look for the mixer interface. But of course a new tool looking for the new iface would be able to display everything, so it's only a work-around.
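As a rough illustration of that work-around, a kernel driver can put such a control on the CARD interface rather than the MIXER interface. Everything below except SNDRV_CTL_ELEM_IFACE_CARD, snd_ctl_boolean_mono_info() and the snd_kcontrol_new layout is a made-up placeholder:

#include <sound/core.h>
#include <sound/control.h>

/* Placeholder backing state for the example control. */
static int hidden_value;

static int hidden_ctl_get(struct snd_kcontrol *kc,
			  struct snd_ctl_elem_value *v)
{
	v->value.integer.value[0] = hidden_value;
	return 0;
}

static int hidden_ctl_put(struct snd_kcontrol *kc,
			  struct snd_ctl_elem_value *v)
{
	int changed = hidden_value != !!v->value.integer.value[0];

	hidden_value = !!v->value.integer.value[0];
	return changed;
}

/*
 * Because of .iface, tools that only enumerate MIXER controls will not
 * show this one, while a tool that knows about the CARD iface still can,
 * which is exactly the work-around described above.
 */
static const struct snd_kcontrol_new hidden_ctl = {
	.iface	= SNDRV_CTL_ELEM_IFACE_CARD,
	.name	= "Internal DSP Setting",
	.info	= snd_ctl_boolean_mono_info,
	.get	= hidden_ctl_get,
	.put	= hidden_ctl_put,
};

/* Registered as usual: snd_ctl_add(card, snd_ctl_new1(&hidden_ctl, chip)); */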
Sorry for just asking my questions even though I wasn't a participant in the meeting...
We certainly hope you can join next time :-)
On Tue, 2015-10-13 at 23:09 +0900, Takashi Sakamoto wrote:
Hi,
(If I were in UTC±02:00, I would have joined the meeting...)
Fwiw, maybe we should do an additional hangout option next time for people who can't travel. This depends on demand, though, and on whether the venue will give us decent internet access and whether we can get a mic set up for presenters. Let's look into it for next time though...
Liam
On Mon, Oct 12, 2015 at 02:49:46PM +0100, Liam Girdwood wrote:
HDA/gfx: relevant people not in room.
- Pierre: Hotplug DP over USB C required. Needs input from GFX folks.
It's not just display people either; there's a bunch of core infrastructure needed for handling USB-C before we get onto any particular functions. There are some people looking at it, but it's not clear that they're working together or heading towards mainline yet.
participants (12)
- Alexander E. Patrakov
- David Henningsson
- James Cameron
- Jaroslav Kysela
- Keyon
- Liam Girdwood
- Mark Brown
- Pierre-Louis Bossart
- Ricard Wanderlof
- Richard Fitzgerald
- Takashi Iwai
- Takashi Sakamoto