On Thu, Dec 22, 2011 at 03:19:40PM +0100, Takashi Iwai wrote:
Mark Brown wrote:
Once we've got a userspace interface we're pretty much stuck with it...
Yes, but this is no new interface. It uses the existing control API.
In terms of system calls it's not new, but once you start saying "jacks will appear as controls called X with type Y" or whatever, that's also an interface built on top of the raw interface which applications can start relying on. I'd call the entire ALSA control naming standard an interface, for example.
Supporting multiple objects on a single jack is a very basic requirement in order to support headsets (which exist on PCs as well, the MacBooks for example) - we can usually distinguish between at least headset and headphone.
Interesting. This can be a single jack connected to multiple pins, so the codec sees two individual detection points. So, if we want to expose pins, it'd become different from the jack representation. Hmm...
At the hardware level this is often a single detection point with multiple values - a common technique for distinguishing a headphone is that it looks like a microphone that has had its button shorted ever since it was plugged in. With all the different techniques for this stuff you end up with an N:M mapping between jacks and detection methods, and the same again for the things reported.