On Tue, Apr 27, 2010 at 4:09 AM, Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
On Tue, 2010-04-27 at 10:54 +0100, Mark Brown wrote:
I'd just like to add that I *really* want to see you guys come to some sort of firm, documented conclusion about how to handle situations like this. Some variant of this comes up every single time anyone tries to do anything audio-related on a system using the device tree, and it's getting really repetitive. What would be most useful for audio at this point is a decision on how to represent this stuff that we can point people at, so that work on systems using the device tree can proceed without the device tree layout discussions that frequently get involved.
Yes, you're right. I completely agree.
[...]
Keep in mind that it's perfectly kosher to create nodes for "virtual" devices. I.e., we could imagine a node for the "sound subsystem" that doesn't actually correspond to any physical device but contains the necessary properties to bind everything together. You could even have multiple of these if you have separate sets of sound HW that aren't directly dependent.
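To make the idea concrete, here's a rough sketch of what such a virtual node might look like in a .dts. Everything here is invented for illustration (the "acme,..." compatible strings, the audio-controller/audio-codec property names, the addresses); settling on real property names and compatibles is precisely the binding work this thread says still needs to happen.

    /* Purely illustrative -- none of these names are an agreed binding. */

    i2s0: i2s@44000000 {
            compatible = "acme,soc-i2s";          /* hypothetical SoC I2S controller */
            reg = <0x44000000 0x1000>;
    };

    i2c@43000000 {
            compatible = "acme,soc-i2c";
            reg = <0x43000000 0x1000>;
            #address-cells = <1>;
            #size-cells = <0>;

            codec0: codec@1a {
                    compatible = "acme,example-codec";   /* hypothetical codec on I2C */
                    reg = <0x1a>;
            };
    };

    /* The "virtual" node: it has no registers of its own, it just ties
     * the pieces of the sound subsystem together via phandles. */
    sound {
            compatible = "acme,example-board-audio";
            audio-controller = <&i2s0>;
            audio-codec = <&codec0>;
    };

The idea would be that a board-level "fabric" driver matches on the virtual node's compatible string and follows the phandles to find the controller and codec, rather than any single physical device node trying to describe the whole subsystem.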
I don't have the bandwidth to contribute much to this discussion right now, at least not to lead it, so I'm happy to let others do so, but I can provide feedback from my own experience as proposals are made.
Unfortunately, I'm in the same boat. :-( However, I'll be at UDS in 2 weeks' time, and I know audio is a big concern for the Ubuntu folks. A bunch of the ARM vendors will be there too. I'll schedule a session to talk about audio bindings, and hopefully that way we can make some headway on defining a binding that makes sense and is actually useful.
g.