At Sun, 17 Oct 2010 23:11:26 +0100, Mark Brown wrote:
On Sun, Oct 17, 2010 at 11:36:13PM +0200, Takashi Iwai wrote:
Mark Brown wrote:
OK, but what makes it different from keeping the stuff in the sound tree rather than the staging tree? Or are you suggesting removing the driver from the staging tree as well?
I do have some qualms about staging, but given Greg's active policing of it and his work to push things into the actual trees, the warnings about its quality and general mainlineness are much more meaningful than those for experimental. There's also the tainting of kernels, which is really useful.
But which end users would care about that? It's distributors and developers who care.
And is it the code quality that is in question, which is supposed to be fixable there? Your argument against this driver doesn't sound like something that can be fixed.
So, what I'm trying to say is:
- we can accept the present solution at a certain level, rather than just refusing it
I think this is the wrong approach. I think this says to embedded CPU vendors that they can go off and reinvent the wheel with their own embedded audio stacks.
If this brings real benefit to the _user_, why not?
We've got multiple classes of users here - system integrators are involved too - and there's also the issue of how we maintain stuff going forward if users get shiny new machines and we have to work out what the patched audio drivers that come with them actually mean.
So then why do you think keeping that stuff in the staging tree is OK while the sound tree is not? It's a question of to be or not to be. End users never hesitate to use staging if it works. They don't notice the difference; they don't notice the kernel taint either.
A staging driver is also upstream; distributions almost always enable it, no matter what the quality of each driver is. For developers, the distinction is clear, of course. But for everyone else, it's not.
Thus, to my eyes, there is no difference between keeping it in staging and keeping it in another tree. It's nothing but a question of where to cook it.
So, if you really don't want this style of implementation, you'll have to take action to get the driver removed from the staging tree. Otherwise others will submit similar drivers, and they'll land there, too. Meanwhile, we can keep the stuff in the sound tree or another git tree on a separate branch until things get sorted out.
If all the CPU vendors already in mainline were to have gone down this route we'd have sixteen different ways to add support for a new board right now, with lots of redundancy in the CODEC drivers.
An OS exists to support the hardware. Yes, we now have a great framework, and h/w vendors should support it. But why must we restrict ourselves to it and not allow ourselves to use the full hardware features at all? Hardware encoding/decoding is a nice and long-wanted feature, indeed.
I'm sorry, I don't really understand what you're saying here. The encoder offload stuff is all totally orthogonal to the issue of using the ASoC APIs. On the CPU side the ASoC API is mostly a thin wrapper around the standard ALSA API, so anything that works in ALSA should slot fairly readily into ASoC. What I'd expect to see happen is that the CODEC and board stuff would get pulled out and everything else would stay pretty much as-is, including the DSP.
CPUs like OMAP (which are obviously already in the ASoC framework) also support this sort of stuff; the issues with mainline for them have been around the API to the DSP, though it looks like the TI stuff in staging is getting sorted out now.
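A minimal sketch of what that split tends to look like in ASoC may help here: the machine ("board") driver is little more than a table tying a CPU DAI, a platform (PCM/DSP) driver and a CODEC driver together, while the CODEC driver lives in its own file and is reusable across boards. Everything below is illustrative, not the driver under discussion: the device and DAI names (mysoc-*, wm8xxx-*) are placeholders, and the dai_link field names follow the multi-component ASoC API, which may differ between kernel versions.

/*
 * Hypothetical ASoC machine driver sketch: one DAI link between an SoC
 * I2S/SSP port and an external CODEC.  All names are placeholders.
 */
#include <linux/module.h>
#include <sound/soc.h>

static struct snd_soc_dai_link mymach_dai_link = {
	.name		= "HiFi",
	.stream_name	= "HiFi Playback",
	.cpu_dai_name	= "mysoc-i2s.0",	 /* SoC DAI (I2S/SSP port) */
	.codec_dai_name	= "wm8xxx-hifi",	 /* DAI exported by the CODEC driver */
	.codec_name	= "wm8xxx-codec.0-001a", /* CODEC on I2C bus 0, addr 0x1a */
	.platform_name	= "mysoc-pcm-audio",	 /* DMA/PCM (or DSP) platform driver */
};

static struct snd_soc_card mymach_card = {
	.name		= "mymach-audio",
	.owner		= THIS_MODULE,
	.dai_link	= &mymach_dai_link,
	.num_links	= 1,
};

static int __init mymach_init(void)
{
	/* Ties the CPU DAI, platform (PCM/DSP) and CODEC drivers together. */
	return snd_soc_register_card(&mymach_card);
}

static void __exit mymach_exit(void)
{
	snd_soc_unregister_card(&mymach_card);
}

module_init(mymach_init);
module_exit(mymach_exit);
MODULE_LICENSE("GPL");

The point being made above is that nothing on the DSP or PCM side has to change to fit this shape; only the CODEC and board glue would move into the common framework.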
Well, what I don't understand in your argument is why we must stop it from being deployed. Because it doesn't match the current design of the framework? If so, a straight question must come next: can't the framework itself be extended to suit such hardware?
Please bear in mind that we've already seen similar stacks from other vendors (Marvell and TI being the main ones I've been aware of) getting replaced as part of mainlining, and a couple of others, which I'm aware of under NDA, doing the same thing. If you're saying you'll accept this approach and bypass the existing embedded audio stack then the pressure on vendors to do the right thing and move over to the Linux embedded audio stack is greatly reduced.
Well, think about it from the user's POV. "Why can't I use the h/w audio decoding feature on Linux?" "Because it doesn't match the philosophy of the existing Linux audio framework."
Like I say, I'm having real trouble connecting the above with what I wrote.
We already have an existing framework for embedded audio devices. We may want to extend or improve it, but I don't see a pressing need to develop a new one. Whenever we end up with two different ways of doing the same thing it's not great; it just makes for confusion and redundancy.
So, are you saying that we can extend ASoC to provide support for this kind of hardware feature? If yes, then we can work on that first.
I'm saying I see nothing about this hardware which should prevent it working with ASoC right now. As I have said, in terms of the overall structure of the device it's not really doing anything that hasn't been done elsewhere. There's stuff we'd have to bodge in the short term, but only fairly small bits that can be handled fairly straightforwardly.
OK, then we should fix this first.
thanks,
Takashi