[alsa-devel] [PATCH] sst: Intel SST audio driver
Takashi Iwai
tiwai at suse.de
Sun Oct 17 23:36:13 CEST 2010
At Sun, 17 Oct 2010 17:18:08 +0100,
Mark Brown wrote:
>
> On Sun, Oct 17, 2010 at 01:14:56PM +0200, Takashi Iwai wrote:
> > Mark Brown wrote:
>
> > > I'm really concerned about this idea. Allowing embedded vendors to push
> > > drivers into mainline which don't work with the standard frameworks for
> > > what they're doing sets a bad precedent which makes it much harder to
> > > push back on other vendors who also want to push their own stacks.
>
> > We've had (and still have) many pre-ASoC drivers, but the
> > _maintenance_ cost of them was relatively small from the global
> > sound-tree POV. As long as the driver is self-contained, there is no
> > big hassle for maintenance.
>
> You're only seeing what goes on upstream. Outside of upstream you'll
> usually find a lot of people doing a lot of redundant work to get their
> boards working, code that never finds its way upstream and quite often
> gets done fairly repetitively. It's no hassle upstream because nobody
> ever bothers upstreaming anything much after the original driver, but
> for system integrators trying to use the driver on their platform,
> which differs from the vendor reference, and for distributors trying
> to support multiple machines, it's a different kettle of fish.
OK, but what difference does it make whether the code is kept in the
sound tree rather than in the staging tree? Or are you suggesting that
the driver should be removed from the staging tree as well?
> > Of course, the further _development_ is a totally different question.
> > If the development diverges over too many different h/w components, a
> > certain framework would certainly be needed. This is how ASoC was
> > introduced and developed by Liam and you.
>
> This is pretty much a given in the embedded marketplace if a CPU is at
> all successful - apart from anything else the embedded audio state of
> the art is on much quicker cycles than most components so the components
> which are leading edge today may not make quite so much sense in a year.
> The hardware is all constructed with standard interfaces between the
> parts (though there are no standards at all for the control of the
> devices) so these sorts of changes are very straightforward for the
> hardware guys to do, meaning that the software stack needs to be
> similarly modularised.
>
> > So, what I'm trying to say is:
> > - we can accept the present solution at a certain level, rather than
> > just refusing it
>
> I think this is the wrong approach. I think this says to embedded CPU
> vendors that they can go off and reinvent the wheel with their own
> embedded audio stacks.
If this really brings a benefit to the _user_, why not?
> If all the CPU vendors already in mainline were
> to have gone down this route we'd have sixteen different ways to add
> support for a new board right now, with lots of redundancy in the CODEC
> side between them and each with a different set of features and
> limitations in how you can customise for your board (especially if you
> want to get code upstream). That wouldn't be terribly pleasant, it'd
> put Linux's audio support right back where all the other embedded OSs
> are.
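For concreteness, adding a new board under ASoC typically amounts to
something like the small machine driver below, which only wires an
existing CPU DAI driver to an existing CODEC driver. This is only a
rough sketch: every device and driver name in it is made up, and the
exact struct fields and registration calls differ between kernel
versions.

/* Hypothetical ASoC machine driver for an imaginary "example" board. */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/soc.h>

/* Glue an existing CPU DAI to an existing CODEC DAI; names are made up. */
static struct snd_soc_dai_link example_dai_link = {
        .name           = "Example HiFi",
        .stream_name    = "HiFi",
        .cpu_dai_name   = "example-cpu-dai",    /* reused CPU-side DAI driver */
        .platform_name  = "example-pcm-audio",  /* reused DMA/platform driver */
        .codec_name     = "example-codec.0-001a",
        .codec_dai_name = "example-codec-hifi", /* reused CODEC driver */
};

static struct snd_soc_card example_card = {
        .name           = "example-board",
        .dai_link       = &example_dai_link,
        .num_links      = 1,
};

static struct platform_device *example_snd_device;

static int __init example_board_init(void)
{
        int ret;

        /* The usual "soc-audio" device registration of this era. */
        example_snd_device = platform_device_alloc("soc-audio", -1);
        if (!example_snd_device)
                return -ENOMEM;

        platform_set_drvdata(example_snd_device, &example_card);
        ret = platform_device_add(example_snd_device);
        if (ret)
                platform_device_put(example_snd_device);
        return ret;
}
module_init(example_board_init);

static void __exit example_board_exit(void)
{
        platform_device_unregister(example_snd_device);
}
module_exit(example_board_exit);

MODULE_LICENSE("GPL");

The CODEC, DMA and CPU DAI drivers are all shared between boards; only
this small piece of per-board glue is new.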
An OS exists to support the hardware.
Yes, we now have a great framework, and h/w vendors should support it.
But why must we restrict ourselves to it and not allow the full
hardware features to be used at all? Hardware encoding/decoding is a
nice and long-wanted feature, indeed.
> Please bear in mind that we've already seen similar stacks from other
> vendors (Marvell and TI being the main ones I've been aware of) getting
> replaced as part of mainlining, and a couple of others I'm aware of,
> but am under NDA about, doing the same thing. If you're saying you'll
> accept this approach and bypass the existing embedded audio stack,
> then the pressure on vendors to do the right thing and move over to
> the Linux embedded audio stack is greatly reduced.
Well, think about it from the user's POV.
"Why can't I use the h/w audio decoding feature on linux?"
"Because it doesn't match with the philosophy of the existing linux
audio framework."
> > - it can be taken into the main sound tree as an EXPERIMENTAL one,
> > instead of keeping it in the staging tree forever
>
> I definitely think EXPERIMENTAL is too weak a pushback - I can't see
> that stopping anyone going out there and saying they have standard audio
> drivers in mainline, which is the check box people are looking for.
> Even distributors routinely enable EXPERIMENTAL.
Ditto for the staging tree :)
> Like I say, this is not just a concern for this driver; it's also a big
> worry with other CPU vendors and with future Intel chips.
>
> > These don't mean that the current driver will necessarily be regarded
> > as the standard. We might end up needing to develop a new, better
> > standard framework for similar devices. That doesn't conflict with
> > the merge action.
>
> We already have an existing framework for embedded audio devices. We
> may want to extend or improve it but I don't see a pressing need to
> develop a new one. Whenever we end up with two different ways of doing
> the same thing it's not great, it just makes for confusion and
> redundancy.
So, are you saying that we can extend ASoC to provide support for this
kind of hardware feature? If yes, then we can work on that first.
thanks,
Takashi