[alsa-devel] [PATCH] sst: Intel SST audio driver
Takashi Iwai
tiwai at suse.de
Sun Oct 17 13:14:56 CEST 2010
At Sun, 17 Oct 2010 11:36:27 +0100,
Mark Brown wrote:
>
> On Sun, Oct 17, 2010 at 11:02:45AM +0200, Takashi Iwai wrote:
>
> > SST driver has been once (sort of) posted and reviewed, but stalled
> > since then. But in general I'm not against merging to the sound main
> > tree at all. The only reason that it didn't occur was that the
> > activity just stopped somehow.
>
> I'm really concerned about this idea. Allowing embedded vendors to push
> drivers into mainline which don't work with the standard frameworks for
> what they're doing sets a bad precedent which makes it much harder to
> push back on other vendors who also want to push their own stacks.
We've had (and still have) many pre-ASoC drivers, but their
_maintenance_ cost was relatively small from the global
sound-tree POV.  As long as a driver is self-contained, maintaining
it is no big hassle.
Of course, further _development_ is a totally different question.
If development diverges across too many different h/w components, a
common framework certainly becomes necessary.  This is how ASoC came
to be introduced and developed by Liam and you.
So, what I'm trying to say is:
- we can accept the present solution at a certain level, rather than
just refusing it
- it can be taken into the main sound tree as an EXPERIMENTAL one,
instead of keeping it in the staging tree forever
This doesn't mean that the current driver will necessarily be regarded
as the standard.  We may well end up developing a new, better
standard framework for similar devices; that doesn't conflict with
merging the driver now.
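For example, taking it in as EXPERIMENTAL could be little more than a
Kconfig guard.  A minimal sketch, assuming a symbol name SND_INTEL_SST
(the real symbol may differ):

config SND_INTEL_SST
	tristate "Intel SST (Moorestown) audio driver (EXPERIMENTAL)"
	depends on EXPERIMENTAL
	help
	  Experimental driver for the Intel SST audio engine.  The
	  interfaces may still change until a common framework for
	  such devices settles down.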
Anyway, I'll need to review the patches again from the current
standpoint, together with Mark's reviews, before any further decision.
And I'm open to suggestions; take my comments above just as my
current "feelings" :)
thanks,
Takashi
> This sort of thing is very common in the embedded space for audio, to a
> large extent because other OSs don't have any sort of standard framework
> for representing the hardware. The experience is that it's always
> pretty painful if there's any active work with the device - hardware
> changes that should be straightforward like substituting a new CODEC or
> even changing the physical configuration of the outputs become much more
> involved. You can see in the current code that the bulk of the audio
> configuration is register write sequences for various operations; at
> best you end up needing to replicate those into each vendor's stack for
> each CODEC that gets deployed, and at worst you end up replicating
> everything per-board rather than per-CPU. This isn't great for anyone;
> it's a lot more work and a lot less clear.
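FWIW, to make the contrast concrete: the sketch below is invented for
illustration only; the register offsets, values and helper names are
placeholders, not taken from the actual SST code.

#include <sound/soc.h>

struct sst_ctx;				/* opaque driver context (made up) */
void sst_codec_write(struct sst_ctx *ctx, unsigned int reg,
		     unsigned int val);

/* Vendor-stack style: a fixed register write sequence per operation,
 * duplicated for every CODEC that gets deployed and, at worst, for
 * every board: */
static void enable_headphone_path(struct sst_ctx *ctx)
{
	sst_codec_write(ctx, 0x1c, 0x0021);	/* power up the DAC */
	sst_codec_write(ctx, 0x20, 0x0800);	/* unmute the HP amplifier */
}

/* ASoC style: the CODEC driver implements these controls once, and
 * the per-board part shrinks to a routing description: */
static const struct snd_soc_dapm_route board_audio_map[] = {
	{ "Headphone Jack", NULL, "HPOUTL" },
	{ "Headphone Jack", NULL, "HPOUTR" },
};

The second form is what makes swapping a CODEC or rewiring the outputs
a board-file change instead of a stack-wide rewrite.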
>
> The Moorestown CPU appears to be very standard embedded audio hardware -
> a CPU communicating with an external CODEC over I2S/PCM data links - and
> so I can't see any reason why we should treat it differently to other
> such hardware.
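For reference, in ASoC terms such hardware is usually covered by a
small machine driver whose core is just a DAI link.  A minimal sketch
against the current API, with placeholder symbol names since the real
SST and CODEC DAI objects may be named differently:

#include <sound/soc.h>

extern struct snd_soc_dai mrst_ssp_dai;		/* I2S/PCM port on the CPU */
extern struct snd_soc_dai example_codec_dai;	/* the external CODEC's DAI */

static struct snd_soc_dai_link mrst_dai_link = {
	.name		= "Moorestown Audio",
	.stream_name	= "HiFi",
	.cpu_dai	= &mrst_ssp_dai,
	.codec_dai	= &example_codec_dai,
};

static struct snd_soc_card mrst_card = {
	.name		= "Moorestown",
	.dai_link	= &mrst_dai_link,
	.num_links	= 1,
};

Everything CODEC-specific then lives in the reusable CODEC driver, and
this board description is all that changes per product.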