Re: [alsa-devel] [PATCH] sst: Intel SST audio driver
On Sun, Oct 03, 2010 at 11:22:44AM +0100, Alan Cox wrote:
Having things as a series does make things easier for review, reducing reviewer fatigue if nothing else. I've given this a bit of a once-over below but not a deep dive - splitting the series would really help with reviewability.
It really wouldn't help in this case for most people, I think. You end up with what there is in the original - a series of patches for old code, all quite large and just a big one chopped into chunks - followed by a series of updates that fix half the bugs you found reviewing the first big splat.
IIRC at least one version had a split where the ALSA integration stuff was separated out from the underlying DSP interface code - that was pretty helpful since it helps focus on the ALSA specifics.
I did think about it, but what history we have is fairly basic because it was basically written then fired in my direction to help sort out
Yeah, I wasn't thinking about history so much as about helping break down the areas covered for comprehensibility.
To be honest all this stuff looks sufficiently generic that it might be worth considering factoring out an abstraction which can be used by other offload engines - having a standard kernel API for them would be a real win. That's just a nice-to-have, though.
Agreed - although gstreamer is pretty good at that, it would save work if it can be partly generic. It's not trivial, however, because the offload
Plus the fact that not everyone is using gstreamer at the application level :/
interface with suitable firmware loaded does things other than PCM, and you have very firmware-specific interfaces for configuring those.
We ought to be able to come up with something for the core streaming stuff, though. Like I say, it's just a nice-to-have.
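To give a feel for the sort of thing meant - purely a hypothetical sketch, with every type and name below invented for illustration - such an abstraction might be little more than a small ops table that each offload engine fills in:

struct offload_stream;		/* per-stream state, engine-private */
struct offload_caps;		/* supported codecs/rates, engine-reported */
struct offload_params;		/* chosen format and buffering */
struct offload_tstamp;		/* rendering position, for A/V sync */

struct offload_stream_ops {
	/* what compressed formats the engine can decode/encode */
	int (*get_caps)(struct offload_stream *s, struct offload_caps *caps);
	/* fix format and buffering before starting */
	int (*set_params)(struct offload_stream *s,
			  const struct offload_params *params);
	/* feed compressed data to the engine */
	int (*write)(struct offload_stream *s, const void *buf, int count);
	/* start/stop/pause, mirroring the PCM trigger model */
	int (*trigger)(struct offload_stream *s, int cmd);
	/* where playback actually is, for A/V sync */
	int (*get_timestamp)(struct offload_stream *s,
			     struct offload_tstamp *tstamp);
};

The core streaming operations look much the same whatever the firmware does; the firmware-specific configuration would still need engine-specific controls on the side.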
So are you happy for it to go into staging? I'm hoping that way at least all the changes will be visible and trackable. I'll start by sending GregKH a follow-up patch which is a TODO list summary based on your feedback.
Happy is a strong term, but this looks like exactly the sort of stuff staging is there for, so yes, it should go in. Probably best to look for feedback from at least Takashi and/or Jaroslav as well, but obviously we can update the TODO later.
I do have some nervousness about the concept of staging for embedded stuff since I worry that inclusion in staging can send the wrong message to vendors but that's a completely separate issue to this driver.
IIRC at least one version had a split where the ALSA integration stuff was separated out from the underlying DSP interface code - that was pretty helpful since it helps focus on the ALSA specifics.
Yes, but that split is no longer there once the clean-up patches sit on top, because they weren't put together as separate bits. This is exactly the sort of reason I want to get it in staging.
Agreed - although gstreamer is pretty good at that, it would save work if it can be partly generic. It's not trivial, however, because the offload
Plus the fact that not everyone is using gstreamer at the application level :/
interface with suitable firmware loaded does things other than PCM, and you have very firmware-specific interfaces for configuring those.
We ought to be able to come up with something for the core streaming stuff, though. Like I say, it's just a nice-to-have.
I would have thought PCM at least was also going to have some kind of common structure.
I do have some nervousness about the concept of staging for embedded stuff since I worry that inclusion in staging can send the wrong message to vendors but that's a completely separate issue to this driver.
Noted. But I'll point you at the SEP driver, which did get bogged down for ages for reasons I can't really go into publicly, and which we therefore pulled out of staging.
Alan
On Mon, Oct 04, 2010 at 10:04:24AM +0100, Alan Cox wrote:
IIRC at least one version had a split where the ALSA integration stuff was separated out from the underlying DSP interface code - that was pretty helpful since it helps focus on the ALSA specifics.
Yes, but that split is no longer there once the clean-up patches sit on top, because they weren't put together as separate bits. This is exactly the sort of reason I want to get it in staging.
Sure, though that's what rebase is there for.
We ought to be able to come up with something for the core streaming stuff, though. Like I say, it's just a nice-to-have.
I would have thought PCM at least was also going to have some kind of common structure.
ALSA provides that already, pretty much? I guess PCM also falls out of compressed CODEC support as a noop CODEC.
I do have some nervousness about the concept of staging for embedded stuff since I worry that inclusion in staging can send the wrong message to vendors but that's a completely separate issue to this driver.
Noted. But I'll point you at the SEP driver, which did get bogged down for ages for reasons I can't really go into publicly, and which we therefore pulled out of staging.
It's not specifically about this driver, it's more of a general concern about the whole concept and how vendors less used to Linux could easily get the wrong end of the stick.
At Mon, 4 Oct 2010 10:04:24 +0100, Alan Cox wrote:
Sorry to be so late joining the review (I've been too busy and caught up in other troubles...).
The SST driver was once (sort of) posted and reviewed, but stalled since then. In general, though, I'm not against merging it into the main sound tree at all. The only reason that didn't happen was that the activity just stopped somehow.
Anyway, I'm going to review it in the next week or so. Then maybe we can move it to the sound/ directory (e.g. creating a sound/platform/x86 directory) before rc1, unless you'd rather keep the stuff in the staging tree.
thanks,
Takashi
On Sun, Oct 17, 2010 at 11:02:45AM +0200, Takashi Iwai wrote:
The SST driver was once (sort of) posted and reviewed, but stalled since then. In general, though, I'm not against merging it into the main sound tree at all. The only reason that didn't happen was that the activity just stopped somehow.
I'm really concerned about this idea. Allowing embedded vendors to push drivers into mainline which don't work with the standard frameworks for what they're doing sets a bad precedent which makes it much harder to push back on other vendors who also want to push their own stacks.
This sort of thing is very common in the embedded space for audio, to a large extent because other OSs don't have any sort of standard framework for representing the hardware. The experience is that it's always pretty painful if there's any active work with the device - hardware changes that should be straightforward, like substituting a new CODEC or even changing the physical configuration of the outputs, become much more involved. You can see in the current code that the bulk of the audio configuration is register write sequences for various operations; at best you end up needing to replicate those into each vendor's stack for each CODEC that gets deployed, and at worst you end up replicating everything per-board rather than per-CPU. This isn't great for anyone; it's a lot more work and a lot less clear.
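To make that concrete: in an ASoC machine driver the board-specific wiring is a small data table rather than code, something like the following sketch (the widget names here are invented, not taken from any real CODEC driver):

static const struct snd_soc_dapm_route board_audio_map[] = {
	/* { sink, control, source } - headphone jack wired to HP outputs */
	{ "Headphone Jack", NULL, "HPOUTL" },
	{ "Headphone Jack", NULL, "HPOUTR" },
	/* board mic is powered from the CODEC's bias supply */
	{ "Mic Jack", NULL, "Mic Bias" },
};

Swapping the CODEC or rewiring the outputs then means updating that table, not cloning register write sequences per machine; the register details stay in the CODEC driver where they belong.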
The Moorestown CPU appears to be very standard embedded audio hardware - a CPU communicating with an external CODEC over I2S/PCM data links - and so I can't see any reason why we should treat it differently to other such hardware.
At Sun, 17 Oct 2010 11:36:27 +0100, Mark Brown wrote:
On Sun, Oct 17, 2010 at 11:02:45AM +0200, Takashi Iwai wrote:
The SST driver was once (sort of) posted and reviewed, but stalled since then. In general, though, I'm not against merging it into the main sound tree at all. The only reason that didn't happen was that the activity just stopped somehow.
I'm really concerned about this idea. Allowing embedded vendors to push drivers into mainline which don't work with the standard frameworks for what they're doing sets a bad precedent which makes it much harder to push back on other vendors who also want to push their own stacks.
We've had (and still have) many pre-ASoC drivers, but the _maintenance_ cost of them was relatively small from the global sound-tree POV. As long as the driver is self-contained, there is no big hassle for maintenance.
Of course, the further _development_ is a totally different question. If the development diverges to too many different h/w components, a certain framework would certainly be needed. This is how ASoC was introduced and developed by Liam and you.
So, what I'm trying to say is:
- we can accept the present solution at a certain level, rather than just refusing it
- it can be taken into the main sound tree as an EXPERIMENTAL one, instead of keeping it in the staging tree forever
These don't mean that the current driver will necessarily be regarded as the standard. We might end up needing to develop a new, better standard framework for similar devices. It doesn't conflict with the merge action.
Anyway, I'll need to review the patches again from the current standpoint, together with Mark's reviews, before further decisions. And I'll be open to suggestions. Take my comments above just as my current "feelings" :)
thanks,
Takashi
On Sun, Oct 17, 2010 at 01:14:56PM +0200, Takashi Iwai wrote:
Mark Brown wrote:
I'm really concerned about this idea. Allowing embedded vendors to push drivers into mainline which don't work with the standard frameworks for what they're doing sets a bad precedent which makes it much harder to push back on other vendors who also want to push their own stacks.
We've had (and still have) many pre-ASoC drivers, but the _maintenance_ cost of them was relatively small from the global sound-tree POV. As long as the driver is self-contained, there is no big hassle for maintenance.
You're only seeing what goes on upstream. Outside of upstream you'll usually find a lot of people doing a lot of redundant work to get their boards working, code that never finds its way upstream and quite often gets done fairly repetitively. It's no hassle upstream because nobody ever bothers upstreaming anything much after the original driver, but for system integrators trying to use the driver on their platform, which differs from the vendor reference, and distributors trying to support multiple machines, it's a different kettle of fish.
Of course, the further _development_ is a totally different question. If the development diverges to too many different h/w components, a certain framework would certainly be needed. This is how ASoC was introduced and developed by Liam and you.
This is pretty much a given in the embedded marketplace if a CPU is at all successful - apart from anything else the embedded audio state of the art is on much quicker cycles than most components so the components which are leading edge today may not make quite so much sense in a year. The hardware is all constructed with standard interfaces between the parts (though there are no standards at all for the control of the devices) so these sorts of changes are very straightforward for the hardware guys to do, meaning that the software stack needs to be similarly modularised.
So, what I'm trying to say is:
- we can accept the present solution at a certain level, rather than just refusing it
I think this is the wrong approach. I think this says to embedded CPU vendors that they can go off and reinvent the wheel with their own embedded audio stacks. If all the CPU vendors already in mainline were to have gone down this route we'd have sixteen different ways to add support for a new board right now, with lots of redundancy in the CODEC side between them and each with a different set of features and limitations in how you can customise for your board (especially if you want to get code upstream). That wouldn't be terribly pleasant, it'd put Linux's audio support right back where all the other embedded OSs are.
Please bear in mind that we've already seen similar stacks from other vendors (Marvell and TI being the main ones I've been aware of) getting replaced as part of mainlining, and a couple of others I'm aware of but NDAed with doing the same thing. If you're saying you'll accept this approach and bypass the existing embedded audio stack then the pressure on vendors to do the right thing and move over to the Linux embedded audio stack is greatly reduced.
- it can be taken into the main sound tree as an EXPERIMENTAL one, instead of keeping it in the staging tree forever
I definitely think EXPERIMENTAL is too weak a pushback - I can't see that stopping anyone going out there and saying they have standard audio drivers in mainline which is the check box people are looking for. Even distributors routinely enable EXPERIMENTAL.
Like I say, this is not just a concern for this driver; it's also a big worry with other CPU vendors and with future Intel chips.
These don't mean that the current driver will necessarily be regarded as the standard. We might end up needing to develop a new, better standard framework for similar devices. It doesn't conflict with the merge action.
We already have an existing framework for embedded audio devices. We may want to extend or improve it but I don't see a pressing need to develop a new one. Whenever we end up with two different ways of doing the same thing it's not great, it just makes for confusion and redundancy.
At Sun, 17 Oct 2010 17:18:08 +0100, Mark Brown wrote:
On Sun, Oct 17, 2010 at 01:14:56PM +0200, Takashi Iwai wrote:
Mark Brown wrote:
I'm really concerned about this idea. Allowing embedded vendors to push drivers into mainline which don't work with the standard frameworks for what they're doing sets a bad precedent which makes it much harder to push back on other vendors who also want to push their own stacks.
We've had (and still have) many pre-ASoC drivers, but the _maintenance_ cost of them was relatively small from the global sound-tree POV. As long as the driver is self-contained, there is no big hassle for maintenance.
You're only seeing what goes on upstream. Outside of upstream you'll usually find a lot of people doing a lot of redundant work to get their boards working, code that never finds its way upstream and quite often gets done fairly repetitively. It's no hassle upstream because nobody ever bothers upstreaming anything much after the original driver, but for system integrators trying to use the driver on their platform, which differs from the vendor reference, and distributors trying to support multiple machines, it's a different kettle of fish.
OK, but what makes it different from keeping the stuff in the sound tree rather than the staging tree? Or are you suggesting removing the driver from the staging tree too?
Of course, the further _development_ is a totally different question. If the development diverges to too many different h/w components, a certain framework would certainly be needed. This is how ASoC was introduced and developed by Liam and you.
This is pretty much a given in the embedded marketplace if a CPU is at all successful - apart from anything else the embedded audio state of the art is on much quicker cycles than most components so the components which are leading edge today may not make quite so much sense in a year. The hardware is all constructed with standard interfaces between the parts (though there are no standards at all for the control of the devices) so these sorts of changes are very straightforward for the hardware guys to do, meaning that the software stack needs to be similarly modularised.
So, what I'm trying to say is:
- we can accept the present solution at a certain level, rather than just refusing it
I think this is the wrong approach. I think this says to embedded CPU vendors that they can go off and reinvent the wheel with their own embedded audio stacks.
If this really brings benefit to the _user_, why not?
If all the CPU vendors already in mainline were to have gone down this route we'd have sixteen different ways to add support for a new board right now, with lots of redundancy in the CODEC side between them and each with a different set of features and limitations in how you can customise for your board (especially if you want to get code upstream). That wouldn't be terribly pleasant, it'd put Linux's audio support right back where all the other embedded OSs are.
The OS exists to support the hardware. Yes, we now have a great framework, and h/w vendors should support it. But why must we restrict ourselves to it, and not allow the full hardware features to be used at all? Hardware encoding/decoding is a nice and long-wanted feature, indeed.
Please bear in mind that we've already seen similar stacks from other vendors (Marvell and TI being the main ones I've been aware of) getting replaced as part of mainlining, and a couple of others I'm aware of but NDAed with doing the same thing. If you're saying you'll accept this approach and bypass the existing embedded audio stack then the pressure on vendors to do the right thing and move over to the Linux embedded audio stack is greatly reduced.
Well, think about it from the user's POV. "Why can't I use the h/w audio decoding feature on Linux?" "Because it doesn't match with the philosophy of the existing Linux audio framework."
- it can be taken into the main sound tree as an EXPERIMENTAL one, instead of keeping it in the staging tree forever
I definitely think EXPERIMENTAL is too weak a pushback - I can't see that stopping anyone going out there and saying they have standard audio drivers in mainline which is the check box people are looking for. Even distributors routinely enable EXPERIMENTAL.
Ditto for staging tree :)
Like I say, this is not just a concern for this driver; it's also a big worry with other CPU vendors and with future Intel chips.
These don't mean that the current driver will necessarily be regarded as the standard. We might end up needing to develop a new, better standard framework for similar devices. It doesn't conflict with the merge action.
We already have an existing framework for embedded audio devices. We may want to extend or improve it but I don't see a pressing need to develop a new one. Whenever we end up with two different ways of doing the same thing it's not great, it just makes for confusion and redundancy.
So, are you saying that we can extend ASoC to provide support for this kind of hardware feature? If yes, then we can work on it first.
thanks,
Takashi
On Sun, Oct 17, 2010 at 11:36:13PM +0200, Takashi Iwai wrote:
Mark Brown wrote:
OK, but what makes it different from keeping the stuff in the sound tree rather than the staging tree? Or are you suggesting removing the driver from the staging tree too?
I do have some qualms about staging but given Greg's active policing of it and work to push things into the actual trees the warnings about its quality and general mainlineness are much more meaningful than those for experimental. There's also the tainting of kernels, which is really useful.
So, what I'm trying to say is:
- we can accept the present solution at a certain level, rather than just refusing it
I think this is the wrong approach. I think this says to embedded CPU vendors that they can go off and reinvent the wheel with their own embedded audio stacks.
If this really brings benefit to the _user_, why not?
We've got multiple classes of users here - system integrators are involved too - and there's also the issue of how we maintain stuff going forward if users get shiny new machines and we have to work out what the patched audio drivers that come with them actually mean.
If all the CPU vendors already in mainline were to have gone down this route we'd have sixteen different ways to add support for a new board right now, with lots of redundancy in the CODEC
The OS exists to support the hardware. Yes, we now have a great framework, and h/w vendors should support it. But why must we restrict ourselves to it, and not allow the full hardware features to be used at all? Hardware encoding/decoding is a nice and long-wanted feature, indeed.
I'm sorry I don't really understand what you're saying here. The encoder offload stuff is all totally orthogonal to the issue of using the ASoC APIs. On the CPU side the ASoC API is mostly a thin wrapper around the standard ALSA API so anything that works in ALSA should slot fairly readily into ASoC. What I'd expect to see happen is that the CODEC and board stuff would get pulled out and everything else would stay pretty much as-is, including the DSP.
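Roughly, with the multi-component ASoC API the machine-level glue would boil down to something like this sketch (all device and DAI names below are invented rather than taken from the actual driver, and the registration boilerplate is omitted):

static struct snd_soc_dai_link mrst_dai_link = {
	.name		= "Moorestown Audio",
	.stream_name	= "Audio",
	.cpu_dai_name	= "sst-platform-dai",	/* SST/DSP side, kept as-is */
	.platform_name	= "sst-platform",
	.codec_dai_name	= "example-codec-dai",	/* external CODEC over I2S */
	.codec_name	= "example-codec.0-001a",
};

static struct snd_soc_card mrst_card = {
	.name		= "Moorestown",
	.dai_link	= &mrst_dai_link,
	.num_links	= 1,
};

The DSP interface code sits behind the CPU/platform side of that link and wouldn't need to change to fit in.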
CPUs like OMAP (which are obviously already in the ASoC framework) also support this sort of stuff, the issues with mainline for them have been around the API to the DSP, though it looks like the TI stuff in staging is getting sorted out now.
Please bear in mind that we've already seen similar stacks from other vendors (Marvell and TI being the main ones I've been aware of) getting replaced as part of mainlining, and a couple of others I'm aware of but NDAed with doing the same thing. If you're saying you'll accept this approach and bypass the existing embedded audio stack then the pressure on vendors to do the right thing and move over to the Linux embedded audio stack is greatly reduced.
Well, think about it from the user's POV. "Why can't I use the h/w audio decoding feature on Linux?" "Because it doesn't match with the philosophy of the existing Linux audio framework."
Like I say I'm having real trouble connecting the above with what I wrote.
We already have an existing framework for embedded audio devices. We may want to extend or improve it but I don't see a pressing need to develop a new one. Whenever we end up with two different ways of doing the same thing it's not great, it just makes for confusion and redundancy.
So, are you saying that we can extend ASoC to provide support for this kind of hardware feature? If yes, then we can work on it first.
I'm saying I see nothing about this hardware which should prevent it working with ASoC right now. As I have said in terms of the overall structure of the device it's not really doing anything that hasn't been done elsewhere. There's stuff we'd have to bodge in the short term, but only fairly small bits that can be handled fairly straightforwardly.
At Sun, 17 Oct 2010 23:11:26 +0100, Mark Brown wrote:
On Sun, Oct 17, 2010 at 11:36:13PM +0200, Takashi Iwai wrote:
Mark Brown wrote:
OK, but what makes it different from keeping the stuff in the sound tree rather than the staging tree? Or are you suggesting removing the driver from the staging tree too?
I do have some qualms about staging but given Greg's active policing of it and work to push things into the actual trees the warnings about its quality and general mainlineness are much more meaningful than those for experimental. There's also the tainting of kernels, which is really useful.
But which end users would care about it? It's distributors and developers who care.
And is the code quality in question, which is supposed to be fixable there? Your argument against this driver doesn't sound like something fixable.
So, what I'm trying to say is:
- we can accept the present solution at a certain level, rather than just refusing it
I think this is the wrong approach. I think this says to embedded CPU vendors that they can go off and reinvent the wheel with their own embedded audio stacks.
If this really brings benefit to the _user_, why not?
We've got multiple classes of users here - system integrators are involved too - and there's also the issue of how we maintain stuff going forward if users get shiny new machines and we have to work out what the patched audio drivers that come with them actually mean.
So then why do you think keeping that stuff in the staging tree is OK while the sound tree is not? It's a question of to be or not to be. End-users never hesitate to use staging if it works. They don't notice the difference; they don't notice kernel taint either.
The staging driver is also in upstream; distributions almost always enable it, no matter what the quality of each driver is. For developers, the distinction is clear, of course. But for the rest, it's not.
Thus, to my eyes, there is no difference between keeping it in staging and in another tree. It's nothing but a question of where to cook.
So, if you really don't want this style of implementation, you'll have to take action to get rid of the driver from the staging tree. Otherwise others will submit similar drivers, and they'll land there, too. Then we can keep the stuff in the sound or another git tree, in a separate branch, until things get sorted out.
If all the CPU vendors already in mainline were to have gone down this route we'd have sixteen different ways to add support for a new board right now, with lots of redundancy in the CODEC
The OS exists to support the hardware. Yes, we now have a great framework, and h/w vendors should support it. But why must we restrict ourselves to it, and not allow the full hardware features to be used at all? Hardware encoding/decoding is a nice and long-wanted feature, indeed.
I'm sorry I don't really understand what you're saying here. The encoder offload stuff is all totally orthogonal to the issue of using the ASoC APIs. On the CPU side the ASoC API is mostly a thin wrapper around the standard ALSA API so anything that works in ALSA should slot fairly readily into ASoC. What I'd expect to see happen is that the CODEC and board stuff would get pulled out and everything else would stay pretty much as-is, including the DSP.
CPUs like OMAP (which are obviously already in the ASoC framework) also support this sort of stuff, the issues with mainline for them have been around the API to the DSP, though it looks like the TI stuff in staging is getting sorted out now.
Well, what I don't understand in your argument is why we must stop it being deployed. Because it doesn't match with the current design of the framework? If so, a straight question must come next: can't the framework itself be extended to suit such hardware?
Please bear in mind that we've already seen similar stacks from other vendors (Marvell and TI being the main ones I've been aware of) getting replaced as part of mainlining, and a couple of others I'm aware of but NDAed with doing the same thing. If you're saying you'll accept this approach and bypass the existing embedded audio stack then the pressure on vendors to do the right thing and move over to the Linux embedded audio stack is greatly reduced.
Well, think about it from the user's POV. "Why can't I use the h/w audio decoding feature on Linux?" "Because it doesn't match with the philosophy of the existing Linux audio framework."
Like I say I'm having real trouble connecting the above with what I wrote.
We already have an existing framework for embedded audio devices. We may want to extend or improve it but I don't see a pressing need to develop a new one. Whenever we end up with two different ways of doing the same thing it's not great, it just makes for confusion and redundancy.
So, are you saying that we can extend ASoC to provide support for this kind of hardware feature? If yes, then we can work on it first.
I'm saying I see nothing about this hardware which should prevent it working with ASoC right now. As I have said in terms of the overall structure of the device it's not really doing anything that hasn't been done elsewhere. There's stuff we'd have to bodge in the short term, but only fairly small bits that can be handled fairly straightforwardly.
OK, then we should fix this first.
thanks,
Takashi
On Mon, Oct 18, 2010 at 08:14:16AM +0200, Takashi Iwai wrote:
Mark Brown wrote:
I do have some qualms about staging but given Greg's active policing of it and work to push things into the actual trees the warnings about its quality and general mainlineness are much more meaningful than those for experimental. There's also the tainting of kernels, which is really useful.
But which end users would care about it? It's distributors and developers who care.
End users I'm not so worried about in the immediate instance; it's mostly developers, and in the embedded space there's way more of them than end users. A lot of the time end users don't get sufficient access to the hardware to worry about kernels (though in this case that's less likely to be the case).
And is the code quality in question, which is supposed to be fixable there? Your argument against this driver doesn't sound like something fixable.
What makes you think these problems are unfixable? It should be fairly straightforward to pull out the CODEC and board parts of the driver and factor them out - as I said last night, the ASoC CPU side support is a very thin wrapper around vanilla ALSA. The CODEC and board sections of the code look fairly basic and straightforward to redo within the ASoC APIs. I don't think this is an unreasonably difficult thing to do and it's certainly nowhere near starting from scratch.
So then why do you think keeping that stuff in the staging tree is OK while the sound tree is not? It's a question of to be or not to be. End-users never hesitate to use staging if it works. They don't notice the difference; they don't notice kernel taint either.
Like I say, I have some qualms about staging but they're pretty minor. The idea of accepting embedded drivers which don't use ASoC seems like a massive step back compared to what we've got right now, and is going down the path of accepting drivers that don't even sit within ALSA.
Currently with Linux we're able to say that if a CPU is supported and a CODEC is supported then we can use the two together. That's great, and it's good for all concerned. If we're going to start accepting vendor specific audio stacks we'll no longer be in a position to say that. We will be back to having to talk about specific combinations of devices working together.
The staging driver is also in upstream; distributions almost always enable it, no matter what the quality of each driver is. For developers, the distinction is clear, of course. But for the rest, it's not.
The biggest issues here surround developers.
So, if you really don't want this style of implementation, you'll have to take action to get rid of the driver from the staging tree. Otherwise others will submit similar drivers, and they'll land there, too. Then we can keep the stuff in the sound or another git tree, in a separate branch, until things get sorted out.
One of the nice things about staging (indeed the key thing from my point of view) is that Greg does from time to time remove drivers from staging which look like they're not making any progress towards leaving staging. In contrast drivers in the main tree generally sit there indefinitely with minimal impetus to improve them.
I'm sorry I don't really understand what you're saying here. The encoder offload stuff is all totally orthogonal to the issue of using the ASoC APIs. On the CPU side the ASoC API is mostly a thin wrapper around the standard ALSA API so anything that works in ALSA should slot fairly readily into ASoC. What I'd expect to see happen is that the CODEC and board stuff would get pulled out and everything else would stay pretty much as-is, including the DSP.
CPUs like OMAP (which are obviously already in the ASoC framework) also support this sort of stuff, the issues with mainline for them have been around the API to the DSP, though it looks like the TI stuff in staging is getting sorted out now.
Well, what I don't understand in your argument is why we must stop it being deployed.
Because it's not using the relevant framework at all, it's gone and reinvented the wheel without a pressing reason to do so and this will be very likely to create problems if the part is at all successful. It will also set a really bad precedent for other vendors which is even more worrying than the specific driver.
Throughout the kernel we regularly have to push back on embedded vendors (not just CPU vendors) who produce drivers outside the standard frameworks and get them converted to the standard frameworks so that they can be deployed without having to go into driver specific APIs all the time. Due to the fact that everything gets merged into one mainline tree Linux has much higher standards all round with this sort of thing than most other OSs but it takes work to enforce that.
This does include a bunch of the recently posted Intel drivers, including this one, but there's nothing particularly unusual here.
Because it doesn't match with the current design of
the framework? If so, a straight question must come next: can't the framework itself be extended to suit such hardware?
As I wrote several times I can see no problem supporting this hardware with current ASoC. I'm not sure how I can be any clearer on this point.
I'm saying I see nothing about this hardware which should prevent it working with ASoC right now. As I have said in terms of the overall structure of the device it's not really doing anything that hasn't been done elsewhere. There's stuff we'd have to bodge in the short term, but only fairly small bits that can be handled fairly straightforwardly.
OK, then we should fix this first.
This is all driver specific stuff, there's nothing that needs doing here immediately outside of the driver that I can spot right now.
Hi,
Building a Linux audio driver is like putting Lego blocks together. That is a major convenience in driver development, and other non-open-source systems cannot offer such a luxury to driver developers. I am in no position to make any statements, but please keep this luxury alive and thriving, and do not let it be ruined by vendor-specific ways if there is a standardized framework to employ, if the framework is suitable for the task, and if modifying the existing driver is not overly complicated.
Thanks for the great work you are doing.
Pavel.
At Mon, 18 Oct 2010 08:20:17 +0100, Mark Brown wrote:
This is all driver specific stuff, there's nothing that needs doing here immediately outside of the driver that I can spot right now.
... so the whole thread can be reduced to a sentence: "Rewrite the driver based on ASoC!"
Then it's a question for the Intel guys. If they would like to support it, we can happily wait for it before merging. If they don't want to but keep it as is, then the questions are:
- whether to keep it in upstream or not
- where to keep it, in staging or in sound tree
and the latter question is nothing but bikeshedding; either way it's the second choice, after all. We should spend time on more useful things :)
Takashi
This is all driver specific stuff, there's nothing that needs doing here immediately outside of the driver that I can spot right now.
... so the whole thread can be reduced to a sentence: "Rewrite the driver based on ASoC!"
Rework rather than rewrite, I think. Plus fix the abuse of the jack inputs etc. Our intent is to do the job properly.
- whether to keep it in upstream or not
I think so - it gets more visibility and it is already benefiting from that.
- where to keep it, in staging or in sound tree
I would prefer staging as again it gets more visibility. The second reason I would prefer that visibility is that separate to the plain ALSA driver there needs to be a bigger discussion around audio offload interfaces and standardizing them.
Alan
At Mon, 18 Oct 2010 11:34:00 +0100, Alan Cox wrote:
- where to keep it, in staging or in sound tree
I would prefer staging as again it gets more visibility. The second reason I would prefer that visibility is that separate to the plain ALSA driver there needs to be a bigger discussion around audio offload interfaces and standardizing them.
OK, if the current status is so, there is no need to move it to the sound tree. I just wanted to make sure this thing isn't left lying in staging forever with only coding-style clean-ups here and there.
thanks,
Takashi
On Mon, 2010-10-18 at 10:10 +0200, Takashi Iwai wrote:
... so the whole thread can be reduced to a sentence: "Rewrite the driver based on ASoC!"
Then it's a question for the Intel guys. If they would like to support it, we can happily wait for it before merging. If they don't want to but keep it as is, then the questions are:
- whether to keep it in upstream or not
- where to keep it, in staging or in sound tree
I've just had a look at the code and my fear here is that this will be dumped in its current state and forgotten about when Intel moves on to their next big thing(tm). I hope I'm wrong, though...
I also have to echo Mark and Pavel's concerns here that re-inventing the wheel in some areas is both unnecessary and divisive. Subsystem frameworks are one of the kernel's key strengths, and this lessens that to a degree.
Liam
Because it's not using the relevant framework at all, it's gone and reinvented the wheel without a pressing reason to do so and this will be very likely to create problems if the part is at all successful.
It's more a case of predating the wheel, as far as I can tell. In terms of frameworks I don't think it matters in and of itself - but once that means you have to write two different versions of the same codec chip driver, for example, yes it matters.
This is all driver specific stuff, there's nothing that needs doing here immediately outside of the driver that I can spot right now.
Well, our agenda right now is to strip out the crap, polish up the various bugs found in review (and in staging it's already getting a trickle of very useful community input swatting silly error path bugs, signed/unsigned typing etc) and then assign someone in Intel OTC (hopefully, if things go to plan, someone with prior ALSA dev experience) to make it a good ALSA citizen, which probably means using ASoC or similar.
Putting it in staging allows that work to be done in public in a meaningful way where the code and changes get review.
Alan
On Mon, Oct 18, 2010 at 11:24:18AM +0100, Alan Cox wrote:
Because it's not using the relevant framework at all, it's gone and reinvented the wheel without a pressing reason to do so and this will be very likely to create problems if the part is at all successful.
It's more a case of predating the wheel, as far as I can tell. In terms
ASoC has been in mainline since 2.6.21, which I imagine predates the Moorestown code.
of frameworks I don't think it matters in and of itself - but once that means you have to write two different versions of the same codec chip driver, for example, yes it matters.
Yes, exactly - this is the big problem. It's often much worse than just two different versions; with these things it's easy to end up having to have versions per board.
Putting it in staging allows that work to be done in public in a meaningful way where the code and changes get review.
I agree with the approach you're outlining here.
participants (5)
- Alan Cox
- Liam Girdwood
- Mark Brown
- Pavel Hofman
- Takashi Iwai