[alsa-devel] ASOC - Codecs : Renaming of spdif_tranceiver.c
Hi,
In my project I feel the need for a generic_codec. This would provide the required codec DAIs for use cases such as:
- HDMI or S/PDIF, where there is no codec involved.
- Audio over MOST, where CPLD/glue logic is used.
- Complex audio DSPs, which will mostly be programmed from user space.
- SoC manufacturers who would like to ship their drivers without having to know which codec will finally be present in the system.
Currently there is spdif_tranceiver.c, which serves this purpose for S/PDIF, but using it for the other use cases will create confusion when shipping the code.
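To make it concrete, here is a rough sketch of what I have in mind, modelled on the style of the existing S/PDIF stub and assuming the current snd_soc_register_codec() API; the driver name "generic-stub-codec", the DAI name "stub-hifi" and the rate/format masks are placeholders, not an existing driver:

/*
 * Rough sketch only: a fixed-function stub CODEC in the style of the
 * existing S/PDIF transceiver stub.  All names and capability masks
 * below are placeholders.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/pcm.h>
#include <sound/soc.h>

#define STUB_RATES	SNDRV_PCM_RATE_8000_192000
#define STUB_FORMATS	(SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE)

/* No registers, controls or DAPM widgets: the device is fixed function. */
static struct snd_soc_codec_driver soc_codec_stub;

static struct snd_soc_dai_driver stub_dai = {
	.name = "stub-hifi",
	.playback = {
		.stream_name	= "Playback",
		.channels_min	= 2,
		.channels_max	= 2,
		.rates		= STUB_RATES,
		.formats	= STUB_FORMATS,
	},
};

static int stub_codec_probe(struct platform_device *pdev)
{
	return snd_soc_register_codec(&pdev->dev, &soc_codec_stub,
				      &stub_dai, 1);
}

static int stub_codec_remove(struct platform_device *pdev)
{
	snd_soc_unregister_codec(&pdev->dev);
	return 0;
}

static struct platform_driver stub_codec_driver = {
	.driver = {
		.name	= "generic-stub-codec",
		.owner	= THIS_MODULE,
	},
	.probe	= stub_codec_probe,
	.remove	= stub_codec_remove,
};
module_platform_driver(stub_codec_driver);

MODULE_LICENSE("GPL");

A machine driver would then point its dai_link at "stub-hifi" instead of at a real CODEC DAI.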
What is the best way forward?
1) Use spdif_tranceiver.c for all the purposes. Disadvantage: this will create more confusion when shipping.
2) Deprecate spdif_transciever.c and rename it generic_codec, changing the DAI names and the function names.
3) Create a new generic_codec.c, keep spdif_transceiver.c for now, and deprecate the latter in the next version.
4) Any other ideas?
Thanks, Nitin
CCing the maintainers to get attention.
On Wed, Mar 07, 2012 at 12:15:07PM +0530, Nitin PAI wrote:
Adding a CC to Liam. *ALWAYS* CC the maintainers on mails; I know you followed up adding Liam, but you still missed me off.
In my project I feel the need for a generic_codec. This would provide the required codec DAIs for use cases such as:
- HDMI or S/PDIF, where there is no codec involved.
- Audio over MOST, where CPLD/glue logic is used.
These still need something to say what the capabilities of the device are - ideally users could just specify the part name and not worry about that stuff. If you look at how thin the drivers are it's really not clear to me what we'd save here, though if someone did the work and it looked useful... Another option is to just embed all the data structures in a single driver which binds to them all and selects the right data structure based on the probe information; that might be useful.
- Complex audio DSPs, which will mostly be programmed from user space.
These will need in-kernel drivers, if only to export the interfaces required by the application layer to userspace. They'll also have difficulty integrating well with the power management without an in-kernel representation of the device, especially for bypass cases like in-call audio.
- SoC manufacturers who would like to ship their drivers without having to know which codec will finally be present in the system.
The CPU-side drivers have no dependency on the CODEC drivers; these are integrated by the machine driver. The SoC vendor can easily ship drivers which will work with any CODEC that works well on Linux.
If you actually mean machine drivers, it's not really possible to write a generic machine driver which will actually work for modern CODECs, especially those used in smartphones and tablets: the clocking architectures are far too varied and flexible for this to work usefully. This is before you get into things that are genuine hardware choices and need to be enumerated at runtime, and of course most modern devices will require software to power them on, so you need to match them up with the relevant driver to do that.
In general pluggable reference boards really need to have some means of identifying the connected boards if they want to support running with different setups without recompilation. For example, the Wolfson reference systems have ID chips on each board which are queried during boot to allow us to instantiate things based on what's physically connected. We can swap boards over and have a single system image boot successfully without software changes.
- Any other ideas?
I don't think you've fully thought through what you're suggesting; it doesn't seem practical for most things.
Hi Mark,
The use case for the DSP was specifically automotive, where there are no power requirements and the audio functionality is always on. The programming of the codecs is usually done from user space. However, ALSA is used for the convenience of alsa-lib and its various functions (from an application standpoint).
The CPU-side drivers have no dependency on the CODEC drivers; these are integrated by the machine driver. The SoC vendor can easily ship drivers which will work with any CODEC that works well on Linux.
The problem with just writing the machine driver and shipping it is that, without any codec driver, the cards won't be enumerated. Having a thin codec driver to enumerate the cards, and shipping it, would help (from a debug and development standpoint).
Another option is to just embed all the data structures in a single driver which binds to them all and selects the right data structure based on the probe information; that might be useful.
Are you suggesting having this in the machine driver? I didn't understand this; please clarify.
Thanks, Nitin
On Wed, Mar 07, 2012 at 07:45:24PM +0530, Nitin PAI wrote:
Don't top post, and please don't do things like reordering content.
The use case for the DSP was specifically automotive, where there are no power requirements and the audio functionality is always on. The programming of the codecs is usually done from user space. However, ALSA is used for the convenience of alsa-lib and its various functions (from an application standpoint).
Even with automotive there are *some* power limits, and obviously the power control also includes things like syncing startup of the algorithms with the data path.
The CPU-side drivers have no dependency on the CODEC drivers; these are integrated by the machine driver. The SoC vendor can easily ship drivers which will work with any CODEC that works well on Linux.
The problem with just writing the machine driver and shipping it is that, without any codec driver, the cards won't be enumerated. Having a thin codec driver to enumerate the cards, and shipping it, would help (from a debug and development standpoint).
Well, yes. Unless someone writes a card driver the CPU driver won't do anything useful but then without the card driver you've no idea how the system is actually wired up and the chances of it doing anything useful are close to zero. There's no getting out of the machine driver.
Another option is to just embed all the data structures in a single driver which binds to them all and selects the right data structure based on the probe information; that might be useful.
Are you suggesting having this in the machine driver? I didn't understand this; please clarify.
No, obviously CODECs should be handled by CODEC drivers...
Hi Mark,
Even with automotive there are *some* power limits, and obviously the power control also includes things like syncing startup of the algorithms with the data path.
I don't think that absolutely needs to be done as part of the kernel layer.
Well, yes. Unless someone writes a card driver the CPU driver won't do anything useful but then without the card driver you've no idea how the system is actually wired up and the chances of it doing anything useful are close to zero. There's no getting out of the machine driver.
This is the point. Why write a card driver? For example, for audio over MOST or for HDMI audio? Or when I have some utility in user space to test the functionality? Wouldn't it be better to at least enumerate the driver and show the capabilities it supports? This can help in many phases of design, like emulation and bring-up (doing loopback between two interfaces at the SoC level).
So far I have been using ALSA for the user-space utilities it supports (alsa-lib); I don't feel that codec support is absolutely needed (S/PDIF/HDMI for instance), and this can apply to other machine drivers as well.
Can't we have support for all this somewhere else, if not in the codec?
Thanks, Nitin
On Wed, Mar 07, 2012 at 08:48:09PM +0530, Nitin PAI wrote:
Even with automotive there are *some* power limits, and obviously the power control also includes things like syncing startup of the algorithms with the data path.
I don't think that absolutely needs to be done as part of the kernel layer.
In general it really does - things get sensitive about their clocks and algorithms get sensitive about their inputs. You can often get something together that isn't joined together but usually there's a stack of simplifying assumptions in there which can break easily enough. Besides, you still need something there which exposes whatever physical interfaces userspace needs to use to program things. If userspace can just randomly interact with hardware that's a bit of a failure.
Well, yes. Unless someone writes a card driver the CPU driver won't do anything useful but then without the card driver you've no idea how the system is actually wired up and the chances of it doing anything useful are close to zero. There's no getting out of the machine driver.
This is the point. Why write a card driver? For example, for audio over MOST or for HDMI audio?
If the same driver works on all systems then you can just write the driver and it'll work for everyone. Nothing at the CODEC level is going to make this more or less easy; you're actually saying you've got some hardware for which you can write a generic machine driver. If that's the case you should just do that; there are some examples of this already, like the fsi-hdmi driver.
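To illustrate (a sketch only, not taken from fsi-hdmi; every device and DAI name below is a placeholder a real board would have to fill in), a machine driver for a fixed link is essentially just a dai_link plus a card registration:

#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/soc.h>

/* All names here are placeholders for whatever the SoC/CODEC drivers register. */
static struct snd_soc_dai_link myboard_dai_link = {
	.name		= "HDMI",
	.stream_name	= "HDMI Playback",
	.cpu_dai_name	= "mysoc-i2s.0",
	.codec_dai_name	= "dit-hifi",
	.platform_name	= "mysoc-pcm-audio",
	.codec_name	= "spdif-dit",
};

static struct snd_soc_card myboard_card = {
	.name		= "MyBoard",
	.dai_link	= &myboard_dai_link,
	.num_links	= 1,
};

static int myboard_probe(struct platform_device *pdev)
{
	myboard_card.dev = &pdev->dev;
	return snd_soc_register_card(&myboard_card);
}

static int myboard_remove(struct platform_device *pdev)
{
	snd_soc_unregister_card(&myboard_card);
	return 0;
}

static struct platform_driver myboard_driver = {
	.driver	= { .name = "myboard-audio", .owner = THIS_MODULE },
	.probe	= myboard_probe,
	.remove	= myboard_remove,
};
module_platform_driver(myboard_driver);

MODULE_LICENSE("GPL");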
Or when I have some utility in userspace to test the functionality. Wont it be better to atleast enumerate the driver and show the capabilities it supports?
You can add debugfs information to dump the capabilities and whatnot if that's useful to you...
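For illustration (a sketch only; the file name, its location and what gets dumped are entirely up to you), something like this will expose what a DAI driver advertises:

#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/seq_file.h>
#include <sound/pcm.h>
#include <sound/soc.h>

/* Dump the playback capabilities of a struct snd_soc_dai_driver. */
static int dai_caps_show(struct seq_file *s, void *unused)
{
	struct snd_soc_dai_driver *drv = s->private;

	seq_printf(s, "playback channels: %u-%u\n",
		   drv->playback.channels_min, drv->playback.channels_max);
	seq_printf(s, "playback rates mask: 0x%x\n", drv->playback.rates);
	seq_printf(s, "playback formats mask: 0x%llx\n",
		   (unsigned long long)drv->playback.formats);
	return 0;
}

static int dai_caps_open(struct inode *inode, struct file *file)
{
	return single_open(file, dai_caps_show, inode->i_private);
}

static const struct file_operations dai_caps_fops = {
	.open		= dai_caps_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

/* In a driver's probe, with "stub_dai" being whichever DAI descriptor
 * you want to inspect:
 *
 *	debugfs_create_file("dai_caps", 0444, NULL, &stub_dai, &dai_caps_fops);
 */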
This can help in many phases of design, like emulation and bring-up (doing loopback between two interfaces at the SoC level).
If you're doing stuff like this you're probably more than capable of tweaking things to suit your needs, and should probably be prepared to accept a level of brokenness while you're at it. For example, if you've not got clocks then you won't transfer data and userspace tends to get upset with you. You'd still need to do things like set up the clocking even for the SoC loopback case; everything is going to need to agree on where the clocks come from and how they flow.
Can't we have support for all this somewhere else, if not in the codec?
Honestly it just sounds like you want to write some machine drivers for your systems.
You can add debugfs information to dump the capabilities and whatnot if that's useful to you...
If the cards are not enumerated, none of this makes any sense.
You'd still need to do things like set up the clocking even for the SoC loopback case; everything is going to need to agree on where the clocks come from and how they flow.
The clock need not come from the codec; it can come from one of the other masters in the system.
Honestly it just sounds like you want to write some machine drivers for your systems.
Yes, that's the purpose, but I want to ship them too, for the reasons I mentioned above. Since the enumeration of the machine driver depends on the linkage with the codec driver, it's not possible for me to write one. I wish that spdif_tranceiver had been written for more generalized cases and not just S/PDIF.
Thanks, --Nitin
On Wed, Mar 07, 2012 at 09:58:56PM +0530, Nitin PAI wrote:
You can add debugfs information to dump the capabilities and whatnot if that's useful to you...
If the cards are not enumerated, none of this makes any sense.
The DAI and similar drivers come up all by themselves and need to do so prior to the cards actually instantiating, so we can create debugfs stuff for them as soon as they register if we want to.
You'd still need to do things like set up the clocking even for the SoC loopback case; everything is going to need to agree on where the clocks come from and how they flow.
The clock need not come from the codec; it can come from one of the other masters in the system.
Right, exactly - the point is that the machine driver makes the decision about the clocking architecture of the given system; the individual drivers can't reasonably make that decision themselves.
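To illustrate the sort of decision that lives there (a sketch only; the DAI format, clock direction and MYBOARD_MCLK_RATE are arbitrary placeholders), a machine driver typically pins the clocking down in its hw_params callback:

#include <sound/pcm.h>
#include <sound/soc.h>

#define MYBOARD_MCLK_RATE	12288000	/* placeholder board clock */

static int myboard_hw_params(struct snd_pcm_substream *substream,
			     struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	unsigned int fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF |
			   SND_SOC_DAIFMT_CBS_CFS;	/* CODEC is clock slave */
	int ret;

	ret = snd_soc_dai_set_fmt(rtd->codec_dai, fmt);
	if (ret < 0)
		return ret;

	ret = snd_soc_dai_set_fmt(rtd->cpu_dai, fmt);
	if (ret < 0)
		return ret;

	/* In this sketch the CODEC's MCLK comes from a board-level oscillator. */
	return snd_soc_dai_set_sysclk(rtd->codec_dai, 0,
				      MYBOARD_MCLK_RATE, SND_SOC_CLOCK_IN);
}

static struct snd_soc_ops myboard_ops = {
	.hw_params = myboard_hw_params,
};

The ops structure is then hooked up via the dai_link's .ops field; only the machine driver knows which end is clock master and where MCLK comes from.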
Honestly it just sounds like you want to write some machine drivers for your systems.
Yes, that's the purpose, but I want to ship them too, for the reasons I mentioned above. Since the enumeration of the machine driver depends on the linkage with the codec driver, it's not possible for me to write one. I wish that spdif_tranceiver had been written for more generalized cases and not just S/PDIF.
Well, just write a CODEC driver that matches what you've got down on your boards. Usually the driver does need to enforce some kind of limits (on input format and sample rate, normally) even for a simple device with no software control; otherwise the device can get driven out of spec.
Well, just write a CODEC driver that matches what you've got down on your boards. Usually the driver does need to enforce some kind of limits (on input format and sample rate, normally) even for a simple device with no software control; otherwise the device can get driven out of spec.
The CODEC is part of the system integration process. From the SoC standpoint, the machine driver is what needs to be delivered. I don't know what the final codec will be; it's up to the integrator to write that. Maybe they can pick one from the existing ASoC codecs.
I don't like the dependency of the machine driver on the codec driver for enumeration. The machine driver can exist by itself without the codec driver [USB, PCI, HDMI, S/PDIF]. However, the vice versa is not true and is of no use. Binding them together makes the use cases easy (power, for instance, and user interaction), but it should not have been a hard limit. Given that thought process, ALSA should allow dummy codec drivers, which it does (via spdif_tranceiver.c), but why not make it completely generic (remove the S/PDIF keywords)?
Thanks for your pointer to the HDMI driver; I will check how it handles this dependency on the codec driver.
--Nitin
Usually the driver does need to enforce some kind of limits (on input format and sample rate, normally) even for a simple device with no software control; otherwise the device can get driven out of spec.
Agreed. This is more of a system integration problem than a machine driver problem. Most of these constraints are already enforced in the machine driver.
--Nitin
On Wed, Mar 07, 2012 at 10:34:21PM +0530, Nitin PAI wrote:
Usually the driver does need to enforce some kind of limits (on input format and sample rate, normally) even for a simple device with no software control; otherwise the device can get driven out of spec.
Agreed. This is more of a system integration problem than a machine driver problem. Most of these constraints are already enforced in the machine driver.
Perhaps your BSP is doing things differently to mainline here; it's vanishingly rare in mainline for machine drivers to enforce any limits themselves. Usually the CODEC and SoC drivers do this.
Perhaps your BSP is doing things differently to mainline here; it's vanishingly rare in mainline for machine drivers to enforce any limits themselves. Usually the CODEC and SoC drivers do this.
I meant that the SoC drivers can still enforce these limits and need not rely on the CODEC driver.
--Nitin
On Wed, Mar 07, 2012 at 11:28:49PM +0530, Nitin PAI wrote:
Perhaps your BSP is doing things differently to mainline here; it's vanishingly rare in mainline for machine drivers to enforce any limits themselves. Usually the CODEC and SoC drivers do this.
I meant that the SoC drivers can still enforce these limits and need not rely on the CODEC driver.
Usually the CODEC will be way more limited than the SoC; for example, the WM8727 (which is one of these dumb drivers) only supports sample rates down to 32kHz, but I'd be astonished if a SoC didn't do standard audio rates from 8kHz up.
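In stub-driver terms that limit is just a narrower rates mask (the mask below is illustrative, not copied from the real wm8727 driver):

/* A dumb DAC limited to 32kHz and above would advertise something like
 * this in its snd_soc_dai_driver, while the SoC DAI would typically
 * advertise SNDRV_PCM_RATE_8000_192000. */
#define DUMB_DAC_RATES	(SNDRV_PCM_RATE_32000 | SNDRV_PCM_RATE_44100 | \
			 SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_88200 | \
			 SNDRV_PCM_RATE_96000 | SNDRV_PCM_RATE_176400 | \
			 SNDRV_PCM_RATE_192000)

ASoC intersects the CPU and CODEC capabilities when the link is brought up, so the card only exposes the rates both ends can actually do.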
On Wed, Mar 07, 2012 at 10:23:59PM +0530, Nitin PAI wrote:
Please fix your quoting; you're not attributing and you're double-quoting things.
Well, just write a CODEC driver that matches what you've got down on your boards. Usually the driver does need to enforce some kind of limits (on input format and sample rate, normally) even for a simple device with no software control; otherwise the device can get driven out of spec.
The CODEC is part of the system integration process. From the SoC standpoint, the machine driver is what needs to be delivered. I don't know what the final codec will be; it's up to the integrator to write that. Maybe they can pick one from the existing ASoC codecs.
Well, quite - if you're working with a system then you need a machine driver.
I don't like the dependency of the machine driver on the codec driver for enumeration. The machine driver can exist by itself without the codec driver [USB, PCI, HDMI, S/PDIF]. However, the vice versa is not true and is of no use.
In the USB case we're going to be going nowhere near ASoC anyway; there's a totally orthogonal hardware design and set of drivers at the ALSA level. Similarly for PCI, though obviously it's possible to make a PCI card which is decomposed enough to make sense to support via ASoC.
In the S/PDIF case you do actually have a CODEC - while it's common for this to be a fixed-function hardware CODEC (in which case you need a stub driver saying what it looks like within the system, as I said above) this isn't always the case. For example, the WM8804 in mainline is an S/PDIF transceiver with register control. The effort required to bolt on fixed-function drivers (or to parameterise a generic one if someone wants to do that) is so trivial it really doesn't seem worth caring about. HDMI is broadly similar to S/PDIF here.
Binding them together makes the use cases easy (power, for instance, and user interaction), but it should not have been a hard limit. Given that thought process, ALSA should allow dummy codec drivers, which it does (via spdif_tranceiver.c), but why not make it completely generic (remove the S/PDIF keywords)?
Allow me to suggest again (as I did earlier in this thread) munging all these dumb drivers together so you just have a bunch of things in the one driver which the driver distinguishes by ID. To repeat once more, you're always going to need to put in at least some information about the limits the device has.
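Roughly (a sketch only; the device names, indices and capability masks are made up for illustration), such a combined driver would carry a table of DAI descriptions and select one by platform device ID:

#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/pcm.h>
#include <sound/soc.h>

/* One entry per dumb device; names and capabilities are illustrative. */
static struct snd_soc_dai_driver dumb_dais[] = {
	{	/* e.g. an S/PDIF transmitter */
		.name = "dit-hifi",
		.playback = {
			.stream_name	= "Playback",
			.channels_min	= 2,
			.channels_max	= 2,
			.rates		= SNDRV_PCM_RATE_32000 |
					  SNDRV_PCM_RATE_44100 |
					  SNDRV_PCM_RATE_48000,
			.formats	= SNDRV_PCM_FMTBIT_S16_LE,
		},
	},
	{	/* e.g. a dumb DAC with a wider rate range */
		.name = "dumb-dac-hifi",
		.playback = {
			.stream_name	= "Playback",
			.channels_min	= 2,
			.channels_max	= 2,
			.rates		= SNDRV_PCM_RATE_8000_192000,
			.formats	= SNDRV_PCM_FMTBIT_S16_LE |
					  SNDRV_PCM_FMTBIT_S24_LE,
		},
	},
};

/* driver_data is an index into dumb_dais[]. */
static const struct platform_device_id dumb_codec_ids[] = {
	{ "spdif-dit", 0 },
	{ "dumb-dac",  1 },
	{ }
};
MODULE_DEVICE_TABLE(platform, dumb_codec_ids);

static struct snd_soc_codec_driver dumb_codec;	/* nothing to control */

static int dumb_codec_probe(struct platform_device *pdev)
{
	const struct platform_device_id *id = platform_get_device_id(pdev);

	return snd_soc_register_codec(&pdev->dev, &dumb_codec,
				      &dumb_dais[id->driver_data], 1);
}

static int dumb_codec_remove(struct platform_device *pdev)
{
	snd_soc_unregister_codec(&pdev->dev);
	return 0;
}

static struct platform_driver dumb_codec_driver = {
	.driver		= { .name = "dumb-codec", .owner = THIS_MODULE },
	.probe		= dumb_codec_probe,
	.remove		= dumb_codec_remove,
	.id_table	= dumb_codec_ids,
};
module_platform_driver(dumb_codec_driver);

MODULE_LICENSE("GPL");

Each entry still records the limits of the particular part, which is the piece of information you can't avoid supplying.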
Participants (2): Mark Brown, Nitin PAI