[alsa-devel] Using HDA on an SoC without PCI
Hi,
I'm working on a system with an HDMI codec on an HDA interface, but that interface doesn't use PCI. I want to reuse as much of the existing code as possible while avoiding scattering ifdefs all over hda_intel.c.
Would gathering the PCI-specific functions from hda_intel.c into an interface struct of some kind make sense? If azx were probed from PCI, a PCI interface would be used; if it were probed as a platform driver, the appropriate interface for that platform would be used instead. This would add some overhead to operations such as azx_writel, but that overhead could be measured to make sure it isn't detrimental to performance.
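Roughly, I'm picturing something like the sketch below; the azx_io_ops struct and all the azx_pci_* names are hypothetical, just to illustrate the indirection:

    #include <linux/io.h>
    #include <linux/types.h>

    struct azx;

    /* Hypothetical indirection for bus-specific register access. */
    struct azx_io_ops {
    	void (*reg_writel)(struct azx *chip, u32 value, u32 reg);
    	u32 (*reg_readl)(struct azx *chip, u32 reg);
    };

    struct azx {
    	void __iomem *remap_addr;
    	const struct azx_io_ops *io_ops;
    	/* ... existing fields ... */
    };

    /* The PCI variant keeps doing plain MMIO, as hda_intel.c does today. */
    static void azx_pci_reg_writel(struct azx *chip, u32 value, u32 reg)
    {
    	writel(value, chip->remap_addr + reg);
    }

    static u32 azx_pci_reg_readl(struct azx *chip, u32 reg)
    {
    	return readl(chip->remap_addr + reg);
    }

    static const struct azx_io_ops azx_pci_io_ops = {
    	.reg_writel = azx_pci_reg_writel,
    	.reg_readl  = azx_pci_reg_readl,
    };

    /* Common code then goes through the ops instead of calling writel()
     * directly; the probe path installs azx_pci_io_ops or a platform
     * variant as appropriate. */
    static inline void azx_writel_indirect(struct azx *chip, u32 reg, u32 value)
    {
    	chip->io_ops->reg_writel(chip, value, reg);
    }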
Any other ideas on how to approach this?
Thanks,
Dylan
On 02/14/2014 10:47 AM, Dylan Reid wrote:
> Hi,
> I'm working on a system with an HDMI codec on an HDA interface, but that interface doesn't use PCI. I want to reuse as much of the existing code as possible while avoiding scattering ifdefs all over hda_intel.c.
I think the relevant APIs stub themselves out when appropriate, so you can do this all without compile-time ifdefs.
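For example, the usual kernel header pattern looks like this; CONFIG_SND_HDA_PLATFORM and soc_hda_enable_clocks() are made-up names here, purely to illustrate:

    #include <linux/errno.h>

    struct azx;

    #ifdef CONFIG_SND_HDA_PLATFORM		/* hypothetical Kconfig symbol */
    int soc_hda_enable_clocks(struct azx *chip);
    #else
    /* Stubbed out when platform support isn't built, so common code can
     * call it unconditionally, with no #ifdef at the call site. */
    static inline int soc_hda_enable_clocks(struct azx *chip)
    {
    	return 0;
    }
    #endif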
> Would gathering the PCI-specific functions from hda_intel.c into an interface struct of some kind make sense? If azx were probed from PCI, a PCI interface would be used; if it were probed as a platform driver, the appropriate interface for that platform would be used instead. This would add some overhead to operations such as azx_writel, but that overhead could be measured to make sure it isn't detrimental to performance.
Our downstream trees do have patches to allow the HDA driver to be instantiated from either PCI or platform devices. You can find the code in the branch:
git://nv-tegra.nvidia.com/linux-2.6.git rel-roth-r3 e.g. ef8491346266 "ALSA: hda: Add hda platform driver support"
There are many other patches to sound/pci/hda after that. I don't recall exactly how clean that first patch was, and I probably never saw anything after it, but it might be a good start...
We've only recently had HDMI support upstream on Tegra; otherwise I might have found time to try upstreaming those patches before now. Not that I have spare time :-)
On Fri, Feb 14, 2014 at 10:01 AM, Stephen Warren <swarren@wwwdotorg.org> wrote:
> On 02/14/2014 10:47 AM, Dylan Reid wrote:
> > Hi,
> > I'm working on a system with an HDMI codec on an HDA interface, but that interface doesn't use PCI. I want to reuse as much of the existing code as possible while avoiding scattering ifdefs all over hda_intel.c.
> I think the relevant APIs stub themselves out when appropriate, so you can do this all without compile-time ifdefs.
> > Would gathering the PCI-specific functions from hda_intel.c into an interface struct of some kind make sense? If azx were probed from PCI, a PCI interface would be used; if it were probed as a platform driver, the appropriate interface for that platform would be used instead. This would add some overhead to operations such as azx_writel, but that overhead could be measured to make sure it isn't detrimental to performance.
> Our downstream trees do have patches to allow the HDA driver to be instantiated from either PCI or platform devices. You can find the code in the branch:
> git://nv-tegra.nvidia.com/linux-2.6.git rel-roth-r3 e.g. ef8491346266 "ALSA: hda: Add hda platform driver support"
> There are many other patches to sound/pci/hda after that. I don't recall exactly how clean that first patch was, and I probably never saw anything after it, but it might be a good start...
> We've only recently had HDMI support upstream on Tegra; otherwise I might have found time to try upstreaming those patches before now. Not that I have spare time :-)
That is a good start, even better that it's also in the 3.10 Tegra kernel. That first patch looks pretty good. The subsequent changes to enable Tegra HDMI are a little more interesting, but they don't look impossible to clean up. The register access is tricky, since the azx_write* defines need to be different. I'll play with it for a while and see how it goes.
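For reference, here's the current macro (quoting from memory, so double-check hda_intel.c) and one possible indirect form, reusing the hypothetical io_ops idea from my first mail:

    /*
     * Current PCI-only form in hda_intel.c (quoted from memory):
     *
     *   #define azx_writel(chip, reg, value) \
     *           writel(value, (chip)->remap_addr + ICH6_REG_##reg)
     *
     * The ICH6_REG_##reg token pasting is resolved at compile time, so the
     * per-bus difference has to live below it, in the actual access.
     */

    /* One option: keep the token pasting, but route the access through the
     * hypothetical io_ops so PCI and platform can differ at runtime. */
    #define azx_writel(chip, reg, value) \
    	((chip)->io_ops->reg_writel((chip), (value), ICH6_REG_##reg))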
Thanks Stephen,
Dylan
At Fri, 14 Feb 2014 11:01:14 -0700, Stephen Warren wrote:
> On 02/14/2014 10:47 AM, Dylan Reid wrote:
> > Hi,
> > I'm working on a system with an HDMI codec on an HDA interface, but that interface doesn't use PCI. I want to reuse as much of the existing code as possible while avoiding scattering ifdefs all over hda_intel.c.
> I think the relevant APIs stub themselves out when appropriate, so you can do this all without compile-time ifdefs.
> > Would gathering the PCI-specific functions from hda_intel.c into an interface struct of some kind make sense? If azx were probed from PCI, a PCI interface would be used; if it were probed as a platform driver, the appropriate interface for that platform would be used instead. This would add some overhead to operations such as azx_writel, but that overhead could be measured to make sure it isn't detrimental to performance.
> Our downstream trees do have patches to allow the HDA driver to be instantiated from either PCI or platform devices. You can find the code in the branch:
> git://nv-tegra.nvidia.com/linux-2.6.git rel-roth-r3 e.g. ef8491346266 "ALSA: hda: Add hda platform driver support"
Interesting, I'll take a look later.
> There are many other patches to sound/pci/hda after that. I don't recall exactly how clean that first patch was, and I probably never saw anything after it, but it might be a good start...
I haven't looked at your patches yet, but the HD-audio codec drivers (hda_codec.c, patch_*.c, etc.) are basically independent of the h/w access details, and the whole PCI and DMA implementation is isolated in hda_intel.c. So it shouldn't be too hard to implement on other buses, I suppose.
If I were to implement this, I'd rather write a new controller driver from scratch instead of hacking hda_intel.c. The controller driver needs to create the hda_bus and hda_codecs with the proper ops. Some code would have to be duplicated, but this might turn out cleaner in the end. Let's see...
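Something like the following is what I mean; a very rough sketch, recalling the hda_bus_template / hda_bus_ops shape from hda_codec.h (field names and signatures should be double-checked against the tree), and all soc_hda_* names are hypothetical:

    #include <sound/core.h>
    #include "hda_codec.h"

    /* Hypothetical SoC controller state. */
    struct soc_hda {
    	struct hda_bus *bus;
    	/* SoC-specific link/DMA state would live here */
    };

    /* Hypothetical low-level link helpers, implemented elsewhere. */
    int soc_hda_send_verb(struct soc_hda *chip, unsigned int cmd);
    unsigned int soc_hda_read_response(struct soc_hda *chip, unsigned int addr);

    /* Bus ops: send a verb over the link. */
    static int soc_hda_command(struct hda_bus *bus, unsigned int cmd)
    {
    	struct soc_hda *chip = bus->private_data;

    	return soc_hda_send_verb(chip, cmd);
    }

    /* Bus ops: fetch the response for the given codec address. */
    static unsigned int soc_hda_get_response(struct hda_bus *bus,
    					 unsigned int addr)
    {
    	struct soc_hda *chip = bus->private_data;

    	return soc_hda_read_response(chip, addr);
    }

    static int soc_hda_create_bus(struct snd_card *card, struct soc_hda *chip)
    {
    	struct hda_bus_template temp = {
    		.private_data = chip,
    		.ops = {
    			.command      = soc_hda_command,
    			.get_response = soc_hda_get_response,
    		},
    	};
    	int err;

    	err = snd_hda_bus_new(card, &temp, &chip->bus);
    	if (err < 0)
    		return err;

    	/* Then create codecs, e.g. snd_hda_codec_new() per address. */
    	return 0;
    }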
> We've only recently had HDMI support upstream on Tegra; otherwise I might have found time to try upstreaming those patches before now. Not that I have spare time :-)
Heh, famous last words ;)
thanks,
Takashi
participants (3):
- Dylan Reid
- Stephen Warren
- Takashi Iwai