[alsa-devel] OSS emulation and hardware configuration
Hi folks,
Apologies if this is obvious, but how does the OSS emulation layer set hardware parameters at the codec level?
We are working in an embedded environment, using a custom codec driver. When we access it via the ALSA API, it works fine. We have a bf5xx_wm8990_hw_params() function which configures the critical hardware parameters of the codec, and I have diagnostic code in the driver that prints to the kernel log when the driver changes key codec parameters (such as bit rate, timing, etc.).
When my OSS application opens its device and changes the audio rate, I don't see that function getting called. In core/oss/pcm_oss.c, I see the snd_pcm_oss_set_rate() function getting called, but I don't understand how it changes parameters in the codec. My bf5xx_wm8990_hw_params() never gets called.
Can somebody please point me in the right direction? I have two applications that need to access the audio driver, at different bit rates, and I have to make sure the rates are set correctly.
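For context, this is roughly how an OSS client negotiates a rate; the helper names are illustrative sketches, not taken from my application. The key detail is that SNDCTL_DSP_SPEED writes the rate the driver actually set back into its argument, so the caller has to check it:

```c
/* Sketch of OSS rate negotiation (illustrative, not production code). */
#include <sys/ioctl.h>
#if defined(__has_include)
# if __has_include(<sys/soundcard.h>)
#  include <sys/soundcard.h>
# endif
#endif
#ifndef SNDCTL_DSP_SPEED
# define SNDCTL_DSP_SPEED _IOWR('P', 2, int)  /* standard OSS ioctl number */
#endif

/* OSS may round the requested rate; applications must check the value
 * written back and decide whether the result is close enough. */
static int rate_acceptable(int requested, int actual)
{
    int diff = requested > actual ? requested - actual : actual - requested;
    return diff * 20 <= requested;   /* within 5% of the request */
}

/* Returns the rate actually programmed by the driver, or -1 on error
 * or if the driver rounded too far from the request. */
static int set_oss_rate(int fd, int rate)
{
    int r = rate;
    if (ioctl(fd, SNDCTL_DSP_SPEED, &r) < 0)
        return -1;
    return rate_acceptable(rate, r) ? r : -1;
}
```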
Mike
Mike Crowe wrote:
Apologies if this is obvious, but how does the OSS emulation layer set hardware parameters at the codec level?
Just like any other client, by calling the driver's hw_params function.
When my OSS application opens its device and changes the audio rate, I don't see that function getting called.
Does the changing of the rate actually succeed?
Regards, Clemens
Does the changing of the rate actually succeed?
Surprisingly, that's hard to answer. Normally, most of our audio is at 48K (we control the sound files), so the device stays at that rate. However, a new library we added (for voip communications) is hard-coded to 8K sample rate via OSS, and I'm now trying to work out dual usage by our ALSA code and this library's OSS interface.
Before I added these kernel printouts, I was convinced the system was switching rates properly (because I added 8K rate support to the hardware driver to get the voip library working). However, my current debug load shows the rate changing in the pcm_oss module, but I never see that rate hit the hardware the way I expected.
If it isn't hitting the hardware, I suppose it could be up/down converted by ALSA, but I don't have any plugins installed. I thought I needed a plugin for ALSA to convert between the two rates, right?
Thanks! Mike
Hi,
On Wed, Sep 14, 2011 at 11:20 AM, Mike Crowe drmikecrowe@gmail.com wrote:
Does the changing of the rate actually succeed?
Surprisingly, that's hard to answer. Normally, most of our audio is at 48K (we control the sound files), so the device stays at that rate. However, a new library we added (for voip communications) is hard-coded to 8K sample rate via OSS, and I'm now trying to work out dual usage by our ALSA code and this library's OSS interface.
Before I added these kernel printouts, I was convinced the system was switching rates properly (because I added 8K rate support to the hardware driver to get the voip library working). However, my current debug load shows the rate changing in the pcm_oss module, but I never see that rate hit the hardware the way I expected.
If it isn't hitting the hardware, I suppose it could be up/down converted by ALSA, but I don't have any plugins installed. I thought I needed a plugin for ALSA to convert between the two rates, right?
Right, but if you're using the in-kernel OSS emulation, it bypasses the userspace ALSA-lib plug layer! You'll have to resample it in userspace within the OSS application, probably.
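If you do resample in the OSS application itself, note that 8 kHz to 48 kHz is a fixed 1:6 ratio. A minimal linear-interpolation upsampler might look like this; it's far below a proper polyphase filter in quality, and is only a sketch of the shape of the work:

```c
#include <stddef.h>

/* Upsample by an integer factor using linear interpolation between
 * neighbouring input samples. Returns the number of output samples
 * written; out must hold (in_len - 1) * factor + 1 samples. */
static size_t upsample_linear(const short *in, size_t in_len,
                              short *out, int factor)
{
    if (in_len == 0 || factor < 1)
        return 0;
    size_t n = 0;
    for (size_t i = 0; i + 1 < in_len; i++) {
        for (int k = 0; k < factor; k++) {
            /* blend in[i] and in[i+1] in proportion k/factor */
            long v = (long)in[i] * (factor - k) + (long)in[i + 1] * k;
            out[n++] = (short)(v / factor);
        }
    }
    out[n++] = in[in_len - 1];   /* carry the final input sample through */
    return n;
}
```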
Or, you can use osspd (sometimes referred to as ossp or oss-proxy): http://sourceforge.net/projects/osspd/
osspd has the advantage that most of the ugly stuff is done in userspace, and you also get to take advantage of any software mixing or resampling you may have, whether it's at the alsa-lib plug layer, or a pulseaudio daemon. ossp can re-route audio from a "real" /dev/dsp character device to either alsa-lib or pulseaudio in userspace. Pretty amazing, huh? :)
I'm not an embedded developer, but from my point of view, I don't see how anyone should want to use the in-kernel OSS emulation as it stands in any production applications, because (1) it hogs the sound device completely, preventing any other apps from interfacing with the hardware simultaneously; (2) you can't apply any user-level transformations or remixing of the audio, via e.g. alsa-lib plugins, nor can you control the audio using the use-case framework (UCM) being developed for pulseaudio; (3) the user (and corresponding developer) base for the in-kernel OSS driver is somewhat low, relative to using ALSA in userland, and emulating OSS in userland as well.
Depending on just *how* embedded your device is (is it a high-end tablet/smartphone or a tiny industrial control, etc) you may or may not like osspd as a solution, because it involves some extra context switches and overhead, which does not occur if you use e.g. a native OSS audio stack or the OSS in-kernel driver.
I just find myself personally unable to use a system without software mixing, so unless you're willing to permanently live without that feature for the lifetime of your platform architecture, you may want to look at osspd. I can certainly see a case where a simple embedded device might not need mixing though, so you may have a valid use case for snd_pcm_oss after all (and I don't have to like it, hehe).
HTH,
Sean
Thanks! Mike
Sean,
Right, but if you're using the in-kernel OSS emulation, it bypasses the userspace ALSA-lib plug layer! You'll have to resample it in userspace within the OSS application, probably.
Or, you can use osspd (sometimes referred to as ossp or oss-proxy): http://sourceforge.net/projects/osspd/
I had started to look at aoss. Any thoughts on comparing the two? Memory is tight, so I'm looking for the smallest memory footprint possible.
Otherwise, thanks for all your comments. Very helpful. The next task on my list is tackling mixing, and it sounds like removing OSS emulation and moving to one of the above is the better solution.
Thanks! Mike
Hi,
On Wed, Sep 14, 2011 at 12:20 PM, Mike Crowe drmikecrowe@gmail.com wrote:
Sean,
Right, but if you're using the in-kernel OSS emulation, it bypasses the userspace ALSA-lib plug layer! You'll have to resample it in userspace within the OSS application, probably.
Or, you can use osspd (sometimes referred to as ossp or oss-proxy): http://sourceforge.net/projects/osspd/
I had started to look at aoss. Any thoughts on comparing the two? Memory is tight, so I'm looking for the smallest memory footprint possible.
aoss is a terrible hack. From what I understand (after glancing at the code years ago), it seems that it intercepts libc system calls to the standard functions, and overrides them with an LD_PRELOADed library, so that when you call open() or write() or ioctl() on a device, it checks the device path for string equality with "/dev/dsp" or similar and reacts accordingly (e.g. by translating your written data buffer into an alsa-lib call).
Downside 1 is that binaries that don't allow themselves to be LD_PRELOADed (I think that applies to binaries that are setuid root) can't use aoss at all. Downside 2 is that it's an ugly hack, and it doesn't work with a lot of programs, even those that allow LD_PRELOAD.
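To picture the interception: a toy sketch of the routing decision an LD_PRELOADed open() wrapper makes. The path list here is illustrative, and the real aoss additionally forwards non-audio paths to the original libc open() via dlsym(RTLD_NEXT, ...), which is omitted here:

```c
#include <string.h>

enum route { ROUTE_PASSTHROUGH, ROUTE_EMULATE };

/* Decide whether an intercepted open() should be redirected to the
 * alsa-lib emulation or passed through to the real libc open().
 * The matching rules are a simplification of what aoss does. */
static enum route route_open(const char *path)
{
    static const char *audio_devs[] = { "/dev/dsp", "/dev/mixer", "/dev/audio" };
    for (size_t i = 0; i < sizeof(audio_devs) / sizeof(audio_devs[0]); i++) {
        size_t len = strlen(audio_devs[i]);
        /* match "/dev/dsp" as well as numbered variants like "/dev/dsp1" */
        if (strncmp(path, audio_devs[i], len) == 0)
            return ROUTE_EMULATE;
    }
    return ROUTE_PASSTHROUGH;
}
```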
That is to say, for any given OSS program, the probability of it working with aoss is somewhere around 35% (based on my having previously tested this in ~2008-9 with about a dozen OSS apps, each from distinct proprietary and free software development teams). The probability of it working *properly*, with low latency and playing nicely with your alsa-lib plug layer (dmix, resampling, etc) is somewhere lower around 20%.
If you have the source and ability to change the OSS program(s) you need to run, and have direct control over their release cycle, you may be able to code them up so that they work properly with aoss... but it's a pretty restrictive environment to be doing sound I/O. If I heard of a production device on the market that was shipped to rely on aoss (or the in-kernel OSS emulation, for that matter), my face would be stuck to my palm for several weeks. ;-)
I don't think anything out there is quite like osspd, both in terms of robustness and compatibility with existing OSS applications. If you can handle the added overhead of user -> kernel -> user, you should definitely use it.
Here's the path laid out in full, so you can decide for yourself. I'll walk you through the layers the audio passes through in the simple playback-only case (no capture) with osspd.
1. An OSS application uses the OSS "API" to syscall open("/dev/dsp", ...) which is a real character device, owned by the CUSE kernel module (more info: http://lwn.net/Articles/308445/ )
2. The open() call goes through the native libc library (no userspace hacks like aoss, so it works with setuid binaries), which ultimately results in a syscall, and a context switch to the kernel.
3. The kernel routes the call to CUSE, which owns the character device, and receives the notification that someone in userspace is trying to open() its device.
4. CUSE reacts to the open() call mostly as you'd expect: it sets up some state, allocates some memory, and reaches out to userspace to tickle the *userspace* osspd daemon (running as a child of pid 1, usually) to let it know that it has a client.
5. The userspace daemon establishes a connection with its "backend", whatever that might be. Right now osspd has two supported backends: alsa-lib (regular old userspace ALSA, which goes through the plug layer via libasound2), and the native pulseaudio protocol via libpulse. The backend itself is a separate process in userspace.
6. The backend itself does whatever's necessary to get at the hardware. Understanding the functionality of alsa-lib or pulseaudio is a separate discussion, but once the backend is created, it is just an ordinary alsa-lib or pulseaudio client.
7. This whole process repeats itself when you call write() or ioctl() on the /dev/dsp device, but depending on what exactly you're doing (compliant with the OSS API, of course!), your call follows this path again, doing the right thing at each layer. So a write() call would pass a data buffer to the kernel through a syscall, then back to userspace to osspd, then from osspd to its backend, then from the backend it'd go back into the kernel eventually for playback against your driver.
You may be thinking that this is an ugly process, but it's not much of a CPU hog even on a dual core laptop with 2006 specs. I can imagine that it would have a larger footprint relative to certain small embedded devices though. The best way to know for sure is to test it out :)
Otherwise, thanks for all your comments. Very helpful. The next task on my list is tackling mixing, and it sounds like removing OSS emulation and moving to one of the above is the better solution.
Thanks! Mike
At Wed, 14 Sep 2011 12:08:57 -0400, Sean McNamara wrote:
Hi,
On Wed, Sep 14, 2011 at 11:20 AM, Mike Crowe drmikecrowe@gmail.com wrote:
Does the changing of the rate actually succeed?
Surprisingly, that's hard to answer. Normally, most of our audio is at 48K (we control the sound files), so the device stays at that rate. However, a new library we added (for voip communications) is hard-coded to 8K sample rate via OSS, and I'm now trying to work out dual usage by our ALSA code and this library's OSS interface.
Before I added these kernel printouts, I was convinced the system was switching rates properly (because I added 8K rate support to the hardware driver to get the voip library working). However, my current debug load shows the rate changing in the pcm_oss module, but I never see that rate hit the hardware the way I expected.
If it isn't hitting the hardware, I suppose it could be up/down converted by ALSA, but I don't have any plugins installed. I thought I needed a plugin for ALSA to convert between the two rates, right?
Right, but if you're using the in-kernel OSS emulation, it bypasses the userspace ALSA-lib plug layer! You'll have to resample it in userspace within the OSS application, probably.
Kernel OSS emulation has a rate converter, too, when CONFIG_SND_PCM_OSS_PLUGINS is enabled. But no soft-mixing or other complex stuff.
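To check whether a given kernel was built with that option, something like the following works; the config file location varies by system, and /proc/config.gz is only present when CONFIG_IKCONFIG_PROC is enabled:

```shell
# Look for the OSS-emulation rate-converter option in the kernel config.
zgrep CONFIG_SND_PCM_OSS_PLUGINS /proc/config.gz 2>/dev/null \
  || grep CONFIG_SND_PCM_OSS_PLUGINS "/boot/config-$(uname -r)" 2>/dev/null
```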
Takashi
On Wed, Sep 14, 2011 at 09:37:59AM -0400, Mike Crowe wrote:
Apologies if this is obvious, but how does the OSS emulation layer set hardware parameters at the codec level?
It calls hw_params() repeatedly.
We are working in an embedded environment, and using a custom codec driver. When we access it via the ALSA API, it works fine. We have a bf5xx_wm8990_hw_params() function with configures the critical
Hrm. The WM8990 has been supported in mainline for a considerable time; unless you're using an exceptionally old kernel, you really should use the mainline driver.
When my OSS application opens its device and changes the audio rate, I don't see that function getting called. In core/oss/pcm_oss.c, I see the snd_pcm_oss_set_rate() function getting called, but I don't understand how it changes parameters in the codec. My bf5xx_wm8990_hw_params() never gets called.
It should go through the ASoC core; it's possible that you've not hooked things up properly there.
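For what it's worth, the kind of work a machine driver's hw_params() typically does is derive clock ratios from the requested rate, so if it is never called for 8 kHz the codec clocking is never reprogrammed. A standalone sketch of that arithmetic, where the 12.288 MHz sysclk is an assumed example and not taken from the poster's board:

```c
/* Given a fixed codec system clock, compute the sysclk/LRCLK (fs)
 * ratio a machine driver's hw_params() would program into the codec.
 * Returns -1 if the rate does not divide the clock evenly, i.e. the
 * rate cannot be derived from this sysclk. Illustrative only. */
static int fs_ratio(long sysclk_hz, int rate_hz)
{
    if (rate_hz <= 0 || sysclk_hz % rate_hz != 0)
        return -1;
    return (int)(sysclk_hz / rate_hz);
}
```

For a 12.288 MHz sysclk this gives 256fs at 48 kHz but 1536fs at 8 kHz: two very different divider settings, which is why the hardware cannot silently follow an 8 kHz stream without hw_params() running again.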
Mark,
On Wed, Sep 14, 2011 at 12:13 PM, Mark Brown broonie@opensource.wolfsonmicro.com wrote:
It calls hw_params() repeatedly.
I have debug statements in core/oss/pcm_oss.c and in soc/codecs/wm8990.c. When I use ALSA to play a sound, I see the wm8990 debug statements. However, when the OSS application changes the rate in snd_pcm_oss_set_rate(), I never see the wm8990 debug statements for the different rate.
Hrm. The WM8990 has been supported in mainline for a considerable time; unless you're using an exceptionally old kernel, you really should use the mainline driver.
This is using the wm8990 driver, but with a custom hardware interface (so we did have to write custom I2S drivers to get to the codec).
It should go through the ASoC core; it's possible that you've not hooked things up properly there.
Very possible. Based on Sean's reply, I may want to disable OSS emulation and look at a userspace solution.
Mike
On Wed, Sep 14, 2011 at 12:27:05PM -0400, Mike Crowe wrote:
On Wed, Sep 14, 2011 at 12:13 PM, Mark Brown
It should go through the ASoC core; it's possible that you've not hooked things up properly there.
Very possible. Based on Sean's reply, I may want to disable OSS emulation and look at a userspace solution.
Yes, in general OSS emulation is a very bad move and less and less supported by the kernel.
participants (5)
- Clemens Ladisch
- Mark Brown
- Mike Crowe
- Sean McNamara
- Takashi Iwai