[alsa-devel] Writing a new sound driver
I am the developer of the work-in-progress MyHD/TL880 Linux driver. The TL880 chip can act as a sound input and output device, but it doesn't seem to support DMA for sound. Is it possible to write a sound driver for ALSA that uses a polling mechanism? Also, how would I integrate this driver with my core driver for accessing the TL880? The chip's features span four separate kernel subsystems: v4l, dvb, sound, and fb.
I guess my main question is this: what is the bare minimum of code and hardware functionality required to write a sound driver?
Thanks, Mike Bourgeous http://myhd.sourceforge.net/
Hi Mike,
On Sunday 12 August 2007 10:46:07 Michael Bourgeous wrote:
I am the developer of the work-in-progress MyHD/TL880 Linux driver. The TL880 chip can act as a sound input and output device, but it doesn't seem to support DMA for sound. Is it possible to write a sound driver for ALSA that uses a polling mechanism?
Yes.
Read about the copy and silence callbacks here: http://www-old.alsa-project.org/~iwai/writing-an-alsa-driver/x1409.htm
Get the alsa-driver source code and grep -r ".copy" to find drivers that use this method. E.g. RME, asihpi
Does it support interrupts? If so see http://www-old.alsa-project.org/~iwai/writing-an-alsa-driver/x773.htm
Otherwise you need to use a timer to poll the transfer status and call snd_pcm_period_elapsed() appropriately, similar to the high-frequency timer interrupts section of the above reference.
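If the hardware really has no usable audio interrupt, the poll can be driven by a kernel timer. A rough sketch against a current kernel API follows; tl880_audio, tl880_read_pointer, and the bit of setup implied (timer_setup() at open time) are placeholders and assumptions, not real driver symbols:

```c
#include <linux/timer.h>
#include <linux/jiffies.h>
#include <sound/pcm.h>

struct tl880_audio {
	struct snd_pcm_substream *substream;
	struct timer_list timer;        /* set up with timer_setup() */
	unsigned int last_period;       /* period index seen at the last poll */
};

static void tl880_poll_timer(struct timer_list *t)
{
	struct tl880_audio *chip = from_timer(chip, t, timer);
	/* tl880_read_pointer() is hypothetical: it would read the
	 * hardware playback position in frames from a chip register. */
	unsigned int period = tl880_read_pointer(chip) /
			      chip->substream->runtime->period_size;

	/* Tell ALSA whenever the hardware has crossed a period boundary */
	if (period != chip->last_period) {
		chip->last_period = period;
		snd_pcm_period_elapsed(chip->substream);
	}
	/* Re-arm so that we poll several times per period */
	mod_timer(&chip->timer, jiffies + msecs_to_jiffies(2));
}
```

The poll interval has to be comfortably shorter than one period, otherwise snd_pcm_period_elapsed() calls arrive late and applications underrun.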
Also, how would I integrate this driver with my core driver for accessing the TL880? The chip's features span four separate kernel subsystems: v4l, dvb, sound, and fb.
What are the dependencies, if any, between these functions on the chip?
I guess my main question is this: what is the bare minimum of code and hardware functionality required to write a sound driver?
The "dummy" driver provides a starting point to which you can add your own functionality.
For ALSA driver code, implementing mixer controls is almost completely independent of audio I/O (apart from the very top level, where they belong to the same card object), so you can leave the mixer out to start with, unless you need to set some controls to get audio out!
-- Eliot
Programming ALSA is confusing, to the point that there is basically no documentation. Since I get more and more confused, I hope someone can help me out here.
First - I didn't find any complete description of interleaved and non-interleaved transport. What, in a nutshell, is the difference, and what are the consequences of either approach?
Second - when capturing I do the following:
snd_pcm_hw_params_set_format: So this is setting some kind of format!? I mean first off - which format should one set? There are a few dozens to choose from and the hardware will most likely not support all of them. So how does one retrieve a list of the hardware supported formats? And which one should be considered superior? I assume it does affect the buffer size eventually needed!?
snd_pcm_hw_params_get_channels_max: This one should IMO output the channels - but for a capture device (where often there is just a single stereo input) this would normally just return 2, right?
snd_pcm_hw_params_set_channels: This is pretty self explanatory - aside from the effect this one has. Eventually some data will be delivered via a buffer. If there were 6 channels (Dolby stuff), would a filled buffer look like the following?
Channel1 Channel2 Channel3 Channel4 Channel5 Channel6 Channel1 Channel2 . . .
snd_pcm_hw_params_set_rate: This one is also pretty self explanatory - aside from pure digital devices - which I'm lucky not to have.
snd_pcm_hw_params_set_periods: And we are at the REAL confusing stuff. Here one can set "periods". But what is meant by a period? Is it a frame, or what would sane people call it? And there's a count of periods set - what is this count used for?
snd_pcm_hw_params_get_period_size: OK, from the previously set periods I can get the size. I assume this is somewhat useful for estimating the buffer size, but since I don't know exactly what periods are or do, I can't tell.
snd_pcm_hw_params_set_buffer_size: This one sets the buffer size. I'm kind of confused here because you have to set the periods AND the buffer size. So are the periods just the slices of the buffer? So I think the buffer can be calculated as follows!?:
<periods> * <periodSize> * <formatTypeWTF> * <channels> So is the buffer size the actual byte size, a frame size, the period size, or whatever?
snd_pcm_readi: Since the buffer is a void* I assume that the format does affect the data needed for the buffer. But having a buffer in userspace and a buffer set in ALSA means it copies at least twice. I fear that for direct copying one has to get in touch with this mmap method nobody wants to talk about :(
If you read to this point thank you ;)
LCID Fire wrote:
First - I didn't find any complete description about interleaved and non-interleaved transport. What is actually (in a nutshell) the difference and what the consequences of either approach?
When using interleaved access, the sample values that are to be sent at the same time to each channel are stored right after one another in the buffer, i.e., like this:
Channel1 Channel2 Channel3 Channel4 Channel5 Channel6 Channel1 Channel2 . . .
In ALSA, the set of samples for all channels is called a frame. In this example, a frame would consist of six samples.
When using non-interleaved access, you have effectively one buffer for each channel, i.e., the entire buffer would look like this:
c1 c1 c1 ..... c2 c2 c2 ..... c3 c3 c3 .......
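The difference is just where a given sample ends up. A plain-C illustration of the two layouts (the channel and frame counts are arbitrary):

```c
#include <stddef.h>

/* Position of the sample for `frame` on `channel`, counted in samples
 * from the start of the buffer, for interleaved access: frames follow
 * one another, each holding one sample per channel. */
static size_t interleaved_index(size_t frame, size_t channel,
                                size_t channels)
{
    return frame * channels + channel;
}

/* Same position for non-interleaved access: one contiguous block of
 * `frames_per_channel` samples per channel, blocks back to back. */
static size_t noninterleaved_index(size_t frame, size_t channel,
                                   size_t frames_per_channel)
{
    return channel * frames_per_channel + frame;
}
```

With 6 channels and 8 frames, frame 1 / channel 2 (0-based) lands at sample 8 interleaved, but at sample 17 non-interleaved.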
snd_pcm_hw_params_set_format: So this is setting some kind of format!? I mean first off - which format should one set? There are a few dozens to choose from and the hardware will most likely not support all of them.
When using the "default" device that can automatically convert sample formats, it will support all (well, most) of them.
So how does one retrieve a list of the hardware supported formats?
snd_pcm_hw_params_test_format() or snd_pcm_hw_params_get_format_mask()
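For example, with alsa-lib one might test every format in turn and print the supported ones. A sketch, assuming pcm is an opened snd_pcm_t and params has been filled with snd_pcm_hw_params_any():

```c
#include <stdio.h>
#include <alsa/asoundlib.h>

static void list_formats(snd_pcm_t *pcm, snd_pcm_hw_params_t *params)
{
    int f;

    /* Try each format against the device's constraints; 0 means
     * the hardware (or plugin chain) can deliver it. */
    for (f = 0; f <= SND_PCM_FORMAT_LAST; f++)
        if (snd_pcm_hw_params_test_format(pcm, params,
                                          (snd_pcm_format_t)f) == 0)
            printf("supported: %s\n",
                   snd_pcm_format_name((snd_pcm_format_t)f));
}
```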
And which one should be considered superior?
Whatever is easiest to handle in your program.
I assume it does affect the buffer size eventually needed!?
Yes, if you measure the buffer size in bytes.
snd_pcm_hw_params_get_channels_max: This one should IMO output the channels - but for a capture device (where often there is just a single stereo input) this would normally just return 2, right?
The "default" device can give you as many channels as you want.
snd_pcm_hw_params_set_periods: And we are at the REAL confusing stuff. Here one can set "periods". But what is meant by a period?
It's the part of the buffer that gets transferred between two interrupts. The OSS API calls these "fragments".
And there's a count of periods set - what is this count used for?
It's a different method of setting the period size. The three parameters are related like this:
periods = buffer_size / period_size
The buffer and period sizes are measured in frames.
You could also set the buffer and period _length_, which are the same values, but measured in microseconds.
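In plain C, the relationship between the three values, and the byte size implied by format and channel count, looks like this (the numbers below are arbitrary examples):

```c
/* periods = buffer_size / period_size, with both sizes in frames. */
static unsigned periods_in(unsigned buffer_frames, unsigned period_frames)
{
    return buffer_frames / period_frames;
}

/* The byte size of the buffer additionally depends on the channel
 * count and the sample format's width (2 bytes for S16_LE). */
static unsigned buffer_bytes(unsigned buffer_frames, unsigned channels,
                             unsigned bytes_per_sample)
{
    return buffer_frames * channels * bytes_per_sample;
}
```

So a 4096-frame buffer with 1024-frame periods gives 4 periods, and in 16-bit stereo it occupies 16384 bytes.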
snd_pcm_readi: Since the buffer is a void* I assume that the format does affect the data needed for the buffer.
Yes.
But having a buffer in userspace and a buffer set in alsa means it does copy at least twice.
ALSA's buffer is the hardware DMA buffer; there is only one copy involved.
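With the read method, a capture step looks roughly like this (a sketch only, with error handling trimmed; the buffer sizing assumes interleaved S16 samples and at most 8 channels):

```c
#include <alsa/asoundlib.h>

#define FRAMES 1024

static void capture_some(snd_pcm_t *pcm)
{
    /* For interleaved S16_LE: FRAMES frames = FRAMES * channels shorts;
     * ALSA copies from its (DMA) buffer straight into buf. */
    short buf[FRAMES * 8];
    snd_pcm_sframes_t n;

    n = snd_pcm_readi(pcm, buf, FRAMES);   /* frames actually read */
    if (n == -EPIPE)
        snd_pcm_prepare(pcm);              /* recover from an overrun */
}
```

Only with mmap access is even that single copy under the application's control, but for most programs the readi copy is not worth avoiding.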
HTH Clemens
It looks like a good idea to have these as a real devel-FAQ on Wiki...
Takashi
On 8/11/07, Eliot Blennerhassett linux@audioscience.com wrote:
Hi Mike,
On Sunday 12 August 2007 10:46:07 Michael Bourgeous wrote:
doesn't seem to support DMA for sound. Is it possible to write a sound driver for ALSA that uses a polling mechanism?
Does it support interrupts? If so see http://www-old.alsa-project.org/~iwai/writing-an-alsa-driver/x773.htm
Yes, it supports interrupts. However, there is one caveat, which I'll describe below. I believe the Windows driver tells the card to trigger an interrupt when playback reaches the start or the middle of the audio buffer.
Also, how would I integrate this driver with my core driver for accessing the TL880? The chip's features span four separate kernel subsystems: v4l, dvb, sound, and fb.
What are the dependencies, if any, between these functions on the chip?
The caveat I mentioned above is that all interrupts from the card are stored in a single register. If this register is read, its contents are cleared. So, only one kernel module can be allowed to handle interrupts. Additionally, the card has a very large register space, and I'd prefer to have a single interface that helps to abstract that register space. My eventual goal (which may be a very long time away) is to have a core module that just handles PCI interaction, interrupts, register reads/writes, etc., with another module for each of video capture, MPEG capture and playback, framebuffer, and audio.
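That design can be sketched in plain C: the core reads the read-to-clear status register exactly once per interrupt and fans the pending bits out to registered sub-drivers. The bit assignments and names below are invented for illustration, not the real TL880 register layout:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical interrupt-status bits, one per function block. */
#define IRQ_AUDIO (1u << 0)
#define IRQ_VIDEO (1u << 1)
#define IRQ_MPEG  (1u << 2)

struct irq_handlers {
    void (*audio)(void);
    void (*video)(void);
    void (*mpeg)(void);
};

/* The core driver reads the status register once (reading clears it)
 * and dispatches each pending bit; sub-drivers never touch the
 * register themselves, so no interrupts are lost. */
static void core_dispatch(uint32_t status, const struct irq_handlers *h)
{
    if ((status & IRQ_AUDIO) && h->audio) h->audio();
    if ((status & IRQ_VIDEO) && h->video) h->video();
    if ((status & IRQ_MPEG)  && h->mpeg)  h->mpeg();
}

/* A trivial handler used to demonstrate the dispatch. */
static int audio_irqs;
static void on_audio_irq(void) { audio_irqs++; }
```

The real core module would do this inside its PCI interrupt handler, passing the value it read from the hardware register as `status`.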
Thanks for the info, Mike Bourgeous
participants (5)
- Clemens Ladisch
- Eliot Blennerhassett
- LCID Fire
- Michael Bourgeous
- Takashi Iwai