[alsa-devel] Using dmix/dsnoop/dshare to access individual channels
I am using a 4-input, 8-output codec (AD1938) in an application similar to a live sound mixing board, where a combination of the input signals is mixed for each output. I can mix four inputs to four outputs with a command like:
arecord --file-type raw --channels=4 --format=S32_LE --rate=16000 \
  | crosspoint \
  | aplay --file-type raw --channels=4 --format=S32_LE --rate=16000
where crosspoint is a simple program that reads from stdin, does some mixing, and outputs to stdout.
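For readers following along, such a crosspoint can be a very small C filter. The sketch below is only an illustration of the idea, not the poster's actual program: the 4x4 gain matrix, block size, and clamping are made-up examples. It reads interleaved 4-channel S32_LE frames from stdin (as produced by the arecord command above), applies the matrix, and writes the result to stdout. On a CPU without an FPU, fixed-point gains would likely be preferable to floats.

/* crosspoint.c - minimal sketch of a stdin->stdout channel mixer.
 * Assumes interleaved 4-channel S32_LE audio; the gain matrix is
 * just an example, not the poster's real routing.
 */
#include <stdint.h>
#include <stdio.h>

#define CHANNELS 4
#define FRAMES_PER_BLOCK 256

/* gain[out][in]: how much of each input channel feeds each output */
static const float gain[CHANNELS][CHANNELS] = {
    { 1.0f, 0.0f, 0.0f, 0.0f },
    { 0.0f, 1.0f, 0.0f, 0.0f },
    { 0.0f, 0.0f, 1.0f, 0.0f },
    { 0.5f, 0.5f, 0.0f, 0.0f },   /* example: output 4 = mix of inputs 1+2 */
};

int main(void)
{
    int32_t in[FRAMES_PER_BLOCK * CHANNELS];
    int32_t out[FRAMES_PER_BLOCK * CHANNELS];
    size_t frames;

    /* Read a block of interleaved frames, mix, write it back out. */
    while ((frames = fread(in, sizeof(int32_t) * CHANNELS,
                           FRAMES_PER_BLOCK, stdin)) > 0) {
        for (size_t f = 0; f < frames; f++) {
            for (int o = 0; o < CHANNELS; o++) {
                double acc = 0.0;
                for (int i = 0; i < CHANNELS; i++)
                    acc += gain[o][i] * in[f * CHANNELS + i];
                /* clamp to the 32-bit sample range */
                if (acc > 2147483647.0)  acc = 2147483647.0;
                if (acc < -2147483648.0) acc = -2147483648.0;
                out[f * CHANNELS + o] = (int32_t)acc;
            }
        }
        if (fwrite(out, sizeof(int32_t) * CHANNELS, frames, stdout) != frames)
            return 1;
    }
    return 0;
}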
My next step is to play and record wave files from the various channels, while the crosspoint is running. For example, I might need to play message1.wav to channel 2 and message2.wav to channel 3 while recording channel 1 to message3.wav. I am looking for advice about the best way to accomplish that.
It seems like it should be possible using a combination of dmix, dsnoop, and dshare, as follows:
- Use dsnoop to split the 4-channel input between the crosspoint and another virtual sound input, call it "A".
- Use dshare to split "A" into four mono sources. Then I can use arecord normally to capture wave files.
- Use dmix and possibly dshare to similarly let the crosspoint continue to use the multi-channel output while also providing a mono sound device for each channel that I can use with aplay.
Am I on the right track, or would I be better off adding functionality to the crosspoint application to read and write wave files? Thanks for any suggestions.
Steve
---
Steve Strobel
Link Communications, Inc.
1035 Cerise Rd
Billings, MT 59101-7378
(406) 245-5002 ext 102
(406) 245-4889 (fax)
WWW: http://www.link-comm.com
MailTo:steve.strobel@link-comm.com
At Wed, 12 Dec 2007 16:04:37 -0700, Steve Strobel wrote:
I am using a 4-input, 8-output codec (AD1938) in an application similar to a live sound mixing board ...snip...
It seems like it should be possible using a combination of dmix, dsnoop, and dshare ...snip...
Am I on the right track, or would I be better off adding functionality to the crosspoint application to read and write wave files? Thanks for any suggestions.
Unfortunately this won't work: the d* plugins can only have a hw-type PCM as their slave, so dshare cannot have dsnoop as its slave.
I recommend simply using JACK for such a purpose; it's designed exactly for this kind of work.
Takashi
Steve Strobel wrote:
I am using a 4-input, 8-output codec (AD1938) in an application similar to a live sound mixing ...snip...
Am I on the right track, or would I be better off adding functionality to the crosspoint application to read and write wave files? Thanks for any suggestions.
Takashi Iwai <tiwai@suse.de> wrote:
Unfortunately this won't work: the d* plugins can only have a hw-type PCM as their slave, so dshare cannot have dsnoop as its slave.
I recommend simply using JACK for such a purpose; it's designed exactly for this kind of work.
Thanks for the reply. I think I understand at least the basics of how JACK works, and I can see how it would work well in general. Unfortunately, I am working on an embedded Blackfin system running uClinux, and I don't find any evidence that JACK has been ported to that platform. Also, all of the (simple command-line) utilities that I hoped to use, like aplay/arecord, mp3play, etc., are set up for ALSA; I suppose there might be JACK equivalents or a way of using an adapter of some sort, but that sounds rather involved and defeats at least some of the advantages of native JACK apps.
Would it be a reasonable design to make small executables to do jobs similar to dmix/dsnoop/dshare that do their I/O on named pipes (fifos), then run aplay/arecord... on those pipes?
Steve
On Dec 19, 2007 4:49 PM, Steve Strobel <steve.strobel@link-comm.com> wrote:
Thanks for the reply. ...snip... I am working on an embedded Blackfin system running uClinux and I don't find any evidence that JACK has been ported to that platform ...snip...
Would it be a reasonable design to make small executables to do jobs similar to dmix/dsnoop/dshare that do their I/O on named pipes (fifos), then run aplay/arecord... on those pipes?
Once you do this you'll have re-implemented 90% of JACK.
JACK should work on your platform - it uses the same POSIX APIs (shared memory and FIFOs) as ALSA to mix and route audio. Actually dmix/dsnoop/dshare have more requirements (SysV IPC). You should at least try it.
There are JACKified equivalents to all the apps you mention. aplay and arecord aren't designed for everyday use anyway; they're really just for demonstrating ALSA features and testing drivers.
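To give a rough idea of what Lee is suggesting, here is a minimal sketch of the crosspoint written as a JACK client. The client name, port names, and the trivial copy-through processing are placeholders (not an actual port of the poster's program); the real mixing matrix would go inside the process callback, and players/recorders would simply be connected to these ports instead of piping through stdin/stdout.

/* jack_crosspoint.c - sketch of a JACK-client version of the crosspoint.
 * Names and the pass-through processing are placeholders.
 * Typical build: gcc jack_crosspoint.c -o jack_crosspoint -ljack
 */
#include <jack/jack.h>
#include <stdio.h>
#include <unistd.h>

#define CHANNELS 4

static jack_port_t *in_port[CHANNELS];
static jack_port_t *out_port[CHANNELS];

/* Called by JACK for every block of frames; the mixing goes here. */
static int process(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    for (int c = 0; c < CHANNELS; c++) {
        jack_default_audio_sample_t *in =
            jack_port_get_buffer(in_port[c], nframes);
        jack_default_audio_sample_t *out =
            jack_port_get_buffer(out_port[c], nframes);
        for (jack_nframes_t f = 0; f < nframes; f++)
            out[f] = in[f];   /* placeholder: apply the gain matrix here */
    }
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("crosspoint", JackNullOption, NULL);
    if (!client) {
        fprintf(stderr, "could not connect to the JACK server\n");
        return 1;
    }

    char name[32];
    for (int c = 0; c < CHANNELS; c++) {
        snprintf(name, sizeof(name), "in_%d", c + 1);
        in_port[c] = jack_port_register(client, name, JACK_DEFAULT_AUDIO_TYPE,
                                        JackPortIsInput, 0);
        snprintf(name, sizeof(name), "out_%d", c + 1);
        out_port[c] = jack_port_register(client, name, JACK_DEFAULT_AUDIO_TYPE,
                                         JackPortIsOutput, 0);
    }

    jack_set_process_callback(client, process, NULL);
    jack_activate(client);

    /* The ports would then be wired to the hardware (and to any
     * JACK-aware players/recorders) with jack_connect or a session manager. */
    for (;;)
        sleep(1);
}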
An added bonus is that you'll be using a system that's already been proven in the field, with VERY demanding professional audio apps, on everything from embedded systems to top of the line workstations.
Trust me, I've done Linux audio development for embedded systems, and once you go JACK you'll never go back ;-)
Lee