[alsa-devel] Compressed Audio Playback/Capture through ALSA framework
Hi,
Is there a precedent for playing back/capturing a compressed audio stream through the ALSA playback/capture interface if the underlying hardware supports decoder + sink and encoder + source capabilities?
In which source file can I find an example?
Thanks, Patrick
On 16/03/11 07:31, Patrick Lai wrote:
Hi,
Is there a precedent for playing back/capturing a compressed audio stream through the ALSA playback/capture interface if the underlying hardware supports decoder + sink and encoder + source capabilities?
No. (Apart from AC3 passthrough to spdif output)
ALSA might be extended to cope with compressed audio with fixed frame size and bitrate, but dealing with any VBR encoding is likely to be even more problematic.
This is because ALSA assumes a fixed relationship between sample rate and data rate, and also assumes that samples fit in an integer number of bytes.
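To put rough numbers on that: 48 kHz 16-bit stereo PCM is always 48000 x 2 channels x 2 bytes = 192000 bytes/s, so frames, bytes and time are freely interconvertible. An MPEG-1 Layer III stream decodes to a fixed 1152 samples per frame, but the frame itself occupies 144 * bitrate / samplerate bytes (ignoring padding), anywhere from 96 bytes at 32 kbit/s to 960 bytes at 320 kbit/s for 48 kHz audio, and with VBR that figure changes from frame to frame.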
AFAIK dealing with hardware enc/dec has been done using GStreamer plugins, e.g. http://www.lca2010.org.nz/slides/50315.pdf
Given that AudioScience makes cards that support MP3 encode/decode as well as PCM, I'd like it if ALSA *did* support compressed audio...
regards
Given that AudioScience makes cards that support MP3 encode/decode as well as PCM, I'd like it if ALSA *did* support compressed audio...
You can use the IEC format for MPEG and use an ALSA driver if you want, and can afford the bandwidth waste and payload overhead. We've also been playing with the notion of a new API, similar to ALSA (ring buffer, periods, etc.), but where references to samples/time would be removed (all byte-based). If there's enough interest in the community, maybe we can share this.
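To make the notion concrete, here is a purely hypothetical sketch of the parameters such a byte-based API could carry; none of these names exist in ALSA today, this is illustrative only:

/* Hypothetical byte-based stream descriptors: no sample rate, no
 * sample format, no channel count; a codec id plus byte counts
 * replace them. (Illustrative only, not an existing interface.) */
struct compr_stream_params {
	unsigned int codec_id;       /* MP3, AAC, ... */
	unsigned int buffer_bytes;   /* ring buffer size in bytes, not frames */
	unsigned int fragment_bytes; /* wakeup granularity: a "period" in bytes */
};

struct compr_stream_status {
	unsigned int avail_bytes;    /* writable space left in the ring buffer */
	unsigned int rendered_ms;    /* decoder-reported playback time, since
	                              * bytes no longer map linearly to time */
};

-Pierre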
On 3/15/2011 1:40 PM, Eliot Blennerhassett wrote:
No. (Apart from AC3 passthrough to spdif output)
I presume you are referring to the IEC958 format. Are there ASoC drivers that support the IEC958 format already? Why is AC3 passthrough acceptable but not other compressed audio formats?
Patrick Lai wrote:
Why is AC3 passthrough acceptable
Because it pretends to be a normal 16-bit stereo stream at 48 kHz; the only difference is that the IEC958 non-audio bit is set.
but not other compressed audio formats?
Other formats like DTS or WMA can be transported in the same way.
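For illustration, a minimal sketch of that framing; the burst constants are as defined by IEC 61937 (to the best of my recollection), and payload byte order plus the Pc mode bits are glossed over:

#include <stdint.h>
#include <string.h>

/* Wrap one AC-3 frame into an IEC 61937 data burst. The burst is
 * zero-padded to the AC-3 repetition period of 1536 stereo frames
 * (6144 bytes), so downstream it looks like ordinary 16-bit/2-channel/
 * 48 kHz data with the non-audio bit set. */
static size_t wrap_ac3_burst(const uint8_t *frame, size_t frame_bytes,
                             uint16_t out[3072])
{
	memset(out, 0, 6144);
	out[0] = 0xF872;                       /* Pa: preamble sync word 1 */
	out[1] = 0x4E1F;                       /* Pb: preamble sync word 2 */
	out[2] = 0x0001;                       /* Pc: data type 1 = AC-3 */
	out[3] = (uint16_t)(frame_bytes * 8);  /* Pd: payload length in bits */
	memcpy(&out[4], frame, frame_bytes);   /* real code must byte-swap */
	return 6144;
}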
Regards, Clemens
On Tue, 15 Mar 2011, Patrick Lai wrote:
I presume you are referring to the IEC958 format. Are there ASoC drivers that support the IEC958 format already? Why is AC3 passthrough acceptable but not other compressed audio formats?
I agree. AC3, DTS and also MPEG2 formats should work. The PCM parameters for VBR streams are fixed (at least I don't know of any format that changes the rate/bits/channels inside the stream). So we can use IEC958 (IEC 61937), and for other formats we can create similar extensions.
The only part we could improve in the ALSA layer is better synchronization handling in the case of xruns: because these compressed streams are transferred in blocks, a whole block might have to be dropped.
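On the application side that could look something like this sketch, using the existing snd_pcm_recover(); real code would need more care:

#include <alsa/asoundlib.h>

/* Sketch: write one complete compressed block; on xrun, recover the
 * PCM and skip the remainder of the block, so the receiver re-syncs
 * on the next block boundary instead of seeing a torn one. */
static int write_block(snd_pcm_t *pcm, const int16_t *block,
                       snd_pcm_uframes_t frames)
{
	snd_pcm_sframes_t n = snd_pcm_writei(pcm, block, frames);
	if (n < 0)
		n = snd_pcm_recover(pcm, (int)n, 0); /* handles -EPIPE (xrun) */
	return n < 0 ? (int)n : 0;
}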
Jaroslav
-----
Jaroslav Kysela <perex@perex.cz>
Linux Kernel Sound Maintainer
ALSA Project, Red Hat, Inc.
I presume you are referring to IEC958 format. Are there ASoC drivers that support IEC958 format already?
Where can I find a user-space code snippet to see how ALSA is configured for IEC958 playback?
Patrick Lai wrote:
Where can I find a user-space code snippet to see how ALSA is configured for IEC958 playback?
Use the ALSA device name "iec958" (or its alias "spdif"). For a card other than the default sound card, use "iec958:x".
For compressed formats, you have to set the non-audio bit. This device has four parameters, AES0..AES3, for the four bytes of S/PDIF channel-status information. The non-audio bit can be set by adding the parameter "AES0=6"; the complete device name then looks like "spdif:AES0=6" or "spdif:other=parameters,AES0=6" or "spdif:{ other-parameters ... AES0=6 }".
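A minimal open sequence would look something like this (a sketch; error recovery omitted):

#include <alsa/asoundlib.h>

int main(void)
{
	snd_pcm_t *pcm;
	/* "AES0=6" sets the non-audio and not-copyright bits in
	 * channel-status byte 0. */
	int err = snd_pcm_open(&pcm, "spdif:AES0=6",
	                       SND_PCM_STREAM_PLAYBACK, 0);
	if (err < 0) {
		fprintf(stderr, "open: %s\n", snd_strerror(err));
		return 1;
	}
	/* The compressed bitstream is then configured and written as if
	 * it were 16-bit stereo PCM at 48 kHz; soft_resample is 0 so the
	 * bits pass through untouched. */
	err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
	                         SND_PCM_ACCESS_RW_INTERLEAVED,
	                         2, 48000, 0, 500000);
	if (err < 0) {
		fprintf(stderr, "set_params: %s\n", snd_strerror(err));
		return 1;
	}
	/* ... write the IEC 61937 bursts with snd_pcm_writei() ... */
	snd_pcm_close(pcm);
	return 0;
}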
The try_open_device function of mplayer tries to do this: http://git.mplayerhq.hu/?p=mplayer;a=blob;hb=HEAD;f=libao2/ao_alsa.c
Regards, Clemens
On Tue, Mar 15, 2011 at 11:31:18AM -0700, Patrick Lai wrote:
Is there a precedent for playing back/capturing a compressed audio stream through the ALSA playback/capture interface if the underlying hardware supports decoder + sink and encoder + source capabilities?
I'm not sure if ALSA is the best API to use for this - the ALSA APIs are strongly oriented around data whose size is consistent over time, while most compressed audio formats don't behave that way. There are also existing APIs in userspace like GStreamer and the various OpenMAX-ish things to slot in with; everything below those is usually black-boxed per implementation.
Within ASoC, the current tunneled audio stuff I've seen does something like representing the decompressor output as a DAPM input and showing that as active when there's a stream being decoded. Portions of the implementation for Moorestown are in mainline in sound/soc/mid-x86, though I'm not sure if it's all fully hooked up yet or not.
Nobody's really tried to do more yet, but this may end up being the best choice overall, as there's substantial variation in how the DSPs are structured, both physically and OS-wise, which makes the abstractions below the userspace API level less clear.
On Wed, 2011-03-16 at 16:08 +0530, Mark Brown wrote:
I'm not sure if ALSA is the best API to use for this - the ALSA APIs are strongly oriented around data whose size is consistent over time, while most compressed audio formats don't behave that way. There are also existing APIs in userspace like GStreamer and the various OpenMAX-ish things to slot in with; everything below those is usually black-boxed per implementation.
I would agree with Mark; our best approach would be a clean design of a new API set which takes care of both CBR and VBR and is generic enough for any format.
Within ASoC, the current tunneled audio stuff I've seen does something like representing the decompressor output as a DAPM input and showing that as active when there's a stream being decoded. Portions of the implementation for Moorestown are in mainline in sound/soc/mid-x86, though I'm not sure if it's all fully hooked up yet or not.
The current implementation in soc/mid-x86 is for PCM only. The compressed-path offload bits are in staging/intel_sst. I will be working to move these bits to soc/mid-x86 and also to make them better suited to generic frameworks.
Nobody's really tried to do more yet, but this may end up being the best choice overall, as there's substantial variation in how the DSPs are structured, both physically and OS-wise, which makes the abstractions below the userspace API level less clear.
I was thinking more of having a generic framework which coexists with ALSA and ASoC (DAPM) and provides a way to write a driver for your DSP to do decoder, sink + decoder, and other variations. The implementation of these can be specific to the DSP in question, but the framework should be able to push and pull data and timing information in a standard way that coexists with the current frameworks.
Thoughts?
On Wed, Mar 16, 2011 at 04:29:22PM +0530, Koul, Vinod wrote:
I was thinking more of having a generic framework which coexists with ALSA and ASoC (DAPM) and provides a way to write a driver for your DSP to do decoder, sink + decoder, and other variations. The implementation of these can be specific to the DSP in question, but the framework should be able to push and pull data and timing information in a standard way that coexists with the current frameworks.
It would be nice to have a standard userspace API for this, but I'm not aware of anyone who's looked at it in detail, and you start having to also take into account other algorithms that are running on the device, so there's nothing to point people at right now and no real prospect of there being anything. I don't know if it's something we can resolve entirely in the kernel, as I'm aware that some of the DSP implementations have non-trivial management code in userspace that they talk to, which may mean that the standard API has to be a userspace one. There's also the difference between memory-to-memory implementations (which fit into a userspace chain much more readily) and tunneled implementations (which do need new infrastructure).
I think this'll get substantially easier to look at once the media controller API is merged and starts to be used in the audio subsystem.
On Wed, 2011-03-16 at 17:26 +0530, Mark Brown wrote:
It would be nice to have a standard userspace API for this, but I'm not aware of anyone who's looked at it in detail [...]
Yes, I was also thinking this needs to have both a userland component for applications to use and a kernel-side one for DSP implementations. Agreed, this should also take algorithms into account.
I think this'll get substantially easier to look at once the media controller API is merged and starts to be used in the audio subsystem.
Mark: I must be missing something here. Why would this media controller API make things simpler? Based on what was presented at LPC, I don't understand this last part. -Pierre
On Wed, Mar 16, 2011 at 12:52:23PM -0500, pl bossart wrote:
I must be missing something here. Why would this media controller API make things simpler? Based on what was presented at LPC, I don't understand this last part.
It'd make the tie-up with the algorithms part much easier, as we could have an interface for transferring the compressed data alone and then externally describe how that's plumbed into any other DSP processing that's going on and into the physical outputs - it'd help with treating the data transfer as a standalone problem.
On Wed, Mar 16, 2011 at 12:53 PM, Mark Brown broonie@opensource.wolfsonmicro.com wrote:
It'd make the tie-up with the algorithms part much easier, as we could have an interface for transferring the compressed data alone and then externally describe how that's plumbed into any other DSP processing that's going on and into the physical outputs - it'd help with treating the data transfer as a standalone problem.
Still not convinced. Why would you need to 'externally describe' how compressed data is linked to post-processing? It's all part of the DSP firmware; why should anyone care how the decoder provides data to post-processing? You can control post-processing with ALSA controls, as for regular PCM.
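e.g. something like this; the control names here are made up, use whatever the driver actually exports:

amixer -c 0 cset name='Post EQ Switch' on
amixer -c 0 cset name='Post EQ Band 1 Gain' 6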
On Wed, Mar 16, 2011 at 01:00:09PM -0500, pl bossart wrote:
Still not convinced. Why would you need to 'externally describe' how compressed data is linked to post-processing? It's all part of the DSP firmware; why should anyone care how the decoder provides data to post-processing? You can control post-processing with ALSA controls, as for regular PCM.
The problem is figuring out which controls are where and what can be joined up with what. This is a problem with regular PCM too, but it gets much worse when everything is virtual. The media controller should provide a route to allowing applications to figure out what's going on in the hardware.
On Wed, 2011-03-16 at 23:38 +0530, Mark Brown wrote:
The problem is figuring out which controls are where and what can be joined up with what. This is a problem with regular PCM too, but it gets much worse when everything is virtual. The media controller should provide a route to allowing applications to figure out what's going on in the hardware.
Wouldn't a virtual sound card solve this? I was thinking of representing all the DSP elements in a virtual card. From userland, we know when decoders, algorithms, etc. are active and can control them. The virtual card's output gets connected to the actual sound card.
On 3/16/2011 7:21 PM, Koul, Vinod wrote:
Wouldn't a virtual sound card solve this? I was thinking of representing all the DSP elements in a virtual card.
I think it would work to a certain extent, but what if the DSP can instantiate elements at run-time? How can we deal with this use case under the current ALSA framework?
On Wed, Mar 16, 2011 at 10:00:01PM -0700, Patrick Lai wrote:
I think it would work to a certain extent, but what if the DSP can instantiate elements at run-time? How can we deal with this use case under the current ALSA framework?
Yes, exactly - handling runtime changes is a big issue, as is discoverability.
Wow. Run-time changes and discoverability? This sounds wild. What type of solutions are we talking about here? All the DSP implementations I've seen are pretty dumb; the firmware is downloaded at the request of the host when a specific service is requested. Discoverability isn't an issue, since the driver and possibly user-space know what's been downloaded and how the downloaded parts interact with the rest of the firmware.
On Thu, Mar 17, 2011 at 09:27:02AM -0500, pl bossart wrote:
Wow. Run-time changes and discoverability? This sounds wild. What type of solutions are we talking about here? All the DSP implementations I've seen are pretty dumb; the firmware is downloaded at the request of the host when a specific service is requested. Discoverability isn't an issue, since the driver and possibly user-space know what's been downloaded and how the downloaded parts interact with the rest of the firmware.
Having the driver figure out what's going on isn't usually much of an issue; it's letting the application know about the configuration that's available. In situations where the DSP can support flexible routing (e.g., if it's got multiple audio interfaces and can route or mix between them, and also to and from the CPU) with per-flow algorithm selection, it gets unmanageable if you try to show everything possible via the current ALSA APIs. Things get worse the more algorithms and so on the DSP can support.
OK, I get your point and agree with the analysis. Still, isn't UCM going to address some of the complexity by abstracting the routing/algorithms with some predefined configurations? Or are you talking about going beyond static UCM configurations into something more flexible based on the Media Controller API?
On Thu, Mar 17, 2011 at 02:16:23PM -0500, pl bossart wrote:
Still, isn't UCM going to address some of the complexity by abstracting the routing/algorithms with some predefined configurations?
UCM definitely helps at runtime, but in an ideal world we'd still have the nice pointy-clicky tools that provide a beautiful GUI visualising the system audio setup, so people could use those when configuring the setup for UCM.
Or are you talking about going beyond static UCM configurations into something more flexible based on the Media Controller API?
I can see us still needing something dynamic for things like tunneled audio streams, where you're likely to have them instantiated dynamically (with associated control); you'd want UCM to be able to work with this, to tell the application how these should be joined up to the rest of the system and so on.
My impression of UCM is that scenario rules are written based on statically defined elements. How does it discover run-time-instantiated elements and know what to do with them?
On Thu, Mar 17, 2011 at 02:19:27PM -0700, Patrick Lai wrote:
My impression of UCM is that scenario rules are written based on statically defined elements. How does it discover run-time-instantiated elements and know what to do with them?
It doesn't at present - it'll need to be enhanced to do so, but it's hardly alone in this.
I can see us still needing something dynamic for things like tunneled audio streams, where you're likely to have them instantiated dynamically (with associated control); you'd want UCM to be able to work with this, to tell the application how these should be joined up to the rest of the system and so on.
I don't disagree, but I view tunneling as very difficult to implement when you have something like PulseAudio or AudioFlinger handling all the routing, policy and volume control. The approach based on passthrough makes things simpler in terms of links/routing/policy: instead of tunneling, you push compressed data as far as possible into the sink. The application doesn't need to know what the connections are; this can be handled at the driver/firmware level.
On Fri, Mar 18, 2011 at 11:03:57AM -0500, pl bossart wrote:
I don't disagree, but I view tunneling as very difficult to implement when you have something like PulseAudio or AudioFlinger handling all the routing, policy and volume control. The approach based on
I don't think it's an intractable problem - I think we can get to a point where UCM, when asked for an output stream, hands back an object of some type which wraps up a sink and any controls for that sink, well enough for PulseAudio or AudioFlinger to figure out if it wants to layer anything on top to provide functionality like per-stream volume control.
passthrough makes things simpler in terms of links/routing/policy: instead of tunneling, you push compressed data as far as possible into the sink. The application doesn't need to know what the connections are; this can be handled at the driver/firmware level.
The application doesn't need this, but the audio daemon does, so that we can support multiple output paths depending on the audio type.
Hi all,
I recently purchased an Asus e35M1-M Pro and I would like to get surround sound going on my 5.1 system. I have it connected according to the manual (line in - rear speakers, line out - front speakers, mic in - bass/center), but whatever I try, using guides like http://www.gentoo-wiki.info/HOWTO_Surround_Sound, I am not able to get sound from any speakers other than the front left and right ones. Alsamixer does not show any devices as shared, i.e. available to be switched from input to output.
speaker-test plays every file on the front left and right speakers at the same time.
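(For reference, the 6-channel invocation I would expect to exercise all outputs is something like: speaker-test -D surround51 -c 6 -t wav)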
Here is my system information using 2.6.38-rc8-git3 http://www.alsa-project.org/db/?f=0dd55b4640a8afff1552c4d0be0105e416ad1cf3 and 2.6.38-rc8-next-20110314 http://www.alsa-project.org/db/?f=a8733bf0083929540948f51b5f33d9c8a62d9994
My question is what should I try next?
Regards, Wojciech
Dear Wojciech,
please do not hijack threads [1]. Using Mozilla Thunderbird you can easily compose a new message to a certain addressee by clicking on the address alsa-devel@alsa-project.org in any of the messages.
On Thursday, 17.03.2011, at 06:32 +0100, Wojciech Myrda wrote:
I recently purchased Asus e35M1-M Pro
OT: How do you like it? Any reviews you found useful on the WWW?
Here is my system information using 2.6.38-rc8-git3 http://www.alsa-project.org/db/?f=0dd55b4640a8afff1552c4d0be0105e416ad1cf3 and 2.6.38-rc8-next-20110314 http://www.alsa-project.org/db/?f=a8733bf0083929540948f51b5f33d9c8a62d9994
Please additionally attach those files next time, since the developers (I think at least Takashi) prefer it.
My question is what should I try next?
If I understood it correctly, it does not work yet [2]. If this explains the issue for you, please update [3], and you could also submit a report to the Gentoo bug tracking system (BTS). Maybe some developer will implement this. You could also check whether a ticket exists in the kernel BTS and submit a feature request or, if one already exists, follow up there.
Thanks and good luck,
Paul
[1] http://en.opensuse.org/openSUSE:Mailing_list_netiquette [2] http://sourceforge.net/mailarchive/forum.php?thread_name=4D3F1702.1090309%40... [3] http://www.gentoo-wiki.info/HOWTO_Surround_Sound
On 17.03.2011 22:55, Paul Menzel wrote:
Dear Wojciech,
please do not hijack threads [1]. Using Mozilla Thunderbird you can easily compose a new message to a certain addressee by clicking on the address alsa-devel@alsa-project.org in any of the messages.
Dear Paul, I have been using Mozilla Thunderbird for a while, but this e-mail was the first I have sent to the mailing list, and it may be that I have done something out of the ordinary, but the steps you mentioned are actually the ones I took: composing a new message by clicking on the address.
On Thursday, 17.03.2011, at 06:32 +0100, Wojciech Myrda wrote:
I recently purchased Asus e35M1-M Pro
OT: How do you like it? Any reviews you found useful on the WWW?
I was deciding for some time which platform to use for my HTPC, and this board was the first with a low-voltage processor and several PCI ports (2x PCIe, 2x PCI) that allowed me to place 2x DVB-S2 cards and 1x DVB card in it, as opposed to only the 1 PCIe slot on Atom boards, so the choice was clear ;) As far as the Linux experience with the board goes, there are a number of rough corners, but the level of support is increasing by the day.
Please additionally attach those files next time, since the developers (I think at least Takashi) prefer it.
I was not familiar with that policy. I will remember it now.
My question is what should I try next?
If I understood it correctly, it does not work yet [2]. If this explains the issue for you, please update [3], and you could also submit a report to the Gentoo bug tracking system (BTS). Maybe some developer will implement this. You could also check whether a ticket exists in the kernel BTS and submit a feature request or, if one already exists, follow up there.
It does not work yet as it should. I have stereo sound in a few apps which allow me to pick the second audio device, the Realtek ALC887-VD, in the configuration menu. The first is HDMI audio, which does not work yet with the open-source driver on the Evergreen chipset [4]; I am bugging the xf86-video-ati guys about it to get that supported as well :P However, surround does not work in any configuration, as the card is not detected properly. Alsamixer does not show "Line In" and "Mic In" as shared devices, and I am not able to configure them as outputs. The board's user guide says that "Line In" should work as "Rear Speakers Out" and "Mic In" should work as "Bass/Center".
I'll create a bug report in the Gentoo bug tracking system, but looking at the matter historically, in the case of drivers Gentoo is very dependent on upstream to resolve bugs.
I must say I didn't know I should create a bug in the kernel BTS. The alsa-devel mailing list seemed the most appropriate place to seek help after forums and IRC.
[2] was a great read, and I will definitely get back to it once the video driver allows it, but for now I would like to concentrate on getting the mini-jacks configured properly.
Regards, Wojciech
[1] http://en.opensuse.org/openSUSE:Mailing_list_netiquette [2] http://sourceforge.net/mailarchive/forum.php?thread_name=4D3F1702.1090309%40... [3] http://www.gentoo-wiki.info/HOWTO_Surround_Sound
2011/3/16 Koul, Vinod vinod.koul@intel.com
I would agree with Mark; our best approach would be a clean design of a new API set which takes care of both CBR and VBR and is generic enough for any format.
For software decode, the application can start, pause, and stop at any PCM frame.
Does the hardware decoder have any limitations on the compressed stream?
participants (10)
- Clemens Ladisch
- Eliot Blennerhassett
- Jaroslav Kysela
- Koul, Vinod
- Mark Brown
- Patrick Lai
- Paul Menzel
- pl bossart
- Raymond Yau
- Wojciech Myrda