Re: [alsa-devel] Enhance support for SigmaDSP chips
On 04/11/2013 11:56 AM, Daniel Mack wrote:
Hi Lars, Hi Cliff,
I'm looking into the SigmaDSP AD1701 support, as a new prospective project will use it. What's going to be used on this chip is not the DAC/ADC features though, but the internal, programmable DSP things, and designers will use the SigmaStudio IDE to generate the firmware.
So I'm wondering how to support this kind of chip properly in ALSA, which is not straightforward, as the controls exported by it are of course specific to the loaded firmware. Also, the IDE is free to re-allocate all sub-addresses of its controls when something changes in the layout of the building blocks.
One idea I have in mind is to have a little parser script that reads the generated sources of the IDE and dumps a device-tree node snippet which can then be put into the final DT. The driver would need to learn how to interpret those DT nodes, which will most likely just contain a name and a sub-address, along with some mask and shift values.
I wanted to ask for your opinion on that before I start. Have you ever used any of the DSP features from ALSA/ASoC? Are there any pitfalls I should be aware of? Is anyone actively working on improvements on the driver?
An alternative approach is to write a userspace library of course, but that's going to be problematic once anybody wants to use the DSP features in parallel to the DAC/ADC.
Hi,
Just yesterday I wrote a small post on how you can program the DSP registers. http://ez.analog.com/message/83094#83094
I'd like to be able to put the information about the different algorithms into the DSP firmware file itself, so we can auto instantiate the controls from the driver. The problem is that it is not so easy to generalize the export of the algorithms from SigmaStudio.
Another problem is that you don't know how to calculate the algorithm parameters at runtime for some of the more complex algorithms.
I don't think it's a good idea to put the control info into the devicetree and would rather prefer to see them go into the firmware. But as I said auto-generating this from exported SigmaStudio files is not so easy. So maybe manually creating the control info table might be an option.
Btw., in case you haven't seen it, the SigmaTcp tool is quite useful during the development of the firmware since it allows you to connect SigmaStudio directly to the DSP on the board. http://wiki.analog.com/resources/tools-software/linux-software/sigmatcp
- Lars
Hi Lars,
(taking off Cliff's email address, as it's apparently not valid any more).
On 11.04.2013 12:51, Lars-Peter Clausen wrote:
Hi,
Just yesterday I wrote a small post on how you can program the DSP registers. http://ez.analog.com/message/83094#83094
Good timing :) Yes, I would add something like this, but in a generic manner of course.
I'd like to be able to put the information about the different algorithms into the DSP firmware file itself, so we can auto instantiate the controls from the driver. The problem is that it is not so easy to generalize the export of the algorithms from SigmaStudio.
But just to understand this right: the firmware exported by SigmaStudio is already what the driver loads via the kernel firmware interface, right?
Another problem is that you don't know how to calculate the algorithm parameters at runtime for some of the more complex algorithms.
I would just want to set the controls as specified by the exported header file, whatever kind of control that is. I haven't done real-world tests yet, but the output of an example project doesn't seem very complicated. Which 'more complex algorithms' are you referring to?
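Just to illustrate what I mean: the exported parameter header is essentially a flat list of name/address/value defines, roughly of this shape (identifiers shortened and invented here, not copied from a real export):

/* Illustrative only -- real SigmaStudio exports use longer, project-
 * specific identifiers, but the information content is just this:
 * one parameter RAM address (plus a default fixed-point value) per
 * adjustable parameter of each building block.
 */
#define MOD_SWVOL1_COUNT		2
#define MOD_SWVOL1_ALG0_TARGET_ADDR	0x0000
#define MOD_SWVOL1_ALG0_TARGET_VALUE	0x00800000	/* 1.0 in fixed point */
#define MOD_EQ1_ALG0_B0_ADDR		0x0002
#define MOD_EQ1_ALG0_B0_VALUE		0x00800000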
I don't think it's a good idea to put the control info into the devicetree and would rather prefer to see them go into the firmware.
I thought about this too, but after all, the DT describes the hardware, and the hardware in this case is something that can be programmed. The idea would be to have that script that generates the DT fragment, and leave it up to the user to decide which controls to make accessible by the driver.
From the driver's perspective, I don't know if defining a binary interface format inside the firmware blob and parsing it at run-time is really easier to achieve than using the DT, especially as we have all convenient functions to read properties and iterate over nodes already prepared. But as I don't have the full picture of the DSPs yet, I might overlook the subtle details.
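To make that a bit more concrete, here is a rough sketch of the driver side, assuming an invented binding with 'adi,name', 'adi,addr', 'adi,mask' and 'adi,shift' properties and a hypothetical sigmadsp_add_ctl() helper that builds the actual kcontrol -- none of this exists yet, it is just meant to show how little the driver would have to understand:

/* Sketch only.  Assumed (invented) DT layout:
 *
 *	firmware-controls {
 *		ctrl@0 {
 *			adi,name = "Bass Boost Gain";
 *			adi,addr = <0x001c>;
 *			adi,mask = <0xff>;
 *			adi,shift = <0>;
 *		};
 *	};
 */
#include <linux/device.h>
#include <linux/of.h>
#include <linux/types.h>

struct sigmadsp_dt_ctl {
	const char *name;
	u32 addr;
	u32 mask;
	u32 shift;
};

/* hypothetical helper: builds a snd_kcontrol_new with get/put callbacks
 * that read-modify-write the DSP parameter word at ctl->addr */
int sigmadsp_add_ctl(struct device *dev, const struct sigmadsp_dt_ctl *ctl);

static int sigmadsp_parse_fw_controls(struct device *dev,
				      struct device_node *np)
{
	struct device_node *child;
	int ret;

	for_each_child_of_node(np, child) {
		struct sigmadsp_dt_ctl ctl;

		ret = of_property_read_string(child, "adi,name", &ctl.name);
		if (ret)
			goto err;
		ret = of_property_read_u32(child, "adi,addr", &ctl.addr);
		if (ret)
			goto err;
		/* mask/shift are optional and default to the full word */
		if (of_property_read_u32(child, "adi,mask", &ctl.mask))
			ctl.mask = 0xffffffff;
		if (of_property_read_u32(child, "adi,shift", &ctl.shift))
			ctl.shift = 0;

		ret = sigmadsp_add_ctl(dev, &ctl);
		if (ret)
			goto err;
	}

	return 0;

err:
	of_node_put(child);
	return ret;
}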
But as I said auto-generating this from exported SigmaStudio files is not so easy. So maybe manually creating the control info table might be an option.
Btw., in case you haven't seen it, the SigmaTcp tool is quite useful during the development of the firmware since it allows you to connect SigmaStudio directly to the DSP on the board. http://wiki.analog.com/resources/tools-software/linux-software/sigmatcp
Yes, I've seen this. That's a very nice option for the development process. Thanks for mentioning it!
Daniel
On 04/11/2013 02:02 PM, Daniel Mack wrote:
Hi Lars,
(taking off Cliff's email address, as it's apparently not valid any more).
On 11.04.2013 12:51, Lars-Peter Clausen wrote:
Hi,
Just yesterday I wrote a small post on how you can program the DSP registers. http://ez.analog.com/message/83094#83094
Good timing :) Yes, I would add something like this, but in a generic manner of course.
I'd like to be able to put the information about the different algorithms into the DSP firmware file itself, so we can auto instantiate the controls from the driver. The problem is that it is not so easy to generalize the export of the algorithms from SigmaStudio.
But just to understand this right: the firmware exported by SigmaStudio is already what the driver loads via the kernel firmware interface, right?
Kind of; there is no support for generating the firmware from within SigmaStudio. But there is an external tool which takes data that has been exported from SigmaStudio and generates the firmware:
http://wiki.analog.com/resources/tools-software/linux-software/sigmadsp_genf...
Another problem is that you don't know how to calculate the algorithm parameters at runtime for some of the more complex algorithms.
I would just want to set the controls as specified by the exported header file, whatever kind of control that is. I haven't done real-world tests yet, but the output of an example project doesn't seem very complicated. Which 'more complex algorithms' are you referring to?
Basically anything that is not a push button or a gain selector, e.g. the coefficients for a compressor. You could expose these as a SOC_BYTES control, but you'd still need a userspace application that is able to recalculate them at runtime.
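For reference, such a coefficient block could be exposed roughly like this -- the address, length and the sigmadsp_read()/sigmadsp_write() accessors are placeholders, not actual adau1701 code, and userspace still has to know how to compute the fixed-point coefficients it writes:

/* Rough sketch of a bytes-type control exposing one coefficient block
 * (e.g. a compressor) to userspace.  COMP_PARAM_ADDR, COMP_PARAM_LEN and
 * the sigmadsp_read()/sigmadsp_write() helpers are made up; a real driver
 * would use its regmap or i2c accessors here.
 */
#include <linux/types.h>
#include <sound/control.h>

#define COMP_PARAM_ADDR	0x0080	/* hypothetical parameter RAM address */
#define COMP_PARAM_LEN	20	/* hypothetical size in bytes */

/* hypothetical bus accessors, e.g. wrappers around regmap_bulk_read/write */
int sigmadsp_read(unsigned int addr, u8 *buf, size_t len);
int sigmadsp_write(unsigned int addr, const u8 *buf, size_t len);

static int comp_coef_info(struct snd_kcontrol *kcontrol,
			  struct snd_ctl_elem_info *uinfo)
{
	uinfo->type = SNDRV_CTL_ELEM_TYPE_BYTES;
	uinfo->count = COMP_PARAM_LEN;
	return 0;
}

static int comp_coef_get(struct snd_kcontrol *kcontrol,
			 struct snd_ctl_elem_value *ucontrol)
{
	/* read the raw coefficients back from the DSP parameter RAM */
	return sigmadsp_read(COMP_PARAM_ADDR, ucontrol->value.bytes.data,
			     COMP_PARAM_LEN);
}

static int comp_coef_put(struct snd_kcontrol *kcontrol,
			 struct snd_ctl_elem_value *ucontrol)
{
	/* userspace must supply already-computed fixed-point coefficients */
	return sigmadsp_write(COMP_PARAM_ADDR, ucontrol->value.bytes.data,
			      COMP_PARAM_LEN);
}

static const struct snd_kcontrol_new comp_coef_control = {
	.iface	= SNDRV_CTL_ELEM_IFACE_MIXER,
	.name	= "Compressor Coefficients",
	.info	= comp_coef_info,
	.get	= comp_coef_get,
	.put	= comp_coef_put,
};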
I don't think it's a good idea to put the control info into the devicetree and would rather prefer to see them go into the firmware.
I thought about this too, but after all, the DT describes the hardware, and the hardware in this case is something that can be programmed. The idea would be to have that script that generates the DT fragment, and leave it up to the user to decide which controls to make accessible by the driver.
From the driver's perspective, I don't know if defining a binary interface format inside the firmware blob and parsing it at run-time is really easier to achieve than using the DT, especially as we have all convenient functions to read properties and iterate over nodes already prepared. But as I don't have the full picture of the DSPs yet, I might overlook the subtle details.
Well, the description in the device tree needs to match the firmware, and the firmware is usually stored in userspace and can easily be replaced. It is, in my opinion, better to have both the DSP firmware and the control description in the same place.
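Just to sketch the direction (this is all invented, it is not a format the current sigmadsp loader understands): the firmware file could simply carry a small descriptor per control after the program and parameter data, and the loader would instantiate the kcontrols from those descriptors, so replacing the firmware automatically replaces the controls that belong to it:

/* Invented on-disk layout, for illustration only.  The point is simply
 * that the program data and the control descriptions travel in one file.
 */
#include <linux/types.h>

#define SIGMA_FW_CTRL_MAGIC	0x53435452	/* "SCTR", made up */

struct sigma_fw_control_desc {
	__le32	magic;		/* SIGMA_FW_CTRL_MAGIC */
	__le16	addr;		/* parameter RAM address of the control */
	__le16	len;		/* number of bytes the control covers */
	char	name[44];	/* zero-terminated ALSA control name */
} __packed;

/*
 * A loader would then do something like:
 *
 *	for each sigma_fw_control_desc after the program blob:
 *		create a bytes-type kcontrol named desc->name that
 *		reads and writes desc->len bytes at desc->addr
 *
 * so nothing in the device tree has to change when the firmware does.
 */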
- Lars
I'm trying to use "my" hardware at a 204800 Hz sample rate. I've changed the DAI and codec limits to match. If I limit them to anything below 192kHz, everything behaves as expected and I can record at up to that rate, even if that rate is not a multiple of 48k or 44k1 (using the SNDRV_PCM_RATE_KNOT constant).
When I specify anything over 192000, I bump into some ceiling, and I cannot figure out what's causing it. I already patched alsa-utils to allow rates above 192000 (there is a silly check on that in aplay.c which bails out early without asking the driver).
# arecord -D hw:ADC8 --duration=5 -f S32_LE -c 2 -r 204800 /tmp/recording.wav
Recording WAVE '/tmp/recording.wav' : Signed 32 bit Little Endian, Rate 204800 Hz, Stereo
Warning: rate is not accurate (requested = 204800Hz, got = 192000Hz)
         please, try the plug plugin
#
# arecord -D hw:ADC8 --duration=5 -f S32_LE -c 2 -r 192000 /tmp/recording.wav
Recording WAVE '/tmp/recording.wav' : Signed 32 bit Little Endian, Rate 192000 Hz, Stereo
#
I also grepped the kernel source files (I'm still on 2.3.7 though) for "192000", but found nothing that would limit the sample rate to that. I'm running out of ideas. How can I break this limit? Or where in the kernel can I find the part that calculates the max rate?
Mike.
Found the cause: snd_pcm_limit_hw_rates()
soc-core.c:soc_pcm_open() calculates the min/max rates for the given link, then passes the result to snd_pcm_limit_hw_rates(), which discards this and replaces it with incorrect data derived from the PCM rate flags.
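For reference, this is roughly what snd_pcm_limit_hw_rates() does in sound/core/pcm_misc.c (paraphrased from memory, so check your tree): rate_min/rate_max are rederived purely from the hw.rates bitmask and the fixed table of known rates, which ends at 192000, so whatever soc_pcm_open() computed beforehand is simply overwritten:

int snd_pcm_limit_hw_rates(struct snd_pcm_runtime *runtime)
{
	int i;

	/* lowest known rate whose bit is set in the rates mask */
	for (i = 0; i < (int)snd_pcm_known_rates.count; i++) {
		if (runtime->hw.rates & (1 << i)) {
			runtime->hw.rate_min = snd_pcm_known_rates.list[i];
			break;
		}
	}
	/* highest known rate whose bit is set -- the table ends at 192000 */
	for (i = (int)snd_pcm_known_rates.count - 1; i >= 0; i--) {
		if (runtime->hw.rates & (1 << i)) {
			runtime->hw.rate_max = snd_pcm_known_rates.list[i];
			break;
		}
	}
	return 0;
}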
The 3.8 kernel soc-core looks totally different, so I have no clue if this still applies.
If a codec defines rate_min=16000, rate_max=96000 and the DAI specifies flags for 8000..192000 Hz, then the resulting limits that get applied are rate_min=8000, rate_max=192000. One would expect the more restrictive values to prevail. Similarly, if both the codec and the DAI specify rate_max=216000, the resulting rate_max will surprisingly be set to 192000.
I worked around the issue by simply adding a 204800 rate to the table of known rates.
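Concretely, the hack amounts to something like this in sound/core/pcm_native.c (table quoted from memory); the driver's .rates mask then also needs a bit that maps to the new table index (bit 13 was unused at the time) so the entry is actually picked up. Clearly not an upstreamable fix, but it gets the hardware going:

static unsigned int rates[] = {
	5512, 8000, 11025, 16000, 22050, 32000, 44100,
	48000, 64000, 88200, 96000, 176400, 192000,
	204800,			/* added: non-standard rate */
};

const struct snd_pcm_hw_constraint_list snd_pcm_known_rates = {
	.count = ARRAY_SIZE(rates),
	.list = rates,
};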
Mike.
participants (3)
- Daniel Mack
- Lars-Peter Clausen
- Mike Looijmans