[PATCH] ASoC: snd_soc_dai_set_fmt add substream independence.
This patch aims to start a stronger discussion on allowing both CPU and codec DAIs to set formats independently based on stream direction.
Currently it is very difficult to use different formats for playback and capture because there is no way to specify them independently.
A previous discussion of a codec's stream-dependent format setting required the codec's DAI to be split. See: https://mailman.alsa-project.org/pipermail/alsa-devel/2020-March/164492.html
A previous discussion of changing a CPU DAI's format based on stream direction showed the difficulty of setting the format per direction in the hw_params function. See: https://mailman.alsa-project.org/pipermail/alsa-devel/2020-March/165101.html
It seems that it may be necessary to allow the stream direction as an input argument to snd_soc_dai_set_fmt.
This patch is not complete, as there are many codec and CPU drivers which would require this change; however, it is intended as a discussion point.
One example of a sound card which requires independent stream formats is an isolated sound card, such as the Audio Injector Isolated sound card. The magnetic isolation chips on the sound card require the stream formats to be different because of digital latency variations on the I2S lines.
Signed-off-by: Matt Flax <flatmax@flatmax.org>
---
 include/sound/soc-dai.h | 4 ++--
 sound/soc/soc-core.c    | 5 +++--
 sound/soc/soc-dai.c     | 4 ++--
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/sound/soc-dai.h b/include/sound/soc-dai.h
index eaaeb00e9e84..5071e006389b 100644
--- a/include/sound/soc-dai.h
+++ b/include/sound/soc-dai.h
@@ -123,7 +123,7 @@ int snd_soc_dai_set_pll(struct snd_soc_dai *dai,
 int snd_soc_dai_set_bclk_ratio(struct snd_soc_dai *dai, unsigned int ratio);
 
 /* Digital Audio interface formatting */
-int snd_soc_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt);
+int snd_soc_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt, int stream);
 
 int snd_soc_dai_set_tdm_slot(struct snd_soc_dai *dai,
 	unsigned int tx_mask, unsigned int rx_mask, int slots, int slot_width);
@@ -186,7 +186,7 @@ struct snd_soc_dai_ops {
 	 * DAI format configuration
	 * Called by soc_card drivers, normally in their hw_params.
	 */
-	int (*set_fmt)(struct snd_soc_dai *dai, unsigned int fmt);
+	int (*set_fmt)(struct snd_soc_dai *dai, unsigned int fmt, int stream);
 	int (*xlate_tdm_slot_mask)(unsigned int slots,
 		unsigned int *tx_mask, unsigned int *rx_mask);
 	int (*set_tdm_slot)(struct snd_soc_dai *dai,
diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
index 068d809c349a..27a6f01dc2d3 100644
--- a/sound/soc/soc-core.c
+++ b/sound/soc/soc-core.c
@@ -1474,10 +1474,11 @@ int snd_soc_runtime_set_dai_fmt(struct snd_soc_pcm_runtime *rtd,
 	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
 	struct snd_soc_dai *codec_dai;
 	unsigned int i;
-	int ret;
+	int ret, stream;
 
 	for_each_rtd_codec_dai(rtd, i, codec_dai) {
-		ret = snd_soc_dai_set_fmt(codec_dai, dai_fmt);
+		int stream = 0;
+		ret = snd_soc_dai_set_fmt(codec_dai, dai_fmt, stream);
 		if (ret != 0 && ret != -ENOTSUPP) {
 			dev_warn(codec_dai->dev,
 				 "ASoC: Failed to set DAI format: %d\n", ret);
diff --git a/sound/soc/soc-dai.c b/sound/soc/soc-dai.c
index 51031e330179..1f8ee7875656 100644
--- a/sound/soc/soc-dai.c
+++ b/sound/soc/soc-dai.c
@@ -94,11 +94,11 @@ EXPORT_SYMBOL_GPL(snd_soc_dai_set_bclk_ratio);
  *
  * Configures the DAI hardware format and clocking.
  */
-int snd_soc_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+int snd_soc_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt, int stream)
 {
 	if (dai->driver->ops->set_fmt == NULL)
 		return -ENOTSUPP;
 
-	return dai->driver->ops->set_fmt(dai, fmt);
+	return dai->driver->ops->set_fmt(dai, fmt, stream);
 }
 EXPORT_SYMBOL_GPL(snd_soc_dai_set_fmt);
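As an illustration of how a machine driver might use the proposed three-argument snd_soc_dai_set_fmt(), here is a sketch of a hw_params callback; the callback name, codec access and format choices are placeholders rather than code from a real driver:

#include <sound/soc.h>

static int example_card_hw_params(struct snd_pcm_substream *substream,
				  struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	struct snd_soc_dai *codec_dai = rtd->codec_dai;
	unsigned int fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_CBS_CFS;

	/*
	 * Example only: invert the frame clock on capture to absorb the
	 * isolator latency, leave playback with normal polarity.
	 */
	if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
		fmt |= SND_SOC_DAIFMT_NB_IF;
	else
		fmt |= SND_SOC_DAIFMT_NB_NF;

	return snd_soc_dai_set_fmt(codec_dai, fmt, substream->stream);
}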
On Sat, Mar 28, 2020 at 12:58:31PM +1100, Matt Flax wrote:
This patch aims to start a stronger discussion on allowing both CPU and codec DAIs to set formats independently based on stream direction.
If the DAIs support completely separate formats they're not a single DAI and should be represented as two DAIs.
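(For reference, the two-DAI layout being suggested would look roughly like the following in the codec driver; the names, rates and formats are placeholders, and each DAI could then be given its own snd_soc_dai_ops and hence its own set_fmt().)

#include <sound/soc.h>

static struct snd_soc_dai_driver example_dais[] = {
	{
		.name = "example-playback",
		.playback = {
			.stream_name = "Playback",
			.channels_min = 2,
			.channels_max = 2,
			.rates = SNDRV_PCM_RATE_8000_192000,
			.formats = SNDRV_PCM_FMTBIT_S32_LE,
		},
		/* .ops can point at playback-specific callbacks */
	},
	{
		.name = "example-capture",
		.capture = {
			.stream_name = "Capture",
			.channels_min = 2,
			.channels_max = 2,
			.rates = SNDRV_PCM_RATE_8000_192000,
			.formats = SNDRV_PCM_FMTBIT_S32_LE,
		},
		/* .ops can point at capture-specific callbacks */
	},
};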
One example of a sound card which requires independent stream formats is an isolated sound card, such as the Audio Injector Isolated sound card. The magnetic isolation chips on the sound card require the stream formats to be different because of digital latency variations on the I2S lines.
Honestly I'm not convinced this is ever going to work reliably - there is in general an assumption in the code that the formats on both ends of the link are the same, it seems you'll have to break that as well as having an asymmetric configuration. You probably need to represent these isolators as a CODEC and do a CODEC to CODEC link and even then it seems worrying.
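(For reference, a codec-to-codec link is normally declared as a dai_link carrying fixed snd_soc_pcm_stream parameters, roughly as sketched below; the component and DAI names are hypothetical, and whether this fits the isolator case is exactly what is in question.)

#include <sound/soc.h>

/* Fixed parameters for the back-to-back link; values are placeholders. */
static const struct snd_soc_pcm_stream isolator_link_params = {
	.formats = SNDRV_PCM_FMTBIT_S32_LE,
	.rate_min = 48000,
	.rate_max = 48000,
	.channels_min = 2,
	.channels_max = 2,
};

SND_SOC_DAILINK_DEFS(isolator,
	DAILINK_COMP_ARRAY(COMP_CPU("isolator-dai")),
	DAILINK_COMP_ARRAY(COMP_CODEC("real-codec.1-001a", "real-codec-dai")),
	DAILINK_COMP_ARRAY(COMP_EMPTY()));

static struct snd_soc_dai_link isolator_link = {
	.name = "isolator-codec",
	.stream_name = "Isolator-Codec",
	.params = &isolator_link_params,
	SND_SOC_DAILINK_REG(isolator),
};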
 /* Digital Audio interface formatting */
-int snd_soc_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt);
+int snd_soc_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt, int stream);
This will break the build.
On 30/3/20 9:32 pm, Mark Brown wrote:
On Sat, Mar 28, 2020 at 12:58:31PM +1100, Matt Flax wrote:
This patch aims to start a stronger discussion on allowing both CPU and codec DAIs to set formats independently based on stream direction.
If the DAIs support completely separate formats they're not a single DAI and should be represented as two DAIs.
I understand; however, having two DAIs produces two subdevices and pushes the overhead of managing registers onto the end user in the form of two subdevices.
Is everyone firm on the concept that a DAI's playback and capture streams have to have the same format in the same DAI?
I can see a much better solution (than the one I posted here), which is also very simple, to solve this problem in the same DAI.
One example of a sound card which requires independent stream formats is an isolated sound card, such as the Audio Injector Isolated sound card. The magnetic isolation chips on the sound card require the stream formats to be different because of digital latency variations on the I2S lines.
Honestly I'm not convinced this is ever going to work reliably - there is in general an assumption in the code that the formats on both ends of the link are the same, it seems you'll have to break that as well as having an asymmetric configuration. You probably need to represent these isolators as a CODEC and do a CODEC to CODEC link and even then it seems worrying.
I like to think of isolation as innovative, not worrying :)
However w.r.t. the codec to codec link approach, I will take your advice and not go down that route.
thanks
Matt
On Mon, Mar 30, 2020 at 11:28:26PM +1100, Matt Flax wrote:
On 30/3/20 9:32 pm, Mark Brown wrote:
On Sat, Mar 28, 2020 at 12:58:31PM +1100, Matt Flax wrote:
This patch aims to start a stronger discussion on allowing both CPU and codec DAIs to set formats independently based on stream direction.
If the DAIs support completely separate formats they're not a single DAI and should be represented as two DAIs.
I understand; however, having two DAIs produces two subdevices and pushes the overhead of managing registers onto the end user in the form of two subdevices.
I think that's a swings and roundabouts thing where it really depends on your use case - for example if the DAIs can be organized however people like then people can come up with creative ways to wire things that don't pair things in the way that makes sense for userspace. Ideally we'd be able to match up any playback only stream with any capture only stream which would help a much wider range of systems.
Is everyone firm on the concept that a DAI's playback and capture streams have to have the same format in the same DAI?
I can see a much better solution (than the one I posted here), which is also very simple, to solve this problem in the same DAI.
It does push a requirement for dealing with asymmetric setups, including validation that nobody did anything that can't be supported, onto all users to at least some extent, even if standard stuff were factored out into the core (which didn't happen yet). This is for a *very* unusual requirement.
having an asymmetric configuration. You probably need to represent these isolators as a CODEC and do a CODEC to CODEC link and even then it seems worrying.
I like to think of isolation as innovative, not worrying :)
However w.r.t. the codec to codec link approach, I will take your advice and not go down that route.
No, my advice is to go down that route if you are doing this. I'm just not convinced that it's going to work reliably since this all sounds rather shaky.
On 31/3/20 3:31 am, Mark Brown wrote:
On Mon, Mar 30, 2020 at 11:28:26PM +1100, Matt Flax wrote:
On 30/3/20 9:32 pm, Mark Brown wrote:
On Sat, Mar 28, 2020 at 12:58:31PM +1100, Matt Flax wrote:
This patch aims to start a stronger discussion on allowing both CPU and codec DAIs to set formats independently based on stream direction.
If the DAIs support completely separate formats they're not a single DAI and should be represented as two DAIs.
I understand; however, having two DAIs produces two subdevices and pushes the overhead of managing registers onto the end user in the form of two subdevices.
I think that's a swings and roundabouts thing where it really depends on your use case - for example if the DAIs can be organized however people like then people can come up with creative ways to wire things that don't pair things in the way that makes sense for userspace. Ideally we'd be able to match up any playback only stream with any capture only stream which would help a much wider range of systems.
Is everyone firm on the concept that a DAI's playback and capture streams have to have the same format in the same DAI?
It does push a requirement for dealing with asymmetric setups, including validation that nobody did anything that can't be supported, onto all users to at least some extent, even if standard stuff were factored out into the core (which didn't happen yet). This is for a *very* unusual requirement.
OK, I don't quite follow you, but I think what you are saying is that there is a trade-off between simplifying things for the end user and core complexity leading to developer sloppiness?
I believe you are saying that it is a rare case to require format asymmetry in the same DAI link. For that reason, introducing extra functionality into the core may not be worthwhile?
having an asymmetric configuration. You probably need to represent these isolators as a CODEC and do a CODEC to CODEC link and even then it seems worrying.
I like to think of isolation as innovative, not worrying :) However w.r.t. the codec to codec link approach, I will take your advice and not go down that route.
No, my advice is to go down that route if you are doing this. I'm just not convinced that it's going to work reliably since this all sounds rather shaky.
OK - I can try to go down that route. However, I would like to confirm that having a codec-to-codec link doesn't require the original codec or CPU drivers to have separate DAIs for playback and capture? In my case both the CPU and codec drivers have single DAIs for both streams.
For example, my virtual codec DAI would have symmetric formats with the original CPU and asymmetric formats with the target codec. However, the target codec only has one DAI defined, and thus I can't understand how to set up the virtual codec DAI to have an asymmetric link with the actual single codec DAI!
Matt
On Tue, Mar 31, 2020 at 06:40:26PM +1100, Matt Flax wrote:
OK, I don't quite follow you, but I think what you are saying is that there is a trade-off between simplifying things for the end user and core complexity leading to developer sloppiness?
I believe you are saying that it is a rare case to require format asymmetry in the same DAI link. For that reason, introducing extra functionality into the core may not be worthwhile?
What I am saying is that we already have a perfectly good way of representing separate TX and RX DAIs which is far more flexible than your bodge and that if there is something to be improved on the userspace interface we should improve that rather than add this which seems like a bodge.
No, my advice is to go down that route if you are doing this. I'm just not convinced that it's going to work reliably since this all sounds rather shaky.
OK - I can try to go down that route. However, I would like to confirm that having a codec-to-codec link doesn't require the original codec or CPU drivers to have separate DAIs for playback and capture? In my case both the CPU and codec drivers have single DAIs for both streams.
OK, that probably won't help then :/
On 31/3/20 10:13 pm, Mark Brown wrote:
On Tue, Mar 31, 2020 at 06:40:26PM +1100, Matt Flax wrote:
No, my advice is to go down that route if you are doing this. I'm just not convinced that it's going to work reliably since this all sounds rather shaky.
OK - I can try to go down that route. However, I would like to confirm that having a codec-to-codec link doesn't require the original codec or CPU drivers to have separate DAIs for playback and capture? In my case both the CPU and codec drivers have single DAIs for both streams.
OK, that probably won't help then :/
OK - well, the codec approach was worth a try. My concern with the codec API is that it chews resources and clock cycles; if it is possible to work around it, then it is a good idea.
I feel that the best way to reduce resource consumption, complexity and overhead is to allow link formats to be defined based on the stream direction.
I can see why we would want different DAIs if the sample rate is asymmetric; however, for word alignment perhaps we should let the stream direction seep into the snd_soc_dai operations. These days CPU and codec designers seem to treat both streams independently, which is why their registers can be independently configured.
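As a rough sketch of what a stream-aware set_fmt could look like on the codec side, assuming the op gains a stream argument as in the RFC, and using purely hypothetical per-direction format registers (not from any real codec):

#include <sound/soc.h>

/* Hypothetical per-direction format registers, for illustration only. */
#define EXAMPLE_REG_TX_FMT	0x10
#define EXAMPLE_REG_RX_FMT	0x11
#define EXAMPLE_FMT_MASK	0x03
#define EXAMPLE_FMT_I2S		0x01
#define EXAMPLE_FMT_LEFT_J	0x02

static int example_codec_set_fmt(struct snd_soc_dai *dai, unsigned int fmt,
				 int stream)
{
	struct snd_soc_component *component = dai->component;
	unsigned int reg = (stream == SNDRV_PCM_STREAM_CAPTURE) ?
			   EXAMPLE_REG_RX_FMT : EXAMPLE_REG_TX_FMT;
	unsigned int val;
	int ret;

	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
	case SND_SOC_DAIFMT_I2S:
		val = EXAMPLE_FMT_I2S;
		break;
	case SND_SOC_DAIFMT_LEFT_J:
		val = EXAMPLE_FMT_LEFT_J;
		break;
	default:
		return -EINVAL;
	}

	/* Write the format to the register for this direction only. */
	ret = snd_soc_component_update_bits(component, reg,
					    EXAMPLE_FMT_MASK, val);
	return ret < 0 ? ret : 0;
}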
thanks for the discussion
Matt