[alsa-devel] [PATCH] ALSA: core: Allow drivers to set R/W wait time.
Currently ALSA core blocks userspace for about 10 seconds for PCM R/W IO. This needs to be configurable for modern hardware like DSPs, where the absence of a pointer update within milliseconds can indicate a terminal DSP error.
Add a substream variable to set the wait time in ms. This allows userspace and drivers to recover more quickly from terminal DSP errors.
Signed-off-by: Liam Girdwood <liam.r.girdwood@linux.intel.com>
---
 include/sound/pcm.h  |  6 ++++++
 sound/core/pcm_lib.c | 15 ++++++++++-----
 2 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/include/sound/pcm.h b/include/sound/pcm.h
index e054c583d3b3..e4694684c524 100644
--- a/include/sound/pcm.h
+++ b/include/sound/pcm.h
@@ -462,6 +462,7 @@ struct snd_pcm_substream {
 	/* -- timer section -- */
 	struct snd_timer *timer;		/* timer */
 	unsigned timer_running: 1;	/* time is running */
+	unsigned wait_time;	/* time in ms for R/W to wait for avail */
 	/* -- next substream -- */
 	struct snd_pcm_substream *next;
 	/* -- linked substreams -- */
@@ -579,6 +580,11 @@ int snd_pcm_start(struct snd_pcm_substream *substream);
 int snd_pcm_stop(struct snd_pcm_substream *substream, snd_pcm_state_t status);
 int snd_pcm_drain_done(struct snd_pcm_substream *substream);
 int snd_pcm_stop_xrun(struct snd_pcm_substream *substream);
+static inline void snd_pcm_wait_time(struct snd_pcm_substream *substream,
+				     unsigned wait_time)
+{
+	substream->wait_time = wait_time;
+}
 #ifdef CONFIG_PM
 int snd_pcm_suspend(struct snd_pcm_substream *substream);
 int snd_pcm_suspend_all(struct snd_pcm *pcm);
diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
index a83152e7d387..2ee76c70f55f 100644
--- a/sound/core/pcm_lib.c
+++ b/sound/core/pcm_lib.c
@@ -1839,12 +1839,17 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
 	if (runtime->no_period_wakeup)
 		wait_time = MAX_SCHEDULE_TIMEOUT;
 	else {
-		wait_time = 10;
-		if (runtime->rate) {
-			long t = runtime->period_size * 2 / runtime->rate;
-			wait_time = max(t, wait_time);
+		/* use wait time from substream if available */
+		if (substream->wait_time) {
+			wait_time = msecs_to_jiffies(substream->wait_time);
+		} else {
+			wait_time = 10;
+			if (runtime->rate) {
+				long t = runtime->period_size * 2 / runtime->rate;
+				wait_time = max(t, wait_time);
+			}
+			wait_time = msecs_to_jiffies(wait_time * 1000);
 		}
-		wait_time = msecs_to_jiffies(wait_time * 1000);
 	}

 	for (;;) {
On Wed, 14 Mar 2018 21:44:40 +0100, Liam Girdwood wrote:
Currently ALSA core blocks userspace for about 10 seconds for PCM R/W IO. This needs to be configurable for modern hardware like DSPs, where the absence of a pointer update within milliseconds can indicate a terminal DSP error.
Add a substream variable to set the wait time in ms. This allows userspace and drivers to recover more quickly from terminal DSP errors.
Signed-off-by: Liam Girdwood <liam.r.girdwood@linux.intel.com>
The idea looks good, though, a bit of nitpicking:
 include/sound/pcm.h  |  6 ++++++
 sound/core/pcm_lib.c | 15 ++++++++++-----
 2 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/include/sound/pcm.h b/include/sound/pcm.h
index e054c583d3b3..e4694684c524 100644
--- a/include/sound/pcm.h
+++ b/include/sound/pcm.h
@@ -462,6 +462,7 @@ struct snd_pcm_substream {
 	/* -- timer section -- */
 	struct snd_timer *timer;		/* timer */
 	unsigned timer_running: 1;	/* time is running */
+	unsigned wait_time;	/* time in ms for R/W to wait for avail */
I'd name it wait_timeout, which is slightly clearer.
 	/* -- next substream -- */
 	struct snd_pcm_substream *next;
 	/* -- linked substreams -- */
@@ -579,6 +580,11 @@ int snd_pcm_start(struct snd_pcm_substream *substream);
 int snd_pcm_stop(struct snd_pcm_substream *substream, snd_pcm_state_t status);
 int snd_pcm_drain_done(struct snd_pcm_substream *substream);
 int snd_pcm_stop_xrun(struct snd_pcm_substream *substream);
+static inline void snd_pcm_wait_time(struct snd_pcm_substream *substream,
+				     unsigned wait_time)
+{
+	substream->wait_time = wait_time;
+}
IMO, it's a single simple parameter; there's no need to wrap it in an inline function.
 #ifdef CONFIG_PM
 int snd_pcm_suspend(struct snd_pcm_substream *substream);
 int snd_pcm_suspend_all(struct snd_pcm *pcm);
diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
index a83152e7d387..2ee76c70f55f 100644
--- a/sound/core/pcm_lib.c
+++ b/sound/core/pcm_lib.c
@@ -1839,12 +1839,17 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
 	if (runtime->no_period_wakeup)
 		wait_time = MAX_SCHEDULE_TIMEOUT;
 	else {
-		wait_time = 10;
-		if (runtime->rate) {
-			long t = runtime->period_size * 2 / runtime->rate;
-			wait_time = max(t, wait_time);
+		/* use wait time from substream if available */
+		if (substream->wait_time) {
+			wait_time = msecs_to_jiffies(substream->wait_time);
+		} else {
+			wait_time = 10;
+			if (runtime->rate) {
+				long t = runtime->period_size * 2 / runtime->rate;
+				wait_time = max(t, wait_time);
+			}
+			wait_time = msecs_to_jiffies(wait_time * 1000);
 		}
-		wait_time = msecs_to_jiffies(wait_time * 1000);
 	}
This can go bad when wait_time is shorter than the period time. Some validation is needed?
Also, how is user-space supposed to set the new parameter?
thanks,
Takashi
On Wed, 2018-03-14 at 23:04 +0100, Takashi Iwai wrote:
On Wed, 14 Mar 2018 21:44:40 +0100, Liam Girdwood wrote:
Currently ALSA core blocks userspace for about 10 seconds for PCM R/W IO. This needs to be configurable for modern hardware like DSPs, where the absence of a pointer update within milliseconds can indicate a terminal DSP error.
Add a substream variable to set the wait time in ms. This allows userspace and drivers to recover more quickly from terminal DSP errors.
Signed-off-by: Liam Girdwood <liam.r.girdwood@linux.intel.com>
The idea looks good, though, a bit of nitpicking:
 include/sound/pcm.h  |  6 ++++++
 sound/core/pcm_lib.c | 15 ++++++++++-----
 2 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/include/sound/pcm.h b/include/sound/pcm.h
index e054c583d3b3..e4694684c524 100644
--- a/include/sound/pcm.h
+++ b/include/sound/pcm.h
@@ -462,6 +462,7 @@ struct snd_pcm_substream {
 	/* -- timer section -- */
 	struct snd_timer *timer;		/* timer */
 	unsigned timer_running: 1;	/* time is running */
+	unsigned wait_time;	/* time in ms for R/W to wait for avail */
I'd name it wait_timeout, which is slightly clearer.
 	/* -- next substream -- */
 	struct snd_pcm_substream *next;
 	/* -- linked substreams -- */
@@ -579,6 +580,11 @@ int snd_pcm_start(struct snd_pcm_substream *substream);
 int snd_pcm_stop(struct snd_pcm_substream *substream, snd_pcm_state_t status);
 int snd_pcm_drain_done(struct snd_pcm_substream *substream);
 int snd_pcm_stop_xrun(struct snd_pcm_substream *substream);
+static inline void snd_pcm_wait_time(struct snd_pcm_substream *substream,
+				     unsigned wait_time)
+{
+	substream->wait_time = wait_time;
+}
IMO, it's a single simple parameter; there's no need to wrap it in an inline function.
This was for drivers to set the parameter according to HW capabilities.
 #ifdef CONFIG_PM
 int snd_pcm_suspend(struct snd_pcm_substream *substream);
 int snd_pcm_suspend_all(struct snd_pcm *pcm);
diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
index a83152e7d387..2ee76c70f55f 100644
--- a/sound/core/pcm_lib.c
+++ b/sound/core/pcm_lib.c
@@ -1839,12 +1839,17 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
 	if (runtime->no_period_wakeup)
 		wait_time = MAX_SCHEDULE_TIMEOUT;
 	else {
-		wait_time = 10;
-		if (runtime->rate) {
-			long t = runtime->period_size * 2 / runtime->rate;
-			wait_time = max(t, wait_time);
+		/* use wait time from substream if available */
+		if (substream->wait_time) {
+			wait_time = msecs_to_jiffies(substream->wait_time);
+		} else {
+			wait_time = 10;
+			if (runtime->rate) {
+				long t = runtime->period_size * 2 / runtime->rate;
+				wait_time = max(t, wait_time);
+			}
+			wait_time = msecs_to_jiffies(wait_time * 1000);
 		}
-		wait_time = msecs_to_jiffies(wait_time * 1000);
 	}
This can go bad when wait_time is shorter than the period time. Some validation is needed?
Yes, and we should also somehow validate no-IRQ mode too, probably using the buffer time.
Also, how is user-space supposed to set the new parameter?
It's not atm, as it was being set by the driver. Would probably mean an ABI change to PCM ops or a new ioctl? The latter won't break the ABI, and the default value would remain if the ioctl was not called.
Liam
thanks,
Takashi
On Thu, 15 Mar 2018 14:35:37 +0100, Liam Girdwood wrote:
On Wed, 2018-03-14 at 23:04 +0100, Takashi Iwai wrote:
On Wed, 14 Mar 2018 21:44:40 +0100, Liam Girdwood wrote:
Currently ALSA core blocks userspace for about 10 seconds for PCM R/W IO. This needs to be configurable for modern hardware like DSPs, where the absence of a pointer update within milliseconds can indicate a terminal DSP error.
Add a substream variable to set the wait time in ms. This allows userspace and drivers to recover more quickly from terminal DSP errors.
Signed-off-by: Liam Girdwood <liam.r.girdwood@linux.intel.com>
The idea looks good, though, a bit of nitpicking:
 include/sound/pcm.h  |  6 ++++++
 sound/core/pcm_lib.c | 15 ++++++++++-----
 2 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/include/sound/pcm.h b/include/sound/pcm.h
index e054c583d3b3..e4694684c524 100644
--- a/include/sound/pcm.h
+++ b/include/sound/pcm.h
@@ -462,6 +462,7 @@ struct snd_pcm_substream {
 	/* -- timer section -- */
 	struct snd_timer *timer;		/* timer */
 	unsigned timer_running: 1;	/* time is running */
+	unsigned wait_time;	/* time in ms for R/W to wait for avail */
I'd name it wait_timeout, which is slightly clearer.
 	/* -- next substream -- */
 	struct snd_pcm_substream *next;
 	/* -- linked substreams -- */
@@ -579,6 +580,11 @@ int snd_pcm_start(struct snd_pcm_substream *substream);
 int snd_pcm_stop(struct snd_pcm_substream *substream, snd_pcm_state_t status);
 int snd_pcm_drain_done(struct snd_pcm_substream *substream);
 int snd_pcm_stop_xrun(struct snd_pcm_substream *substream);
+static inline void snd_pcm_wait_time(struct snd_pcm_substream *substream,
+				     unsigned wait_time)
+{
+	substream->wait_time = wait_time;
+}
IMO, it's a single simple parameter; there's no need to wrap it in an inline function.
This was for drivers to set the parameter according to HW capabilities.
My point is that there is little merit in providing an extra API function just for setting this parameter. Each driver knows the snd_pcm_substream content clearly, and it can set "substream->wait_time = xxx" explicitly.
Of course, the situation would become different if this is seen as an exposed feature to user-space.
 #ifdef CONFIG_PM
 int snd_pcm_suspend(struct snd_pcm_substream *substream);
 int snd_pcm_suspend_all(struct snd_pcm *pcm);
diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
index a83152e7d387..2ee76c70f55f 100644
--- a/sound/core/pcm_lib.c
+++ b/sound/core/pcm_lib.c
@@ -1839,12 +1839,17 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
 	if (runtime->no_period_wakeup)
 		wait_time = MAX_SCHEDULE_TIMEOUT;
 	else {
-		wait_time = 10;
-		if (runtime->rate) {
-			long t = runtime->period_size * 2 / runtime->rate;
-			wait_time = max(t, wait_time);
+		/* use wait time from substream if available */
+		if (substream->wait_time) {
+			wait_time = msecs_to_jiffies(substream->wait_time);
+		} else {
+			wait_time = 10;
+			if (runtime->rate) {
+				long t = runtime->period_size * 2 / runtime->rate;
+				wait_time = max(t, wait_time);
+			}
+			wait_time = msecs_to_jiffies(wait_time * 1000);
 		}
-		wait_time = msecs_to_jiffies(wait_time * 1000);
 	}
This can go bad when wait_time is shorter than the period time. Some validation is needed?
Yes, and we should also somehow validate no-IRQ mode too, probably using the buffer time.
Ah, now I see what you have in mind somehow...
Also, how is user-space supposed to set the new parameter?
It's not atm, as it was being set by the driver. Would probably mean an ABI change to PCM ops or a new ioctl? The latter won't break the ABI, and the default value would remain if the ioctl was not called.
Basically this timeout is merely a safety measure; it wasn't considered a part of the real functionality.
So, with your plan, this is exposed as a real PCM feature, as a part of API? For what kind of scenario / purpose?
thanks,
Takashi
On Thu, 2018-03-15 at 15:30 +0100, Takashi Iwai wrote:
It's not atm, as it was being set by the driver. Would probably mean an ABI change to PCM ops or a new ioctl? The latter won't break the ABI, and the default value would remain if the ioctl was not called.
Basically this timeout is merely a safety measure; it wasn't considered a part of the real functionality.
So, with your plan, this is exposed as a real PCM feature, as a part of API? For what kind of scenario / purpose?
Use case is XRUN handling, DMA failure or FW crash detection. The shortened timeout means we can recover far faster leaving a smaller gap in any audio.
Liam
On 15.3.2018 at 17:07, Liam Girdwood wrote:
On Thu, 2018-03-15 at 15:30 +0100, Takashi Iwai wrote:
It's not atm, as it was being set by the driver. Would probably mean an ABI change to PCM ops or a new ioctl? The latter won't break the ABI, and the default value would remain if the ioctl was not called.
Basically this timeout is merely a safety measure; it wasn't considered a part of the real functionality.
So, with your plan, this is exposed as a real PCM feature, as a part of API? For what kind of scenario / purpose?
Use case is XRUN handling, DMA failure or FW crash detection. The shortened timeout means we can recover far faster leaving a smaller gap in any audio.
We have the non-blocking access mode for this purpose where the application can choose the state check time independently.
It seems to me that you'd like to resolve something else with this. The 'error' checking should be handled at the driver level, in my opinion.
I see only the possibility to reduce the timeout to a more appropriate value (we know all stream and buffering parameters to calculate the 'late' time limit). I agree that the current timeout is too big, but it's something outside the API as Takashi mentioned.
Jaroslav
On Thu, 2018-03-15 at 17:25 +0100, Jaroslav Kysela wrote:
On 15.3.2018 at 17:07, Liam Girdwood wrote:
On Thu, 2018-03-15 at 15:30 +0100, Takashi Iwai wrote:
It's not atm, as it was being set by the driver. Would probably mean an ABI change to PCM ops or a new ioctl? The latter won't break the ABI, and the default value would remain if the ioctl was not called.
Basically this timeout is merely a safety measure; it wasn't considered a part of the real functionality.
So, with your plan, this is exposed as a real PCM feature, as a part of API? For what kind of scenario / purpose?
Use case is XRUN handling, DMA failure or FW crash detection. The shortened timeout means we can recover far faster leaving a smaller gap in any audio.
We have the non-blocking access mode for this purpose where the application can choose the state check time independently.
Yes, but unfortunately I'm coming across a lot of userspace code that uses ALSA blocking IO as an audio synchronisation mechanism.
It seems to me that you'd like to resolve something else with this. The 'error' checking should be handled at the driver level, in my opinion.
It mostly is handled by the driver in this case, but userspace may care and is immediately notified by the IO failing. Fwiw, this is important in safety critical situations where any failure of audio could be deemed a safety risk (automotive use cases).
I see only the possibility to reduce the timeout to a more appropriate value (we know all stream and buffering parameters to calculate the 'late' time limit). I agree that the current timeout is too big, but it's something outside the API as Takashi mentioned.
ok, any objections if I change it directly (in the structure) from the SOF driver and have some core PCM hw_params() check that makes sure we are good for period/buffer time ?
Liam
On Fri, 16 Mar 2018 13:35:06 +0100, Liam Girdwood wrote:
On Thu, 2018-03-15 at 17:25 +0100, Jaroslav Kysela wrote:
On 15.3.2018 at 17:07, Liam Girdwood wrote:
On Thu, 2018-03-15 at 15:30 +0100, Takashi Iwai wrote:
It's not atm, as it was being set by the driver. Would probably mean an ABI change to PCM ops or a new ioctl? The latter won't break the ABI, and the default value would remain if the ioctl was not called.
Basically this timeout is merely a safety measure; it wasn't considered a part of the real functionality.
So, with your plan, this is exposed as a real PCM feature, as a part of API? For what kind of scenario / purpose?
Use case is XRUN handling, DMA failure or FW crash detection. The shortened timeout means we can recover far faster leaving a smaller gap in any audio.
We have the non-blocking access mode for this purpose where the application can choose the state check time independently.
Yes, but unfortunately I'm coming across a lot of userspace code that uses ALSA blocking IO as an audio synchronisation mechanism.
It seems to me that you'd like to resolve something else with this. The 'error' checking should be handled at the driver level, in my opinion.
It mostly is handled by the driver in this case, but userspace may care and is immediately notified by the IO failing. Fwiw, this is important in safety critical situations where any failure of audio could be deemed a safety risk (automotive use cases).
I see only the possibility to reduce the timeout to a more appropriate value (we know all stream and buffering parameters to calculate the 'late' time limit). I agree that the current timeout is too big, but it's something outside the API as Takashi mentioned.
ok, any objections if I change it directly (in the structure) from the SOF driver and have some core PCM hw_params() check that makes sure we are good for period/buffer time ?
I'm fine with that.
thanks,
Takashi
participants (3)
- Jaroslav Kysela
- Liam Girdwood
- Takashi Iwai