[PATCH 0/4] ALSA: pcm: Fix ioctl races
Hi,
this is a patch set to address the recently reported bug about racy PCM ioctls. In short, the current ALSA PCM core doesn't take enough care about concurrent ioctl calls, and such concurrent calls may result in a use-after-free. The reported problem was concurrent hw_free calls, but similar cases exist in other code paths such as hw_params, prepare, etc.
The patch set introduces a new runtime->buffer_mutex for protecting those paths. The first patch is the fix for the reported issue (the races with hw_free), while the remaining three harden the other, similar code paths.
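To illustrate the kind of concurrency in question, here is a minimal userspace sketch (purely illustrative, not part of this series): two threads issuing SNDRV_PCM_IOCTL_HW_FREE on the same PCM file descriptor. The device path and the omission of a prior hw_params setup are assumptions made for brevity; a real reproducer would have to set up the stream first so that the free path actually runs.

```c
/* Illustrative only: concurrent HW_FREE ioctls on one PCM fd.
 * Build with: gcc -O2 -pthread race.c -o race
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sound/asound.h>	/* SNDRV_PCM_IOCTL_HW_FREE */

static void *hw_free_loop(void *arg)
{
	int fd = *(int *)arg;

	for (int i = 0; i < 100000; i++)
		ioctl(fd, SNDRV_PCM_IOCTL_HW_FREE);	/* unsynchronized on unpatched kernels */
	return NULL;
}

int main(void)
{
	/* card/device numbering is an assumption; pick an existing PCM node */
	int fd = open("/dev/snd/pcmC0D0p", O_RDWR | O_NONBLOCK);
	pthread_t t1, t2;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	pthread_create(&t1, NULL, hw_free_loop, &fd);
	pthread_create(&t2, NULL, hw_free_loop, &fd);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}
```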
[ Note that patch 3 was slightly modified from the version I sent to the distros list, as I noticed possible (false-positive) lockdep warnings. The behavior is almost the same, just written differently. ]
thanks,
Takashi
===
Takashi Iwai (4):
  ALSA: pcm: Fix races among concurrent hw_params and hw_free calls
  ALSA: pcm: Fix races among concurrent read/write and buffer changes
  ALSA: pcm: Fix races among concurrent prepare and hw_params/hw_free calls
  ALSA: pcm: Fix races among concurrent prealloc proc writes

 include/sound/pcm.h     |  1 +
 sound/core/pcm.c        |  2 +
 sound/core/pcm_lib.c    |  4 ++
 sound/core/pcm_memory.c | 11 +++--
 sound/core/pcm_native.c | 93 +++++++++++++++++++++++++----------------
 5 files changed, 71 insertions(+), 40 deletions(-)
[PATCH 1/4] ALSA: pcm: Fix races among concurrent hw_params and hw_free calls

Currently we have neither a proper check nor protection against concurrent calls of the PCM hw_params and hw_free ioctls, which may result in a UAF. Since the existing PCM stream lock can't be used to protect the whole ioctl operation, we need a new mutex to protect those racy calls.
This patch introduces a new mutex, runtime->buffer_mutex, and applies it to both the hw_params and hw_free ioctl code paths. Along with it, both functions are slightly modified (the mmap_count check is moved into the state-check block) for simplicity.
Reported-by: Hu Jiahui <kirin.say@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
 include/sound/pcm.h     |  1 +
 sound/core/pcm.c        |  2 ++
 sound/core/pcm_native.c | 61 ++++++++++++++++++++++++++---------------
 3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/include/sound/pcm.h b/include/sound/pcm.h
index 36da42cd0774..314f2779cab5 100644
--- a/include/sound/pcm.h
+++ b/include/sound/pcm.h
@@ -401,6 +401,7 @@ struct snd_pcm_runtime {
 	wait_queue_head_t tsleep;	/* transfer sleep */
 	struct fasync_struct *fasync;
 	bool stop_operating;		/* sync_stop will be called */
+	struct mutex buffer_mutex;	/* protect for buffer changes */
 
 	/* -- private section -- */
 	void *private_data;
diff --git a/sound/core/pcm.c b/sound/core/pcm.c
index ba4a987ed1c6..edd9849210f2 100644
--- a/sound/core/pcm.c
+++ b/sound/core/pcm.c
@@ -969,6 +969,7 @@ int snd_pcm_attach_substream(struct snd_pcm *pcm, int stream,
 	init_waitqueue_head(&runtime->tsleep);
 
 	runtime->status->state = SNDRV_PCM_STATE_OPEN;
+	mutex_init(&runtime->buffer_mutex);
 
 	substream->runtime = runtime;
 	substream->private_data = pcm->private_data;
@@ -1002,6 +1003,7 @@ void snd_pcm_detach_substream(struct snd_pcm_substream *substream)
 	} else {
 		substream->runtime = NULL;
 	}
+	mutex_destroy(&runtime->buffer_mutex);
 	kfree(runtime);
 	put_pid(substream->pid);
 	substream->pid = NULL;
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index a056b3ef3c84..266895374b83 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -685,33 +685,40 @@ static int snd_pcm_hw_params_choose(struct snd_pcm_substream *pcm,
 	return 0;
 }
 
+#if IS_ENABLED(CONFIG_SND_PCM_OSS)
+#define is_oss_stream(substream)	((substream)->oss.oss)
+#else
+#define is_oss_stream(substream)	false
+#endif
+
 static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
 			     struct snd_pcm_hw_params *params)
 {
 	struct snd_pcm_runtime *runtime;
-	int err, usecs;
+	int err = 0, usecs;
 	unsigned int bits;
 	snd_pcm_uframes_t frames;
 
 	if (PCM_RUNTIME_CHECK(substream))
 		return -ENXIO;
 	runtime = substream->runtime;
+	mutex_lock(&runtime->buffer_mutex);
 	snd_pcm_stream_lock_irq(substream);
 	switch (runtime->status->state) {
 	case SNDRV_PCM_STATE_OPEN:
 	case SNDRV_PCM_STATE_SETUP:
 	case SNDRV_PCM_STATE_PREPARED:
+		if (!is_oss_stream(substream) &&
+		    atomic_read(&substream->mmap_count))
+			err = -EBADFD;
 		break;
 	default:
-		snd_pcm_stream_unlock_irq(substream);
-		return -EBADFD;
+		err = -EBADFD;
+		break;
 	}
 	snd_pcm_stream_unlock_irq(substream);
-#if IS_ENABLED(CONFIG_SND_PCM_OSS)
-	if (!substream->oss.oss)
-#endif
-		if (atomic_read(&substream->mmap_count))
-			return -EBADFD;
+	if (err)
+		goto unlock;
 
 	snd_pcm_sync_stop(substream, true);
 
@@ -799,16 +806,21 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
 	if (usecs >= 0)
 		cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
 					    usecs);
-	return 0;
+	err = 0;
  _error:
-	/* hardware might be unusable from this time,
-	   so we force application to retry to set
-	   the correct hardware parameter settings */
-	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
-	if (substream->ops->hw_free != NULL)
-		substream->ops->hw_free(substream);
-	if (substream->managed_buffer_alloc)
-		snd_pcm_lib_free_pages(substream);
+	if (err) {
+		/* hardware might be unusable from this time,
+		 * so we force application to retry to set
+		 * the correct hardware parameter settings
+		 */
+		snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+		if (substream->ops->hw_free != NULL)
+			substream->ops->hw_free(substream);
+		if (substream->managed_buffer_alloc)
+			snd_pcm_lib_free_pages(substream);
+	}
+ unlock:
+	mutex_unlock(&runtime->buffer_mutex);
 	return err;
 }
 
@@ -848,26 +860,31 @@ static int do_hw_free(struct snd_pcm_substream *substream)
 static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
 {
 	struct snd_pcm_runtime *runtime;
-	int result;
+	int result = 0;
 
 	if (PCM_RUNTIME_CHECK(substream))
 		return -ENXIO;
 	runtime = substream->runtime;
+	mutex_lock(&runtime->buffer_mutex);
 	snd_pcm_stream_lock_irq(substream);
 	switch (runtime->status->state) {
 	case SNDRV_PCM_STATE_SETUP:
 	case SNDRV_PCM_STATE_PREPARED:
+		if (atomic_read(&substream->mmap_count))
+			result = -EBADFD;
 		break;
 	default:
-		snd_pcm_stream_unlock_irq(substream);
-		return -EBADFD;
+		result = -EBADFD;
+		break;
 	}
 	snd_pcm_stream_unlock_irq(substream);
-	if (atomic_read(&substream->mmap_count))
-		return -EBADFD;
+	if (result)
+		goto unlock;
 	result = do_hw_free(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
 	cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
+ unlock:
+	mutex_unlock(&runtime->buffer_mutex);
 	return result;
 }
[PATCH 2/4] ALSA: pcm: Fix races among concurrent read/write and buffer changes

In the current PCM design, the read/write syscalls (as well as the equivalent ioctls) are allowed before the PCM stream is running, i.e. in the PCM PREPARED state. Meanwhile, hw_params and hw_free ioctl calls may also be re-issued in the PREPARED state, and those may change or free the buffers. The problem is that there is no protection against such a mix-up.
This patch applies the previously introduced runtime->buffer_mutex to the read/write operations so that a concurrent hw_params or hw_free call can no longer interfere during the operation. The mutex is unlocked before scheduling out, so that we don't hold it for too long.
Cc: stable@vger.kernel.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
 sound/core/pcm_lib.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
index f2090025236b..a40a35e51fad 100644
--- a/sound/core/pcm_lib.c
+++ b/sound/core/pcm_lib.c
@@ -1906,9 +1906,11 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
 		if (avail >= runtime->twake)
 			break;
 		snd_pcm_stream_unlock_irq(substream);
+		mutex_unlock(&runtime->buffer_mutex);
 
 		tout = schedule_timeout(wait_time);
 
+		mutex_lock(&runtime->buffer_mutex);
 		snd_pcm_stream_lock_irq(substream);
 		set_current_state(TASK_INTERRUPTIBLE);
 		switch (runtime->status->state) {
@@ -2219,6 +2221,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
 	nonblock = !!(substream->f_flags & O_NONBLOCK);
 
+	mutex_lock(&runtime->buffer_mutex);
 	snd_pcm_stream_lock_irq(substream);
 	err = pcm_accessible_state(runtime);
 	if (err < 0)
 		goto _end_unlock;
@@ -2310,6 +2313,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
 	if (xfer > 0 && err >= 0)
 		snd_pcm_update_state(substream, runtime);
 	snd_pcm_stream_unlock_irq(substream);
+	mutex_unlock(&runtime->buffer_mutex);
 	return xfer > 0 ? (snd_pcm_sframes_t)xfer : err;
 }
 EXPORT_SYMBOL(__snd_pcm_lib_xfer);
[PATCH 3/4] ALSA: pcm: Fix races among concurrent prepare and hw_params/hw_free calls

Like the previous fixes to the hw_params and hw_free ioctl races, we need to cover the concurrent prepare ioctl calls against hw_params and hw_free, too.
This patch implements the locking with the existing runtime->buffer_mutex for the prepare ioctl. Unlike the previous cases of snd_pcm_hw_params() and snd_pcm_hw_free(), snd_pcm_prepare() is performed on the linked streams, hence the lock can't simply be taken at the top level. To take the lock for each linked substream instead, we modify snd_pcm_action_group() slightly and apply the buffer_mutex in the stream_lock=false case (where formerly no lock was applied).
Cc: stable@vger.kernel.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
 sound/core/pcm_native.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 266895374b83..0e4fbf5fd87b 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -1190,15 +1190,17 @@ struct action_ops {
 static int snd_pcm_action_group(const struct action_ops *ops,
 				struct snd_pcm_substream *substream,
 				snd_pcm_state_t state,
-				bool do_lock)
+				bool stream_lock)
 {
 	struct snd_pcm_substream *s = NULL;
 	struct snd_pcm_substream *s1;
 	int res = 0, depth = 1;
 
 	snd_pcm_group_for_each_entry(s, substream) {
-		if (do_lock && s != substream) {
-			if (s->pcm->nonatomic)
+		if (s != substream) {
+			if (!stream_lock)
+				mutex_lock_nested(&s->runtime->buffer_mutex, depth);
+			else if (s->pcm->nonatomic)
 				mutex_lock_nested(&s->self_group.mutex, depth);
 			else
 				spin_lock_nested(&s->self_group.lock, depth);
@@ -1226,18 +1228,18 @@ static int snd_pcm_action_group(const struct action_ops *ops,
 			ops->post_action(s, state);
 	}
  _unlock:
-	if (do_lock) {
-		/* unlock streams */
-		snd_pcm_group_for_each_entry(s1, substream) {
-			if (s1 != substream) {
-				if (s1->pcm->nonatomic)
-					mutex_unlock(&s1->self_group.mutex);
-				else
-					spin_unlock(&s1->self_group.lock);
-			}
-			if (s1 == s)	/* end */
-				break;
+	/* unlock streams */
+	snd_pcm_group_for_each_entry(s1, substream) {
+		if (s1 != substream) {
+			if (!stream_lock)
+				mutex_unlock(&s1->runtime->buffer_mutex);
+			else if (s1->pcm->nonatomic)
+				mutex_unlock(&s1->self_group.mutex);
+			else
+				spin_unlock(&s1->self_group.lock);
 		}
+		if (s1 == s)	/* end */
+			break;
 	}
 	return res;
 }
@@ -1367,10 +1369,12 @@ static int snd_pcm_action_nonatomic(const struct action_ops *ops,
 
 	/* Guarantee the group members won't change during non-atomic action */
 	down_read(&snd_pcm_link_rwsem);
+	mutex_lock(&substream->runtime->buffer_mutex);
 	if (snd_pcm_stream_linked(substream))
 		res = snd_pcm_action_group(ops, substream, state, false);
 	else
 		res = snd_pcm_action_single(ops, substream, state);
+	mutex_unlock(&substream->runtime->buffer_mutex);
 	up_read(&snd_pcm_link_rwsem);
 	return res;
 }
On 3/22/2022 6:07 PM, Takashi Iwai wrote:
> Like the previous fixes to the hw_params and hw_free ioctl races, we
> need to cover the concurrent prepare ioctl calls against hw_params
> and hw_free, too.
>
> This patch implements the locking with the existing
> runtime->buffer_mutex for the prepare ioctl. Unlike the previous
> cases of snd_pcm_hw_params() and snd_pcm_hw_free(), snd_pcm_prepare()
> is performed on the linked streams, hence the lock can't simply be
> taken at the top level. To take the lock for each linked substream
> instead, we modify snd_pcm_action_group() slightly and apply the
> buffer_mutex in the stream_lock=false case (where formerly no lock
> was applied).
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Takashi Iwai <tiwai@suse.de>
> ---
>  sound/core/pcm_native.c | 32 ++++++++++++++++++--------------
>  1 file changed, 18 insertions(+), 14 deletions(-)
> diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
> index 266895374b83..0e4fbf5fd87b 100644
> --- a/sound/core/pcm_native.c
> +++ b/sound/core/pcm_native.c
> @@ -1190,15 +1190,17 @@ struct action_ops {
>  static int snd_pcm_action_group(const struct action_ops *ops,
>  				struct snd_pcm_substream *substream,
>  				snd_pcm_state_t state,
> -				bool do_lock)
> +				bool stream_lock)
>  {
>  	struct snd_pcm_substream *s = NULL;
>  	struct snd_pcm_substream *s1;
>  	int res = 0, depth = 1;
>
>  	snd_pcm_group_for_each_entry(s, substream) {
> -		if (do_lock && s != substream) {
> -			if (s->pcm->nonatomic)
> +		if (s != substream) {
> +			if (!stream_lock)
> +				mutex_lock_nested(&s->runtime->buffer_mutex, depth);
> +			else if (s->pcm->nonatomic)
>  				mutex_lock_nested(&s->self_group.mutex, depth);
>  			else
>  				spin_lock_nested(&s->self_group.lock, depth);

Maybe

	if (!stream_lock)
		mutex_lock_nested(&s->runtime->buffer_mutex, depth);
	else
		snd_pcm_group_lock(&s->self_group, s->pcm->nonatomic);

?

> @@ -1226,18 +1228,18 @@ static int snd_pcm_action_group(const struct action_ops *ops,
>  			ops->post_action(s, state);
>  	}
>   _unlock:
> -	if (do_lock) {
> -		/* unlock streams */
> -		snd_pcm_group_for_each_entry(s1, substream) {
> -			if (s1 != substream) {
> -				if (s1->pcm->nonatomic)
> -					mutex_unlock(&s1->self_group.mutex);
> -				else
> -					spin_unlock(&s1->self_group.lock);
> -			}
> -			if (s1 == s)	/* end */
> -				break;
> +	/* unlock streams */
> +	snd_pcm_group_for_each_entry(s1, substream) {
> +		if (s1 != substream) {
> +			if (!stream_lock)
> +				mutex_unlock(&s1->runtime->buffer_mutex);
> +			else if (s1->pcm->nonatomic)
> +				mutex_unlock(&s1->self_group.mutex);
> +			else
> +				spin_unlock(&s1->self_group.lock);

And similarly to above, use snd_pcm_group_unlock() here?

>  		}
> +		if (s1 == s)	/* end */
> +			break;
>  	}
>  	return res;
>  }
> @@ -1367,10 +1369,12 @@ static int snd_pcm_action_nonatomic(const struct action_ops *ops,
>
>  	/* Guarantee the group members won't change during non-atomic action */
>  	down_read(&snd_pcm_link_rwsem);
> +	mutex_lock(&substream->runtime->buffer_mutex);
>  	if (snd_pcm_stream_linked(substream))
>  		res = snd_pcm_action_group(ops, substream, state, false);
>  	else
>  		res = snd_pcm_action_single(ops, substream, state);
> +	mutex_unlock(&substream->runtime->buffer_mutex);
>  	up_read(&snd_pcm_link_rwsem);
>  	return res;
>  }
On Wed, 23 Mar 2022 09:08:25 +0100, Amadeusz Sławiński wrote:
> On 3/22/2022 6:07 PM, Takashi Iwai wrote:
> > Like the previous fixes to the hw_params and hw_free ioctl races, we
> > need to cover the concurrent prepare ioctl calls against hw_params
> > and hw_free, too.
> >
> > This patch implements the locking with the existing
> > runtime->buffer_mutex for the prepare ioctl. Unlike the previous
> > cases of snd_pcm_hw_params() and snd_pcm_hw_free(), snd_pcm_prepare()
> > is performed on the linked streams, hence the lock can't simply be
> > taken at the top level. To take the lock for each linked substream
> > instead, we modify snd_pcm_action_group() slightly and apply the
> > buffer_mutex in the stream_lock=false case (where formerly no lock
> > was applied).
> >
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Takashi Iwai <tiwai@suse.de>
> > ---
> >  sound/core/pcm_native.c | 32 ++++++++++++++++++--------------
> >  1 file changed, 18 insertions(+), 14 deletions(-)
> >
> > diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
> > index 266895374b83..0e4fbf5fd87b 100644
> > --- a/sound/core/pcm_native.c
> > +++ b/sound/core/pcm_native.c
> > @@ -1190,15 +1190,17 @@ struct action_ops {
> >  static int snd_pcm_action_group(const struct action_ops *ops,
> >  				struct snd_pcm_substream *substream,
> >  				snd_pcm_state_t state,
> > -				bool do_lock)
> > +				bool stream_lock)
> >  {
> >  	struct snd_pcm_substream *s = NULL;
> >  	struct snd_pcm_substream *s1;
> >  	int res = 0, depth = 1;
> >
> >  	snd_pcm_group_for_each_entry(s, substream) {
> > -		if (do_lock && s != substream) {
> > -			if (s->pcm->nonatomic)
> > +		if (s != substream) {
> > +			if (!stream_lock)
> > +				mutex_lock_nested(&s->runtime->buffer_mutex, depth);
> > +			else if (s->pcm->nonatomic)
> >  				mutex_lock_nested(&s->self_group.mutex, depth);
> >  			else
> >  				spin_lock_nested(&s->self_group.lock, depth);
>
> Maybe
>
> 	if (!stream_lock)
> 		mutex_lock_nested(&s->runtime->buffer_mutex, depth);
> 	else
> 		snd_pcm_group_lock(&s->self_group, s->pcm->nonatomic);
>
> ?
No, it must be nested locks with the given subclass. That's why it has been open-coded here beforehand, too.
> > @@ -1226,18 +1228,18 @@ static int snd_pcm_action_group(const struct action_ops *ops,
> >  			ops->post_action(s, state);
> >  	}
> >   _unlock:
> > -	if (do_lock) {
> > -		/* unlock streams */
> > -		snd_pcm_group_for_each_entry(s1, substream) {
> > -			if (s1 != substream) {
> > -				if (s1->pcm->nonatomic)
> > -					mutex_unlock(&s1->self_group.mutex);
> > -				else
> > -					spin_unlock(&s1->self_group.lock);
> > -			}
> > -			if (s1 == s)	/* end */
> > -				break;
> > +	/* unlock streams */
> > +	snd_pcm_group_for_each_entry(s1, substream) {
> > +		if (s1 != substream) {
> > +			if (!stream_lock)
> > +				mutex_unlock(&s1->runtime->buffer_mutex);
> > +			else if (s1->pcm->nonatomic)
> > +				mutex_unlock(&s1->self_group.mutex);
> > +			else
> > +				spin_unlock(&s1->self_group.lock);
>
> And similarly to above, use snd_pcm_group_unlock() here?
For this side it would be possible to use that macro, but it's still better to keep a consistent call pattern.
thanks,
Takashi
On Wed, 23 Mar 2022 09:15:19 +0100, Takashi Iwai wrote:
> On Wed, 23 Mar 2022 09:08:25 +0100, Amadeusz Sławiński wrote:
> > On 3/22/2022 6:07 PM, Takashi Iwai wrote:
> > > Like the previous fixes to the hw_params and hw_free ioctl races, we
> > > need to cover the concurrent prepare ioctl calls against hw_params
> > > and hw_free, too.
> > >
> > > This patch implements the locking with the existing
> > > runtime->buffer_mutex for the prepare ioctl. Unlike the previous
> > > cases of snd_pcm_hw_params() and snd_pcm_hw_free(), snd_pcm_prepare()
> > > is performed on the linked streams, hence the lock can't simply be
> > > taken at the top level. To take the lock for each linked substream
> > > instead, we modify snd_pcm_action_group() slightly and apply the
> > > buffer_mutex in the stream_lock=false case (where formerly no lock
> > > was applied).
> > >
> > > Cc: stable@vger.kernel.org
> > > Signed-off-by: Takashi Iwai <tiwai@suse.de>
> > > ---
> > >  sound/core/pcm_native.c | 32 ++++++++++++++++++--------------
> > >  1 file changed, 18 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
> > > index 266895374b83..0e4fbf5fd87b 100644
> > > --- a/sound/core/pcm_native.c
> > > +++ b/sound/core/pcm_native.c
> > > @@ -1190,15 +1190,17 @@ struct action_ops {
> > >  static int snd_pcm_action_group(const struct action_ops *ops,
> > >  				struct snd_pcm_substream *substream,
> > >  				snd_pcm_state_t state,
> > > -				bool do_lock)
> > > +				bool stream_lock)
> > >  {
> > >  	struct snd_pcm_substream *s = NULL;
> > >  	struct snd_pcm_substream *s1;
> > >  	int res = 0, depth = 1;
> > >
> > >  	snd_pcm_group_for_each_entry(s, substream) {
> > > -		if (do_lock && s != substream) {
> > > -			if (s->pcm->nonatomic)
> > > +		if (s != substream) {
> > > +			if (!stream_lock)
> > > +				mutex_lock_nested(&s->runtime->buffer_mutex, depth);
> > > +			else if (s->pcm->nonatomic)
> > >  				mutex_lock_nested(&s->self_group.mutex, depth);
> > >  			else
> > >  				spin_lock_nested(&s->self_group.lock, depth);
> >
> > Maybe
> >
> > 	if (!stream_lock)
> > 		mutex_lock_nested(&s->runtime->buffer_mutex, depth);
> > 	else
> > 		snd_pcm_group_lock(&s->self_group, s->pcm->nonatomic);
> >
> > ?
> No, it must be nested locks with the given subclass.
FWIW, the reason is that lockdep would complain otherwise as if it were a deadlock. That is, this is a workaround for avoiding false lockdep warnings.
Takashi
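To make the lockdep point above concrete, here is a minimal sketch (not part of the patch; the helper name and the depth handling are illustrative assumptions) of the annotation pattern that keeps lockdep from treating locks of the same class, taken in a loop, as recursive locking:

```c
#include <linux/mutex.h>
#include <sound/pcm.h>

/* Illustrative only: take the same lock class once per linked substream.
 * Each acquisition gets its own lockdep subclass via mutex_lock_nested();
 * a plain mutex_lock() in this loop would look like recursive locking to
 * lockdep and produce a false-positive splat.
 */
static void lock_linked_buffer_mutexes(struct snd_pcm_substream *substream)
{
	struct snd_pcm_substream *s;
	int depth = 1;

	snd_pcm_group_for_each_entry(s, substream) {
		if (s == substream)
			continue;
		mutex_lock_nested(&s->runtime->buffer_mutex, depth++);
	}
}
```

This is also why the generic snd_pcm_group_lock() helper, which takes the lock without a subclass, can't simply be reused here.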
[PATCH 4/4] ALSA: pcm: Fix races among concurrent prealloc proc writes

We have no protection against concurrent PCM buffer preallocation changes via the proc files, and this may potentially lead to a UAF or other unexpected problems. This patch applies the PCM open_mutex to the proc write operation to serialize the racy proc writes against PCM stream open (and the operations that follow it).
Cc: stable@vger.kernel.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
 sound/core/pcm_memory.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
index b70ce3b69ab4..8848d2f3160d 100644
--- a/sound/core/pcm_memory.c
+++ b/sound/core/pcm_memory.c
@@ -163,19 +163,20 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
 	size_t size;
 	struct snd_dma_buffer new_dmab;
 
+	mutex_lock(&substream->pcm->open_mutex);
 	if (substream->runtime) {
 		buffer->error = -EBUSY;
-		return;
+		goto unlock;
 	}
 	if (!snd_info_get_line(buffer, line, sizeof(line))) {
 		snd_info_get_str(str, line, sizeof(str));
 		size = simple_strtoul(str, NULL, 10) * 1024;
 		if ((size != 0 && size < 8192) || size > substream->dma_max) {
 			buffer->error = -EINVAL;
-			return;
+			goto unlock;
 		}
 		if (substream->dma_buffer.bytes == size)
-			return;
+			goto unlock;
 		memset(&new_dmab, 0, sizeof(new_dmab));
 		new_dmab.dev = substream->dma_buffer.dev;
 		if (size > 0) {
@@ -189,7 +190,7 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
 					 substream->pcm->card->number, substream->pcm->device,
 					 substream->stream ? 'c' : 'p', substream->number,
 					 substream->pcm->name, size);
-				return;
+				goto unlock;
 			}
 			substream->buffer_bytes_max = size;
 		} else {
@@ -201,6 +202,8 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
 	} else {
 		buffer->error = -EINVAL;
 	}
+ unlock:
+	mutex_unlock(&substream->pcm->open_mutex);
 }
 
 static inline void preallocate_info_init(struct snd_pcm_substream *substream)
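For reference, the proc file guarded here is the per-substream prealloc entry, which accepts a size in kB. Below is a minimal userspace sketch of the write path this patch serializes (the exact /proc path depends on the card, device and substream numbering, which is an assumption here):

```c
/* Illustrative only: request a 64 kB preallocated buffer for the first
 * playback substream of card 0 via the prealloc proc file.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/asound/card0/pcm0p/sub0/prealloc", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "64\n");	/* size in kB; writing 0 releases the preallocation */
	fclose(f);
	return 0;
}
```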
On 22. 03. 22 18:07, Takashi Iwai wrote:
> Hi,
>
> this is a patch set to address the recently reported bug about racy
> PCM ioctls. In short, the current ALSA PCM core doesn't take enough
> care about concurrent ioctl calls, and such concurrent calls may
> result in a use-after-free. The reported problem was concurrent
> hw_free calls, but similar cases exist in other code paths such as
> hw_params, prepare, etc.
>
> The patch set introduces a new runtime->buffer_mutex for protecting
> those paths. The first patch is the fix for the reported issue (the
> races with hw_free), while the remaining three harden the other,
> similar code paths.
Thank you, Takashi.
Reviewed-by: Jaroslav Kysela <perex@perex.cz>