[alsa-devel] Issue in alsa when dma complete race with pcm release
Lars-Peter Clausen
lars at metafoo.de
Wed Jul 15 10:59:22 CEST 2015
On 07/15/2015 09:00 AM, Shengjiu Wang wrote:
> On Wed, Jul 15, 2015 at 09:13:47AM +0200, Lars-Peter Clausen wrote:
>> On 07/15/2015 03:37 AM, Shengjiu Wang wrote:
>>> On Tue, Jul 07, 2015 at 03:32:02PM +0200, Lars-Peter Clausen wrote:
>>>> On 07/07/2015 12:13 PM, Shengjiu Wang wrote:
>>>>> On Mon, Jul 06, 2015 at 02:27:01PM +0200, Lars-Peter Clausen wrote:
>>>>>> On 07/06/2015 11:01 AM, Shengjiu Wang wrote:
>>>>>>> On Fri, Jul 03, 2015 at 12:56:53PM +0200, Lars-Peter Clausen wrote:
>>>>>>>> On 07/03/2015 10:25 AM, Shengjiu Wang wrote:
>>>>>>>>> Hi alsa-devel
>>>>>>>>>
>>>>>>>>> There may be an issue in ALSA when the DMA complete callback races with
>>>>>>>>> snd_pcm_release(). The PCM release and the DMA complete callback run in
>>>>>>>>> different threads. Occasionally dmaengine_pcm_dma_complete() is called
>>>>>>>>> too late, after the memory has already been freed and prtd is NULL,
>>>>>>>>> which results in a kernel dump.
>>>>>>>>>
>>>>>>>>> Is there any solution for this issue? Thanks.
>>>>>>>>
>>>>>>>> We need to introduce a synchronization primitive that allows a
>>>>>>>> dmaengine client to synchronize to the execution of the complete
>>>>>>>> callbacks.
>>>>>>>>
>>>>>>>> terminate_all() unfortunately can't do this since terminate_all()
>>>>>>>> might be called from within one of the complete callbacks and so
>>>>>>>> would cause a deadlock if we'd wait for all complete callbacks to
>>>>>>>> finish before terminate_all() returns.
>>>>>>>>
>>>>>>>> So what is needed is a new function called dmaengine_sync() that
>>>>>>>> will wait until all scheduled complete callbacks have finished. A
>>>>>>>> call to this function needs to be put in snd_dmaengine_pcm_close()
>>>>>>>> before the prtd is freed.
>>>>>>>>
>>>>>>>> - Lars
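
A minimal sketch of what that call site could look like in
snd_dmaengine_pcm_close(). dmaengine_sync() is only the proposed name and
does not exist in dmaengine yet; substream_to_prtd() is the helper already
used in sound/core/pcm_dmaengine.c:

int snd_dmaengine_pcm_close(struct snd_pcm_substream *substream)
{
	struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);

	/* Proposed: wait here until all scheduled complete callbacks for
	 * this channel have finished, before prtd is freed below. */
	dmaengine_sync(prtd->dma_chan);

	kfree(prtd);

	return 0;
}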
>>>>>>>
>>>>>>> How can we check that "all scheduled complete callbacks have finished"?
>>>>>>
>>>>>> That will be up to the dmaengine driver. But it basically comes down
>>>>>> to two things:
>>>>>>
>>>>>> 1) The driver needs to make sure that tasklet_schedule() is no
>>>>>> longer called after terminate_all() has finished.
>>>>> Some drivers can't ensure this. The DMA interrupt may still arrive
>>>>> after terminate_all().
>>>>
>>>> Most drivers can't make sure that the interrupt routine is not
>>>> executed after terminate_all() has been called, since both are fully
>>>> asynchronous. But the driver needs to take care of synchronizing the
>>>> two, e.g. using a spin_lock(), and make sure that tasklet_schedule()
>>>> is not called once terminate_all() has been called, even if the ISR
>>>> still runs. Most drivers get this right.
>>>>
>>>>>> 2) In the sync() callback, call tasklet_kill() to make sure that the
>>>>>> tasklet has finished running.
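
The sync() callback itself can then be as small as this (again only a
sketch for the hypothetical foo driver above, with the callback name still
to be decided):

static void foo_device_sync(struct dma_chan *chan)
{
	struct foo_dma_chan *c = container_of(chan, struct foo_dma_chan, chan);

	/* tasklet_kill() waits for a scheduled or running tasklet to
	 * finish, and returns right away if it is not pending. */
	tasklet_kill(&c->task);
}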
>>>>>>
>>>>>>>
>>>>>>> One concern is that adding a wait in snd_dmaengine_pcm_close() ties
>>>>>>> snd_pcm_release() to the dmaengine: if there is an error in the DMA and
>>>>>>> no callback is ever called, then snd_pcm_release() will never complete.
>>>>>>
>>>>>> The sync() function will only wait if a callback is scheduled; if
>>>>>> none is scheduled it will return immediately.
>>>>>
>>>>>
>>>>> Do you have another method to resolve this issue, or a simpler one?
>>>>
>>>> No, this is the way to go. You have two asynchronous processes, and
>>>> if you want deterministic execution ordering between the two you
>>>> need to add a synchronization primitive.
>>>>
>>>> - Lars
>>>>
>>> Thanks for your advice. It is workable, but it needs a new API added to dmaengine.h.
>>>
>>> I am thinking about another method: can we add a spin_lock in
>>> dmaengine_pcm_dma_complete() and snd_dmaengine_pcm_close(), and add a flag
>>> set by pcm_close, so that if the PCM is already closed the DMA complete
>>> callback does nothing? What do you think of this?
>>
>> Since you are freeing the memory that would store the flag, that won't work.
>>
>> - Lars
>>
> It seems the snd_pcm_substream is not released; can we add a flag in that struct?
No, there is no guarantee that the substream hasn't been freed either.