On 19/07/16 16:58, Pierre-Louis Bossart wrote:
In update_audio_tstamp() it either uses runtime->delay, if runtime->audio_tstamp_config.report_delay is true, or applies a delta - not both.
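For reference, that branch looks roughly like this (a simplified sketch, not the exact upstream code; delta_frames is just a stand-in name for the proposed interpolation since the last period interrupt):

#include <sound/pcm.h>

/*
 * Simplified sketch of the SNDRV_PCM_AUDIO_TSTAMP_TYPE_DEFAULT branch in
 * update_audio_tstamp() - not the exact upstream code; delta_frames is a
 * hypothetical name for the proposed interpolation since the last interrupt.
 */
static u64 default_audio_frames(struct snd_pcm_substream *substream,
				snd_pcm_uframes_t delta_frames)
{
	struct snd_pcm_runtime *runtime = substream->runtime;
	u64 audio_frames = runtime->hw_ptr_wrap + runtime->status->hw_ptr;

	if (runtime->audio_tstamp_config.report_delay) {
		/* fold in the codec/DMA delay when userspace asked for it */
		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
			audio_frames -= runtime->delay;
		else
			audio_frames += runtime->delay;
	} else {
		/* otherwise apply the interpolated delta - never both */
		audio_frames += delta_frames;
	}

	return audio_frames;	/* later converted to ns using runtime->rate */
}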
ah yes, I did miss it in the code. Maybe a comment would be nice to avoid readers being thrown off.
ok
I still have mixed feelings about the code. It seems to make sense for the case where the .pointer is updated at every period, but it's not using the regular BATCH definition. With tests such as runtime->status->hw_ptr == runtime->hw_ptr_interrupt you could end up modifying the results by a small amount for other hardware platforms, depending on when the timestamp is taken (in other words, scheduling latency would affect audio timestamps).
Yes, that could be true - there could be some jitter - but I think it will still give more accurate results. Note that the adjustment to the reported audio_tstamp will only occur for the AUDIO_TSTAMP_TYPE_DEFAULT case and when the platform has not updated the (hw_ptr) position outside of the interrupt callback, independent of whether the BATCH flag is used.
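Concretely, the gating I have in mind is along these lines (a hedged sketch rather than the exact patch; can_interpolate_audio_tstamp is only an illustrative name):

#include <sound/pcm.h>

/*
 * Hedged sketch of the gating described above - not the exact patch.
 * Interpolate only for the DEFAULT timestamp type, and only when the
 * position has not been refined outside the period interrupt (i.e. the
 * current hw_ptr still matches the one recorded by the interrupt handler).
 */
static bool can_interpolate_audio_tstamp(struct snd_pcm_runtime *runtime)
{
	if (runtime->audio_tstamp_config.type_requested !=
	    SNDRV_PCM_AUDIO_TSTAMP_TYPE_DEFAULT)
		return false;

	return runtime->status->hw_ptr == runtime->hw_ptr_interrupt;
}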
There is actually an argument for being less restrictive. Hardware position updates that happen outside of an interrupt may (and generally will) be less accurate than the update mechanism I propose, because such position updates are usually limited to DMA residue granularity, which is relatively coarse.
This will be a problem if your timestamps are REALTIME, since they can jump backwards. The expectation is to use MONOTONIC (or better, MONOTONIC_RAW to avoid NTP corrections), but with the ALSA API the application can choose REALTIME.
Ok, I'll put in a check. Of course there are cases where one might actually want REALTIME.
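Something along these lines, perhaps (again just a sketch; tstamp_clock_is_monotonic is an illustrative name, not an existing helper):

#include <sound/pcm.h>

/*
 * Hedged sketch of such a check - not the exact patch. Skip the delta
 * adjustment unless the system timestamps come from a monotonic clock,
 * since CLOCK_REALTIME can jump backwards (NTP steps, settimeofday) and
 * would corrupt the interpolation.
 */
static bool tstamp_clock_is_monotonic(struct snd_pcm_runtime *runtime)
{
	return runtime->tstamp_type == SNDRV_PCM_TSTAMP_TYPE_MONOTONIC ||
	       runtime->tstamp_type == SNDRV_PCM_TSTAMP_TYPE_MONOTONIC_RAW;
}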
Note: For my application, I only actually care about the changes implemented using update_delay(). The refinement to update_audio_tstamp() just seemed to me to be part of the same issue. If the update_audio_tstamp() change is considered too controversial then I'd be happy to drop it.