It'd really be better if you used a new timestamp (I added LINK_ESTIMATED_ATIME, which isn't used by anyone and could be reclaimed) and modified the delay estimation in your own driver rather than in the core.
Well, I'm not looking at a single driver here. I am looking at several that use large parts of the common soc framework in various ways.
I'll look at LINK_ESTIMATED_ATIME and see if I can adopt it. I'm not sure how much it will help with the delay calculation, but I suspect that the right answer could be deduced.
The LINK timestamps are supposed to be read from hardware counters close to the interface for better accuracy. When I added the LINK_ESTIMATED part, I thought this could be useful when those counters are not supported, but I realized that if the delay is accurate you can just as well use the default method of adding the delay information to the DMA position to get the timestamps - there is really no benefit in defining a new timestamp type.

In the case where the delay is not accurate (as on the platforms you are experimenting with), this might be the right solution, and we could add this timestamp in the core rather than in each driver for simplicity. It would be interesting if you shared results with the different timestamp types to see whether there is any benefit (with e.g. alsa-lib/test/audio_time.c).
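As an illustration of the kind of comparison meant here, a minimal sketch along the lines of alsa-lib/test/audio_time.c (the alsa-lib calls and the timestamp type constants mentioned in the comment are quoted from memory, so treat them as assumptions and check against your pcm.h / asound.h):

#include <stdio.h>
#include <string.h>
#include <alsa/asoundlib.h>

/* Read one status snapshot from a running PCM and print the system
 * timestamp next to the audio timestamp for the requested type
 * (e.g. SNDRV_PCM_AUDIO_TSTAMP_TYPE_DEFAULT vs. ..._LINK_ESTIMATED
 * from <sound/asound.h>). */
static int show_tstamp(snd_pcm_t *pcm, unsigned int type_requested)
{
        snd_pcm_status_t *status;
        snd_pcm_audio_tstamp_config_t config;
        snd_htimestamp_t sys_ts, audio_ts;
        int err;

        memset(&config, 0, sizeof(config));
        config.type_requested = type_requested;
        config.report_delay = 1;        /* fold the delay into the reported time */

        snd_pcm_status_alloca(&status);
        snd_pcm_status_set_audio_htstamp_config(status, &config);

        err = snd_pcm_status(pcm, status);
        if (err < 0)
                return err;

        snd_pcm_status_get_htstamp(status, &sys_ts);          /* system time */
        snd_pcm_status_get_audio_htstamp(status, &audio_ts);  /* audio time */

        printf("type %u: system %lld.%09ld audio %lld.%09ld\n",
               type_requested,
               (long long)sys_ts.tv_sec, sys_ts.tv_nsec,
               (long long)audio_ts.tv_sec, audio_ts.tv_nsec);
        return 0;
}

Running something like this for a couple of type values on the same stream, the way audio_time.c loops over its configurations, would show directly whether an estimated link timestamp buys anything over the default delay-based one on a given platform.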
The more I think about it, the more it seems to me that using a time-based estimate for the position (hw_ptr), outside of an interrupt callback, will always be more accurate than the position returned by substream->ops->pointer(). Perhaps the result of that call should simply be ignored outside an interrupt callback - or the call not even made - so as not to pollute the estimate with changed delay data.
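To make that concrete, a minimal hypothetical sketch of such a time-based estimate (estimate_hw_ptr, hw_ptr_at_irq and irq_time are made-up names, not existing kernel helpers; it assumes the driver latched the position and a ktime_get() timestamp in its period interrupt handler):

#include <linux/ktime.h>
#include <linux/math64.h>
#include <sound/pcm.h>

/* Hypothetical helper, for illustration only: estimate the current
 * buffer position from the position latched at the last period
 * interrupt plus the wall-clock time elapsed since then, at the
 * nominal sample rate. Over long intervals this accumulates error
 * from any offset between the system timer and the audio clock. */
static snd_pcm_uframes_t estimate_hw_ptr(struct snd_pcm_runtime *runtime,
                                         snd_pcm_uframes_t hw_ptr_at_irq,
                                         ktime_t irq_time)
{
        s64 elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), irq_time));
        s64 frames = div_s64(elapsed_ns * runtime->rate, NSEC_PER_SEC);

        return (hw_ptr_at_irq + frames) % runtime->buffer_size;
}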
Then you'd have to account for drift between the audio and timer sources; it's not simpler at all, and as Clemens wrote it can lead to corrupted pointers if you screw up.
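For a rough sense of the scale of that drift (illustrative numbers only, not measurements from any real platform):

#include <stdio.h>

/* Illustration with made-up numbers: how fast a purely time-based
 * position estimate drifts for a given timer vs. audio clock offset. */
int main(void)
{
        const double rate_hz = 48000.0;   /* nominal sample rate */
        const double offset_ppm = 100.0;  /* assumed clock offset */
        const double seconds = 10.0;      /* time since last resync */

        double frames_off = rate_hz * offset_ppm * 1e-6 * seconds;

        /* 48000 * 100e-6 * 10 = 48 frames, i.e. about 1 ms of error */
        printf("%.1f frames (%.2f ms) of position error after %.0f s\n",
               frames_off, frames_off / rate_hz * 1e3, seconds);
        return 0;
}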