[alsa-devel] [PATCH] ALSA: hda - Apply codec delay to wallclock.
For playback, add the codec-side delay to the timestamp; for capture, subtract it. This brings the timestamps in line with the codec delay that was recently added to the delay reporting.
Signed-off-by: Dylan Reid <dgreid@chromium.org>
---
 sound/pci/hda/hda_intel.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index 735567e..ec8ac71 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -1889,6 +1889,27 @@ static void azx_timecounter_init(struct snd_pcm_substream *substream,
 	tc->cycle_last = last;
 }
 
+static u64 azx_add_codec_delay_nsec(struct snd_pcm_substream *substream,
+				    u64 nsec)
+{
+	struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
+	int stream = substream->stream;
+	struct hda_pcm_stream *hinfo = apcm->hinfo[stream];
+	struct hda_codec *codec = apcm->codec;
+	u64 codec_nsec;
+
+	if (hinfo->ops.get_delay) {
+		codec_nsec =
+			hinfo->ops.get_delay(hinfo, codec, substream) * 1000000;
+		if (stream == SNDRV_PCM_STREAM_CAPTURE)
+			nsec = (nsec > codec_nsec) ? nsec - codec_nsec : 0;
+		else if (stream == SNDRV_PCM_STREAM_PLAYBACK)
+			nsec += codec_nsec;
+	}
+
+	return nsec;
+}
+
 static int azx_get_wallclock_tstamp(struct snd_pcm_substream *substream,
 				    struct timespec *ts)
 {
@@ -1897,6 +1918,7 @@ static int azx_get_wallclock_tstamp(struct snd_pcm_substream *substream,
 
 	nsec = timecounter_read(&azx_dev->azx_tc);
 	nsec = div_u64(nsec, 3); /* can be optimized */
+	nsec = azx_add_codec_delay_nsec(substream, nsec);
 
 	*ts = ns_to_timespec(nsec);
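A side note on the div_u64(nsec, 3) above: the HDA wall clock runs at 24 MHz, so one tick is 1e9/24e6 = 125/3 ns; azx_timecounter_init scales the counter by 125 and leaves the divide-by-3 for the read path. A minimal arithmetic sketch (the helper name is hypothetical, not part of the patch):

	/*
	 * Hypothetical helper, for illustration only: convert 24 MHz
	 * wall-clock ticks to nanoseconds. One tick = 1e9 / 24e6 ns
	 * = 125/3 ns, so scale by 125 (as the timecounter's mult does)
	 * and divide by 3 afterwards.
	 */
	static u64 azx_ticks_to_nsec(u64 ticks)
	{
		return div_u64(ticks * 125, 3);
	}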
At Fri, 5 Apr 2013 17:22:04 -0700, Dylan Reid wrote:
> For playback, add the codec-side delay to the timestamp; for capture,
> subtract it. This brings the timestamps in line with the codec delay
> that was recently added to the delay reporting.
>
> Signed-off-by: Dylan Reid <dgreid@chromium.org>
> ---
>  sound/pci/hda/hda_intel.c | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
>
> diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
> index 735567e..ec8ac71 100644
> --- a/sound/pci/hda/hda_intel.c
> +++ b/sound/pci/hda/hda_intel.c
> @@ -1889,6 +1889,27 @@ static void azx_timecounter_init(struct snd_pcm_substream *substream,
>  	tc->cycle_last = last;
>  }
>  
> +static u64 azx_add_codec_delay_nsec(struct snd_pcm_substream *substream,
> +				    u64 nsec)
> +{
> +	struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
> +	int stream = substream->stream;
> +	struct hda_pcm_stream *hinfo = apcm->hinfo[stream];
> +	struct hda_codec *codec = apcm->codec;
> +	u64 codec_nsec;
> +
> +	if (hinfo->ops.get_delay) {
> +		codec_nsec =
> +			hinfo->ops.get_delay(hinfo, codec, substream) * 1000000;
> +		if (stream == SNDRV_PCM_STREAM_CAPTURE)
> +			nsec = (nsec > codec_nsec) ? nsec - codec_nsec : 0;
> +		else if (stream == SNDRV_PCM_STREAM_PLAYBACK)
> +			nsec += codec_nsec;
I think the compensation is applied in the wrong direction.

For playback, the wallclock indicates the timestamp corresponding to the position currently being played. With the codec delay, the actual position is further back, i.e. the delay has to be subtracted from the timestamp.
Takashi
> +	}
> +
> +	return nsec;
> +}
> +
>  static int azx_get_wallclock_tstamp(struct snd_pcm_substream *substream,
>  				    struct timespec *ts)
>  {
> @@ -1897,6 +1918,7 @@ static int azx_get_wallclock_tstamp(struct snd_pcm_substream *substream,
>  
>  	nsec = timecounter_read(&azx_dev->azx_tc);
>  	nsec = div_u64(nsec, 3); /* can be optimized */
> +	nsec = azx_add_codec_delay_nsec(substream, nsec);
>  
>  	*ts = ns_to_timespec(nsec);
> --
> 1.8.1.3.605.g02339dd
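A sketch of the sign convention Takashi describes, using the patch's variable names (the helper is hypothetical, not part of the posted patch): the wallclock refers to the sample currently at the output, which entered the pipeline codec_nsec earlier, so playback must subtract the codec delay, clamping at zero.

	/*
	 * Hypothetical illustration of Takashi's point: for playback the
	 * codec delay is subtracted from the wallclock timestamp, the
	 * opposite of what the posted patch did.
	 */
	static u64 apply_playback_delay(u64 nsec, u64 codec_nsec)
	{
		return (nsec > codec_nsec) ? nsec - codec_nsec : 0;
	}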
On Sun, Apr 7, 2013 at 12:41 AM, Takashi Iwai <tiwai@suse.de> wrote:
> At Fri, 5 Apr 2013 17:22:04 -0700, Dylan Reid wrote:
>> For playback, add the codec-side delay to the timestamp; for capture,
>> subtract it. This brings the timestamps in line with the codec delay
>> that was recently added to the delay reporting.
>>
>> Signed-off-by: Dylan Reid <dgreid@chromium.org>
>> ---
>>  sound/pci/hda/hda_intel.c | 22 ++++++++++++++++++++++
>>  1 file changed, 22 insertions(+)
>>
>> diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
>> index 735567e..ec8ac71 100644
>> --- a/sound/pci/hda/hda_intel.c
>> +++ b/sound/pci/hda/hda_intel.c
>> @@ -1889,6 +1889,27 @@ static void azx_timecounter_init(struct snd_pcm_substream *substream,
>>  	tc->cycle_last = last;
>>  }
>>  
>> +static u64 azx_add_codec_delay_nsec(struct snd_pcm_substream *substream,
>> +				    u64 nsec)
>> +{
>> +	struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
>> +	int stream = substream->stream;
>> +	struct hda_pcm_stream *hinfo = apcm->hinfo[stream];
>> +	struct hda_codec *codec = apcm->codec;
>> +	u64 codec_nsec;
>> +
>> +	if (hinfo->ops.get_delay) {
>> +		codec_nsec =
>> +			hinfo->ops.get_delay(hinfo, codec, substream) * 1000000;
>> +		if (stream == SNDRV_PCM_STREAM_CAPTURE)
>> +			nsec = (nsec > codec_nsec) ? nsec - codec_nsec : 0;
>> +		else if (stream == SNDRV_PCM_STREAM_PLAYBACK)
>> +			nsec += codec_nsec;
>
> I think the compensation is applied in the wrong direction.
>
> For playback, the wallclock indicates the timestamp corresponding to
> the position currently being played. With the codec delay, the actual
> position is further back, i.e. the delay has to be subtracted from the
> timestamp.
OK, I was thinking of the timestamp as the wall time when the next sample will be played. If it represents the time the currently rendered sample was fed into the pipeline, this makes sense; I'll change it to subtract for output. In that case, what does it mean for input? I was thinking of it as the time the sample was converted by the A-to-D.
Thanks,
-dg
> Takashi
>
>> +	}
>> +
>> +	return nsec;
>> +}
>> +
>>  static int azx_get_wallclock_tstamp(struct snd_pcm_substream *substream,
>>  				    struct timespec *ts)
>>  {
>> @@ -1897,6 +1918,7 @@ static int azx_get_wallclock_tstamp(struct snd_pcm_substream *substream,
>>  
>>  	nsec = timecounter_read(&azx_dev->azx_tc);
>>  	nsec = div_u64(nsec, 3); /* can be optimized */
>> +	nsec = azx_add_codec_delay_nsec(substream, nsec);
>>  
>>  	*ts = ns_to_timespec(nsec);
>> --
>> 1.8.1.3.605.g02339dd
> +	if (hinfo->ops.get_delay) {
> +		codec_nsec =
> +			hinfo->ops.get_delay(hinfo, codec, substream) * 1000000;
> +		if (stream == SNDRV_PCM_STREAM_CAPTURE)
> +			nsec = (nsec > codec_nsec) ? nsec - codec_nsec : 0;
> +		else if (stream == SNDRV_PCM_STREAM_PLAYBACK)
> +			nsec += codec_nsec;

Can .get_delay be modified to provide better resolution than a millisecond? If you already convert to time, microseconds would seem like a better fit. Your codec seems to report frames (i.e. 20.83 us).

-Pierre
At Mon, 08 Apr 2013 11:31:12 -0500, Pierre-Louis Bossart wrote:
>> +	if (hinfo->ops.get_delay) {
>> +		codec_nsec =
>> +			hinfo->ops.get_delay(hinfo, codec, substream) * 1000000;
>> +		if (stream == SNDRV_PCM_STREAM_CAPTURE)
>> +			nsec = (nsec > codec_nsec) ? nsec - codec_nsec : 0;
>> +		else if (stream == SNDRV_PCM_STREAM_PLAYBACK)
>> +			nsec += codec_nsec;
>
> Can .get_delay be modified to provide better resolution than a
> millisecond? If you already convert to time, microseconds would seem
> like a better fit. Your codec seems to report frames (i.e. 20.83 us).
Actually the patch must be wrong -- get_delay returns the delay in frames (sample units).
Takashi
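Since get_delay reports frames, the millisecond-based multiply in the patch is off by the sample rate. A hedged sketch of a frames-to-nanoseconds conversion using the running stream's rate (the helper name is hypothetical):

	/*
	 * Hypothetical helper: convert a codec delay reported in frames
	 * to nanoseconds, using the running stream's sample rate.
	 * ns = frames * 1e9 / rate; div_u64 keeps the 64-bit division
	 * safe on 32-bit platforms.
	 */
	static u64 codec_frames_to_nsec(struct snd_pcm_substream *substream,
					unsigned int frames)
	{
		return div_u64((u64)frames * 1000000000ULL,
			       substream->runtime->rate);
	}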
On Mon, Apr 8, 2013 at 9:34 AM, Takashi Iwai <tiwai@suse.de> wrote:
At Mon, 08 Apr 2013 11:31:12 -0500, Pierre-Louis Bossart wrote:
>>> +	if (hinfo->ops.get_delay) {
>>> +		codec_nsec =
>>> +			hinfo->ops.get_delay(hinfo, codec, substream) * 1000000;
>>> +		if (stream == SNDRV_PCM_STREAM_CAPTURE)
>>> +			nsec = (nsec > codec_nsec) ? nsec - codec_nsec : 0;
>>> +		else if (stream == SNDRV_PCM_STREAM_PLAYBACK)
>>> +			nsec += codec_nsec;
>>
>> Can .get_delay be modified to provide better resolution than a
>> millisecond? If you already convert to time, microseconds would seem
>> like a better fit. Your codec seems to report frames (i.e. 20.83 us).
>
> Actually the patch must be wrong -- get_delay returns the delay in
> frames (sample units).

Of course, you're right. That's annoying; it will have to convert from ms to frames (in get_delay) and then to ns here. Converting from frames to ns here is the better approach, but for this particular codec all I have is ms resolution.

Pierre, what should the capture timestamp represent? When the sample hits the A-to-D, or when it is read out of the buffer?
Thanks,
-dg
> Takashi
> Of course, you're right. That's annoying; it will have to convert from
> ms to frames (in get_delay) and then to ns here. Converting from
> frames to ns here is the better approach, but for this particular
> codec all I have is ms resolution.

This is odd; it looks completely arbitrary...

> Pierre, what should the capture timestamp represent? When the sample
> hits the A-to-D, or when it is read out of the buffer?

When the samples hit the A-to-D, as close as possible to the input (or the serial link, if the codec doesn't report a delay).

-Pierre
On Mon, Apr 8, 2013 at 4:37 PM, Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> wrote:
>> Of course, you're right. That's annoying; it will have to convert
>> from ms to frames (in get_delay) and then to ns here. Converting from
>> frames to ns here is the better approach, but for this particular
>> codec all I have is ms resolution.
>
> This is odd; it looks completely arbitrary...

It does. I'll change this patch to convert from frames to ns, and check whether the codec vendor can supply more accurate DSP latency numbers.

>> Pierre, what should the capture timestamp represent? When the sample
>> hits the A-to-D, or when it is read out of the buffer?
>
> When the samples hit the A-to-D, as close as possible to the input (or
> the serial link, if the codec doesn't report a delay).

OK, the codec latency will be subtracted from both the playback and capture times.

Thanks for the clarifications!

-dg

> -Pierre
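Taken together, the thread's conclusions -- convert the frame-based get_delay result to nanoseconds, and subtract the codec latency for both directions -- would reshape the helper roughly as follows. This is a sketch of the plan Dylan states above, with a hypothetical function name, not a posted v2:

	static u64 azx_subtract_codec_delay(struct snd_pcm_substream *substream,
					    u64 nsec)
	{
		struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
		struct hda_pcm_stream *hinfo = apcm->hinfo[substream->stream];
		u64 codec_frames, codec_nsec;

		if (!hinfo->ops.get_delay)
			return nsec;

		/* get_delay reports frames; convert via the stream's rate */
		codec_frames = hinfo->ops.get_delay(hinfo, apcm->codec,
						    substream);
		codec_nsec = div_u64(codec_frames * 1000000000ULL,
				     substream->runtime->rate);

		/* the delayed samples are older than the DMA position's
		 * timestamp, so subtract for both playback and capture,
		 * clamping at zero */
		return (nsec > codec_nsec) ? nsec - codec_nsec : 0;
	}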