[alsa-devel] [RFC - AAF PCM plugin 0/5] Introduce AVTP Audio Format (AAF) plugin
Hi all,
This patchset introduces the AAF plugin to the alsa-plugins project, enabling TSN audio applications on Linux.
For those not familiar with TSN: it stands for Time-Sensitive Networking, a set of IEEE technologies (enhancements to IEEE 802.1Q and Ethernet protocols) that provide precise time synchronization, bounded latency, and application interoperability to bridged networks. These technologies enable time-sensitive applications such as audio, video and control loops to run on top of bridged networks, co-existing with regular applications. TSN technologies are a superset of the Audio Video Bridging (AVB) technologies, also developed by IEEE. AVTP is the protocol defined to transport data in a TSN system, and AAF is one of the formats defined by AVTP to encapsulate audio data. AAF is specified in Chapter 7 of the AVTP spec [1].
This work is part of the effort to enable TSN technologies in the upstream Linux ecosystem. All building blocks required to enable TSN audio applications have already been developed and pushed to upstream projects. Time synchronization features are provided by the linuxptp project [2], bounded latency features are provided by the Linux Traffic Control subsystem since kernel version 4.15 [3], and AVTP packetization is provided by libavtp [4]. What is currently missing in the ecosystem is a piece of software that plumbs it all together and interfaces properly with the Linux audio system. That is the purpose of the AAF plugin introduced here.
The AAF plugin is a PCM plugin that uses the AVTP protocol to transmit/receive audio data through a TSN-capable network. When operating in playback mode, the plugin reads PCM samples from the audio buffer, encapsulates them into AVTP packets, and transmits them to the network, mimicking a typical AVTP Talker. When operating in capture mode, the plugin receives AVTP packets from the network, retrieves the PCM samples, and presents them to the alsa-lib layer at the presentation time, mimicking a typical AVTP Listener.
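For illustration only, here is a condensed sketch of the talker-side transmit step just described. It is not the plugin's code verbatim; names follow the code added later in this series, the libavtp calls are the ones used by the patches, and error handling is omitted:

    #include <stdint.h>
    #include <sys/socket.h>
    #include <linux/if_packet.h>
    #include <avtp.h>
    #include <avtp_aaf.h>

    /* Stamp per-packet fields and transmit one AAF PDU whose payload has
     * already been filled with PCM frames (sketch only).
     */
    static int send_one_aaf_pdu(int sk_fd, const struct sockaddr_ll *sk_addr,
                                struct avtp_stream_pdu *pdu, size_t pdu_size,
                                uint8_t *pdu_seq, uint64_t presentation_time)
    {
            /* 8-bit sequence number, wraps naturally. */
            avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_SEQ_NUM, (*pdu_seq)++);

            /* Tells the Listener when to present the samples. */
            avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_TIMESTAMP, presentation_time);

            if (sendto(sk_fd, pdu, pdu_size, 0,
                       (const struct sockaddr *)sk_addr,
                       sizeof(*sk_addr)) != (ssize_t)pdu_size)
                    return -1;

            return 0;
    }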
Details about the plugin implementation are provided in the patch descriptions, and information on how to configure and run the AAF plugin is provided in the doc/aaf.txt file.
Best regards,
Andre
[1] 1722-2016 - IEEE Standard for a Transport Protocol for Time-Sensitive Applications in Bridged Local Area Networks
[2] http://linuxptp.sourceforge.net
[3] https://patchwork.ozlabs.org/cover/826678/
[4] https://github.com/AVnu/libavtp
Andre Guedes (5):
  aaf: Introduce plugin skeleton
  aaf: Load configuration parameters
  aaf: Implement Playback mode support
  aaf: Prepare for Capture mode support
  aaf: Implement Capture mode support
 Makefile.am     |    3 +
 aaf/Makefile.am |    9 +
 aaf/pcm_aaf.c   | 1172 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 configure.ac    |    9 +
 doc/aaf.txt     |  110 ++++++
 5 files changed, 1303 insertions(+)
 create mode 100644 aaf/Makefile.am
 create mode 100644 aaf/pcm_aaf.c
 create mode 100644 doc/aaf.txt
This patch introduces the skeleton code for the AAF plugin, as well as the build system bits needed to get the plugin built. Following the approach of other plugins, the AAF plugin is only built if its dependency (libavtp) is detected by configure.
Follow-up patches implement support for both playback and capture modes.
Signed-off-by: Andre Guedes <andre.guedes@intel.com>
---
 Makefile.am     |  3 ++
 aaf/Makefile.am |  9 ++++++
 aaf/pcm_aaf.c   | 91 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 configure.ac    |  9 ++++++
 doc/aaf.txt     | 18 ++++++++++++
 5 files changed, 130 insertions(+)
 create mode 100644 aaf/Makefile.am
 create mode 100644 aaf/pcm_aaf.c
 create mode 100644 doc/aaf.txt
diff --git a/Makefile.am b/Makefile.am index 27f61a4..af0e9c4 100644 --- a/Makefile.am +++ b/Makefile.am @@ -35,6 +35,9 @@ endif if HAVE_SPEEXDSP SUBDIRS += speex endif +if HAVE_AAF +SUBDIRS += aaf +endif
EXTRA_DIST = gitcompile version COPYING.GPL m4/attributes.m4 AUTOMAKE_OPTIONS = foreign diff --git a/aaf/Makefile.am b/aaf/Makefile.am new file mode 100644 index 0000000..492b883 --- /dev/null +++ b/aaf/Makefile.am @@ -0,0 +1,9 @@ +asound_module_pcm_aaf_LTLIBRARIES = libasound_module_pcm_aaf.la + +asound_module_pcm_aafdir = @ALSA_PLUGIN_DIR@ + +AM_CFLAGS = @ALSA_CFLAGS@ @AVTP_CFLAGS@ +AM_LDFLAGS = -module -avoid-version -export-dynamic -no-undefined $(LDFLAGS_NOUNDEFINED) + +libasound_module_pcm_aaf_la_SOURCES = pcm_aaf.c +libasound_module_pcm_aaf_la_LIBADD = @ALSA_LIBS@ @AVTP_LIBS@ diff --git a/aaf/pcm_aaf.c b/aaf/pcm_aaf.c new file mode 100644 index 0000000..7890e10 --- /dev/null +++ b/aaf/pcm_aaf.c @@ -0,0 +1,91 @@ +/* + * AVTP Audio Format (AAF) PCM Plugin + * + * Copyright (c) 2018, Intel Corporation + * + * This library is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as + * published by the Free Software Foundation; either version 2.1 of + * the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include <alsa/asoundlib.h> +#include <alsa/pcm_external.h> + +typedef struct { + snd_pcm_ioplug_t io; +} snd_pcm_aaf_t; + +static int aaf_close(snd_pcm_ioplug_t *io) +{ + snd_pcm_aaf_t *aaf = io->private_data; + + if (!aaf) + return -EBADFD; + + free(aaf); + aaf = NULL; + return 0; +} + +static snd_pcm_sframes_t aaf_pointer(snd_pcm_ioplug_t *io) +{ + return 0; +} + +static int aaf_start(snd_pcm_ioplug_t *io) +{ + return 0; +} + +static int aaf_stop(snd_pcm_ioplug_t *io) +{ + return 0; +} + +static const snd_pcm_ioplug_callback_t aaf_callback = { + .close = aaf_close, + .pointer = aaf_pointer, + .start = aaf_start, + .stop = aaf_stop, +}; + +SND_PCM_PLUGIN_DEFINE_FUNC(aaf) +{ + snd_pcm_aaf_t *aaf; + int res; + + aaf = calloc(1, sizeof(*aaf)); + if (!aaf) { + SNDERR("Failed to allocate memory"); + return -ENOMEM; + } + + aaf->io.version = SND_PCM_IOPLUG_VERSION; + aaf->io.name = "AVTP Audio Format (AAF) Plugin"; + aaf->io.callback = &aaf_callback; + aaf->io.private_data = aaf; + res = snd_pcm_ioplug_create(&aaf->io, name, stream, mode); + if (res < 0) { + SNDERR("Failed to create ioplug instance"); + goto err; + } + + *pcmp = aaf->io.pcm; + return 0; + +err: + free(aaf); + return res; +} + +SND_PCM_PLUGIN_SYMBOL(aaf); diff --git a/configure.ac b/configure.ac index 3e2d233..1524cba 100644 --- a/configure.ac +++ b/configure.ac @@ -176,6 +176,14 @@ fi test "x$prefix" = xNONE && prefix=$ac_default_prefix test "x$exec_prefix" = xNONE && exec_prefix=$prefix
+AC_ARG_ENABLE([aaf], + AS_HELP_STRING([--disable-aaf], [Disable building of AAF plugin])) + +if test "x$enable_aaf" != "xno"; then + PKG_CHECK_MODULES(AVTP, avtp >= 0.1, [HAVE_AAF=yes], [HAVE_AAF=no]) +fi +AM_CONDITIONAL(HAVE_AAF, test x$HAVE_AAF = xyes) + dnl ALSA plugin directory AC_ARG_WITH(plugindir, AS_HELP_STRING([--with-plugindir=dir], @@ -251,6 +259,7 @@ AC_OUTPUT([ usb_stream/Makefile speex/Makefile arcam-av/Makefile + aaf/Makefile ])
dnl Show the build conditions diff --git a/doc/aaf.txt b/doc/aaf.txt new file mode 100644 index 0000000..22144de --- /dev/null +++ b/doc/aaf.txt @@ -0,0 +1,18 @@ +AVTP Audio Format (AAF) Plugin +============================== + +Overview +-------- + +The AAF plugin is a PCM plugin that uses Audio Video Transport Protocol (AVTP) +to transmit/receive audio samples through a Time-Sensitive Network (TSN) +capable network. The plugin enables media applications to easily implement AVTP +Talker and Listener functionalities. + +Dependency +---------- + +The AAF plugin uses libavtp to handle AVTP packetization. Libavtp source code +can be found in https://github.com/AVnu/libavtp as well as instructions to +build and install it. If libavtp isn't detected by configure, the plugin isn't +built.
This patch implements the infrastructure to load the plugin configuration from the ALSA configuration file. The configuration is loaded in the open() callback.
All configuration parameters are described in detail in the doc/aaf.txt file.
Signed-off-by: Andre Guedes <andre.guedes@intel.com>
---
 aaf/pcm_aaf.c | 126 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 doc/aaf.txt   |  52 ++++++++++++++++++++++++
 2 files changed, 178 insertions(+)
diff --git a/aaf/pcm_aaf.c b/aaf/pcm_aaf.c index 7890e10..4c6f031 100644 --- a/aaf/pcm_aaf.c +++ b/aaf/pcm_aaf.c @@ -20,11 +20,133 @@
#include <alsa/asoundlib.h> #include <alsa/pcm_external.h> +#include <linux/if.h> +#include <linux/if_ether.h> +#include <string.h> +#include <stdint.h> + +#define NSEC_PER_USEC 1000ULL
typedef struct { snd_pcm_ioplug_t io; + + char ifname[IFNAMSIZ]; + uint8_t addr[ETH_ALEN]; + int prio; + uint64_t streamid; + int mtt; + int t_uncertainty; + int frames_per_pkt; } snd_pcm_aaf_t;
+static int aaf_load_config(snd_pcm_aaf_t *aaf, snd_config_t *conf) +{ + snd_config_iterator_t cur, next; + + snd_config_for_each(cur, next, conf) { + snd_config_t *entry = snd_config_iterator_entry(cur); + const char *id; + + if (snd_config_get_id(entry, &id) < 0) + goto err; + + if (strcmp(id, "comment") == 0 || + strcmp(id, "type") == 0 || + strcmp(id, "hint") == 0) + continue; + + if (strcmp(id, "ifname") == 0) { + const char *ifname; + + if (snd_config_get_string(entry, &ifname) < 0) + goto err; + + snprintf(aaf->ifname, sizeof(aaf->ifname), "%s", + ifname); + } else if (strcmp(id, "addr") == 0) { + const char *addr; + int n; + + if (snd_config_get_string(entry, &addr) < 0) + goto err; + + n = sscanf(addr, "%hhx:%hhx:%hhx:%hhx:%hhx:%hhx", + &aaf->addr[0], &aaf->addr[1], + &aaf->addr[2], &aaf->addr[3], + &aaf->addr[4], &aaf->addr[5]); + if (n != 6) + goto err; + } else if (strcmp(id, "prio") == 0) { + long prio; + + if (snd_config_get_integer(entry, &prio) < 0) + goto err; + + if (prio < 0) + goto err; + + aaf->prio = prio; + } else if (strcmp(id, "streamid") == 0) { + const char *streamid; + int n; + uint64_t buf[7]; + + if (snd_config_get_string(entry, &streamid) < 0) + goto err; + + n = sscanf(streamid, "%lx:%lx:%lx:%lx:%lx:%lx:%lx", + &buf[0], &buf[1], &buf[2], &buf[3], + &buf[4], &buf[5], &buf[6]); + if (n != 7) + goto err; + + aaf->streamid = buf[0] << 56 | buf[1] << 48 | + buf[2] << 40 | buf[3] << 32 | + buf[4] << 24 | buf[5] << 16 | + buf[6]; + } else if (strcmp(id, "mtt") == 0) { + long mtt; + + if (snd_config_get_integer(entry, &mtt) < 0) + goto err; + + if (mtt < 0) + goto err; + + aaf->mtt = mtt * NSEC_PER_USEC; + } else if (strcmp(id, "time_uncertainty") == 0) { + long t_uncertainty; + + if (snd_config_get_integer(entry, &t_uncertainty) < 0) + goto err; + + if (t_uncertainty < 0) + goto err; + + aaf->t_uncertainty = t_uncertainty * NSEC_PER_USEC; + } else if (strcmp(id, "frames_per_pkt") == 0) { + long frames_per_pkt; + + if (snd_config_get_integer(entry, &frames_per_pkt) < 0) + goto err; + + if (frames_per_pkt < 0) + goto err; + + aaf->frames_per_pkt = frames_per_pkt; + } else { + SNDERR("Invalid configuration: %s", id); + goto err; + } + } + + return 0; + +err: + SNDERR("Error loading device configuration"); + return -EINVAL; +} + static int aaf_close(snd_pcm_ioplug_t *io) { snd_pcm_aaf_t *aaf = io->private_data; @@ -70,6 +192,10 @@ SND_PCM_PLUGIN_DEFINE_FUNC(aaf) return -ENOMEM; }
+ res = aaf_load_config(aaf, conf); + if (res < 0) + goto err; + aaf->io.version = SND_PCM_IOPLUG_VERSION; aaf->io.name = "AVTP Audio Format (AAF) Plugin"; aaf->io.callback = &aaf_callback; diff --git a/doc/aaf.txt b/doc/aaf.txt index 22144de..24ea888 100644 --- a/doc/aaf.txt +++ b/doc/aaf.txt @@ -16,3 +16,55 @@ The AAF plugin uses libavtp to handle AVTP packetization. Libavtp source code can be found in https://github.com/AVnu/libavtp as well as instructions to build and install it. If libavtp isn't detected by configure, the plugin isn't built. + +Plugin Configuration +-------------------- + +The plugin parameters are passed via ALSA configuration file. They are defined +as follows: + + * ifname: Network interface used to transmit/receive AVTP packets. + + * addr: Stream destination MAC address. + + * prio: Priority used by the plugin to transmit AVTP traffic. This + option is relevant only when operating in playback mode. + + * streamid: Stream ID associated with the AAF stream transmitted or + received by the plugin. + + * mtt: Maximum Transit Time (in microseconds) as defined in AVTP spec + section 4.3.3. This option is relevant only when operating in + playback mode. + + * time_uncertainty: Maximum Time Uncertainty (in microseconds) as + defined by AVTP spec section 4.3.3. This option is relevant only when + operating in playback mode. + + * frames_per_pkt: Number of audio frames transmitted in one AVTP + packet. + +The plugin provides the PCM type "aaf". Configure an AAF PCM virtual device +according to the AAF stream you want to transmit or receive. A hypothetical +configuration file is shown below: + + pcm.aaf { + type aaf + ifname eth0 + addr AA:AA:AA:AA:AA:AA + prio 3 + streamid BB:BB:BB:BB:BB:BB:0001 + mtt 2000 + time_uncertainty 125 + frames_per_pkt 6 + } + +Put the above to ~/.asoundrc (or /etc/asound.conf), and use the AAF PCM virtual +device with your ALSA apps. For example, to stream the content from a wav file +through the network, run: + + $ aplay -Daaf foo.wav + +To receive the AAF stream generated by the previous command, run: + + $ arecord -Daaf
On 8/20/18 8:06 PM, Andre Guedes wrote:
This patch implements the infrastructure to load the plugin configuration from the ALSA configuration file. The configuration is loaded in the open() callback.
All configuration parameters are described in detail in the doc/aaf.txt file.
Signed-off-by: Andre Guedes <andre.guedes@intel.com>
 aaf/pcm_aaf.c | 126 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 doc/aaf.txt   |  52 ++++++++++++++++++++++++
 2 files changed, 178 insertions(+)
diff --git a/aaf/pcm_aaf.c b/aaf/pcm_aaf.c index 7890e10..4c6f031 100644 --- a/aaf/pcm_aaf.c +++ b/aaf/pcm_aaf.c @@ -20,11 +20,133 @@
#include <alsa/asoundlib.h> #include <alsa/pcm_external.h> +#include <linux/if.h> +#include <linux/if_ether.h> +#include <string.h> +#include <stdint.h>
+#define NSEC_PER_USEC 1000ULL
typedef struct { snd_pcm_ioplug_t io;
- char ifname[IFNAMSIZ];
- uint8_t addr[ETH_ALEN];
- int prio;
- uint64_t streamid;
- int mtt;
- int t_uncertainty;
- int frames_per_pkt; } snd_pcm_aaf_t;
+static int aaf_load_config(snd_pcm_aaf_t *aaf, snd_config_t *conf) +{
- snd_config_iterator_t cur, next;
- snd_config_for_each(cur, next, conf) {
snd_config_t *entry = snd_config_iterator_entry(cur);
const char *id;
if (snd_config_get_id(entry, &id) < 0)
goto err;
if (strcmp(id, "comment") == 0 ||
strcmp(id, "type") == 0 ||
strcmp(id, "hint") == 0)
continue;
if (strcmp(id, "ifname") == 0) {
const char *ifname;
if (snd_config_get_string(entry, &ifname) < 0)
goto err;
snprintf(aaf->ifname, sizeof(aaf->ifname), "%s",
ifname);
} else if (strcmp(id, "addr") == 0) {
const char *addr;
int n;
if (snd_config_get_string(entry, &addr) < 0)
goto err;
n = sscanf(addr, "%hhx:%hhx:%hhx:%hhx:%hhx:%hhx",
&aaf->addr[0], &aaf->addr[1],
&aaf->addr[2], &aaf->addr[3],
&aaf->addr[4], &aaf->addr[5]);
if (n != 6)
goto err;
} else if (strcmp(id, "prio") == 0) {
long prio;
if (snd_config_get_integer(entry, &prio) < 0)
goto err;
if (prio < 0)
goto err;
aaf->prio = prio;
} else if (strcmp(id, "streamid") == 0) {
const char *streamid;
int n;
uint64_t buf[7];
if (snd_config_get_string(entry, &streamid) < 0)
goto err;
n = sscanf(streamid, "%lx:%lx:%lx:%lx:%lx:%lx:%lx",
&buf[0], &buf[1], &buf[2], &buf[3],
&buf[4], &buf[5], &buf[6]);
if (n != 7)
goto err;
aaf->streamid = buf[0] << 56 | buf[1] << 48 |
buf[2] << 40 | buf[3] << 32 |
buf[4] << 24 | buf[5] << 16 |
buf[6];
} else if (strcmp(id, "mtt") == 0) {
long mtt;
if (snd_config_get_integer(entry, &mtt) < 0)
goto err;
if (mtt < 0)
goto err;
aaf->mtt = mtt * NSEC_PER_USEC;
} else if (strcmp(id, "time_uncertainty") == 0) {
long t_uncertainty;
if (snd_config_get_integer(entry, &t_uncertainty) < 0)
goto err;
if (t_uncertainty < 0)
goto err;
aaf->t_uncertainty = t_uncertainty * NSEC_PER_USEC;
} else if (strcmp(id, "frames_per_pkt") == 0) {
long frames_per_pkt;
if (snd_config_get_integer(entry, &frames_per_pkt) < 0)
goto err;
if (frames_per_pkt < 0)
goto err;
aaf->frames_per_pkt = frames_per_pkt;
} else {
SNDERR("Invalid configuration: %s", id);
goto err;
}
- }
- return 0;
+err:
- SNDERR("Error loading device configuration");
- return -EINVAL;
+}
- static int aaf_close(snd_pcm_ioplug_t *io) { snd_pcm_aaf_t *aaf = io->private_data;
@@ -70,6 +192,10 @@ SND_PCM_PLUGIN_DEFINE_FUNC(aaf) return -ENOMEM; }
- res = aaf_load_config(aaf, conf);
- if (res < 0)
goto err;
- aaf->io.version = SND_PCM_IOPLUG_VERSION; aaf->io.name = "AVTP Audio Format (AAF) Plugin"; aaf->io.callback = &aaf_callback;
diff --git a/doc/aaf.txt b/doc/aaf.txt index 22144de..24ea888 100644 --- a/doc/aaf.txt +++ b/doc/aaf.txt @@ -16,3 +16,55 @@ The AAF plugin uses libavtp to handle AVTP packetization. Libavtp source code can be found in https://github.com/AVnu/libavtp as well as instructions to build and install it. If libavtp isn't detected by configure, the plugin isn't built.
+Plugin Configuration +--------------------
+The plugin parameters are passed via ALSA configuration file. They are defined +as follows:
- ifname: Network interface used to transmit/receive AVTP packets.
- addr: Stream destination MAC address.
including multicast addresses?
- prio: Priority used by the plugin to transmit AVTP traffic. This
option is relevant only when operating in playback mode.
- streamid: Stream ID associated with the AAF stream transmitted or
received by the plugin.
So the streamid has to be statically configured, right? Is there an assumption that all streams exposed in this configuration file can be used concurrently? Or, put differently, are the streams only exposed if they can be used independently at any time?
- mtt: Maximum Transit Time (in microseconds) as defined in AVTP spec
section 4.3.3. This option is relevant only when operating in
playback mode.
- time_uncertainty: Maximum Time Uncertainty (in microseconds) as
defined by AVTP spec section 4.3.3. This option is relevant only when
operating in playback mode.
- frames_per_pkt: Number of audio frames transmitted in one AVTP
packet.
+The plugin provides the PCM type "aaf". Configure an AAF PCM virtual device +according to the AAF stream you want to transmit or receive. A hypothetical +configuration file is shown below:
- pcm.aaf {
type aaf
ifname eth0
addr AA:AA:AA:AA:AA:AA
prio 3
streamid BB:BB:BB:BB:BB:BB:0001
mtt 2000
time_uncertainty 125
frames_per_pkt 6
- }
+Put the above to ~/.asoundrc (or /etc/asound.conf), and use the AAF PCM virtual +device with your ALSA apps. For example, to stream the content from a wav file +through the network, run:
- $ aplay -Daaf foo.wav
How do you deal with multiple streams then? -Daaf:<stream_number> ?
+To receive the AAF stream generated by the previous command, run:
- $ arecord -Daaf
Did you mean on a different machine or are there any loopback capabilities?
Hi Pierre,
On Mon, 2018-08-20 at 22:16 -0500, Pierre-Louis Bossart wrote:
+Plugin Configuration +--------------------
+The plugin parameters are passed via ALSA configuration file. They are defined +as follows:
- ifname: Network interface used to transmit/receive AVTP
packets.
- addr: Stream destination MAC address.
including multicast addresses?
Correct.
- prio: Priority used by the plugin to transmit AVTP
traffic. This
option is relevant only when operating in playback mode.
- streamid: Stream ID associated with the AAF stream
transmitted or
received by the plugin.
So the streamid has to be statically configured, right?
Correct.
Is there an assumption that all streams exposed in this configuration file can be used concurrently? Or, put differently, are the streams only exposed if they can be used independently at any time?
Yes, the plugin is expected to handle multiple streams concurrently. I have tested with two, mimicking one Class A and one Class B stream.
- pcm.aaf {
type aaf
ifname eth0
addr AA:AA:AA:AA:AA:AA
prio 3
streamid BB:BB:BB:BB:BB:BB:0001
mtt 2000
time_uncertainty 125
frames_per_pkt 6
- }
+Put the above to ~/.asoundrc (or /etc/asound.conf), and use the AAF PCM virtual +device with your ALSA apps. For example, to stream the content from a wav file +through the network, run:
- $ aplay -Daaf foo.wav
How do you deal with multiple streams then? -Daaf:<stream_number> ?
Two ways:
1) Define multiple AAF devices in asoundrc (e.g. aaf0, aaf1, aaf2) and select them using the -D option, e.g.:
$ aplay -D aaf0 foo.wav
$ aplay -D aaf1 bar.wav
2) Leverage ALSA config runtime arguments e.g. define the aaf device as:
pcm.aaf {
    @args [ IFNAME ADDR PRIO STREAMID MTT UNCERTAINTY FRAMES ]
    @args.IFNAME { type string }
    @args.ADDR { type string }
    @args.PRIO { type integer }
    @args.STREAMID { type string }
    @args.MTT { type integer }
    @args.UNCERTAINTY { type integer }
    @args.FRAMES { type integer }

    type aaf
    ifname $IFNAME
    addr $ADDR
    prio $PRIO
    streamid $STREAMID
    mtt $MTT
    time_uncertainty $UNCERTAINTY
    frames_per_pkt $FRAMES
}
And pass the device parameters via the -D option like this:
$ aplay -D aaf:eth0.1,01:AA:AA:AA:AA:AA,3,AA:BB:CC:DD:EE:FF:0001,2000,125,6 foo.wav
$ aplay -D aaf:eth0.2,01:BB:BB:BB:BB:BB,3,AA:BB:CC:DD:EE:FF:0002,50000,1000,12 bar.wav
Personally, I'm using option 2) since it is more practical for testing purposes.
+To receive the AAF stream generated by the previous command, run:
- $ arecord -Daaf
Did you mean on a different machine or are there any loopback capabilities?
Yep, on a different machine. I'll add that info here.
Regards,
Andre
This patch implements playback mode support for the AAF plugin. Simply put, this mode works as follows: PCM samples provided by the alsa-lib layer are encapsulated into AVTP packets and transmitted through the network. In summary, the playback mode implements a typical AVTP Talker.
When the AAF device is put into the running state, its media clock is started. At every tick of the media clock, audio frames are consumed from the audio buffer, encapsulated into an AVTP packet, and transmitted to the network. The presentation time of each AVTP packet is calculated taking into consideration the maximum transit time and time uncertainty values configured by the user.
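As a rough sketch of the timing arithmetic described above (illustrative names, not the patch code verbatim; mtt and t_uncertainty are assumed to be already converted to nanoseconds):

    #include <stdint.h>

    /* Media clock time for the current tick: start time plus elapsed ticks. */
    static uint64_t media_clock_now(uint64_t mclk_start_time,
                                    uint64_t mclk_period, uint64_t mclk_ticks)
    {
            return mclk_start_time + mclk_period * mclk_ticks;
    }

    /* AVTP presentation time: media clock time plus the worst-case transit
     * time and timing uncertainty configured by the user.
     */
    static uint64_t presentation_time(uint64_t mclk_now, uint64_t mtt_ns,
                                      uint64_t t_uncertainty_ns)
    {
            return mclk_now + mtt_ns + t_uncertainty_ns;
    }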
Some discussion about implementation details follows below:
Even though only one file descriptor is used to implement the playback mode, this patch doesn't leverage ioplug->poll_fd but defines poll callbacks instead. The reason is that these callbacks will be required to support capture mode (to be implemented by an upcoming patch).
The TSN data plane interface is the AF_PACKET socket family, so the plugin uses an AF_PACKET socket to send/receive AVTP packets. Linux requires the CAP_NET_RAW capability in order to open an AF_PACKET socket, so the application that instantiates the plugin must have it. For further info about the AF_PACKET socket family, see packet(7).
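Since the AF_PACKET socket requires CAP_NET_RAW, an application embedding the plugin could check for the capability up front and report a clearer error. The helper below is only an illustration (it is not part of this patch) and assumes libcap is available:

    #include <sys/capability.h>

    /* Return 1 if the process has CAP_NET_RAW in its effective set. */
    static int have_cap_net_raw(void)
    {
            cap_t caps;
            cap_flag_value_t val = CAP_CLEAR;

            caps = cap_get_proc();
            if (!caps)
                    return 0;

            if (cap_get_flag(caps, CAP_NET_RAW, CAP_EFFECTIVE, &val) < 0)
                    val = CAP_CLEAR;

            cap_free(caps);
            return val == CAP_SET;
    }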
Signed-off-by: Andre Guedes <andre.guedes@intel.com>
---
 aaf/pcm_aaf.c | 611 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 doc/aaf.txt   |  40 ++++
 2 files changed, 651 insertions(+)
diff --git a/aaf/pcm_aaf.c b/aaf/pcm_aaf.c index 4c6f031..72f6652 100644 --- a/aaf/pcm_aaf.c +++ b/aaf/pcm_aaf.c @@ -20,13 +20,30 @@
#include <alsa/asoundlib.h> #include <alsa/pcm_external.h> +#include <arpa/inet.h> +#include <avtp.h> +#include <avtp_aaf.h> +#include <limits.h> #include <linux/if.h> #include <linux/if_ether.h> +#include <linux/if_packet.h> #include <string.h> #include <stdint.h> +#include <sys/ioctl.h> +#include <sys/timerfd.h>
+#ifdef AAF_DEBUG +#define pr_debug(...) SNDERR(__VA_ARGS__) +#else +#define pr_debug(...) (void)0 +#endif + +#define CLOCK_REF CLOCK_REALTIME +#define NSEC_PER_SEC 1000000000ULL #define NSEC_PER_USEC 1000ULL
+#define FD_COUNT_PLAYBACK 1 + typedef struct { snd_pcm_ioplug_t io;
@@ -37,8 +54,74 @@ typedef struct { int mtt; int t_uncertainty; int frames_per_pkt; + + int sk_fd; + int timer_fd; + + struct sockaddr_ll sk_addr; + + char *audiobuf; + + struct avtp_stream_pdu *pdu; + int pdu_size; + uint8_t pdu_seq; + + uint64_t mclk_start_time; + uint64_t mclk_period; + uint64_t mclk_ticks; + + snd_pcm_channel_area_t *audiobuf_areas; + snd_pcm_channel_area_t *payload_areas; + + snd_pcm_sframes_t hw_ptr; + snd_pcm_sframes_t buffer_size; + snd_pcm_sframes_t boundary; } snd_pcm_aaf_t;
+static unsigned int alsa_to_avtp_format(snd_pcm_format_t format) +{ + switch (format) { + case SND_PCM_FORMAT_S16_BE: + return AVTP_AAF_FORMAT_INT_16BIT; + case SND_PCM_FORMAT_S24_3BE: + return AVTP_AAF_FORMAT_INT_24BIT; + case SND_PCM_FORMAT_S32_BE: + return AVTP_AAF_FORMAT_INT_32BIT; + case SND_PCM_FORMAT_FLOAT_BE: + return AVTP_AAF_FORMAT_FLOAT_32BIT; + default: + return AVTP_AAF_FORMAT_USER; + } +} + +static unsigned int alsa_to_avtp_rate(unsigned int rate) +{ + switch (rate) { + case 8000: + return AVTP_AAF_PCM_NSR_8KHZ; + case 16000: + return AVTP_AAF_PCM_NSR_16KHZ; + case 24000: + return AVTP_AAF_PCM_NSR_24KHZ; + case 32000: + return AVTP_AAF_PCM_NSR_32KHZ; + case 44100: + return AVTP_AAF_PCM_NSR_44_1KHZ; + case 48000: + return AVTP_AAF_PCM_NSR_48KHZ; + case 88200: + return AVTP_AAF_PCM_NSR_88_2KHZ; + case 96000: + return AVTP_AAF_PCM_NSR_96KHZ; + case 176400: + return AVTP_AAF_PCM_NSR_176_4KHZ; + case 192000: + return AVTP_AAF_PCM_NSR_192KHZ; + default: + return AVTP_AAF_PCM_NSR_USER; + } +} + static int aaf_load_config(snd_pcm_aaf_t *aaf, snd_config_t *conf) { snd_config_iterator_t cur, next; @@ -147,6 +230,327 @@ err: return -EINVAL; }
+static int aaf_init_socket(snd_pcm_aaf_t *aaf) +{ + int fd, res; + struct ifreq req; + + fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_TSN)); + if (fd < 0) { + SNDERR("Failed to open AF_PACKET socket"); + return -errno; + } + + snprintf(req.ifr_name, sizeof(req.ifr_name), "%s", aaf->ifname); + res = ioctl(fd, SIOCGIFINDEX, &req); + if (res < 0) { + SNDERR("Failed to get network interface index"); + res = -errno; + goto err; + } + + aaf->sk_addr.sll_family = AF_PACKET; + aaf->sk_addr.sll_protocol = htons(ETH_P_TSN); + aaf->sk_addr.sll_halen = ETH_ALEN; + aaf->sk_addr.sll_ifindex = req.ifr_ifindex; + memcpy(&aaf->sk_addr.sll_addr, aaf->addr, ETH_ALEN); + + res = setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &aaf->prio, + sizeof(aaf->prio)); + if (res < 0) { + SNDERR("Failed to set socket priority"); + res = -errno; + goto err; + } + + aaf->sk_fd = fd; + return 0; + +err: + close(fd); + return res; +} + +static int aaf_init_timer(snd_pcm_aaf_t *aaf) +{ + int fd; + + fd = timerfd_create(CLOCK_REF, 0); + if (fd < 0) + return -errno; + + aaf->timer_fd = fd; + return 0; +} + +static int aaf_init_pdu(snd_pcm_aaf_t *aaf) +{ + int res; + struct avtp_stream_pdu *pdu; + ssize_t frame_size, payload_size, pdu_size; + snd_pcm_ioplug_t *io = &aaf->io; + + frame_size = snd_pcm_format_size(io->format, io->channels); + if (frame_size < 0) + return frame_size; + + payload_size = frame_size * aaf->frames_per_pkt; + pdu_size = sizeof(*pdu) + payload_size; + pdu = calloc(1, pdu_size); + if (!pdu) + return -ENOMEM; + + res = avtp_aaf_pdu_init(pdu); + if (res < 0) + goto err; + + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_TV, 1); + if (res < 0) + goto err; + + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_STREAM_ID, aaf->streamid); + if (res < 0) + goto err; + + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_FORMAT, + alsa_to_avtp_format(io->format)); + if (res < 0) + goto err; + + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_NSR, + alsa_to_avtp_rate(io->rate)); + if (res < 0) + goto err; + + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_CHAN_PER_FRAME, + io->channels); + if (res < 0) + goto err; + + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_BIT_DEPTH, + snd_pcm_format_width(io->format)); + if (res < 0) + goto err; + + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_STREAM_DATA_LEN, + payload_size); + if (res < 0) + goto err; + + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_SP, AVTP_AAF_PCM_SP_NORMAL); + if (res < 0) + goto err; + + aaf->pdu = pdu; + aaf->pdu_size = pdu_size; + aaf->pdu_seq = 0; + return 0; + +err: + free(pdu); + return res; +} + +static int aaf_init_audio_buffer(snd_pcm_aaf_t *aaf) +{ + char *audiobuf; + ssize_t frame_size, audiobuf_size; + snd_pcm_ioplug_t *io = &aaf->io; + + frame_size = snd_pcm_format_size(io->format, io->channels); + if (frame_size < 0) + return frame_size; + + audiobuf_size = frame_size * aaf->buffer_size; + audiobuf = calloc(1, audiobuf_size); + if (!audiobuf) + return -ENOMEM; + + aaf->audiobuf = audiobuf; + return 0; +} + +static int aaf_init_areas(snd_pcm_aaf_t *aaf) +{ + snd_pcm_channel_area_t *audiobuf_areas, *payload_areas; + ssize_t sample_size, frame_size; + snd_pcm_ioplug_t *io = &aaf->io; + + sample_size = snd_pcm_format_size(io->format, 1); + if (sample_size < 0) + return sample_size; + + frame_size = sample_size * io->channels; + + audiobuf_areas = calloc(io->channels, sizeof(snd_pcm_channel_area_t)); + if (!audiobuf_areas) + return -ENOMEM; + + payload_areas = calloc(io->channels, sizeof(snd_pcm_channel_area_t)); + if (!payload_areas) { + free(audiobuf_areas); + return -ENOMEM; + } 
+ + for (unsigned int i = 0; i < io->channels; i++) { + audiobuf_areas[i].addr = aaf->audiobuf; + audiobuf_areas[i].first = i * sample_size * 8; + audiobuf_areas[i].step = frame_size * 8; + + payload_areas[i].addr = aaf->pdu->avtp_payload; + payload_areas[i].first = i * sample_size * 8; + payload_areas[i].step = frame_size * 8; + } + + aaf->audiobuf_areas = audiobuf_areas; + aaf->payload_areas = payload_areas; + return 0; +} + +static void aaf_inc_hw_ptr(snd_pcm_aaf_t *aaf, snd_pcm_sframes_t val) +{ + aaf->hw_ptr += val; + + if (aaf->hw_ptr >= aaf->boundary) + aaf->hw_ptr -= aaf->boundary; +} + +static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf) +{ + int res; + struct timespec now; + struct itimerspec itspec; + snd_pcm_ioplug_t *io = &aaf->io; + + res = clock_gettime(CLOCK_REF, &now); + if (res < 0) { + SNDERR("Failed to get time from clock"); + return -errno; + } + + aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) / io->rate; + aaf->mclk_ticks = 0; + aaf->mclk_start_time = now.tv_sec * NSEC_PER_SEC + now.tv_nsec + + aaf->mclk_period; + + itspec.it_value.tv_sec = aaf->mclk_start_time / NSEC_PER_SEC; + itspec.it_value.tv_nsec = aaf->mclk_start_time % NSEC_PER_SEC; + itspec.it_interval.tv_sec = 0; + itspec.it_interval.tv_nsec = aaf->mclk_period; + res = timerfd_settime(aaf->timer_fd, TFD_TIMER_ABSTIME, &itspec, NULL); + if (res < 0) + return -errno; + + return 0; +} + +static int aaf_mclk_reset(snd_pcm_aaf_t *aaf) +{ + aaf->mclk_start_time = 0; + aaf->mclk_period = 0; + aaf->mclk_ticks = 0; + + if (aaf->timer_fd != -1) { + int res; + struct itimerspec itspec = { 0 }; + + res = timerfd_settime(aaf->timer_fd, 0, &itspec, NULL); + if (res < 0) { + SNDERR("Failed to stop media clock"); + return res; + } + } + + return 0; +} + +static uint64_t aaf_mclk_gettime(snd_pcm_aaf_t *aaf) +{ + return aaf->mclk_start_time + aaf->mclk_period * aaf->mclk_ticks; +} + +static int aaf_tx_pdu(snd_pcm_aaf_t *aaf) +{ + int res; + uint64_t ptime; + snd_pcm_sframes_t n; + snd_pcm_ioplug_t *io = &aaf->io; + snd_pcm_t *pcm = io->pcm; + struct avtp_stream_pdu *pdu = aaf->pdu; + + n = aaf->buffer_size - snd_pcm_avail(pcm); + if (n == 0) { + /* If there is no data in audio buffer to be transmitted, + * we reached an underrun state. + */ + return -EPIPE; + } + if (n < aaf->frames_per_pkt) { + /* If there isn't enough frames to fill the AVTP packet, we + * drop them. This behavior is suggested by IEEE 1722-2016 + * spec, section 7.3.5. 
+ */ + aaf_inc_hw_ptr(aaf, n); + return 0; + } + + res = snd_pcm_areas_copy_wrap(aaf->payload_areas, 0, + aaf->frames_per_pkt, + aaf->audiobuf_areas, + aaf->hw_ptr % aaf->buffer_size, + aaf->buffer_size, io->channels, + aaf->frames_per_pkt, io->format); + if (res < 0) { + SNDERR("Failed to copy data to AVTP payload"); + return res; + } + + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_SEQ_NUM, aaf->pdu_seq++); + if (res < 0) + return res; + + ptime = aaf_mclk_gettime(aaf) + aaf->mtt + aaf->t_uncertainty; + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_TIMESTAMP, ptime); + if (res < 0) + return res; + + n = sendto(aaf->sk_fd, aaf->pdu, aaf->pdu_size, 0, + (struct sockaddr *) &aaf->sk_addr, + sizeof(aaf->sk_addr)); + if (n < 0 || n != aaf->pdu_size) { + SNDERR("Failed to send AAF PDU"); + return -EIO; + } + + aaf_inc_hw_ptr(aaf, aaf->frames_per_pkt); + return 0; +} + +static int aaf_mclk_timeout_playback(snd_pcm_aaf_t *aaf) +{ + int res; + ssize_t n; + uint64_t expirations; + + n = read(aaf->timer_fd, &expirations, sizeof(uint64_t)); + if (n < 0) { + SNDERR("Failed to read() timer"); + return -errno; + } + + if (expirations != 1) + pr_debug("Missed %llu tx interval(s) ", expirations - 1); + + while (expirations--) { + res = aaf_tx_pdu(aaf); + if (res < 0) + return res; + aaf->mclk_ticks++; + } + + return 0; +} + static int aaf_close(snd_pcm_ioplug_t *io) { snd_pcm_aaf_t *aaf = io->private_data; @@ -159,26 +563,223 @@ static int aaf_close(snd_pcm_ioplug_t *io) return 0; }
+static int aaf_hw_params(snd_pcm_ioplug_t *io, + snd_pcm_hw_params_t *params ATTRIBUTE_UNUSED) +{ + int res; + snd_pcm_aaf_t *aaf = io->private_data; + + if (io->access != SND_PCM_ACCESS_RW_INTERLEAVED) + return -ENOTSUP; + + if (io->buffer_size > LONG_MAX) + return -EINVAL; + + /* XXX: We might want to support Little Endian format in future. To + * achieve that, we need to convert LE samples to BE before + * transmitting them. + */ + switch (io->format) { + case SND_PCM_FORMAT_S16_BE: + case SND_PCM_FORMAT_S24_3BE: + case SND_PCM_FORMAT_S32_BE: + case SND_PCM_FORMAT_FLOAT_BE: + break; + default: + return -ENOTSUP; + } + + switch (io->rate) { + case 8000: + case 16000: + case 24000: + case 32000: + case 44100: + case 48000: + case 88200: + case 96000: + case 176400: + case 192000: + break; + default: + return -ENOTSUP; + } + + aaf->buffer_size = io->buffer_size; + + res = aaf_init_pdu(aaf); + if (res < 0) + return res; + + res = aaf_init_audio_buffer(aaf); + if (res < 0) + goto err_free_pdu; + + res = aaf_init_areas(aaf); + if (res < 0) + goto err_free_audiobuf; + + return 0; + +err_free_audiobuf: + free(aaf->audiobuf); +err_free_pdu: + free(aaf->pdu); + return res; +} + +static int aaf_hw_free(snd_pcm_ioplug_t *io) +{ + snd_pcm_aaf_t *aaf = io->private_data; + + free(aaf->audiobuf_areas); + free(aaf->payload_areas); + free(aaf->audiobuf); + free(aaf->pdu); + return 0; +} + +static int aaf_sw_params(snd_pcm_ioplug_t *io, snd_pcm_sw_params_t *params) +{ + int res; + snd_pcm_uframes_t boundary; + snd_pcm_aaf_t *aaf = io->private_data; + + res = snd_pcm_sw_params_get_boundary(params, &boundary); + if (res < 0) + return res; + + if (boundary > LONG_MAX) + return -EINVAL; + + aaf->boundary = boundary; + return 0; +} + static snd_pcm_sframes_t aaf_pointer(snd_pcm_ioplug_t *io) { + snd_pcm_aaf_t *aaf = io->private_data; + + return aaf->hw_ptr; +} + +static int aaf_poll_descriptors_count(snd_pcm_ioplug_t *io ATTRIBUTE_UNUSED) +{ + return FD_COUNT_PLAYBACK; +} + +static int aaf_poll_descriptors(snd_pcm_ioplug_t *io, struct pollfd *pfd, + unsigned int space) +{ + snd_pcm_aaf_t *aaf = io->private_data; + + if (space != FD_COUNT_PLAYBACK) + return -EINVAL; + + pfd[0].fd = aaf->timer_fd; + pfd[0].events = POLLIN; + return space; +} + +static int aaf_poll_revents(snd_pcm_ioplug_t *io, struct pollfd *pfd, + unsigned int nfds, unsigned short *revents) +{ + int res; + snd_pcm_aaf_t *aaf = io->private_data; + + if (nfds != FD_COUNT_PLAYBACK) + return -EINVAL; + + if (pfd[0].revents & POLLIN) { + res = aaf_mclk_timeout_playback(aaf); + if (res < 0) + return res; + + *revents = POLLIN; + } + + return 0; +} + +static int aaf_prepare(snd_pcm_ioplug_t *io) +{ + int res; + snd_pcm_aaf_t *aaf = io->private_data; + + aaf->hw_ptr = 0; + res = aaf_mclk_reset(aaf); + if (res < 0) + return res; + return 0; }
static int aaf_start(snd_pcm_ioplug_t *io) { + int res; + snd_pcm_aaf_t *aaf = io->private_data; + + res = aaf_init_socket(aaf); + if (res < 0) + return res; + + res = aaf_init_timer(aaf); + if (res < 0) + goto err_close_sk; + + res = aaf_mclk_start_playback(aaf); + if (res < 0) + goto err_close_timer; + return 0; + +err_close_timer: + close(aaf->timer_fd); +err_close_sk: + close(aaf->sk_fd); + return res; }
static int aaf_stop(snd_pcm_ioplug_t *io) { + snd_pcm_aaf_t *aaf = io->private_data; + + close(aaf->timer_fd); + close(aaf->sk_fd); return 0; }
+static snd_pcm_sframes_t aaf_transfer(snd_pcm_ioplug_t *io, + const snd_pcm_channel_area_t *areas, + snd_pcm_uframes_t offset, + snd_pcm_uframes_t size) +{ + int res; + snd_pcm_aaf_t *aaf = io->private_data; + + res = snd_pcm_areas_copy_wrap(aaf->audiobuf_areas, + (io->appl_ptr % aaf->buffer_size), + aaf->buffer_size, areas, offset, size, + io->channels, size, io->format); + if (res < 0) + return res; + + return size; +} + static const snd_pcm_ioplug_callback_t aaf_callback = { .close = aaf_close, + .hw_params = aaf_hw_params, + .hw_free = aaf_hw_free, + .sw_params = aaf_sw_params, .pointer = aaf_pointer, + .poll_descriptors_count = aaf_poll_descriptors_count, + .poll_descriptors = aaf_poll_descriptors, + .poll_revents = aaf_poll_revents, + .prepare = aaf_prepare, .start = aaf_start, .stop = aaf_stop, + .transfer = aaf_transfer, };
SND_PCM_PLUGIN_DEFINE_FUNC(aaf) @@ -186,12 +787,21 @@ SND_PCM_PLUGIN_DEFINE_FUNC(aaf) snd_pcm_aaf_t *aaf; int res;
+ /* For now the plugin only supports Playback mode i.e. AAF Talker + * functionality. + */ + if (stream != SND_PCM_STREAM_PLAYBACK) + return -EINVAL; + aaf = calloc(1, sizeof(*aaf)); if (!aaf) { SNDERR("Failed to allocate memory"); return -ENOMEM; }
+ aaf->sk_fd = -1; + aaf->timer_fd = -1; + res = aaf_load_config(aaf, conf); if (res < 0) goto err; @@ -200,6 +810,7 @@ SND_PCM_PLUGIN_DEFINE_FUNC(aaf) aaf->io.name = "AVTP Audio Format (AAF) Plugin"; aaf->io.callback = &aaf_callback; aaf->io.private_data = aaf; + aaf->io.flags = SND_PCM_IOPLUG_FLAG_BOUNDARY_WA; res = snd_pcm_ioplug_create(&aaf->io, name, stream, mode); if (res < 0) { SNDERR("Failed to create ioplug instance"); diff --git a/doc/aaf.txt b/doc/aaf.txt index 24ea888..b1a3f43 100644 --- a/doc/aaf.txt +++ b/doc/aaf.txt @@ -9,6 +9,46 @@ to transmit/receive audio samples through a Time-Sensitive Network (TSN) capable network. The plugin enables media applications to easily implement AVTP Talker and Listener functionalities.
+AVTP is designed to take advantage of generalized Precision Time Protocol +(gPTP) and Forwarding and Queuing Enhancements for Time-Sensitive Streams +(FQTSS). + +gPTP ensures AVTP talkers and listeners share the same time reference so the +presentation time from AVTP can be used to inform when PCM samples should be +presented to the application layer. Thus, in order to work properly, the plugin +requires the system clock is synchronized with the PTP time. Such functionality +is provided by ptp4l and phc2sys from Linuxptp project +(linuxptp.sourceforge.net). ptp4l and phc2sys can be set up in many different +ways, below we provide an example. For further information check ptp4l(8) and +phc2sys(8). + +On PTP master host run the following commands. Replace $IFNAME by your PTP +capable NIC name. The gPTP.cfg file mentioned below can be found in +/usr/share/doc/linuxptp/ (depending on your distro). + $ ptp4l -f gPTP.cfg -i $IFNAME + $ phc2sys -f gPTP.cfg -c $IFNAME -s CLOCK_REALTIME -w + +On PTP slave host run: + $ ptp4l -f gPTP.cfg -i $IFNAME -s + $ phc2sys -f gPTP.cfg -a -r + +FQTSS provides bandwidth reservation and traffic prioritization for the AVTP +stream. Thus, in order to work properly, the plugin requires FQTSS to be +configured properly. The FQTSS features is supported by Linux Traffic Control +system through the mpqrio and cbs qdiscs. Below we provide an example to +configure those qdiscs in order to transmit an AAF stream with 48 kHz sampling +rate, 16-bit sample size, stereo. For further information on how to configure +it check tc-mqprio(8) and tc-cbs(8) man pages. + +Configure mpqrio (replace $HANDLE_ID by an unused handle ID): + $ tc qdisc add dev $IFNAME parent root handle $HANDLE_ID mqprio \ + num_tc 3 map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \ + queues 1@0 1@1 2@2 hw 0 + +Configure cbs: + $ tc qdisc replace dev $IFNAME parent $HANDLE_ID:1 cbs idleslope 5760 \ + sendslope -994240 hicredit 9 locredit -89 offload 1 + Dependency ----------
On 8/20/18 8:06 PM, Andre Guedes wrote:
This patch implements playback mode support for the AAF plugin. Simply put, this mode works as follows: PCM samples provided by the alsa-lib layer are encapsulated into AVTP packets and transmitted through the network. In summary, the playback mode implements a typical AVTP Talker.
When the AAF device is put into the running state, its media clock is started. At every tick of the media clock, audio frames are consumed from the audio buffer, encapsulated into an AVTP packet, and transmitted to the network. The presentation time of each AVTP packet is calculated taking into consideration the maximum transit time and time uncertainty values configured by the user.
Some discussion about implementation details follows below:
Even though only one file descriptor is used to implement the playback mode, this patch doesn't leverage ioplug->poll_fd but defines poll callbacks instead. The reason is that these callbacks will be required to support capture mode (to be implemented by an upcoming patch).
The TSN data plane interface is the AF_PACKET socket family, so the plugin uses an AF_PACKET socket to send/receive AVTP packets. Linux requires the CAP_NET_RAW capability in order to open an AF_PACKET socket, so the application that instantiates the plugin must have it. For further info about the AF_PACKET socket family, see packet(7).
Signed-off-by: Andre Guedes <andre.guedes@intel.com>
 aaf/pcm_aaf.c | 611 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 doc/aaf.txt   |  40 ++++
 2 files changed, 651 insertions(+)
diff --git a/aaf/pcm_aaf.c b/aaf/pcm_aaf.c index 4c6f031..72f6652 100644 --- a/aaf/pcm_aaf.c +++ b/aaf/pcm_aaf.c @@ -20,13 +20,30 @@
#include <alsa/asoundlib.h> #include <alsa/pcm_external.h> +#include <arpa/inet.h> +#include <avtp.h> +#include <avtp_aaf.h> +#include <limits.h> #include <linux/if.h> #include <linux/if_ether.h> +#include <linux/if_packet.h> #include <string.h> #include <stdint.h> +#include <sys/ioctl.h> +#include <sys/timerfd.h>
+#ifdef AAF_DEBUG +#define pr_debug(...) SNDERR(__VA_ARGS__) +#else +#define pr_debug(...) (void)0 +#endif
+#define CLOCK_REF CLOCK_REALTIME
You should probably comment on why you use CLOCK_REALTIME for something that is driven by a media clock that's completely unrelated to CLOCK_REALTIME?
+#define NSEC_PER_SEC 1000000000ULL #define NSEC_PER_USEC 1000ULL
+#define FD_COUNT_PLAYBACK 1
- typedef struct { snd_pcm_ioplug_t io;
@@ -37,8 +54,74 @@ typedef struct { int mtt; int t_uncertainty; int frames_per_pkt;
- int sk_fd;
- int timer_fd;
- struct sockaddr_ll sk_addr;
- char *audiobuf;
- struct avtp_stream_pdu *pdu;
- int pdu_size;
- uint8_t pdu_seq;
- uint64_t mclk_start_time;
- uint64_t mclk_period;
- uint64_t mclk_ticks;
- snd_pcm_channel_area_t *audiobuf_areas;
- snd_pcm_channel_area_t *payload_areas;
- snd_pcm_sframes_t hw_ptr;
- snd_pcm_sframes_t buffer_size;
- snd_pcm_sframes_t boundary; } snd_pcm_aaf_t;
+static unsigned int alsa_to_avtp_format(snd_pcm_format_t format) +{
- switch (format) {
- case SND_PCM_FORMAT_S16_BE:
we usually use S16_LE? I can't recall when I last used S16_BE...
return AVTP_AAF_FORMAT_INT_16BIT;
- case SND_PCM_FORMAT_S24_3BE:
this means 3 bytes without padding to a 32-bit word; is this your definition as well?
return AVTP_AAF_FORMAT_INT_24BIT;
- case SND_PCM_FORMAT_S32_BE:
return AVTP_AAF_FORMAT_INT_32BIT;
- case SND_PCM_FORMAT_FLOAT_BE:
return AVTP_AAF_FORMAT_FLOAT_32BIT;
- default:
return AVTP_AAF_FORMAT_USER;
- }
+}
+static unsigned int alsa_to_avtp_rate(unsigned int rate) +{
- switch (rate) {
- case 8000:
return AVTP_AAF_PCM_NSR_8KHZ;
- case 16000:
return AVTP_AAF_PCM_NSR_16KHZ;
- case 24000:
return AVTP_AAF_PCM_NSR_24KHZ;
- case 32000:
return AVTP_AAF_PCM_NSR_32KHZ;
- case 44100:
return AVTP_AAF_PCM_NSR_44_1KHZ;
- case 48000:
return AVTP_AAF_PCM_NSR_48KHZ;
- case 88200:
return AVTP_AAF_PCM_NSR_88_2KHZ;
- case 96000:
return AVTP_AAF_PCM_NSR_96KHZ;
- case 176400:
return AVTP_AAF_PCM_NSR_176_4KHZ;
- case 192000:
return AVTP_AAF_PCM_NSR_192KHZ;
You should align the avtp_aaf definitions with ALSA; you are missing quite a few frequencies, e.g. 11.025, 64, and 384 kHz. If this is intentional, add a comment to explain the restrictions.
- default:
return AVTP_AAF_PCM_NSR_USER;
- }
+}
- static int aaf_load_config(snd_pcm_aaf_t *aaf, snd_config_t *conf) { snd_config_iterator_t cur, next;
@@ -147,6 +230,327 @@ err: return -EINVAL; }
+static int aaf_init_socket(snd_pcm_aaf_t *aaf) +{
- int fd, res;
- struct ifreq req;
- fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_TSN));
- if (fd < 0) {
SNDERR("Failed to open AF_PACKET socket");
return -errno;
- }
- snprintf(req.ifr_name, sizeof(req.ifr_name), "%s", aaf->ifname);
- res = ioctl(fd, SIOCGIFINDEX, &req);
- if (res < 0) {
SNDERR("Failed to get network interface index");
res = -errno;
goto err;
- }
- aaf->sk_addr.sll_family = AF_PACKET;
- aaf->sk_addr.sll_protocol = htons(ETH_P_TSN);
- aaf->sk_addr.sll_halen = ETH_ALEN;
- aaf->sk_addr.sll_ifindex = req.ifr_ifindex;
- memcpy(&aaf->sk_addr.sll_addr, aaf->addr, ETH_ALEN);
- res = setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &aaf->prio,
sizeof(aaf->prio));
- if (res < 0) {
SNDERR("Failed to set socket priority");
res = -errno;
goto err;
- }
- aaf->sk_fd = fd;
- return 0;
+err:
- close(fd);
- return res;
+}
+static int aaf_init_timer(snd_pcm_aaf_t *aaf) +{
- int fd;
- fd = timerfd_create(CLOCK_REF, 0);
- if (fd < 0)
return -errno;
- aaf->timer_fd = fd;
- return 0;
+}
+static int aaf_init_pdu(snd_pcm_aaf_t *aaf) +{
- int res;
- struct avtp_stream_pdu *pdu;
- ssize_t frame_size, payload_size, pdu_size;
- snd_pcm_ioplug_t *io = &aaf->io;
- frame_size = snd_pcm_format_size(io->format, io->channels);
- if (frame_size < 0)
return frame_size;
- payload_size = frame_size * aaf->frames_per_pkt;
- pdu_size = sizeof(*pdu) + payload_size;
- pdu = calloc(1, pdu_size);
- if (!pdu)
return -ENOMEM;
- res = avtp_aaf_pdu_init(pdu);
- if (res < 0)
goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_TV, 1);
- if (res < 0)
goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_STREAM_ID, aaf->streamid);
- if (res < 0)
goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_FORMAT,
alsa_to_avtp_format(io->format));
- if (res < 0)
goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_NSR,
alsa_to_avtp_rate(io->rate));
- if (res < 0)
goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_CHAN_PER_FRAME,
io->channels);
- if (res < 0)
goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_BIT_DEPTH,
snd_pcm_format_width(io->format));
- if (res < 0)
goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_STREAM_DATA_LEN,
payload_size);
- if (res < 0)
goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_SP, AVTP_AAF_PCM_SP_NORMAL);
- if (res < 0)
goto err;
- aaf->pdu = pdu;
- aaf->pdu_size = pdu_size;
- aaf->pdu_seq = 0;
- return 0;
+err:
- free(pdu);
- return res;
+}
+static int aaf_init_audio_buffer(snd_pcm_aaf_t *aaf) +{
- char *audiobuf;
- ssize_t frame_size, audiobuf_size;
- snd_pcm_ioplug_t *io = &aaf->io;
- frame_size = snd_pcm_format_size(io->format, io->channels);
- if (frame_size < 0)
return frame_size;
- audiobuf_size = frame_size * aaf->buffer_size;
- audiobuf = calloc(1, audiobuf_size);
- if (!audiobuf)
return -ENOMEM;
- aaf->audiobuf = audiobuf;
- return 0;
+}
+static int aaf_init_areas(snd_pcm_aaf_t *aaf) +{
- snd_pcm_channel_area_t *audiobuf_areas, *payload_areas;
- ssize_t sample_size, frame_size;
- snd_pcm_ioplug_t *io = &aaf->io;
- sample_size = snd_pcm_format_size(io->format, 1);
- if (sample_size < 0)
return sample_size;
- frame_size = sample_size * io->channels;
- audiobuf_areas = calloc(io->channels, sizeof(snd_pcm_channel_area_t));
- if (!audiobuf_areas)
return -ENOMEM;
- payload_areas = calloc(io->channels, sizeof(snd_pcm_channel_area_t));
- if (!payload_areas) {
free(audiobuf_areas);
return -ENOMEM;
- }
- for (unsigned int i = 0; i < io->channels; i++) {
audiobuf_areas[i].addr = aaf->audiobuf;
audiobuf_areas[i].first = i * sample_size * 8;
audiobuf_areas[i].step = frame_size * 8;
payload_areas[i].addr = aaf->pdu->avtp_payload;
payload_areas[i].first = i * sample_size * 8;
payload_areas[i].step = frame_size * 8;
not sure what 8 represents here?
- }
- aaf->audiobuf_areas = audiobuf_areas;
- aaf->payload_areas = payload_areas;
- return 0;
+}
+static void aaf_inc_hw_ptr(snd_pcm_aaf_t *aaf, snd_pcm_sframes_t val) +{
- aaf->hw_ptr += val;
- if (aaf->hw_ptr >= aaf->boundary)
aaf->hw_ptr -= aaf->boundary;
+}
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf) +{
- int res;
- struct timespec now;
- struct itimerspec itspec;
- snd_pcm_ioplug_t *io = &aaf->io;
- res = clock_gettime(CLOCK_REF, &now);
- if (res < 0) {
SNDERR("Failed to get time from clock");
return -errno;
- }
- aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) / io->rate;
is this always an integer? If not, don't you have a systematic arithmetic error?
- aaf->mclk_ticks = 0;
- aaf->mclk_start_time = now.tv_sec * NSEC_PER_SEC + now.tv_nsec +
aaf->mclk_period;
- itspec.it_value.tv_sec = aaf->mclk_start_time / NSEC_PER_SEC;
- itspec.it_value.tv_nsec = aaf->mclk_start_time % NSEC_PER_SEC;
- itspec.it_interval.tv_sec = 0;
- itspec.it_interval.tv_nsec = aaf->mclk_period;
- res = timerfd_settime(aaf->timer_fd, TFD_TIMER_ABSTIME, &itspec, NULL);
- if (res < 0)
return -errno;
- return 0;
+}
+static int aaf_mclk_reset(snd_pcm_aaf_t *aaf) +{
- aaf->mclk_start_time = 0;
- aaf->mclk_period = 0;
- aaf->mclk_ticks = 0;
- if (aaf->timer_fd != -1) {
int res;
struct itimerspec itspec = { 0 };
res = timerfd_settime(aaf->timer_fd, 0, &itspec, NULL);
if (res < 0) {
SNDERR("Failed to stop media clock");
return res;
}
- }
- return 0;
+}
+static uint64_t aaf_mclk_gettime(snd_pcm_aaf_t *aaf) +{
- return aaf->mclk_start_time + aaf->mclk_period * aaf->mclk_ticks;
+}
+static int aaf_tx_pdu(snd_pcm_aaf_t *aaf) +{
- int res;
- uint64_t ptime;
- snd_pcm_sframes_t n;
- snd_pcm_ioplug_t *io = &aaf->io;
- snd_pcm_t *pcm = io->pcm;
- struct avtp_stream_pdu *pdu = aaf->pdu;
- n = aaf->buffer_size - snd_pcm_avail(pcm);
- if (n == 0) {
/* If there is no data in audio buffer to be transmitted,
* we reached an underrun state.
*/
return -EPIPE;
- }
- if (n < aaf->frames_per_pkt) {
/* If there isn't enough frames to fill the AVTP packet, we
* drop them. This behavior is suggested by IEEE 1722-2016
* spec, section 7.3.5.
*/
aaf_inc_hw_ptr(aaf, n);
return 0;
- }
- res = snd_pcm_areas_copy_wrap(aaf->payload_areas, 0,
aaf->frames_per_pkt,
aaf->audiobuf_areas,
aaf->hw_ptr % aaf->buffer_size,
aaf->buffer_size, io->channels,
aaf->frames_per_pkt, io->format);
- if (res < 0) {
SNDERR("Failed to copy data to AVTP payload");
return res;
- }
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_SEQ_NUM, aaf->pdu_seq++);
- if (res < 0)
return res;
- ptime = aaf_mclk_gettime(aaf) + aaf->mtt + aaf->t_uncertainty;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_TIMESTAMP, ptime);
- if (res < 0)
return res;
- n = sendto(aaf->sk_fd, aaf->pdu, aaf->pdu_size, 0,
(struct sockaddr *) &aaf->sk_addr,
sizeof(aaf->sk_addr));
- if (n < 0 || n != aaf->pdu_size) {
SNDERR("Failed to send AAF PDU");
return -EIO;
- }
- aaf_inc_hw_ptr(aaf, aaf->frames_per_pkt);
- return 0;
+}
+static int aaf_mclk_timeout_playback(snd_pcm_aaf_t *aaf) +{
- int res;
- ssize_t n;
- uint64_t expirations;
- n = read(aaf->timer_fd, &expirations, sizeof(uint64_t));
- if (n < 0) {
SNDERR("Failed to read() timer");
return -errno;
- }
- if (expirations != 1)
pr_debug("Missed %llu tx interval(s) ", expirations - 1);
- while (expirations--) {
res = aaf_tx_pdu(aaf);
if (res < 0)
return res;
aaf->mclk_ticks++;
- }
- return 0;
+}
- static int aaf_close(snd_pcm_ioplug_t *io) { snd_pcm_aaf_t *aaf = io->private_data;
@@ -159,26 +563,223 @@ static int aaf_close(snd_pcm_ioplug_t *io) return 0; }
+static int aaf_hw_params(snd_pcm_ioplug_t *io,
snd_pcm_hw_params_t *params ATTRIBUTE_UNUSED)
+{
- int res;
- snd_pcm_aaf_t *aaf = io->private_data;
- if (io->access != SND_PCM_ACCESS_RW_INTERLEAVED)
return -ENOTSUP;
- if (io->buffer_size > LONG_MAX)
return -EINVAL;
- /* XXX: We might want to support Little Endian format in future. To
* achieve that, we need to convert LE samples to BE before
* transmitting them.
*/
- switch (io->format) {
- case SND_PCM_FORMAT_S16_BE:
- case SND_PCM_FORMAT_S24_3BE:
- case SND_PCM_FORMAT_S32_BE:
- case SND_PCM_FORMAT_FLOAT_BE:
break;
- default:
return -ENOTSUP;
- }
- switch (io->rate) {
- case 8000:
- case 16000:
- case 24000:
- case 32000:
- case 44100:
- case 48000:
- case 88200:
- case 96000:
- case 176400:
- case 192000:
break;
- default:
return -ENOTSUP;
- }
- aaf->buffer_size = io->buffer_size;
- res = aaf_init_pdu(aaf);
- if (res < 0)
return res;
- res = aaf_init_audio_buffer(aaf);
- if (res < 0)
goto err_free_pdu;
- res = aaf_init_areas(aaf);
- if (res < 0)
goto err_free_audiobuf;
- return 0;
+err_free_audiobuf:
- free(aaf->audiobuf);
+err_free_pdu:
- free(aaf->pdu);
- return res;
+}
+static int aaf_hw_free(snd_pcm_ioplug_t *io) +{
- snd_pcm_aaf_t *aaf = io->private_data;
- free(aaf->audiobuf_areas);
- free(aaf->payload_areas);
- free(aaf->audiobuf);
- free(aaf->pdu);
- return 0;
+}
+static int aaf_sw_params(snd_pcm_ioplug_t *io, snd_pcm_sw_params_t *params) +{
- int res;
- snd_pcm_uframes_t boundary;
- snd_pcm_aaf_t *aaf = io->private_data;
- res = snd_pcm_sw_params_get_boundary(params, &boundary);
- if (res < 0)
return res;
- if (boundary > LONG_MAX)
return -EINVAL;
- aaf->boundary = boundary;
- return 0;
+}
- static snd_pcm_sframes_t aaf_pointer(snd_pcm_ioplug_t *io) {
- snd_pcm_aaf_t *aaf = io->private_data;
- return aaf->hw_ptr;
+}
+static int aaf_poll_descriptors_count(snd_pcm_ioplug_t *io ATTRIBUTE_UNUSED) +{
- return FD_COUNT_PLAYBACK;
+}
+static int aaf_poll_descriptors(snd_pcm_ioplug_t *io, struct pollfd *pfd,
unsigned int space)
+{
- snd_pcm_aaf_t *aaf = io->private_data;
- if (space != FD_COUNT_PLAYBACK)
return -EINVAL;
- pfd[0].fd = aaf->timer_fd;
- pfd[0].events = POLLIN;
- return space;
+}
+static int aaf_poll_revents(snd_pcm_ioplug_t *io, struct pollfd *pfd,
unsigned int nfds, unsigned short *revents)
+{
- int res;
- snd_pcm_aaf_t *aaf = io->private_data;
- if (nfds != FD_COUNT_PLAYBACK)
return -EINVAL;
- if (pfd[0].revents & POLLIN) {
res = aaf_mclk_timeout_playback(aaf);
if (res < 0)
return res;
*revents = POLLIN;
- }
I couldn't figure out how you use playback events and your timer. When there are two audio clock sources or timers, that's usually where the fun begins.
- return 0;
+}
+static int aaf_prepare(snd_pcm_ioplug_t *io) +{
int res;
snd_pcm_aaf_t *aaf = io->private_data;
aaf->hw_ptr = 0;
res = aaf_mclk_reset(aaf);
if (res < 0)
return res;
return 0; }
static int aaf_start(snd_pcm_ioplug_t *io) {
int res;
snd_pcm_aaf_t *aaf = io->private_data;
res = aaf_init_socket(aaf);
if (res < 0)
return res;
res = aaf_init_timer(aaf);
if (res < 0)
goto err_close_sk;
res = aaf_mclk_start_playback(aaf);
if (res < 0)
goto err_close_timer;
return 0;
+err_close_timer:
- close(aaf->timer_fd);
+err_close_sk:
close(aaf->sk_fd);
return res; }
static int aaf_stop(snd_pcm_ioplug_t *io) {
snd_pcm_aaf_t *aaf = io->private_data;
close(aaf->timer_fd);
close(aaf->sk_fd); return 0; }
+static snd_pcm_sframes_t aaf_transfer(snd_pcm_ioplug_t *io,
const snd_pcm_channel_area_t *areas,
snd_pcm_uframes_t offset,
snd_pcm_uframes_t size)
+{
- int res;
- snd_pcm_aaf_t *aaf = io->private_data;
- res = snd_pcm_areas_copy_wrap(aaf->audiobuf_areas,
(io->appl_ptr % aaf->buffer_size),
aaf->buffer_size, areas, offset, size,
io->channels, size, io->format);
- if (res < 0)
return res;
- return size;
+}
static const snd_pcm_ioplug_callback_t aaf_callback = { .close = aaf_close,
.hw_params = aaf_hw_params,
.hw_free = aaf_hw_free,
.sw_params = aaf_sw_params, .pointer = aaf_pointer,
.poll_descriptors_count = aaf_poll_descriptors_count,
.poll_descriptors = aaf_poll_descriptors,
.poll_revents = aaf_poll_revents,
.prepare = aaf_prepare, .start = aaf_start, .stop = aaf_stop,
.transfer = aaf_transfer, };
SND_PCM_PLUGIN_DEFINE_FUNC(aaf)
@@ -186,12 +787,21 @@ SND_PCM_PLUGIN_DEFINE_FUNC(aaf)
 	snd_pcm_aaf_t *aaf;
 	int res;

+	/* For now the plugin only supports Playback mode i.e. AAF Talker
+	 * functionality.
+	 */
+	if (stream != SND_PCM_STREAM_PLAYBACK)
+		return -EINVAL;
+
 	aaf = calloc(1, sizeof(*aaf));
 	if (!aaf) {
 		SNDERR("Failed to allocate memory");
 		return -ENOMEM;
 	}

+	aaf->sk_fd = -1;
+	aaf->timer_fd = -1;
+
 	res = aaf_load_config(aaf, conf);
 	if (res < 0)
 		goto err;
@@ -200,6 +810,7 @@ SND_PCM_PLUGIN_DEFINE_FUNC(aaf)
 	aaf->io.name = "AVTP Audio Format (AAF) Plugin";
 	aaf->io.callback = &aaf_callback;
 	aaf->io.private_data = aaf;
+	aaf->io.flags = SND_PCM_IOPLUG_FLAG_BOUNDARY_WA;
 	res = snd_pcm_ioplug_create(&aaf->io, name, stream, mode);
 	if (res < 0) {
 		SNDERR("Failed to create ioplug instance");
diff --git a/doc/aaf.txt b/doc/aaf.txt
index 24ea888..b1a3f43 100644
--- a/doc/aaf.txt
+++ b/doc/aaf.txt
@@ -9,6 +9,46 @@ to transmit/receive audio samples through a Time-Sensitive Network (TSN)
 capable network. The plugin enables media applications to easily implement
 AVTP Talker and Listener functionalities.
+AVTP is designed to take advantage of generalized Precision Time Protocol
+(gPTP) and Forwarding and Queuing Enhancements for Time-Sensitive Streams
+(FQTSS).
+
+gPTP ensures AVTP talkers and listeners share the same time reference so the
+presentation time from AVTP can be used to inform when PCM samples should be
+presented to the application layer. Thus, in order to work properly, the
+plugin requires the system clock to be synchronized with the PTP time. Such
+functionality is provided by ptp4l and phc2sys from the Linuxptp project
+(linuxptp.sourceforge.net). ptp4l and phc2sys can be set up in many different
+ways; below we provide an example. For further information check ptp4l(8) and
+phc2sys(8).
+
+On the PTP master host run the following commands. Replace $IFNAME by your PTP
+capable NIC name. The gPTP.cfg file mentioned below can be found in
+/usr/share/doc/linuxptp/ (depending on your distro).
+
+	$ ptp4l -f gPTP.cfg -i $IFNAME
+	$ phc2sys -f gPTP.cfg -c $IFNAME -s CLOCK_REALTIME -w
+
+On the PTP slave host run:
+
+	$ ptp4l -f gPTP.cfg -i $IFNAME -s
+	$ phc2sys -f gPTP.cfg -a -r
+
+FQTSS provides bandwidth reservation and traffic prioritization for the AVTP
+stream. Thus, in order to work properly, the plugin requires FQTSS to be
+configured properly. The FQTSS feature is supported by the Linux Traffic
+Control system through the mqprio and cbs qdiscs. Below we provide an example
+of configuring those qdiscs in order to transmit an AAF stream with 48 kHz
+sampling rate, 16-bit sample size, stereo. For further information on how to
+configure them check the tc-mqprio(8) and tc-cbs(8) man pages.
+
+Configure mqprio (replace $HANDLE_ID by an unused handle ID):
+
+	$ tc qdisc add dev $IFNAME parent root handle $HANDLE_ID mqprio \
+		num_tc 3 map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
+		queues 1@0 1@1 2@2 hw 0
+
+Configure cbs:
+
+	$ tc qdisc replace dev $IFNAME parent $HANDLE_ID:1 cbs idleslope 5760 \
+		sendslope -994240 hicredit 9 locredit -89 offload 1
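+
+For reference, a rough back-of-the-envelope for the idleslope value above.
+It assumes 6 audio frames per AVTP packet (the value used in the examples
+later in this document); treat it as a sketch rather than a normative
+calculation:
+
+	packets per second:  48000 Hz / 6 frames = 8000
+	bytes per packet:    24 (payload: 6 frames x 2 ch x 2 B)
+	                   + 24 (AVTP AAF header)
+	                   + 18 (Ethernet header + VLAN tag)
+	                   +  4 (FCS) + 8 (preamble/SFD) + 12 (IPG) = 90
+	idleslope:           8000 1/s x 90 B x 8 bit/B = 5760 kbit/s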
Dependency
Hi Pierre,
On Mon, 2018-08-20 at 22:37 -0500, Pierre-Louis Bossart wrote:
+#define CLOCK_REF CLOCK_REALTIME
You should probably comment on why you use CLOCK_REALTIME for something that is driven by a media clock that's completely unrelated to CLOCK_REALTIME?
Sure, I'll add a comment here explaining why.
Just anticipating, the AAF plugin implements a media clock itself. To achieve that it relies on a periodic timer, which requires a clock id. CLOCK_REALTIME was chosen because we can synchronize it against the PTP clock (see instructions on doc/aaf.txt) and use it to get PTP time. PTP time is the time reference in an AVTP system e.g. time reference utilized to set presentation time.
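To illustrate the mechanism, here is a minimal sketch (not the plugin's actual code; the helper name and error handling are made up for this example) of a media clock built from a periodic timerfd on CLOCK_REALTIME, where each expiration marks one AVTP transmission interval:

#include <stdint.h>
#include <time.h>
#include <unistd.h>
#include <sys/timerfd.h>

#define NSEC_PER_SEC 1000000000ULL

/* Returns a timer fd whose expirations pace AVTP packet transmission,
 * or -1 on error. CLOCK_REALTIME only works as media clock reference
 * because phc2sys keeps it disciplined to the PTP clock.
 */
static int media_clock_start(unsigned int frames_per_pkt, unsigned int rate)
{
	uint64_t period_ns = (NSEC_PER_SEC * frames_per_pkt) / rate;
	struct itimerspec its = {
		.it_interval = { .tv_sec = period_ns / NSEC_PER_SEC,
				 .tv_nsec = period_ns % NSEC_PER_SEC },
		.it_value    = { .tv_sec = period_ns / NSEC_PER_SEC,
				 .tv_nsec = period_ns % NSEC_PER_SEC },
	};
	int fd = timerfd_create(CLOCK_REALTIME, 0);

	if (fd < 0)
		return -1;
	if (timerfd_settime(fd, 0, &its, NULL) < 0) {
		close(fd);
		return -1;
	}
	return fd; /* poll() for POLLIN, then read() the 8-byte expiration count */
}

Each POLLIN on such an fd is the point where the plugin would consume one packet's worth of frames from the audio buffer and transmit an AVTPDU.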
+static unsigned int alsa_to_avtp_format(snd_pcm_format_t format)
+{
+	switch (format) {
+	case SND_PCM_FORMAT_S16_BE:
we usually use S16_LE? I can't recall when I last used S16_BE...
AVTP specifies that the byte order of the samples is network order (i.e. big-endian). So, for simplicity, the plugin accepts only BE samples (see aaf_hw_params) at the moment.
Later on, we can extend the plugin to convert LE to BE, or rely on another plugin (e.g. linear) to do the conversion.
+		return AVTP_AAF_FORMAT_INT_16BIT;
+	case SND_PCM_FORMAT_S24_3BE:
this means 3 bytes without padding to 32-bit word, is this your definition as well?
Yes, that's AVTP definition.
+static unsigned int alsa_to_avtp_rate(unsigned int rate)
+{
+	switch (rate) {
+	case 8000:
+		return AVTP_AAF_PCM_NSR_8KHZ;
+	case 16000:
+		return AVTP_AAF_PCM_NSR_16KHZ;
+	case 24000:
+		return AVTP_AAF_PCM_NSR_24KHZ;
+	case 32000:
+		return AVTP_AAF_PCM_NSR_32KHZ;
+	case 44100:
+		return AVTP_AAF_PCM_NSR_44_1KHZ;
+	case 48000:
+		return AVTP_AAF_PCM_NSR_48KHZ;
+	case 88200:
+		return AVTP_AAF_PCM_NSR_88_2KHZ;
+	case 96000:
+		return AVTP_AAF_PCM_NSR_96KHZ;
+	case 176400:
+		return AVTP_AAF_PCM_NSR_176_4KHZ;
+	case 192000:
+		return AVTP_AAF_PCM_NSR_192KHZ;
You should align avtp_aaf definitions with ALSA, you are missing quite a few frequencies. e.g. 11.025, 64, 384kHz. If this is intentional add a comment to explain the restrictions.
This is intentional. AVTP doesn't support all frequencies. I'll add a comment as requested.
+static int aaf_init_areas(snd_pcm_aaf_t *aaf)
+{
+	snd_pcm_channel_area_t *audiobuf_areas, *payload_areas;
+	ssize_t sample_size, frame_size;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	sample_size = snd_pcm_format_size(io->format, 1);
+	if (sample_size < 0)
+		return sample_size;
+
+	frame_size = sample_size * io->channels;
+
+	audiobuf_areas = calloc(io->channels,
+				sizeof(snd_pcm_channel_area_t));
+	if (!audiobuf_areas)
+		return -ENOMEM;
+
+	payload_areas = calloc(io->channels,
+			       sizeof(snd_pcm_channel_area_t));
+	if (!payload_areas) {
+		free(audiobuf_areas);
+		return -ENOMEM;
+	}
+
+	for (unsigned int i = 0; i < io->channels; i++) {
+		audiobuf_areas[i].addr = aaf->audiobuf;
+		audiobuf_areas[i].first = i * sample_size * 8;
+		audiobuf_areas[i].step = frame_size * 8;
+
+		payload_areas[i].addr = aaf->pdu->avtp_payload;
+		payload_areas[i].first = i * sample_size * 8;
+		payload_areas[i].step = frame_size * 8;
not sure what 8 represents here?
Number of bits per byte.
'sample_size' and 'frame_size' are in bytes. 'first' and 'step' fields from 'snd_pcm_channel_area_t' are in bits.
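As a worked example of that conversion (illustration only, not code from the plugin): with S16_BE stereo, sample_size is 2 bytes and frame_size is 4 bytes, so channel 0 gets first = 0 bits, channel 1 gets first = 16 bits, and both channels get step = 32 bits.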
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;
is this always an integer? If not, don't you have a systematic arithmetic error?
NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.
+static int aaf_poll_revents(snd_pcm_ioplug_t *io, struct pollfd *pfd,
+			    unsigned int nfds, unsigned short *revents)
+{
+	int res;
+	snd_pcm_aaf_t *aaf = io->private_data;
+
+	if (nfds != FD_COUNT_PLAYBACK)
+		return -EINVAL;
+
+	if (pfd[0].revents & POLLIN) {
+		res = aaf_mclk_timeout_playback(aaf);
+		if (res < 0)
+			return res;
+
+		*revents = POLLIN;
+	}
I couldn't figure out how you use playback events and your timer.
Every time aaf->timer_fd expires, the audio buffer is consumed by the plugin, making some room available on the buffer. So here a POLLIN event is returned so alsa-lib layer can copy more data into the audio buffer.
When there are two audio clock sources or timers that's usually where the fun begins.
Regarding scenarios with two audio clock sources or timers, the plugin doesn't support them at the moment. This is something we should work on once the basic functionality is pushed upstream.
Thank you for your feedback.
Regards,
Andre
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;
is this always an integer? If not, don't you have a systematic arithmetic error?
NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.
No, I was talking about the fractional part, e.g. with 256 frames at 44.1kHz you have a period of 5804988.662131519274376 - so your math adds a truncation. Same with 48kHz, the fractional part is .333
I burned a number of my remaining neurons chasing a <100 ppb error which led to underruns after 10 hours, so careful now with truncation...
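To put numbers on it, taking the 44.1 kHz stream with 6 frames per packet discussed elsewhere in the thread as an example: the exact period is 10^9 * 6 / 44100 = 136054.42 ns, which truncates to 136054 ns. The timer then fires about 0.42 ns early per period; at 7350 periods per second that is roughly 3.1 us of drift per second (about 3 ppm), i.e. around 11 ms of audio per hour, so any finite buffer eventually underruns unless something compensates.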
+static int aaf_poll_revents(snd_pcm_ioplug_t *io, struct pollfd *pfd,
+			    unsigned int nfds, unsigned short *revents)
+{
+	int res;
+	snd_pcm_aaf_t *aaf = io->private_data;
+
+	if (nfds != FD_COUNT_PLAYBACK)
+		return -EINVAL;
+
+	if (pfd[0].revents & POLLIN) {
+		res = aaf_mclk_timeout_playback(aaf);
+		if (res < 0)
+			return res;
+
+		*revents = POLLIN;
+	}
I couldn't figure out how you use playback events and your timer.
Every time aaf->timer_fd expires, the audio buffer is consumed by the plugin, making some room available on the buffer. So here a POLLIN event is returned so alsa-lib layer can copy more data into the audio buffer.
When there are two audio clock sources or timers that's usually where the fun begins.
Regarding scenarios with two audio clock sources or timers, the plugin doesn't support them at the moment. This is something we should work on once the basic functionality is pushed upstream.
I was talking about adjusting the relationship between your CLOCK_REALTIME timer and the media/network clock. I don't quite get how this happens, I vaguely recall there should be a daemon which tracks the difference between local and media/network clock, and I don't see it here.
Thank you for your feedback.
Regards,
Andre
Hi Pierre,
On Tue, 2018-08-21 at 17:51 -0500, Pierre-Louis Bossart wrote:
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;
is this always an integer? If not, don't you have a systematic arithmetic error?
NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.
No, I was talking about the fractional part, e.g with 256 frames with 44.1kHz you have a period of 5804988.662131519274376 - so your math adds a truncation. same with 48khz, the fractional part is .333
I burned a number of my remaining neurons chasing a <100 ppb error which led to underruns after 10 hours, so careful now with truncation...
Thanks for clarifying.
Yes, we can end up having a fractional period which is truncated. Note that both 'frames' and 'rate' are configured by the user. The user should set 'frames' as multiple of 'rate' whenever possible to avoid inaccuracy.
From the plugin perspective, I'm not sure what we could do. Truncating might lead to underruns as you said, but I'm afraid that rounding up might lead to overruns, theoretically.
+static int aaf_poll_revents(snd_pcm_ioplug_t *io, struct pollfd *pfd,
+			    unsigned int nfds, unsigned short *revents)
+{
+	int res;
+	snd_pcm_aaf_t *aaf = io->private_data;
+
+	if (nfds != FD_COUNT_PLAYBACK)
+		return -EINVAL;
+
+	if (pfd[0].revents & POLLIN) {
+		res = aaf_mclk_timeout_playback(aaf);
+		if (res < 0)
+			return res;
+
+		*revents = POLLIN;
+	}
I couldn't figure out how you use playback events and your timer.
Every time aaf->timer_fd expires, the audio buffer is consumed by the plugin, making some room available on the buffer. So here a POLLIN event is returned so alsa-lib layer can copy more data into the audio buffer.
When there are two audio clock sources or timers that's usually where the fun begins.
Regarding scenarios with two audio clock sources or timers, the plugin doesn't support them at the moment. This is something we should work on once the basic functionality is pushed upstream.
I was talking about adjusting the relationship between your CLOCK_REALTIME timer and the media/network clock. I don't quite get how this happens, I vaguely recall there should be a daemon which tracks the difference between local and media/network clock, and I don't see it here.
Oh okay, I thought you were talking about something else :)
I believe you are referring to the gptp daemon from Openavnu [1]. The AAF plugin doesn't use it. Instead, it uses linuxptp [2] which is distributed by several Linux distros.
Linuxptp provides the phc2sys daemon that synchronizes both system clock (i.e. CLOCK_REALTIME) and network clock (i.e. PTP clock). The daemon disciplines the clocks instead of providing the time difference to applications. So we don't need to do any cross-timestamping at the plugin.
Regards,
Andre
[1] https://github.com/AVnu/OpenAvnu/tree/master/daemons/gptp
[2] http://linuxptp.sourceforge.net/
On 08/22/2018 07:46 PM, Guedes, Andre wrote:
Hi Pierre,
On Tue, 2018-08-21 at 17:51 -0500, Pierre-Louis Bossart wrote:
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;
is this always an integer? If not, don't you have a systematic arithmetic error?
NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.
No, I was talking about the fractional part, e.g with 256 frames with 44.1kHz you have a period of 5804988.662131519274376 - so your math adds a truncation. same with 48khz, the fractional part is .333
I burned a number of my remaining neurons chasing a <100 ppb error which led to underruns after 10 hours, so careful now with truncation...
Thanks for clarifying.
Yes, we can end up having a fractional period which is truncated. Note that both 'frames' and 'rate' are configured by the user. The user should set 'frames' as multiple of 'rate' whenever possible to avoid inaccuracy.
It's unlikely to happen. It's classic in audio that people want powers of two for fast filtering, and don't really care that the periods are fractional. If you cannot guarantee long-term operation without timing issues, you should add constraints to the frames and rates so that there is no surprise.
From the plugin perspective, I'm not sure what we could do. Truncating might lead to underruns as you said, but I'm afraid that rounding up might lead to overruns, theoretically.
Yes, you don't want to round-up either, you'd want to track when deviations become too high and compensate for it.
+static int aaf_poll_revents(snd_pcm_ioplug_t *io, struct pollfd *pfd,
+			    unsigned int nfds, unsigned short *revents)
+{
+	int res;
+	snd_pcm_aaf_t *aaf = io->private_data;
+
+	if (nfds != FD_COUNT_PLAYBACK)
+		return -EINVAL;
+
+	if (pfd[0].revents & POLLIN) {
+		res = aaf_mclk_timeout_playback(aaf);
+		if (res < 0)
+			return res;
+
+		*revents = POLLIN;
+	}
I couldn't figure out how you use playback events and your timer.
Every time aaf->timer_fd expires, the audio buffer is consumed by the plugin, making some room available on the buffer. So here a POLLIN event is returned so alsa-lib layer can copy more data into the audio buffer.
When there are two audio clock sources or timers that's usually where the fun begins.
Regarding scenarios with two audio clock sources or timers, the plugin doesn't support them at the moment. This is something we should work on once the basic functionality is pushed upstream.
I was talking about adjusting the relationship between your CLOCK_REALTIME timer and the media/network clock. I don't quite get how this happens, I vaguely recall there should be a daemon which tracks the difference between local and media/network clock, and I don't see it here.
Oh okay, I thought you were talking about something else :)
I believe you are referring to the gptp daemon from Openavnu [1]. The AAF plugin doesn't use it. Instead, it uses linuxptp [2] which is distributed by several Linux distros.
Linuxptp provides the phc2sys daemon that synchronizes both system clock (i.e. CLOCK_REALTIME) and network clock (i.e. PTP clock). The daemon disciplines the clocks instead of providing the time difference to applications. So we don't need to do any cross-timestamping at the plugin.
Humm, I don't get this. The CLOCK_REALTIME is based on the local oscillator + NTP updates. And the network clock isn't necessarily owned by the transmitter, so how do you adjust?
On Wed, 2018-08-22 at 21:25 -0500, Pierre-Louis Bossart wrote:
On 08/22/2018 07:46 PM, Guedes, Andre wrote:
On Tue, 2018-08-21 at 17:51 -0500, Pierre-Louis Bossart wrote:
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;
is this always an integer? If not, don't you have a systematic arithmetic error?
NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.
No, I was talking about the fractional part, e.g with 256 frames with 44.1kHz you have a period of 5804988.662131519274376 - so your math adds a truncation. same with 48khz, the fractional part is .333
I burned a number of my remaining neurons chasing a <100 ppb error which led to underruns after 10 hours, so careful now with truncation...
Thanks for clarifying.
Yes, we can end up having a fractional period which is truncated. Note that both 'frames' and 'rate' are configured by the user. The user should set 'frames' as multiple of 'rate' whenever possible to avoid inaccuracy.
It's unlikely to happen. it's classic in audio that people want powers of two for fast filtering, and don't really care that the periods are fractional. If you cannot guarantee long-term operation without timing issues, you should add constraints to the frames and rates so that there is no surprise.
Fair enough. So for now I'll add a constraint on frames and rates to ensure no surprises. Later we can revisit this and implement the compensation mechanism you described below.
From the plugin perspective, I'm not sure what we could do. Truncating might lead to underruns as you said, but I'm afraid that rounding up might lead to overruns, theoretically.
Yes, you don't want to round-up either, you'd want to track when deviations become too high and compensate for it.
+static int aaf_poll_revents(snd_pcm_ioplug_t *io, struct pollfd *pfd,
+			    unsigned int nfds, unsigned short *revents)
+{
+	int res;
+	snd_pcm_aaf_t *aaf = io->private_data;
+
+	if (nfds != FD_COUNT_PLAYBACK)
+		return -EINVAL;
+
+	if (pfd[0].revents & POLLIN) {
+		res = aaf_mclk_timeout_playback(aaf);
+		if (res < 0)
+			return res;
+
+		*revents = POLLIN;
+	}
I couldn't figure out how you use playback events and your timer.
Every time aaf->timer_fd expires, the audio buffer is consumed by the plugin, making some room available on the buffer. So here a POLLIN event is returned so alsa-lib layer can copy more data into the audio buffer.
When there are two audio clock sources or timers that's usually where the fun begins.
Regarding scenarios with two audio clock sources or timers, the plugin doesn't support them at the moment. This is something we should work on once the basic functionality is pushed upstream.
I was talking about adjusting the relationship between your CLOCK_REALTIME timer and the media/network clock. I don't quite get how this happens, I vaguely recall there should be a daemon which tracks the difference between local and media/network clock, and I don't see it here.
Oh okay, I thought you were talking about something else :)
I believe you are referring to the gptp daemon from Openavnu [1]. The AAF plugin doesn't use it. Instead, it uses linuxptp [2] which is distributed by several Linux distros.
Linuxptp provides the phc2sys daemon that synchronizes both system clock (i.e. CLOCK_REALTIME) and network clock (i.e. PTP clock). The daemon disciplines the clocks instead of providing the time difference to applications. So we don't need to do any cross-timestamping at the plugin.
Humm, I don't get this. The CLOCK_REALTIME is based on the local oscillator + NTP updates. And the network clock isn't necessarily owned by the transmitter, so how do you adjust?
When phc2sys is running, CLOCK_REALTIME is based on local oscillator + phc2sys updates. The daemon keeps adjusting CLOCK_REALTIME based on PTP clock via clock_adjtime syscall.
- Andre
Every time aaf->timer_fd expires, the audio buffer is consumed by the plugin, making some room available on the buffer. So here a POLLIN event is returned so alsa-lib layer can copy more data into the audio buffer.
When there are two audio clock sources or timers that's usually where the fun begins.
Regarding scenarios with two audio clock sources or timers, the plugin doesn't support them at the moment. This is something we should work on once the basic functionality is pushed upstream.
I was talking about adjusting the relationship between your CLOCK_REALTIME timer and the media/network clock. I don't quite get how this happens, I vaguely recall there should be a daemon which tracks the difference between local and media/network clock, and I don't see it here.
Oh okay, I thought you were talking about something else :)
I believe you are referring to the gptp daemon from Openavnu [1]. The AAF plugin doesn't use it. Instead, it uses linuxptp [2] which is distributed by several Linux distros.
Linuxptp provides the phc2sys daemon that synchronizes both system clock (i.e. CLOCK_REALTIME) and network clock (i.e. PTP clock). The daemon disciplines the clocks instead of providing the time difference to applications. So we don't need to do any cross-timestamping at the plugin.
Humm, I don't get this. The CLOCK_REALTIME is based on the local oscillator + NTP updates. And the network clock isn't necessarily owned by the transmitter, so how do you adjust?
When phc2sys is running, CLOCK_REALTIME is based on local oscillator + phc2sys updates. The daemon keeps adjusting CLOCK_REALTIME based on PTP clock via clock_adjtime syscall.
Is this a desirable "feature"? You may want to run other non-audio applications where the NTP clock makes sense, e.g. to know when your next call is, while playing audio on an external amp. Looks to me that you've simplified the problem too much, or this is for custom audio-only solutions. What am I missing?
On Thu, 2018-08-23 at 13:51 -0500, Pierre-Louis Bossart wrote:
Every time aaf->timer_fd expires, the audio buffer is consumed by the plugin, making some room available on the buffer. So here a POLLIN event is returned so alsa-lib layer can copy more data into the audio buffer.
When there are two audio clock sources or timers that's usually where the fun begins.
Regarding scenarios with two audio clock sources or timers, the plugin doesn't support them at the moment. This is something we should work on once the basic functionality is pushed upstream.
I was talking about adjusting the relationship between your CLOCK_REALTIME timer and the media/network clock. I don't quite get how this happens, I vaguely recall there should be a daemon which tracks the difference between local and media/network clock, and I don't see it here.
Oh okay, I thought you were talking about something else :)
I believe you are referring to the gptp daemon from Openavnu [1]. The AAF plugin doesn't use it. Instead, it uses linuxptp [2] which is distributed by several Linux distros.
Linuxptp provides the phc2sys daemon that synchronizes both system clock (i.e. CLOCK_REALTIME) and network clock (i.e. PTP clock). The daemon disciplines the clocks instead of providing the time difference to applications. So we don't need to do any cross-timestamping at the plugin.
Humm, I don't get this. The CLOCK_REALTIME is based on the local oscillator + NTP updates. And the network clock isn't necessarily owned by the transmitter, so how do you adjust?
When phc2sys is running, CLOCK_REALTIME is based on local oscillator + phc2sys updates. The daemon keeps adjusting CLOCK_REALTIME based on PTP clock via clock_adjtime syscall.
Is this a desirable "feature"? You may want to run other non-audio applications where the NTP clock makes sense, e.g. to know when your next call is, while playing audio on an external amp. Looks to me that you've simplified the problem too much, or this is for custom audio- only solutions. What am I missing?
The plugin requires both CLOCK_REALTIME and PTP to be synchronized, and this can add some usage scenarios limitation, indeed.
However, the scenario you described looks still feasible. For instance, at the host running as PTP master, we could have NTP disciplining CLOCK_REALTIME (ntp daemon) and CLOCK_REALTIME disciplining PTP (phc2sys daemon). At the hosts running as PTP slave, we have PTP disciplining CLOCK_REALTIME (phc2sys daemon). This way, CLOCK_REALTIME time from all systems is NTP time while CLOCK_REALTIME and PTP clock are in sync. Makes sense?
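A sketch of that daemon layout, reusing the commands from doc/aaf.txt (the NTP daemon invocation is distro-specific and omitted here):

	# PTP master host: NTP disciplines CLOCK_REALTIME; the NIC's PTP clock
	# then follows CLOCK_REALTIME.
	$ ptp4l -f gPTP.cfg -i $IFNAME
	$ phc2sys -f gPTP.cfg -c $IFNAME -s CLOCK_REALTIME -w

	# PTP slave hosts: CLOCK_REALTIME follows the PTP clock.
	$ ptp4l -f gPTP.cfg -i $IFNAME -s
	$ phc2sys -f gPTP.cfg -a -r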
- Andre
Hi,
On Aug 24 2018 03:32, Guedes, Andre wrote:
On Wed, 2018-08-22 at 21:25 -0500, Pierre-Louis Bossart wrote:
On 08/22/2018 07:46 PM, Guedes, Andre wrote:
On Tue, 2018-08-21 at 17:51 -0500, Pierre-Louis Bossart wrote:
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;
is this always an integer? If not, don't you have a systematic arithmetic error?
NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.
No, I was talking about the fractional part, e.g with 256 frames with 44.1kHz you have a period of 5804988.662131519274376 - so your math adds a truncation. same with 48khz, the fractional part is .333
I burned a number of my remaining neurons chasing a <100 ppb error which led to underruns after 10 hours, so careful now with truncation...
Thanks for clarifying.
Yes, we can end up having a fractional period which is truncated. Note that both 'frames' and 'rate' are configured by the user. The user should set 'frames' as multiple of 'rate' whenever possible to avoid inaccuracy.
It's unlikely to happen. it's classic in audio that people want powers of two for fast filtering, and don't really care that the periods are fractional. If you cannot guarantee long-term operation without timing issues, you should add constraints to the frames and rates so that there is no surprise.
Fair enough. So for now I'll add a constraint on frames and rates to unsure no surprises. Later we can revisit this and implement the compesation mechanism you described below.
In my understanding, transmission timing of 'AVTP Audio format' in IEEE 1722:2016 is similar to 'blocking transmission' of IEC 61883-6. Packets have fixed size of data in its payload, thus include the same number of PCM frames. Talkers are expected to fill data till the size, then transmit the packet. Receivers are expected to perform buffering till presentation timestamp is elapsed, with one (or more) AVTPDUs.
In clause 7.7 'AAF and SRP', I can see the following sentence: '... A 44.1-kHz stream with 6 samples per AAF AVTPDU and an FQTSS observation interval of 125 us also has an SRP reservation of 1 frame per observation interval even though there will periodically be an observation interval where no AAF AVTPDU will be transmitted since it has a transmission interval of 136.054 us as can be seen in the example given in Figure 36.' This means that packet transmission is not always periodic. There's a blank cycle every several cycles, like IEC 61883-1/6.
In my opinion, it's better to calculate a proper timerfd interval that creates the blank interval, without truncating the fraction. Then, give proper constraints to SND_PCM_IOPLUG_HW_PERIOD_BYTES to prevent applications from underruns.
Furthermore, in your proposal, the number of PCM frames in one AVTPDU is decided according to a plugin parameter. However, to be compliant with the specification, it's better to decide the number according to the 'FQTSS observation interval'. I can see recommendations for two cases, FQTSS observation intervals of 125 us and 250 us, in Table 17 and Table 18 of IEEE 1722:2016.
Regards
Takashi Sakamoto
Hi Takashi,
On Sat, 2018-08-25 at 17:13 +0900, Takashi Sakamoto wrote:
On Aug 24 2018 03:32, Guedes, Andre wrote:
On Wed, 2018-08-22 at 21:25 -0500, Pierre-Louis Bossart wrote:
On 08/22/2018 07:46 PM, Guedes, Andre wrote:
On Tue, 2018-08-21 at 17:51 -0500, Pierre-Louis Bossart wrote:
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;

is this always an integer? If not, don't you have a systematic arithmetic error?
NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.
No, I was talking about the fractional part, e.g with 256 frames with 44.1kHz you have a period of 5804988.662131519274376 - so your math adds a truncation. same with 48khz, the fractional part is .333
I burned a number of my remaining neurons chasing a <100 ppb error which led to underruns after 10 hours, so careful now with truncation...
Thanks for clarifying.
Yes, we can end up having a fractional period which is truncated. Note that both 'frames' and 'rate' are configured by the user. The user should set 'frames' as multiple of 'rate' whenever possible to avoid inaccuracy.
It's unlikely to happen. it's classic in audio that people want powers of two for fast filtering, and don't really care that the periods are fractional. If you cannot guarantee long-term operation without timing issues, you should add constraints to the frames and rates so that there is no surprise.
Fair enough. So for now I'll add a constraint on frames and rates to unsure no surprises. Later we can revisit this and implement the compesation mechanism you described below.
In my understanding, transmission timing of 'AVTP Audio format' in IEEE 1722:2016 is similar to 'blocking transmission' of IEC 61883-6. Packets have fixed size of data in its payload, thus include the same number of PCM frames. Talkers are expected to fill data till the size, then transmit the packet. Receivers are expected to perform buffering till presentation timestamp is elapsed, with one (or more) AVTPDUs.
I'm not familiar with the 'blocking transmission' of IEC 61883-6 but from the description above, yes, it looks similar indeed.
In clause 7.7 'AAF and SRP', I can see below sentence: '... A 44.1-kHz stream with 6 samples per AAF AVTPDU and an FQTSS observation interval of 125 us also has an SRP reservation of 1 frame per observation interval even though there will periodically be an observation interval where no AAF AVTPDU will be transmitted since it has a transmission interval of 136.054 us as can be seen in the example given in Figure 36.' This means that packet transmission is not always periodically. There's a blank cycle per several cycles; like IEC 61883-1/6.
My understanding of the periodicity of packet transmission is different. I believe it is always periodic. Let me elaborate on this.
The first paragraph from Section 7.7 states that "AAF transmission interval is defined by the clock rate of the media rather than the FQTSS observation interval." Since the clock is periodic, the AAF transmission interval is periodic too. For instance, if we take the 44.1 kHz example from Figure 36, we can see the AAF transmission interval is always ~136us. So, from AAF perspective (i.e. the plugin perspective), the packet transmission is always periodic.
The AAF transmission interval isn't necessarily equal to the FQTSS observation interval. Again, if we take a look at the 44.1 kHz example, we can see the AAF transmission interval is ~136us while the FQTSS observation interval is 125us. This, however, isn't an issue since the plugin is not expected to operate in terms of FQTSS observation interval, but in terms of AAF transmission interval as stated in the first paragraph from Section 7.7.
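For concreteness, the arithmetic behind that example: 6 frames / 44100 Hz is ~136.05 us per AAF AVTPDU, against the 125 us FQTSS observation interval, so on average about 650 of the 8000 observation intervals per second carry no AAF AVTPDU.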
In my opinion, it's better calculate proper interval of timerfd to create the black interval, without truncate the fraction. Then, give proper constrains to SND_PCM_IOPLUG_HW_PERIOD_BYTES to prevent applications from underrun.
If the above understanding is correct, I'm not sure this approach would work. Let me know otherwise.
Furthermore, in your proposal, the number of PCM frames in one AVTPDU is decided according to plugin parameter. However, if compliant to specification, it's better to decide the number according to 'FQTSS observation interval'. I can see recommendations in two cases; FQTSS = 125 us and 256 us, in Table 17 and 18 of IEEE 1212:2016.
It was designed that way on purpose. Let me share the rationale.
The values in Table 17 and 18 are just recommendations. From the plugin perspective, we should enable users to configure the number of frames according to their needs. This way, users are free to configure the values recommended by the spec or other values optimized to their AVTP application.
Besides that, as stated in the NOTE right below Table 18, 125us and 250us are not the only possible FQTSS observation intervals.
Thank you for your input.
Regards,
Andre
Hi Guedes,
On Aug 29 2018 10:00, Guedes, Andre wrote:
On Sat, 2018-08-25 at 17:13 +0900, Takashi Sakamoto wrote:
On Aug 24 2018 03:32, Guedes, Andre wrote:
On Wed, 2018-08-22 at 21:25 -0500, Pierre-Louis Bossart wrote:
On 08/22/2018 07:46 PM, Guedes, Andre wrote:
On Tue, 2018-08-21 at 17:51 -0500, Pierre-Louis Bossart wrote:
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;

is this always an integer? If not, don't you have a systematic arithmetic error?

NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.
No, I was talking about the fractional part, e.g with 256 frames with 44.1kHz you have a period of 5804988.662131519274376 - so your math adds a truncation. same with 48khz, the fractional part is .333
I burned a number of my remaining neurons chasing a <100 ppb error which led to underruns after 10 hours, so careful now with truncation...
Thanks for clarifying.
Yes, we can end up having a fractional period which is truncated. Note that both 'frames' and 'rate' are configured by the user. The user should set 'frames' as multiple of 'rate' whenever possible to avoid inaccuracy.
It's unlikely to happen. it's classic in audio that people want powers of two for fast filtering, and don't really care that the periods are fractional. If you cannot guarantee long-term operation without timing issues, you should add constraints to the frames and rates so that there is no surprise.
Fair enough. So for now I'll add a constraint on frames and rates to unsure no surprises. Later we can revisit this and implement the compesation mechanism you described below.
In my understanding, transmission timing of 'AVTP Audio format' in IEEE 1722:2016 is similar to 'blocking transmission' of IEC 61883-6. Packets have fixed size of data in its payload, thus include the same number of PCM frames. Talkers are expected to fill data till the size, then transmit the packet. Receivers are expected to perform buffering till presentation timestamp is elapsed, with one (or more) AVTPDUs.
I'm not familiar with the 'blocking transmission' of IEC 61883-6 but from the description above, yes, it looks similar indeed.
Further investigation, I realized that transmission timing of AAF is not similar to IEC 61883-1/6 at all... I'm sorry to address to it into this topic.
In clause 7.7 'AAF and SRP', I can see below sentence: '... A 44.1-kHz stream with 6 samples per AAF AVTPDU and an FQTSS observation interval of 125 us also has an SRP reservation of 1 frame per observation interval even though there will periodically be an observation interval where no AAF AVTPDU will be transmitted since it has a transmission interval of 136.054 us as can be seen in the example given in Figure 36.' This means that packet transmission is not always periodically. There's a blank cycle per several cycles; like IEC 61883-1/6.
My understanding of the periodicity of packet transmission is different. I believe it is always periodic. Let me elaborate on this.
The first paragraph from Section 7.7 states that "AAF transmission interval is defined by the clock rate of the media rather than the FQTSS observation interval." Since the clock is periodic, the AAF transmission interval is periodic too. For instance, if we take the 44.1 kHz example from Figure 36, we can see the AAF transmission interval is always ~136us. So, from AAF perspective (i.e. the plugin perspective), the packet transmission is always periodic.
The AAF transmission interval isn't necessarily equal to the FQTSS observation interval. Again, if we take a look at the 44.1 kHz example, we can see the AAF transmission interval is ~136us while the FQTSS observation interval is 125us. This, however, isn't an issue since the plugin is not expected to operate in terms of FQTSS observation interval, but in terms of AAF transmission interval as stated in the first paragraph from Section 7.7.
Indeed, thanks for your correction against my misunderstanding.
In my opinion, it's better calculate proper interval of timerfd to create the black interval, without truncate the fraction. Then, give proper constrains to SND_PCM_IOPLUG_HW_PERIOD_BYTES to prevent applications from underrun.
If the above understanding is correct, I'm not sure this approach would work. Let me know otherwise.
I have another concern of buffering in a perspectives of delay of task scheduling.
The interval of task scheduling for this plugin is decided mainly by the value of 'frames_per_pkt', given by users. In your documentation, the value is 6 [1]. Of course this is just an example, but in this case the interval is calculated as 125 us at 48.0 kHz. In my opinion, task scheduling in the Linux kernel brings deadline misses for such an interval in most cases, e.g. a major Linux distribution on an ordinary personal computer. Considering that recent motherboards implement Intel I210/220 series controllers, it's better to take care of such low-level realtime concerns, in my opinion.
Furthermore, in your proposal, the number of PCM frames in one AVTPDU is decided according to plugin parameter. However, if compliant to specification, it's better to decide the number according to 'FQTSS observation interval'. I can see recommendations in two cases; FQTSS = 125 us and 256 us, in Table 17 and 18 of IEEE 1212:2016.
It was designed that way on purpose. Let me share the rationale.
The values in Table 17 and 18 are just recommendations. From the plugin perspective, we should enable users to configure the number of frames according to their needs. This way, users are free to configure the values recommended by the spec or other values optimized to their AVTP application.
Besides that, as stated in the NOTE right below Table 18, 125us and 250us are not the only possible FQTSS observation intervals.
I was addressing the recommendation itself. If the specification describes recommendations, there will be a reason behind them worth considering, not just the values in the table.
I can see a sentence in section 7.7: 'in order to maintain interoperability between devices, the transmission intervals listed in Table 17 and Table 18 should be used.' I understand this as 'if a talker end station transfers an AVTP packet with a largely different number of sample frames from what the listener end station expects, there's a case where the listener cannot handle the sample frames due to processor load or buffer overflow on the listener side. To avoid this case, the FQTSS observation interval is loosely used to decide the interval of AVTPDU transmission.' But that's within my imagination.
[1] http://mailman.alsa-project.org/pipermail/alsa-devel/2018-August/139495.html
Thanks
Takashi Sakamoto
Hi Takashi,
On Fri, 2018-08-31 at 13:33 +0900, Takashi Sakamoto wrote:
Hi Guedes,
On Aug 29 2018 10:00, Guedes, Andre wrote:
On Sat, 2018-08-25 at 17:13 +0900, Takashi Sakamoto wrote:
On Aug 24 2018 03:32, Guedes, Andre wrote:
On Wed, 2018-08-22 at 21:25 -0500, Pierre-Louis Bossart wrote:
On 08/22/2018 07:46 PM, Guedes, Andre wrote:
On Tue, 2018-08-21 at 17:51 -0500, Pierre-Louis Bossart wrote:

+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;

is this always an integer? If not, don't you have a systematic arithmetic error?

NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.

No, I was talking about the fractional part, e.g with 256 frames with 44.1kHz you have a period of 5804988.662131519274376 - so your math adds a truncation. same with 48khz, the fractional part is .333

I burned a number of my remaining neurons chasing a <100 ppb error which led to underruns after 10 hours, so careful now with truncation...
Thanks for clarifying.
Yes, we can end up having a fractional period which is truncated. Note that both 'frames' and 'rate' are configured by the user. The user should set 'frames' as multiple of 'rate' whenever possible to avoid inaccuracy.
It's unlikely to happen. it's classic in audio that people want powers of two for fast filtering, and don't really care that the periods are fractional. If you cannot guarantee long-term operation without timing issues, you should add constraints to the frames and rates so that there is no surprise.
Fair enough. So for now I'll add a constraint on frames and rates to unsure no surprises. Later we can revisit this and implement the compesation mechanism you described below.
In my understanding, transmission timing of 'AVTP Audio format' in IEEE 1722:2016 is similar to 'blocking transmission' of IEC 61883-6. Packets have fixed size of data in its payload, thus include the same number of PCM frames. Talkers are expected to fill data till the size, then transmit the packet. Receivers are expected to perform buffering till presentation timestamp is elapsed, with one (or more) AVTPDUs.
I'm not familiar with the 'blocking transmission' of IEC 61883-6 but from the description above, yes, it looks similar indeed.
Further investigation, I realized that transmission timing of AAF is not similar to IEC 61883-1/6 at all... I'm sorry to address to it into this topic.
In clause 7.7 'AAF and SRP', I can see below sentence: '... A 44.1-kHz stream with 6 samples per AAF AVTPDU and an FQTSS observation interval of 125 us also has an SRP reservation of 1 frame per observation interval even though there will periodically be an observation interval where no AAF AVTPDU will be transmitted since it has a transmission interval of 136.054 us as can be seen in the example given in Figure 36.' This means that packet transmission is not always periodically. There's a blank cycle per several cycles; like IEC 61883-1/6.
My understanding of the periodicity of packet transmission is different. I believe it is always periodic. Let me elaborate on this.
The first paragraph from Section 7.7 states that "AAF transmission interval is defined by the clock rate of the media rather than the FQTSS observation interval." Since the clock is periodic, the AAF transmission interval is periodic too. For instance, if we take the 44.1 kHz example from Figure 36, we can see the AAF transmission interval is always ~136us. So, from AAF perspective (i.e. the plugin perspective), the packet transmission is always periodic.
The AAF transmission interval isn't necessarily equal to the FQTSS observation interval. Again, if we take a look at the 44.1 kHz example, we can see the AAF transmission interval is ~136us while the FQTSS observation interval is 125us. This, however, isn't an issue since the plugin is not expected to operate in terms of FQTSS observation interval, but in terms of AAF transmission interval as stated in the first paragraph from Section 7.7.
Indeed, thanks for your correction against my misunderstanding.
In my opinion, it's better calculate proper interval of timerfd to create the black interval, without truncate the fraction. Then, give proper constrains to SND_PCM_IOPLUG_HW_PERIOD_BYTES to prevent applications from underrun.
If the above understanding is correct, I'm not sure this approach would work. Let me know otherwise.
I have another concern of buffering in a perspectives of delay of task scheduling.
The interval of task scheduling for this plugin is decided mainly by the value of 'frames_per_pkt', given by users. In your documentation, the value is 6[1]. Of cource this is an example but in this case the interval is calculated as 125us at 48.0kHz. In my opinion, task scheduling in Linux kernel brings deadline misses for the interval, in most cases such as major Linux distribution on usual personal computers. When considering about the fact that recent motherboards implements Intel I210/220 series, it's better to care for the low- level realtime systems, in my opinion.
Agreed.
To mitigate this scheduling issue, a follow-up patchset will extend the plugin to leverage the ETF qdisc [1] which will be available on next kernel release. The idea is: instead of sending one AVTP packet at every interval, the plugin will send several AVTP packets at once, configuring their Tx time accordingly, so it can "sleep" for longer periods of time. The goal is to ease on task scheduling by offloading packet transmissions to the NIC.
Furthermore, in your proposal, the number of PCM frames in one AVTPDU is decided according to plugin parameter. However, if compliant to specification, it's better to decide the number according to 'FQTSS observation interval'. I can see recommendations in two cases; FQTSS = 125 us and 256 us, in Table 17 and 18 of IEEE 1212:2016.
It was designed that way on purpose. Let me share the rationale.
The values in Table 17 and 18 are just recommendations. From the plugin perspective, we should enable users to configure the number of frames according to their needs. This way, users are free to configure the values recommended by the spec or other values optimized to their AVTP application.
Besides that, as stated in the NOTE right below Table 18, 125us and 250us are not the only possible FQTSS observation intervals.
I addressed to the reccomendation itself. If specification describes recommendations, there will be a reason to consider about it, just not refer to values on the table.
The point I was trying to make is that it's up to the users to configure the plugin with the spec recommended values (to ensure interoperability) or their own custom/application-specific values. For example, for automotive use-cases, there are defined intervals equal to 1333.33us and 1451.25us (see Table 15 from [2]) that are not covered in Table 17 and 18.
Regards,
Andre
[1] https://marc.info/?l=linux-netdev&m=153065819412748
[2] https://avnu.org/wp-content/uploads/2014/05/Automotive-Ethernet-AVB-Func-Interop-Spec-v1.5-Public.pdf
I can see a sentence in section 7.7; 'in order to maintain interoperability between devices, the transmission intervals listed in Table 17 and Table 18 should be used.' I understand that 'if a talker end station transfers an AVTP packet with largely different number of sample frames from expectations on listener end station, there's a case that the listener cannot handles the sample frames due to processor loading or buffer overflow on listener side. To avoid this case, FQTSS observation interval is loosely used to decide intervals of AVTPDU transmission'. But it's within my imagination.
[1] http://mailman.alsa-project.org/pipermail/alsa-devel/2018-August/139495.html
Thanks
Takashi Sakamoto
Hi,
On Sep 1 2018 08:18, Guedes, Andre wrote:
I have another concern of buffering in a perspectives of delay of task scheduling.
The interval of task scheduling for this plugin is decided mainly by the value of 'frames_per_pkt', given by users. In your documentation, the value is 6[1]. Of cource this is an example but in this case the interval is calculated as 125us at 48.0kHz. In my opinion, task scheduling in Linux kernel brings deadline misses for the interval, in most cases such as major Linux distribution on usual personal computers. When considering about the fact that recent motherboards implements Intel I210/220 series, it's better to care for the low- level realtime systems, in my opinion.
Agreed.
To mitigate this scheduling issue, a follow-up patchset will extend the plugin to leverage the ETF qdisc [1] which will be available on next kernel release. The idea is: instead of sending one AVTP packet at every interval, the plugin will send several AVTP packets at once, configuring their Tx time accordingly, so it can "sleep" for longer periods of time. The goal is to ease on task scheduling by offloading packet transmissions to the NIC.
A feature to queue packets with a transmission time is mandatory for applications implementing this kind of time-aware packet transmission protocol on a general purpose operating system, which gives fewer guarantees about real-time capability.
In the case of IEC 61883-1/6 on the IEEE 1394 bus, 1394 OHCI controllers [1] allow applications to queue several packets to the corresponding isochronous cycle. As a result, the controller can transfer each packet at its isochronous cycle.
Out of curiosity, may I ask you to explain the usage of the ETF qdisc for this kind of application? How does the relevant hardware guarantee the transmission timing of queued packets? Or does the network stack in the Linux kernel govern the transmission timing in the proposed patchset? [2]
[1] 1394 Open Host Controller Interface Specification Release 1.1 (2000, Promoters of the 1394 Open HCI)
    http://download.microsoft.com/download/1/6/1/161ba512-40e2-4cc9-843a-923143f...
[2] [PATCH v2 net-next 00/14] Scheduled packet Transmission: ETF
    https://marc.info/?l=linux-netdev&m=153065819412748
Thanks
Takashi Sakamoto
Hi Takashi,
On Mon, 2018-09-03 at 10:24 +0900, Takashi Sakamoto wrote:
Hi,
On Sep 1 2018 08:18, Guedes, Andre wrote:
I have another concern of buffering in a perspectives of delay of task scheduling.
The interval of task scheduling for this plugin is decided mainly by the value of 'frames_per_pkt', given by users. In your documentation, the value is 6[1]. Of cource this is an example but in this case the interval is calculated as 125us at 48.0kHz. In my opinion, task scheduling in Linux kernel brings deadline misses for the interval, in most cases such as major Linux distribution on usual personal computers. When considering about the fact that recent motherboards implements Intel I210/220 series, it's better to care for the low- level realtime systems, in my opinion.
Agreed.
To mitigate this scheduling issue, a follow-up patchset will extend the plugin to leverage the ETF qdisc [1] which will be available on next kernel release. The idea is: instead of sending one AVTP packet at every interval, the plugin will send several AVTP packets at once, configuring their Tx time accordingly, so it can "sleep" for longer periods of time. The goal is to ease on task scheduling by offloading packet transmissions to the NIC.
A feature to queue packets with a transmission timing is mandatory for applications to implement this kind of time-awareness packet transmission protocol on general purpose operating system which has less guarantees of its real-time capability.
In a case of IEC 61882-1/6 on IEEE 1394 bus, controllers of 1394 OHCI[1] allows applications to queue several packets to corresponding isochronous cycle. As a result, the controller can transfer each packet at each isochronous cycle.
From my curiousity, would I ask you to explain about usage of the ETF 'qdisk' for this kind of applications?
That patchset introduces the SO_TXTIME sockopt which enables user-space applications to specify when a given packet should be transmitted. The ETF qdisc ensures that packets from multiple user-space applications are sent to the network controller in the right order (earliest txtime first). The controller then sends packets to the network at the txtime configured by the user.
In the AAF plugin case, it will prepare several AVTPDUs, configure their txtime so the transmission interval is respected, and offload them to the kernel/NIC.
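For reference, a rough sketch of how such a per-packet launch time is conveyed (assumptions: Linux 4.19+ headers, an ETF qdisc installed on the TX queue with a matching clockid; function names here are illustrative, not from the plugin):

#include <linux/net_tstamp.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <time.h>

#ifndef SO_TXTIME
#define SO_TXTIME 61
#define SCM_TXTIME SO_TXTIME
#endif

/* Enable per-packet launch times on an already-open AF_PACKET socket.
 * The clockid must match the one configured on the ETF qdisc.
 */
static int enable_txtime(int sk_fd, clockid_t clkid)
{
	struct sock_txtime cfg = { .clockid = clkid, .flags = 0 };

	return setsockopt(sk_fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg));
}

/* Queue one AVTPDU for transmission at txtime_ns (nanoseconds in the
 * clockid's timebase), carried in an SCM_TXTIME control message.
 */
static ssize_t send_at(int sk_fd, const void *pdu, size_t len, uint64_t txtime_ns)
{
	char control[CMSG_SPACE(sizeof(txtime_ns))] = {0};
	struct iovec iov = { .iov_base = (void *)pdu, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = control,
		.msg_controllen = sizeof(control),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_TXTIME;
	cmsg->cmsg_len = CMSG_LEN(sizeof(txtime_ns));
	memcpy(CMSG_DATA(cmsg), &txtime_ns, sizeof(txtime_ns));

	return sendmsg(sk_fd, &msg, 0);
}

A batch of AVTPDUs can then be prepared in a single wakeup, each with a txtime one transmission interval after the previous one, and handed to the kernel/NIC in one go.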
How relevant hardware guarantees timing of transmission for queued packets?
In [1] you can find some performance measurements. The highlight is "Using so_txtime, the peak to peak jitter is about 100 nanoseconds, independent of the period."
Or network stack on Linux kernel govern the transmission timing in the proposed patchset?[2].
If the NIC doesn't support the scheduled transmission feature, yes, the kernel governs the transmission.
Hope this clarifies.
Regards,
Andre
Hi Pierre,
On Thu, 2018-08-23 at 11:32 -0700, Andre Guedes wrote:
On Wed, 2018-08-22 at 21:25 -0500, Pierre-Louis Bossart wrote:
On 08/22/2018 07:46 PM, Guedes, Andre wrote:
On Tue, 2018-08-21 at 17:51 -0500, Pierre-Louis Bossart wrote:
+static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf)
+{
+	int res;
+	struct timespec now;
+	struct itimerspec itspec;
+	snd_pcm_ioplug_t *io = &aaf->io;
+
+	res = clock_gettime(CLOCK_REF, &now);
+	if (res < 0) {
+		SNDERR("Failed to get time from clock");
+		return -errno;
+	}
+
+	aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) /
+			   io->rate;
is this always an integer? If not, don't you have a systematic arithmetic error?
NSEC_PER_SEC is 64-bit so I don't see an arithmetic error during calculation (e.g. integer overflow). Not sure this was your concern, though. Let me know otherwise.
No, I was talking about the fractional part, e.g with 256 frames with 44.1kHz you have a period of 5804988.662131519274376 - so your math adds a truncation. same with 48khz, the fractional part is .333
I burned a number of my remaining neurons chasing a <100 ppb error which led to underruns after 10 hours, so careful now with truncation...
Thanks for clarifying.
Yes, we can end up having a fractional period which is truncated. Note that both 'frames' and 'rate' are configured by the user. The user should set 'frames' as multiple of 'rate' whenever possible to avoid inaccuracy.
It's unlikely to happen. it's classic in audio that people want powers of two for fast filtering, and don't really care that the periods are fractional. If you cannot guarantee long-term operation without timing issues, you should add constraints to the frames and rates so that there is no surprise.
Fair enough. So for now I'll add a constraint on frames and rates to unsure no surprises. Later we can revisit this and implement the compesation mechanism you described below.
I've been thinking about this potential underrun situation you described, and I'd like to get your input on the following assessment.
In a typical scenario using real audio hardware, we have one thread producing data to the audio buffer (application thread) and another thread consuming data from the buffer (driver thread). If the consuming thread is slightly faster than the producing one, we end up having an underrun error.
In the AAF plugin scenario, however, we have only one thread (the application thread) that is responsible for both producing and consuming data to/from the buffer. Once the buffer is full, the thread consumes data from the buffer, transmits an AVTP packet, copies more data to the buffer, and blocks until the next transmission time. Since it is single-threaded, the potential underrun situation you described doesn't seem to apply (even if the consumption rate is slightly faster than the nominal rate due to the period truncation).
Does the above make sense to you?
BTW, I have been running an experiment with 44.1 kHz and 6 frames for 72+ hours and haven't seen any xrun error so far.
Regards,
Andre
Hi,
On Aug 21 2018 10:06, Andre Guedes wrote:
This patch implements playback mode support in the AAF plugin. Simply put, this mode works as follows: PCM samples provided by the alsa-lib layer are encapsulated into AVTP packets and transmitted through the network. In summary, the playback mode implements a typical AVTP Talker.
When the AAF device is put in the running state, its media clock is started. At every tick of the media clock, audio frames are consumed from the audio buffer, encapsulated into an AVTP packet, and transmitted to the network. The presentation time of each AVTP packet is calculated taking into consideration the maximum transit time and time uncertainty values configured by the user.
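As a rough sketch of that calculation (hypothetical helper and parameter names, not the plugin's actual code), a talker-side timestamp could be derived like this, assuming the system clock is kept in sync with the gPTP domain:

/* Sketch only: stamp an AVTP presentation time from the current time plus
 * the user-configured maximum transit time and time uncertainty margins.
 * 'mtt_ns' and 't_uncertainty_ns' are hypothetical parameter names.
 */
#include <stdint.h>
#include <time.h>

static uint32_t calc_avtp_timestamp(uint64_t mtt_ns, uint64_t t_uncertainty_ns)
{
	struct timespec now;
	uint64_t ptime;

	/* Assumes the system clock tracks the gPTP domain (e.g. via phc2sys). */
	clock_gettime(CLOCK_TAI, &now);
	ptime = (uint64_t)now.tv_sec * 1000000000ULL + now.tv_nsec
		+ mtt_ns + t_uncertainty_ns;

	/* AVTP carries only the lower 32 bits of the presentation time. */
	return (uint32_t)(ptime & 0xFFFFFFFFULL);
}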
Below follows some discussion about implementation details:
Even though only one file descriptor is used to implement the playback mode, this patch doesn't leverage ioplug->poll_fd but defines poll callbacks instead. The reason is that these callbacks will be required to support capture mode (to be implemented by an upcoming patch).
The TSN data plane interface is the AF_PACKET socket family so the plugin uses an AF_PACKET socket to send/receive AVTP packets. Linux requires CAP_NET_RAW capability in order to open an AF_PACKET socket so the application that instantiates the plugin must have it. For further info about AF_PACKET socket family see packet(7).
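For reference, the socket setup described above looks roughly like the sketch below; this is a simplified illustration with a hypothetical helper name, not the plugin's exact code (see aaf_init_socket() in the diff).

/* Sketch only: open an AF_PACKET socket for the AVTP ethertype and resolve
 * the interface index. Fails with EPERM unless the caller has CAP_NET_RAW.
 */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

static int open_avtp_socket(const char *ifname, int *ifindex)
{
	struct ifreq req;
	int fd;

	/* ETH_P_TSN (0x22F0) is the AVTP ethertype from <linux/if_ether.h>. */
	fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_TSN));
	if (fd < 0)
		return -1;

	memset(&req, 0, sizeof(req));
	snprintf(req.ifr_name, sizeof(req.ifr_name), "%s", ifname);
	if (ioctl(fd, SIOCGIFINDEX, &req) < 0) {
		close(fd);
		return -1;
	}

	*ifindex = req.ifr_ifindex;
	return fd;
}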
Signed-off-by: Andre Guedes andre.guedes@intel.com
aaf/pcm_aaf.c | 611 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ doc/aaf.txt | 40 ++++ 2 files changed, 651 insertions(+)
diff --git a/aaf/pcm_aaf.c b/aaf/pcm_aaf.c
index 4c6f031..72f6652 100644
--- a/aaf/pcm_aaf.c
+++ b/aaf/pcm_aaf.c
...
+static int aaf_hw_params(snd_pcm_ioplug_t *io,
+			 snd_pcm_hw_params_t *params ATTRIBUTE_UNUSED)
+{
+	int res;
+	snd_pcm_aaf_t *aaf = io->private_data;
+
+	if (io->access != SND_PCM_ACCESS_RW_INTERLEAVED)
+		return -ENOTSUP;
+
+	if (io->buffer_size > LONG_MAX)
+		return -EINVAL;
+
+	/* XXX: We might want to support Little Endian format in future. To
+	 * achieve that, we need to convert LE samples to BE before
+	 * transmitting them.
+	 */
+	switch (io->format) {
+	case SND_PCM_FORMAT_S16_BE:
+	case SND_PCM_FORMAT_S24_3BE:
+	case SND_PCM_FORMAT_S32_BE:
+	case SND_PCM_FORMAT_FLOAT_BE:
+		break;
+	default:
+		return -ENOTSUP;
+	}
+
+	switch (io->rate) {
+	case 8000:
+	case 16000:
+	case 24000:
+	case 32000:
+	case 44100:
+	case 48000:
+	case 88200:
+	case 96000:
+	case 176400:
+	case 192000:
+		break;
+	default:
+		return -ENOTSUP;
+	}
+
+	aaf->buffer_size = io->buffer_size;
+
+	res = aaf_init_pdu(aaf);
+	if (res < 0)
+		return res;
+
+	res = aaf_init_audio_buffer(aaf);
+	if (res < 0)
+		goto err_free_pdu;
+
+	res = aaf_init_areas(aaf);
+	if (res < 0)
+		goto err_free_audiobuf;
+
+	return 0;
+
+err_free_audiobuf:
+	free(aaf->audiobuf);
+err_free_pdu:
+	free(aaf->pdu);
+	return res;
+}
...
@@ -186,12 +787,21 @@ SND_PCM_PLUGIN_DEFINE_FUNC(aaf)
 	snd_pcm_aaf_t *aaf;
 	int res;
 
+	/* For now the plugin only supports Playback mode i.e. AAF Talker
+	 * functionality.
+	 */
+	if (stream != SND_PCM_STREAM_PLAYBACK)
+		return -EINVAL;
+
 	aaf = calloc(1, sizeof(*aaf));
 	if (!aaf) {
 		SNDERR("Failed to allocate memory");
 		return -ENOMEM;
 	}
 
+	aaf->sk_fd = -1;
+	aaf->timer_fd = -1;
+
 	res = aaf_load_config(aaf, conf);
 	if (res < 0)
 		goto err;
@@ -200,6 +810,7 @@ SND_PCM_PLUGIN_DEFINE_FUNC(aaf)
 	aaf->io.name = "AVTP Audio Format (AAF) Plugin";
 	aaf->io.callback = &aaf_callback;
 	aaf->io.private_data = aaf;
+	aaf->io.flags = SND_PCM_IOPLUG_FLAG_BOUNDARY_WA;
 
 	res = snd_pcm_ioplug_create(&aaf->io, name, stream, mode);
 	if (res < 0) {
 		SNDERR("Failed to create ioplug instance");
In the design of the ALSA external plugin SDK[1], you can apply constraints to the available hw_params/sw_params by calling 'snd_pcm_ioplug_set_param_minmax()' and 'snd_pcm_ioplug_set_param_list()' at the entry point. Rather than programming an error path in the .hw_params callback for unexpected parameters, it's better to apply the constraints in advance.
Do you have any reason not to do it? Is it difficult to retrieve the capabilities of the target AVTP end station at the entry point?
[1] http://www.alsa-project.org/alsa-doc/alsa-lib/pcm_external_plugins.html
Regards
Takashi Sakamoto
Hi Takashi,
On Tue, 2018-08-21 at 13:31 +0900, Takashi Sakamoto wrote:
In the design of the ALSA external plugin SDK[1], you can apply constraints to the available hw_params/sw_params by calling 'snd_pcm_ioplug_set_param_minmax()' and 'snd_pcm_ioplug_set_param_list()' at the entry point. Rather than programming an error path in the .hw_params callback for unexpected parameters, it's better to apply the constraints in advance.
Understood.
Do you have any reason not to do it? Is it difficult to retrieve the capabilities of the target AVTP end station at the entry point?
No reason, I was just not aware of it. I'll change the code to take advantage of this.
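For what it's worth, a minimal sketch of what that could look like (not the actual follow-up change; the lists simply mirror the values currently checked in .hw_params, and the helper name is hypothetical):

/* Sketch only: constrain access, format and rate at plugin creation time,
 * right after snd_pcm_ioplug_create(), instead of rejecting them later in
 * the .hw_params callback.
 */
#include <alsa/asoundlib.h>
#include <alsa/pcm_external.h>

#define N_ELEMENTS(a) (sizeof(a) / sizeof((a)[0]))

static int aaf_set_hw_constraints(snd_pcm_ioplug_t *io)
{
	static const unsigned int accesses[] = {
		SND_PCM_ACCESS_RW_INTERLEAVED,
	};
	static const unsigned int formats[] = {
		SND_PCM_FORMAT_S16_BE, SND_PCM_FORMAT_S24_3BE,
		SND_PCM_FORMAT_S32_BE, SND_PCM_FORMAT_FLOAT_BE,
	};
	static const unsigned int rates[] = {
		8000, 16000, 24000, 32000, 44100, 48000,
		88200, 96000, 176400, 192000,
	};
	int res;

	res = snd_pcm_ioplug_set_param_list(io, SND_PCM_IOPLUG_HW_ACCESS,
					    N_ELEMENTS(accesses), accesses);
	if (res < 0)
		return res;

	res = snd_pcm_ioplug_set_param_list(io, SND_PCM_IOPLUG_HW_FORMAT,
					    N_ELEMENTS(formats), formats);
	if (res < 0)
		return res;

	return snd_pcm_ioplug_set_param_list(io, SND_PCM_IOPLUG_HW_RATE,
					     N_ELEMENTS(rates), rates);
}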
Thank you for your feedback.
- Andre
The plugin code assumes only Playback mode is supported. This patch prepares the code to support both Playback and Capture modes. Capture mode support is implemented by a follow-up patch.
Signed-off-by: Andre Guedes andre.guedes@intel.com --- aaf/pcm_aaf.c | 156 ++++++++++++++++++++++++++++++++++------------------------ 1 file changed, 91 insertions(+), 65 deletions(-)
diff --git a/aaf/pcm_aaf.c b/aaf/pcm_aaf.c index 72f6652..e098247 100644 --- a/aaf/pcm_aaf.c +++ b/aaf/pcm_aaf.c @@ -234,6 +234,7 @@ static int aaf_init_socket(snd_pcm_aaf_t *aaf) { int fd, res; struct ifreq req; + snd_pcm_ioplug_t *io = &aaf->io;
fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_TSN)); if (fd < 0) { @@ -255,12 +256,17 @@ static int aaf_init_socket(snd_pcm_aaf_t *aaf) aaf->sk_addr.sll_ifindex = req.ifr_ifindex; memcpy(&aaf->sk_addr.sll_addr, aaf->addr, ETH_ALEN);
- res = setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &aaf->prio, - sizeof(aaf->prio)); - if (res < 0) { - SNDERR("Failed to set socket priority"); - res = -errno; - goto err; + if (io->stream == SND_PCM_STREAM_PLAYBACK) { + res = setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &aaf->prio, + sizeof(aaf->prio)); + if (res < 0) { + SNDERR("Failed to set socket priority"); + res = -errno; + goto err; + } + } else { + /* TODO: Implement Capture mode support. */ + return -ENOTSUP; }
aaf->sk_fd = fd; @@ -300,46 +306,50 @@ static int aaf_init_pdu(snd_pcm_aaf_t *aaf) if (!pdu) return -ENOMEM;
- res = avtp_aaf_pdu_init(pdu); - if (res < 0) - goto err; + if (io->stream == SND_PCM_STREAM_PLAYBACK) { + res = avtp_aaf_pdu_init(pdu); + if (res < 0) + goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_TV, 1); - if (res < 0) - goto err; + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_TV, 1); + if (res < 0) + goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_STREAM_ID, aaf->streamid); - if (res < 0) - goto err; + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_STREAM_ID, + aaf->streamid); + if (res < 0) + goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_FORMAT, - alsa_to_avtp_format(io->format)); - if (res < 0) - goto err; + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_FORMAT, + alsa_to_avtp_format(io->format)); + if (res < 0) + goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_NSR, - alsa_to_avtp_rate(io->rate)); - if (res < 0) - goto err; + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_NSR, + alsa_to_avtp_rate(io->rate)); + if (res < 0) + goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_CHAN_PER_FRAME, - io->channels); - if (res < 0) - goto err; + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_CHAN_PER_FRAME, + io->channels); + if (res < 0) + goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_BIT_DEPTH, - snd_pcm_format_width(io->format)); - if (res < 0) - goto err; + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_BIT_DEPTH, + snd_pcm_format_width(io->format)); + if (res < 0) + goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_STREAM_DATA_LEN, - payload_size); - if (res < 0) - goto err; + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_STREAM_DATA_LEN, + payload_size); + if (res < 0) + goto err;
- res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_SP, AVTP_AAF_PCM_SP_NORMAL); - if (res < 0) - goto err; + res = avtp_aaf_pdu_set(pdu, AVTP_AAF_FIELD_SP, + AVTP_AAF_PCM_SP_NORMAL); + if (res < 0) + goto err; + }
aaf->pdu = pdu; aaf->pdu_size = pdu_size; @@ -665,7 +675,10 @@ static snd_pcm_sframes_t aaf_pointer(snd_pcm_ioplug_t *io)
static int aaf_poll_descriptors_count(snd_pcm_ioplug_t *io ATTRIBUTE_UNUSED) { - return FD_COUNT_PLAYBACK; + if (io->stream == SND_PCM_STREAM_PLAYBACK) + return FD_COUNT_PLAYBACK; + else + return -ENOTSUP; }
static int aaf_poll_descriptors(snd_pcm_ioplug_t *io, struct pollfd *pfd, @@ -673,11 +686,17 @@ static int aaf_poll_descriptors(snd_pcm_ioplug_t *io, struct pollfd *pfd, { snd_pcm_aaf_t *aaf = io->private_data;
- if (space != FD_COUNT_PLAYBACK) - return -EINVAL; + if (io->stream == SND_PCM_STREAM_PLAYBACK) { + if (space != FD_COUNT_PLAYBACK) + return -EINVAL; + + pfd[0].fd = aaf->timer_fd; + pfd[0].events = POLLIN; + } else { + /* TODO: Implement Capture mode support. */ + return -ENOTSUP; + }
- pfd[0].fd = aaf->timer_fd; - pfd[0].events = POLLIN; return space; }
@@ -687,15 +706,20 @@ static int aaf_poll_revents(snd_pcm_ioplug_t *io, struct pollfd *pfd, int res; snd_pcm_aaf_t *aaf = io->private_data;
- if (nfds != FD_COUNT_PLAYBACK) - return -EINVAL; + if (io->stream == SND_PCM_STREAM_PLAYBACK) { + if (nfds != FD_COUNT_PLAYBACK) + return -EINVAL;
- if (pfd[0].revents & POLLIN) { - res = aaf_mclk_timeout_playback(aaf); - if (res < 0) - return res; + if (pfd[0].revents & POLLIN) { + res = aaf_mclk_timeout_playback(aaf); + if (res < 0) + return res;
- *revents = POLLIN; + *revents = POLLIN; + } + } else { + /* TODO: Implement Capture mode support. */ + return -ENOTSUP; }
return 0; @@ -727,9 +751,11 @@ static int aaf_start(snd_pcm_ioplug_t *io) if (res < 0) goto err_close_sk;
- res = aaf_mclk_start_playback(aaf); - if (res < 0) - goto err_close_timer; + if (io->stream == SND_PCM_STREAM_PLAYBACK) { + res = aaf_mclk_start_playback(aaf); + if (res < 0) + goto err_close_timer; + }
return 0;
@@ -757,12 +783,18 @@ static snd_pcm_sframes_t aaf_transfer(snd_pcm_ioplug_t *io, int res; snd_pcm_aaf_t *aaf = io->private_data;
- res = snd_pcm_areas_copy_wrap(aaf->audiobuf_areas, - (io->appl_ptr % aaf->buffer_size), - aaf->buffer_size, areas, offset, size, - io->channels, size, io->format); - if (res < 0) - return res; + if (io->stream == SND_PCM_STREAM_PLAYBACK) { + res = snd_pcm_areas_copy_wrap(aaf->audiobuf_areas, + (io->appl_ptr % aaf->buffer_size), + aaf->buffer_size, areas, offset, + size, io->channels, size, + io->format); + if (res < 0) + return res; + } else { + /* TODO: Implement Capture mode support. */ + return -ENOTSUP; + }
return size; } @@ -787,12 +819,6 @@ SND_PCM_PLUGIN_DEFINE_FUNC(aaf) snd_pcm_aaf_t *aaf; int res;
- /* For now the plugin only supports Playback mode i.e. AAF Talker - * functionality. - */ - if (stream != SND_PCM_STREAM_PLAYBACK) - return -EINVAL; - aaf = calloc(1, sizeof(*aaf)); if (!aaf) { SNDERR("Failed to allocate memory");
This patch implements capture mode support in the AAF plugin. Simply put, this mode works as follows: AVTP packets are received from the network, and the PCM samples are retrieved and presented to the alsa-lib layer at the presentation time. In summary, the capture mode implements a typical AVTP Listener.
Once the AAF device is put in running state, packet reception is started. Every time an AVTP packet is received the plugin checks if it is valid (according to the stream configuration provided by the user) and copies the PCM samples to the audio buffer. Note that at this moment, the samples are not presented to the alsa-lib layer (i.e. hw_ptr is not updated).
When the first AVTP packet is received, the media clock is set to start at the presentation time from that packet. At every tick from the media clock, PCM samples are presented to the alsa-lib layer.
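Condensed from the aaf_mclk_start_capture() hunk below: the avtp_timestamp field carries only the low 32 bits of the talker's presentation time (which wrap every 2^32 ns, roughly every 4.29 s), so the listener recovers the full value like this (the helper name here is mine):

/* Illustration of the timestamp recovery in aaf_mclk_start_capture():
 * graft the 32 bits carried by the packet onto the upper 32 bits of the
 * listener's current clock reading.
 */
#include <stdint.h>

static uint64_t recover_ptime(uint64_t now_ns, uint32_t avtp_timestamp)
{
	uint64_t ptime = (now_ns & 0xFFFFFFFF00000000ULL) | avtp_timestamp;

	/* If that lands in the past, the low 32 bits wrapped after the talker
	 * stamped the packet; bump the upper half to recover its intent.
	 */
	if (ptime < now_ns)
		ptime += (1ULL << 32);

	return ptime;
}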
Signed-off-by: Andre Guedes andre.guedes@intel.com --- aaf/pcm_aaf.c | 358 ++++++++++++++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 338 insertions(+), 20 deletions(-)
diff --git a/aaf/pcm_aaf.c b/aaf/pcm_aaf.c index e098247..b5bee57 100644 --- a/aaf/pcm_aaf.c +++ b/aaf/pcm_aaf.c @@ -28,6 +28,7 @@ #include <linux/if_ether.h> #include <linux/if_packet.h> #include <string.h> +#include <stdbool.h> #include <stdint.h> #include <sys/ioctl.h> #include <sys/timerfd.h> @@ -43,6 +44,7 @@ #define NSEC_PER_USEC 1000ULL
#define FD_COUNT_PLAYBACK 1 +#define FD_COUNT_CAPTURE 2
typedef struct { snd_pcm_ioplug_t io; @@ -70,10 +72,13 @@ typedef struct { uint64_t mclk_period; uint64_t mclk_ticks;
+ uint64_t pkt_count; + snd_pcm_channel_area_t *audiobuf_areas; snd_pcm_channel_area_t *payload_areas;
snd_pcm_sframes_t hw_ptr; + snd_pcm_sframes_t hw_virt_ptr; snd_pcm_sframes_t buffer_size; snd_pcm_sframes_t boundary; } snd_pcm_aaf_t; @@ -122,6 +127,106 @@ static unsigned int alsa_to_avtp_rate(unsigned int rate) } }
+static bool is_pdu_valid(struct avtp_stream_pdu *pdu, uint64_t streamid, + unsigned int data_len, unsigned int format, + unsigned int nsr, unsigned int channels, + unsigned int depth) +{ + int res; + uint64_t val64; + uint32_t val32; + struct avtp_common_pdu *common = (struct avtp_common_pdu *) pdu; + + res = avtp_pdu_get(common, AVTP_FIELD_SUBTYPE, &val32); + if (res < 0) + return false; + if (val32 != AVTP_SUBTYPE_AAF) { + pr_debug("Subtype mismatch: expected %u, got %u", + AVTP_SUBTYPE_AAF, val32); + return false; + } + + res = avtp_pdu_get(common, AVTP_FIELD_VERSION, &val32); + if (res < 0) + return false; + if (val32 != 0) { + pr_debug("Version mismatch: expected %u, got %u", 0, val32); + return false; + } + + res = avtp_aaf_pdu_get(pdu, AVTP_AAF_FIELD_STREAM_ID, &val64); + if (res < 0) + return false; + if (val64 != streamid) { + pr_debug("Streamid mismatch: expected %lu, got %lu", streamid, + val64); + return false; + } + + res = avtp_aaf_pdu_get(pdu, AVTP_AAF_FIELD_TV, &val64); + if (res < 0) + return false; + if (val64 != 1) { + pr_debug("TV mismatch: expected %u, got %lu", 1, val64); + return false; + } + + res = avtp_aaf_pdu_get(pdu, AVTP_AAF_FIELD_SP, &val64); + if (res < 0) + return false; + if (val64 != AVTP_AAF_PCM_SP_NORMAL) { + pr_debug("SP mismatch: expected %u, got %lu", + AVTP_AAF_PCM_SP_NORMAL, val64); + return false; + } + + res = avtp_aaf_pdu_get(pdu, AVTP_AAF_FIELD_FORMAT, &val64); + if (res < 0) + return false; + if (val64 != format) { + pr_debug("Format mismatch: expected %u, got %lu", format, + val64); + return false; + } + + res = avtp_aaf_pdu_get(pdu, AVTP_AAF_FIELD_NSR, &val64); + if (res < 0) + return false; + if (val64 != nsr) { + pr_debug("NSR mismatch: expected %u, got %lu", nsr, val64); + return false; + } + + res = avtp_aaf_pdu_get(pdu, AVTP_AAF_FIELD_CHAN_PER_FRAME, &val64); + if (res < 0) + return false; + if (val64 != channels) { + pr_debug("Channels mismatch: expected %u, got %lu", channels, + val64); + return false; + } + + res = avtp_aaf_pdu_get(pdu, AVTP_AAF_FIELD_BIT_DEPTH, &val64); + if (res < 0) + return false; + if (val64 != depth) { + pr_debug("Bit depth mismatch: expected %u, got %lu", depth, + val64); + return false; + } + + res = avtp_aaf_pdu_get(pdu, AVTP_AAF_FIELD_STREAM_DATA_LEN, &val64); + if (res < 0) + return false; + if (val64 != data_len) { + pr_debug("Data len mismatch: expected %u, got %lu", + data_len, val64); + return false; + } + + return true; +} + static int aaf_load_config(snd_pcm_aaf_t *aaf, snd_config_t *conf) { snd_config_iterator_t cur, next; @@ -265,8 +370,27 @@ static int aaf_init_socket(snd_pcm_aaf_t *aaf) goto err; } } else { - /* TODO: Implement Capture mode support. */ - return -ENOTSUP; + struct packet_mreq mreq; + + res = bind(fd, (struct sockaddr *) &aaf->sk_addr, + sizeof(aaf->sk_addr)); + if (res < 0) { + SNDERR("Failed to bind socket"); + res = -errno; + goto err; + } + + mreq.mr_ifindex = req.ifr_ifindex; + mreq.mr_type = PACKET_MR_MULTICAST; + mreq.mr_alen = ETH_ALEN; + memcpy(&mreq.mr_address, aaf->addr, ETH_ALEN); + res = setsockopt(fd, SOL_PACKET, PACKET_ADD_MEMBERSHIP, + &mreq, sizeof(struct packet_mreq)); + if (res < 0) { + SNDERR("Failed to add multicast address"); + res = -errno; + goto err; + } }
aaf->sk_fd = fd; @@ -417,12 +541,13 @@ static int aaf_init_areas(snd_pcm_aaf_t *aaf) return 0; }
-static void aaf_inc_hw_ptr(snd_pcm_aaf_t *aaf, snd_pcm_sframes_t val) +static void aaf_inc_ptr(snd_pcm_sframes_t *ptr, snd_pcm_sframes_t val, + snd_pcm_sframes_t boundary) { - aaf->hw_ptr += val; + *ptr += val;
- if (aaf->hw_ptr >= aaf->boundary) - aaf->hw_ptr -= aaf->boundary; + if (*ptr > boundary) + *ptr -= boundary; }
static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf) @@ -454,6 +579,52 @@ static int aaf_mclk_start_playback(snd_pcm_aaf_t *aaf) return 0; }
+static int aaf_mclk_start_capture(snd_pcm_aaf_t *aaf, uint32_t avtp_time) +{ + int res; + struct timespec tspec; + struct itimerspec itspec; + uint64_t ptime, now; + snd_pcm_ioplug_t *io = &aaf->io; + + res = clock_gettime(CLOCK_REF, &tspec); + if (res < 0) { + SNDERR("Failed to get time from clock"); + return -errno; + } + + now = (tspec.tv_sec * NSEC_PER_SEC) + tspec.tv_nsec; + + /* The avtp_timestamp within AAF packet is the lower part (32 + * less-significant bits) from presentation time calculated by the + * talker. + */ + ptime = (now & 0xFFFFFFFF00000000ULL) | avtp_time; + + /* If 'ptime' is less than the 'now', it means the higher part + * from 'ptime' needs to be incremented by 1 in order to recover the + * presentation time set by the talker. + */ + if (ptime < now) + ptime += (1ULL << 32); + + aaf->mclk_period = (NSEC_PER_SEC * aaf->frames_per_pkt) / io->rate; + aaf->mclk_ticks = 0; + aaf->mclk_start_time = ptime; + + itspec.it_value.tv_sec = aaf->mclk_start_time / NSEC_PER_SEC; + itspec.it_value.tv_nsec = aaf->mclk_start_time % NSEC_PER_SEC; + itspec.it_interval.tv_sec = 0; + itspec.it_interval.tv_nsec = aaf->mclk_period; + res = timerfd_settime(aaf->timer_fd, TFD_TIMER_ABSTIME, &itspec, NULL); + if (res < 0) { + SNDERR("Failed to set timer"); + return -errno; + } + + return 0; +} + static int aaf_mclk_reset(snd_pcm_aaf_t *aaf) { aaf->mclk_start_time = 0; @@ -500,7 +671,7 @@ static int aaf_tx_pdu(snd_pcm_aaf_t *aaf) * drop them. This behavior is suggested by IEEE 1722-2016 * spec, section 7.3.5. */ - aaf_inc_hw_ptr(aaf, n); + aaf_inc_ptr(&aaf->hw_ptr, n, aaf->boundary); return 0; }
@@ -532,7 +703,94 @@ static int aaf_tx_pdu(snd_pcm_aaf_t *aaf) return -EIO; }
- aaf_inc_hw_ptr(aaf, aaf->frames_per_pkt); + aaf_inc_ptr(&aaf->hw_ptr, aaf->frames_per_pkt, aaf->boundary); + return 0; +} + +static int aaf_rx_pdu(snd_pcm_aaf_t *aaf) +{ + int res; + ssize_t n; + uint64_t seq, avtp_time; + snd_pcm_sframes_t len; + snd_pcm_ioplug_t *io = &aaf->io; + snd_pcm_t *pcm = io->pcm; + + n = recv(aaf->sk_fd, aaf->pdu, aaf->pdu_size, 0); + if (n < 0 || n != aaf->pdu_size) { + SNDERR("Failed to receive data"); + return -EIO; + } + + if (io->state == SND_PCM_STATE_DRAINING) { + /* If device is in DRAIN state, we shouldn't copy any more data + * to audio buffer. So we are done here. + */ + return 0; + } + + if (!is_pdu_valid(aaf->pdu, aaf->streamid, + snd_pcm_frames_to_bytes(pcm, aaf->frames_per_pkt), + alsa_to_avtp_format(io->format), + alsa_to_avtp_rate(io->rate), + io->channels, snd_pcm_format_width(io->format))) + return 0; + + res = avtp_aaf_pdu_get(aaf->pdu, AVTP_AAF_FIELD_SEQ_NUM, &seq); + if (res < 0) + return res; + if (seq != aaf->pdu_seq) { + pr_debug("Sequence mismatch: expected %u, got %lu", + aaf->pdu_seq, seq); + aaf->pdu_seq = seq; + } + aaf->pdu_seq++; + + res = avtp_aaf_pdu_get(aaf->pdu, AVTP_AAF_FIELD_TIMESTAMP, &avtp_time); + if (res < 0) + return res; + + if (aaf->mclk_start_time == 0) { + res = aaf_mclk_start_capture(aaf, avtp_time); + if (res < 0) + return res; + } else { + uint64_t ptime = aaf->mclk_start_time + aaf->mclk_period * + aaf->pkt_count; + + if (avtp_time != ptime % (1ULL << 32)) { + pr_debug("Packet dropped: PT not expected"); + return 0; + } + if (ptime < aaf_mclk_gettime(aaf)) { + pr_debug("Packet dropped: PT in the past"); + return 0; + } + } + + res = snd_pcm_areas_copy_wrap(aaf->audiobuf_areas, + aaf->hw_virt_ptr % aaf->buffer_size, + aaf->buffer_size, aaf->payload_areas, + 0, aaf->frames_per_pkt, io->channels, + aaf->frames_per_pkt, io->format); + if (res < 0) { + SNDERR("Failed to copy data from AVTP payload"); + return res; + } + + aaf_inc_ptr(&aaf->hw_virt_ptr, aaf->frames_per_pkt, aaf->boundary); + + len = aaf->hw_virt_ptr - io->appl_ptr; + if (len < 0) + len += aaf->boundary; + if (len > aaf->buffer_size) { + /* If the distance between hw virtual pointer and application + * pointer is greater than the buffer size, it means we had an + * overrun error so -EPIPE is returned. + */ + return -EPIPE; + } + return 0; }
@@ -561,6 +819,42 @@ static int aaf_mclk_timeout_playback(snd_pcm_aaf_t *aaf) return 0; }
+static int aaf_mclk_timeout_capture(snd_pcm_aaf_t *aaf) +{ + ssize_t n; + uint64_t expirations; + + n = read(aaf->timer_fd, &expirations, sizeof(uint64_t)); + if (n < 0) { + SNDERR("Failed to read() timer"); + return -errno; + } + + if (expirations != 1) + pr_debug("Missed %llu presentation time(s) ", expirations - 1); + + while (expirations--) { + snd_pcm_sframes_t len; + + aaf_inc_ptr(&aaf->hw_ptr, aaf->frames_per_pkt, aaf->boundary); + + len = aaf->hw_virt_ptr - aaf->hw_ptr; + if (len < 0) + len += aaf->boundary; + if (len > aaf->buffer_size) { + /* If the distance between hw virtual pointer and hw + * pointer is greater than the buffer size, it means we + * had an overrun error so -EPIPE is returned. + */ + return -EPIPE; + } + + aaf->mclk_ticks++; + } + + return 0; +} + static int aaf_close(snd_pcm_ioplug_t *io) { snd_pcm_aaf_t *aaf = io->private_data; @@ -678,7 +972,7 @@ static int aaf_poll_descriptors_count(snd_pcm_ioplug_t *io ATTRIBUTE_UNUSED) if (io->stream == SND_PCM_STREAM_PLAYBACK) return FD_COUNT_PLAYBACK; else - return -ENOTSUP; + return FD_COUNT_CAPTURE; }
static int aaf_poll_descriptors(snd_pcm_ioplug_t *io, struct pollfd *pfd, @@ -693,8 +987,13 @@ static int aaf_poll_descriptors(snd_pcm_ioplug_t *io, struct pollfd *pfd, pfd[0].fd = aaf->timer_fd; pfd[0].events = POLLIN; } else { - /* TODO: Implement Capture mode support. */ - return -ENOTSUP; + if (space != FD_COUNT_CAPTURE) + return -EINVAL; + + pfd[0].fd = aaf->timer_fd; + pfd[0].events = POLLIN; + pfd[1].fd = aaf->sk_fd; + pfd[1].events = POLLIN; }
return space; @@ -718,8 +1017,23 @@ static int aaf_poll_revents(snd_pcm_ioplug_t *io, struct pollfd *pfd, *revents = POLLIN; } } else { - /* TODO: Implement Capture mode support. */ - return -ENOTSUP; + if (nfds != FD_COUNT_CAPTURE) + return -EINVAL; + + if (pfd[0].revents & POLLIN) { + res = aaf_mclk_timeout_capture(aaf); + if (res < 0) + return res; + + *revents = POLLIN; + } + + if (pfd[1].revents & POLLIN) { + res = aaf_rx_pdu(aaf); + if (res < 0) + return res; + aaf->pkt_count++; + } }
return 0; @@ -731,6 +1045,8 @@ static int aaf_prepare(snd_pcm_ioplug_t *io) snd_pcm_aaf_t *aaf = io->private_data;
aaf->hw_ptr = 0; + aaf->hw_virt_ptr = 0; + aaf->pkt_count = 0; res = aaf_mclk_reset(aaf); if (res < 0) return res; @@ -783,18 +1099,20 @@ static snd_pcm_sframes_t aaf_transfer(snd_pcm_ioplug_t *io, int res; snd_pcm_aaf_t *aaf = io->private_data;
- if (io->stream == SND_PCM_STREAM_PLAYBACK) { + if (io->stream == SND_PCM_STREAM_PLAYBACK) res = snd_pcm_areas_copy_wrap(aaf->audiobuf_areas, (io->appl_ptr % aaf->buffer_size), aaf->buffer_size, areas, offset, size, io->channels, size, io->format); - if (res < 0) - return res; - } else { - /* TODO: Implement Capture mode support. */ - return -ENOTSUP; - } + else + res = snd_pcm_areas_copy_wrap(areas, offset, (offset + size), + aaf->audiobuf_areas, + (io->appl_ptr % aaf->buffer_size), + aaf->buffer_size, io->channels, + size, io->format); + if (res < 0) + return res;
return size; }
Hi,
On Aug 21 2018 10:06, Andre Guedes wrote:
This patch implements capture mode support in the AAF plugin. Simply put, this mode works as follows: AVTP packets are received from the network, and the PCM samples are retrieved and presented to the alsa-lib layer at the presentation time. In summary, the capture mode implements a typical AVTP Listener.
Once the AAF device is put in running state, packet reception is started. Every time an AVTP packet is received the plugin checks if it is valid (according to the stream configuration provided by the user) and copies the PCM samples to the audio buffer. Note that at this moment, the samples are not presented to the alsa-lib layer (i.e. hw_ptr is not updated).
When the first AVTP packet is received, the media clock is set to start at the presentation time from that packet. At every tick from the media clock, PCM samples are presented to the alsa-lib layer.
Signed-off-by: Andre Guedes andre.guedes@intel.com
aaf/pcm_aaf.c | 358 ++++++++++++++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 338 insertions(+), 20 deletions(-)
diff --git a/aaf/pcm_aaf.c b/aaf/pcm_aaf.c index e098247..b5bee57 100644 --- a/aaf/pcm_aaf.c +++ b/aaf/pcm_aaf.c ... @@ -561,6 +819,42 @@ static int aaf_mclk_timeout_playback(snd_pcm_aaf_t *aaf) return 0; }
+static int aaf_mclk_timeout_capture(snd_pcm_aaf_t *aaf)
+{
+	ssize_t n;
+	uint64_t expirations;
+
+	n = read(aaf->timer_fd, &expirations, sizeof(uint64_t));
+	if (n < 0) {
+		SNDERR("Failed to read() timer");
+		return -errno;
+	}
+
+	if (expirations != 1)
+		pr_debug("Missed %llu presentation time(s) ", expirations - 1);
+
+	while (expirations--) {
+		snd_pcm_sframes_t len;
+
+		aaf_inc_ptr(&aaf->hw_ptr, aaf->frames_per_pkt, aaf->boundary);
+
+		len = aaf->hw_virt_ptr - aaf->hw_ptr;
+		if (len < 0)
+			len += aaf->boundary;
+		if (len > aaf->buffer_size) {
+			/* If the distance between hw virtual pointer and hw
+			 * pointer is greater than the buffer size, it means we
+			 * had an overrun error so -EPIPE is returned.
+			 */
+			return -EPIPE;
+		}
+
+		aaf->mclk_ticks++;
+	}
+
+	return 0;
+}
In your code, -EPIPE can be returned from a call to 'snd_pcm_poll_revents()'. In this case, typical applications follow the recovery process below:
->snd_pcm_poll_revents()
->snd_pcm_prepare()
->snd_pcm_start()
In this case, the status of the corresponding PCM substream is reset (see 'snd_pcm_ioplug_prepare()'). In my opinion, the backend is then expected to stop data streaming, re-initialize the hardware, and restart data streaming. In your driver, timer_fd/sk_fd are leaked. It's better to release them by checking the status of the PCM substream in the .prepare callback.
Regards
Takashi Sakamoto
Hi Takashi,
On Tue, 2018-08-21 at 14:17 +0900, Takashi Sakamoto wrote:
+static int aaf_mclk_timeout_capture(snd_pcm_aaf_t *aaf)
+{
+	ssize_t n;
+	uint64_t expirations;
+
+	n = read(aaf->timer_fd, &expirations, sizeof(uint64_t));
+	if (n < 0) {
+		SNDERR("Failed to read() timer");
+		return -errno;
+	}
+
+	if (expirations != 1)
+		pr_debug("Missed %llu presentation time(s) ", expirations - 1);
+
+	while (expirations--) {
+		snd_pcm_sframes_t len;
+
+		aaf_inc_ptr(&aaf->hw_ptr, aaf->frames_per_pkt, aaf->boundary);
+
+		len = aaf->hw_virt_ptr - aaf->hw_ptr;
+		if (len < 0)
+			len += aaf->boundary;
+		if (len > aaf->buffer_size) {
+			/* If the distance between hw virtual pointer and hw
+			 * pointer is greater than the buffer size, it means we
+			 * had an overrun error so -EPIPE is returned.
+			 */
+			return -EPIPE;
+		}
+
+		aaf->mclk_ticks++;
+	}
+
+	return 0;
+}
In your code, -EPIPE can be returned from a call to 'snd_pcm_poll_revents()'. In this case, typical applications follow the recovery process below:
->snd_pcm_poll_revents()
->snd_pcm_prepare()
->snd_pcm_start()
In this case, the status of the corresponding PCM substream is reset (see 'snd_pcm_ioplug_prepare()'). In my opinion, the backend is then expected to stop data streaming, re-initialize the hardware, and restart data streaming. In your driver, timer_fd/sk_fd are leaked. It's better to release them by checking the status of the PCM substream in the .prepare callback.
I think you're right. I'll fix it.
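Just to sketch one possible shape of that fix (my assumption, not the eventual patch): close the descriptors in .prepare when re-preparing after an xrun, so aaf_start() can recreate them.

/* Sketch only: release timer_fd/sk_fd in the .prepare callback so the
 * snd_pcm_prepare()/snd_pcm_start() recovery path doesn't leak them.
 */
static int aaf_prepare(snd_pcm_ioplug_t *io)
{
	snd_pcm_aaf_t *aaf = io->private_data;

	/* Re-preparing after an xrun: drop the old descriptors; aaf_start()
	 * will create fresh ones.
	 */
	if (aaf->timer_fd >= 0) {
		close(aaf->timer_fd);
		aaf->timer_fd = -1;
	}
	if (aaf->sk_fd >= 0) {
		close(aaf->sk_fd);
		aaf->sk_fd = -1;
	}

	aaf->hw_ptr = 0;
	aaf->hw_virt_ptr = 0;
	aaf->pkt_count = 0;

	return aaf_mclk_reset(aaf);
}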
Thank you,
Andre