Andreas Hildebrand starts by introducing SMPTE ST 2110 and how it works in terms of sending the essences separately using multicast IP. This talk focuses on the ability of audio-only devices to subscribe to the audio streams without needing the video streams. Andreas then goes on to introduce AES67, a standard for audio interoperability covering timing, session description, encoding, QoS, transport and much more. Of all the things defined in AES67, discovery was deliberately left out, and Andreas explains why.
Within SMPTE ST 2110, constraints are added on top of AES67 by the sub-standard ST 2110-30. The different conformance categories A, B and C (and their X counterparts) are explained in terms of how many audio channels they allow and their packet lengths, with the implications of each detailed.
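The practical consequence of those categories is the size of each RTP packet's audio payload. As a rough sketch (the level definitions below are assumptions based on common descriptions of ST 2110-30, not quoted from the talk), the arithmetic looks like this for 24-bit linear PCM:

```python
# Sketch: RTP payload sizes implied by ST 2110-30 conformance levels.
# Assumed level definitions: A = 48 kHz / 1 ms packets,
# B = 48 kHz / 125 us packets, C = 96 kHz / 125 us packets.

def l24_payload_bytes(sample_rate_hz, packet_time_s, channels):
    """Payload size in bytes for 24-bit linear PCM (L24) audio."""
    samples_per_packet = round(sample_rate_hz * packet_time_s)
    return samples_per_packet * channels * 3  # 3 bytes per 24-bit sample

# Level A, 8 channels: 48 samples/packet -> 48 * 8 * 3 = 1152 bytes
print(l24_payload_bytes(48_000, 0.001, 8))     # 1152
# Level B, 8 channels: 6 samples/packet -> 144 bytes
print(l24_payload_bytes(48_000, 0.000125, 8))  # 144
# Level C, 8 channels: 12 samples/packet at 96 kHz -> 288 bytes
print(l24_payload_bytes(96_000, 0.000125, 8))  # 288
```

The shorter 125 µs packets keep latency low at the cost of far more packets per second, which is part of the trade-off the categories encode.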
As for discovery and other aspects of creating a working system, Andreas looks towards AMWA’s NMOS suite, summarising the specifications for Discovery & Registration, Connection Management, Network Control, Event & Tally, and Audio Channel Mapping. It’s the latter which is the focus of the last part of this talk.
IS-08 defines input and output blocks, allowing a channel mapping to be specified. Using IS-05, we can determine which source stream should connect to which destination device. IS-08 then gives the capability to determine which of the audio channels within that stream can be mapped to the output(s) of the receiving device and, on top of this, allows mapping from multiple received streams into the output(s) of one device. The talk finishes with a deeper look at this process, including where example code can be found.
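To make the idea concrete, here is an illustrative sketch of the kind of mapping IS-08 enables: output channels of one receiving device fed from channels of several received streams. The dictionary layout and names below are invented for illustration and are not the actual IS-08 JSON schema.

```python
# Illustrative channel map: each output channel of a device is fed from a
# (received stream, channel) pair. Names here are hypothetical, not IS-08's
# real schema.

channel_map = {
    # output block "monitor-out", two channels
    "monitor-out": {
        0: {"input": "stream-A", "channel": 0},  # left taken from stream A
        1: {"input": "stream-B", "channel": 3},  # right from a different stream
    },
}

def resolve(output_block, channel):
    """Look up which received stream/channel feeds a given output channel."""
    entry = channel_map[output_block][channel]
    return entry["input"], entry["channel"]

print(resolve("monitor-out", 1))  # ('stream-B', 3)
```

The key point the talk makes is exactly this last property: one device's outputs can draw from multiple received streams, not just one.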
AES67 is a flexible standard but with this there is complexity and nuance. Implementing it within ST 2110-30 takes some care and this talk covers lessons learnt in doing exactly that.
AES67 is a standard defined by the Audio Engineering Society to enable high-performance audio-over-IP streaming interoperability between various AoIP systems like Dante, WheatNet-IP and Livewire. It provides comprehensive interoperability recommendations in the areas of synchronization, media clock identification, network transport, encoding and streaming, session description, and connection management.
The SMPTE ST 2110 standards suite makes it possible to separately route and break away the essence streams – audio, video, and ancillary data. ST 2110-30 addresses system requirements and payload formats for uncompressed audio streams, referencing a subset of the AES67 standard.
In this video Dominic Giambo from Wheatstone Corporation discusses tips for implementing the AES67 and ST 2110-30 standards in a lab environment consisting of over 160 devices (consoles, surfaces, hardware and software I/O blades) and 3 different automation systems. The aim of the test was to pass audio through every single device, creating a very long chain, to detect any defects.
The following topics are covered:
SMPTE ST 2110-30 as a subset of AES67 (support of the PTP profile defined in SMPTE ST 2059-2, an offset value of zero between the media clock and the RTP stream clock, option to force a device to operate in PTP slave-only mode)
The importance of using an IEEE 1588 PTP v2 master clock for accuracy
Packet structure (UDP and RTP header, payload type)
Network configuration considerations (mapping out IP and multicast addresses for different vendors, keeping all devices on the same subnet)
Discovery and control (SDP stream description files, configuration of signal flow from sources to destinations)
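Since the packet structure topic centres on the RTP header and payload type, a small sketch may help. The code below packs and unpacks the fixed 12-byte RTP header; payload type 97 is a commonly used dynamic value for L24 audio, but the actual value is signalled in the SDP file rather than fixed by the standard.

```python
import struct

# Sketch: the fixed 12-byte RTP header carried inside each UDP datagram.
# Field layout follows RFC 3550; payload type 97 is an assumed dynamic value.

def pack_rtp_header(payload_type, seq, timestamp, ssrc):
    v_p_x_cc = 0x80             # version 2, no padding/extension/CSRCs
    m_pt = payload_type & 0x7F  # marker bit clear
    return struct.pack("!BBHII", v_p_x_cc, m_pt, seq, timestamp, ssrc)

def unpack_rtp_header(data):
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {
        "version": b0 >> 6,
        "payload_type": b1 & 0x7F,
        "sequence": seq,
        "timestamp": ts,  # media clock ticks; ST 2110-30 requires a zero
                          # offset between media clock and RTP stream clock
        "ssrc": ssrc,
    }

hdr = unpack_rtp_header(pack_rtp_header(97, 1000, 480000, 0xDEADBEEF))
print(hdr["payload_type"], hdr["sequence"])  # 97 1000
```

The zero media-clock/RTP-clock offset mentioned above is what lets a receiver relate the RTP timestamp directly to PTP time.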
PTP and uncompressed video go hand in hand so this primer on ST 2022 and ST 2110 followed by a PTP deep dive is a great way to gain your footing in the uncompressed world.
In the longest video yet on The Broadcast Knowledge, Steve Holmes, on behalf of Tektronix, delivers two talks and a practical demo for the SMPTE San Francisco section. He introduces the reasons for, and solutions to, uncompressed video, then goes through the key standards and technologies: from ST 2022, the -6 video and -7 seamless switching parts, plus the major parts of ST 2110 – timing, video, audio and metadata.
After that, at the 47-minute mark, Steve introduces the need for PTP by reference to black and burst, and goes on to explain how SMPTE’s ST 2059 brings PTP into the broadcast domain and helps us synchronise uncompressed essences. He covers how PTP actually works, boundary clocks, Grandmaster/Master/Slave clocks and everything else you need to understand the system.
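At the heart of how PTP "actually works" is a simple piece of arithmetic from the delay request-response exchange. A minimal sketch, with assumed example timestamps:

```python
# Sketch of PTP's offset/delay calculation. t1..t4 are the four timestamps:
# Sync sent by master (t1), Sync received by slave (t2),
# Delay_Req sent by slave (t3), Delay_Req received by master (t4).

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2          # slave clock minus master
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2  # assumes a symmetric path
    return offset, mean_path_delay

# Assumed example: slave runs 5 us ahead, symmetric 10 us network delay.
t1 = 0.0
t2 = t1 + 10e-6 + 5e-6   # path delay plus slave offset (slave timescale)
t3 = t2 + 100e-6         # slave sends Delay_Req a little later
t4 = t3 + 10e-6 - 5e-6   # path delay minus slave offset (master timescale)

offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(round(offset * 1e6), round(delay * 1e6))  # 5 10 (microseconds)
```

The symmetric-path assumption is why boundary clocks matter: they keep the measured path short and predictable so the delay estimate holds.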
This video finishes with plenty of questions plus a look at the GUI of measurement equipment showing PTP in real life.
With the SMPTE ST 2110 suite of standards largely published and the related AMWA IS-04 and IS-05 specifications stable, people’s minds are turning to how to implement all these standards, bringing them together into a complete working system.
JT-NM TR-1001-1 is a technical recommendation document which describes how such a system should work in practice – for instance, how do new devices on the network start up? How do they know what PTP domain is in use on the network?
John Mailhot starts by giving an overview of the standards and documents available, showing which ones are published and which are still in progress. He then looks at each of them in turn to summarise its use on the network and how it fits in to the system as a whole.
Once the groundwork is laid, we see how the JT-NM working group have looked at 5 major behaviours and what they have recommended for making them work in a scalable way. These cover things like DNS discovery, automated multicast address allocation and other considerations.