Video: Introduction To AES67 & SMPTE ST 2110

While standardisation of video and audio over IP is welcome, it does leave us with a plethora of standards numbers, and interoperability edge cases, to keep track of. The audio-over-IP standard AES67 is part of the SMPTE ST 2110 standards suite and was born largely from RAVENNA, which is still in use in its own right. It’s with this backdrop that Andreas Hildebrand from ALC NetworX, which has been developing RAVENNA for 10 years now, takes the mic to explain how this all fits together. Whilst there are many technologies at play, this webinar focuses on AES67 and ST 2110.

Andreas explains how AES67 started out of a plan to unite the many proprietary audio-over-IP formats. Synchronisation, for instance, is based on PTP, just as it is in ST 2110 as we’ll see later. Andreas gives an overview of this synchronisation and then shows how the group looked at each of the OSI layers and defined a technology that could serve everyone. RTP, the Real-time Transport Protocol, has long been used to transport video and audio, making it a natural choice for the transport layer. Andreas highlights the important timing information in the RTP headers and how streams can be delivered by unicast or IGMP multicast.
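The timing information Andreas mentions lives in the fixed 12-byte RTP header defined in RFC 3550: a 16-bit sequence number and a 32-bit timestamp counted in media-clock ticks (48 kHz for a typical AES67 stream). As a rough illustration, not anything from the talk itself, the header can be unpacked like this:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # always 2 for current RTP
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,     # dynamic PTs (96-127) are typical for AES67
        "sequence": seq,               # for detecting loss and reordering
        "timestamp": timestamp,        # media-clock ticks, e.g. 48 kHz samples
        "ssrc": ssrc,                  # identifies the stream source
    }
```

Receivers use the timestamp, referenced to the shared PTP clock, to align audio from different senders sample-accurately.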

As for the audio itself, standard PCM is the format of choice. Andreas details the different options available, such as 24-bit audio with 8 channels and 48 samples per packet. By varying these parameters, we can increase the sample rate to 96kHz or change the number of audio channels. To signal all of this format information, Session Description Protocol (SDP) messages are exchanged: small text documents, defined in RFC 4566, outlining the format of the audio to come. For a deeper introduction to IP basics and these topics, have a look at Ed Calverley’s talk.
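To make this concrete, an SDP description for the kind of stream Andreas describes (24-bit, 48kHz, 8 channels, 1ms packets) might look something like the sketch below. The addresses and clock identity are invented for illustration; the `ts-refclk` and `mediaclk` attributes come from RFC 7273 and tie the stream to its PTP reference:

```
v=0
o=- 1311738121 1311738121 IN IP4 192.168.1.10
s=Example AES67 stream
c=IN IP4 239.69.1.10/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/8
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-11-22-FF-FE-33-44-55:0
a=mediaclk:direct=0
```

The `L24/48000/8` line encodes the bit depth, sample rate and channel count, while `ptime:1` signals 1ms of audio (48 samples at 48kHz) per packet.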

The second half of the video is an introduction to ST 2110. A deeper dive can be found elsewhere on the site from Wes Simpson.
Andreas starts from the basis of ST 2022-6 showing how that was an SDI-based format where all the audio, video and metadata were combined together. ST 2110 brings the splitting of media, known as ‘essences’, which allows them to follow separate workflows without requiring lots of de-embedding and embedding processes.

Like many modern standards (ATSC 3.0 is another example), SMPTE ST 2110 is a suite of many standards documents. Andreas takes the time to explain each one, including those currently being worked on. The first is ST 2110-10, which defines the use of PTP for timing and synchronisation. This uses SMPTE ST 2059 to relate PTP time to the phase of media essences.
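The relationship ST 2059 establishes can be sketched numerically: because every media clock is treated as phase-aligned to the PTP epoch, any device can compute the RTP timestamp a stream should carry at a given PTP time. A simplified illustration (ignoring the fractional-frame-rate subtleties the real standard handles):

```python
def rtp_timestamp_at(ptp_seconds: float, media_rate_hz: int) -> int:
    """RTP timestamp implied by a PTP time, assuming the media clock
    is phase-aligned to the epoch as per SMPTE ST 2059.
    The 32-bit RTP timestamp wraps, hence the modulo."""
    return int(ptp_seconds * media_rate_hz) % (1 << 32)
```

This is why two senders that have never exchanged a message can still produce streams a receiver can align: both derive their timestamps from the same shared PTP timebase.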

2110-20 is up next: the main standard defining the use of uncompressed video, with headline features including being raster and resolution agnostic and supporting a range of colour sampling schemes. 2110-21 defines traffic shaping. Andreas takes time to explain why traffic shaping is necessary and what Narrow, Narrow-Linear and Wide mean in terms of packet timing. Finishing the video theme, 2110-22 defines the carriage of mezzanine-compressed video. Intended for codecs such as TICO and JPEG XS, which offer light, fast compression, this is the first time that compressed media has entered the 2110 suite.
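The core idea behind the shaping profiles is how evenly a sender paces its packets across the frame period. As a simplified back-of-the-envelope sketch (not the normative 2110-21 leaky-bucket model, which also accounts for blanking and a drain factor), a linear sender's ideal inter-packet spacing is just the frame period divided by the packet count:

```python
def linear_packet_spacing_ns(frame_rate_hz: float, packets_per_frame: int) -> float:
    """Ideal inter-packet gap for a sender that paces packets evenly
    across the whole frame period (the 'Linear' idea, simplified)."""
    frame_period_ns = 1e9 / frame_rate_hz
    return frame_period_ns / packets_per_frame
```

A Wide sender is allowed to burst far more than this ideal spacing implies, which is why receivers and switches need bigger buffers to absorb it; Narrow senders must stay close to it.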

2110-30 marks the beginning of the audio standards, describing how AES67 can be used. As Andreas demonstrates, AES67 has some modes which are not compatible with 2110-30, so he spends time explaining the constraints and how to implement them. For more detail on this topic, check out his previous talk on the matter. 2110-31 introduces AES3 audio which, as in SDI, can carry not only PCM audio but also non-PCM audio such as Dolby E and Dolby D.

Finishing up the talk, we hear about 2110-40, which governs transport of ancillary metadata, and get a look at the standards still being written: 2110-23 (single video essence over multiple 2110-20 streams), 2110-24 (transport of SD signals) and 2110-41 (transport of extensible, dynamic metadata).

Watch now!
Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX GmbH

Video: Live Closed Captioning and Subtitling in SMPTE 2110 (update)

The SMPTE ST 2110-40 standard specifies the real-time, RTP transport of SMPTE ST 291-1 Ancillary Data packets. It allows creation of IP essence flows carrying the VANC data familiar to us from SDI (like AFD, closed captions or ad triggering), complementing the existing video and audio portions of the SMPTE ST 2110 suite.

This presentation, by Bill McLaughlin from EEG, is an updated tutorial on subtitling, closed captioning, and other ancillary data workflows using the ST 2110-40 standard. Topics include synchronization, merging of data from different sources and standards conversion.

Building on Bill’s previous presentation at the IP Showcase, this talk at NAB 2019 demonstrates a big increase in the number of vendors supporting the ST 2110-40 standard. Previously, a generic packet analyser like Wireshark with a suitable dissector was the recommended way to troubleshoot IP ancillary data, but now most leading multiviewer and analyser products can display captioning, subtitling and timecode from 2110-40 streams. At the recent “JT-NM Tested Program” event, 29 products passed 2110-40 reception validation. Moreover, 27 products passed 2110-40 transmitter validation, which means that their output can be reconstructed into SDI video signals with appropriate timing and then decoded correctly.

Bill points out that ST 2110-40 is not really a new standard at this point; it only defines how to carry ancillary data from the traditional payloads over IP. Special care needs to be taken when different VANC data packets are concatenated in the IP domain. Many existing devices are simple ST 2110-40 receivers, which would require a kind of VANC funnel to create a combined stream of all the relevant ancillary data, making sure that line numbers and packet types don’t conflict, especially when signals need to be converted back to SDI.
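One practical detail when regenerating or merging VANC packets like this: each ST 291-1 packet carries a checksum over its DID, SDID, data count and user data words, which must be recomputed if anything changes. A minimal sketch, assuming the inputs are the usual 10-bit words:

```python
def st291_checksum(did: int, sdid: int, dc: int, udw: list[int]) -> int:
    """Checksum word for an ST 291-1 ancillary data packet (sketch).
    It is the 9-bit sum (b8..b0) of the DID, SDID, DC and user data
    words, with bit b9 set to the inverse of b8."""
    s = (did + sdid + dc + sum(udw)) & 0x1FF  # keep the 9 LSBs of the sum
    b8 = (s >> 8) & 1
    return s | ((b8 ^ 1) << 9)                # b9 = NOT b8
```

A receiver converting back to SDI would verify this word on each incoming packet and flag mismatches rather than silently re-embedding corrupt data.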

A new standard, ST 2110-41, is being developed for additional ancillary data which doesn’t match up with the ancillary data standardised in ST 291-1. Another idea discussed is to move away from the SDI VANC data format altogether and use a TTML track (Timed Text Markup Language: textual information associated with timing information) to carry ancillary information.

Watch now!

Download the slides.

Speakers

Bill McLaughlin
VP of Product Development
EEG