Video: Introduction to IPMX

The Broadcast Knowledge has documented over 100 videos and webinars on SMPTE ST 2110. It’s a great suite of standards, but it’s not always simple to implement. Many of its complications and nuances don’t arise in smaller systems, so a lot of the deeper dives into ST 2110 and its associated specifications, such as NMOS from AMWA, focus on the work done in large systems at tier-1 broadcasters such as the BBC, tpc and SVT’s FIS Skiing coverage.

ProAV, the professional end of the AV market, is a different world. Very few companies have a large AV department, if they have one at all, so the ProAV market needs technologies which are much more ‘plug and play’, particularly on the events side of the market. To date, the ProAV market has adopted IP technology successfully, with quick deployments, by using heavily proprietary solutions such as ZeeVee, SDVoE and NDI, to name a few. These achieve interoperability by having the same software or hardware in each and every implementation.

IPMX aims to change this by bringing together a mix of standards and open specifications: SMPTE ST 2110, the AMWA NMOS specs and AES standards. Any individual or company can gain access to these and develop a service or product that meets them.

Andreas gives a brief history of IP to date, outlining how AES67, ST 2110, ST 2059 and the NMOS IS specifications came about, his point being that the work is not yet done. ProAV has needs beyond, though complementary to, those of broadcast.

AES67 is already the answer to a previous interoperability challenge, explains Andreas, as the world of audio over IP was once a fragmented landscape of proprietary systems with no, or limited, interoperability. AES67 defined a way to allow these systems to interoperate and has now become the main way audio is moved in SMPTE ST 2110 under ST 2110-30 (ST 2110-31 allows for AES3). Andreas explains the basics of ST 2110, AES67 and the NMOS specifications. He then shows how they fit together in a layered design.

Andreas brings the talk to a close by looking at some of the extensions that are needed. He highlights the need for more flexibility in the quality-bandwidth-latency trade-off: some ProAV applications require pixel perfection, while others are dictated by lower bandwidth. The current ecosystem, even counting ST 2110-22’s ability to carry JPEG XS instead of uncompressed video, allows only very coarse control of this. HDMI is naturally of great importance for ProAV, given how many HDMI interfaces are in play and the wide variety of resolutions and frame rates found outside of broadcast. Work is ongoing to enable HDCP-protected content to be carried, suitably encrypted, in these systems. Finally, there is a plan to specify a way to relax the highly strict PTP requirements.

Watch now!
Speaker

Andreas Hildebrand
Evangelist,
ALC NetworX

Video: Introduction To AES67 & SMPTE ST 2110

While standardisation of video and audio over IP is welcome, it does leave us with a plethora of standards numbers to keep track of, along with interoperability edge cases. The audio-over-IP standard AES67 is part of the SMPTE ST 2110 standards suite and was born largely from RAVENNA, which is still in use in its own right. It’s with this backdrop that Andreas Hildebrand from ALC NetworX, which has been developing RAVENNA for 10 years now, takes the mic to explain how this all fits together. Whilst there are many technologies at play, this webinar focusses on AES67 and ST 2110.

Andreas explains how AES67 started out of a plan to unite the many proprietary audio-over-IP formats. For instance, synchronisation – like ST 2110, as we’ll see later – was based on PTP. Andreas gives an overview of this synchronisation and then shows how they looked at each of the OSI layers and defined a technology that could serve everyone. RTP, the Real-time Transport Protocol, has long been used to transport video and audio, so it made a perfect option for the transport layer. Andreas highlights the important timing information in the RTP headers and how streams can be delivered by unicast or by IGMP-managed multicast.
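To give a feel for the timing information Andreas refers to, here is a minimal sketch (in Python, not from the talk) that unpacks the fixed 12-byte RTP header defined in RFC 3550; the 32-bit timestamp field is what carries the media-clock time a receiver uses for synchronisation.

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Unpack the fixed 12-byte RTP header (RFC 3550).

    The timestamp is what AES67 / ST 2110 receivers use, together with PTP,
    to place the media on a common timeline.
    """
    if len(packet) < 12:
        raise ValueError("packet shorter than an RTP header")

    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version":      b0 >> 6,        # always 2 for RTP
        "padding":      bool(b0 & 0x20),
        "extension":    bool(b0 & 0x10),
        "csrc_count":   b0 & 0x0F,
        "marker":       bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,      # dynamic types (96-127) are described in the SDP
        "sequence":     seq,            # detects loss and reordering
        "timestamp":    timestamp,      # media-clock units, e.g. 48 kHz for AES67 audio
        "ssrc":         ssrc,           # identifies the stream source
    }
```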

As for the audio itself, standard PCM is the format of choice. Andreas details the different format options available, such as 24-bit audio with 8 channels and 48 samples per packet. By varying these parameters, the sample rate can be increased to 96kHz or the number of audio channels changed. To signal all of this format information, Session Description Protocol (SDP) messages are exchanged: small text documents, defined in RFC 4566, that describe the format of the upcoming audio. For a deeper introduction to IP basics and these topics, have a look at Ed Calverley’s talk.
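For a feel of what such a description looks like, below is a hand-written example (not taken from the talk) advertising an AES67-style multicast stream of 8 channels of 24-bit, 48 kHz audio with a 1 ms packet time; the addresses and PTP clock identity are placeholders.

```
v=0
o=- 1525898890 1525898890 IN IP4 192.168.1.10
s=Example AES67 stream (8ch L24 48kHz)
c=IN IP4 239.69.1.10/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/8
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
a=mediaclk:direct=0
```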

The second half of the video is an introduction to ST 2110. A deeper dive can be found elsewhere on the site from Wes Simpson.
Andreas starts from the basis of ST 2022-6, showing how that was an SDI-based format in which all the audio, video and metadata were carried together. ST 2110 splits the media into separate ‘essences’, which allows them to follow separate workflows without requiring lots of de-embedding and re-embedding processes.

Like most modern standards (ATSC 3.0 is another example), SMPTE ST 2110 is a suite of many standards documents. Andreas takes the time to explain each one, including those currently being worked on. The first is ST 2110-10, which defines the use of PTP for timing and synchronisation, using SMPTE ST 2059 to relate PTP time to the phase of the media essences.
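As a simplified illustration of that relationship (my own sketch, not from the video): ST 2059-1 aligns media clocks to a common epoch, so a receiver can work out where a 48 kHz audio stream’s RTP timestamp should sit simply by counting media-clock ticks since that epoch, truncated to the 32-bit timestamp space.

```python
# Illustrative only: how a media clock relates to PTP time under ST 2059-1,
# which aligns media clocks to a common epoch (1970-01-01 00:00:00 TAI).

MEDIA_CLOCK_RATE = 48_000  # Hz, e.g. ST 2110-30 / AES67 audio

def expected_rtp_timestamp(ptp_time_seconds: float, offset: int = 0) -> int:
    """Expected RTP timestamp at a given PTP time.

    `offset` stands in for any per-stream offset signalled in the SDP
    (a=mediaclk:direct=<offset>); 0 assumes none.
    """
    ticks_since_epoch = int(ptp_time_seconds * MEDIA_CLOCK_RATE)
    return (ticks_since_epoch + offset) % 2**32

# The 48 kHz media clock has ticked ptp_time * 48000 times since the epoch;
# only the low 32 bits of that count appear in the RTP header.
print(expected_rtp_timestamp(1_700_000_000.0))
```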

ST 2110-20 is up next; it is the main standard defining the carriage of uncompressed video, with headline features such as being raster- and resolution-agnostic and supporting a range of colour sampling schemes. ST 2110-21 defines traffic shaping, and Andreas takes time to explain why traffic shaping is necessary and what Narrow, Narrow-Linear and Wide mean in terms of packet timing. Finishing the video theme, ST 2110-22 defines the carriage of mezzanine-compressed video. Intended for codecs like TICO and JPEG XS, which offer light, fast compression, this is the first time compressed video has entered the 2110 suite.
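To give a rough feel for what those sender types mean in practice, here is a back-of-the-envelope sketch (my own numbers, not from the talk; real ST 2110-21 senders are specified via leaky-bucket parameters, not a fixed interval): a linear sender spreads its packets evenly across the whole frame period, while a gapped (narrow) sender squeezes them into the active picture time and stays quiet during blanking.

```python
# Rough, simplified illustration of ST 2110-21 sender pacing.

frame_rate = 50            # frames per second (example)
packets_per_frame = 4000   # assumed packet count for an HD frame
active_fraction = 0.9      # assumed share of the frame spent on active lines

frame_period = 1 / frame_rate

# Narrow-linear: packets evenly spaced over the entire frame period.
linear_interval = frame_period / packets_per_frame

# Narrow (gapped): the same packets sent only during the active picture time,
# so they sit closer together, with a gap during vertical blanking.
gapped_interval = (frame_period * active_fraction) / packets_per_frame

print(f"linear spacing: {linear_interval * 1e6:.2f} us between packets")
print(f"gapped spacing: {gapped_interval * 1e6:.2f} us between packets")
```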

ST 2110-30 marks the beginning of the audio standards, describing how AES67 can be used. As Andreas demonstrates, AES67 has some modes which are not compatible, so he spends time explaining the constraints and how to implement them. For more detail on this topic, check out his previous talk on the matter. ST 2110-31 introduces AES3 audio which, as in SDI, can carry both PCM audio and non-PCM audio such as Dolby E and Dolby D.
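As a rough aide-mémoire for those constraints, the ST 2110-30 conformance levels are commonly summarised along the lines below (a simplified summary, not taken from the talk; the standard text is authoritative).

```python
# Simplified summary of the ST 2110-30 conformance levels as commonly described;
# consult the standard itself for the authoritative definitions.
ST2110_30_LEVELS = {
    "A": {"sample_rate_khz": 48, "channels": "1-8",  "packet_time": "1 ms"},
    "B": {"sample_rate_khz": 48, "channels": "1-8",  "packet_time": "125 us"},
    "C": {"sample_rate_khz": 48, "channels": "1-64", "packet_time": "125 us"},
    # AX/BX/CX variants additionally permit 96 kHz sampling.
}
```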

Finishing up the talk, we hear about ST 2110-40, which governs the transport of ancillary metadata, and get a look at the standards still being written: ST 2110-23 for a single video essence carried over multiple ST 2110-20 streams, ST 2110-24 for the transport of SD signals and ST 2110-41 for the transport of extensible, dynamic metadata.

Watch now!
Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX GmbH

Video: Live Closed Captioning and Subtitling in SMPTE 2110 (update)

The SMPTE ST 2110-40 standard specifies the real-time, RTP transport of SMPTE ST 291-1 Ancillary Data packets. It allows creation of IP essence flows carrying the VANC data familiar to us from SDI (like AFD, closed captions or ad triggering), complementing the existing video and audio portions of the SMPTE ST 2110 suite.

This presentation, by Bill McLaughlin from EEG, is an updated tutorial on subtitling, closed captioning, and other ancillary data workflows using the ST 2110-40 standard. Topics include synchronization, merging of data from different sources and standards conversion.

Building on Bill’s previous presentation at the IP Showcase, this talk at NAB 2019 demonstrates a big increase in the number of vendors supporting the ST 2110-40 standard. Previously, a generic packet analyser like Wireshark with a dissector was recommended for troubleshooting IP ancillary data, but now most leading multiviewer and analyser products can display captioning, subtitling and timecode from 2110-40 streams. At the recent “JT-NM Tested Program” event, 29 products passed 2110-40 Reception Validation and 27 products passed 2110-40 Transmitter Validation, which means that their output can be reconstructed into SDI video signals with appropriate timing and then decoded correctly.

Bill points out that ST 2110-40 is not really a new standard at this point; it only defines how to carry the traditional ancillary data payloads over IP. Special care needs to be taken when VANC data packets from different sources are combined in the IP domain. A lot of existing devices are simple ST 2110-40 receivers, which would require a kind of ‘VANC funnel’ to create a single combined stream of all the relevant ancillary data, making sure that line numbers and packet types don’t conflict, especially when signals need to be converted back to SDI.
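A hypothetical sketch of the kind of check such a ‘VANC funnel’ might perform is below; the names and fields are my own, loosely following the DID/SDID and line-number fields that ST 291-1 ancillary packets carry (for example, DID 0x61 / SDID 0x01 is the CEA-708 caption data packet per ST 334-1).

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AncPacket:
    source: str  # which ST 2110-40 stream the packet came from
    line: int    # VANC line number it should occupy on the SDI output
    did: int     # Data ID (e.g. 0x61 for a CEA-708 caption data packet)
    sdid: int    # Secondary Data ID (e.g. 0x01 per ST 334-1)

def merge_anc(packets: list[AncPacket]) -> list[AncPacket]:
    """Combine ANC packets from several streams, flagging collisions.

    Two packets with the same DID/SDID on the same line from different
    sources cannot both be re-inserted into SDI, so the merge keeps the
    first one seen and reports the rest.
    """
    slots: dict[tuple[int, int, int], AncPacket] = {}
    conflicts = defaultdict(list)

    for pkt in packets:
        key = (pkt.line, pkt.did, pkt.sdid)
        if key in slots and slots[key].source != pkt.source:
            conflicts[key].append(pkt)
        else:
            slots[key] = pkt

    for (line, did, sdid), losers in conflicts.items():
        print(f"conflict on line {line}, DID 0x{did:02X}/SDID 0x{sdid:02X}: "
              f"dropping {[p.source for p in losers]}")

    return list(slots.values())
```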

There is a new ST 2110-41 standard being developed for additional ancillary data which does not match up with the ancillary data standardised in ST 291-1. Another idea discussed is to move away from the SDI VANC data format altogether and use a TTML track (Timed Text Markup Language – textual information associated with timing information) to carry ancillary information.

Watch now!

Download the slides.

Speakers


Bill McLaughlin Bill McLaughlin
VP of Product Development
EEG