Video: Case Study on a Large Scale Distributed ST 2110 Deployment

We’re “past the early-adopter stage” of SMPTE 2110, notes Andy Rayner from Nevion as he introduces this case study of a multi-national broadcaster that has created a 2110-based live production network spanning ten countries.

This isn’t the first IP project that Nevion have worked on, but it’s doubtless the biggest to date. It’s in the context of these projects that Andy says he’s seen the IP market mature, both in how broadcasters want to use the technology and, to an extent, in the solutions available.

Fully engaging with the benefits of IP drives the demand for scale, as people are freer to define a workflow that works best for the business without the constraint of staying within one facility. Part of the point of this whole project is to centralise all the equipment in two shared facilities with everyone working remotely. This isn’t remote production of an individual show; this is remote production of whole buildings.

SMPTE ST 2110, famously, sends all essences separately, so where a 1024×1024 SDI router might have carried 70% of the media between two locations, we’re now seeing tens of thousands of streams. In fact, the project as a whole is managing on the order of 100,000 connections.

With so many connections, many of which are linked, manual management isn’t practical. The only sensible way to manage them is through an abstraction layer. For instance, if you abstract the IP connections from the control, you can still give an engineer or operator a panel with a source labelled ‘Playout Server O/P 3’ and route it to a destination button that says ‘Prod Mon 2’. Behind the scenes, that one button press may have to make 18 connections across 5 separate switches.
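
To make the idea concrete, here’s a minimal sketch of such an abstraction layer in Python. Every endpoint name, flow and switch ID below is invented for illustration, and a real orchestrator’s data model is far richer, but it shows how one button press fans out into many low-level connections.

```python
# Minimal sketch: one logical route expands into many flow connections.
# All endpoint names, multicast flows and switch IDs are illustrative.

# Each logical source hides the individual ST 2110 essence flows behind it.
LOGICAL_SOURCES = {
    "Playout Server O/P 3": ["vid_239.10.0.3", "aud1_239.20.0.3",
                             "aud2_239.20.1.3", "anc_239.30.0.3"],
}
# Each logical destination knows which switches sit on the path to it.
PATH_TO_DEST = {
    "Prod Mon 2": ["sw-a", "sw-c", "sw-e"],
}

def route(source: str, dest: str) -> list[tuple[str, str]]:
    """Expand one 'route' button press into per-switch flow connections."""
    return [(switch, flow)
            for flow in LOGICAL_SOURCES[source]
            for switch in PATH_TO_DEST[dest]]

connections = route("Playout Server O/P 3", "Prod Mon 2")
print(f"{len(connections)} connections programmed")  # 12 in this toy example
```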

This orchestration is possible using SDN – Software Defined Networking – where routing decisions are taken away from the individual routers and switches. The problem is that if a switch has to decide how to send some traffic, all it can do is look at its small part of the network and do its best. SDN allows you to have a controller, or orchestrator, which understands the network as a whole and can make much more efficient decisions. For instance, it can make absolutely sure that ST 2022-7 traffic is routed over diverse paths. It can also do bandwidth calculations to stop links from being oversubscribed.
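
As a sketch of what that buys you, the toy admission check below reserves bandwidth on every hop of a path and keeps the ST 2022-7 protection leg on a link-disjoint route. The topology and figures are invented; a real controller does this with proper path-finding.

```python
# Toy SDN-style admission control. Topology and capacities are invented.
LINK_CAPACITY_GBPS = {("a", "b"): 100, ("b", "c"): 100,
                      ("a", "d"): 100, ("d", "c"): 100}
link_load = {link: 0.0 for link in LINK_CAPACITY_GBPS}

def admit(path, bitrate_gbps):
    """Reserve bandwidth on every hop of the path, or refuse the route."""
    hops = list(zip(path, path[1:]))
    if any(link_load[h] + bitrate_gbps > LINK_CAPACITY_GBPS[h] for h in hops):
        return False  # would oversubscribe a link somewhere on the path
    for h in hops:
        link_load[h] += bitrate_gbps
    return True

main_path, protect_path = ["a", "b", "c"], ["a", "d", "c"]
# ST 2022-7 needs the two legs to share no links:
assert set(zip(main_path, main_path[1:])).isdisjoint(
    zip(protect_path, protect_path[1:]))
print(admit(main_path, 2.2), admit(protect_path, 2.2))  # True True
```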

Whilst the network is, indeed, based on SMPTE ST 2110, one of the key enablers is JPEG XS for international links. JPEG XS provides a similar compression level to JPEG 2000 but with much less latency. The encode itself requires less than 1ms of latency, unlike JPEG 2000’s 60ms. Whilst 60ms may seem small, when a video needs to make 4 or even 10 such hops as part of a production workflow, it soon adds up to a latency that humans can’t work with. JPEG XS promises to allow such international production to feel responsive and natural. Making this possible was the extension of SMPTE ST 2110, for the first time, to allow carriage of compressed video in ST 2110-22.
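
The arithmetic is simple but worth seeing. Using the figures from the talk (around 60ms per JPEG 2000 encode versus under 1ms for JPEG XS):

```python
# Back-of-envelope: codec latency accumulated over multiple hops in a chain.
for hops in (4, 10):
    print(f"{hops} hops: JPEG 2000 ~{hops * 60} ms, JPEG XS < {hops * 1} ms")
# 10 hops of JPEG 2000 is ~600 ms - far too much for natural conversation
# or responsive vision mixing, while JPEG XS stays around 10 ms.
```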

Andy finishes his overview of this uniquely large case study talking about conversion between types of audio, operating SDN with IGMP multicast islands, and NMOS Control. In fact, it’s NMOS which is the answer to the final question asking what the biggest challenge is in putting this type of project together. Clearly, in a project of this magnitude, there are challenges around every corner, but problems due to quantity can be measured and managed. Andy points out that NMOS adoption amongst manufacturers still needs to be pushed higher, whilst he lays down the challenge to AMWA to develop NMOS further so that it’s extended to describe more aspects of the equipment – to date, there are not enough data points.

Watch now!
Speakers

Andy Rayner
Chief Technologist,
Nevion

Video: Introduction to IPMX

The Broadcast Knowledge has documented over 100 videos and webinars on SMPTE ST 2110. It’s a great suite of standards, but it’s not always simple to implement. For smaller systems, many of the complications and nuances don’t occur, so a lot of the deeper dives into ST 2110 and its associated specifications, such as NMOS from AMWA, focus on the work done in large systems at tier-1 broadcasters such as the BBC, tpc and FIS Skiing for SVT.

ProAV, the professional end of the AV market, is a different world. Very few companies have a large AV department, if one at all, so the ProAV market needs technologies which are much more ‘plug and play’, particularly on the events side of the market. To date, the ProAV market has been successful in adopting IP technology with quick deployments by using heavily proprietary solutions such as ZeeVee, SDVoE and NDI, to name a few. These achieve interoperability by having the same software or hardware in each and every implementation.

IPMX aims to change this by bringing together a mix of standards and open specifications: SMPTE ST 2110, the NMOS specs and AES standards. Any individual or company can gain access to them and develop a service or product that meets them.

Andreas gives a brief history of media over IP to date, outlining how AES67, ST 2110, ST 2059 and the NMOS IS specifications came about, his point being that the work is not yet done. ProAV has needs beyond, though complementary to, those of broadcast.

AES67 is already the answer to a previous interoperability challenge, explains Andreas, as the world of audio over IP was once a fragmented world of proprietary systems with no, or limited, interoperability. AES67 defined a way to allow these systems to interoperate and has now become the main way audio is moved in SMPTE ST 2110 under ST 2110-30 (2110-31 allows for AES3). Andreas explains the basics of 2110 and AES, as well as the NMOS specifications, then shows how they fit together in a layered design.

Andreas brings the talk to a close by looking at some of the extensions that are needed. He highlights the ability to be more flexible with the quality-bandwidth-latency trade-off: some ProAV applications require pixel perfection, while others are constrained to lower bandwidths. The current ecosystem, even including ST 2110-22’s ability to carry JPEG XS instead of uncompressed video, allows only very coarse control of this. HDMI, naturally, is of great importance for ProAV, with so many HDMI interfaces in play, as is the wide variety of resolutions and framerates found outside of broadcast. Work is ongoing to enable HDCP-protected content to be carried, suitably encrypted, in these systems. Finally, there is a plan to specify a way to relax the highly strict PTP requirements.

Watch now!
Speaker

Andreas Hildebrand
Evangelist,
ALC NetworX

Video: Introduction To AES67 & SMPTE ST 2110

While standardisation of video and audio over IP is welcome, it does leave us with a plethora of standards numbers to keep track of, along with interoperability edge cases. Audio-over-IP standard AES67 is used within the SMPTE ST 2110 standards suite and was born largely from RAVENNA, which is still in use in its own right. It’s with this backdrop that Andreas Hildebrand from ALC NetworX, who have been developing RAVENNA for 10 years now, takes the mic to explain how this all fits together. Whilst there are many technologies at play, this webinar focusses on AES67 and 2110.

Andreas explains how AES67 started out of a plan to unite the many proprietary audio-over-IP formats. Synchronisation, for instance – like ST 2110’s, as we’ll see later – was based on PTP. Andreas gives an overview of this synchronisation and then shows how the group looked at each of the OSI layers and defined a technology that could serve everyone. RTP, the Real-time Transport Protocol, has long been used for transport of video and audio, so it made a perfect option for the transport layer. Andreas highlights the important timing information in the headers and how streams can be delivered by unicast or IGMP multicast.
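
As a quick illustration of what lives in those headers, here’s a minimal parse of the fixed 12-byte RTP header (RFC 3550) in Python; the packet itself is fabricated for the example.

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Pull the fields AES67 receivers care about from a fixed RTP header."""
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHLL", packet[:12])
    return {
        "version": b0 >> 6,         # always 2 for RTP
        "payload_type": b1 & 0x7F,  # matches the rtpmap entry in the SDP
        "sequence": seq,            # detects loss and reordering
        "timestamp": timestamp,     # ticks of the media clock, e.g. 48 kHz
        "ssrc": ssrc,               # identifies the sending source
    }

# A fabricated packet: version 2, payload type 96, seq 1000, timestamp 480000.
pkt = struct.pack("!BBHLL", 0x80, 96, 1000, 480_000, 0x1234) + b"\x00" * 32
print(parse_rtp_header(pkt))
```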

As for the audio itself, standard PCM is the choice here. Andreas details the different format options available, such as 24-bit samples with 8 channels and 48 samples per packet. By varying the format permutations, we can increase the sample rate to 96kHz or change the number of audio channels. To signal all of this format information, Session Description Protocol (SDP) messages are exchanged – small text documents, defined in RFC 4566, outlining the format of the upcoming audio. For a deeper introduction to IP basics and these topics, have a look at Ed Calverley’s talk.
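
The packet arithmetic for the format Andreas mentions is straightforward, and the SDP that would announce it is short. Both are sketched below; the addresses and session IDs are invented for illustration.

```python
# 24-bit samples, 8 channels, 48 samples per packet (1 ms at 48 kHz):
payload_bytes = (24 // 8) * 8 * 48
print(payload_bytes)  # 1152 bytes - comfortably inside a 1500-byte MTU

# An illustrative RFC 4566 SDP announcing that stream:
sdp = """\
v=0
o=- 1423986 1423994 IN IP4 192.168.1.10
s=AES67 example stream
c=IN IP4 239.69.83.133/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/8
a=ptime:1
"""
print(sdp)
```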

The second half of the video is an introduction to ST-2110. A deeper dive can be found elsewhere on the site from Wes Simpson.
Andreas starts from the basis of ST 2022-6, showing how that was an SDI-based format where all the audio, video and metadata were combined together. ST 2110 splits the media into separate ‘essences’, which allows them to follow separate workflows without requiring lots of de-embedding and embedding processes.

Like most modern standards (ATSC 3.0 is another example), SMPTE ST 2110 is a suite of many standards documents. Andreas takes the time to explain each one, as well as those currently being worked on. The first is ST 2110-10, which defines the use of PTP for timing and synchronisation. This uses SMPTE ST 2059 to relate PTP time to the phase of media essences.
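
A simplified sketch of that relationship: because media clocks are phase-aligned to the PTP epoch, any device can compute the RTP timestamp that corresponds to a given PTP time without consulting anyone else. The instant chosen below is arbitrary.

```python
def rtp_timestamp(ptp_time_s: float, media_clock_hz: int) -> int:
    """Media-clock ticks since the PTP epoch, truncated to RTP's 32 bits."""
    return int(ptp_time_s * media_clock_hz) % 2**32

t = 1_700_000_000.0              # seconds since the PTP epoch (arbitrary)
print(rtp_timestamp(t, 90_000))  # 90 kHz video media clock
print(rtp_timestamp(t, 48_000))  # 48 kHz audio media clock
```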

2110-20 is up next: the main standard defining the use of uncompressed video, with headline features such as being raster/resolution agnostic and supporting a range of colour sampling. 2110-21 defines traffic shaping; Andreas takes time to explain why traffic shaping is necessary and what Narrow, Narrow-Linear and Wide mean in terms of packet timing. Finishing the video theme, 2110-22 defines the carriage of mezzanine-compressed video. Intended for codecs like TICO and JPEG XS, which offer light, fast compression, this is the first time that compressed media has entered the 2110 suite.
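
To give a feel for 2110-20’s packing, here’s the pixel-group (‘pgroup’) arithmetic in a small sketch; for 4:2:2 10-bit, one pgroup holds the four samples (Cb, Y0, Cr, Y1) covering two pixels.

```python
def pgroup_bytes(samples_per_group: int, bit_depth: int) -> int:
    """pgroups pack samples with no padding and must be octet-aligned."""
    bits = samples_per_group * bit_depth
    assert bits % 8 == 0
    return bits // 8

print(pgroup_bytes(4, 10))  # 4:2:2 10-bit: 5 bytes per 2 pixels
print(pgroup_bytes(3, 8))   # 4:4:4 8-bit: 3 bytes per pixel
frame = (1920 * 1080 // 2) * pgroup_bytes(4, 10)
print(frame)                # 5,184,000 bytes in one 1080p 4:2:2 10-bit frame
```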

2110-30 marks the beginning of the audio standards, describing how AES67 can be used. As Andreas demonstrates, AES67 has some modes which are not compatible, so he spends time explaining the constraints and how to implement them. For more detail on this topic, check out his previous talk on the matter. 2110-31 introduces AES3 audio which, as in SDI, provides the ability to carry both PCM audio and non-PCM audio such as Dolby E and D.

Finishing up the talk, we hear about 2110-40, which governs transport of ancillary metadata, and get a look at the standards still being written: 2110-23 (single video essence over multiple 2110-20 streams), 2110-24 (transport of SD signals) and 2110-41 (transport of extensible, dynamic metadata).

Watch now!
Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX GmbH

Video: The Basics of SMPTE ST 2110 in 60 Minutes

SMPTE ST 2110 is a growing suite of standards detailing uncompressed media transport over networks. Now at 8 documents, it’s far more than just ‘video over IP’. This talk looks at the new ways that video can be transported, deals with PTP timing and the creation of ‘SDPs’, and takes a thorough look at all the documents.

Building on this talk from Ed Calverley which explains how we can use networks to carry uncompressed video, Wes Simpson goes through all the parts of the ST 2110 suite explaining how they work and interoperate as part of the IP Showcase at NAB 2019.

Wes starts by highlighting the new parts of 2110: the overview document, which gives a high-level summary of all the standards documents; the addition of compressed-video carriage; and the recommended practice for splitting a single video and sending it over multiple links. The latter two are detailed later in the talk.

SMPTE ST 2110 is fundamentally different, as highlighted next, in that it splits up all the separate parts of the signal (i.e. video, audio and metadata) so they can be transferred and processed separately. This is a great advantage in terms of reading metadata without having to ingest large amounts of video, meaning that the networking and processing requirements are much lighter than they would otherwise be. However, when essences are separated, putting them back together without any synchronisation issues is tricky.

ST 2110-10 deals with timing, and with knowing which packets of one essence are associated with which packets of another at any particular point in time. It does this with PTP, which is detailed in IEEE 1588 and also in SMPTE ST 2059-2. Two standards are needed to make this work: the IEEE defined how to derive and carry timing over the network, then SMPTE detailed how to match PTP times to the phases of media. Wes highlights that care needs to be taken when using PTP with AES67, as the audio standard requires specific timing parameters.

The next section moves into the video portion of 2110, dealing with video encapsulation on the network, pixel grouping and the headers needed for the packets. Wes then spends some time walking us through calculating the bitrate of a stream. Whilst for most people a look-up table of standard formats would suffice, understanding how to calculate the throughput develops a very good understanding of the way 2110 is carried on the wire, as you have to take note not only of the video itself (4:2:2 10-bit, for instance) but also the pixel groupings and the UDP, RTP and IP headers.
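
As a worked example of the kind of calculation Wes demonstrates, the sketch below works out a 1080p59.94 4:2:2 10-bit stream. The 1,200-byte payload size and plain IPv4 headers are assumptions; the exact result shifts with payload size and header options.

```python
import math

width, height, fps = 1920, 1080, 60000 / 1001
bytes_per_frame = (width * height // 2) * 5   # 4:2:2 10-bit pgroups: 5 B / 2 px
payload = 1200                                # assumed video bytes per packet
packets_per_frame = math.ceil(bytes_per_frame / payload)
overhead = 12 + 8 + 20                        # RTP + UDP + IPv4 headers
bits_per_frame = (bytes_per_frame + packets_per_frame * overhead) * 8
print(f"{packets_per_frame} packets/frame, "
      f"{bits_per_frame * fps / 1e9:.2f} Gbps")  # 4320 packets, ~2.57 Gbps
```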

Timing of packets on the wire isn’t anything new – it matters for compressed applications too – but it’s just as important here to ensure that packets are sent properly paced on the wire. That is to say, if you need to send 10 packets, you send them one at a time with equal time between them, not all at once right next to each other. Such ‘microbursting’ can cause problems not only for the receiver, which then needs more buffering, but also for other streams on the network: it can reduce the efficiency of the routers and switches, leading to jitter and possibly dropped packets. 2110-21 governs the timing and pacing of network traffic for the whole 2110 suite.
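
Continuing the assumed numbers from the bitrate sketch above, pacing comes down to a simple division: spread a frame’s packets evenly across the frame period.

```python
packets_per_frame = 4320                    # from the bitrate sketch above
frame_period_us = 1e6 * 1001 / 60000        # ~16,683 us per 59.94 Hz frame
gap_us = frame_period_us / packets_per_frame
print(f"one packet every {gap_us:.2f} us")  # ~3.86 us of evenly spaced sending
# Bursting all 4320 packets back-to-back instead is the 'microbursting'
# that forces deep receiver buffers and stresses switch queues.
```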

Referring back to his earlier warning regarding timing and AES67, Wes now goes into detail on the 2110-30 standard, which describes the use of audio in these uncompressed workflows. He explains how the sample rates and packet times relate to the ability to carry multiple channels, with some configurations allowing 64 channels of audio in one stream rather than the typical 8.
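
The trade-off is easy to see with a little arithmetic. Assuming 24-bit samples at 48kHz and roughly 1,440 bytes of usable RTP payload in a standard MTU:

```python
def max_channels(ptime_ms: float, payload_limit=1440,
                 sample_bytes=3, fs=48_000) -> int:
    """How many channels fit in one packet for a given packet time."""
    samples_per_packet = int(fs * ptime_ms / 1000)
    return payload_limit // (samples_per_packet * sample_bytes)

print(max_channels(1.0))    # 1 ms ptime: 48 samples/packet -> 10 channels
print(max_channels(0.125))  # 125 us ptime: 6 samples/packet -> 80 channels
# 2110-30 caps the 125 us conformance levels at 64 channels per stream.
```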

‘Essences’, rather than ‘media’, is a word often heard when talking about 2110. This is an acknowledgement that metadata is just as important as the media described in 2110, and it’s sent separately, as described by 2110-40. Wes explains the way captions/subtitles, ad triggers, timecode and more can be encapsulated in the stream as ancillary ‘ANC’ packets.

2110-22 is an exciting new addition as it enables the use of compressed video such as VC-2 and JPEG XS, ultra-low-latency codecs which allow the stream’s bandwidth to be cut to a half, a quarter or less. As described in this talk, the ability to create workflows on a single IP infrastructure, seamlessly moving into and out of compressed video, is enabling remote production across countries, allowing equipment to be centralised with people and control surfaces elsewhere.

Noted as ‘forthcoming’ by Wes, but since published, is RP 2110-23, which adds back a feature lost in the migration from 2022-6 to 2110: the ability to send a UHD feed as 4x HD feeds. This can be useful for adopting UHD as a production format while letting multiviewers work in HD mode for monitoring. Wes explains the different modes available. The talk finishes by looking at RTP timestamps and SDPs.

Watch now!
The slides for this talk are available here
Speakers

Wes Simpson
President,
Telecom Product Consulting