The broadcast industry spent a long time getting synchronous cuts working in analogue and SDI. Now that IP is increasingly in use, there’s a question to be asked: should video switching be done in the network itself, or at the video level within the receiver? Carl Ostrom from the VSF, along with Brad Gilmer, talks us through the pros and cons of video switching within the network itself.
First off, switching video at a precise point within the stream is known as ‘deterministic switching’. The industry has become used to solid-state crosspoint switching which can be timed so precisely that the switch happens within the vertical blanking interval of the video, giving a hitless switch. This isn’t ‘hitless’ in the SMPTE ST 2022-7 sense, where equipment switches between two identical streams to deal with packet loss; this is switching between two different streams with, typically, different content. With the move to ST 2110, we have the option of changing the destination of packets on the fly in the network, which can achieve the same switch with the benefit of saving bandwidth. For a receiving device to do a perfect switch on its own, it would need to receive both the current video and the next video simultaneously, doubling the incoming bandwidth. Not only does this increase the bandwidth needed, it can also make the bandwidth demand uneven.
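As a sketch of how the receiver-side version of a deterministic switch can work: in ST 2110-20 the RTP marker bit is set on the last packet of each video frame, so cutting over immediately after a marker-bit packet lands the switch on a frame boundary, much like an SDI crosspoint timed to the vertical blanking. This is a minimal illustration, not a real receiver; packet handling is heavily simplified and the function names are ours.

```python
def marker_bit(rtp_packet: bytes) -> bool:
    """The top bit of the second RTP header byte is the marker bit,
    which ST 2110-20 sets on the final packet of a video frame."""
    return bool(rtp_packet[1] & 0x80)

def hitless_cut(current_stream, next_stream):
    """Yield packets from the current stream up to and including its
    frame-ending (marker-bit) packet, then continue with the next
    stream -- a deterministic switch at a frame boundary."""
    for pkt in current_stream:
        yield pkt
        if marker_bit(pkt):
            break
    yield from next_stream
```

Note the cost described above: to make this cut, the receiver must already be subscribed to both streams, which is exactly the doubled bandwidth that network-side switching avoids.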
Carl’s open question to the webinar attendees is whether network switching is needed, and he invites Thomas Edwards from the audience to speak. Thomas has previously done a lot of work proposing switching techniques and has demonstrated that the P4 programming language for switches can successfully manipulate SMPTE ST 2110 traffic in real-time, as seen in this demo. Thomas comments that bandwidth within networks built for 2110 doesn’t seem to be a problem, so subscribing to two streams is working well. We hear further comments regarding network-based switching adding complexity, possibly also driving up the cost of the switches themselves. Make-before-break in the receiver can also be a simpler technology to fault-find when a problem occurs.
Is SMPTE ST 2110 suitable for inter-site connectivity over the WAN? ST 2110 is putting the early adopter phase behind it, with more and more installations and OB vans bringing 2110 into daily use, yet most sites work independently. The market is already seeing a strong need to continue to derive cost and efficiency savings from the infrastructure of larger broadcasters who have multiple facilities spread around one country, or even several. To do this, though, there are a number of challenges still to be overcome, and moving a large number of essence flows over long distances and between PTP time domains is one of them.
Nevion’s Andy Rayner is chair of the VSF activity group looking into transporting SMPTE ST 2110 over the WAN and is here to give an update on the achievements of the past two years. He underlines that the aim of the ST 2110 over WAN activity group is to detail how to securely share media and control between facilities. The key scenarios being considered are 1) special events/remote production/REMIs, 2) facility sharing within a company, and 3) sharing facilities between companies. He also notes that there is significant crossover between this work and that of the Ground-Cloud-Cloud-Ground (GCCG) activity group, which he also co-chairs.
The group has produced drafts of two documents under TR-09. The first, TR-09-01, discusses the data plane and has largely been covered previously. It defines the data protection methods as standard ST 2022-7, which uses multiple identical flows to deal with packet loss, and a constrained version of the FEC standard ST 2022-5, which provides low-latency FEC for the protection of individual data streams.
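The FEC family that ST 2022-5 belongs to is built on byte-wise XOR parity across blocks of packets: one parity packet protects a row, and XOR-ing the parity with the surviving packets rebuilds a single lost packet. The sketch below shows only that core recovery idea; real FEC packets carry headers describing the block geometry, which are omitted here, and the payloads are assumed equal-length for simplicity.

```python
from functools import reduce

def xor_parity(packets):
    """Build a parity packet: the byte-wise XOR of every packet in a
    row. Assumes equal-length payloads for simplicity."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(row, lost_index, parity):
    """XOR the parity with the surviving packets to rebuild the one
    that was lost -- the core mechanism behind XOR-based row FEC."""
    survivors = [p for i, p in enumerate(row) if i != lost_index]
    return xor_parity(survivors + [parity])
```

The low latency comes from the small block size: the receiver only has to buffer one row of packets before it can repair a loss, rather than waiting for a retransmission round trip.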
GRE trunking over RTP was previously announced as the recommended way to move traffic between sites, though Andy notes that no one aspect of the document is mandatory. The benefits of using a trunk are that all traffic is routed down the same path, which helps keep the propagation delay identical for each essence; bitrate is kept high for efficient application of FEC; the workflow and IT requirements are simpler; and finally, the trunk has now been specified so that it can transparently carry Ethernet headers between locations.
Andy also introduces TR-09-02, which covers the sharing of control. The control plane within any one facility is not specified and doesn’t have to be NMOS. However, NMOS specifications such as IS-04 and IS-05 are the basis chosen for control sharing. Andy describes the control as providing a constrained NMOS interface between autonomous locations and discusses how it makes resources and metadata available to the other location, which then has the choice of whether or not to consume the advertised media and control. This allows facilities to pick and choose what is shared.
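To make the IS-04 side of this concrete, a remote facility browsing advertised resources would query the other site’s registry over the AMWA IS-04 Query API, whose paths follow the `/x-nmos/query/{version}/{resource}` pattern. The registry address below is entirely hypothetical; the sketch only shows the shape of the request, not the TR-09-02 constraints themselves.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical registry address for the other facility (assumption).
REGISTRY = "http://registry.remote-site.example:8080"

def query_url(resource: str, version: str = "v1.3") -> str:
    """Build an IS-04 Query API URL, e.g. .../x-nmos/query/v1.3/senders."""
    return f"{REGISTRY}/x-nmos/query/{version}/{resource}"

def list_senders():
    """Fetch the senders the other location has chosen to advertise.
    Consuming one would then be an IS-05 connection request."""
    with urlopen(Request(query_url("senders"))) as resp:
        return json.load(resp)
```

The "pick and choose" aspect lives in what the registry exposes: the advertising site controls which senders appear in these query results, and the consuming site decides which, if any, to connect to.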
Moving high bitrate flows such as uncompressed media through cloud infrastructure, which is designed for scale rather than real-time throughput, requires more thought than simply using UDP and multicast. That traditional approach can certainly work, but is liable to drop the occasional packet, compromising the media.
In this video, Thomas Edwards and Evan Statton outline the work underway at Amazon Web Services (AWS) for reliable real-time delivery. On-prem 2110 network architectures usually have two separate networks. Media essences are sent as single, high bandwidth flows over both networks, allowing the endpoint to use SMPTE ST 2022-7 seamless switching to deal with any lost packets. Network architectures in the cloud differ from on-prem networks: they are usually much wider and taller, providing thousands of possible paths to any one destination.
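The ST 2022-7 seamless switching mentioned here boils down to de-duplicating two identical streams by RTP sequence number: the receiver emits each sequence number once, taking whichever copy arrives first, so a packet lost on one path is covered by the other. A minimal sketch, with packets simplified to `(sequence, payload)` tuples rather than parsed RTP headers:

```python
def seamless_merge(arrivals):
    """arrivals: packets from both redundant paths, interleaved in the
    order they reach the receiver. Emit each sequence number once --
    the essence of ST 2022-7 hitless protection."""
    seen = set()
    for seq, payload in arrivals:
        if seq in seen:
            continue  # duplicate from the other path; drop it
        seen.add(seq)
        yield seq, payload
```

A real implementation bounds the `seen` set with a sliding window over the 16-bit RTP sequence space, but the first-arrival-wins principle is the same.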
AWS have been working to find ways of harnessing cloud network architectures and have come up with two protocols. The first is Scalable Reliable Delivery (SRD), a protocol created by Amazon which guarantees delivery of packets. Delivery is likely to be out of order, so packet order needs to be restored by a layer above SRD. Amazon have custom network cards called ‘Nitro’, and it’s these cards which run the SRD protocol, keeping the functionality as close to the physical layer as possible.
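That "layer above SRD" restoring order is essentially a reorder buffer: out-of-order arrivals are held back and released as soon as the next expected sequence number is available. A simplified sketch (it assumes every packet is eventually delivered, which SRD's guaranteed delivery makes reasonable):

```python
import heapq

def reorder(packets):
    """Restore sequence order above an out-of-order transport: buffer
    arrivals in a min-heap and release packets whenever the next
    expected sequence number is at the front."""
    heap, expected = [], 0
    for seq, payload in packets:
        heapq.heappush(heap, (seq, payload))
        while heap and heap[0][0] == expected:
            yield heapq.heappop(heap)
            expected += 1
```

The latency cost of this buffering is bounded by how far out of order packets can arrive, which is why SRD's congestion control works to keep path delays short and similar.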
SRD capitalises on hyperscale networks by splitting each media flow into many smaller flows. A high bandwidth uncompressed video flow could be over 1 Gbps; SRD would deliver this over a hundred or more ‘flowlets’, each leaving on a different path. Paths are partially controlled using ECMP (Equal Cost Multipath) routing, whereby the egress port used on a switch is chosen by hashing together a number of parameters such as the source IP and destination port. The sender controls the ECMP path selection by manipulating the packet encapsulation.

SRD employs a specialised congestion control algorithm that further decreases the chance of packet drops and minimises retransmit times by keeping queuing to a minimum. It keeps an eye on the RTT (round trip time) of each of the flowlets and adjusts the bandwidth appropriately. This is particularly useful in dealing with ‘incast congestion’, where many flowlets converge upstream onto the same interface which is close to being overloaded. In this way, SRD actively works to reduce latency and congestion. SRD also keeps a very small retransmit buffer so that any packets which do get lost can be resent. Similar to SRT and RIST, SRD expects acknowledgement packets, and by looking at when these arrive and the timing between them, it can estimate RTT and bandwidth.
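The ECMP mechanism above can be sketched briefly: a switch hashes fields of the flow's 5-tuple and uses the result to pick one of several equal-cost links, so a sender that varies one hashed field per flowlet (here the source port) spreads its traffic over many paths. Real switch hash functions are vendor-specific; SHA-256 below just stands in for one, and the addresses are illustrative.

```python
import hashlib

def ecmp_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Pick an egress link by hashing flow fields, as an ECMP switch
    does. The hash here is a stand-in for a vendor's algorithm."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_links

def flowlet_links(n_flowlets, n_links, base_port=50000):
    """Give each flowlet its own source port so the hashes spread one
    large media flow across many equal-cost paths."""
    return [ecmp_link("10.0.0.1", "10.0.0.2", base_port + i, 5004, n_links)
            for i in range(n_flowlets)]
```

Because the sender knows which encapsulation fields feed the hash, it can effectively steer a flowlet onto a fresh path when RTT monitoring suggests its current one is congested.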
CDI, the Cloud Digital Interface, is a layer on top of SRD which acts as an interface for programmers. Available on GitHub under a BSD licence, it gives access to the incoming essence streams in a way similar to SMPTE ST 2110, making it easy to deal with pixel data, access RGB graphics including an alpha layer, and receive metadata such as subtitles or SCTE 104 signalling.
Principal Solutions Architect & Evangelist,
Amazon Web Services
Amazon Web Services (AWS)
Early adopters of IP are benefiting from at least one of density, flexibility and scalability, which are some of the promises of the technology. For OB vans, the ability to switch hundreds of feeds within only a couple of rack units is incredibly useful; for others, being able to quickly reconfigure a room is very valuable. So whilst IP isn’t yet right for everyone, those that have adopted it are getting benefits which SDI can’t deliver. Unfortunately, there are aspects of IP which are more complex than the older technology. A playback machine plugged into an SDI router needed no configuration, although the router and control system would need to be updated manually to say that a certain input was now a VT machine. In the IP world, the control system can discover the new device itself, reducing manual intervention. The machine also needs an IP configuration, which can be done manually or automatically. If manual, this is more work than before; if automatic, this is another service that needs to be maintained and understood.
Just like the IT world is built on layers of protocols, standards and specifications, so is a modern broadcast workflow. And like the OSI model, which breaks networking down into easy-to-understand, independent layers such as cabling (layer 1), point-to-point data links (layer 2) and the network layer (layer 3), it’s useful to understand IP systems in layers as this helps reduce complexity. The ‘Networked Media System Big Picture’ is aimed at showing how a professional IP media system is put together and how the different parts of it are linked – and how they are not. It gives a high-level view to help explain the concepts and lets you add detail to show how each protocol, standard and specification is used and what its scope is. The hope is that this diagram will help everyone in your organisation speak a common language and support conversations with vendors and other partners to avoid misunderstandings.
Brad Gilmer takes us through the JT-NM’s diagram, which shows security as the bottom layer for the whole system, meaning that security is all-encompassing and important to everything. Above the security layer is the monitoring layer; naturally, if you can’t measure how the rest of your system is behaving, it’s very hard to understand what’s wrong. For larger systems, you’ll want to aggregate the data and look for trends that may point to worsening performance. Brad explains that next are the control layer and the media & infrastructure layer. The media and infrastructure layer contains the tools and infrastructure needed to create and transport professional media.
Towards the end of this video, Brad shows how the diagram can be filled in and highlighted to show, for instance, the work that AMWA has done with NMOS including work in progress. He also shows the parts of the system that are within the scope of the JT-NM TR 1001 document. These are just two examples of how to use the diagram to frame and focus discussions demonstrating the value of the work undertaken.
Executive Director, Video Services Forum
Executive Director, Advanced Media Workflow Association (AMWA)
Moderator: Wes Simpson