Video: Deterministic Video Switching in IP Networks

The broadcast industry spent a lot of time getting synchronous cuts working in analogue and SDI. Now that IP is being used more and more, there’s a question to be asked about whether video switching should be done in the network itself or at the video level within the receiver. Carl Ostrom from the VSF, along with Brad Gilmer, talks us through the pros and cons of video switching within the network itself.

First off, switching video at a precise point within the stream is known as ‘deterministic switching’. The industry has become used to solid-state crosspoint switching which can be precisely timed so that the switch happens within the vertical blanking interval of the video, providing a hitless switch. This isn’t a hitless switch in the sense of SMPTE ST 2022-7, which allows equipment to switch from one identical stream to another to deal with packet loss; this is switching between two different streams with, typically, different content. With the move to ST 2110, we have the option of changing the destination of packets on the fly in the network, which can achieve the same switch with the benefit of saving bandwidth. For a receiving device to do a perfect switch itself, it would need to receive both the current video and the next video simultaneously, doubling the incoming bandwidth. Not only does this increase the bandwidth, it can also lead to uneven bandwidth.
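As a rough illustration of the receiver-side alternative, here is a minimal Python sketch of a make-before-break switch using IGMP joins. The multicast groups and port are made up for the example, and a real ST 2110 receiver would align the cut to a frame boundary using the RTP timestamps rather than switching arbitrarily:

```python
import socket
import struct

IFACE = "0.0.0.0"            # receive on the default interface (assumption)
PORT = 5004                  # illustrative RTP port shared by both streams
CURRENT_GROUP = "239.1.1.1"  # hypothetical multicast group of the on-air stream
NEXT_GROUP = "239.1.1.2"     # hypothetical group of the stream we are switching to

def membership(group: str) -> bytes:
    """Build the ip_mreq structure used for an IGMP join/leave on a group."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(IFACE))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((IFACE, PORT))

# Already subscribed to the on-air stream.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership(CURRENT_GROUP))

# Make before break: also join the next stream, doubling the incoming bandwidth
# until the cut is made, which is exactly the cost discussed above.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership(NEXT_GROUP))

# ... wait for the chosen switch point, then start consuming packets from NEXT_GROUP ...

# Break: leave the old group so its bandwidth is released.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, membership(CURRENT_GROUP))
```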

Carl’s open question to the webinar attendees is whether network switching is needed, and he invites Thomas Edwards from the audience to speak. Thomas has previously done a lot of work proposing switching techniques and has also demonstrated that the P4 programming language for switches can successfully manipulate SMPTE ST 2110 traffic in real time, as seen in this demo. Thomas comments that bandwidth within networks built for 2110 doesn’t seem to be a problem, so subscribing to two streams is working well. We hear further comments regarding network-based switching adding complexity and possibly also driving up the cost of the switches themselves. Make-before-break can also be a simpler technology to fault-find when a problem occurs.

Watch now!
Speakers

Carl Ostrom
Vice President,
VSF
Brad Gilmer
Executive Director, Video Services Forum
Executive Director, Advanced Media Workflow Association (AMWA)

Video: Uncompressed Video in the Cloud

Moving high bitrate flows such as uncompressed media through cloud infrastructure, which is designed for scale rather than real-time throughput, requires more thought than simply using UDP and multicast. That traditional approach can certainly work, but it is liable to drop the occasional packet, compromising the media.

In this video, Thomas Edwards and Evan Statton outline the work underway at Amazon Web Services (AWS) for reliable real-time delivery. On-prem 2110 network architectures usually have two separate networks. Media essences are sent as single, high bandwidth flows over both networks, allowing the endpoint to use SMPTE ST 2022-7 seamless switching to deal with any lost packets. Cloud network architectures differ from on-prem networks: they are usually much wider and taller, providing thousands of possible paths to any one destination.
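To make the 2022-7 idea concrete, here is a minimal Python sketch of seamless merging, not any vendor’s implementation: the same RTP essence arrives on two sockets, one per network, and duplicates are discarded by sequence number, so a loss on one leg is hidden as long as the other leg delivers. The ports are illustrative and the duplicate window is unbounded for brevity:

```python
import select
import socket

# Illustrative ports carrying the same RTP essence over the two separate networks.
RED = ("0.0.0.0", 5004)
BLUE = ("0.0.0.0", 5006)

def rtp_seq(packet: bytes) -> int:
    """The RTP sequence number sits in bytes 2-3 of the fixed header."""
    return int.from_bytes(packet[2:4], "big")

socks = []
for addr in (RED, BLUE):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(addr)
    socks.append(s)

seen = set()  # recently seen sequence numbers (a real receiver bounds this window)

while True:
    ready, _, _ = select.select(socks, [], [])
    for s in ready:
        packet, _ = s.recvfrom(2048)
        seq = rtp_seq(packet)
        if seq in seen:
            continue  # duplicate already delivered by the other network
        seen.add(seq)
        # hand the packet to the decoder: whichever leg delivers first wins
```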

AWS have been working to find ways of harnessing the cloud network architectures and have come up with two protocols. The first to discuss is Scalable Reliable Delivery, SRD, a protocol created by Amazon which guarantees delivery of packets. Delivery is likely to be out of order, so packet order needs to be restored by a layer above SRD. Amazon have custom network cards called ‘Nitro’ and it’s these cards which run the SRD protocol to keep the functionality as close to the physical layer as possible.
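A minimal sketch of the kind of reordering that layer above SRD has to do is below. The sequence numbers and delivery callback are assumptions for illustration, not the SRD or CDI API:

```python
from typing import Callable, Dict

class ReorderBuffer:
    """Release packets to the application in sequence order, however they arrive."""

    def __init__(self, deliver: Callable[[bytes], None]) -> None:
        self.deliver = deliver               # callback given in-order payloads
        self.next_seq = 0                    # next sequence number to release
        self.pending: Dict[int, bytes] = {}  # out-of-order packets held back

    def on_packet(self, seq: int, payload: bytes) -> None:
        self.pending[seq] = payload
        # Release the longest run of consecutive packets we currently hold.
        while self.next_seq in self.pending:
            self.deliver(self.pending.pop(self.next_seq))
            self.next_seq += 1

# Example: packet 1 arrives before packet 0, but both are delivered in order.
buf = ReorderBuffer(deliver=lambda payload: print(payload))
buf.on_packet(1, b"second")
buf.on_packet(0, b"first")   # prints b'first' then b'second'
```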

SRD capitalises on hyperscale networks by splitting each media flow up into many smaller flows. A high bandwidth uncompressed video flow could be over 1 Gbps; SRD would deliver this over a hundred or more ‘flowlets’, each leaving on a different path. Paths are partially controlled using ECMP (Equal Cost Multipath) routing, whereby the egress port used on a switch is chosen by hashing together a number of parameters such as the source IP and destination port. The sender controls the ECMP path selection by manipulating the packet encapsulation. SRD employs a specialised congestion control algorithm that further decreases the chance of packet drops and minimises retransmit times by keeping queuing to a minimum. SRD keeps an eye on the RTT (round trip time) of each of the flowlets and adjusts the bandwidth appropriately. This is particularly useful for dealing with ‘incast congestion’, where upstream many flowlets may end up going through the same interface which is close to being overloaded. In this way, SRD actively works to reduce latency and congestion. SRD also keeps a very small retransmit buffer so that any packets which get lost can be resent. Similar to SRT and RIST, SRD expects to receive acknowledgement packets, and by looking at when these arrive and the timing between packets, RTT and bandwidth estimations can be made.
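To show how splitting a flow into flowlets interacts with ECMP, here is a toy Python sketch. The hash, field choice and path count are purely illustrative, not what SRD or any particular switch actually computes; the point is that varying one encapsulation field per flowlet changes the hash and therefore the path:

```python
import hashlib

def ecmp_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int, n_paths: int) -> int:
    """Toy ECMP: hash the flow identifiers and pick one of n equal-cost paths."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_paths

# One high-bandwidth flow split into flowlets: each flowlet gets its own
# encapsulation source port, so the flowlets spread across the available paths.
for flowlet in range(8):
    src_port = 40000 + flowlet  # hypothetical per-flowlet encapsulation port
    path = ecmp_path("10.0.0.1", "10.0.0.2", src_port, 5004, n_paths=16)
    print(f"flowlet {flowlet} -> path {path}")
```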

CDI, the Cloud Digital Interface, is a layer on top of SRD which acts as an interface for programmers. Available on GitHub under a BSD licence, it gives access to the incoming essence streams in a way similar to SMPTE ST 2110, making it easy to deal with pixel data and get access to RGB graphics including an alpha layer, as well as providing metadata for subtitles or SCTE 104 signalling.

Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Evan Statton
Principal Architect,
Amazon Web Services (AWS)

Video: How to Deploy an IP-Based Infrastructure

An industry-wide move to any new technology takes time, so there is a steady flow of people new to it. This video is a launchpad for anyone just coming into IP infrastructures, whether because their company is starting or completing an IP project, or because people are starting to ask the question “Should we go IP too?”.

Key Code Media’s Steve Dupaix starts with an overview of how SMPTE’s suite of standards called ST 2110 differs from other IP-based video and audio technologies such as NDI, SRT, RIST and Dante. The key takeaways are that NDI provides compressed video with a low delay of around 100ms, along with a suite of free tools to help you get started. SRT and RIST are similar technologies that are usually used to get AVC or HEVC video from A to B while getting around packet loss, something that NDI and ST 2110 don’t protect against without FEC. This is because SRT and RIST are aimed at moving data over lossy networks like the internet. Find out more about SRT in this SMPTE video. For more on NDI, this video from SMPTE and VizRT gives the detail.

ST 2110’s purpose is to get high quality, usually lossless, video and audio around a local area network. It was originally envisaged as a way of displacing baseband SDI and was specced to work flawlessly in live production environments such as a studio. It brings with it some advantages, such as separating the essences, i.e. video, audio, timing and ancillary data travel as separate streams. It also brings the promise of higher density for routing operations, lower-cost infrastructure, since the routers and switches are standard IT products, and increased flexibility due to the much-reduced need to move or add cables.

Robert Erickson from Grass Valley explains that they have worked hard to move all of their product lines to ‘native IP’ as they believe all workflows will move to IP, whether on-premise or in the cloud. The next step, he sees, is enabling more workflows that move video in and out of the cloud, and for that they need to move to JPEG XS, which can be carried in ST 2110-22. Thomas Edwards from AWS adds their perspective, agreeing that customers are increasingly using JPEG XS for this purpose, but within the cloud they expect use of the new CDI, which is a specification for moving high-bandwidth traffic like 2110-20 streams of uncompressed video from point to point within the cloud.

John Mailhot from Imagine Communications is also the chair of the VSF activity group for ground-cloud-cloud-ground. This aims to harmonise the ways in which vendors provide movement of media, whatever bandwidth, into and out of the cloud as well as from point to point within. From the Imagine side, he says that ST 2110 is now embedded in all products but the key is to choose the most appropriate transport. In the cloud, CDI is often the most appropriate transport within AWS and he agrees that JPEG XS is the most appropriate for cloud<->ground operations.

The panel takes a moment to look at the way the pandemic has impacted the use of video over IP. As we heard earlier this year, the New York Times had been waiting before their move to IP and the pandemic forced them to look at the market earlier than planned. When they looked, they found the products they needed and moved to a full IP workflow. This has been the theme and, if anything, it has driven, and will continue to drive, innovation. The immediate need provided the motivation to consider new workflows, and now that the workflow is IP, it’s quicker, cheaper and easier to test new variations. Thomas Edwards points out that many of the current workflows are heavily reliant on AVC or HEVC despite the desire to use JPEG XS for the broadcast content. For people at home, JPEG XS bandwidths aren’t practical, but RIST with AVC works fine for most applications.

Interoperability between vendors has long been the focus of the industry for ST 2110 and, in John’s opinion, is now pretty reliable for inter-vendor essence exchanges. Recently the focus has been on doing the same with NMOS, which both he and Robert report is working well in recent multi-vendor projects they have been involved in. John’s interest is in working out ways that the cloud and the ground can find out about each other, which isn’t a use case yet covered by AMWA’s NMOS IS-04.
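For a flavour of what that discovery looks like today, an IS-04 Query API lookup is just an HTTP GET against the registry. This sketch assumes a registry at a hypothetical address and API version v1.3, with error handling and pagination omitted:

```python
import requests

REGISTRY = "http://registry.example.com"  # hypothetical NMOS registration/query service

# IS-04 Query API: list the senders currently registered on the network.
senders = requests.get(f"{REGISTRY}/x-nmos/query/v1.3/senders", timeout=5).json()
for sender in senders:
    print(sender["id"], sender.get("label", ""))
```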

The video ends with a Q&A covering the following:

  • Where to start in your transition to IP
  • What to look for in an ST 2110-capable switch
  • Multi-Level routing support
  • Using multicast in AWS
  • Whether IT equipment lifecycles conflict with Broadcast refresh cycles
Watch now!
Speakers

John Mailhot
CTO & Director of Product Management, Infrastructure & Networking,
Imagine Communications
Ciro Noronha
Executive Vice-President of Engineering,
Cobalt Digital
Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Robert Erickson
Strategic Account Manager, Sports and Venues,
Grass Valley
Steve Dupaix
Senior Account Executive,
Key Code Media

Video: P4 Tutorial

P4 is a powerful programming language which runs on network switches themselves, allowing real-time manipulation of the data traffic. In broadcast, this can be used to alter SMPTE ST 2110 video in real time, as demonstrated by Thomas Edwards at the EBU Network Technology Seminar and seen in this short video. “This shows how even on an ethernet switch now, we can program it to make these switching decisions based on any header [including] the application layer of the broadcast data.”
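As a purely conceptual illustration of the match-action model that P4 programs describe (written here as Python pseudo-switch logic, not P4 syntax), a table matched on a header field decides where each packet goes, and repointing a table entry is what switches a receiver from one ST 2110 source to another:

```python
# Conceptual match-action table: key on the destination multicast group of an
# ST 2110 packet and decide which egress port it leaves on (hypothetical values).
forwarding_table = {
    "239.1.1.1": {"action": "forward", "egress_port": 3},
    "239.1.1.2": {"action": "forward", "egress_port": 7},
}

def process_packet(dst_ip: str) -> str:
    entry = forwarding_table.get(dst_ip)
    if entry is None:
        return "drop"  # table miss: default action
    return f"forward out port {entry['egress_port']}"

# A controller can repoint the "crosspoint" by rewriting a table entry, which is
# how a programmable data plane can retarget a stream without the receiver changing.
forwarding_table["239.1.1.1"]["egress_port"] = 7
print(process_packet("239.1.1.1"))  # forward out port 7
```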

This video explains what P4 is and how it works, taking us all the way from the core principles to ways of programming it and harnessing its power. Watching the beginning of the video is enough for most people to get a feel for P4 and how it could be (and is) applied to broadcast.

The speakers, from Cisco and Barefoot Networks (who work with Thomas Edwards from Fox), cover these topics:

• What is the data plane?
• Software Defined Networking (SDN) & OpenFlow
• Benefits of programming your own data plane
• Typical applications of P4
• Novel applications
• Basics of the P4 language
• P4 software tools

Watch now!

Speakers

Antonin Bas
Software Engineer,
Barefoot Networks
Andy Fingerhut
Principal Engineer,
Cisco Systems