Video: Enhanced Redundancy of ST 2059-2 Time Transfer over ST 2022-7 Redundant Networks

We’re all starting to get the hang of the basics: that PTP is the new Black and Burst, that we still need sync to make studios work, and that PTP (IEEE 1588) is standardised under ST 2059 for use in the broadcast industry. So given its importance, how can we make it redundant?

Thomas Kernen from Mellanox, a chair within the SMPTE standards community, talks about his real-life work on implementing PTP with an eye on redundancy methods.

Thomas covers the following and more:

  • Whether 2022-7 works for PTP
  • The BMCA (Best Master Clock Algorithm) redundancy model
  • Multiple grandmaster use
  • Adjusting to dynamic variations in timing feeds
  • IEEE 1588 v2.1
  • Timing Differences in basic networks

Speakers

Thomas Kernen
Staff Software Architect, Mellanox Technologies
Co-chair SMPTE 32NF Network Facilities Technology Committee

Video: Deterministic Video Switching in IP Networks

The broadcast industry spent a lot of time getting synchronous cuts working in analogue and SDI. Now that IP is being used more and more, there’s a question to be asked about whether video switching should be done in the network itself or at the video level within the receiver. Carl Ostrom from the VSF talks us through the pros and cons of video switching within the network itself, along with Brad Gilmer.

First off, switching video at a precise point within the stream is known as ‘deterministic switching’. The industry has become used to solid-state crosspoint switching, which can be precisely timed so that the switch happens within the vertical blanking interval of the video, providing a hitless switch. This isn’t a hitless switch in the sense of SMPTE ST 2022-7, which allows kit to switch from one identical stream to another to deal with packet loss; this is switching between two different streams with, typically, different content. With the move to ST 2110, we have the option of changing the destination of packets on the fly, which can achieve this same switching with the benefit of saving bandwidth. For a receiving device to do a perfect switch, it would need to be receiving both the current and the next video simultaneously, doubling the incoming bandwidth. Not only does this increase the bandwidth, but it can also lead to uneven bandwidth utilisation.

Carl’s open question to the webinar attendees is whether network switching is needed, and he invites Thomas Edwards from the audience to speak. Thomas has previously done a lot of work proposing switching techniques and has also demonstrated that the P4 programming language for switches can successfully manipulate SMPTE ST 2110 traffic in real time, as seen in this demo. Thomas comments that bandwidth within networks built for 2110 doesn’t seem to be a problem, so subscribing to two streams is working well. We hear further comments regarding network-based switching and complexity, possibly also driving up the cost of the switches themselves. Make-before-break can also be a simpler technology to fault-find when a problem occurs.
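
For contrast, the ST 2022-7 ‘seamless’ protection mentioned above can be reduced to a very small sketch: two identical copies of a stream arrive over separate networks, and the first copy of each sequence number to arrive wins. This is purely illustrative — the function and data are invented, and a real receiver works on RTP sequence numbers with a bounded, wrap-aware buffer:

```python
from typing import Iterable, Iterator, Tuple

def seamless_merge(packets: Iterable[Tuple[int, bytes]]) -> Iterator[Tuple[int, bytes]]:
    """Yield each sequence number once: whichever network leg delivers a
    packet first is used, and the duplicate from the other leg is dropped.
    (A real receiver bounds this set and handles sequence wrap-around.)"""
    seen = set()
    for seq, payload in packets:
        if seq not in seen:
            seen.add(seq)
            yield seq, payload

# Two identical legs, each suffering a different single-packet loss:
red  = [(1, b"a"), (3, b"c"), (4, b"d")]   # packet 2 lost on this leg
blue = [(1, b"a"), (2, b"b"), (4, b"d")]   # packet 3 lost on this leg
recovered = sorted(seamless_merge(red + blue))
```

Because each leg loses different packets, the merged output is complete. Deterministic switching between two *different* streams gets no such help, which is why it needs either network-level switching or a double subscription at the receiver.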

Watch now!
Speakers

Carl Ostrom
Vice President,
VSF
Brad Gilmer
Executive Director, Video Services Forum
Executive Director, Advanced Media Workflow Association (AMWA)

Video: ST 2110 Over WAN — The Conclusion of Act 1

Is SMPTE ST 2110 suitable for inter-site connectivity over the WAN? ST 2110 is putting the early adopter phase behind it, with more and more installations and OB vans bringing 2110 into daily use, yet most sites work independently. The market is already seeing a strong need to continue to derive cost and efficiency savings from the infrastructure of larger broadcasters who have multiple facilities spread around one country, or even several. To do this, though, there are a number of challenges still to be overcome, and moving a large number of essence flows long distances and between PTP time domains is one of them.

Nevion’s Andy Rayner is chair of the VSF Activity Group looking into transporting SMPTE ST 2110 over WAN and is here to give an update on the achievements of the past two years. He underlines that the aim of the ST 2110 over WAN activity group is to detail how to securely share media and control between facilities. The key scenarios being considered are 1) special events/remote production/REMIs, 2) facility sharing within a company and 3) sharing facilities between companies. He also notes that there is significant crossover between this work and that happening in the Ground-Cloud-Cloud-Ground (GCCG) activity group, which he also co-chairs.

The group has produced drafts of two documents under TR-09. The first, TR-09-01, discusses the data plane and has largely been discussed previously. It defines two data-protection methods: the standard ST 2022-7 approach, which uses multiple identical flows to deal with packet loss, and a constrained version of the FEC standard ST 2022-5, which provides low-latency FEC protection for individual data streams.

GRE trunking over RTP was previously announced as the recommended way to move traffic between sites, though Andy notes that no one aspect of the document is mandatory. The benefits of using a trunk are that all traffic is routed down the same path, which helps keep the propagation delay for each essence identical; the bitrate is kept high for efficient application of FEC; the workflow and IT requirements are simpler; and, finally, the trunk has now been specified so that it can transparently carry Ethernet headers between locations.

Andy also introduces TR-09-02, which talks about the sharing of control. The control plane in any facility is not specified and doesn’t have to be NMOS. However, NMOS specifications such as IS-04 and IS-05 are the basis chosen for control sharing. Andy describes the control as providing a constrained NMOS interface between autonomous locations and discusses how it makes resources and metadata available to the other location, which then has the choice of whether or not to consume the advertised media and control. This allows facilities to pick and choose what is shared.
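
The pick-and-choose model can be sketched in a few lines. The sender records below are invented for illustration; a real IS-04 Query API returns much richer JSON resources over REST, and IS-05 then makes the actual connections:

```python
# Hypothetical senders advertised by the remote facility via a
# constrained IS-04-style interface (real resources carry many more
# fields: flow_id, device_id, transport, manifest_href and so on).
advertised = [
    {"id": "s1", "label": "Studio A camera 1"},
    {"id": "s2", "label": "Studio A programme mix"},
    {"id": "s3", "label": "Test pattern generator"},
]

def choose_senders(senders, wanted_labels):
    """The consuming facility subscribes only to the advertised senders
    it actually wants; anything not selected is never connected."""
    return [s for s in senders if s["label"] in wanted_labels]

selected = choose_senders(advertised, {"Studio A camera 1", "Studio A programme mix"})
```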

Watch now!
Speakers

Andy Rayner
Chief Technologist, Nevion,
Chair, WAN IP Activity Group, VSF

Video: Uncompressed Video in the Cloud

Moving high-bitrate flows such as uncompressed media through cloud infrastructure, which is designed for scale rather than real-time throughput, requires more thought than simply using UDP and multicast. That traditional approach can certainly work, but is liable to drop the occasional packet, compromising the media.

In this video, Thomas Edwards and Evan Statton outline the work underway at Amazon Web Services (AWS) for reliable real-time delivery. On-prem 2110 network architectures usually have two separate networks. Media essences are sent as single, high-bandwidth flows over both networks, allowing the endpoint to use SMPTE ST 2022-7 seamless switching to deal with any lost packets. Cloud network architectures differ from on-prem networks: they are usually much wider and taller, providing thousands of possible paths to any one destination.

AWS have been working to find ways of harnessing these cloud network architectures and have come up with two protocols. The first to discuss is Scalable Reliable Delivery (SRD), a protocol created by Amazon which guarantees delivery of packets. Delivery is likely to be out of order, so packet order needs to be restored by a layer above SRD. Amazon have custom network cards called ‘Nitro’, and it’s these cards which run the SRD protocol, keeping the functionality as close to the physical layer as possible.
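
Since SRD hands packets up out of order, a re-ordering layer along the lines of this minimal sketch has to sit above it (names invented; a real implementation bounds the buffer and times out on packets that never arrive):

```python
def reorder(packets, first_seq=0):
    """Hold out-of-order arrivals in a dictionary until the next expected
    sequence number is present, then release packets strictly in order."""
    pending = {}
    expected = first_seq
    for seq, payload in packets:
        pending[seq] = payload
        while expected in pending:
            yield expected, pending.pop(expected)
            expected += 1

# SRD might deliver sequence numbers 0, 1, 2 in the order 2, 0, 1;
# the layer above restores the original order.
arrived = [(2, b"C"), (0, b"A"), (1, b"B")]
in_order = list(reorder(arrived))
```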

SRD capitalises on hyperscale networks by splitting each media flow up into many smaller flows. A high-bandwidth uncompressed video flow could be over 1 Gbps; SRD would deliver this over a hundred or more ‘flowlets’, each leaving on a different path. Paths are partially controlled using ECMP (Equal Cost Multipath) routing, whereby the egress port used on a switch is chosen by hashing together a number of parameters such as the source IP and destination port. The sender controls the ECMP path selection by manipulating the packet encapsulation.

SRD employs a specialised congestion-control algorithm that helps further decrease the chance of packet drops and minimise retransmit times by keeping queuing to a minimum. SRD keeps an eye on the RTT (round-trip time) of each of the flowlets and adjusts the bandwidth appropriately. This is particularly useful as a way to deal with ‘incast congestion’, where upstream many flowlets may end up going through the same interface, which is close to being overloaded. In this way, SRD actively works to reduce latency and congestion. SRD is able to monitor round-trip time since it also has a very small retransmit buffer so that any packets which get lost can be resent. Similar to SRT and RIST, SRD expects to receive acknowledgement packets, and by looking at when these arrive and the timing between packets, RTT and bandwidth estimations can be made.
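
The flowlet mechanism can be sketched as follows. CRC32 stands in for whatever hash a particular switch actually implements, and the addresses and ports are invented, but the principle is the same: vary one field of the five-tuple and the flowlet lands on a different equal-cost path.

```python
import zlib

def ecmp_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: int, n_paths: int) -> int:
    """Choose an egress path by hashing the five-tuple, as an ECMP switch
    does (CRC32 stands in for the switch's real hash function)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % n_paths

# One high-bandwidth flow split into 100 flowlets: each flowlet uses its
# own source port, so the hash spreads them across the available paths.
paths = [ecmp_path("10.0.0.1", "10.0.0.2", 40000 + i, 5004, 17, 16)
         for i in range(100)]
```

Because the hash is deterministic, packets of one flowlet always follow the same path (preserving per-flowlet ordering), while different flowlets are spread across the fabric.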

CDI, the Cloud Digital Interface, is a layer on top of SRD which acts as an interface for programmers. Available on GitHub under a BSD licence, it gives access to the incoming essence streams in a way similar to SMPTE’s ST 2110, making it easy to deal with pixel data and get access to RGB graphics including an alpha layer, as well as providing metadata information for subtitles or SCTE 104 signalling.

Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Evan Statton
Principal Architect,
Amazon Web Services (AWS)