Video: Public Internet Transport of Live Broadcast Video – SRT, NDI and RIST for Compressed Video

Getting video over the internet and around the cloud has well-established solutions, but not only are they continuing to evolve, they are still new to some. This video looks at the workflows made possible by teaming up SRT, RIST and NDI, with a glimpse into projects that went live in 2020. We also get a deeper look at RIST’s features in a Q&A.

This video from SMPTE’s New York section starts with Bryan Nelson from Alpha Video, who’s been involved in many cloud-based NDI projects, many of which also use SRT to get in and out of the cloud. NDI is a lightly compressed, low-delay codec suitable for production that works well on 1GbE networks. Not dependent on multicast, it lends itself to cloud-based production, where it has found many uses. Bryan looks at a number of workflows enabled by the Sienna production system, which can work with many video formats including NDI.

For more information on SRT and RIST, have a look at this SMPTE video outlining how they work and their differences. For a deeper dive into NDI, this SMPTE webinar with VizRT explains how it works and also gives demos of the same software that Bryan uses. To get a feel for how NDI fits in with live production compared to SMPTE’s uncompressed ST 2110, the IBC panel discussion ‘Where can SMPTE ST 2110 and NDI Co-exist?’ explores the topic further.

Bryan’s first example is the 2020 NFL Draft, which used remote contribution from iPhones streaming over SRT. All streams were aggregated in AWS, converted to NDI, fed NDI multiviewers and were routed. These were then passed down to on-prem NDI processors running on HP ProLiant servers, which output SDI for handoff to other broadcast workflows. The router could be controlled by soft panels as well as hardware panels on-prem. Bryan explores an extension of this idea where multiple cloud domains can be used, with NDI as the handoff between them. In one cloud system, VizRT vision mixing and graphics can be added, with multiviewers and other outputs sent via SRT to remote directors, producers and others. Another cloud system could be controlled by a third party, applying further processing before the feed is sent to site and decoded to SDI on-prem. This can be entirely separate from acquisition over SDI and NDI, with cameras located elsewhere. SRT and NDI become the mediators in this decentralised production environment.
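
As a rough illustration of the contribution leg of such a workflow, here is a minimal sketch in Python wrapping ffmpeg (assuming a build with libsrt; the addresses and ports are invented): it listens for an incoming SRT stream in the cloud and re-emits it as MPEG-TS over local UDP, where an NDI conversion or production tool could pick it up. It is not Alpha Video’s actual pipeline, just a sketch of the pattern.

```python
import subprocess

# Hypothetical example: accept an SRT contribution feed (e.g. from a phone or
# field encoder acting as the caller) and hand it off locally as MPEG-TS over
# UDP for an NDI converter or vision mixer to ingest.
# Requires an ffmpeg build with libsrt; addresses and ports are illustrative.
ingest = [
    "ffmpeg",
    "-i", "srt://0.0.0.0:9000?mode=listener",   # wait for the remote caller
    "-c", "copy",                                # pass the compressed stream through untouched
    "-f", "mpegts", "udp://127.0.0.1:5000",      # local handoff point
]
subprocess.run(ingest, check=True)
```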

Bryan finishes off by talking about remote NLE monitoring and various types of MCR monitoring. NLE editing is made easy through NDI integration within Adobe Premiere and Avid Media Composer. It’s possible to bring all of these into a processing engine and move them over the public internet for viewing elsewhere via Apple TV or otherwise.

Ciro Noronha from Cobalt Digital takes the last half of the video to talk about RIST. In addition to the talks mentioned above, Ciro recently gave a talk exploring the many RIST use cases. A good written overview of RIST can be found here.

Ciro looks at the two published profiles that make up RIST: the Simple Profile and the Main Profile. The Simple Profile defines RTP interoperability with error correction based on re-requested packets, with the option of bonding links. Ciro covers its use of RTCP for maintaining the channel and handling the negative acknowledgements (NACKs), which are based on RFC 4585. RIST can bond multiple links or use 2022-7 seamless switching.
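
To make the retransmission request concrete, here is a minimal sketch (not taken from any RIST implementation) of how an RFC 4585 Generic NACK encodes lost packets: each entry carries a packet ID (PID) plus a 16-bit bitmask (BLP) flagging up to 16 further losses that follow it.

```python
import struct

def generic_nack_entries(lost_seqs):
    """Pack lost RTP sequence numbers into RFC 4585 Generic NACK entries:
    (PID, BLP) pairs, where BLP bit i set means packet PID + i + 1 is also missing.
    Sequence-number wraparound is ignored to keep the sketch short."""
    entries = []
    lost = sorted(set(lost_seqs))
    while lost:
        pid = lost.pop(0)
        blp = 0
        remaining = []
        for seq in lost:
            offset = seq - pid
            if 1 <= offset <= 16:
                blp |= 1 << (offset - 1)   # mark a follow-on loss in the bitmask
            else:
                remaining.append(seq)      # too far away, start a new entry later
        lost = remaining
        entries.append(struct.pack("!HH", pid & 0xFFFF, blp))
    return b"".join(entries)

# Example: packets 100, 101 and 105 were lost -> one entry with PID=100
# and BLP bits 0 and 4 set.
payload = generic_nack_entries([100, 101, 105])
```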

The Main Profile builds on the Simple Profile by adding encryption, authentication and tunnelling. Tunnels allow multiple flows down one connection, which simplifies firewall configuration and encryption and allows either end to initiate the bi-directional link. The tunnel can also carry non-RIST traffic for any other purpose. The tunnels are GRE over UDP (RFC 8086). DTLS is used for encryption, which is almost identical to the TLS used to secure websites. DTLS uses certificates, meaning you get to authenticate the other end, not just encrypt the data. Alternatively, a pre-shared passphrase can be used, which avoids the need for certificates when that level of authentication isn’t needed or for one-to-many distribution. Ciro concludes by showing that RIST can work with up to 50% packet loss and answers many questions in the Q&A.
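
To give a feel for the tunnelling layer, this sketch (illustrative only, not the RIST reference code; the inner payload and protocol type are placeholders) prepends a basic GRE header, which in GRE-over-UDP (RFC 8086) then travels as the payload of an ordinary UDP datagram so that many inner flows can share one outer connection.

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # illustrative inner protocol type (IPv4 payload)

def gre_encapsulate(inner_packet: bytes, proto: int = GRE_PROTO_IPV4) -> bytes:
    """Prepend a minimal 4-byte GRE header (no checksum, version 0).
    In GRE-over-UDP the resulting GRE packet is carried inside a normal UDP
    datagram, so one outer UDP 'tunnel' can carry many inner flows and either
    side can open the connection through a firewall."""
    flags_version = 0x0000                       # C=0, reserved=0, version=0
    header = struct.pack("!HH", flags_version, proto)
    return header + inner_packet

# Usage idea (socket and remote address are assumed, not shown):
# tunnel_socket.sendto(gre_encapsulate(inner_ip_packet), (remote_host, 4754))
```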

Watch now!
Speakers

Bryan Nelson
Sales Account Executive,
Alpha Video
Ciro Noronha
President, RIST Forum
Executive Vice President of Engineering, Cobalt Digital

SRT – How the hot new UDP video protocol actually works under the hood

It’s been a great year at The Broadcast Knowledge growing to over four thousand followers on social media and packing in 250 new articles. So what better time to look back at 2020’s most popular articles as we head into the new year?

It’s fair to say that SRT has seen a lot of interest this year. This was always going to be the case as top-tier broadcasters are now adopting a ‘code as infrastructure’ approach, whereby transmission chains, post-production and live-production workflows are generated via APIs in the cloud, ready for temporary or permanent use. Seen before as the perfect place to put your streaming service, the cloud is increasingly viewed as a viable option for nearly all parts of the production chain.

Getting video in and out of the cloud can be done without SRT, but SRT is a great option as it seamlessly corrects for packets that get lost en route. How it does this is the topic of this talk from Alex Converse of Twitch. In the original article on this site, one of the highest-ranking this year, it’s also pitched as an RTMP replacement.

RTMP is still heavily used around the world and, like many established technologies, there’s an element of ‘better the devil you know’ mixed with the reality that much equipment out there will never be updated to do anything else. However, new equipment is being delivered with technologies such as SRT, which means that getting from your encoder to the cloud can now be done with less latency, better reliability and a wider choice of codecs than RTMP.
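
As an illustration of how little changes at the encoder end, the sketch below (Python wrapping ffmpeg; the hosts, stream key, passphrase and latency figure are all invented, and option names and units vary between ffmpeg builds, so treat the URLs as illustrative rather than copy-paste ready) pushes the same encode either over RTMP or as MPEG-TS over SRT.

```python
import subprocess

SOURCE = ["-re", "-i", "studio_feed.mp4", "-c:v", "libx264", "-c:a", "aac"]

# Legacy path: RTMP push to an ingest server (effectively tied to H.264/AAC in FLV).
rtmp_push = ["ffmpeg", *SOURCE, "-f", "flv", "rtmp://ingest.example.com/live/streamkey"]

# SRT path: MPEG-TS over SRT with encryption and a receive-buffer latency.
srt_push = [
    "ffmpeg", *SOURCE, "-f", "mpegts",
    "srt://cloud.example.com:9000?mode=caller&passphrase=notasecret&latency=200000",
]

# Run whichever leg you are testing; here the SRT push.
subprocess.run(srt_push, check=True)
```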

SRT, along with RIST, is helping transform the broadcast industry. To learn more, watch Alex’s video and then look at our other articles and videos on the topic.

Speaker

Alex Converse
Streaming Video Software Engineer,
Twitch

Video: SRT Protocol Overview

SRT’s ability to make lossy networks seem like perfect video circuits is increasingly well known, testified to by the SRT Alliance having just surpassed 400 member companies. But this isn’t your average ‘overview’: it dispenses with the technology introductions and goes straight into the detail, so it’s ideal for people who already know the basics and want some deeper knowledge plus a look at the new features to come.

For those wanting an introduction, this article What is SRT? is a good starter which also links to two other intro videos. But today we’re going to join Haivision’s Maxim Sharabayko to look below the surface of SRT.

Maxim starts by introducing the open-source Git repository and the open-source integrations available before heading into the feature matrix. This shows what is and isn’t in SRT. We see that on top of ARQ, it has FEC, encryption, stream multiplexing and, soon, connection bonding. Addressing the major feature areas one by one, we start with connectivity.

SRT has two modes for establishing a connection, which Maxim shows with handshake diagrams. We can see that establishment need only take two round trips, so connections are quick to set up. This allows Maxim to show how firewall traversal is accomplished, though NAT traversal is not yet implemented.
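
The sketch below is a much-simplified, illustrative trace of the caller–listener exchange (the class and field names are descriptive inventions, not the wire format): an induction request/response followed by a conclusion request/response, i.e. two round trips before data can flow.

```python
from dataclasses import dataclass

@dataclass
class HandshakePacket:
    phase: str        # "induction" or "conclusion"
    cookie: int = 0   # SYN-style cookie issued by the listener
    latency_ms: int = 0

def caller_listener_handshake(send, recv, proposed_latency_ms=120):
    """Illustrative two-round-trip connection setup, loosely following SRT's
    caller-listener handshake. 'send' and 'recv' stand in for the UDP socket."""
    # Round trip 1: induction - the listener hands back a cookie so it can
    # stay stateless until the caller proves it saw the reply.
    send(HandshakePacket("induction"))
    reply = recv()
    cookie = reply.cookie

    # Round trip 2: conclusion - the caller echoes the cookie and both sides
    # settle the session parameters (e.g. the larger of the two latencies).
    send(HandshakePacket("conclusion", cookie=cookie, latency_ms=proposed_latency_ms))
    settled = recv()
    return max(proposed_latency_ms, settled.latency_ms)
```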

Next on the list of topics is access control, whereby we need to ensure that only authorised users can gain access. This is achieved using the Stream ID field within SRT control packets, which can contain up to 512 characters, meaning it can be used to transfer usernames, passwords (in the form of keys) and requests. Maxim then explains the AES PSK encryption function and discusses the potential implementation of TLS and DTLS.
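
The SRT access-control guidelines recommend a simple key/value convention inside the Stream ID which a listener can parse before accepting a caller. A minimal sketch of that parsing follows; the accepted-credentials dictionary and the use of the ‘s’ key to carry a secret are invented for the example.

```python
def parse_stream_id(stream_id: str) -> dict:
    """Parse the recommended SRT Stream ID convention, e.g.
    '#!::u=alice,r=studio1,m=publish' -> {'u': 'alice', 'r': 'studio1', 'm': 'publish'}.
    A plain stream ID without the '#!::' prefix is treated as just a resource name."""
    if not stream_id.startswith("#!::"):
        return {"r": stream_id}
    pairs = stream_id[4:].split(",")
    return dict(pair.split("=", 1) for pair in pairs if "=" in pair)

# Hypothetical listener-side check before accepting a caller.
ALLOWED = {"alice": "s3cretk3y01"}   # invented credentials for illustration

def accept_caller(stream_id: str) -> bool:
    info = parse_stream_id(stream_id)
    # Here 's' is used to carry a key, per the note that passwords/keys can be
    # passed in the Stream ID; a real deployment would define its own scheme.
    return ALLOWED.get(info.get("u")) == info.get("s")
```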

Content delivery is next under the magnifying glass, starting with the structure of SRT packets and the difference between the two types, Data and Control, the former being restricted to containing only payload or FEC data. Maxim covers SRT’s positive acknowledgements: the range of received packets is acknowledged every 10ms and, where more than 64 packets arrive in under 10ms, a low-overhead acknowledgement is sent for each group of 64 data packets. But of course, it’s the NAK packets which are the most important part of the protocol. Maxim explains that they can send back a single sequence number or a range of lost packets, and talks about when they are sent. We see how this fits into the Timestamp Based Packet Delivery (TSBPD) mechanism, an SRT feature which delivers packets to the receiver with the same timing as they left the sender. The last thing we look at in this section is a worked example of Too-Late Packet Drop, which explains when and why packets are dropped.
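
A worked sketch of the too-late decision (the numbers are invented): the receiver replays each packet at its original send time plus the configured latency, and anything still missing when that deadline passes is dropped rather than waited for.

```python
def delivery_deadline(sender_timestamp_ms: float, latency_ms: float = 120) -> float:
    """TSBPD-style schedule: deliver at the original send time plus a fixed latency."""
    return sender_timestamp_ms + latency_ms

def should_drop(now_ms: float, sender_timestamp_ms: float, latency_ms: float = 120) -> bool:
    """Too-Late Packet Drop: if a retransmission cannot arrive before the
    packet's play-out deadline, give up on it and keep the stream moving."""
    return now_ms > delivery_deadline(sender_timestamp_ms, latency_ms)

# Example with invented numbers: a packet stamped t=1000ms with 120ms latency
# must be in the buffer by t=1120ms; at t=1130ms it is skipped.
assert should_drop(now_ms=1130, sender_timestamp_ms=1000) is True
```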

ARQ isn’t the only recovery mechanism in SRT; it also provides FEC and, soon, channel bonding. FEC can be useful but has downsides which should be understood: there is a permanent bandwidth overhead, even when the circuit is working well, and additional latency is needed in order to generate the necessary recovery packets. Bonding allows you to send the same stream over more than one circuit and use data from circuit B to fill in any gaps in circuit A, the technique used in SMPTE ST 2022-7. Connection bonding, though, can also spread a stream across multiple connections at once with dynamic balancing between them. Maxim sums up the pros and cons of the different techniques in the table below.

Pros and cons of different packet recovery techniques. Source: Haivision
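
As a sketch of the 2022-7-style ‘same stream over two paths’ idea mentioned above (the sequence numbers are illustrative and real implementations also need an alignment buffer and housekeeping of old entries), a receiver can simply keep the first copy of each sequence number to arrive, whichever leg it came in on:

```python
class SeamlessMerger:
    """Toy 2022-7-style merger: feed it packets from both legs and it emits
    each sequence number once, taking whichever copy arrives first."""
    def __init__(self):
        self.seen = set()

    def ingest(self, seq: int, payload: bytes):
        if seq in self.seen:
            return None            # duplicate from the other path, discard
        self.seen.add(seq)
        return payload             # first arrival wins, regardless of the leg

merger = SeamlessMerger()
out_a = merger.ingest(42, b"from path A")   # delivered
out_b = merger.ingest(42, b"from path B")   # None - already covered by path A
```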

The talk finishes with a look at stream multiplexing, congestion control and ways in which you can use the SRT statistics which are constantly updated to manage your connectivity.

Watch now!
Speakers

Maxim Sharabayko
Senior Software Developer,
Haivision

Video: How to Build an SRT Streaming Flow from Encoder to Edge

SRT is an enabler for contribution over the internet – whether point to point, or cloud egress/ingress. In recent weeks here on The Broadcast Knowledge we have seen different takes on how SRT, short for Secure Reliable Transport, and RIST can be used, including from Open Broadcast Systems.

Here, Karel Boek, CEO of Raskenlund, a Norwegian streaming consultancy, explains SRT and builds a workflow as a live demo showing how you can implement it quickly and easily. He starts by explaining where SRT sits and what it’s for. SRT makes contribution over the internet possible because it has a very light-touch way of recovering the missing packets which are inevitable on internet links. Karel covers Haivision’s creation of SRT and the SRT Alliance that has grown out of it, which now boasts 350 members. The protocol being open source – and now an IETF draft – means that a lot of companies have been happy to adopt it. There are frequent plugfests, one of which has just concluded, where vendors test compatibility with the increasing set of features offered in SRT.

‘Secure’ is the ‘S’ in SRT’s name because the stream can easily be encrypted as part of the protocol. This is an important aspect of enabling sports and enterprise contribution via the cloud, giving confidence that no one can watch the feed before it gets to its destination.

‘Reliable’ is the key offer of SRT, as reliability is the number one problem with the internet and other networks where not all packets get delivered. TCP/IP is a great protocol on which most webpages are delivered. It’s fantastic for file delivery since every single packet gets acknowledged and there really isn’t any way a file can get to the other end without being completely intact. Live streams, however, can’t afford the overhead of counting in and counting out every packet, so SRT’s ability to request only the missing packets is very important. It should be noted that this ARQ-style ability is also found in Zixi and RIST.
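
A rough worked example of why requesting only the missing packets matters (all figures invented, and the NACK message size is a placeholder): on a 10 Mbit/s stream with 1% random loss, NACK-style retransmission costs roughly the lost 1% again plus a trickle of control traffic, whereas acknowledging every packet scales with the full packet rate whether or not anything was lost.

```python
# Back-of-the-envelope overhead comparison; every number here is illustrative.
stream_mbps = 10.0
loss_fraction = 0.01            # assumed 1% packet loss
packet_size_bits = 1316 * 8     # typical 7 x 188-byte TS payload per packet

packets_per_sec = stream_mbps * 1_000_000 / packet_size_bits
retransmit_mbps = stream_mbps * loss_fraction                  # only what was actually lost
nack_mbps = packets_per_sec * loss_fraction * 64 * 8 / 1e6     # small control messages (size assumed)

print(f"{packets_per_sec:.0f} pkt/s, retransmissions ~{retransmit_mbps:.2f} Mbit/s, "
      f"NACK traffic ~{nack_mbps:.3f} Mbit/s")
```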

Karel compares SRT with other protocols including RTMP and MPEG-2 Transport Streams, amongst others. He is careful to separate out HLS, MPEG-DASH and WebRTC as ‘last-mile protocols’ in order to differentiate between the protocols content providers use to move video around as part of production and those used for distribution. RTMP’s use is still notable but diminishing, particularly in the European and American markets. MPEG-TS over UDP is still the best way to deliver within a building; outside of the building, you would then want to protect it at least with FEC, with SMPTE 2022-7 or, better, with a protocol such as RIST or SRT. Karel mentions the details of the Simple Profile of RIST, which by design omitted the features Karel notes as absent; we’ve heard here on The Broadcast Knowledge that these have since been delivered, as planned, in the Main Profile.

In the final part of this talk, Karel builds, live, an example workflow combining Wowza and SRTHub end to end. This is a great way of demonstrating how quickly you can create a workflow with SRT. There are plenty of SRT-enabled encoders and senders, which is one way we can judge the success of the SRT Alliance. Similarly, whilst Haivision’s SRTHub is a useful product which brings things together in the cloud or on-prem, Techex’s MWEdge and Videoflow’s DVG can do similar or more, each with its own advantages.
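
For readers who want to try the plumbing themselves, here is a minimal sketch of a relay-in-the-cloud step similar in spirit to what Karel builds, written in Python driving the srt-live-transmit tool that ships with the open-source SRT library. The addresses, ports and passphrases are invented, and URI syntax can differ between SRT versions, so treat it as a starting point rather than a recipe.

```python
import subprocess

# Cloud relay: listen for the contribution feed, then act as a caller towards the edge.
# srt-live-transmit takes an input URI and an output URI; values below are illustrative.
relay = [
    "srt-live-transmit",
    "srt://:9000?mode=listener&passphrase=contrib-secret0",            # inbound from the encoder
    "srt://edge.example.com:9100?mode=caller&passphrase=dist-secret00",  # outbound to the edge
]
subprocess.run(relay, check=True)
```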

Overall, the takeaway from this talk from Raskenlund is that internet contribution is a solved problem; it’s now for you to choose how to do it and with whom. To that end, the talk ends with a Q&A from people wondering exactly that.

Watch now!
Speaker

Karel Boek
CEO,
Raskenlund