Video: SRT Protocol Overview

SRT’s ability to make lossy networks seem like perfect video circuits is increasingly well known, testified to by the SRT Alliance having just surpassed 400 member companies. But this isn’t your average ‘overview’: it dispenses with the technology introductions and goes straight into the detail, so it’s ideal for people who already know the basics and want deeper knowledge plus a look at the new features to come.

For those wanting an introduction, the article What is SRT? is a good starter and also links to two other intro videos. But today we’re going to join Haivision’s Maxim Sharabayko to look below the surface of SRT.

Maxim starts by introducing the open-source Git repository and the open-source integrations available before heading into the feature matrix. This shows what is and isn’t in SRT. We see that on top of ARQ, it has FEC, encryption, stream multiplexing and, soon, connection bonding. Addressing the major feature areas one by one, we start with connectivity.

SRT has two modes to establish a connection, which Maxim shows on handshake diagrams. We can see that establishing a connection takes only two round trips, so it is quick. This allows Maxim to show how firewall traversal is accomplished, though NAT traversal is not yet implemented.
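
As a rough illustration of the caller–listener arrangement (not code from the talk), here is a minimal caller-mode sketch using the open-source libsrt C API; the address, port and error handling are illustrative assumptions:

```c
#include <srt/srt.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    srt_startup();                              /* initialise the library */

    SRTSOCKET sock = srt_create_socket();
    if (sock == SRT_INVALID_SOCK)
        return 1;

    /* Caller mode: this end initiates the handshake towards a listener,
       which is the arrangement that makes firewall traversal easier
       because the connection is outbound. */
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(9000);                          /* example port */
    inet_pton(AF_INET, "203.0.113.10", &sa.sin_addr);     /* example listener address */

    if (srt_connect(sock, (struct sockaddr*)&sa, sizeof sa) == SRT_ERROR) {
        fprintf(stderr, "connect failed: %s\n", srt_getlasterror_str());
        return 1;
    }

    /* ... send or receive payload here ... */

    srt_close(sock);
    srt_cleanup();
    return 0;
}
```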

Next on the list of topics is access control, whereby we need to ensure that only authorised users can gain access. This is achieved using the Stream ID field carried in SRT’s control (handshake) packets, which can contain up to 512 characters, meaning it can be used to transfer usernames, passwords (in the form of keys) and requests. Maxim then explains the AES PSK encryption function and discusses the potential implementation of TLS and DTLS.
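
As a sketch of how those two mechanisms appear in the libsrt API, the snippet below sets a Stream ID and an AES pre-shared passphrase before connecting; the ID syntax and passphrase value are made-up examples rather than anything from the talk, and the listener is free to interpret the string however it likes:

```c
#include <srt/srt.h>
#include <string.h>

/* Configure access control and encryption on an SRT socket.
   Both options must be set before the connection is established. */
static int configure_access(SRTSOCKET sock)
{
    /* Stream ID: up to 512 characters, carried in the handshake and
       typically parsed by the listener to decide whether to accept.
       The "#!::" key=value style below is just one convention. */
    const char* streamid = "#!::u=camera1,r=studio/feed1";
    if (srt_setsockflag(sock, SRTO_STREAMID, streamid, (int)strlen(streamid)) == SRT_ERROR)
        return -1;

    /* AES pre-shared key encryption: both ends must use the same passphrase. */
    const char* passphrase = "example-passphrase";   /* 10..79 characters */
    if (srt_setsockflag(sock, SRTO_PASSPHRASE, passphrase, (int)strlen(passphrase)) == SRT_ERROR)
        return -1;

    int keylen = 32;   /* AES-256; 16 or 24 are also valid */
    return srt_setsockflag(sock, SRTO_PBKEYLEN, &keylen, sizeof keylen);
}
```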

Content delivery is next under the magnifying glass, starting with the structure of SRT packets and the difference between the two types, Data and Control, the former being restricted to containing only payload or FEC data. Maxim covers SRT’s positive acknowledgements: the range of received packets is acknowledged every 10ms and, where more than 64 packets arrive in under 10ms, a low-overhead acknowledgement is sent for each group of 64 data packets. But of course, it’s the NAK packets which are the most important part of the protocol. Maxim explains that they can send back a single sequence number or a range of lost packets, and talks about when they are sent. We see how this fits into the Timestamp-Based Packet Delivery (TSBPD) mechanism, the SRT feature which delivers packets to the receiving application with the same timing as they were submitted at the sender. The last thing we look at in this section is a worked example of Too-Late Packet Drop, which explains when and why packets are dropped.
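
To make the receiver-side behaviour concrete, here is a minimal sketch of the timing-related libsrt options that correspond to TSBPD and Too-Late Packet Drop; the 200 ms latency figure is an arbitrary example, not a recommendation from the talk:

```c
#include <srt/srt.h>

/* Sketch: configure the timing-related options discussed above.
   In practice the latency is chosen relative to the round-trip time
   and the expected loss on the link. */
static int configure_timing(SRTSOCKET sock)
{
    SRT_TRANSTYPE tt = SRTT_LIVE;   /* live transmission mode (TSBPD enabled) */
    if (srt_setsockflag(sock, SRTO_TRANSTYPE, &tt, sizeof tt) == SRT_ERROR)
        return -1;

    int latency_ms = 200;           /* window in which lost packets can be re-requested */
    if (srt_setsockflag(sock, SRTO_RCVLATENCY, &latency_ms, sizeof latency_ms) == SRT_ERROR)
        return -1;

    /* Too-Late Packet Drop: packets that cannot be recovered within the
       latency window are dropped rather than stalling the stream. */
    int tlpktdrop = 1;
    return srt_setsockflag(sock, SRTO_TLPKTDROP, &tlpktdrop, sizeof tlpktdrop);
}
```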

ARQ isn’t the only recovery mechanism in SRT; it also provides FEC and, soon, connection bonding. FEC can be useful but has downsides which should be understood: there is a permanent bandwidth overhead, even when the circuit is working well, and additional latency is needed in order to generate the necessary recovery packets. Bonding allows you to send the same stream over more than one circuit and use data from circuit B to fill in any gaps in circuit A, the technique used in SMPTE ST 2022-7. Connection bonding, though, can also spread a stream across multiple connections at once with dynamic balancing between them. Maxim sums up the pros and cons of the different techniques in the table below, and a short code sketch of enabling FEC follows it.

Pros and cons of different packet recovery techniques. Source: Haivision
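
For those curious how this looks in practice, below is a minimal sketch of enabling FEC via libsrt’s packet-filter socket option; the 10×10 grid is an arbitrary example rather than a recommendation from the talk, and it carries its own bandwidth and latency overhead as described above:

```c
#include <srt/srt.h>
#include <string.h>

/* Sketch: enable the built-in FEC packet filter on a socket.
   "cols" and "rows" define the FEC grid; larger grids recover more
   loss patterns but add more overhead and latency. Both peers must
   agree on a compatible configuration. */
static int enable_fec(SRTSOCKET sock)
{
    const char* filter = "fec,cols:10,rows:10";   /* example configuration */
    return srt_setsockflag(sock, SRTO_PACKETFILTER, filter, (int)strlen(filter));
}
```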

The talk finishes with a look at stream multiplexing, congestion control and ways in which you can use the constantly updated SRT statistics to manage your connectivity.
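
Those statistics can be polled programmatically; the sketch below uses libsrt’s srt_bstats() call, with field names taken from the library’s SRT_TRACEBSTATS structure as best I recall them, so they are worth double-checking against the headers for your version:

```c
#include <srt/srt.h>
#include <stdio.h>

/* Sketch: print a few of the continuously updated SRT statistics for a
   connected socket. Passing 1 clears the interval counters so each call
   reports activity since the previous one; the *Total counters accumulate. */
static void print_stats(SRTSOCKET sock)
{
    SRT_TRACEBSTATS stats;
    if (srt_bstats(sock, &stats, 1) == SRT_ERROR)
        return;

    printf("RTT: %.2f ms, estimated link bandwidth: %.2f Mbps\n",
           stats.msRTT, stats.mbpsBandwidth);
    printf("retransmitted: %d, lost (rcv): %d, dropped (rcv): %d\n",
           stats.pktRetransTotal, stats.pktRcvLossTotal, stats.pktRcvDropTotal);
}
```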

Watch now!
Speakers

Maxim Sharabayko
Senior Software Developer,
Haivision

Video: Demystifying Video Delivery Protocols

Let’s face it, there are a lot of streaming protocols out there both for contribution and distribution. Internet ingest in RTMP is being displaced by RIST and SRT, whilst low-latency players such as CMAF and LL-HLS are vying for position as they try to oust HLS and DASH in existing services streaming to the viewer.

This panel, hosted by Jason Thibeault from the Streaming Video Alliance, talks about all these protocols and attempts to put each in context, both in the broadcast chain and in terms of its features. Two of the main contribution technologies are RIST and SRT, both UDP-based protocols which recover lost packets by re-requesting them from the sender. This results in very high resilience to packet loss – ideal for internet deployments.

First, we hear about SRT from Maxim Sharabayko. He lists some of the 350 members of the SRT Alliance, a group of companies who are delivering SRT in their products and collaborating to ensure interoperability. Maxim explains that, being based on the UDT protocol, it’s able to handle live streaming for contribution as well as optimised file transfer. He also explains that it’s free for commercial use and can be found on GitHub. SRT has been featured a number of times on The Broadcast Knowledge. For a deeper dive into SRT, have a look at videos such as this one, or the ones under the SRT tag.

Next, Kieran Kunhya explains that RIST was a response to an industry request for a vendor-neutral protocol for reliable delivery over the internet or other dedicated links. Not only does vendor-neutrality help remove the reluctance of users or vendors to adopt the technology, but interoperability is also a key benefit. Kieran calls out hitless switching across multiple ISPs and cellular bonding as important features of RIST. For a summary of all of RIST’s features, read this article. For videos with a deeper dive, have a look at the RIST tag here on The Broadcast Knowledge.

Demystifying Video Delivery Protocols from Streaming Video Alliance on Vimeo.

Barry Owen represents WebRTC in this webinar, though Wowza deal with many protocols in their products. WebRTC’s big advantage is sub-second delivery, which is not possible with either CMAF or LL-HLS. Whilst it’s heavily used for video conferencing, for which it was invented, a number of companies in the streaming space are using it for delivery to the user because of its almost instantaneous delivery speed. Unlike CMAF and LL-HLS, a perfect rendition of the video isn’t guaranteed, but for auctions, gambling and interactive services, latency is always king. For contribution, Barry explains, the flexibility of being able to contribute from a browser can be enough to make this a compelling technology, although it does bring with it quality, profile and codec restrictions.

Josh Pressnell and Ali C. Begen talk about the protocols used for delivery to the user. Josh explains how Smooth Streaming has exited the scene, leaving the ground to DASH, CMAF and HLS. They discuss the lack of a true CENC – Common Encryption – mechanism leading to duplication of assets. Similarly, the discussion moves to the fact that many streaming services have to maintain duplicate assets to cover the range of target devices they support.

Looking ahead, the panel is buoyed by the promise of QUIC. There is concern that QUIC, the Google-invented protocol for HTTP delivery over UDP, is going through standardisation in the IETF whilst also being modified by Google separately at the same time. But the prospect of a UDP-style mode and the higher efficiency seems to instil hope across all the participants of the panel.

Watch now to hear all the details!
Speakers

Ali C. Begen
Technical Consultant, Comcast
Kieran Kunhya
Founder & CEO, Open Broadcast Systems
Director, RIST Forum
Barry Owen
VP, Solutions Engineering,
Wowza Media Systems
Josh Pressnell
CTO,
Penthera Technologies
Maxim Sharabayko
Senior Software Developer,
Haivision
Moderator: Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Latency Still Sucks (and What You Can Do About It)

The streaming industry is on an ever-evolving quest to reduce latency, to bring it in line with, or beat, linear broadcasts and to allow business models such as gaming to flourish. When streaming started, latency of a minute or more was not uncommon and, whilst there are some simple ways to improve that, getting down to the latency of digital TV, approximately 5 seconds, is not without challenges. Whilst the target of 5 seconds works for many use cases, it’s still not enough for auctions, gambling or ‘gamification’ which need sub-second latency.

In this panel, Jason Thibeault explores how to reduce latency with Casey Charvet from Gigcasters, Rob Roskin from CenturyLink and Haivision VP Engineering, Marc Cymontkowski. This wide-ranging discussion covers CDN caching, QUIC and HTTP/3, encoder settings, segmented vs. non-segmented streaming, and ingest and distribution protocols.

Key to the discussion is differentiating the ingest protocol from the distribution protocol. Often, just getting the content into the cloud quickly is enough to bring the latency into the customer’s budget. Marc from Haivision explains how SRT achieves low-latency contribution. SRT allows unreliable networks like the Internet to be used for reliable, encrypted video contribution. Created by Haivision and now an Open Source technology with an IETF draft spec, the alliance of SRT users continues to grow as the technology continues to develop and add features. SRT is a ‘re-request’ technology, meaning it achieves its reliability by re-requesting from the encoder any data it missed. This is in contrast to TCP/IP, which acknowledges every single packet of data and resends data when acknowledgements aren’t received. Doing it the SRT way makes the protocol much more efficient and able to cope with real-time media. SRT can also encrypt all traffic which, when sending over the internet, is extremely important even if you’re not sending live sports. In this video, Marc makes the point that SRT also recovers the timing of the stream so that the data comes out of the SRT pipe in the same ‘shape’ as it went in. Particularly with VBR encoding, your decoder needs to receive the same peaks and troughs as the encoder created to save it having to buffer the input as much. All this included, SRT manages to deliver a transport latency of around 2.5 times the round trip time.
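
As a back-of-the-envelope illustration of that 2.5x figure, the sketch below derives an SRT latency setting from a measured round-trip time using libsrt; the numbers and the minimum floor are assumptions for illustration, not Haivision guidance:

```c
#include <srt/srt.h>

/* Sketch: size the SRT latency window from a measured RTT using the
   ~2.5x rule of thumb mentioned in the talk. Real deployments would
   measure the RTT (e.g. from the msRTT statistic) and may need more
   headroom on lossy links. */
static int set_latency_from_rtt(SRTSOCKET sock, int rtt_ms)
{
    int latency_ms = (rtt_ms * 5) / 2;        /* 2.5 x RTT */
    if (latency_ms < 120)
        latency_ms = 120;                     /* library default used as a floor */
    return srt_setsockflag(sock, SRTO_LATENCY, &latency_ms, sizeof latency_ms);
}

/* Example: an 80 ms round trip gives a 200 ms latency window. */
```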

Haivision are members of the RIST Forum, RIST being a similar technology to SRT. Marc explains that RIST is approaching the problem from a standards perspective, taking IETF RFCs and applying them to RTP. SRT took a more pragmatic way forward, creating a binary which implemented the features and making it open source for interoperability.

The video finishes with a Q&A covering HTTP Header compression, recommended size of HLS chunks, peer-to-peer streaming and latency requirements for VoD.

Watch now!
Speakers

Rob Roskin
Principal Solutions Architect,
Level3 Communications
Marc Cymontkowski
VP Engineering – Cloud,
Haivision
Casey Charvet
Managing Director,
Gigcasters
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: SRT – The Simple Solution for Your Remote and At-Home Workforce

SRT allows unreliable networks like the Internet to be used for reliable, encrypted video contribution. Created by Haivision and now an Open Source technology with an IETF draft spec, the alliance of SRT users continues to grow as the technology continues to develop and add features. Haivision are also members of the RIST Forum; Kieran Kunhya spoke about RIST in yesterday’s article.

Being open source, SRT is widely deployed across hundreds of manufacturers’ products, so there is a lot of choice, although Haivision do focus on their own products in this video. The important part is how the protocol works to keep the data intact, which is dealt with in the second segment by Haivision’s Selwyn Jans. Lastly, we hear of some examples of real-world use cases to whet the appetite and start the thought process about how SRT could benefit you.

The fundamental aspect of SRT, as Selwyn explains, is that the packets are counted in at the remote end and, if one is missing, it’s re-requested from the source. Whilst this is broadly how normal file transfers work using TCP, SRT has been optimised to ensure real-time media isn’t unduly delayed. TCP acknowledges every single packet and the sender retransmits when an acknowledgement doesn’t arrive. SRT is more efficient: acknowledgements are minimised and the emphasis is on re-requests for missing packets, which keeps overheads down. A buffer is set up at the destination so that there is still data available while we’re waiting for those packets to be resent. Depending on the network quality, we may need enough buffer to cover several re-requests for the same packet.
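
From the application’s point of view, all of that re-requesting and buffering happens inside the library; a minimal receive loop with libsrt looks something like the sketch below (the buffer size and error handling are illustrative):

```c
#include <srt/srt.h>
#include <stdio.h>

/* Sketch: a minimal receive loop on a connected SRT socket. 1456 bytes is
   the maximum live-mode payload size (the default is 1316, i.e. seven
   188-byte TS packets). Missing packets are re-requested by the library
   behind the scenes and delivered in order, provided they arrive within
   the configured latency window. */
static void receive_loop(SRTSOCKET sock)
{
    char buf[1456];
    for (;;) {
        int n = srt_recvmsg(sock, buf, sizeof buf);
        if (n == SRT_ERROR) {
            fprintf(stderr, "recv failed: %s\n", srt_getlasterror_str());
            break;
        }
        /* ... hand the n-byte payload to the decoder/player here ... */
    }
}
```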

How SRT Works

Selwyn expands upon this re-request mechanism and looks at the ways SRT can be sent: ‘pushed’ out as a caller, or working as a ‘listener’ so that the sender waits to be contacted before it starts sending any data. You can choose whichever best fits around your firewalls. Where there is a NAT firewall, SRT can always be sent out, but receiving requests would need firewall modification. One of the benefits of SRT is its ability to be deployed anywhere, including in a home, quickly and easily, so firewall changes would not be welcome. For a more in-depth description of SRT, check out this talk from SF Video Technology.
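
As a rough sketch of the ‘listener’ side Selwyn describes, the snippet below binds to a local port and waits for a caller before any data flows; the port and single-connection handling are illustrative assumptions:

```c
#include <srt/srt.h>
#include <arpa/inet.h>
#include <string.h>

/* Sketch: listener mode (assumes srt_startup() has already been called).
   The socket binds to a local port, waits for a caller, and only then does
   data start to flow. This is the end that usually needs a firewall/NAT
   rule to accept the incoming connection. */
static SRTSOCKET accept_one_caller(int port)
{
    SRTSOCKET ls = srt_create_socket();
    if (ls == SRT_INVALID_SOCK)
        return SRT_INVALID_SOCK;

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family      = AF_INET;
    sa.sin_port        = htons(port);
    sa.sin_addr.s_addr = INADDR_ANY;

    if (srt_bind(ls, (struct sockaddr*)&sa, sizeof sa) == SRT_ERROR)
        return SRT_INVALID_SOCK;
    if (srt_listen(ls, 1) == SRT_ERROR)
        return SRT_INVALID_SOCK;

    struct sockaddr_storage peer;
    int peer_len = sizeof peer;
    return srt_accept(ls, (struct sockaddr*)&peer, &peer_len);   /* blocks until a caller connects */
}
```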

The last section features Corey Behnke from streaming company Live X talking about where they have been using SRT. Replacing satellite is one important use of SRT, since in many places there is sufficient bandwidth available to stream over the internet. Before technologies such as SRT, this was likely to lead to breakups on air, so satellite was the clear winner; now, there’s money to be saved by not buying satellite space. Cloud ingress and egress is also a very important workflow for SRT and similar protocols. The panellists explain how this works using the Haivision Media Gateway as an example, though other products, such as those from Techex and Videoflow, are also available.

Watch now!
Speakers

Marcus Schioler
Vice President, Product Marketing,
Haivision
Selwyn Jans
Technical Video Engineer,
Haivision
Corey Behnke
Producer & Co-Founder,
Live X