Video: Demystifying Video Delivery Protocols

Let’s face it, there are a lot of streaming protocols out there both for contribution and distribution. Internet ingest in RTMP is being displaced by RIST and SRT, whilst low-latency players such as CMAF and LL-HLS are vying for position as they try to oust HLS and DASH in existing services streaming to the viewer.

This panel, hosted by Jason Thibeault from the Streaming Video Alliance, talks about all these protocols and attempts to put each in context, both in the broadcast chain and in terms of its features. Two of the main contribution technologies are RIST and SRT, both UDP-based protocols which recover lost packets by re-requesting them from the sender. This results in very high resilience to packet loss – ideal for internet deployments.
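The re-request mechanism can be sketched in a few lines. This is a toy illustration of the NACK-style recovery idea shared by RIST and SRT, not either protocol's actual wire format or state machine; real implementations work on RTP/SRT packets with timers, reorder tolerance and a bounded latency window.

```python
# Toy sketch of NACK-style retransmission as used conceptually by RIST and SRT.
# Real implementations add timers, reorder tolerance and a latency budget.

def find_missing(received_seqs, highest_seq):
    """Return the sequence numbers the receiver should re-request."""
    return [s for s in range(highest_seq + 1) if s not in received_seqs]

class Receiver:
    def __init__(self):
        self.buffer = {}   # seq -> payload, reassembled in order later
        self.highest = -1

    def on_packet(self, seq, payload):
        self.buffer[seq] = payload
        self.highest = max(self.highest, seq)

    def nack_list(self):
        # Any gap below the highest sequence seen is assumed lost
        return find_missing(self.buffer.keys(), self.highest)

rx = Receiver()
for seq in [0, 1, 3, 4, 6]:   # packets 2 and 5 were dropped in transit
    rx.on_packet(seq, b"payload")

print(rx.nack_list())         # → [2, 5]: the receiver re-requests these
```

Because only the lost packets travel again (rather than everything after a loss, as TCP can end up doing), the scheme stays efficient even on lossy internet paths.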

First, we hear about SRT from Maxim Sharabayko. He lists some of the 350 members of the SRT Alliance, a group of companies who are delivering SRT in their products and collaborating to ensure interoperability. Maxim explains that, based on the UDT protocol, SRT handles live streaming for contribution as well as optimised file transfer. He also explains that it’s free for commercial use and can be found on GitHub. SRT has been featured a number of times on The Broadcast Knowledge. For a deeper dive into SRT, have a look at videos such as this one, or the ones under the SRT tag.

Next, Kieran Kunhya explains that RIST was a response to an industry request for a vendor-neutral protocol for reliable delivery over the internet or other dedicated links. Not only does vendor-neutrality help remove users’ and vendors’ reticence to adopt the technology, but interoperability is also a key benefit. Kieran calls out hitless switching across multiple ISPs and cellular bonding as important features of RIST. For a summary of all of RIST’s features, read this article. For videos with a deeper dive, have a look at the RIST tag here on The Broadcast Knowledge.

Demystifying Video Delivery Protocols from Streaming Video Alliance on Vimeo.

Barry Owen represents WebRTC in this webinar, though Wowza deal with many protocols in their products. WebRTC’s big advantage is sub-second delivery, which isn’t possible with either CMAF or LL-HLS. Whilst it’s heavily used for video conferencing, for which it was invented, a number of companies in the streaming space use it for delivery to the viewer because of its almost instantaneous delivery. Unlike CMAF and LL-HLS, a perfect rendition of the video isn’t guaranteed, but for auctions, gambling and interactive services, latency is always king. For contribution, Barry explains, the flexibility of being able to contribute from a browser can be enough to make this a compelling technology, although it does bring quality, profile and codec restrictions with it.

Josh Pressnell and Ali C. Begen talk about the protocols for delivery to the viewer. Josh explains how Smooth Streaming has exited the scene, leaving the ground to DASH, CMAF and HLS. They discuss how the lack of a true CENC (Common Encryption) mechanism leads to duplication of assets, and, similarly, how many streaming services have to hold duplicate assets to support their target devices.

Looking ahead, the panel is buoyed by the promise of QUIC. There is concern that QUIC, the Google-invented protocol for HTTP delivery over UDP, is simultaneously being standardised in the IETF and modified separately by Google. But the prospect of a UDP-style mode and the higher efficiency seems to instil hope across all the participants of the panel.

Watch now to hear all the details!

Ali C. Begen
Technical Consultant, Comcast
Kieran Kunhya
Founder & CEO, Open Broadcast Systems
Director, RIST Forum
Barry Owen
VP, Solutions Engineering
Wowza Media Systems
Josh Pressnell
Penthera Technologies
Maxim Sharabayko
Senior Software Developer,
Moderator: Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: RIST for high-end live media workflows

RIST overcomes the propensity of the internet to lose packets. It makes possible very-high-bandwidth, low-latency contribution over the internet into a studio or directly into the cloud as part of a streaming workflow. Broadcasters have long dreamed of using the increasingly ubiquitous internet to deliver programmes at a lower cost than fixed lines, satellite or microwave. Previously, FEC tended to save the day, but its limits meant the internet was still not so appetising. Now, with RIST, the internet is a safe medium for contribution. As ever, two paths are advised!

In this talk, Love Thyresson explains how NetInsight use RIST to deliver high bandwidth contribution for their customers. Love focusses on the lower-tier sports events which would attract an audience, but where the audience is small the budgets are also small, meaning that if you can’t use the internet to get the game back to your production centre, the costs – often just on connectivity – are too high to make the programme viable. So whether we are trying to cut costs on a big production or make new programming viable (which might even be the catalyst for a whole new business model or channel), internet contribution is the only way to go.

Love talks about the extension RIST makes to the standard RTP sequence number which, when using high-bandwidth streams, quickly runs out of numbers. Expanding it from 16 to 32 bits allows far more packets to be delivered before the counter has to restart from zero. Indeed, it’s this extra capacity which allows the RIST main profile to deliver JPEG 2000 or JPEG XS. JPEG XS, in particular, is key to modern remote-production workflows. Ingest into the cloud may end up being the most common use for RIST, despite the high-value use cases of delivering from events to broadcasters or between broadcasters’ buildings.
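A back-of-envelope calculation shows why 16 bits run out so quickly at contribution bitrates. The bitrate and payload size below are my own assumed figures for illustration, not numbers from the talk.

```python
# Why a 16-bit sequence counter "runs out" quickly at high bitrates,
# and how much headroom 32 bits buys. Figures are illustrative assumptions.

def seconds_until_wrap(bitrate_bps, payload_bytes, seq_bits):
    packets_per_sec = bitrate_bps / (payload_bytes * 8)
    return (2 ** seq_bits) / packets_per_sec

# Assume a 1 Gbps JPEG XS-style stream with ~1316-byte RTP payloads
rate = 1_000_000_000
payload = 1316

print(f"16-bit counter wraps every {seconds_until_wrap(rate, payload, 16):.2f} s")
print(f"32-bit counter wraps every {seconds_until_wrap(rate, payload, 32):.0f} s")
```

Under these assumptions the 16-bit counter wraps in well under a second, too short a window to unambiguously identify a lost packet for retransmission, whereas 32 bits lasts many hours.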

After a quick retransmission 101, Love Thyresson closes by looking at the features available now in the simple and main profile of RIST.

For more information, have a look at this article or these videos.

Watch now!

Love Thyresson
Former Head of Internet Media Transport, NetInsight

Video: Latency Still Sucks (and What You Can Do About It)

The streaming industry is on an ever-evolving quest to reduce latency, to bring it in line with, or beat, linear broadcast and to allow business models such as gaming to flourish. When streaming started, latency of a minute or more was not uncommon, and whilst there are some simple ways to improve that, getting down to the latency of digital TV, approximately 5 seconds, is not without challenges. Whilst that 5-second target works for many use cases, it’s still not enough for auctions, gambling or ‘gamification’, which need sub-second latency.

In this panel, Jason Thibeault explores how to reduce latency with Casey Charvet from Gigcasters, Rob Roskin from CenturyLink and Haivision VP Engineering, Marc Cymontkowski. This wide-ranging discussion covers CDN caching, QUIC and HTTP/3, encoder settings, segmented vs. non-segmented streaming, and ingest and distribution protocols.

Key to the discussion is differentiating the ingest protocol from the distribution protocol. Often, just getting the content into the cloud quickly is enough to bring the latency into the customer’s budget. Marc from Haivision explains how SRT achieves low-latency contribution. SRT allows unreliable networks like the internet to be used for reliable, encrypted video contribution. Created by Haivision and now an open-source technology with an IETF draft spec, the alliance of SRT users continues to grow as the technology develops and adds features.

SRT is a ‘re-request’ technology, meaning it achieves its reliability by re-requesting from the sender any data the receiver missed. This is in contrast to TCP/IP, which acknowledges every single packet of data, the sender retransmitting whenever acknowledgements don’t arrive. Doing it the SRT way makes the protocol much more efficient and able to cope with real-time media. SRT can also encrypt all traffic which, when sending over the internet, is extremely important even if you’re not sending live sports. In this video, Marc makes the point that SRT also recovers the timing of the stream so that the data comes out of the SRT pipe in the same ‘shape’ as it went in. Particularly with VBR encoding, your decoder needs to receive the same peaks and troughs as the encoder created to save it having to buffer the input as much. All told, SRT manages to deliver a transport latency of around 2.5 times the round-trip time.
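A rough model makes the 2.5× RTT figure plausible. The breakdown below is my own simplification, not Haivision's published derivation: one-way delivery costs about half an RTT, each retransmission attempt costs roughly a full RTT (the NACK travels back, the packet travels again), and some margin absorbs jitter and repeated losses.

```python
# Rough model (an assumption, not from the talk) of an ARQ receive-buffer
# budget: 0.5 RTT one-way delivery + 1 RTT per retransmission attempt
# + margin for jitter, giving ~2.5x RTT for a single retry.

def min_receive_buffer(rtt_ms, retransmit_attempts=1, margin=1.0):
    # one-way trip (0.5 RTT) + NACK round trips + safety margin
    return rtt_ms * (0.5 + retransmit_attempts) + rtt_ms * margin

rtt = 80  # ms, e.g. a long intercontinental path
print(f"buffer for 1 retry: {min_receive_buffer(rtt):.0f} ms")  # 2.5 x RTT = 200 ms
print(f"buffer for 2 retries: {min_receive_buffer(rtt, 2):.0f} ms")
```

The practical upshot is the same trade-off Marc describes: the longer the path (higher RTT), the more latency you must budget for the protocol to keep its reliability guarantee.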

Haivision are also members of the RIST Forum; RIST is a similar technology to SRT. Marc explains that RIST approaches the problem from a standards perspective, taking IETF RFCs and applying them to RTP, whereas SRT took a more pragmatic way forward: creating an implementation of the features and making it open source for interoperability.

The video finishes with a Q&A covering HTTP Header compression, recommended size of HLS chunks, peer-to-peer streaming and latency requirements for VoD.

Watch now!

Rob Roskin
Principal Solutions Architect,
Level3 Communications
Marc Cymontkowski
VP Engineering – Cloud,
Haivision
Casey Charvet
Managing Director,
Gigcasters
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: RIST: Enabling Remote Work with Reliable Live Video Over Unmanaged Networks

Last week’s article on RIST, here on The Broadcast Knowledge, stirred up some interest about whether we view RIST as a rival to SRT & Zixi, or an evolution thereof. Whilst that talk covered one company’s use of RIST and their reasons for choosing it, this talk explains what RIST achieves in terms of features, showing that it has ‘Simple’ and ‘Main’ profiles which bring different features to the table.

Rick Ackermans is the chair of the RIST Activity Group, the group which develops the specifications. Rick explains some of the reasons motivating people to look at the internet and other unmanaged networks to move their video. The traditional circuit-based contribution and distribution infrastructure on which broadcasting relied has high fixed costs. Whilst this can be justifiable for transmitter links, albeit still expensive, for other ad-hoc circuits you are paying all the time for something which is only occasionally used. Meanwhile, C-band satellite capacity is reducing, squeezing people out. And, of course, remote working is much in the spotlight, so technologies like RIST which don’t have high latency (unlike HLS) are in demand.

RIST manages to solve many of the problems with using the internet, such as protecting your content from theft and from packet loss. It’s a joint effort between many companies, including Zixi and Haivision. The aim is to create choice in the market by removing vendor bias and control. Vendors are more likely to implement an open specification than one tied to another vendor, so this should open up the market, creating more demand for this type of solution.

In the next section, we see how RIST as a group is organised and how it fits into the Video Services Forum, VSF. We then look at the profiles available in RIST. A full implementation is a three-layer onion with the ‘Simple Profile’ at its core. This has basic network resilience and interoperability. On top of that, the ‘Main Profile’ is built, which adds encryption, authentication and other features. The future sees an ‘Enhanced Profile’ which may bring with it channel management.

Rick then dives down into each of these profiles to uncover the details of what’s there and explain the publication status. The Simple profile allows full RTP interoperability for use as a standard sender, but also adds packet recovery plus seamless switching. The Main profile introduces the use of GRE tunnels, where a single connection is set up between two devices and, like a cable, multiple signals can then be sent down it together. From an IT perspective this makes life much easier: the number of streams is totally transparent to the network, so firewall configuration, for example, is all the simpler. It also means that by just running encryption on the tunnel, everything is encrypted with no further complexity. Encryption works better on higher-bitrate streams so, again, running on the aggregate has a benefit over running on each stream individually. Rick talks about the encryption modes, with DTLS and pre-shared key being available, as well as the all-important, but often neglected, step of authentication – ensuring you are sending to the endpoint you expected to be sending to.
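The tunnelling idea can be illustrated in miniature. This is a toy sketch of the concept only: the stream-id prefix and XOR "cipher" below are stand-ins I have invented for illustration, not the real GRE-over-UDP framing or the DTLS/PSK encryption RIST actually specifies.

```python
# Toy illustration of the Main profile tunnelling concept: multiplex
# several streams into one flow, then encrypt the aggregate once.
# NOT the real GRE wire format; the mux header and cipher are stand-ins.

import struct

def mux(stream_id, rtp_packet):
    # Prefix each packet with a stream id before it enters the tunnel
    return struct.pack("!H", stream_id) + rtp_packet

def encrypt(blob, key):
    # Stand-in for DTLS / pre-shared-key encryption of the tunnel payload
    return bytes(b ^ key for b in blob)

key = 0x5A
# Three streams become one tunnel payload: the network sees a single flow
tunnel_payload = b"".join(mux(i, b"RTP" + bytes([i])) for i in (1, 2, 3))
ciphertext = encrypt(tunnel_payload, key)            # one encryption pass
assert encrypt(ciphertext, key) == tunnel_payload    # toy XOR cipher round-trips
```

The point the sketch mirrors is operational: however many streams go in, the firewall only ever sees one connection, and encryption and authentication are applied once at the tunnel level rather than per stream.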

The last part of the talk covers interoperability, including a comparison between RIST and SRT. Whilst there are many similarities, Rick claims RIST can cope with higher percentages of packet loss. He also says that SMPTE 2022-7 doesn’t work with SRT, though The Broadcast Knowledge is aware of interoperable implementations which do allow 2022-7 to work even through SRT. The climax of this section is explaining the setup of the RIST NAB demo, a multi-vendor, international demo which proved the reliability claims. Rick finishes by examining some case studies and with a Q&A.

Watch now!

Rick Ackermans
MVA Broadcast Consulting
RIST Activity Group Chair