Video: Getting Your Virtual Hands On RIST

RIST is one of a number of protocols that provide backwards error correction, recovering lost packets by retransmission. These are commonly used to transport media streams into content providers but are increasingly finding use in other parts of the broadcast workflow, including making production feeds, such as multiviewers and autocues, available to staff at internet-connected locations such as the home.

The RIST protocol (Reliable Internet Stream Transport) is being created by a working group in the VSF (Video Services Forum) to provide an open and interoperable specification, available for the whole industry to adopt. This article provides a brief summary, whereas this talk from FOSDEM 2020 goes into some detail.

We’re led through the topic by Sergio Ammirata, CTO of DVEO, a member of the RIST Forum collaborating on the protocol. What’s remarkable about RIST is that several companies which have created their own error-correcting streaming protocols, such as DVEO’s Dozer which Sergio created, have joined together to share their experience and best practices.

Press play to watch:

Sergio starts by explaining why RIST is based on UDP – a topic explored further in this article about RIST, SRT and QUIC – and moves on to explain how it works through NACK (Negative Acknowledgement) messages.
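
To make the NACK idea concrete, here is a minimal, illustrative sketch of a receiver spotting gaps in sequence numbers and reporting which packets to ask for again. The class, names and packet handling are hypothetical and only show the concept; they are not librist's API or the wire format RIST defines.

```python
# Illustrative sketch only: a toy receiver that spots gaps in 16-bit
# sequence numbers and reports which packets to NACK.

class NackTracker:
    def __init__(self):
        self.expected = None      # next sequence number we expect
        self.missing = set()      # sequence numbers already NACKed

    def on_packet(self, seq):
        """Record a received packet; return any newly detected gaps."""
        if self.expected is None:
            self.expected = (seq + 1) & 0xFFFF
            return []
        if seq in self.missing or self._is_before(seq, self.expected):
            # A retransmission or late arrival fills an earlier gap
            self.missing.discard(seq)
            return []
        gaps = []
        while self.expected != seq:
            gaps.append(self.expected)          # everything skipped was lost
            self.missing.add(self.expected)
            self.expected = (self.expected + 1) & 0xFFFF
        self.expected = (seq + 1) & 0xFFFF
        return gaps

    @staticmethod
    def _is_before(a, b):
        # True if sequence number a precedes b, allowing 16-bit wrap-around
        return a != b and ((b - a) & 0xFFFF) < 0x8000


tracker = NackTracker()
for seq in [100, 101, 104, 102, 103]:           # 102 and 103 arrive late
    lost = tracker.on_packet(seq)
    if lost:
        print("send NACK for", lost)            # send NACK for [102, 103]
```

In a real deployment the sender keeps a buffer of recently transmitted packets so it can resend anything NACKed within the configured latency window.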

We hear next about the principles of RIST, the main one being interoperability. There are two profiles: Simple and Main. Sergio outlines the Simple profile, which provides RTP transport, error correction and channel bonding. The Main profile, which has been published as VSF TR-06-2, adds encryption, NULL packet removal, FEC and GRE tunnelling. RIST uses a tunnel to multiplex many feeds into one stream. Using Cisco’s Generic Routing Encapsulation (GRE), RIST can bring together multiple RIST streams and other arbitrary data streams into one tunnel. The idea of a tunnel is to hide complexity from the network infrastructure.

Tunnelling allows for bidirectional data flow under one connection. This means you can create your tunnel in one direction and send data in the opposite direction. This gets around many firewall problems since you can create the tunnel in whichever direction is easiest to establish without having to worry about the direction of data flow. Setting up GRE tunnels is outside the scope of RIST.
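
As a rough illustration of the framing idea, the sketch below prepends a basic GRE header (per RFC 2784) to an inner packet so that several flows can share one tunnel. It is only a toy: RIST Main profile carries GRE over UDP with its own conventions, and librist or your gateway handles the real encapsulation.

```python
# A minimal sketch of basic GRE encapsulation (RFC 2784): a 4-byte header
# (flags/version + payload protocol type) prepended to the inner packet.

import struct

GRE_PROTO_IPV4 = 0x0800   # EtherType of the encapsulated payload

def gre_encapsulate(inner_packet: bytes, proto: int = GRE_PROTO_IPV4) -> bytes:
    flags_and_version = 0x0000            # no checksum/key/sequence bits set
    header = struct.pack("!HH", flags_and_version, proto)
    return header + inner_packet

def gre_decapsulate(frame: bytes):
    _, proto = struct.unpack("!HH", frame[:4])
    return proto, frame[4:]

# Two inner packets (placeholder bytes here) share the same tunnel:
tunnel_frames = [gre_encapsulate(p) for p in (b"rtp-flow-1", b"rtp-flow-2")]
for frame in tunnel_frames:
    proto, payload = gre_decapsulate(frame)
    print(hex(proto), payload)
```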

Sergio finishes by introducing librist, demo applications and answering questions from the audience.

Watch now!
Speaker

Sergio Ammirata
Chief Technical Officer of DVEO
Managing Partner of SipRadius LLC.

Video: Demystifying Video Delivery Protocols

Let’s face it, there are a lot of streaming protocols out there, both for contribution and distribution. Internet ingest in RTMP is being displaced by RIST and SRT, whilst low-latency contenders such as CMAF and LL-HLS are vying for position as they try to oust HLS and DASH in existing services streaming to the viewer.

This panel, hosted by Jason Thibeault from the Streaming Video Alliance, talks about all these protocols and attempts to put each in context, both in the broadcast chain and in terms of its features. Two of the main contribution technologies are RIST and SRT, both UDP-based protocols that recover lost packets by re-requesting them from the sender. This results in a very high resilience to packet loss – ideal for internet deployments.

First, we hear about SRT from Maxim Sharabayko. He lists some of the 350 members of the SRT Alliance, a group of companies who are delivering SRT in their products and collaborating to ensure interoperability. Maxim explains that, based on the UDT protocol, it’s able to do live streaming for contribution as well as optimised file transfer. He also explains that it’s free for commercial use and can be found on GitHub. SRT has been featured a number of times on The Broadcast Knowledge. For a deeper dive into SRT, have a look at videos such as this one, or the ones under the SRT tag.

Next, Kieran Kunhya explains that RIST was a response to an industry request for a vendor-neutral protocol for reliable delivery over the internet or other dedicated links. Not only does vendor-neutrality help remove reluctance among users and vendors to adopt the technology, interoperability is also a key benefit. Kieran calls out hitless switching across multiple ISPs and cellular bonding as important features of RIST. For a summary of all of RIST’s features, read this article. For videos with a deeper dive, have a look at the RIST tag here on The Broadcast Knowledge.
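
The idea behind hitless switching is that the same packets travel over two or more links and the receiver keeps whichever copy of each packet arrives first. The toy sketch below, with made-up link names and packet tuples, illustrates that deduplication; it is not RIST's actual implementation.

```python
# Toy illustration: merge duplicate copies of a stream arriving over two
# links, delivering each sequence number exactly once.

def merge_links(packets):
    """packets: iterable of (link_id, seq, payload) in arrival order."""
    seen = set()
    for link, seq, payload in packets:
        if seq in seen:
            continue              # duplicate already delivered from the other link
        seen.add(seq)
        yield seq, payload        # first copy to arrive wins

arrivals = [
    ("isp-a", 1, b"p1"), ("cellular", 1, b"p1"),
    ("cellular", 2, b"p2"),                      # ISP A dropped packet 2
    ("isp-a", 3, b"p3"), ("cellular", 3, b"p3"),
]
for seq, payload in merge_links(arrivals):
    print(seq, payload)           # 1, 2, 3 delivered without interruption
```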

Demystifying Video Delivery Protocols from Streaming Video Alliance on Vimeo.

Barry Owen represents WebRTC in this webinar, though Wowza deal with many protocols in their products. WebRTC’s big advantage is sub-second delivery, which is not possible with either CMAF or LL-HLS. Whilst it’s heavily used for video conferencing, for which it was invented, a number of companies in the streaming space are using it for delivery to the viewer because of its almost instantaneous delivery speed. Unlike with CMAF and LL-HLS, a perfect rendition of the video isn’t guaranteed, but for auctions, gambling and interactive services, latency is always king. For contribution, Barry explains, the flexibility of being able to contribute from a browser can be enough to make this a compelling technology, although it does bring with it quality/profile/codec restrictions.

Josh Pressnell and Ali C. Begen talk about the protocols for delivery to the viewer. Josh explains how Smooth Streaming has exited the scene, leaving the ground to DASH, CMAF and HLS. They discuss the lack of a true Common Encryption (CENC) mechanism, which leads to duplication of assets. Similarly, the discussion moves to the fact that many streaming services have to hold duplicate assets to cover the devices they target.

Looking ahead, the panel is buoyed by the promise of QUIC. There is concern that QUIC, the Google-invented protocol for HTTP delivery over UDP, is going through standardisation in the IETF whilst also being modified separately by Google at the same time. But the prospect of a UDP-style mode and the higher efficiency seems to instil hope across all the participants of the panel.

Watch now to hear all the details!
Speakers

Ali C. Begen
Technical Consultant, Comcast
Kieran Kunhya
Founder & CEO, Open Broadcast Systems
Director, RIST Forum
Barry Owen
VP, Solutions Engineering
Wowza Media Systems
Josh Pressnell
CTO,
Penthera Technologies
Maxim Sharabayko
Senior Software Developer,
Haivision
Moderator: Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: A State-of-the-Industry Webinar: Apple’s LL-HLS is finally here

Even after restrictions are lifted, it’s estimated that overall streaming subscriptions will remain 10% higher than before the pandemic. We’ve known for a long time that streaming is here to stay and viewers want their live streams to arrive quickly and on a par with broadcast TV. There have been a number of attempts at this: the streaming community extended HLS to create LHLS, which brought down latency quite a lot without making major changes to the de facto standard.

MPEG’s DASH has also created a standard for low-latency streaming, allowing CMAF to be used to get the latency down even further than LHLS. Then Apple, the inventors of the original HLS, announced Low-Latency HLS (LL-HLS). We’ve looked at all of these previously here on The Broadcast Knowledge. This Online Streaming Primer is a great place to start. If you already know the basics, then there’s no one better than Will Law to explain the details.

The big change that’s happened since Will Law’s talk above is that Apple has revised its original plan. This talk from Pieter-Jan Speelmans, CTO and Founder of THEOplayer, explains how Apple has modified its approach to low latency. Starting with a reminder of the latency problem with HLS, Pieter-Jan explains how Apple originally wanted to implement LL-HLS with HTTP/2 push and the problems that caused. This has changed now, and this talk gives us the first glimpse of how well the new approach works.

Pieter-Jan talks about how LL-DASH streams can be repurposed for LL-HLS, explains the protocol overheads and discusses the optimal settings for segment and part length. He explains how the segment length plays into not only overall latency but also start-up latency and the ability to navigate the ABR ladder without buffering.
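
As a rough, illustrative calculation (the numbers below are assumptions, not Pieter-Jan's recommendations), this sketch shows how the part duration drives the latency at the live edge while the segment duration still determines how much data is involved in start-up and quality switches.

```python
# Back-of-the-envelope LL-HLS latency figures; all numbers are assumed
# for illustration only.

segment_duration = 6.0     # seconds per full segment (assumed)
part_duration = 0.5        # seconds per part (assumed)
hold_back_parts = 3        # players typically hold back a few part durations

# The live edge sits roughly a few part durations behind real time,
# plus encode, packaging and CDN overheads.
live_edge_latency = hold_back_parts * part_duration
print(f"approximate live-edge latency: {live_edge_latency:.1f} s plus overheads")

# A switch that starts from a segment boundary still has a full segment's
# worth of data to fetch, so segment length matters too.
bitrate_bps = 5_000_000    # assumed 5 Mbit/s rendition
per_segment_bytes = bitrate_bps * segment_duration / 8
print(f"data per full segment at 5 Mbit/s: {per_segment_bytes / 1e6:.1f} MB")
```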

There was a lot of frustration within the community initially at the way Apple introduced LL-HLS, both because of how it was approached and because of the problems implementing it. Now that the technical issues have been, at least partly, addressed, this is the first of hopefully many talks looking at the reality of the latest version. With an expected ‘GA’ date of September, it’s not long before nearly all Apple devices will be able to receive LL-HLS, and using the protocol will need to be part of the playbook of many streaming services.

Watch now to get the full detail

Speaker

Pieter-Jan Speelmans
CTO & Founder
THEOplayer

Video: Optimising Video for Everyone at Once

CDNs are all about scale. Their raison d’être is to help you scale, but that’s no trivial task, which is why companies like Akamai exist: so you only have to concentrate on your core product, which for this talk is online streaming. Akamai’s main game is to move the content you provide to them to the ‘edge’ of the network, as close to the user as possible.

The pandemic certainly put the CDNs, as well as the telcos, through their paces. In this talk, Peter Chave from Akamai talks about the challenges in the scale they’re achieving on a day-to-day basis. It was lucky that 2020 was due to be a ‘big’ year in terms of sporting events, the Olympics being but one example, meaning that large capacity had already been planned for; even so, the whole industry has been iterating to get things right as the load has shifted and increased.

In March, Akamai saw a year’s worth of growth. The shift in traffic was not just in magnitude but also a rebalancing of upload versus download. With video conferencing and VPNs used all the more, the asymmetrical design of consumer internet services was put to the test.

Peter explains that companies like Netflix volunteered to reduce the burden by reducing bitrates. This was done in two main ways. One was simply to remove the top level from manifests. The other was to update the players to be much more conservative as they worked their way up through the bitrates. It’s also made some companies consider a switch to HEVC or another more efficient codec which, whilst not an immediate fix, can reduce overall bitrates across a service.
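
As an illustration of the ‘remove the top level’ approach, here is a rough sketch that drops the highest-bandwidth rendition from an HLS master playlist. The sample playlist and the simple regex parsing are assumptions for the example; a production system would use a proper manifest manipulation layer.

```python
# Sketch: drop the highest-bandwidth #EXT-X-STREAM-INF entry (and its URI
# line) from an HLS master playlist.

import re

def drop_top_rendition(master_playlist: str) -> str:
    lines = master_playlist.strip().splitlines()
    best_bw, best_idx = -1, None
    for i, line in enumerate(lines):
        match = re.search(r"#EXT-X-STREAM-INF:.*BANDWIDTH=(\d+)", line)
        if match and int(match.group(1)) > best_bw:
            best_bw, best_idx = int(match.group(1)), i
    if best_idx is None:
        return master_playlist
    # Remove the STREAM-INF tag and the variant URI on the following line
    del lines[best_idx:best_idx + 2]
    return "\n".join(lines) + "\n"

master = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=8000000,RESOLUTION=1920x1080
1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
720p.m3u8
"""
print(drop_top_rendition(master))   # the 1080p entry is gone
```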

Adjusting the manifest at the CDN is much more flexible since, rather than editing a central file, the CDN can adjust manifests on the fly at the edge, in certain geographies and at certain times of day. Lastly, Peter explains how Akamai have been throttling the speed at which video chunks are served. When a player has far more bandwidth available than it needs for a video, there is no reason for it to download chunks at 100Mbps, so throttling the download speed helps reduce peaks.
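
The throttling idea can be pictured as pacing each chunk rather than serving it at line rate. The sketch below is a simple, assumed implementation of such a pacing loop, not Akamai’s actual algorithm; the 20 Mbit/s target is arbitrary.

```python
# Sketch: serve a chunk in slices, sleeping between slices so the overall
# rate stays near a pacing target rather than the full line rate.

import time

def send_paced(chunk: bytes, send, target_bps: float = 20e6, slice_size: int = 64 * 1024):
    """Send `chunk` via `send(bytes)` at roughly `target_bps` bits per second."""
    slice_time = slice_size * 8 / target_bps      # seconds each slice should take
    for offset in range(0, len(chunk), slice_size):
        started = time.monotonic()
        send(chunk[offset:offset + slice_size])
        elapsed = time.monotonic() - started
        if elapsed < slice_time:
            time.sleep(slice_time - elapsed)      # hold back to the pacing target

# Example: 'serve' a 2 MB chunk to a no-op sink at ~20 Mbit/s
send_paced(b"\x00" * (2 * 1024 * 1024), send=lambda data: None)
```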

Watch now!
Speaker

Peter Chave
Principal Architect,
Akamai Technologies