Video: Demystifying Video Delivery Protocols

Let’s face it, there are a lot of streaming protocols out there, both for contribution and distribution. RTMP ingest over the internet is being displaced by RIST and SRT, whilst low-latency contenders such as CMAF and LL-HLS vie for position as they try to oust HLS and DASH in existing services streaming to the viewer.

This panel, hosted by Jason Thibeault from the Streaming Video Alliance, talks about all these protocols and attempts to put each in context, both in the broadcast chain and in terms of its features. Two of the main contribution technologies are RIST and SRT, both UDP-based protocols that recover lost packets by re-requesting them from the sender. This gives them very high resilience to packet loss – ideal for internet deployments.
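
As a rough illustration of that re-request (ARQ) idea, here is a minimal sketch of a receiver that spots gaps in the packet sequence numbers and asks the sender to retransmit. It is illustrative only – neither SRT’s nor RIST’s actual wire protocol – and the send_nak callback is a hypothetical stand-in for the real signalling back to the sender.

```python
# Minimal sketch of NAK-based packet recovery, the re-request idea shared
# by SRT and RIST. Illustrative only; `send_nak` is a hypothetical callback.

def make_receiver(send_nak):
    expected = 0    # next sequence number due for delivery
    buffered = {}   # out-of-order packets held back until the gap is filled

    def on_packet(seq, payload):
        nonlocal expected
        if seq < expected:
            return []                       # late duplicate; already delivered
        if seq > expected:
            send_nak(range(expected, seq))  # gap detected: re-request the loss
        buffered[seq] = payload
        delivered = []
        while expected in buffered:         # release any in-order run we hold
            delivered.append(buffered.pop(expected))
            expected += 1
        return delivered

    return on_packet
```

Retransmitted packets simply arrive again through on_packet and fill the gap, at which point the held-back packets behind them are released in order.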

First, we hear about SRT from Maxim Sharabayko. He lists some of the 350 members of the SRT Alliance, a group of companies who are delivering SRT in their products and collaborating to ensure interoperability. Maxim explains that, being based on the UDT protocol, SRT can handle live streaming for contribution as well as optimised file transfer. He also explains that it’s free for commercial use and can be found on GitHub. SRT has been featured a number of times on The Broadcast Knowledge. For a deeper dive into SRT, have a look at videos such as this one, or the ones under the SRT tag.

Next, Kieran Kunhya explains that RIST was a response to an industry request for a vendor-neutral protocol for reliable delivery over the internet or other dedicated links. Not only does vendor-neutrality help remove any reluctance of users or vendors to adopt the technology, interoperability is also a key benefit. Kieran calls out hitless switching across multiple ISPs and cellular bonding as important features of RIST. For a summary of all of RIST’s features, read this article. For videos with a deeper dive, have a look at the RIST tag here on The Broadcast Knowledge.
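
To picture hitless switching, consider the sketch below: the same packets arrive over two or more paths (say, two ISPs), and the receiver keeps the first copy of each sequence number and drops duplicates, so output continues unbroken even if one path fails. This is a simplification of the idea (in the spirit of SMPTE ST 2022-7 seamless protection), not RIST’s actual implementation.

```python
# Simplified duplicate elimination behind hitless switching: each sequence
# number is delivered exactly once, so losing an entire path causes no
# disruption as long as another path still carries the packet.

class HitlessMerger:
    def __init__(self):
        self.seen = set()   # a real implementation would bound this window

    def on_packet(self, path_id, seq, payload):
        if seq in self.seen:
            return None     # duplicate from another path; drop it
        self.seen.add(seq)
        return payload      # first arrival wins, whichever path it took
```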


Barry Owen represents WebRTC in this webinar, though Wowza deal with many protocols in their products. WebRTC’s big advantage is sub-second delivery, which is not possible with either CMAF or LL-HLS. Whilst it’s heavily used for video conferencing, for which it was invented, a number of companies in the streaming space use it for delivery to the user because of its almost instantaneous delivery. Unlike with CMAF and LL-HLS, a perfect rendition of the video isn’t guaranteed, but for auctions, gambling and interactive services, latency is always king. For contribution, Barry explains, the flexibility of being able to contribute from a browser can be enough to make this a compelling technology, although it does bring with it quality/profile/codec restrictions.
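
For a feel of what WebRTC contribution looks like outside the browser, here is a minimal publish sketch using the third-party aiortc library for Python – our choice of stack, not one the panel prescribes. Signalling is deliberately stubbed out, since WebRTC leaves it to the application.

```python
import asyncio
from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

# Minimal WebRTC publish sketch using aiortc (an assumption on our part).
# The SDP offer/answer exchange is application-defined signalling.

async def publish(source="test.mp4"):
    pc = RTCPeerConnection()
    player = MediaPlayer(source)     # any file or device ffmpeg can read
    pc.addTrack(player.video)        # contribute the video track
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # Send pc.localDescription.sdp to the remote end via your signalling
    # channel, then apply its answer with pc.setRemoteDescription(...).
    return pc

asyncio.run(publish())
```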

Josh Pressnell and Ali C. Begen talk about the protocols used for delivery to the user. Josh explains how Smooth Streaming has exited, leaving the ground to DASH, CMAF and HLS. They discuss the lack of a true CENC – Common Encryption – mechanism leading to duplication of assets. Similarly, the discussion moves to the fact that many streaming services have to keep duplicate assets due to target device support.

Looking ahead, the panel is buoyed by the promise of QUIC. There is some concern that QUIC, the Google-invented protocol for HTTP delivery over UDP, is going through standardisation in the IETF whilst simultaneously being modified by Google separately. But the prospect of a UDP-style mode and the higher efficiency seems to instil hope across all the participants of the panel.

Watch now to hear all the details!
Speakers

Ali C. Begen
Technical Consultant, Comcast
Kieran Kunhya
Founder & CEO, Open Broadcast Systems
Director, RIST Forum
Barry Owen
VP, Solutions Engineering, Wowza Media Systems
Josh Pressnell
CTO, Penthera Technologies
Maxim Sharabayko
Senior Software Developer, Haivision
Moderator: Jason Thibeault
Executive Director, Streaming Video Alliance

Video: Bandwidth Prediction for Multi-Bitrate Streaming at Low Latency

Low-latency protocols like CMAF are wreaking havoc with traditional ABR algorithms, forcing us to come up with new ways of assessing whether we’re running out of bandwidth. Traditionally, this is done by looking at how long a video chunk takes to download and comparing that with its playback duration. If you’re downloading at only the same speed it’s playing, it’s time to consider switching to a lower-bandwidth stream.
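
As a concrete, if simplified, illustration of that traditional heuristic – our own sketch, with an assumed safety margin rather than any standard value:

```python
# Traditional throughput-based ABR check, simplified. The 0.8 safety
# margin is an assumption for illustration only.

def measured_throughput_bps(chunk_bytes, download_seconds):
    return 8 * chunk_bytes / download_seconds   # bits per second

def should_switch_down(download_seconds, playback_seconds, safety=0.8):
    # If a chunk takes (nearly) as long to download as it does to play,
    # the buffer is draining: move to a lower-bitrate rendition.
    return download_seconds > safety * playback_seconds
```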

As latencies have come down, servers will now start sending data from the beginning of a chunk while it’s still being written, which means it can’t be downloaded any quicker than it’s produced. To learn more about this, look at our article on ISO BMFF and this streaming primer. Since the file can’t be downloaded any quicker, we can’t ascertain whether we should move up in bitrate to a better-quality stream, so whilst we can switch down if we start running out of bandwidth, we can’t find a time to go up.

Ali C. Begen and team have been working on a way around this. The problem is that, with the newer protocols, you pre-request files which start getting sent when they are ready, so you don’t actually know the time at which a chunk starts downloading. Whilst you know when it’s finished, you don’t have access, via JavaScript, to when the file started being sent to you, robbing you of a way of determining the download time.

Ali’s algorithm uses the time the last chunk finished downloading in place of the missing timestamp, figuring that the new chunk is going to start loading pretty soon after the old one finished. Looking at the data, the gap between one chunk finishing and the next one starting does vary, which led Ali’s team to a sliding-window moving average taking the last three download durations into consideration. This is assumed to be enough to smooth out some of those variances, and it provides the data that allows them to predict future bandwidth and decide whether or not to change bitrate. There have been a number of alternative suggestions over the last year or so, all of which perform worse than this technique, called ACTE.
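
A minimal sketch of that idea is below – our own simplification, not the published ACTE implementation: approximate each chunk’s start time with the previous chunk’s finish time, then smooth the resulting throughput samples over a three-sample sliding window.

```python
from collections import deque

# Simplified ACTE-style bandwidth estimation. JavaScript only tells us when
# a chunk *finished* downloading, so the previous chunk's finish time stands
# in for the missing start time; a moving average smooths the variable gap.

class SlidingWindowEstimator:
    def __init__(self, window=3):
        self.samples = deque(maxlen=window)   # recent throughputs, bits/s
        self.last_finish = None               # finish time of previous chunk

    def on_chunk(self, finish_time, chunk_bits):
        if self.last_finish is not None and finish_time > self.last_finish:
            approx_duration = finish_time - self.last_finish
            self.samples.append(chunk_bits / approx_duration)
        self.last_finish = finish_time

    def predicted_bandwidth(self):
        # Average of the last few samples drives the up/down switch decision.
        return sum(self.samples) / len(self.samples) if self.samples else None
```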

In the last section of this talk, Ali explores the entry he was part of in a Twitch-sponsored competition to keep playback latency close to a second in test conditions with varying bitrate. Playback speed is key to much work in low-latency streaming as it’s the best way to trim off a little latency when things are going well, and it allows you to buy time if you’re waiting for data; the big challenge is doing it without the viewer noticing. The entry used a heuristics-based and a machine-learning approach, which worked so well that they were runners-up in the contest.
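
The sketch below shows the general shape of such a playback-speed controller – with assumed rates and thresholds of our own, not the contest entry’s actual logic:

```python
# Illustrative playback-speed controller for latency trimming. The rates
# and thresholds are assumptions, chosen to be small enough that viewers
# shouldn't notice the change in speed.

def playback_rate(live_latency_s, buffer_s, target_latency_s=1.0):
    if buffer_s < 0.5:
        return 0.95   # starving: slow down slightly to buy time for data
    if live_latency_s > target_latency_s:
        return 1.05   # behind live: speed up slightly to trim latency
    return 1.0        # on target: play at normal speed
```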

Watch now!
Speaker

Ali C. Begen
Technical Consultant, Comcast
Professor, Computer Science, Özyeğin University

Video: Latency Still Sucks (and What You Can Do About It)

The streaming industry is on an ever-evolving quest to reduce latency, to bring it in line with, or beat, linear broadcasts and to allow business models such as gaming to flourish. When streaming started, latency of a minute or more was not uncommon, and whilst there are some simple ways to improve on that, getting down to the latency of digital TV, approximately 5 seconds, is not without challenges. Whilst the target of 5 seconds works for many use cases, it’s still not enough for auctions, gambling or ‘gamification’, which need sub-second latency.

In this panel, Jason Thibeault explores how to reduce latency with Casey Charvet from Gigcasters, Rob Roskin from CenturyLink and Haivision VP of Engineering Marc Cymontkowski. This wide-ranging discussion covers CDN caching, QUIC and HTTP/3, encoder settings, segmented vs. non-segmented streaming, and ingest and distribution protocols.

Key to the discussion is differentiating the ingest protocol from the distribution protocol. Often, just getting the content into the cloud quickly is enough to bring the latency into the customer’s budget. Marc from Haivision explains how SRT achieves low-latency contribution: it allows unreliable networks like the internet to be used for reliable, encrypted video contribution. Created by Haivision and now an open-source technology with an IETF draft spec, the alliance of SRT users continues to grow as the technology continues to develop and add features. SRT is a ‘re-request’ technology, meaning it achieves its reliability by re-requesting from the sender any data it missed. This is in contrast to TCP/IP, which acknowledges every single packet of data, with the sender retransmitting when acknowledgements aren’t received. Doing it the SRT way makes the protocol much more efficient and able to cope with real-time media. SRT can also encrypt all traffic which, when sending over the internet, is extremely important even if you’re not sending live sports.

In this video, Marc makes the point that SRT also recovers the timing of the stream so that the data comes out of the SRT pipe in the same ‘shape’ as it went in. Particularly with VBR encoding, your decoder needs to receive the same peaks and troughs as the encoder created, to save it having to buffer the input as much. All this included, SRT manages to deliver a transport latency of around 2.5 times the round-trip time.
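
The sketch below illustrates that timing-recovery idea in simplified form – our own illustration, not SRT’s internals: each packet carries the sender’s timestamp, and the receiver holds each one until that timestamp plus a fixed latency offset, reproducing the original ‘shape’ of the stream.

```python
import time

# Simplified receiver-side timing recovery (illustration, not SRT's actual
# code). Packets are released at sender_timestamp + fixed latency offset,
# so VBR peaks and troughs leave the pipe as they entered it.

LATENCY = 0.120   # seconds of buffer to absorb jitter and retransmissions

def replay(packets, clock_offset):
    """packets: list of (sender_timestamp, payload); clock_offset maps the
    sender's clock onto this machine's monotonic clock."""
    for sender_ts, payload in sorted(packets):
        release_at = sender_ts + clock_offset + LATENCY
        wait = release_at - time.monotonic()
        if wait > 0:
            time.sleep(wait)   # hold the packet until its release time
        yield payload
```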

Haivision are also members of the RIST Forum, RIST being a similar technology to SRT. Marc explains that RIST approaches the problem from a standards perspective, taking IETF RFCs and applying them to RTP, whereas SRT took a more pragmatic way forward: creating a binary which implemented the features and making it open source for interoperability.

The video finishes with a Q&A covering HTTP header compression, the recommended size of HLS chunks, peer-to-peer streaming and latency requirements for VoD.

Watch now!
Speakers

Rob Roskin
Principal Solutions Architect, Level3 Communications
Marc Cymontkowski
VP Engineering – Cloud, Haivision
Casey Charvet
Managing Director, Gigcasters
Jason Thibeault
Executive Director, Streaming Video Alliance

Video: Outlook on the future codec landscape

VVC, MPEG’s successor to HEVC, has now been released. But what is it? And whilst it brings 50% bitrate savings over HEVC, how does it compare to other codecs like AV1 and the other new MPEG standards? This primer answers these questions and more.

Christian Feldmann from Bitmovin starts by looking at four of the current codecs: AVC, HEVC, VP9 and AV1. VP9 isn’t often heard about in traditional broadcast circles, but it’s relatively well used online as it’s supported on Android phones and brings bitrate savings over AVC. Google use VP9 on YouTube for compatible players and see a higher retention rate; Netflix and Twitch also use it. AV1 is also in use by the tech giants, though its use outside of those who built it (Netflix, Facebook etc.) is not yet apparent. Christian looks at the compatibility of the codecs, hardware decoding, efficiency and cost.

Looking now at the other upcoming MPEG codecs, Christian examines MPEG-5 Essential Video Coding (EVC), which has two profiles: Baseline and Main. The Baseline profile only uses technologies which are old enough to be outside of patent claims. This allows you to use the codec without the concern that you may be asked for a fee by a patent holder who comes out of the woodwork. The Main profile, however, does use patented technology and performs better. Businesses which wish to use this codec can pay the licences, but if an unexpected patent holder appears, each individual tool in the codec can be disabled, allowing you to continue using the codec, albeit without that technology. Whilst it is a shame that patents are so difficult to account for, this shows MPEG has taken seriously the situation with HEVC, which famously has hundreds of licensable patents, with over a third of eligible companies not part of a patent pool. EVC performs 32% better than AVC using the Baseline profile and 25% better than HEVC with the Main profile.

Next under the magnifying glass is Low Complexity Enhancement Video Coding (LCEVC). We’ve already heard about this on The Broadcast Knowledge from Guido, CEO of V-Nova, who gave a deeper look at Demuxed 2019 and more recently at Streaming Media West. Whilst those are detailed talks, this is a great overview of the technology, which is actually a hybrid approach to encoding. It allows you to take any existing codec, such as AVC or AV1, and put LCEVC on top of it. Using both together allows you to run your base encoder at a lower resolution (say, HD instead of UHD) and then deliver to the decoder this low-resolution encode plus a small stream of enhancement information which the decoder uses to bring the video back up to size and add back in the missing detail. The big win here, as the name indicates, is that this method is very flexible and can take advantage of all sorts of available computing power, in embedded technology and in servers alike. In set-top boxes, parts of the SoC which aren’t otherwise used can be put to work; in phones, both the onboard HEVC decoding chip and the CPU can be used. It’s also useful for automated workflows, as the base codec stream is smaller and hence easier to decode, and the enhancement information concentrates on the edges of objects so it can be used on its own by AI/machine-learning algorithms to more readily analyse video footage. Encoding time drops by over a third for AVC and EVC.
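
In rough code, the hybrid structure looks something like the sketch below – a conceptual illustration of the layering only, with hypothetical stand-in functions rather than the V-Nova SDK:

```python
import numpy as np

# Conceptual sketch of an LCEVC-style hybrid encode. Every callable passed
# in is a hypothetical stand-in; the point is the structure: a downscaled
# base encode plus a residual-based enhancement layer.

def hybrid_encode(frame, base_codec, scale_down, scale_up):
    base_input = scale_down(frame)              # e.g. UHD -> HD
    base_bits = base_codec.encode(base_input)   # any base codec: AVC, AV1...
    base_recon = scale_up(base_codec.decode(base_bits))
    # The residual is mostly object edges and fine detail the base lost.
    residual = frame.astype(np.int16) - base_recon.astype(np.int16)
    return base_bits, residual   # residual feeds the small enhancement stream
```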

Now, Christian looks at the codec du jour, Versatile Video Coding (VVC), explaining that its enhancements over HEVC come not just from bitrate improvements but also from techniques which better encode screen content (e.g. computer games), allow for better 360-degree video and reduce delay. Subjective results show up to 50% improvement. For more detail on VVC, check out this talk from Microsoft’s Gary Sullivan.

The talk finishes with answers to audience questions: which codec will be the winner, what future device and hardware support will look like, and which is best for real-time streaming.

Watch now!
Speakers

Christian Feldmann
Team Lead, Encoding, Bitmovin