Low latency can be a differentiator for a live streaming service, or just a way to ensure you’re not beaten to the punch by social media or broadcast TV. Either way, it’s seen as increasingly important for live streaming to be punctual, breaking from the past where latencies of thirty to sixty seconds were not uncommon. As the industry has matured and connectivity has enough capacity for video, simply getting motion on the screen isn’t enough anymore.
Steve Heffernan from MUX takes us through the thinking about how we can deliver low-latency video both into the cloud and out to the viewers. He starts by talking about the use cases for sub-second latency – anything with interaction/conversations – and how that differs from low-latency streaming, which is one-to-many, potentially very large-scale, distribution. If you’re on a video call with ten people, then you need sub-second latency else the conversation will suffer. But when distributing to thousands or millions of people, the increased risk of rebuffering that comes with operating sub-second isn’t worth it, and usually 3 seconds is perfectly fine.
Steve talks through the low-latency delivery chain starting with the camera and encoder, then looking at the contribution protocol. RTMP is still often the only option, but increasingly it’s possible to use WebRTC or SRT, the latter usually being the best for streaming contribution. Once the video has hit the streaming infrastructure, be that in the cloud or otherwise, it’s time to look at how to build the manifest and send the video out. Steve talks us through the options of Low-Latency HLS (LHLS), CMAF DASH and Apple’s LL-HLS. Do note that since the talk, Apple removed the requirement for HTTP/2 push.
The talk finishes off with Steve looking at the players. If you don’t get the player’s logic right, you can start off much further behind than necessary. This is becoming less of a problem now as players are starting to ‘bend time’ by speeding up and slowing down to bring their latency within a certain target range. But this only underlines the importance of the quality of your player implementation.
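To make the ‘time bending’ idea concrete, here’s a minimal sketch of how a player might nudge its playback rate towards a latency target. The target, tolerance and rate values are illustrative assumptions, not figures from the talk:

```typescript
// Hypothetical latency targets; real players tune these per use case.
const TARGET_LATENCY_S = 3.0; // desired distance behind the live edge
const TOLERANCE_S = 0.5;      // drift allowed before we adjust

function adjustPlaybackRate(video: HTMLVideoElement, liveEdgeS: number): void {
  const latency = liveEdgeS - video.currentTime; // how far behind live we are
  if (latency > TARGET_LATENCY_S + TOLERANCE_S) {
    video.playbackRate = 1.05; // speed up ~5% to catch up
  } else if (latency < TARGET_LATENCY_S - TOLERANCE_S) {
    video.playbackRate = 0.95; // slow down ~5% to rebuild some buffer
  } else {
    video.playbackRate = 1.0;  // inside the target window, play normally
  }
}
```

Run on a timer or on each progress event, a loop like this keeps the viewer near the live edge without hard seeks that would interrupt playback.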
As we saw yesterday, there’s an increasingly buoyant market for video codecs, and whilst this is a breath of fresh air after AVC’s multi-decade dominance, we will likely never again see a market which isn’t fragmented: several dominant players, say AV1, AVC, VVC and VP9, splitting perhaps 85% of the market relatively equally between them, with ‘the rest’ bringing up the rear. So multi-codec distribution to home viewers is going to have to deal with delivering different codecs to different people.
fuboTV do this today and Nick Krzemienski is here to tell us how. Starting with an overview of fuboTV, which streams both live and VOD content, Nick shows us the workflow they use and then explains how their combined AVC & HEVC workflow is set up. Taking the ideal case, where a single fMP4 is encoded into both AVC and HEVC, he proposes you would simply package both into HLS and DASH manifests and let the players work out the rest. Depending on your players, you may have to split out your manifests into single-codec files.
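As an illustration of ‘letting the players work out the rest’, a browser player can probe which codecs the platform decodes before choosing a rendition. This is a hedged sketch: the codec strings are standard examples and the manifest URLs are hypothetical:

```typescript
// Probe platform decode support before selecting a manifest.
function pickManifest(): string {
  const hevc = 'video/mp4; codecs="hvc1.1.6.L93.B0"'; // example HEVC codec string
  const avc = 'video/mp4; codecs="avc1.640028"';      // example AVC High codec string
  if (MediaSource.isTypeSupported(hevc)) {
    return 'https://cdn.example.com/live/master-hevc.m3u8'; // hypothetical URL
  }
  if (MediaSource.isTypeSupported(avc)) {
    return 'https://cdn.example.com/live/master-avc.m3u8';  // hypothetical URL
  }
  throw new Error('No supported video codec found');
}
```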
DRM’s very important for a sports broadcaster, so Nick looks at how this might be achieved. CMAF allows you to deliver m3u8 and mpd files using CENC (Common ENCryption). This promises a single DRM process ahead of packaging, but the reality, we hear from Nick, is that you’ll need two sets of media for HLS and DASH if you’re going to use CENC.
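In a browser, the choice between those two media sets usually follows which DRM key system the platform grants. Here’s a minimal sketch using the standard Encrypted Media Extensions API; the key-system identifiers are the well-known ones, while the configuration is pared down for illustration:

```typescript
// Ask the platform which DRM key system it will grant access to.
async function probeKeySystem(): Promise<string> {
  const config: MediaKeySystemConfiguration[] = [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
  }];
  const keySystems = [
    'com.widevine.alpha',      // Chrome, Firefox, Android
    'com.microsoft.playready', // Edge
    'com.apple.fps.1_0',       // Safari (FairPlay)
  ];
  for (const ks of keySystems) {
    try {
      await navigator.requestMediaKeySystemAccess(ks, config);
      return ks; // the first key system the platform grants
    } catch {
      // Not supported here; try the next one.
    }
  }
  throw new Error('No DRM key system available');
}
```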
When you’re delivering multiple manifests and, hence, multiple sources, how do you manage this? Nick outlines, and shows the code for, how he achieves this at the edge. Using Lambda, he’s able to look at the incoming requests and the existing files at the CDN to deliver the right asset, with the logic done close to the viewer. Nick closes with his thoughts on the future for streaming and by answering questions from the audience.
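Here is a sketch of what such edge logic can look like on AWS Lambda@Edge, in the spirit of what Nick describes rather than his actual code; the device rule and the manifest paths are invented for illustration:

```typescript
import type { CloudFrontRequestEvent, CloudFrontRequest } from 'aws-lambda';

// Runs at the CDN edge: inspect the request and rewrite it to the right asset.
export const handler = async (
  event: CloudFrontRequestEvent
): Promise<CloudFrontRequest> => {
  const request = event.Records[0].cf.request;
  const userAgent = request.headers['user-agent']?.[0]?.value ?? '';

  if (request.uri.endsWith('/master.m3u8')) {
    // Hypothetical rule: HEVC-capable devices get the HEVC manifest,
    // everyone else falls back to AVC.
    request.uri = /AppleTV|Tizen|Roku/.test(userAgent)
      ? '/manifests/master-hevc.m3u8'
      : '/manifests/master-avc.m3u8';
  }
  return request; // pass the (possibly rewritten) request on to the origin
};
```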
Jameson Steiner from Bitmovin starts by explaining the motivation to cut latency. One big motivation, aside from the standard live sports examples, is user-generated content like on Twitch, where it’s very clear to the streamer, and quite off-putting, when there is a large amount of delay. Whilst delay can be adapted to, the more there is, the less interaction is possible. In this situation, it’s ‘handwaving’ latency that comes into play: you want the hand on the screen to wave at pretty much the same time as your hand waves in front of the camera. Jameson places different types of distribution on a chart of latency, and we see that low latency of 5 seconds or less will not only match traditional TV broadcasts but also work well for live streamers.
Naturally, to fix a problem you need to understand it, so Jameson breaks down the legacy methods of delivery to show why the latency exists. The issue comes down to how video is split into sections of, say, 6 seconds, so that the player downloads one section at a time, then reassembles and plays them. Looking from the player’s perspective, if the network suddenly broke or its throughput dropped, it makes sense to have several chunks in reserve. Having three 6-second chunks, a sensible precaution, makes you 18 seconds behind the curve from the off.
Clearly reducing the segment size is a winner in this scenario. Three 3-second segments will give you just 9 seconds of latency, so why not go to 1 second? Encoding inefficiency is one reason: if you reduce the run of video a temporal codec has to work with, its efficiency will drop and the bitrate will need to increase to maintain quality. Jameson explains the other knock-on effects such as CDN inefficiencies and the sheer number of network requests. The standardised way to avoid these problems is to use CMAF (Common Media Application Format), which is based on MPEG DASH and ISO BMFF. CMAF, and DASH in general, has the benefit of coming from a standards body whose aim was to remove the vendor lock-in that may be felt with HLS and was certainly felt with RTMP. Check out MPEG’s short white paper on the topic (zipped .docx file).
CMAF uses chunked transfer, meaning that as the encoder writes the data to disk, the web server sends it to the client. This is different to the default, where a file is only sent after it’s been completely written. The effect is that the player no longer has to wait up to 6 seconds for a 6-second chunk to even start being sent, with the download time still to be added on top. Rather, almost as soon as the chunk has been finished by the encoder, it’s arrived at the destination. This is a feature of HTTP/1.1 and later, so it’s not new, but it still needs to be enabled and considered as part of the delivery.
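On the receiving side, this is visible with a plain fetch: the response body is a stream that hands over bytes as the server flushes them, not after the whole segment is written. A minimal sketch, with an illustrative URL:

```typescript
// Consume a chunked-transfer segment as it arrives, not when it's complete.
async function readSegmentAsItArrives(url: string): Promise<void> {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk could be appended to an MSE SourceBuffer immediately,
    // long before the full 6-second segment has finished encoding.
    console.log(`received ${value.byteLength} bytes`);
  }
}

readSegmentAsItArrives('https://cdn.example.com/live/segment42.m4s'); // hypothetical URL
```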
CMAF goes beyond simple HTTP/1.1 chunked transfer, a technique also used in low-latency HLS (covered later), by creating extra structure within the 6-second segment (until now called a chunk in this article). This extra structure allows the segment to be downloaded in smaller chunks, decoupling the segment length from the player latency. Chunked transfer does cause a notable problem, however, which has not yet been conclusively solved. Jameson explains how traditionally each large segment typically arrives faster than realtime. By measuring how fast it arrives, given that the player knows its duration, the player can estimate the bandwidth available on the network at that time. With chunked transfer, as we saw, we are receiving data as it’s being created. By definition, we are now getting it in realtime, so there is no opportunity to receive it any quicker. This bandwidth estimate, as shown in the presentation, is used to work out whether the player needs to drop down, or could step up, to a stream at a different bitrate – part of standard ABR. So the catch here is that driving down the latency has hampered our ability to switch bitrates, and whilst the viewer can see the video close to real-time, who’s to say they are seeing it at the best quality?
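The arithmetic behind the problem is simple enough to show directly; the numbers below are made up for illustration:

```typescript
// Classic segment-timing estimate: bits delivered divided by delivery time.
function estimateBandwidthBps(segmentBytes: number, downloadSeconds: number): number {
  return (segmentBytes * 8) / downloadSeconds;
}

// Whole-segment download: a 3 MB, 6-second segment arriving in 1.5 s
// suggests ~16 Mbps of available bandwidth – plenty of headroom to step up.
console.log(estimateBandwidthBps(3_000_000, 1.5)); // 16,000,000

// Chunked transfer: the same segment trickles in over its full 6 s duration,
// so the estimate collapses to ~4 Mbps – the stream's own bitrate – and says
// nothing about whether the network could sustain a higher rendition.
console.log(estimateBandwidthBps(3_000_000, 6.0)); // 4,000,000
```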
Apple is on its second major revision of LL-HLS, which has responded to many of the initial complaints from the community. Whilst it could use HTTP/2 to help push segments out, this caused problems in practice, so it now uses preload hints, as Jameson explains, in order to remove round-trip times from requests. Jameson looks at the rest of Apple’s techniques and shows how they look in manifest files.
The final section looks at problems in implementing these features, such as chunks being fragmented across TCP packets, the bandwidth estimation question, and dealing with playback speed in order to adjust the player’s position in time – speed-ups and slow-downs of 5 to 10% can be possible depending on the content.
Of course, without live ingest of content into the cloud, there is no live streaming, so why would we leave such an important piece of the puzzle to an unsupported protocol like RTMP, which has no official support for newer codecs? Whilst there are plenty of legacy workflows that still successfully use RTMP, there are clear benefits to be had from a modern ingest format.
Rufael Mekuria, Head of Research & Standardisation at Unified Streaming, introduces us to DASH-IF’s CMAF-based live ingest protocol, which promises to solve many of these issues. It’s based on the ISO BMFF container format which underpins MPEG DASH. Whilst CMAF isn’t intrinsically low-latency, it’s able to go to much lower latencies than standard HLS and LHLS.
This work to create a standard live ingest protocol was born, Rufael explains, out of an analysis of which parts of the content delivery chain were most ripe for standardisation. Live ingest was felt to be an obvious choice, partly because the decaying RTMP protocol was being sloppily replaced by individual companies doing their own thing, but also because everyone contributing in the same way is of general benefit to the industry. The protocol level is not typically an area where individual vendors differentiate to the detriment of interoperability, and we had already seen the success of RTMP being used interoperably between vendors’ equipment.
MPEG DASH and HLS can be delivered in a pull method as well as pushed, but the latter is not specified. There are other aspects of how people have ‘rolled their own’ which would benefit from standardisation too, such as timed metadata like ad triggers. Rufael, explaining that the proposed ingest protocol is essentially CMAF plus HTTP POST with no manifest defined, shows us how push and pull streaming would each work. As this is a standardisation project, Rufael takes us through the timeline of development and publication of the standard, which is now available.
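At its simplest, the push model amounts to a long-running HTTP POST of CMAF fragments per track. This sketch assumes a hypothetical ingest endpoint and an encoder handing us fragments; it illustrates the shape of the exchange, not the full detail of the DASH-IF specification:

```typescript
import { request } from 'node:https';

// Push CMAF fragments to an ingest origin over one long-running POST.
async function startIngest(fragments: AsyncIterable<Buffer>): Promise<void> {
  const req = request('https://ingest.example.com/live/channel1/video.cmfv', {
    method: 'POST',
    headers: { 'content-type': 'video/mp4' }, // body goes out chunked as we write
  });
  for await (const fragment of fragments) {
    req.write(fragment); // each fragment is sent as soon as it's encoded
  }
  req.end(); // the live stream has finished
}
```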
As we live in the modern world, ingest security has been considered: the protocol comes with TLS and authentication, with more details covered in the talk. Ad insertion such as SCTE 35 is defined using binary mode, and Rufael shows slides to demonstrate. Similarly, in terms of ABR, we look at how switching sets work. Switching sets are sets of tracks that contain different representations of the same content that a player can seamlessly switch between.
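As a rough mental model of that definition, a switching set can be thought of as a group of time-aligned tracks; the field names below are illustrative, not taken from the specification:

```typescript
// A minimal, illustrative model of a switching set.
interface Representation {
  trackName: string;    // e.g. 'video-1080p' (hypothetical naming)
  bandwidthBps: number; // encoded bitrate of this representation
  codec: string;        // e.g. 'avc1.640028'
}

interface SwitchingSet {
  id: string;
  // Time-aligned renditions of the same content; a player can switch
  // between them at fragment boundaries without a visible discontinuity.
  representations: Representation[];
}
```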