Video: AV1 – A Reality Check

Released in 2018, AV1 was a little over two years in the making at the Alliance for Open Media, founded by industry giants including Google, Amazon, Mozilla and Netflix. Since then, work has continued to optimise the toolset to bring both encoding and decoding complexity down to real-world levels.

This talk brings together AOM members Mozilla, Netflix, Vimeo and Bitmovin to discuss where AV1 has got to and to answer questions from the audience. After some introductions, the conversation turns to 8K. The Olympics are the broadcast industry’s main driver for 8K at the moment, though it’s clear that Japan and other territories aim to follow through with further deployments and uses.

“AV1 is the 8K codec of choice” 

Paul MacDougall, Bitmovin
CES 2020 saw a number of announcements like this from Samsung regarding AV1-enabled 8K TVs. During the talk, Matt Frost from Google Chrome Media explains how YouTube has found that viewer retention is higher with VP9-delivered videos, which he attributes to VP9’s improved compression over AVC leading to quicker start times, less buffering and, often, a higher resolution being delivered to the viewer. AV1 is seen as providing these same benefits over AVC without the patent problems that come with HEVC.

It’s not all about resolution, however, points out Paul MacDougall from Bitmovin. For animated content, extra resolution is worth having because it sharpens the lines which give the picture its intelligibility. For other content with many similar textures, grass for instance, spending bitrate on quality may be more useful than adding resolution. Vittorio Giovara from Vimeo agrees, pointing out that viewer experience is a combination of many factors. Though it’s trivial to say that a high-resolution screen showing nothing but unintended black makes for a bad experience, it’s a good reminder of what really matters. Less obviously, Vittorio highlights the three pillars of spatial, temporal and spectral quality. Spatial is, indeed, the resolution, temporal refers to the frame rate, and spectral refers to bit depth and colour depth, known as HDR and Wide Colour Gamut (WCG).

Nathan Egge from Mozilla acknowledges that the unoptimised encoder in their 2018 code release at NAB, claimed by some to be 3,000 times slower than HEVC, was ‘embarrassing’, but that this is the price of developing in the open. The panel discusses the fact that developing compression means trying out approaches until you find a combination that works well; while you are doing that, it would be a false economy to be constantly optimising. Moreover, Netflix’s Anush Moorthy points out, optimising the algorithms calls for a different set of skills and, therefore, a different set of people.

Questions fielded by the panel cover whether there are any attempts to run AV1 encoding or decoding on GPUs, power consumption, whether TVs will have hardware or software AV1 decoding, current in-production uses of AV1, and AVC vs VVC (compression benefit versus royalty payments).

Watch now!
Speakers

Vittorio Giovara
Manager, Engineering – Video Technology
Vimeo
Nathan Egge
Video Codec Engineer,
Mozilla
Paul MacDougall
Principal Sales Engineer,
Bitmovin
Anush Moorthy
Manager, Video and Image Encoding
Netflix
Tim Siglin
Founding Executive Director
Help Me Stream, USA

Video: Making Live Streaming More ‘Live’ with LL-CMAF

Squeezing streaming latency down to just a few seconds is possible with CMAF. Bitmovin guides us through what’s possible now and what’s yet to come.

CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability, and built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. But the push to reduce latency further and further has resulted in CMAF, which can be used to deliver streams with five to ten times lower latency.

Paul MacDougall is a Solutions Architect with Bitmovin, so he is well placed to explain the application of CMAF. Starting with a look at what we mean by low latency, he shows that it’s still quite possible to find HLS latencies of up to a minute, though more common latencies now are closer to 30 seconds. Five seconds is the golden target, matching many broadcast mechanisms including digital terrestrial, so it’s no surprise that this is where low-latency CMAF is aimed.

CMAF itself is simply a format which unites HLS and DASH under one standard. It doesn’t, in and of itself, mean your stream will be low latency. In fact, CMAF was born out of MPEG’s MP4 standard, officially called ISO BMFF. But CMAF can be used in a low-latency mode, which is what this talk focusses on.
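Because CMAF reuses the ISO BMFF container, a segment is just a sequence of MP4-style boxes. As a rough, hypothetical sketch (my illustration, not code from the talk), the function below walks the top-level boxes of a CMAF fragment so the moof/mdat pairs that make up its chunks become visible; 64-bit ‘largesize’ boxes are skipped for brevity.

```typescript
// Sketch: list the top-level ISO BMFF boxes in a CMAF fragment.
// Each box starts with a 4-byte big-endian size and a 4-character type.
function listTopLevelBoxes(data: Uint8Array): string[] {
  const boxes: string[] = [];
  const view = new DataView(data.buffer, data.byteOffset, data.byteLength);
  let offset = 0;
  while (offset + 8 <= data.byteLength) {
    const size = view.getUint32(offset); // box size in bytes, big-endian
    const type = String.fromCharCode(
      data[offset + 4], data[offset + 5], data[offset + 6], data[offset + 7]
    );                                   // e.g. "styp", "moof", "mdat"
    boxes.push(type);
    if (size < 8) break;                 // size 0/1 (largesize) not handled in this sketch
    offset += size;
  }
  return boxes;                          // a chunked segment shows repeating moof, mdat pairs
}
```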

Paul looks at what makes up the latency of a typical feed, discussing encoding times, playback buffering and the other key contributors. With this groundwork laid, it’s time to look at the way CMAF is chunked and formatted, showing that smaller chunk sizes allow the encoder and player to be more flexible, reducing several types of latency down to only a few seconds.
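To make the player side of that concrete, here is a minimal sketch (my illustration under stated assumptions, not code from the talk) of how a browser player might append CMAF chunks to a Media Source Extensions SourceBuffer as they arrive over HTTP, rather than waiting for the whole segment; the segment URL and helper name are placeholders.

```typescript
// Sketch: append CMAF chunks to an MSE SourceBuffer as they arrive, so
// decoding can begin before the full segment has finished downloading.
async function appendSegmentLowLatency(
  sourceBuffer: SourceBuffer,
  segmentUrl: string
): Promise<void> {
  const response = await fetch(segmentUrl);
  if (!response.body) {
    throw new Error("Streaming response body not supported");
  }

  const reader = response.body.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    // Each read typically yields one or more CMAF chunks (moof+mdat pairs);
    // appending them immediately is what shrinks the playback latency.
    await appendChunk(sourceBuffer, value);
  }
}

// SourceBuffer.appendBuffer() is asynchronous and signals completion with
// 'updateend', so wrap it in a Promise to keep the loop simple.
function appendChunk(sb: SourceBuffer, data: Uint8Array): Promise<void> {
  return new Promise((resolve, reject) => {
    sb.addEventListener("updateend", () => resolve(), { once: true });
    sb.addEventListener("error", () => reject(new Error("append failed")), { once: true });
    sb.appendBuffer(data);
  });
}
```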

In order to take full advantage of CMAF, the player needs to understand it, and Paul explains these adaptations before moving on to the limitations and challenges of using CMAF today. One important change, for instance, concerns adaptive bitrate (ABR) switching. Segment-based players (i.e. HLS) have always timed the download of each chunk to get a feel for whether bandwidth was plentiful (the download was quicker than the time taken to play the chunk) or constrained (the chunk arrived slower than real time). Based on this, the player could choose to increase or decrease the bandwidth of the stream it was accessing which, in HLS, means requesting chunks from a different playlist. With small chunks delivered as they are produced using real-time transfer techniques such as HTTP/1.1 chunked transfer, the transfer time of a segment largely reflects the pacing of the encoder rather than the capacity of the network. This makes it very hard to make ABR work for LL-CMAF, though there are approaches being tested and trialled which aren’t covered in the talk.
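As a simplified, hypothetical illustration of that measurement problem (the function and numbers below are my assumptions, not figures from the talk), compare the classic per-segment throughput estimate on a fully buffered download with the same calculation under chunked, real-time delivery.

```typescript
// Classic per-segment throughput estimate used by HLS/DASH players:
//   throughput ≈ bits downloaded / time taken to download them.
function estimateThroughputBps(segmentBytes: number, downloadSeconds: number): number {
  return (segmentBytes * 8) / downloadSeconds;
}

// Fully buffered segment on a fast link: the download is quick, so the
// estimate reflects the real network capacity.
const buffered = estimateThroughputBps(1_500_000, 0.6); // ≈ 20 Mbps

// LL-CMAF over HTTP/1.1 chunked transfer: chunks arrive as they are encoded,
// so the transfer lasts almost as long as the segment's media duration (≈ 4 s)
// and the estimate collapses towards the stream's own bitrate, revealing
// nothing about how much faster the network could actually go.
const chunked = estimateThroughputBps(1_500_000, 3.9);   // ≈ 3 Mbps
```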

Watch now!

Speakers

Paul MacDougall
Solutions Architect,
Bitmovin

Webinar: Managing Transition to HEVC/VP9/AV1 with Multi-Codec Streaming

In this talk from Streaming Media East 2018, Paul MacDougall from Bitmovin discusses moving from H.264 to newer codecs.

Video streaming is in transition towards the next generation of video codecs, which promise similar quality at substantially lower bitrates. With the succession to the ubiquitous AVC/H.264 still up for grabs, major content providers and device manufacturers are throwing their weight behind competing formats – HEVC, VP9 and AV1 – leading to market fragmentation, particularly within web environments. To deal with this challenge, OTT services need to support multiple codecs in an efficient way.

In this presentation, Paul talks about how to evaluate the benefits and trade-offs of embracing next-generation compression technologies in your media workflow. He looks at the state of the browser market and compatibility, current deployment percentages, and then how to decide whether or not to encode an asset in multiple codecs. Paul finishes with advice on playback and the state of smart TVs.
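As a hedged illustration of the kind of capability check a multi-codec playback strategy relies on in the browser (not code from the webinar), the sketch below asks Media Source Extensions which codec the device can actually decode and falls back to H.264; the codec strings are representative examples, ordered by compression efficiency.

```typescript
// Sketch: choose the first codec the current browser can play via MSE.
// The MIME/codec strings are illustrative examples, not values from the webinar.
const codecPreference: { name: string; mime: string }[] = [
  { name: "AV1",  mime: 'video/mp4; codecs="av01.0.05M.08"' },
  { name: "HEVC", mime: 'video/mp4; codecs="hvc1.1.6.L93.B0"' },
  { name: "VP9",  mime: 'video/webm; codecs="vp09.00.10.08"' },
  { name: "AVC",  mime: 'video/mp4; codecs="avc1.640028"' }, // H.264 fallback
];

function selectCodec(): string {
  for (const codec of codecPreference) {
    if (window.MediaSource && MediaSource.isTypeSupported(codec.mime)) {
      return codec.name;
    }
  }
  return "AVC"; // assume an H.264 rendition always exists as the safety net
}
```

The player would then request the manifest or renditions for the selected codec, which is why the webinar's question of whether to encode every asset in every codec matters for storage and encoding cost.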

Watch now!