Video: The Past, Present and Future of AV1

AV1 has strong backing from tech giants but is still seldom seen in the wild. Find out what the plans are for the future with Google’s Debargha Mukherjee.

Debargha’s intent in this talk is simple: to describe what AV1 can do and is doing today, framed by the history of the codec and a look forward to a potential AV2.

The talk starts by demonstrating the need for better video codecs, not least the statistic that, by 2021, 81% of the internet’s traffic is expected to be video. On top of that, there is frustration with the slow, decade-long refresh process which is traditional for video codecs. To match an internet landscape of fast-evolving services, it seemed appropriate to have a codec which not only delivered better encoding but also followed a quicker, five-year refresh cycle.

As background to the royalty-free AV1, Debargha then looks at VP9 and how it is deployed, as well as VP10, whose development was stopped and diverted into the AV1 effort. AV1 itself is the topic of the next part of the talk: the Alliance for Open Media, the standardisation process and then a look at some of the encoding tools available to achieve the stated aims.

To round off the description of what’s presently happening with AV1, comparative trials of VP9, HEVC and AV1 are shown, demonstrating AV1’s ability to improve compression at a given quality. Bitmovin’s and Facebook’s tests are also highlighted, along with speed tests.

Looking to the future, the talk finishes by explaining the roadmap for hardware decoding and other expected milestones in the coming years, plus software work such as SVT-AV1 and dav1d for optimised encoding and decoding. With the promised five-year cycle, attention now turns to AV2, and Debargha discusses what it might be and what it would need to achieve.

Watch now!
Speaker

Debargha Mukherjee
Principal Software Engineer,
Google

Video: Per-Title Encoding, @Scale Conference

Per-title encoding with machine learning is the topic of this video from MUX.

Nick Chadwick explains that rather than using the same set of parameters to encode every video, the smart money is on finding the best balance of bitrate and resolution for each video. By analysing a large number of bitrate and resolution combinations, Nick shows you can build what he calls a ‘convex hull’ when graphing them against quality, which allows you to find the optimal settings.
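To make the idea concrete, here is a minimal sketch — not Mux’s actual code — of taking the upper convex hull of a set of trial encodes graphed as bitrate against a quality score such as VMAF. The resolutions, bitrates and scores are invented purely for illustration.

```python
# Minimal sketch of the per-title 'convex hull' idea: keep only the trial
# encodes that sit on the upper convex hull of the bitrate-vs-quality graph,
# i.e. those that no other combination beats on both axes.
from typing import List, Tuple

# (resolution, bitrate in kbps, quality score such as VMAF) - illustrative values only
Trial = Tuple[str, int, float]

def upper_convex_hull(trials: List[Trial]) -> List[Trial]:
    pts = sorted(trials, key=lambda t: (t[1], t[2]))  # sort by bitrate, then quality
    hull: List[Trial] = []
    for p in pts:
        # drop the middle point whenever it lies on or below the chord to the
        # new point - it is not part of the upper hull
        while len(hull) >= 2:
            (_, x1, y1), (_, x2, y2) = hull[-2], hull[-1]
            _, x3, y3 = p
            if (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

trials = [
    ("640x360", 400, 72.0), ("640x360", 800, 80.0),
    ("1280x720", 800, 78.0), ("1280x720", 1600, 90.0),
    ("1920x1080", 1600, 87.0), ("1920x1080", 3000, 95.0),
]
for resolution, bitrate, vmaf in upper_convex_hull(trials):
    print(f"{resolution} @ {bitrate} kbps -> VMAF {vmaf}")
```

Run on the sample data, the surviving points switch resolution as bitrate increases, which is exactly the shape a per-title ladder is after.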

Doing this en masse is difficult, and Nick spends some time looking at the different ways of implementing it. In the end, Nick and data scientist Ben Dodson built a system which optimises bitrate for each title using neural nets trained on data sets. This resulted in 84% of videos looking better with this method than with a static ladder.

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux

Video: Delivering for Large-Scale Events

It’s no surprise that streaming traffic increases from event to event, but this look at the 2018 World Cup shows a very sharp rise that beat many expectations. Joachim Hengge tells us what the World Cup looked like from Akamai’s perspective.

Joachim takes us through the stats for streaming the World Cup, which peaked at 23Tbps of throughput with nearly 10 million concurrent viewers. The bandwidth was significantly higher than for the last World Cup and, looking at the data, we can learn a few more things about the market.
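As a quick back-of-the-envelope check — my own arithmetic rather than a figure from the talk, and assuming the two peaks coincided — those numbers imply an average delivered bitrate of only a couple of megabits per second per viewer:

```python
# Back-of-the-envelope check, not a figure quoted in the talk.
peak_throughput_bps = 23e12   # 23 Tbps peak delivery
concurrent_viewers = 10e6     # "nearly 10 million" concurrent viewers
mbps_per_viewer = peak_throughput_bps / concurrent_viewers / 1e6
print(f"~{mbps_per_viewer:.1f} Mbps per viewer")  # roughly 2.3 Mbps
```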

After a match-by-match breakdown, we look at the system architecture of one customer who delivered the World Cup, highlighting the importance of stable content ingest, latency and broadcast quality. Encoding and packaging into HLS with 4-second chunks were done on site, with the rest happening within Akamai and being fed to other CDNs. Joachim pulls this together into three key recommendations for anyone looking at streaming large events before delving into some Sweden-specific streaming stats, where over 81% of feeds were played back at the highest quality.
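As an illustration of the packaging side, here is a minimal sketch — not the broadcaster’s actual configuration — that emits a live-style HLS media playlist with 4-second segments; the segment names, media-sequence number and window length are hypothetical.

```python
# Sketch of a live HLS media playlist using 4-second chunks, as described in
# the talk. Names and counts are hypothetical.
SEGMENT_DURATION = 4.0  # seconds per chunk
SEGMENT_COUNT = 3       # a live playlist only advertises a short sliding window

lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    f"#EXT-X-TARGETDURATION:{int(SEGMENT_DURATION)}",
    "#EXT-X-MEDIA-SEQUENCE:100",
]
for i in range(SEGMENT_COUNT):
    lines.append(f"#EXTINF:{SEGMENT_DURATION:.3f},")
    lines.append(f"segment_{100 + i}.ts")

print("\n".join(lines))
```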

Watch now!
Free registration required

This talk is from Streaming Tech Sweden, an annual conference run by Eyevinn Technology. Videos from the event are available to paid attendees but are released free of charge after several months. As with all videos on The Broadcast Knowledge, this is available free of charge after registering on the site.

Speaker

Joachim Hengge
Senior Product Manager, Media Services,
Akamai

Video: HDR Formats and Trends

As HDR continues its slow march into use, its different forms in both broadcast and streaming can be hard to keep track of, let alone differentiate. This talk from the Seattle Video Tech meetup aims to tease out the details. Whilst HDR has long been held up as a perfect example of ‘better pixels’, and many have said they would prefer to deploy HD video plus HDR rather than introduce UHD and HDR at the same time, few have followed through.

Brian Alvarez from Amazon Prime Video starts with a very brief look at how HDR has been created to sit on top of the existing distribution formats: HLS, DASH, HEVC, VP9, AV1, ATSC 3.0 and DVB. In each case, it does so in a form based on either HLG or PQ.

Brian takes some time to discuss the differences between the two approaches to HDR. First off, he looks at HLG, an ARIB standard which is freely available, though still with licensing. This standard is, technically, backwards compatible with SDR but, most importantly, doesn’t require metadata, which is a big benefit in the live environment and simplifies broadcast. PQ is next, and we hear how its approach differs from HLG’s, with the suggestion that it gives better visual performance. Within the PQ ecosystem, Brian works through the many standards, explaining how they differ; we see that the main differences are in colour space and bit-depth.
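To make the contrast concrete, the sketch below — my own summary rather than material from the talk — implements the PQ EOTF defined in SMPTE ST 2084, which maps a normalised code value to an absolute luminance up to 10,000 nits. HLG, by contrast, defines a relative, scene-referred curve, which is part of why it can do without per-content metadata.

```python
# Minimal sketch of the PQ (SMPTE ST 2084) EOTF: normalised signal -> nits.
def pq_eotf(signal: float) -> float:
    """Convert a normalised PQ code value (0..1) to luminance in cd/m²."""
    m1 = 2610 / 16384          # ~0.1593
    m2 = 2523 / 4096 * 128     # 78.84375
    c1 = 3424 / 4096           # 0.8359375
    c2 = 2413 / 4096 * 32      # 18.8515625
    c3 = 2392 / 4096 * 32      # 18.6875

    e = signal ** (1 / m2)
    luminance = (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)
    return 10000.0 * luminance  # PQ is defined against a 10,000-nit ceiling

# e.g. pq_eotf(0.508) is roughly 100 nits, the traditional SDR reference white
```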

The next part of the talk looks at the now-famous Venn diagrams (by Yoeri Geutskens) showing which companies and products support each variant of HDR. These let us visualise the adoption of HDR10 vs HLG, for instance, see how much broadcast TV is in PQ and HLG, see that the film industry is producing exclusively in PQ, and much more. Brian comments on and gives context to each of the scenarios as he goes.

Finally, a Q&A session covers displays, end-to-end metadata flow, whether customers can tell the difference, the drive for HDR adoption and monitors for grading HDR.

Watch now! / Download the Slides

Speaker

Brian Alvarez
Principal Product Manager,
Amazon Prime Video