Video: Tidying Up (Bits on the Internet)

Netflix’s Anne Aaron explains how VMAF came about and how AV1 is going to benefit both the business and the viewers.

VMAF is a method for computers to calculate the quality of a video in a way that matches a human’s opinion. It stands for Video Multi-Method Assessment Fusion: as Anne explains, it’s a combination (fusion) of more than one metric, each capturing a different aspect of perceived quality. She presents data showing VMAF’s stronger correlation with real-life subjective tests.
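
The fusion idea can be sketched in a few lines. The elementary scores, weights and clipping below are invented for illustration only; real VMAF feeds features such as VIF and motion measures into a trained SVM regressor, not a hand-weighted sum.

```python
# Toy sketch of "metric fusion": several elementary quality scores are
# combined by a trained regressor into one number that tracks human opinion.
# The features, weights and bias here are invented; real VMAF uses an SVM
# regressor trained on subjective test data.

def fuse(features, weights, bias):
    """Linear fusion of elementary metric scores into one quality score."""
    score = bias + sum(w * f for w, f in zip(weights, features))
    return max(0.0, min(100.0, score))  # VMAF is reported on a 0-100 scale

# Hypothetical per-frame elementary scores (e.g. fidelity, detail, motion)
frame_features = [0.92, 0.85, 0.10]
weights = [55.0, 40.0, 5.0]   # imagined as learned from subjective tests
bias = 2.0

print(fuse(frame_features, weights, bias))
```

The point of the fusion step is that no single elementary metric correlates well with viewers on all content, but a model trained over several of them can.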

Anne’s job is to maximise enjoyment of content through efficient use of bandwidth. She explains that there are many places where wireless data is limited, so getting the maximum amount of video through that bandwidth cap is essential to Netflix’s business health.

This ties in with why Netflix is part of the Alliance for Open Media, which is in the process of specifying AV1, the new video codec that promises bitrate improvements over and above HEVC. Anne expands on this and presents the aim of using AV1 to deliver 32 hours of video to subscribers with 4GB of data.
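
Reading that figure as a 4 GB data allowance, the implied average bitrate is easy to work out. The numbers below are just this back-of-the-envelope calculation, not Netflix’s published encoding targets.

```python
# Back-of-the-envelope: what average bitrate fits 32 hours of video
# into a 4 GB data allowance? (Illustrative arithmetic only.)

budget_bits = 4 * 1000**3 * 8      # 4 GB expressed in bits
duration_s = 32 * 3600             # 32 hours in seconds

avg_bitrate_kbps = budget_bits / duration_s / 1000
print(round(avg_bitrate_kbps))     # roughly 278 kbps on average
```

That is an extremely tight budget for watchable video, which is why a generational codec improvement like AV1 matters to this goal.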

Watch now!

Speaker

Anne Aaron

Video: A Survey Of Per-Title Encoding Technologies

Optimising encodes per title is very common nowadays, though per-scene encoding is slowly pushing it aside. But with so many companies offering per-title encoding, how do we determine which way to turn?

Jan Ozer experimented with them, so we didn’t have to. Jan starts by explaining the principles of per-title encoding and giving an overview of the market. He then explains some of the ways in which it works, including the importance of changing resolution as much as changing the data rate.

As well as discussing the results, with Bitmovin coming out as the winner, Jan explains ‘Capped CRF’ – how it works, how it differs from CBR and VBR, and why it’s useful.
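
Capped CRF pairs a constant-quality (CRF) target with a bitrate ceiling, so easy scenes produce small files while hard scenes can’t blow the budget. As a sketch, the x264 options below are real ffmpeg flags, but the specific CRF value, cap and filenames are hypothetical examples, not recommendations from the talk.

```python
# Sketch of a capped-CRF x264 command line: constant quality (-crf) with a
# bitrate ceiling (-maxrate/-bufsize). Values and filenames are examples.

def capped_crf_args(src, out, crf=23, cap_kbps=4000):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-crf", str(crf),               # quality target: lower = better
        "-maxrate", f"{cap_kbps}k",     # never exceed this bitrate...
        "-bufsize", f"{2 * cap_kbps}k", # ...as enforced over this VBV buffer
        out,
    ]

print(" ".join(capped_crf_args("source.mp4", "capped.mp4")))
```

Unlike CBR, which spends the full bitrate even on easy content, and plain VBR, which needs a target rate per title, capped CRF lets quality drive the rate and only intervenes at the ceiling.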

Finally, we are left with some questions to ask when searching for a per-title technology to solve the problem at hand, such as “Can it adjust rung resolutions?” and “Can you apply traditional data rate controls?”, amongst others.

Watch now!

Speaker

Jan Ozer
Principal,
Streaming Learning Center

Video: Per-title Encoding at Scale

MUX is a very pro-active company pushing streaming technology forward. At NAB 2019 they announced Audience Adaptive Encoding, which offers encodes tailored not only to your content but also to the typical bitrate of your viewing demographic. Underpinning this technology are machine learning and their per-title encoding technology, released last year.

This talk with Nick Chadwick looks at what per-title encoding is, how you can work out which resolutions and bitrates to encode at and how to deliver this as a useful product.

Nick takes some time to explain MUX’s ‘convex hulls’, which give a shape to the content’s performance at different bitrates and help visualise the optimum encoding parameters for the content. Using this technique, we see some surprising circumstances in which it makes sense to start at high resolutions, even for low bitrates.
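
The idea can be sketched with a small upper-convex-hull computation over (bitrate, quality) points: at each bitrate, the hull picks the resolution that delivers the best quality. The resolutions and VMAF-like scores below are invented sample data, not MUX’s measurements.

```python
# Sketch of a per-title "convex hull": each candidate encode contributes a
# (bitrate, quality) point; the upper hull selects, at every bitrate, the
# resolution with the best quality. Sample numbers are invented.

def upper_hull(points):
    """Upper convex hull of (bitrate, quality, label) points."""
    # Keep only the best quality seen at each bitrate, then sort by bitrate.
    best = {}
    for rate, quality, label in points:
        if rate not in best or quality > best[rate][0]:
            best[rate] = (quality, label)
    pts = sorted((r, q, lab) for r, (q, lab) in best.items())

    hull = []
    for p in pts:
        # Pop the middle point when it lies on or below the chord to p.
        while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

encodes = [  # (bitrate kbps, quality score, resolution) - invented data
    (400, 70, "540p"), (800, 80, "540p"), (1600, 86, "540p"),
    (400, 55, "1080p"), (800, 78, "1080p"), (1600, 90, "1080p"),
    (3000, 95, "1080p"),
]

for rate, quality, res in upper_hull(encodes):
    print(rate, quality, res)  # 540p wins at low rates, 1080p at high rates
```

Encodes that never appear on the hull are dominated: some other resolution gives better quality at the same or lower bitrate, so they can be dropped from the ladder.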

Looking then at how to actually work this out on a title-by-title basis, Nick explains the pros and cons of the different approaches, going on to explain how MUX used machine learning to build the model that makes this work.

Finishing off with an extensive Q&A, this talk is a great overview of how to pick great encoding parameters, manually or otherwise.

Watch now!

Speaker

Nick Chadwick
Software Engineer,
Mux Inc.

Video: VMAF – the Journey Continues

VMAF is a video quality metric created by Netflix which allows computers to estimate the quality of a video. This is an important part of evaluating how good your encoder or streaming service is, so it’s no surprise that Netflix has invested years of research into it. Other metrics such as PSNR and MS-SSIM have their problems – and let’s accept that no metric is perfect – but what the industry has long grappled with is that a video with strong fidelity to the source doesn’t necessarily look better than one that replicates the source less faithfully.

Imagine you had a video of an overcast day and one encoder rendered the video a bit brighter and a bit more blue. Well, for that clip, people watching might prefer that encoder even though the video is quite different from the source. The same is true of noisy pictures where replicating the noise isn’t always the best idea as some people, for some content, would prefer the cleaner look even though some details may have been lost.

As such, metrics have evolved from PSNR which is much more about fidelity to metrics which try harder to model what ‘looks good’ and VMAF is an example of that.
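
For reference, PSNR is the classic fidelity metric: a log-scaled mean-squared error between source and encode, with no model of what viewers actually prefer. A minimal sketch, using toy 8-bit pixel values rather than real frames:

```python
import math

# PSNR: pure pixel fidelity, 10*log10(MAX^2 / MSE). An encode that viewers
# might prefer (e.g. slightly brighter) can still score badly, because PSNR
# only measures deviation from the source. Toy 8-bit samples below.

def psnr(source, encode, peak=255):
    mse = sum((s - e) ** 2 for s, e in zip(source, encode)) / len(source)
    if mse == 0:
        return float("inf")  # identical pictures
    return 10 * math.log10(peak**2 / mse)

src = [52, 55, 61, 66, 70, 61, 64, 73]   # source pixel row
enc = [54, 55, 60, 67, 69, 62, 63, 72]   # faithful encode: high PSNR
bright = [p + 20 for p in src]           # uniformly brighter rendering

print(round(psnr(src, enc), 1))
print(round(psnr(src, bright), 1))       # far lower, despite possibly looking fine
```

This is exactly the gap perceptual metrics like VMAF try to close: the brighter picture is heavily penalised by PSNR even though, as in the overcast-day example above, viewers might prefer it.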

Zhi Li explains the history of VMAF and some of the new features released in August 2018, when this talk was given, giving an insight into the way VMAF works. Plus, there’s a look ahead at new features on the roadmap. This talk was given at an SF Video Technology meetup.

Watch now!

Speaker

Zhi Li
Senior Software Engineer – Video Algorithms and Research
Netflix