Video: Delivering Better Manifests with Effective VMAF

Comparing the video quality of two assets is done daily around the world. But what happens when you want to measure the aggregate quality of a whole manifest? With VMAF being a well-regarded metric, how can we use it in an automated way to get the overview we need?
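As a rough illustration of automating VMAF (not Nick's exact scripts), ffmpeg builds that include libvmaf expose it as a filter. The sketch below, with hypothetical file names, scores one rendition against its source and reads the pooled mean VMAF out of the JSON log; the exact log layout varies between libvmaf versions.

```python
import json
import subprocess

def vmaf_score(distorted: str, reference: str, log_path: str = "vmaf.json") -> float:
    """Score a single rendition against its reference with ffmpeg's libvmaf filter.

    Assumes an ffmpeg build with libvmaf, and that both inputs share the same
    resolution (a lower rung would normally be upscaled to the reference first).
    """
    subprocess.run(
        [
            "ffmpeg", "-i", distorted, "-i", reference,
            "-lavfi", f"libvmaf=log_fmt=json:log_path={log_path}",
            "-f", "null", "-",
        ],
        check=True,
    )
    with open(log_path) as f:
        result = json.load(f)
    # Recent libvmaf JSON logs expose pooled metrics, including the mean score.
    return result["pooled_metrics"]["vmaf"]["mean"]

print(vmaf_score("rendition_720p.mp4", "source.mp4"))
```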

In this talk, Nick Chadwick from Mux shares the examples and scripts he’s been using to analyse videos. Starting with an example where everything is equal other than quality, he explains the difficulty of choosing the ‘better’ option once the variables are much less correlated. For instance, Nick examines situations where one video is measurably better, but the quality gain is so small that it is outweighed by the disproportionately higher bitrate required to deliver it.

With all of this complexity, comparing whole manifests can feel like a step too far, particularly when one manifest has 5 renditions and the other only 4. The question becomes: how do you create an aggregate video quality metric and determine whether that missing rendition is a detriment or a benefit?

Before unveiling the final solution, Nick makes the point of looking at how people are actually going to use the service. Depending on the demographic and the devices people tend to use, you will find different consumption ratios across the various rungs of the ABR ladder. For instance, some services may see very high usage on second screens, which in this case may mean a lot of low-resolution video alongside a lot of ‘TV’-size renditions at 1080p50 or above, with little in between. Similarly, other services may seldom see the highest resolutions used, percentage-wise. This shows that it’s important to look not only at the quality of each rendition but also at how likely it is to be watched.

To bring these thoughts together into a coherent conclusion, Nick unveils an open-source analyser which takes into account not only the VMAF score and the resolution but also the likely viewership such that we can now start to compare, for a given service, the relative merits of different ABR ladders.
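One way to picture the idea, as a hedged sketch rather than Nick's actual tool, is a viewership-weighted average: score each rendition with VMAF, weight it by the share of watch time that rung is expected to receive on a given service, and reduce the whole ladder to one number. The viewership figures below are invented purely for illustration; with them, two ladders with different rendition counts become directly comparable.

```python
# Hypothetical example data: per-rendition VMAF scores and the share of
# watch time each rung is expected to receive for a given service.
ladder_a = [
    {"name": "1080p", "vmaf": 96.0, "watch_share": 0.45},
    {"name": "720p",  "vmaf": 92.0, "watch_share": 0.30},
    {"name": "480p",  "vmaf": 84.0, "watch_share": 0.15},
    {"name": "360p",  "vmaf": 72.0, "watch_share": 0.07},
    {"name": "240p",  "vmaf": 58.0, "watch_share": 0.03},
]

# One fewer rendition; viewers who would have picked 480p land elsewhere.
ladder_b = [
    {"name": "1080p", "vmaf": 96.0, "watch_share": 0.45},
    {"name": "720p",  "vmaf": 92.0, "watch_share": 0.38},
    {"name": "360p",  "vmaf": 72.0, "watch_share": 0.14},
    {"name": "240p",  "vmaf": 58.0, "watch_share": 0.03},
]

def effective_vmaf(ladder):
    """Viewership-weighted aggregate VMAF for an ABR ladder."""
    total_share = sum(r["watch_share"] for r in ladder)
    return sum(r["vmaf"] * r["watch_share"] for r in ladder) / total_share

print(f"ladder A: {effective_vmaf(ladder_a):.1f}")
print(f"ladder B: {effective_vmaf(ladder_b):.1f}")
```

Whether the missing 480p rung is a detriment or a benefit then falls out of the comparison: it depends entirely on how much watch time that rung would have captured and where those viewers end up instead.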

The talk ends with Nick answering questions on the tendency to see jumps between resolutions (for instance, if we over-optimise and offer only two renditions, the switch between them would be easy to notice), on how to compare videos of different resolutions, and on his example user data.

Watch now!
Speakers

Nick Chadwick
Software Engineer,
Mux