If there was ever a time when most displays shared the same resolution, those days are long gone: smartphones and tablets with extremely high pixel densities sit alongside laptop screens of various resolutions and 1080-line TVs, which are gradually being replaced by UHD variants. This means that HD video is nearly always being upscaled, which makes ‘getting upscaling right’ a genuinely worthwhile topic. The well-known basic up/downscaling algorithms have been around for a while, and even the best-performing, Lanczos, is well over 20 years old. The ‘new kid on the block’ isn’t another algorithm; it’s a whole technique of inferring better upscaling using machine learning, called ‘super resolution’.
Nick Chadwick from Mux has been running the code and the numbers to see how well super resolution works. Taking to the stage at Demuxed SF, he starts by looking at where scaling is used and what type it is. The most common algorithms are nearest neighbour, bi-linear, bi-cubic and Lanczos, with nearest neighbour being the most basic and the worst performing. Using VMAF to score these algorithms for both upscaling and downscaling, Nick shows that the traditional opinions of how well they perform hold true. He then introduces some test videos designed to let you see whether your video path is using bi-linear or bi-cubic upscaling, presenting his results of when bi-cubic can be seen (Safari on a MacBook Pro) as opposed to bi-linear (Chrome on a MacBook Pro). The test videos are available here.
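As a rough illustration of this kind of comparison, the sketch below round-trips a clip through each of ffmpeg’s scalers and scores the result with VMAF. The file names and encode settings are assumptions, and it requires an ffmpeg build with libvmaf enabled; it is not Nick’s test harness.

```python
import subprocess

# Round-trip a 1080p clip through each scaler (down to 540p and back up),
# then score the result against the original with VMAF. File names and
# encode settings are placeholders; requires ffmpeg built with libvmaf.
SOURCE = "source_1080p.mp4"

for flags in ("neighbor", "bilinear", "bicubic", "lanczos"):
    scaled = f"roundtrip_{flags}.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-vf", f"scale=960:540:flags={flags},scale=1920:1080:flags={flags}",
        "-c:v", "libx264", "-crf", "18", scaled,
    ], check=True)
    # libvmaf takes the distorted video as the first input and the
    # reference as the second; the score appears in ffmpeg's log output.
    subprocess.run([
        "ffmpeg", "-i", scaled, "-i", SOURCE,
        "-lavfi", "libvmaf", "-f", "null", "-",
    ], check=True)
```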
In the next part of the talk, Nick digs a little deeper into how super resolution works and how he tested ffmpeg’s implementation of it. Though he hit some difficulties using this young filter, he is able to present some videos and show that they are, indeed, “better to view”, meaning that text looks sharper and details are easier to pick out. It’s certainly possible to see some extra speckling introduced by the process, but the VMAF score is around 10 points higher, matching the subjective experience.
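For reference, ffmpeg exposes this via its ‘sr’ filter. A minimal sketch, assuming a build with TensorFlow support and a separately trained ESPCN model file (the model path here is a placeholder):

```python
import subprocess

# Upscale with ffmpeg's 'sr' (super resolution) filter. This assumes an
# ffmpeg build with TensorFlow support; espcn.pb is a placeholder for a
# separately trained ESPCN model file, which does not ship with ffmpeg.
subprocess.run([
    "ffmpeg", "-y", "-i", "input_540p.mp4",
    "-vf", "sr=dnn_backend=tensorflow:model=espcn.pb",
    "-c:v", "libx264", "-crf", "18", "output_upscaled.mp4",
], check=True)
```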
The downsides are a very significant increase in the computational power needed, which limits its use in live applications, plus the need for a good, if not very good, understanding of ML concepts and coding. And, of course, it wouldn’t be the online streaming community if clients weren’t already being developed to do super resolution on the decode, despite most devices not being practically capable of it. So Nick finishes off his talk by discussing work in progress and papers relating to the implementation of super resolution, and what it can borrow from other developing technologies.
Measuring video quality between two video assets is done daily around the world. But what happens when you want to assess the aggregate quality of a whole manifest? With VMAF being a well-regarded metric, how can we use it in an automatic way to get the overview we need?
In this talk, Nick Chadwick from Mux shares the examples and scripts he’s been using to analyse videos. Starting with an example where everything is equal other than quality, he explains the difficulty of choosing the ‘better’ option when the variables are much less correlated. For instance, Nick also examines situations where one video scores higher, but where the minimal quality benefit is outweighed by a disproportionately high bitrate requirement.
So with all of this complexity, it feels like comparing manifests may be a complexity too far, particularly when one manifest has 5 renditions and the other only 4. The question is: how do you create an aggregate video quality metric and determine whether that missing rendition is a detriment or a benefit?
Before unveiling the final solution, Nick makes a point of looking at how people are going to use the service. Depending on the demographic and the devices people tend to use for that service, you will find different consumption ratios across the parts of the ABR ladder. For instance, some services may see very high usage on second screens, which may take low-resolution video, alongside a lot of ‘TV’-size renditions at 1080p50 or above, with little in between. Similarly, other services may seldom see the highest resolutions used, percentage-wise. This shows us that it’s important to look not only at the quality of each rendition but also at how likely it is to be seen, as the sketch below illustrates.
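A minimal sketch of that weighting idea follows, with one VMAF score per rendition weighted by that rendition’s share of watch time. All the numbers are purely illustrative; they are not Nick’s data and this is not Mux’s analyser.

```python
# Viewership-weighted aggregate: each rendition's VMAF contributes in
# proportion to how often that rendition is actually watched.
ladder = [
    # (rendition, VMAF, share of watch time)
    ("1920x1080 @ 5.0 Mbps", 96.0, 0.35),
    ("1280x720  @ 3.0 Mbps", 92.0, 0.25),
    ("960x540   @ 1.6 Mbps", 86.0, 0.25),
    ("640x360   @ 0.8 Mbps", 74.0, 0.15),
]

aggregate = sum(vmaf * share for _, vmaf, share in ladder)
print(f"Viewership-weighted VMAF: {aggregate:.1f}")
```

Under this scheme, dropping or adding a rendition changes the aggregate only as much as real viewers would notice, which is what makes ladders with different rendition counts comparable.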
To bring these thoughts together into a coherent conclusion, Nick unveils an open-source analyser which takes into account not only the VMAF score and the resolution but also the likely viewership, so that we can now start to compare, for a given service, the relative merits of different ABR ladders.
The talk ends with Nick answering questions: on the tendency to see jumps between different resolutions (for instance, if we over-optimise and keep only two renditions, the switch would be easy to see), on how to compare videos of different resolutions, and on his example user data.
Per-title encoding is a common method of optimising quality and compression by changing the encoding options on a file-by-file basis. Although some would say the arrival of per-scene encoding is the death knell for per-title encoding, either is much better than the more traditional approach of applying exactly the same settings to every video.
This talk, with Mux’s Nick Chadwick and Ben Dodson, looks at what per-title encoding is and how to go about doing it. The initial work involves doing many encodes of the same video and analysing each for quality. This allows you to work out which resolutions and bitrates to encode at in order to deliver the best video; one way to script that brute-force step is sketched below.
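This sketch assumes a 1080p source and an ffmpeg build with libvmaf; the grid of resolutions and bitrates, file names and codec settings are arbitrary examples rather than Mux’s actual pipeline.

```python
import itertools
import subprocess

# Brute-force trial encodes across a grid of resolutions and bitrates,
# scoring each against the source with VMAF.
RESOLUTIONS = ["1920:1080", "1280:720", "960:540", "640:360"]
BITRATES_K = [800, 1600, 3000, 6000]

for res, kbps in itertools.product(RESOLUTIONS, BITRATES_K):
    out = f"trial_{res.replace(':', 'x')}_{kbps}k.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-i", "source_1080p.mp4",
        "-vf", f"scale={res}",
        "-c:v", "libx264", "-b:v", f"{kbps}k", out,
    ], check=True)
    # Scale the trial encode back up to the source resolution before
    # scoring, since VMAF compares like-for-like frame sizes.
    subprocess.run([
        "ffmpeg", "-i", out, "-i", "source_1080p.mp4",
        "-lavfi", "[0:v]scale=1920:1080[d];[d][1:v]libvmaf",
        "-f", "null", "-",
    ], check=True)
```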
Ben Dodson explains the way they implemented this at Mux using machine learning. This was done by getting computers to ‘watch’ videos and extract metadata. That metadata can then be used to inform the encoding parameters without the computer watching the whole of a new video.
Nick takes some time to explain Mux’s ‘convex hulls’, which give a shape to the content’s performance at different bitrates and help visualise the optimum encoding parameters for the content. Moreover, we see that using this technique we can explore how changing resolution creates the best encode. This doesn’t always mean reducing the resolution: there are some surprising circumstances in which it makes sense to stay at high resolutions, even for low bitrates.
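To make the hull idea concrete, here is a toy sketch with fabricated (bitrate, VMAF) sample points: each resolution gets a curve from trial encodes like those above, and the winner at any target bitrate is whichever curve sits highest there.

```python
# Toy sketch of the convex-hull idea with fabricated (bitrate, VMAF)
# samples; real curves would come from the trial encodes above.
curves = {
    "1080p": [(1500, 70.0), (3000, 88.0), (6000, 96.0)],
    "720p":  [(1500, 78.0), (3000, 90.0), (6000, 93.0)],
    "540p":  [(1500, 82.0), (3000, 87.0), (6000, 88.0)],
}

def quality_at(samples, kbps):
    """Linearly interpolate VMAF between trial-encode sample points."""
    pts = sorted(samples)
    if kbps <= pts[0][0]:
        return pts[0][1]
    if kbps >= pts[-1][0]:
        return pts[-1][1]
    for (b0, q0), (b1, q1) in zip(pts, pts[1:]):
        if b0 <= kbps <= b1:
            return q0 + (q1 - q0) * (kbps - b0) / (b1 - b0)

# The upper envelope across resolutions is the hull: note how the best
# resolution shifts upward as the bitrate budget grows.
for target in (1500, 2500, 3000, 6000):
    best = max(curves, key=lambda r: quality_at(curves[r], target))
    print(f"{target} kbps -> encode at {best} "
          f"(VMAF ~{quality_at(curves[best], target):.1f})")
```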
The next stage after per-title encoding is to segment the video and encode each segment differently. Nick explores this and explains how to deliver different resolutions throughout the stream, seamlessly switching between them. Ben takes over and explains how this can be implemented and how to choose the segment boundaries correctly, again using a machine learning approach to analysis and decision making.
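As a rough sketch of one way to find candidate boundaries (a simple heuristic, not the ML approach Ben describes), ffmpeg’s scene-change detection can list cut points; the 0.4 threshold and file name below are assumptions to tune per library of content.

```python
import re
import subprocess

# Find candidate segment boundaries with ffmpeg's scene-change detection.
result = subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-vf", "select='gt(scene,0.4)',showinfo",
    "-f", "null", "-",
], capture_output=True, text=True)

# showinfo logs a pts_time for every frame the select filter lets through.
cuts = [float(t) for t in re.findall(r"pts_time:([\d.]+)", result.stderr)]
print("Candidate scene-change boundaries (s):", cuts)
```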
With the demise of RTMP, what can WebRTC, its closest equivalent, learn from it? RTC stands for Real-Time Communications and hails from the video/voice teleconferencing world. RTC traditionally delivers ultra-low latency (think sub-second; real-time), so as broadcasters and streaming companies look to reduce latency, it’s the obvious technology to look at. However, RTC comes from a background of small meetings, mixed resolutions and mixed bandwidths, so the protocols underpinning it can lack what broadcast-style streamers need.
Nick Chadwick from Mux looks at the pros and cons of the venerable RTMP (Real Time Messaging Protocol). What was in it that was used and unused? What did it need that it didn’t have? What gap is being left by its phasing out?
Filling these widening gaps is the focus of the streaming community, and whether the answer comes through WebRTC, fragmented MP4 delivered over WebSockets, Low-Latency HLS, Apple’s Low-Latency HLS, SASH, CMAF or something else, the gap still needs to be filled.
Nick finishes with two demos which show a capability of WebRTC that outstrips RTMP: live mixing in the browser. WebRTC clearly has a future for more adventurous services which don’t simply want to deliver a linear channel to sofa-dwelling humans. But surely Nick’s message is that WebRTC needs to step up to the plate for broadcasters in general, enabling them to achieve <1 second end-to-end latency in a way which is compatible with broadcast workflows.