Video: Towards Measuring Perceptual Video Quality & Why

In the ongoing battle to find the minimum bitrate for good-looking video, automation is key to getting there quickly and cheaply. However, metrics like PSNR don’t always give the best answers, meaning that eyes are still better at the job than silicon.
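
PSNR itself is just a pixel-wise error measure, which hints at why it can disagree with viewers. As a rough illustration, here is a minimal NumPy sketch (names are illustrative, not any particular tool's implementation):

```python
import numpy as np

def psnr(reference: np.ndarray, encoded: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference frame and its encode."""
    mse = np.mean((reference.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)
```

Every pixel error counts equally here, so a difference hidden in noisy detail scores the same as an equally large but glaring one on a face, which is part of why the numbers and the eyes can disagree.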

In this talk from the Demuxed conference, Intel’s Vasavee Vijayaraghavan shows us examples of computer analysis failing to identify the lowest acceptable bitrate, leaving the encoder spending many megabits encoding video so that it looks only imperceptibly better. Furthermore, it’s clear that MOS, the Mean Opinion Score, which has a well-defined protocol behind it, continues to produce the best results, though setting it up and co-ordinating it takes orders of magnitude more time and money.

Vasavee shows how she’s developed a hybrid workflow which combines metrics and MOS scores, feeding computer-generated metrics into the manual MOS process. This allows a much more targeted subjective perceptual quality evaluation, speeding up the whole process while still getting that human touch where it’s most valuable.
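
The talk describes the workflow rather than code, but the shape of such a hybrid is easy to sketch: score every candidate encode objectively, auto-accept or auto-reject the clear-cut cases, and queue only the ambiguous middle for a MOS panel. The thresholds and names below are illustrative assumptions, not figures from the talk:

```python
def triage_encodes(encodes, metric, accept_above=45.0, reject_below=35.0):
    """Split encodes into auto-accept, auto-reject and 'needs human MOS'.

    encodes: iterable of (label, reference, encoded) tuples.
    metric:  any objective scorer, e.g. the psnr() sketch above.
    """
    accepted, rejected, needs_mos = [], [], []
    for label, reference, encoded in encodes:
        score = metric(reference, encoded)
        if score >= accept_above:
            accepted.append(label)      # clearly fine: no panel time spent
        elif score < reject_below:
            rejected.append(label)      # clearly broken: no panel time spent
        else:
            needs_mos.append(label)     # ambiguous: worth human eyes
    return accepted, rejected, needs_mos
```

Only the middle bucket reaches the expensive subjective test, which is where the saving in time and money comes from.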

Watch now!
Speaker

Vasavee Vijayaraghavan
Cloud Media Solutions Architect,
Intel

Video: An introduction to Biological Compression

The search for better codecs is everlasting, so it’s no surprise that with AI’s recent advances we now see a codec based on AI/machine learning. The AI approach not only frees the maths from, say, upscaling using a fixed algorithm to doing it however it sees fit, but also gives it a holistic view of the image.

Considering the image as a whole whilst encoding it allows the encoder to better apportion bitrate and detail to the areas that need it, whereas other codecs have trouble breaking out of the procedural ‘one block at a time’ mode which tends to treat each macroblock separately.
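
Deep Render haven’t published their architecture, so purely as a toy sketch of the holistic idea, here is a convolutional autoencoder in PyTorch that analyses the whole frame at once and reconstructs it from a compact latent. A real learned codec would also quantise and entropy-code the latent and train with a rate term in the loss:

```python
import torch
import torch.nn as nn

class ToyImageCodec(nn.Module):
    """Toy learned image codec (not Deep Render's design): the encoder sees
    the whole frame, so the latent can spend capacity where the content
    needs it, rather than treating each macroblock independently."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(   # whole frame in, compact latent out
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 8, kernel_size=5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(   # reconstruct the frame from the latent
            nn.ConvTranspose2d(8, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(frame))
```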

Christian Besenbruch, co-founder of Deep Render AI, gives us examples of his company’s ‘biological compression’ codec up against the latest BPG codec, an HEVC-based still-image codec which delivers smaller files than JPEG.

Watch now!
Speaker

Christian Besenbruch
Co-Founder,
Deep Render AI

Video: Quantitative Evaluation and Attribute of Overall Brightness in an HDR World

HDR has long been heralded as a highly compelling and effective technology: high dynamic range can improve video of any resolution and much better mimics the natural world. Its growth into real-world use remains relatively slow, but it continues to show progress.

HDR is so compelling because it can feed our senses more light, and it’s no secret that TV shops know we like nice, bright pictures on our TV sets. But the reality of production in HDR is that you have to contend with human eyes, which have a great ability to see dark and bright images – but not at the same time. The eye can simultaneously distinguish about 12 stops of brightness, only two thirds of the roughly 18 stops it can cover in total once it has time to adapt.
The fact that our eyes constantly adapt and, let’s face it, interpret what they see makes understanding brightness in video tricky. There are dependencies on the overall brightness of the picture at any one moment, the recent brightness before it, the brightness of adjacent parts of the image, the ambient background and much more.

Stelios Ploumis steps into this world of varying brightness to create a way of quantitatively evaluating brightness for HDR. The starting place is the Average Picture Level (APL), which is what the SDR world uses to indicate brightness. With the greater dynamic range in HDR and the way this is implemented, it’s not clear that APL is up to the job.

Stelios explains his work analysing APL in SDR and HDR, and shows the times that simply taking the average of a picture can trick you into seeing two images as practically the same, whereas the brain clearly sees one as ‘brighter’ than the other. On the same track, he also explains ways in which we can differentiate the signals better, for instance taking into account the spread of the brightness values as opposed to APL’s normalised average of all pixels’ values.
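
To make the distinction concrete: APL is just the normalised mean of the pixel values, so two frames that look nothing alike can score identically, while a spread measure such as the standard deviation tells them apart. The sketch below assumes 10-bit code values and illustrates the idea only; it is not Stelios’s exact method:

```python
import numpy as np

def apl(luma: np.ndarray, max_code: float = 1023.0) -> float:
    """Average Picture Level: the normalised mean of all pixel values."""
    return float(np.mean(luma)) / max_code

def spread(luma: np.ndarray, max_code: float = 1023.0) -> float:
    """Standard deviation of the normalised values: a simple spread measure."""
    return float(np.std(luma / max_code))

flat = np.full((1080, 1920), 511.5)                       # uniform mid-grey
split = np.zeros((1080, 1920)); split[:, :960] = 1023.0   # half black, half peak white

print(apl(flat), spread(flat))     # 0.5, 0.0
print(apl(split), spread(split))   # 0.5, 0.5 -- same APL, very different picture
```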

The talk wraps up with a description of how the testing was carried out and a summary of the proposals to improve the quantitative analysis of HDR video.

Watch now!
Speaker

Stelios Ploumis
PhD Research Candidate,
MTT Innovation Inc.

Video: Per-title Encoding at Scale

Mux is a very proactive company pushing streaming technology forward. At NAB 2019 they announced Audience Adaptive Encoding, which offers encodes tailored not only to your content but also to the typical bitrate of your viewing demographic. Underpinning this technology are machine learning and their per-title encoding technology, which was released last year.

This talk with Nick Chadwick looks at what per-title encoding is, how you can work out which resolutions and bitrates to encode at, and how to deliver this as a useful product.

Nick takes some time to explain Mux’s ‘convex hulls’, which give a shape to the content’s performance at different bitrates and help visualise the optimum encoding parameters for the content. Using this technique, we see some surprising circumstances where it makes sense to start at high resolutions, even for low bitrates.
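
The hull itself is straightforward to picture: encode the title at several resolution/bitrate pairs, score each encode, then walk up the bitrate axis keeping only points that beat everything cheaper. The sketch below keeps the Pareto-efficient points (a slight simplification of a true convex hull) over made-up scores; it is not Mux’s code:

```python
def ladder_from_hull(points):
    """points: (bitrate_kbps, quality, label) tuples across all resolutions.
    Keep each encode only if it beats the best quality at any lower bitrate."""
    ladder, best_quality = [], float("-inf")
    for bitrate, quality, label in sorted(points, key=lambda p: (p[0], -p[1])):
        if quality > best_quality:
            ladder.append(label)
            best_quality = quality
    return ladder

# Illustrative scores only; note 1080p winning even at the low bitrate --
# the kind of surprise mentioned in the talk.
samples = [
    (400, 62, "480p@400k"),   (400, 65, "1080p@400k"),
    (800, 72, "480p@800k"),   (800, 74, "720p@800k"),
    (2000, 80, "720p@2000k"), (2000, 88, "1080p@2000k"),
]
print(ladder_from_hull(samples))   # ['1080p@400k', '720p@800k', '1080p@2000k']
```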

Looking then at how to actually work this out on a title-by-title basis, Nick explains the pros and cons of the different approaches, going on to explain how Mux used machine learning to build the model that makes this work.

Finishing off with an extensive Q&A, this talk is a great overview of how to pick good encoding parameters, manually or otherwise.

Watch now!

Speaker

Nick Chadwick
Software Engineer,
Mux Inc.