Video: A Survey Of Per-Title Encoding Technologies

Optimising encoding with per-title techniques is very common nowadays, though per-scene encoding is slowly pushing it aside. With so many companies offering per-title encoding, how do we determine which way to turn?

Jan Ozer experimented with them so we didn’t have to. Jan starts by explaining the principles of per-title encoding and giving an overview of the market. He then explains some of the ways in which it works, including the importance of changing resolution as much as changing bitrate.

As well as discussing the results, in which Bitmovin came out on top, Jan explains ‘Capped CRF’ – how it works, how it differs from CBR and VBR, and why it’s a good approach.
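For a flavour of how capped CRF works in practice, here is a minimal sketch (not the exact settings from the talk): the encoder is given a constant quality target (CRF) plus a bitrate cap enforced by the VBV, so simple titles come out well under the cap while complex ones are held to it. The file names and values below are placeholders.

```python
import subprocess

# Illustrative capped-CRF encode using ffmpeg with x264.
# CRF sets a constant quality target; maxrate/bufsize (the VBV) cap the peak
# bitrate, so easy content undershoots the cap and complex content is held to it.
cmd = [
    "ffmpeg", "-i", "source.mp4",   # placeholder input
    "-c:v", "libx264",
    "-crf", "23",                   # quality target instead of a bitrate target
    "-maxrate", "4M",               # bitrate ceiling
    "-bufsize", "8M",               # VBV buffer that enforces the ceiling
    "capped_crf.mp4",
]
subprocess.run(cmd, check=True)
```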

Finally, we are left with some questions to ask when searching for a per-title technology to solve our own problem, such as “Can it adjust rung resolutions?” and “Can you apply traditional data rate controls?”, amongst others.

Watch now!

Speaker

Jan Ozer
Principal,
Streaming Learning Center

Video: Per-title Encoding at Scale

MUX is a very proactive company pushing forward streaming technology. At NAB 2019 they announced Audience Adaptive Encoding, which offers encodes tailored both to your content and to the typical bitrate of your viewing demographic. Underpinning this technology are machine learning and their per-title encoding technology, which was released last year.

This talk with Nick Chadwick looks at what per-title encoding is, how you can work out which resolutions and bitrates to encode at and how to deliver this as a useful product.

Nick takes some time to explain MUX’s ‘convex hulls’, which give a shape to the content’s performance at different bitrates and help visualise the optimum encoding parameters for the content. Using this technique, we see some surprising circumstances where it makes sense to start at high resolutions, even for low bitrates.
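To make the idea concrete, here is a much-simplified sketch of the convex-hull selection: take measured (bitrate, VMAF) points from trial encodes at several resolutions and, for each ladder rung, pick the resolution sitting on the upper edge of that rate/quality cloud. The numbers are invented for illustration and this is not MUX’s implementation, which avoids exhaustive trial encodes.

```python
# Hypothetical trial-encode measurements: (resolution, bitrate in kbps, VMAF score).
trial_encodes = [
    ("1920x1080", 6000, 96), ("1920x1080", 3000, 90), ("1920x1080", 1500, 78),
    ("1280x720",  3000, 88), ("1280x720",  1500, 83), ("1280x720",   800, 73),
    ("640x360",   1500, 75), ("640x360",    800, 68), ("640x360",    400, 58),
]

def best_rung(target_kbps):
    """Pick the trial encode on the upper edge of the rate/quality cloud at this rate."""
    candidates = [e for e in trial_encodes if e[1] <= target_kbps]
    return max(candidates, key=lambda e: e[2]) if candidates else None

# For this (invented) content, 720p beats 1080p at 1500 kbps and 360p wins at 400 kbps.
for target in (400, 800, 1500, 3000, 6000):
    print(target, best_rung(target))
```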

Looking then at how to actually work this out on a title-by-title basis, Nick explains the pros and cons of the different approaches, going on to explain how MUX used machine learning to generate the model that makes this work at scale.
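As a toy illustration of what a learned predictor buys you (the features, data and model below are made up and are not MUX’s actual system): instead of trial-encoding every candidate rung for every title, you train a regressor to predict quality from content features and encoding settings, then build the ladder from its predictions.

```python
# Toy stand-in for a learned quality predictor; nothing here reflects MUX's real model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training rows: [content_complexity, log2(bitrate_kbps), encode_height] -> measured VMAF.
X_train = np.array([
    [0.2, 12.5, 1080], [0.2, 11.5, 1080], [0.2, 10.5, 720],
    [0.8, 12.5, 1080], [0.8, 11.5, 720],  [0.8, 10.5, 720],
])
y_train = np.array([96.0, 91.0, 84.0, 88.0, 79.0, 70.0])

model = GradientBoostingRegressor().fit(X_train, y_train)

# For a new title, predict quality for candidate rungs instead of encoding them all.
candidates = np.array([[0.5, 11.5, 1080], [0.5, 11.5, 720]])
print(model.predict(candidates))
```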

Finishing off with an extensive Q&A, this talk is a great overview on how to pick great encoding parameters, manually or otherwise.

Watch now!

Speaker

Nick Chadwick
Software Engineer,
Mux Inc.

Video: VMAF – the Journey Continues

VMAF is a video quality metric created by Netflix which allows computers to estimate how good a video looks. This is an important part of evaluating how good your encoder or streaming service is, so it’s no surprise that Netflix has invested years of research into it. Other metrics such as PSNR and MS-SSIM have their problems – and let’s accept that no metric is perfect – but what the industry has long grappled with is that a video with strong fidelity to the source doesn’t necessarily look better than one that replicates the source less faithfully.

Imagine you had a video of an overcast day and one encoder rendered the video a bit brighter and a bit more blue. Well, for that clip, people watching might prefer that encoder even though the video is quite different from the source. The same is true of noisy pictures where replicating the noise isn’t always the best idea as some people, for some content, would prefer the cleaner look even though some details may have been lost.

As such, metrics have evolved from PSNR, which is much more about fidelity, towards metrics which try harder to model what ‘looks good’, and VMAF is an example of that.
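As a point of reference, PSNR is purely a fidelity measure: it scores nothing but pixel-wise error against the source, which is exactly why the brighter-but-preferred encode above would score badly. A minimal sketch of the calculation:

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio between two 8-bit frames; higher means closer to the source."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)
```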

Zhi Li explains the history of VMAF and some of the new features released in August 2018, when this talk was given, giving an insight into the way VMAF works. Plus, there’s a look ahead at new features on the roadmap. This talk was given at an SF Video Technology meetup.
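If you want to try VMAF yourself, ffmpeg exposes it through the libvmaf filter. A minimal sketch, assuming an ffmpeg build with libvmaf enabled; the file names are placeholders:

```python
import subprocess

# Compare an encode against its source with ffmpeg's libvmaf filter.
# The first input is the distorted encode, the second the reference.
cmd = [
    "ffmpeg", "-i", "distorted.mp4", "-i", "reference.mp4",
    "-lavfi", "libvmaf", "-f", "null", "-",
]
result = subprocess.run(cmd, capture_output=True, text=True)
# ffmpeg reports the aggregate VMAF score on stderr at the end of the run.
print([line for line in result.stderr.splitlines() if "VMAF" in line])
```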

Watch now!

Speakers

Zhi Li
Senior Software Engineer – Video Algorithms and Research
Netflix

Video: AV1 vs. HEVC: Perceptual Evaluation of Video Encoders

Zhou Wang explains how to compare HEVC and AVC with AV1 and shares his findings. Using metrics such as VMAF, PSNR and SSIMPlus, he explores the effects of resolution on bitrate savings and then turns his gaze to computational complexity.
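Bitrate savings between codecs of this kind are conventionally summarised as BD-rate: fit each codec’s rate-quality curve, then average the rate difference over the overlapping quality range. Here is a sketch of the standard calculation (a general technique, not specific to this talk; the quality axis could be VMAF, PSNR or SSIMPlus):

```python
import numpy as np

def bd_rate(rates_anchor, quality_anchor, rates_test, quality_test):
    """Average bitrate difference (%) of the test codec vs the anchor at equal quality.
    Negative values mean the test codec needs fewer bits for the same quality."""
    # Fit log-rate as a cubic polynomial of quality for each codec.
    p_anchor = np.polyfit(quality_anchor, np.log(rates_anchor), 3)
    p_test = np.polyfit(quality_test, np.log(rates_test), 3)
    # Integrate both fits over the overlapping quality range.
    lo = max(min(quality_anchor), min(quality_test))
    hi = min(max(quality_anchor), max(quality_test))
    int_anchor = np.polyint(p_anchor)
    int_test = np.polyint(p_test)
    avg_anchor = (np.polyval(int_anchor, hi) - np.polyval(int_anchor, lo)) / (hi - lo)
    avg_test = (np.polyval(int_test, hi) - np.polyval(int_test, lo)) / (hi - lo)
    return (np.exp(avg_test - avg_anchor) - 1) * 100

# Invented example: a curve needing ~30% fewer bits at the same quality scores around -30.
anchor = ([1000, 2000, 4000, 8000], [70, 80, 88, 94])
test = ([700, 1400, 2800, 5600], [70, 80, 88, 94])
print(bd_rate(*anchor, *test))
```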

This talk was given at the Mile High Video conference in Denver CO, 2018.

Speakers

Zhou Wang
Chief Science Officer,
SSIMWAVE Inc.