Video: What to do after per-title encoding

Per-title encoding is a common method of optimising quality and compression by changing the encoding options on a file-by-file basis. Although some would say the advent of per-scene encoding is the death knell for per-title encoding, either is much better than the traditional approach of applying exactly the same settings to every video.

This talk with Mux’s Nick Chadwick and Ben Dodson looks at what per-title encoding is and how to go about doing it. The initial work involves doing many encodes of the same video and analysing each for quality. This allows you to work out which resolutions and bitrates to encode at and how to deliver the best video.
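
For a sense of what that brute-force analysis step can look like, here is a minimal sketch in Python: it encodes one source at a hypothetical ladder of resolution/bitrate pairs with ffmpeg and scores each encode with VMAF. It assumes an ffmpeg build with libvmaf whose JSON log layout matches recent libvmaf releases; the file names and ladder values are illustrative, not from the talk.

```python
#!/usr/bin/env python3
"""Brute-force step of per-title analysis: encode one source at many
resolution/bitrate pairs and score every encode with VMAF. Assumes an
ffmpeg build with libvmaf; the JSON key layout matches recent libvmaf
releases. File names and the ladder are illustrative."""
import itertools
import json
import subprocess

SOURCE = "source.mp4"  # hypothetical 1920x1080 mezzanine file
LADDER = [(1920, 1080), (1280, 720), (854, 480), (640, 360)]
BITRATES_K = [400, 800, 1600, 3200, 6400]  # kbit/s

def encode(w, h, kbps):
    out = f"out_{h}p_{kbps}k.mp4"
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE,
                    "-vf", f"scale={w}:{h}",
                    "-c:v", "libx264", "-b:v", f"{kbps}k",
                    "-an", out], check=True)
    return out

def vmaf(distorted):
    log = distorted + ".vmaf.json"
    # Upscale the encode back to the reference size before comparing.
    subprocess.run(["ffmpeg", "-i", distorted, "-i", SOURCE, "-lavfi",
                    "[0:v]scale=1920:1080[d];"
                    f"[d][1:v]libvmaf=log_fmt=json:log_path={log}",
                    "-f", "null", "-"], check=True)
    with open(log) as f:
        return json.load(f)["pooled_metrics"]["vmaf"]["mean"]

results = [(w, h, kbps, vmaf(encode(w, h, kbps)))
           for (w, h), kbps in itertools.product(LADDER, BITRATES_K)]
for w, h, kbps, score in sorted(results, key=lambda r: r[2]):
    print(f"{w}x{h} @ {kbps} kbit/s -> VMAF {score:.1f}")
```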

Ben Dodson explains the way they implemented this at Mux using machine learning. This was done by getting computers to ‘watch’ videos and extract metadata. That metadata can then be used to inform the encoding parameters without the computer watching the whole of a new video.
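
As a hedged illustration of the ‘watch and extract metadata’ idea, the sketch below samples a handful of frames and computes cheap spatial and temporal complexity features (similar in spirit to the classic SI/TI measures) that a trained model could map to encoding parameters. The feature choices are assumptions for the example, not Mux’s actual pipeline.

```python
"""Sketch of the 'watch and extract metadata' idea: sample frames and
compute cheap spatial/temporal complexity features that a trained
model could map to encoding parameters. Illustrative only, not Mux's
actual pipeline."""
import cv2  # pip install opencv-python
import numpy as np

def video_features(path, samples=50):
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // samples, 1)
    spatial, temporal, prev = [], [], None
    for i in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Spatial complexity: mean Sobel edge magnitude of the frame.
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        spatial.append(float(np.sqrt(gx ** 2 + gy ** 2).mean()))
        # Temporal complexity: mean absolute difference between samples.
        if prev is not None:
            temporal.append(float(np.abs(gray - prev).mean()))
        prev = gray
    cap.release()
    return {"spatial": float(np.mean(spatial)),
            "temporal": float(np.mean(temporal))}

print(video_features("source.mp4"))  # hypothetical input file
```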

Nick takes some time to explain Mux’s ‘convex hulls’, which give a shape to the content’s performance at different bitrates and help visualise the optimum encoding parameters for the content. Moreover, we see that using this technique we can explore how to change resolution to create the best encode. This doesn’t always mean reducing the resolution; there are some surprising circumstances when it makes sense to start at high resolutions, even for low bitrates.
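
The hull itself is simple to compute once the measurements are in. A minimal sketch, assuming every encode has been reduced to a (bitrate, VMAF) point: keep only the upper convex envelope of the point cloud, i.e. the encodes no blend of other encodes beats on both bitrate and quality.

```python
"""Minimal sketch of the convex-hull idea: reduce every encode to a
(bitrate, quality) point and keep the upper envelope of the cloud."""

def upper_hull(points):
    """points: iterable of (bitrate, quality) tuples.
    Returns the upper convex hull ordered from lowest to highest
    bitrate (Andrew's monotone chain, upper half only)."""
    pts = sorted(set(points))
    def cross(o, a, b):
        # >= 0 means a is on or below the chord from o to b,
        # so a cannot be part of the upper envelope.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull

# Hypothetical (kbit/s, VMAF) measurements across several resolutions.
points = [(400, 62.0), (800, 71.0), (800, 74.5), (1600, 84.2),
          (3200, 88.0), (3200, 90.1), (6400, 93.5)]
print(upper_hull(points))  # the rate-quality frontier to pick renditions from
```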

The next stage after per-title encoding is to segment the video and encode each segment differently. Nick explores this and explains how to deliver different resolutions throughout the stream, seamlessly switching between them. Ben takes over and explains how this can be implemented and how to choose the segment boundaries correctly, again using a machine learning approach to analysis and decision making.
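
The talk picks those boundaries with machine learning; as a simpler stand-in, here is a sketch that uses ffmpeg’s scene-change score to propose candidate segment boundaries. The 0.4 threshold and file name are illustrative.

```python
"""Candidate segment boundaries from ffmpeg's scene-change score. The
talk picks boundaries with an ML model; this common heuristic stands
in for it. The 0.4 threshold and file name are illustrative."""
import re
import subprocess

def scene_cuts(path, threshold=0.4):
    # select='gt(scene,T)' keeps only frames whose scene-change score
    # exceeds T; showinfo prints their timestamps on stderr.
    proc = subprocess.run(
        ["ffmpeg", "-i", path,
         "-vf", f"select='gt(scene,{threshold})',showinfo",
         "-f", "null", "-"],
        stderr=subprocess.PIPE, text=True)
    return [float(t) for t in re.findall(r"pts_time:([0-9.]+)", proc.stderr)]

print(scene_cuts("title.mp4"))  # boundary candidates in seconds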

Watch now!
Speakers

Nick Chadwick
Software Engineer,
Mux
Ben Dodson
Data Scientist,
Mux

Video: Per-Title Encoding, @Scale Conference

Per-title encoding with machine learning is the topic of this video from Mux.

Nick Chadwick explains that rather than using the same set of parameters to encode every video, the smart money is on finding the best balance of bitrate and resolution for each video. By analysing a large number of combinations of bitrate and resolution, Nick shows you can build what he calls a ‘convex hull’ when graphing bitrate against quality. This allows you to find the optimal settings.

Doing this en masse is difficult, and Nick spends some time looking at the different ways of implementing it. In the end, Nick and data scientist Ben Dodson built a system which optimises bitrate for each title using neural nets trained on data sets. This resulted in 84% of videos looking better with this method than with a static ladder.
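
As a rough illustration of that final step, here is a hedged sketch of fitting a small neural net that maps per-title content features to an optimal bitrate. The features, data and network shape are invented for the example; the talk does not describe Mux’s model at this level of detail.

```python
"""Hedged sketch of the model-training step: fit a small neural net
that maps per-title content features to the bitrate an exhaustive
search found optimal. Features, data and network shape are invented
for illustration."""
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: per-title features (e.g. spatial/temporal complexity, duration).
# y: optimal bitrate in kbit/s per title from brute-force encoding.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = 500 + 5000 * X[:, 0] + 2000 * X[:, 1] + rng.normal(0, 100, 500)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict(X[:1]))  # predicted bitrate for an unseen title
```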

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux

Webinar: Engaging users and boosting advertising with AI

Honing the use of AI and Machine Learning continues apace. Streaming services are particularly ripe areas for AI, but the winners will be those that manage to differentiate themselves and innovate in their use of it.

Artificial Intelligence (AI) and Machine Learning (ML) are related technologies which deal with replicating ‘human’ ways of recognising patterns in large data sets so that similar data can be handled automatically in the future, without relying on traditional methods such as a hand-built ‘database’ of rules. For the consumer, it doesn’t actually matter whether they’re benefitting from AI or ML: they simply want better recommendations, better search and accurate subtitles (captions) on all their videos. If these things happened because of humans behind the scenes, the result would be the same. But for the streaming provider everything has a cost, and there simply isn’t the budget for people to do these tasks; in some cases, humans couldn’t do the job at all. This is why AI is here to stay.

Date: Thursday 8th August, 16:00 BST / 11am EDT

In this webinar from IBC365, Media Distillery, Liberty Global and Grey Media come together to discuss the benefits of extracting images, metadata and other context from video, analysing videos for contextual advertising, content-based search & recommendations, and ways to retain younger viewers.

AI is here to stay, touching the whole breadth of our lives, not just broadcast. So it’s worth learning how it can best be used to produce television, for streaming and in your business.

Register now!
Speakers

Martin Prins
Product Owner,
Media Distillery
Susanne Rakels
Senior Manager, Discovery & Personalisation,
Liberty Global
Ruhel Ali
Founder/Director,
Grey Media

Video: Automated Tagging of Image and Video Collections using Face Recognition

Real-world examples of using Machine Learning to detect faces in archives are discussed here by Andrew Brown and Ernesto Coto from the University of Oxford. Working with the British Film Institute (BFI) and BBC News, they show the value of facial recognition and metadata comparisons.

Andrew Brown was given the cast lists of thousands of films and shows how they managed not only to discover errors and forgotten cast members, but also to develop a searchable interface for finding every appearance of a given actor.

Ernesto Coto shows the searchable BBC News archive interface he developed, which uses Google Images results for a famous person to find all of their occurrences in over 10,000 hours of video and jump straight to the relevant point in each video.
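
In the same spirit, here is a minimal sketch of query-by-example face search over video: embed a few reference photos of the person, then scan sampled frames for matching faces and report the timestamps. It uses the open-source face_recognition library as a stand-in for the tooling shown in the talk; the paths and threshold are illustrative.

```python
"""Query-by-example face search over a video: embed a few reference
photos of the person, then scan sampled frames for matching faces and
report timestamps. Uses the open-source face_recognition library as a
stand-in; paths and thresholds are illustrative."""
import cv2
import face_recognition  # pip install face_recognition

def query_encodings(photo_paths):
    encs = []
    for p in photo_paths:
        image = face_recognition.load_image_file(p)
        encs.extend(face_recognition.face_encodings(image))
    return encs

def find_person(video_path, query, every_n=25, tolerance=0.6):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    hits, frame_no = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % every_n == 0:  # only look at every Nth frame
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            for enc in face_recognition.face_encodings(rgb):
                if face_recognition.face_distance(query, enc).min() <= tolerance:
                    hits.append(frame_no / fps)  # seconds into the video
                    break
        frame_no += 1
    cap.release()
    return hits

query = query_encodings(["person_1.jpg", "person_2.jpg"])
print(find_person("news_archive.mp4", query))
```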

A great video from the No Time To Wait 3 conference which looked at all aspects of archives for preservation.

Watch now!