Video: Per-Title Encoding, @Scale Conference

Per-title encoding with machine learning is the topic of this video from Mux.

Nick Chadwick explains that rather than using the same set of parameters to encode every video, it pays to find the best balance of bitrate and resolution for each video. By analysing a large number of bitrate and resolution combinations, Nick shows you can build what he calls a ‘convex hull’ when graphing bitrate against quality. This allows you to find the optimal settings.
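The convex-hull idea can be sketched in a few lines. This is a minimal illustration, not Mux's pipeline: given trial encodes measured as (bitrate, quality) points, keep only the points on the upper convex hull of the rate-quality plot, i.e. the settings no other trial beats on both bitrate and quality. The trial data is made up for the example.

```python
def upper_convex_hull(points):
    """points: list of (bitrate_kbps, quality) tuples.
    Returns the upper convex hull sorted by bitrate -- the
    Pareto-optimal encode settings for this title."""
    pts = sorted(points)
    hull = []
    for px, py in pts:
        # Pop points that would make the hull concave (cross-product test).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (py - y1) >= (px - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append((px, py))
    return hull

# Hypothetical trial encodes: (bitrate in kbps, VMAF-like quality score)
trials = [(400, 55), (800, 70), (800, 62), (1600, 80), (1600, 86), (3200, 92)]
# upper_convex_hull(trials) -> [(400, 55), (800, 70), (1600, 86), (3200, 92)]
```

Note how the dominated trials, such as the 800 kbps encode scoring 62, drop out: another setting delivers more quality for the same or lower bitrate.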

Doing this en masse is difficult, and Nick spends some time looking at the different ways of implementing it. In the end, Nick and data scientist Ben Dodson built a system which optimises bitrate for each title using neural nets trained on data sets. The result: 84% of videos looked better with this method than with a static ladder.
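The appeal of the learned approach is that you don't have to brute-force encode every combination for every title. As a hand-wavy sketch (not Ben Dodson's actual model), a regressor trained on already-analysed titles can predict quality from cheap features such as bitrate and a content-complexity measure; a real system would use a neural net on richer features, and all the numbers here are invented.

```python
import numpy as np

# Hypothetical training data: [bitrate_mbps, motion_complexity] -> quality
X = np.array([[1.0, 0.2], [2.0, 0.2], [4.0, 0.2],
              [1.0, 0.8], [2.0, 0.8], [4.0, 0.8]])
y = np.array([80.0, 88.0, 93.0, 60.0, 72.0, 84.0])

# Fit quality ~ w0 + w1*bitrate + w2*complexity by least squares.
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_quality(bitrate_mbps, complexity):
    return float(w @ [1.0, bitrate_mbps, complexity])

def pick_bitrate(complexity, target=85.0, ladder=(1.0, 2.0, 4.0, 6.0)):
    """Cheapest ladder rung predicted to reach the target quality."""
    for b in ladder:
        if predict_quality(b, complexity) >= target:
            return b
    return ladder[-1]
```

With a fit like this, an easy, low-motion title gets assigned a lower bitrate than a complex one for the same target quality, which is exactly the per-title saving being described.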

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux

Webinar: Engaging users and boosting advertising with AI

The honing of AI and Machine Learning continues apace. Streaming services are a particularly ripe area for AI, but the winners will be those that manage to differentiate themselves and innovate in their use of it.

Artificial Intelligence (AI) and Machine Learning (ML) are related technologies which replicate ‘human’ ways of recognising patterns, seeking them in large data sets in order to deal with similar data in the future, without relying on traditional methods such as database lookups. For the consumer, it doesn’t actually matter whether they’re benefitting from AI or ML: they’re simply looking for better recommendations, better search and accurate subtitles (captions) on all their videos. If these happened because of humans behind the scenes, the experience would be the same. But for the streaming provider everything has a cost, and there simply isn’t the budget to pay people to do these tasks; in some cases, humans couldn’t do the job at all. This is why AI is here to stay.

Date: Thursday 8th August, 16:00 BST / 11am EDT

In this webinar from IBC365, Media Distillery, Liberty Global and Grey Media come together to discuss the benefits of extracting images, metadata and other context from video; the analysis of videos for contextual advertising; content-based search and recommendations; and ways to retain younger viewers.

AI is here to stay and will touch the whole breadth of our lives, not just broadcast. So it’s worth learning how it can best be used to produce television, for streaming and in your business.

Register now!
Speakers

Martin Prins
Product Owner,
Media Distillery
Susanne Rakels
Senior Manager, Discovery & Personalisation,
Liberty Global
Ruhel Ali
Founder/Director,
Grey Media

Video: Automated Tagging of Image and Video Collections using Face Recognition

Real-world examples of using Machine Learning to detect faces in archives are discussed here by Andrew Brown and Ernesto Coto from the University of Oxford. Working with the British Film Institute (BFI) and BBC News, they show the value of facial recognition and metadata comparisons.

Andrew Brown was given the cast lists of thousands of films and shows how, from these, they managed not only to discover errors and forgotten cast members, but also to develop a searchable interface that finds every appearance of an actor.

Ernesto Coto shows the searchable BBC News archive interface he developed, which uses Google Images results for a famous person to find all their occurrences across over 10,000 hours of video and jump straight to the relevant point in each video.
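The retrieval idea behind such an interface can be sketched simply. This is a toy illustration, not the Oxford implementation: each detected face in the archive is stored as an embedding vector, and a query person, e.g. averaged embeddings from a few web images, is matched by cosine similarity, returning the video and timestamp of each appearance. The index entries and 128-dimensional vectors here are random stand-ins.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical archive index: (video_id, timestamp_s, face_embedding)
rng = np.random.default_rng(0)
index = [("news_001", 12.0, rng.normal(size=128)),
         ("news_001", 340.5, rng.normal(size=128)),
         ("news_002", 88.2, rng.normal(size=128))]

def search(query_embeddings, threshold=0.5):
    """Average several query images' embeddings, then rank archive
    faces by similarity, keeping only confident matches."""
    q = np.mean(query_embeddings, axis=0)
    hits = [(vid, ts, cosine(q, emb)) for vid, ts, emb in index]
    return sorted((h for h in hits if h[2] >= threshold),
                  key=lambda h: -h[2])
```

Averaging several query images makes the query embedding more robust to pose and lighting, which is why web image results for a person work well as the starting point.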

A great video from the No Time To Wait 3 conference which looked at all aspects of archives for preservation.

Watch now!

Video: Integrating Machine Learning with ABR streaming at YouTube

In another great talk from Demuxed 2018, Steve Robertson from YouTube sheds light on trials they have been running, some with Machine Learning, to understand viewers’ appreciation of quality. Tests involved profiling the ways, and hence the environments, in which users watch, trying different UIs, occasionally resetting a quality-level preference, and more. Some had big effects, whilst others didn’t.

The end-game here is the acknowledgement that mobile data costs consumers money, and clearly YouTube would like to reduce its bandwidth costs too. So when quality is not needed, don’t supply it.
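That "don't supply quality that isn't needed" principle can be sketched as simple ABR selection logic. This is a simplified illustration, not YouTube's player code: cap the chosen rendition by both estimated throughput and the resolution the viewing context can actually display, and leave extra headroom on metered (mobile data) connections. The ladder values are invented.

```python
# (width_px, mbps) renditions, ascending -- a hypothetical ABR ladder
LADDER = [(426, 0.4), (854, 1.2), (1280, 2.5), (1920, 5.0)]

def choose_rendition(throughput_mbps, display_width_px, metered=False):
    """Pick the highest rendition that fits the bandwidth, never
    exceeding what the screen can show; on metered connections use
    less of the available bandwidth to save the viewer's data."""
    headroom = 0.5 if metered else 0.8
    usable = throughput_mbps * headroom
    best = LADDER[0]
    for width, rate in LADDER:
        if rate <= usable and width <= display_width_px:
            best = (width, rate)
    return best
```

For example, a viewer watching in a small embedded player never receives more than the rendition matching that window, however fast their connection, and a metered viewer is stepped down sooner than an unmetered one at the same throughput.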

The talk starts with a brief AV1 update, YouTube being an early adopter of it in production.

Watch now!