Date: November 14, 2019 / 8am PST / 11am EST / 16:00 GMT
Behind The Stream is an online show containing three webinars designed for sports media broadcasters, athletic teams, and digital rights holders.
The first of the three sessions covers creating the right experience for the service. Sports, in particular, offers many different ways to present graphics and stats, add interactivity and innovate in order to keep the audience with you and interested.
The second session is an intriguing look at using machine learning to analyse video and create metadata, including player tracking, and at how to process and display that data to add an extra layer of interest for the audience.
Last, and the longest session of the three, is an hour spent whiteboarding the streaming system itself: how the different elements in the cloud work together and what to look out for when implementing this yourself.
Whilst these sessions are specifically about AWS services, many of the principles carry over to other cloud providers. Given that, and that AWS is synonymous, for many, with ‘cloud’, learning the AWS way of doing things is a fantastic way to learn about operating in the cloud in general.
Real-world examples of using Machine Learning to detect faces in archives are discussed here by Andrew Brown and Ernesto Coto from The University of Oxford. Working with the British Film Institute (BFI) and BBC News, they show the value of facial recognition and metadata comparisons.
Andrew Brown was given the cast lists of thousands of films and shows how they managed not only to discover errors and forgotten cast members, but also to develop a searchable interface that finds all instances of an actor.
Ernesto Coto shows the searchable BBC News archive interface he developed, which uses Google Images results for a famous person to find all occurrences of them in over 10,000 hours of video and jump straight to that point in the video.
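The general idea behind this kind of archive search can be sketched quite compactly: embed a handful of reference photos of the person into one query vector, then compare it against face embeddings extracted from the archive's video frames. This is a minimal illustration only, not the Oxford team's actual code; the function names, the averaging of reference embeddings and the similarity threshold are all assumptions, and a real system would use a trained face-embedding model and an approximate-nearest-neighbour index rather than toy vectors.

```python
import numpy as np

def build_query_embedding(reference_embeddings):
    """Average the embeddings of several reference photos (e.g. image-search
    results for a person) into a single unit-length query vector.
    Hypothetical helper, for illustration only."""
    q = np.mean(reference_embeddings, axis=0)
    return q / np.linalg.norm(q)

def search_archive(query, frame_embeddings, timestamps, threshold=0.8):
    """Return timestamps of frames whose face embedding has cosine
    similarity above `threshold` with the query, so a player can jump
    straight to those points in the video."""
    norms = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    sims = norms @ query  # cosine similarity, since both sides are unit vectors
    return [t for t, s in zip(timestamps, sims) if s >= threshold]

# Toy usage: two reference photos, two archive frames at 12s and 30s.
refs = np.array([[1.0, 0.1], [0.9, 0.0]])
query = build_query_embedding(refs)
frames = np.array([[1.0, 0.05], [0.0, 1.0]])
hits = search_archive(query, frames, [12.0, 30.0])
```

With these toy vectors only the first frame matches, so `hits` contains just the 12-second mark.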
A great video from the No Time To Wait 3 conference, which looked at all aspects of archiving for preservation.
In another great talk from Demuxed 2018, Steve Robertson from YouTube sheds light on trials they have been running, some using machine learning, to understand viewers’ appreciation of quality. The tests involve profiling how – and hence in what environments – users watch, trying different UIs, occasionally resetting a quality-level preference, and more. Some had big effects, whilst others didn’t.
The end-game here acknowledges that mobile data costs consumers money, though clearly YouTube would like to reduce its bandwidth costs too. So when quality is not needed, don’t supply it.
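The "don't supply quality that isn't needed" idea can be illustrated with a simple cap on the ABR ladder: never offer renditions taller than the viewport, and apply a stricter cap on metered (mobile-data) connections. This is an illustrative sketch of the principle, not YouTube's actual policy; the function, parameter names and the 480p metered cap are assumptions.

```python
def cap_ladder(renditions, viewport_height, on_metered_network, metered_cap=480):
    """Filter an ABR ladder (rendition heights in pixels) down to what this
    session can actually benefit from: nothing taller than the viewport,
    with an extra cap on metered connections. Illustrative only --
    not YouTube's real policy."""
    cap = min(viewport_height, metered_cap) if on_metered_network else viewport_height
    allowed = [r for r in renditions if r <= cap]
    return allowed or [min(renditions)]  # always keep at least the lowest rung

ladder = [240, 360, 480, 720, 1080]
mobile = cap_ladder(ladder, viewport_height=1080, on_metered_network=True)
desktop = cap_ladder(ladder, viewport_height=720, on_metered_network=False)
```

Here the mobile-data session is held to 480p despite a full-height viewport, while the unmetered 720p window simply loses the 1080p rung it couldn't display anyway.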
The talk starts with a brief AV1 update, YouTube being an early adopter of it in production.