Real-world examples of using Machine Learning to detect faces in archives are discussed here by Andrew Brown and Ernesto Goto from the University of Oxford. Working with the British Film Institute (BFI) and BBC News, they show the value of facial recognition and metadata comparisons.
Andrew Brown was given the cast lists of thousands of films, and he shows how the team managed not only to discover errors and forgotten cast members, but also to develop a searchable interface that finds every instance of an actor.
Ernesto Goto shows the searchable BBC News archive interface he developed, which uses Google Images results for a famous person to find all occurrences of them in over 10,000 hours of video and jump straight to that point in the footage.
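The core matching step behind such an interface can be sketched as nearest-neighbour search over face embeddings. The tiny vectors, threshold, and timestamps below are invented stand-ins for the real face-recognition model described in the talk:

```python
import numpy as np

def build_query_embedding(reference_embeddings):
    """Fuse the embeddings of several reference photos (e.g. the
    image-search results for the person) into one unit query vector."""
    q = np.mean(reference_embeddings, axis=0)
    return q / np.linalg.norm(q)

def search_archive(query, frame_embeddings, timestamps, threshold=0.6):
    """Return (timestamp, similarity) for every detected face whose
    embedding is close to the query.  Embeddings are assumed
    L2-normalised, so a dot product is cosine similarity."""
    sims = frame_embeddings @ query
    hits = np.nonzero(sims >= threshold)[0]
    return [(timestamps[i], float(sims[i])) for i in hits]

# Toy 3-dimensional embeddings stand in for a real face-recognition model.
refs = np.array([[1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0]])         # two photos of the same person
query = build_query_embedding(refs)

frames = np.array([[1.0, 0.0, 0.0],        # face detected at 00:01:10
                   [0.0, 1.0, 0.0]])       # a different person at 00:02:45
matches = search_archive(query, frames, ["00:01:10", "00:02:45"])
print(matches)                             # only 00:01:10 matches
```

Scaling this from two faces to 10,000 hours of video is mostly an indexing problem: the per-frame embeddings are computed once, offline, so each query reduces to one matrix–vector product.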
A great video from the No Time To Wait 3 conference, which looked at all aspects of archiving for preservation.
In another great talk from Demuxed 2018, Steve Robertson from YouTube sheds light on trials they have been running, some with Machine Learning, to understand viewers’ appreciation of quality. The tests involve profiling the ways – and hence the environments – in which users watch, trying different UIs, occasionally resetting a quality-level preference, and more. Some had big effects, whilst others didn’t.
The end-game here acknowledges that mobile data costs consumers money, though clearly YouTube would like to reduce its own bandwidth costs too. So when quality is not needed, don’t supply it.
The talk starts with a brief AV1 update, YouTube being an early adopter of it in production.
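The “don’t supply quality that isn’t needed” principle can be sketched as a rendition-capping rule in an ABR ladder: limit selection to what the viewport can actually show, then pick the best of those the network can sustain. The ladder values below are illustrative, not YouTube’s:

```python
# Hypothetical bitrate ladder: (width, height, kbps) per rendition.
LADDER = [(256, 144, 100), (640, 360, 400),
          (1280, 720, 1500), (1920, 1080, 3000)]

def pick_rendition(viewport_w, viewport_h, bandwidth_kbps):
    """Choose a rendition capped at the display size, not just the
    highest bitrate the network allows."""
    # Renditions no larger than the viewport (fall back to the lowest).
    useful = [r for r in LADDER
              if r[0] <= viewport_w and r[1] <= viewport_h] or [LADDER[0]]
    # Of those, the best the measured bandwidth can sustain.
    affordable = [r for r in useful if r[2] <= bandwidth_kbps] or [useful[0]]
    return affordable[-1]

# A 360p viewport on a fast connection still gets 360p, saving data.
print(pick_rendition(640, 360, 5000))
```

A real player would add hysteresis, buffer-level checks, and user overrides, but the cap itself is this simple comparison.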
In this webinar, visual effects and digital production company Digital Domain will share their experience developing AI-based toolsets for applying deep learning to their content creation pipeline. AI is no longer just a research project but also a valuable technology that can accelerate labor-intensive tasks, giving time and control back to artists.
The webinar starts with a brief overview of deep learning and dives into examples of convolutional neural networks (CNNs), generative adversarial networks (GANs), and autoencoders. These examples include flavors of neural networks useful for everything from face swapping and image denoising to character locomotion, facial animation, and texture creation.
By attending this webinar, you will:
Get a basic understanding of how deep learning works
Learn about research that can be applied to content creation
See examples of deep learning–based tools that improve artist efficiency
Hear about Digital Domain’s experience developing AI-based toolsets
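Of the architectures mentioned, the autoencoder is the easiest to demonstrate compactly. As a minimal, runnable stand-in for the deep denoising networks described in the webinar (not Digital Domain’s actual method), here is a linear autoencoder, fitted in closed form via PCA, denoising toy 1-D “images”; all data and sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 16-pixel "images" drawn from a 2-dimensional pattern space.
t = np.linspace(0.0, 1.0, 16)
basis = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
clean = rng.normal(size=(200, 2)) @ basis
noisy = clean + rng.normal(scale=0.3, size=clean.shape)

# A *linear* autoencoder has a closed-form optimum: its encoder and
# decoder span the top principal components, so we can fit it with an
# SVD instead of gradient descent.
mean = noisy.mean(axis=0)
_, _, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
components = Vt[:2]                        # the 2-unit "bottleneck"

def encode(x):
    return (x - mean) @ components.T       # 16 pixels -> 2 codes

def decode(z):
    return z @ components + mean           # 2 codes -> 16 pixels

# Reconstruction through the bottleneck discards most of the noise,
# which lies outside the 2-D signal subspace.
denoised = decode(encode(noisy))
mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
```

A production denoiser would replace the linear encode/decode with a deep CNN trained by gradient descent, but the bottleneck principle is the same: force the data through a representation too small to carry the noise.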
Machine Learning is new and attracts a great deal of attention, and this webinar brings real-life examples of machine learning in use for broadcast.
AWS Elemental and GrayMeta discuss:
• Ways to enrich content to increase its value
• Using clip production for targeted personalization
• Creating ad pods for effective monetisation
• A variety of workflows, including content indexing, metadata generation, content retrieval, action metadata, and content monetisation
This webinar will highlight a key use case from the Sky News “Royal Wedding: Who’s Who Live” app. Hear from GrayMeta about the innovative workflow that used machine learning functionality to create an enhanced experience for users during the Royal Wedding.