Video: Automated Tagging of Image and Video Collections using Face Recognition

Real-world examples of using Machine Learning to detect faces in archives are discussed here by Andrew Brown and Ernesto Coto from the University of Oxford. Working with the British Film Institute (BFI) and BBC News, they show the value of facial recognition and metadata comparisons.

Andrew Brown was given the cast lists of thousands of films and shows how the team not only discovered errors and forgotten cast members, but also developed a searchable interface to find every instance of an actor.

Ernesto Coto demonstrates the searchable BBC News archive interface he developed, which uses Google Images results for a famous person to find all their occurrences in over 10,000 hours of video and jump straight to those points in the video.
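
To make the idea concrete, here is a minimal sketch, not the presenters' actual pipeline, of searching video frames for a known face using the open-source face_recognition and OpenCV libraries; the file names and one-frame-per-second sampling are illustrative assumptions.

```python
# Illustrative sketch only: find where a known face appears in a video.
# Assumes a reference still "reference_face.jpg" and a video "archive_episode.mp4".
import cv2
import face_recognition

# Encode the face we want to find, e.g. a publicity still of the actor.
reference = face_recognition.load_image_file("reference_face.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

video = cv2.VideoCapture("archive_episode.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 25
hits = []          # timestamps (in seconds) where the face appears
frame_index = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    # Sample roughly one frame per second to keep the search tractable.
    if frame_index % int(fps) == 0:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for encoding in face_recognition.face_encodings(rgb):
            if face_recognition.compare_faces([reference_encoding], encoding)[0]:
                hits.append(frame_index / fps)
                break
    frame_index += 1

video.release()
print("Face found at (seconds):", hits)
```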

A great video from the No Time To Wait 3 conference, which looked at all aspects of archiving for preservation.

Watch now!

Video: Integrating Machine Learning with ABR streaming at YouTube

In another great talk from Demuxed 2018, Steve Robertson from YouTube sheds light on trials they have been running, some with Machine Learning, to understand viewers' appreciation of quality. Tests included profiling the ways, and hence the environments, in which users watch, trying different UIs, and occasionally resetting a quality-level preference, among others. Some had big effects, whilst others didn't.

The end game here is an acknowledgement that mobile data costs consumers money, though clearly YouTube would like to reduce its own bandwidth costs too: when quality is not needed, don't supply it.

The talk starts with a brief AV1 update, YouTube being an early adopter of it in production.

Watch now!

On-Demand Webinar: AI for Media and Entertainment

In this webinar, visual effects and digital production company Digital Domain will share their experience developing AI-based toolsets for applying deep learning to their content creation pipeline. AI is no longer just a research project but also a valuable technology that can accelerate labor-intensive tasks, giving time and control back to artists.

The webinar starts with a brief overview of deep learning and dives into examples of convolutional neural networks (CNNs), generative adversarial networks (GANs), and autoencoders. These examples include flavors of neural networks useful for everything from face swapping and image denoising to character locomotion, facial animation, and texture creation.
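
As a taster of one of those techniques, the sketch below shows what a small convolutional denoising autoencoder can look like in Keras; it is an illustrative toy, not Digital Domain's tooling, and the layer sizes, image resolution and random training data are assumptions.

```python
# Toy convolutional denoising autoencoder (illustrative only).
import numpy as np
from tensorflow.keras import layers, models

autoencoder = models.Sequential([
    layers.Input(shape=(128, 128, 1)),               # greyscale input patch
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),                          # 128 -> 64
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),                          # 64 -> 32
    layers.Conv2DTranspose(64, 3, strides=2, activation="relu", padding="same"),  # 32 -> 64
    layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same"),  # 64 -> 128
    layers.Conv2D(1, 3, activation="sigmoid", padding="same"),                    # reconstructed image
])
autoencoder.compile(optimizer="adam", loss="mse")

# Train on (noisy, clean) pairs; random arrays stand in for real frames here.
clean = np.random.rand(16, 128, 128, 1).astype("float32")
noisy = np.clip(clean + np.random.normal(0, 0.1, clean.shape), 0.0, 1.0).astype("float32")
autoencoder.fit(noisy, clean, epochs=1, batch_size=4)
```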

By attending this webinar, you will:

  • Get a basic understanding of how deep learning works
  • Learn about research that can be applied to content creation
  • See examples of deep learning–based tools that improve artist efficiency
  • Hear about Digital Domain’s experience developing AI-based toolsets

Watch Now!

DOUG ROBLE
Senior Director of Software R&D, Digital Domain
RICK CHAMPAGNE
Global Media and Entertainment Strategy and Marketing, NVIDIA
RICK GRANDY
Senior Solutions Architect, Professional Visualization, NVIDIA
GARY BURNETT
Solutions Architect, Professional Visualization, NVIDIA

Webinar: Making Your Video Service Smarter With Machine Learning

Machine Learning is new and gets lots of attention; this webinar brings real-life examples of machine learning in use for broadcast.

AWS Elemental and GrayMeta discuss:
• Ways to enrich content to increase its value
• Using clip production for targeted personalization
• Creating ad pods for effective monetisation
• A variety of workflows, including content indexing, metadata generation, content retrieval, action metadata, and content monetisation

This webinar will highlight a key use case from the Sky News “Royal Wedding: Who’s Who Live” app. Hear from GrayMeta about the innovative workflow that used machine learning functionality to create an enhanced experience for users during the Royal Wedding.
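
For a sense of how such a workflow can be wired up, here is a hedged sketch, not GrayMeta's implementation, of tagging a frame grab with Amazon Rekognition's celebrity-recognition API; the bucket, object key and region are made up for illustration.

```python
# Illustrative only: tag a single frame with Amazon Rekognition celebrity recognition.
# Bucket and key names are hypothetical.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")

response = rekognition.recognize_celebrities(
    Image={"S3Object": {"Bucket": "example-news-frames",
                        "Name": "royal-wedding/frame_0042.jpg"}}
)

for celebrity in response["CelebrityFaces"]:
    print(f"{celebrity['Name']} (confidence {celebrity['MatchConfidence']:.1f}%)")
```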

Register Now!

Moderator

Eric Schumacher-Rasmussen
VP / Editor
Streaming Media

Speakers

Kiran Patel
Solutions Marketing Manager
AWS Elemental

Chris Kuthan
Business Development Manager
Amazon Web Services

Josh Wiggins
Chief Commercial Officer
GrayMeta

Webinar: Transforming live sports production with Artificial Intelligence


Webinar: Thursday 24 May, 16:00 BST

In this webinar from IBC365, sponsored by Tedial, we hear how Artificial Intelligence is being used in live broadcast from Daniel McDonnell of Timeline TV, Dr Rob Oldfield of Salsa Sound and Jérôme Wauthoz of Tedial.
They discuss how producing and managing the most valuable live sports broadcasts is being transformed by AI-enabled tools that automate critical parts of the production process: AI cameras focusing in on the action, automated multi-channel sound mixes, and powerful media management tools linking live action with real-time metadata to speed production and automate the creation of match highlights.

You’ll hear about cutting-edge tools for live sports production and hear from leading broadcasters that are using them.

Register Now!

Speakers

Daniel McDonnell

Daniel McDonnell, Managing Director, Timeline Television

Daniel McDonnell began his career as a trainee at the BBC in 1989. He worked for the corporation for 17 years as an engineer and VT operator and in later years as an editor.

He formed Timeline in 2006 to specialise in providing server systems and shared edit systems to the broadcast sector. The company has now grown to have more than 130 full-time staff. Covering large sporting events such as Wimbledon, Timeline TV provides outside broadcasts, post-production and studio-based services to major UK and international networks such as the BBC and ITV, as well as to the independent sector.

For the America’s Cup the company developed robotic, waterproof camera systems to capture the action from the competing yachts. It also recently designed and built the world’s most advanced IP 4K HDR outside broadcast truck.

Dr Rob Oldfield

Dr Rob Oldfield, Co-Founder, Salsa Sound

Once an academic, now an entrepreneur, Rob completed his PhD in audio technology at the University of Salford and continued working at the university in a research and consultancy role for several years until he co-founded Salsa Sound Ltd.

Rob’s interests are primarily in improving broadcast audio quality and developing new audio capture technologies for a better end-user experience.

Patented technology from his research led to the recent formation of the spin-out company Salsa Sound, which received backing from the Royal Academy of Engineering, who appointed Rob as one of their Enterprise Fellows. Salsa Sound focuses specifically on sports broadcast, and Rob's goal is to make the sound of sport on TV more engaging, more cinematic and more interactive.

Jérôme Wauthoz

Jérôme Wauthoz, Vice President Products, Tedial 

Jerome Wauthoz joined Tedial in 2017 following more than 22 years at EVS Broadcast Equipment.

He has a deep understanding of live production workflows and extensive experience analysing customer needs across global markets. He launched his career at EVS as a software engineer and subsequently held management-level positions, including R&D manager, product manager and market solutions manager.

Prior to joining Tedial he served as vice president of products, responsible for overseeing the team tasked with developing the company's next-generation solutions. Jerome holds a Master's degree in Engineering in Electro-Mechanics from Liège University, Belgium, where he also served as a teaching assistant.

Moderator

Robert Ambrose

Robert Ambrose, Managing Consultant, High Green Media

Rob Ambrose (@rambrose) is a consultant, industry analyst, writer and technologist providing strategic advice and content creation to media companies and their technology vendors.

As Founder and Managing Consultant at High Green Media, Rob has expert knowledge of media business systems and the content supply chain – ranging from scheduling and rights management through to content operations, media asset management and workflow. He’s focused on the transformative impact of cloud and data analytics on the media technology landscape.

He’s an active speaker and presenter on media and entertainment industry trends and has extensive international experience, working directly with media companies and vendors in Europe, the Middle East, Africa, Asia and North America. Rob holds an MBA with Distinction from Imperial College, London.

Register Now!

Video: The Role of Metadata in Content Discovery; Improving the User Experience and Reducing Churn

Rainy days increase viewing of horror movies, discovers Arash Pendari, CEO and Founder of Vionlabs. How can we market better to people? Using machine learning and advanced analytics, Arash talks about the role of metadata in content discovery, ensuring that viewers are not pigeonholed and that recommendation engines provide suggestions with depth.

This presentation is from Streaming Tech Sweden 2017, the tech conference for the streaming tech community. With a dedicated focus on the technology of video streaming, it is the meeting place to be educated and inspired by experts in this area, network with the community and bring home new thoughts and ideas. With a no-sponsors policy, STSWE can independently choose the topics and speakers it and the community find most important and relevant.

Watch now

Video: How Machines Learn – AI follow up


A two-part look at how we actually teach computers to do things: recognise faces, understand video content, etc. A perfect follow-on from yesterday's explanation of what Deep Learning, Machine Learning and other terms from the world of AI mean.
CGP Grey is great at explaining complex topics (e.g. voting methods) in a very accessible way. Here we see the different ways computers can learn, which should allow us to ask deeper questions next time a company boasts an AI-powered feature.

Watch Part 1
Don’t forget Part 2!


Video: Understanding AI, Analytics, and Machine Learning


What is the difference between Deep Learning and Machine Learning?
This is a great grounding in AI-related terms from Richard Walsh of Sundog Media so you can navigate the hype and understand the real technology coming to the broadcast industry. It comes from a talk given at the SMPTE Conference symposium on Artificial Intelligence.
Watch Now