
Video: AI – more ways it will revolutionise our industry

Posted on 12th June 2020 by Russell Trafford-Jones

Artificial Intelligence (AI) and Machine Learning (ML) dominate many discussions, and for good reason: they usually reduce both time and cost. In the broadcast industry there are some obvious areas where they will, and already do, help. But what’s the timetable? Where are we now? And what are we trying to achieve with the technology?

Edmundo Hoyle from TV Globo explains how they have transformed thumbnail selection for their OTT service from a manual process taking an editor 15 minutes per video to an automated process using machine learning. A good thumbnail is relevant, is a clear picture, and contains no nudity or weapons. Edmundo explains that they tackled this in a three-step process. The first step uses NLP analysis of the episode summary to understand what’s relevant and to match that with the subtitles (closed captions). Doing this identifies times in the video which should be examined more closely for thumbnails.
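
As a purely illustrative sketch of that first step (this is not Globo’s code, and the TF-IDF matching here is an assumption standing in for whatever NLP model they use), scoring timed subtitle cues against the episode summary to find candidate moments might look like this:

```python
# Hypothetical sketch: score subtitle cues against the episode summary
# to find moments worth examining for thumbnails. Not Globo's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def candidate_times(summary: str, cues: list[tuple[float, str]], top_n: int = 10):
    """cues is a list of (timestamp_seconds, subtitle_text) pairs."""
    texts = [summary] + [text for _, text in cues]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
    # Similarity of each subtitle cue to the summary (row 0)
    scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
    ranked = sorted(zip(scores, (t for t, _ in cues)), reverse=True)
    return [t for _, t in ranked[:top_n]]
```

The top-scoring cue times then become the windows handed to the frame-quality and content checks described next.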

The durations identified by this process are then analysed for blur-free frames (amongst other metrics to detect clear videography), which gives them candidate pictures that may still contain problematic imagery. These candidates are passed to the AWS Rekognition service, which returns information on whether faces, guns or nudity are present in the frame. Edmundo finishes by showing the results which are, in general, very positive. The final choice of thumbnail is still moderated by editors, but the process is much more streamlined: since it selects four options, editors are much less likely to have to find an image manually. Edmundo closes by explaining the chief causes of an image being rejected, which are all relatively easy to improve upon and tend to be related to a person looking down or away from the camera.
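
A rough sketch of the frame screening, assuming OpenCV’s variance-of-Laplacian as a sharpness proxy and Rekognition’s moderation and face APIs (the exact metrics and thresholds Globo uses aren’t detailed in the talk), could look like this:

```python
# Hypothetical sketch of frame screening: sharpness check with OpenCV,
# then content checks via AWS Rekognition. Thresholds are illustrative.
import cv2
import boto3

rekognition = boto3.client("rekognition")

def is_sharp(frame_bgr, threshold: float = 100.0) -> bool:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

def is_acceptable(frame_bgr) -> bool:
    ok, jpeg = cv2.imencode(".jpg", frame_bgr)
    if not ok:
        return False
    image = {"Bytes": jpeg.tobytes()}
    # Reject frames flagged for nudity, violence, weapons, etc.
    moderation = rekognition.detect_moderation_labels(Image=image, MinConfidence=80)
    if moderation["ModerationLabels"]:
        return False
    # Prefer frames with at least one clearly visible face
    faces = rekognition.detect_faces(Image=image, Attributes=["DEFAULT"])
    return len(faces["FaceDetails"]) > 0
```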

We’ve seen before on The Broadcast Knowledge the idea of super-resolution, which involves up-scaling images/video using machine learning. The result is better than using standard linear filters like Lanczos. This has been covered in a talk from Mux’s Nick Chadwick about LCEVC. Yiannis Andreopoulos from iSize talks next about the machine learning they use to improve video, which applies some of the same principles to pre-treat, or as they call it ‘pre-code’, video before it’s encoded with a standard MPEG encoder (whether that be AVC, HEVC or the upcoming VVC). Yiannis explains how they are able to work out the best resolutions to encode at and scale the image intelligently to suit. This delivers significant gains across all the metrics, leading to bandwidth reduction. Furthermore, he outlines a feedback system which maintains the structure of the video, avoiding the blurriness that can result from being too subservient to the drive to reduce bitrate by simplifying the picture, while also protecting against going too far down the sharpness path and only chasing metric gains. He concludes by outlining future plans.
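
The resolution-selection element has a lot in common with per-title encoding. As a purely illustrative sketch (iSize’s pre-coder is a learned, proprietary system and this is not it), choosing an encode resolution by trial-encoding candidates and scoring each after scaling back up might look like the following, with the hypothetical `encode_and_score` callback standing in for the encoder plus a metric such as VMAF:

```python
# Hypothetical sketch of per-title resolution selection: encode at several
# candidate resolutions, score each after upscaling the decode back to the
# source resolution, and keep the best-scoring one.
from typing import Callable

def pick_resolution(
    source: str,
    candidates: list[tuple[int, int]],
    bitrate_kbps: int,
    encode_and_score: Callable[[str, tuple[int, int], int], float],
) -> tuple[int, int]:
    """encode_and_score encodes `source` at a given resolution and bitrate,
    scales the decode back to source resolution, and returns a quality
    score (higher is better), e.g. VMAF."""
    return max(candidates, key=lambda res: encode_and_score(source, res, bitrate_kbps))
```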

Grant Franklin Totten then steps up to explain how Al Jazeera have used AI/machine learning to help automate editorial compliance processes. He introduces the idea of ‘Contextual Video Metadata’, which adds a level of context to what would otherwise be stand-alone metadata. To understand this, we need to learn more about what Al Jazeera is trying to achieve.

As a news organisation, Al Jazeera has many aspects of reporting to balance. They are particularly focused on avoiding bias, ensuring good fact-checking and catching fake news. To support this, they are using AI and machine learning, with both textual and video-based methods of detecting fake news. As an example of their search for bias, they have implemented voice detection and analysed MPs’ speaking time in Ireland. Irish law requires equal speaking time, yet Al Jazeera can easily show that some MPs get far more time than others. Another challenge is detecting incorrect on-screen text, with the example given of accidentally naming Trump as Obama on a lower-third graphic. Using OCR, NLP and face recognition, they can flag such issues with the hope that they can be corrected before transmission. To understand, for example, who is president, Al Jazeera is in the process of refining their knowledge graph to capture the information they need to check against.
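
As a toy illustration of the lower-third check (not Al Jazeera’s pipeline; the `ocr` and `recognise_face` helpers and the knowledge-graph entries are hypothetical stand-ins), comparing the on-screen caption with a face-recognition result and a simple knowledge-graph lookup might look like this:

```python
# Hypothetical sketch of an editorial-compliance check: does the name on a
# lower-third match the recognised face, and does the stated role agree
# with the knowledge graph? Data and helpers are illustrative only.
KNOWLEDGE_GRAPH = {
    "Donald Trump": {"role": "US President", "from": 2017, "to": 2021},
    "Barack Obama": {"role": "US President", "from": 2009, "to": 2017},
}

def check_lower_third(frame, ocr, recognise_face, year: int) -> list[str]:
    """Return warnings to surface to an editor before transmission."""
    warnings = []
    caption = ocr(frame)              # e.g. "Barack Obama, US President"
    person = recognise_face(frame)    # e.g. "Donald Trump"
    if person and person not in caption:
        warnings.append(f"Caption '{caption}' does not match recognised face '{person}'")
    facts = KNOWLEDGE_GRAPH.get(person)
    if facts and facts["role"] in caption and not (facts["from"] <= year <= facts["to"]):
        warnings.append(f"'{person}' was not {facts['role']} in {year}")
    return warnings
```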

AI and machine learning (ML) aren’t going anywhere. This talk shines a light on several areas where they are particularly helpful in broadcast. You can count on hearing about significant improvements in AI and ML’s effectiveness over the next few years, and about their march into other parts of the workflow.
Watch now!
Speakers

Edmundo Hoyle
TV System Researcher
TV Globo
Yiannis Andreopoulos
Technical Director,
iSize Technologies
Grant Franklin Totten
Head of Media & Emerging Platforms,
Al Jazeera Media Networks

Edmundo Hoyle (GLOBO), Yiannis Andreopoulos (iSize Technologies) and Grant Totten (Al Jazeera Media Network).
