Video: Tidying Up (Bits on the Internet)

Netflix’s Anne Aaron explains how VMAF came about and how AV1 is going to benefit both the business and the viewers.

VMAF is a method for computers to calculate the quality of a video in a way that matches human opinion. Standing for Video Multi-Method Assessment Fusion, Anne explains that it’s a combination (fusion) of more than one metric, each capturing a different aspect of perceived quality. She presents data showing the improved correlation between VMAF and real-life subjective tests.
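
To illustrate the fusion idea, here is a toy sketch (not Netflix’s actual model, features or training data) of how several elementary quality measurements can be fused by a trained regressor into a single score that tracks viewer opinion:

    import numpy as np
    from sklearn.svm import SVR

    # Toy training data: each row holds elementary quality features for a clip
    # (stand-ins for measures of fidelity, detail loss and motion), and y holds
    # the corresponding scores from viewer tests.
    X_train = np.array([
        [0.95, 0.92, 2.0],
        [0.80, 0.75, 5.0],
        [0.60, 0.55, 8.0],
        [0.40, 0.35, 12.0],
    ])
    y_train = np.array([92.0, 75.0, 55.0, 30.0])

    # The "fusion" step: a regressor learns to map the combined features onto
    # the subjective score rather than relying on any single metric alone.
    model = SVR(kernel="rbf", C=100.0, gamma="scale")
    model.fit(X_train, y_train)

    # Score an unseen clip's features with the trained model.
    new_features = np.array([[0.85, 0.80, 4.0]])
    print(model.predict(new_features))  # a single predicted quality score

VMAF itself fuses its elementary metrics with a model trained against large subjective studies, which is why the combined score can track viewer opinion better than any one metric on its own.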

Anne’s job is to maximise the enjoyment of content through efficient use of bandwidth. She explains that there are many places where wireless data is limited, so squeezing the maximum amount of video through that data cap is essential to Netflix’s business health.

This ties in with why Netflix is part of the Alliance for Open Media, which is in the process of specifying AV1, the new video codec that promises bitrate improvements over and above HEVC. Anne expands on this and presents the aim of using AV1 to deliver 32 hours of video to subscribers with a 4GB data allowance.
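
For context, a back-of-the-envelope calculation shows the average bitrate such a target implies (assuming the 4GB is a 4 gigabyte data allowance, in decimal units):

    # Average bitrate implied by fitting 32 hours of video into 4 GB.
    budget_bits = 4 * 1000**3 * 8      # 4 GB expressed in bits
    duration_s = 32 * 3600             # 32 hours in seconds
    print(f"{budget_bits / duration_s / 1000:.0f} kbps")  # roughly 280 kbps on average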

Watch now!
Speaker

Anne Aaron

Video: Implementing AES67 and ST 2110-30 in Your Plant

AES67 is a flexible standard, but with that flexibility comes complexity and nuance. Implementing it within ST 2110-30 takes some care, and this talk covers lessons learnt in doing exactly that.

AES67 is a standard defined by the Audio Engineering Society to enable high-performance audio-over-IP streaming interoperability between various AoIP systems like Dante, WheatNet-IP and Livewire. It provides comprehensive interoperability recommendations in the areas of synchronization, media clock identification, network transport, encoding and streaming, session description, and connection management.

The SMPTE ST 2110 standards suite makes it possible to separately route and break away the essence streams – audio, video, and ancillary data. ST 2110-30 addresses system requirements and payload formats for uncompressed audio streams and references a subset of the AES67 standard.

In this video, Dominic Giambo from Wheatstone Corporation shares tips for implementing the AES67 and ST 2110-30 standards in a lab environment consisting of over 160 devices (consoles, surfaces, hardware and software I/O blades) and 3 different automation systems. The aim of the test was to pass audio through every single device, creating a very long chain, in order to detect any defects.

The following topics are covered:

  • SMPTE ST 2110-30 as a subset of AES67 (support of the PTP profile defined in SMPTE ST 2059-2, an offset value of zero between the media clock and the RTP stream clock, option to force a device to operate in PTP slave-only mode)
  • The importance of using IEEE-1588 PTP v2 master clock for accuracy
  • Packet structure (UDP and RTP header, payload type)
  • Network configuration considerations (mapping out IP and multicast addresses for different vendors, keeping all devices on the same subnet)
  • Discovery and control (SDP stream description files, configuration of signal flow from sources to destinations)

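Several of the points above come together in the SDP file that describes such a stream. The following is a minimal, hypothetical example (the addresses, ports and PTP grandmaster identity are invented) showing an L24/48kHz stereo payload mapping, the PTP reference clock signalling and the zero offset between the media clock and the RTP stream clock:

    v=0
    o=- 1311738121 1311738121 IN IP4 192.168.1.10
    s=Example AES67 stream
    c=IN IP4 239.69.1.10/32
    t=0 0
    m=audio 5004 RTP/AVP 96
    a=rtpmap:96 L24/48000/2
    a=ptime:1
    a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
    a=mediaclk:direct=0
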
Watch now!

You can download the slides here.

Speaker

Dominic Giambo
Senior Embedded Engineer
Wheatstone Corporation

Video: How speakers and sound systems work: Fundamentals, plus Broadcast and Cinema Implementations

Many of us know how speakers work, but when it comes to phased arrays or object audio we start to lose our footing. Wherever you are on that spectrum, this dive into speakers and sound systems will be beneficial.

Ken Hunold from Dolby Laboratories starts this talk with a short history of sound in both film and TV, unveiling the surprising facts that film reverted from stereo back to mono around the 1950s and that TV stayed mono right up until the 80s. We follow this history up to now with the latest immersive sound systems and multi-channel sound in broadcasting.

Whilst the basics of speakers are fairly widely known, Ken starts by looking at how they’re constructed and the different shapes and versions of basic speakers and their enclosures, before moving on to column speakers and line arrays.

Multichannel home audio continues to offer many options for speaker positioning and speaker type, including bouncing audio off the ceiling, so Ken explores and compares these options, including the relatively recent sound bars.

Cinema sound has always been critical to the effect of cinema and foundational to the motivation for people to come together and watch films away from their TVs. There have long been many speakers in cinemas and Ken charts how this has changed as immersive audio has arrived and enabled an illusion of infinite speakers with sound all around.

In the live entertainment space, sound is different again: the scale is often much bigger and the acoustics very different. Ken talks about the challenges of delivering sound to so many people, keeping the sound even throughout the auditorium and dealing with the delay of relatively slow-moving sound waves. The talk wraps up with questions and answers.

Watch now!

Speaker

Ken Hunold
Sr. Broadcast Services Manager, Customer Engineering
Dolby Laboratories, Inc.

Webinar: Engaging users and boosting advertising with AI

The honing of AI and Machine Learning continues apace. Streaming services are a particularly ripe area for AI, but the winners will be those that manage to differentiate themselves and innovate in their use of it.

Artificial Intelligence (AI) and Machine Learning (ML) are related technologies which replicate ‘human’ ways of recognising patterns, searching for patterns in large data sets so that similar data can be handled automatically in the future, rather than relying on traditional methods such as looking things up in a ‘database’. For the consumer it doesn’t actually matter whether they’re benefitting from AI or ML: they simply want better recommendations, better search and accurate subtitles (captions) on all their videos. If these things happened because of humans working behind the scenes, the experience would be just the same. But for the streaming provider everything has a cost, and there simply isn’t the budget to pay people to do these tasks; in some cases, humans couldn’t do the job at all. This is why AI is here to stay.

Date: Thursday 8th August, 16:00 BST / 11am EDT

In this webinar from IBC365, Media Distillery, Liberty Global and Grey Media come together to discuss the benefits of extracting images, metadata and other context from video, analysis of videos for contextual advertising, content-based search & recommendations, and ways to retain younger viewers.
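
As a toy illustration of the content-based recommendation idea mentioned above (the titles, features and weights are invented and bear no relation to any production system), a recommender can rank titles purely by their similarity to a viewer’s profile rather than by hand-written rules:

    import numpy as np

    # Invented feature vectors describing titles, e.g. [action, comedy, documentary].
    catalogue = {
        "Title A": np.array([0.9, 0.1, 0.0]),
        "Title B": np.array([0.1, 0.8, 0.1]),
        "Title C": np.array([0.0, 0.1, 0.9]),
    }

    # A viewer's taste profile, built from the patterns in what they have watched.
    viewer_profile = np.array([0.7, 0.3, 0.0])

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Rank every title by similarity to the profile - no per-title rules required.
    ranked = sorted(catalogue, key=lambda t: cosine(viewer_profile, catalogue[t]), reverse=True)
    print(ranked)  # e.g. ['Title A', 'Title B', 'Title C']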

AI is here to stay, touching the whole breadth of our lives, not just broadcast. So it’s worth learning how it can best be used to produce television, for streaming and in your business.

Register now!
Speakers

Martin Prins
Product Owner,
Media Distillery
Susanne Rakels
Senior Manager, Discovery & Personalisation,
Liberty Global
Ruhel Ali
Founder/Director,
Grey Media