Video: Next Generation TV Audio

Though often overlooked, audio is essential to television and film, so as the pixels get better, so should the sound. All aspects of audio are moving forward: more processing power at the receiver, better compression at the sender and a seismic shift in how audio is handled, even in the consumer domain. It’s fair to say that Dolby have been busy.

Larry Schindel from Linear Acoustic is here, thanks to the SBE, to bring us up to date on what’s normally called ‘Next Generation Audio’ (NGA). He starts from the basics, looking at how audio has traditionally been delivered as channels. Stereo sound is delivered as two channels, one for each speaker, with the sound engineer choosing how the audio is split between them. With the move to 5.1 and beyond, this continued with the delivery of 6, 8 or even more channels of audio. The trouble is that the mix was always fixed at the time it went through the sound suite. Mixing sound into channels makes assumptions about the layout of your speakers; sometimes it’s not possible to put your speakers in the ideal position, and your sound suffers.
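
To see how a channel-based mix bakes in assumptions, consider folding 5.1 down to stereo: the coefficients are fixed in advance, regardless of the listener’s room. This minimal sketch uses ITU-style -3 dB coefficients for the centre and surrounds (the LFE channel is commonly dropped in a stereo fold-down); it is an illustration, not any particular decoder’s implementation.

```python
import math

def downmix_5_1_to_stereo(l, r, c, lfe, ls, rs):
    """Fold one sample of a 5.1 channel set down to stereo with fixed
    ITU-style coefficients: centre and surrounds at -3 dB, LFE dropped."""
    a = 1 / math.sqrt(2)  # ~0.707, i.e. -3 dB
    left = l + a * c + a * ls
    right = r + a * c + a * rs
    return left, right
```

Because the coefficients are frozen at mix time, a listener whose surrounds sit in the wrong place has no way to compensate; that is exactly the limitation object-based audio addresses.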

Dolby Atmos has heralded a mainstream move to object-based audio where sounds are delivered with information about their position in the soundfield as opposed to the traditional channel approach. Object-based audio leaves the downmixing to the receiver which can be set to take into account its unique room and speaker layout. It represents a change in thinking about audio, a move from thinking about the outputs to the inputs. Larry introduces Dolby Atmos and details the ways it can be delivered and highlights that it can work in a channel or object mode.
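
The receiver-side rendering Larry describes can be illustrated with a toy constant-power panner: given an object’s position (here just an azimuth) and the azimuths of two real speakers, the renderer computes per-speaker gains at playback time. This is a hypothetical sketch of the general idea only, not Dolby’s actual rendering algorithm.

```python
import math

def pan_object(azimuth, spk_a, spk_b):
    """Constant-power pan of a sound object between two speakers at
    azimuths spk_a < spk_b (degrees). Returns (gain_a, gain_b).
    A toy illustration of receiver-side object rendering."""
    t = (azimuth - spk_a) / (spk_b - spk_a)  # 0 at spk_a, 1 at spk_b
    t = min(max(t, 0.0), 1.0)                # clamp to the speaker pair
    angle = t * math.pi / 2                  # map onto a quarter circle
    return math.cos(angle), math.sin(angle)  # gains with ga² + gb² = 1
```

The key point is that the speaker azimuths are supplied by the receiver for its own room, so the same object metadata renders sensibly on very different layouts.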

Larry then looks at where you can get media with Dolby Atmos. Cinemas are an obvious starting point, but there is a long list of streaming and pay-TV services which use it, too. Larry talks about the upcoming high-profile events which will be covered in Dolby Atmos showing that delivering this enhanced experience is something being taken seriously by broadcasters across the board.

Consumers still have the problem of getting the audio in the right place in their awkward, often small, rooms. Larry looks at some of the options for getting great audio in the home, including soundbars and speakers which bounce sound off the ceiling.

One of the key technologies for delivering Dolby Atmos is Dolby AC-4, the improved audio codec taking compression a step further than AC-3. We see that data rates have tumbled: for example, 5.1 surround on AC-3 would be 448 kbps but can now be done in 144 kbps with AC-4. Naturally, it supports channel and object modes, and Larry explains how it can deliver a base mix with other audio elements over the top for the decoder to place, allowing better customisation. This can include other languages or audio description / video description services. Importantly AC-4, like Dolby E, can be sent so that it doesn’t overlap video frames, allowing it to accompany routed audio. Without this awareness of video, any time a video switch was made, the audio would become corrupted and there would be a click.
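
The bandwidth impact of those figures is easy to quantify. Using the two bitrates quoted in the talk, this quick arithmetic shows the per-service saving a broadcaster gets by moving a 5.1 mix from AC-3 to AC-4:

```python
# Bitrates quoted in the talk for a 5.1 surround service.
AC3_5_1_KBPS = 448
AC4_5_1_KBPS = 144

saving_kbps = AC3_5_1_KBPS - AC4_5_1_KBPS        # 304 kbps per service
saving_pct = 100 * saving_kbps / AC3_5_1_KBPS    # ~68% reduction

print(f"AC-4 saves {saving_kbps} kbps per 5.1 service ({saving_pct:.0f}%)")
```

Across a multiplex carrying several audio services, that roughly two-thirds reduction frees meaningful capacity for video or additional services.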

Dolby Atmos and AC-4 stand on their own and are widely applicable to much of the broadcast chain. Larry finishes this presentation mentioning that Dolby AC-4 will be the audio of choice for ATSC 3.0. We’ve covered ATSC 3.0 extensively here at The Broadcast Knowledge so if you want more detail than there is in this section of the presentation, do dig in further.

Watch now!

Speaker

Larry Schindel
Senior Product Manager,
Linear Acoustic

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, which introduce a large number of topics, are far too rare. For those starting out, or who need to revise a topic, this really hits the mark, particularly as many new topics are covered.

John Mailhot takes the lead on SMPTE 2110 explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite. He talks in more detail about the audio and metadata parts of the standard suite.

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use such as 709, 2100 and P3 before turning to file formats. With the advent of HDR video and displays which can show bright video, Eric takes some time to explain why this could represent a problem for visual health as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits that it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity for improving workflows and adding more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces video codecs AV1 and VVC both, in their own way, successors to HEVC/h.265 as well as the two transport protocols SRT and RIST which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers and looking at the interesting problem of front speakers in cinemas. They have long been behind the screen, which has meant the screens have to be perforated to let the sound through, which itself interferes with the sound. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move. With them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!

Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group

Video: How speakers and sound systems work: Fundamentals, plus Broadcast and Cinema Implementations

Many of us know how speakers work, but when it comes to phased arrays or object audio we can lose our footing. Wherever you are on that spectrum, this dive into speakers and sound systems will be beneficial.

Ken Hunold from Dolby Laboratories starts this talk with a short history of sound in both film and TV, unveiling the surprising facts that film reverted from stereo back to mono around the 1950s and that TV stayed mono right up until the 80s. We follow this history up to now with the latest immersive sound systems and multi-channel sound in broadcasting.

Whilst the basics of speakers are fairly widely known, Ken starts by looking at how they’re set up and the different shapes and versions of basic speakers and their enclosures, then moves on to column speakers and line arrays.

Multichannel home audio continues to offer many options for speaker positioning and speaker type, including bouncing audio off the ceiling, so Ken explores and compares these options, including the relatively recent soundbars.

Cinema sound has always been critical to the effect of cinema and foundational to the motivation for people to come together and watch films away from their TVs. There have long been many speakers in cinemas and Ken charts how this has changed as immersive audio has arrived and enabled an illusion of infinite speakers with sound all around.

In the live entertainment space, sound is, again, different: the scale is often much bigger and the acoustics very different. Ken talks about the challenges of delivering sound to so many people, keeping the sound even throughout the auditorium and dealing with the delay of the relatively slow-moving sound waves. The talk wraps up with questions and answers.
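
The delay problem follows directly from the speed of sound, roughly 343 m/s in air at room temperature. A fill or delay speaker part-way down a large auditorium must be electronically delayed so its output lines up with the sound arriving acoustically from the main PA. A minimal sketch of that calculation:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at ~20 °C

def fill_delay_ms(distance_m):
    """Delay (in milliseconds) to apply to a fill speaker so its output
    stays in sync with sound travelling acoustically from the main PA
    over the given distance."""
    return 1000.0 * distance_m / SPEED_OF_SOUND_M_S
```

A delay tower about 69 m from the stage, for instance, needs roughly 200 ms of delay; in practice engineers often add a few extra milliseconds so the main PA still arrives fractionally first and localisation stays at the stage.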

Watch now!

Speakers

Ken Hunold
Sr. Broadcast Services Manager, Customer Engineering
Dolby Laboratories, Inc.

Video: What is Immersive Audio and Why is it so Cool?

With all manner of entertainment using the moniker “immersive”, what does this mean with regard to audio?

This webcast will describe what immersive audio is, why it is so cool, and what makes it vital to the industry. Brian Vessa from Sony Pictures Entertainment will take a deep dive into ST 2098-1, which defines immersive audio metadata and is the first document in the suite of SMPTE immersive audio standards.

Since the introduction of immersive audio to the cinema, SMPTE has been working to create standards to foster interoperability between the many variations of playback systems and the many variations in mixing tools. A number of standards documents have been created that address different aspects of the problem.

Find out more!