Video: Next-generation audio in the European market – The state of play

Next-generation audio (NGA) refers to a range of new technologies which allow for immersive audio like 3D sound, increased accessibility, better personalisation and anything else which delivers a step change in the listener experience. NGA technologies can stand on their own but are often part of next-generation broadcast technologies like ATSC 3.0 or UHD/8K transmissions.

This talk from the Sports Video Group and Dolby presents one of the few 2020 case studies which delivered NGA over the air to homes. First, though, Dolby’s Jason Power looks at what NGA is and brings us up to date on how it has been deployed so far.

Whilst ‘3D sound’ is an easy feature to understand, ‘increased personalisation’ is less so. Jason introduces ideas for personalisation such as choosing which team you’re interested in and getting a different crowd mix depending on that choice, or choosing a different mic position, on the pitch or in the stands. The possibilities are vast and we’re only just starting to experiment with what’s possible and determine what people actually want.
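
As a rough sketch of how that kind of personalisation can work with object-based audio, the Python below mixes a commentary object and two crowd objects using gains chosen from a viewer preference. The object names, presets and gain values are invented for illustration; this is not any broadcaster’s or Dolby’s actual pipeline.

```python
# Illustrative only: object-based personalisation as per-object gain choices.
# Object names, sample rate and gain values are hypothetical.
import numpy as np

SAMPLE_RATE = 48_000  # Hz

def render_personalised_mix(objects: dict, preference: str) -> np.ndarray:
    """Mix mono audio objects into one signal using gains picked by the
    viewer's preference (e.g. which team's crowd mics to favour)."""
    presets = {
        "home_fan":   {"commentary": 1.0, "home_crowd": 1.0, "away_crowd": 0.3},
        "away_fan":   {"commentary": 1.0, "home_crowd": 0.3, "away_crowd": 1.0},
        "pitch_side": {"commentary": 0.5, "home_crowd": 0.7, "away_crowd": 0.7},
    }
    gains = presets[preference]
    length = max(len(signal) for signal in objects.values())
    mix = np.zeros(length)
    for name, signal in objects.items():
        mix[: len(signal)] += gains.get(name, 0.0) * signal
    return mix

# Example with one second of placeholder noise per object.
objects = {name: np.random.randn(SAMPLE_RATE) * 0.1
           for name in ("commentary", "home_crowd", "away_crowd")}
mix = render_personalised_mix(objects, "away_fan")
print(mix.shape)
```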

What can you do if you want to hear next-generation audio? Jason explains that four out of five TVs are now shipping with NGA and all of the top five manufacturers support at least one NGA technology. Such technologies include Dolby’s AC-4 and S-ADM. AC-4 allows delivery of Dolby Atmos, an object-based audio format which gives the receiver much more freedom to render the sound correctly for the current speaker setup. Should you change how many speakers you have, the decoder can render the sound differently to ensure the ‘stereo’ image remains correct.
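
To make that rendering idea concrete, here is a minimal sketch of receiver-side rendering: the same object position is mapped onto whichever speaker layout the decoder finds, using a simple constant-power pan between the two nearest speakers. The layouts and pan law are simplified assumptions, not the AC-4 or Atmos renderer.

```python
# Minimal sketch of receiver-side object rendering: the same object metadata
# (here just an azimuth in degrees) is mapped onto whatever speakers exist.
# This is a toy constant-power pairwise pan, not Dolby's actual renderer.
import math

LAYOUTS = {
    "stereo": {"L": -30, "R": 30},
    "5.0":    {"L": -30, "R": 30, "C": 0, "Ls": -110, "Rs": 110},
}

def render_object(azimuth_deg: float, layout: str) -> dict:
    """Return per-speaker gains that place the object between its two
    nearest speakers with constant total power."""
    speakers = LAYOUTS[layout]
    # Sort speakers by angular distance to the object position.
    ordered = sorted(speakers,
                     key=lambda s: abs((speakers[s] - azimuth_deg + 180) % 360 - 180))
    a, b = ordered[0], ordered[1]
    span = abs(speakers[a] - speakers[b]) or 1.0
    frac = min(abs(azimuth_deg - speakers[a]) / span, 1.0)
    gains = {s: 0.0 for s in speakers}
    gains[a] = math.cos(frac * math.pi / 2)
    gains[b] = math.sin(frac * math.pi / 2)
    return gains

print(render_object(10.0, "stereo"))  # leans right between L and R
print(render_object(10.0, "5.0"))     # mostly centre, a little right
```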

To find out more about the technologies behind NGA, take a look at this talk from the Telos Alliance.

Next, Matthieu Parmentier talks about the Roland Garros event in 2020, which was delivered using S-ADM plus Dolby AC-4. S-ADM is an open specification for metadata interchange, the aim of which is to help interoperability between vendors. The S-ADM metadata is embedded in the SDI and then transported uncompressed as SMPTE ST 302M.
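
For a feel of what that metadata describes, the snippet below builds a deliberately simplified, ADM-flavoured XML fragment in Python. The element names are loosely based on ITU-R BS.2076 (the Audio Definition Model that S-ADM serialises), but the structure is heavily reduced and the values are invented; a real S-ADM frame carries far more detail.

```python
# A deliberately simplified, ADM-flavoured metadata fragment built in Python.
# Element and attribute names are loosely based on ITU-R BS.2076; a real
# S-ADM frame carries considerably more structure and identifiers.
import xml.etree.ElementTree as ET

def build_metadata(object_name: str, azimuth: float, elevation: float, gain: float) -> str:
    """Return a small XML string describing one audio object's position and gain."""
    root = ET.Element("audioFormatExtended")
    obj = ET.SubElement(root, "audioObject", audioObjectName=object_name)
    block = ET.SubElement(obj, "audioBlockFormat")
    ET.SubElement(block, "position", coordinate="azimuth").text = str(azimuth)
    ET.SubElement(block, "position", coordinate="elevation").text = str(elevation)
    ET.SubElement(block, "gain").text = str(gain)
    return ET.tostring(root, encoding="unicode")

print(build_metadata("Commentary", azimuth=0.0, elevation=0.0, gain=1.0))
```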

ATEME’s Mickaël Raulet completes the picture by explaining their approach, which included setting up a full end-to-end system for testing and diagnosis. The event itself, we see, had three transmission paths: an SDR satellite backup and two feeds into the DVB-T2 transmitter at the Eiffel Tower.

The session ends with an extensive Q&A where the speakers discuss the challenges they faced and how they overcame them, as well as how their businesses are changing.

Watch now!
Speakers

Jason Power
Senior Director of Commercial Partnerships & Standards,
Dolby
Mickaël Raulet
Vice President of Innovation,
ATEME
Matthieu Parmentier
Head of Data & Artificial Intelligence,
France Télévisions
Moderator: Roger Charlesworth
Charlesworth Media

Video: Deployment of Ultra HD Services Around the Globe

In some parts of the industry, UHD is entirely absent. Thierry Fautier is here to shine a light on the progress being made around the globe in deploying UHD.

Thierry starts off by defining terms, which is important because several, often unmentioned, formats actually hide behind the term ‘UHD’. This section also shows how the different aspects of UHD, which include colour (WCG), HDR, audio (NGA) and frame rate to name only a few, fit together.

There’s then a look at the stats: where is HDR deployed? How is UHD typically delivered? And there’s the famed HDR Venn diagram showing which TVs support which formats.

As ever, live sport is a major testing ground, so the talk examines some lessons learnt from the 2018 World Cup, including a BBC case study. Not unrelated, there is a discussion of the state of UHD streaming, including CMAF.

This leads nicely on to Content-Aware Encoding (CAE), which was also in use at the World Cup.

Watch now!
Free registration required

Speaker

Thierry Fautier
President-Chair, Ultra HD Forum
VP Video Strategy, Harmonic

Video: Everything You Wanted to Know About ATSC 3.0

ATSC 3.0 is the next sea change in North American broadcasting, shared with South Korea, Mexico and other locations. Depending on your viewpoint, this could be as fundamental as the move to digital, in lockstep with the move to HD programming, all those years ago.

ATSC 3.0 takes terrestrial broadcasting into the IP world, meaning everything transmitted over the air is carried over IP, and it brings with it the ability to split the bandwidth into separate pipes.

Here, Dr. Richard Chernock presents a detailed description of the features available within ATSC 3.0. He explains the new constellations and modulation properties, delving into the ability to split your transmission bandwidth into separate ‘pipes’. These pipes can have different modulation parameters, robustness and so on. The switch from 8VSB to OFDM allows for single frequency networks (SFNs), which can actually help reception thanks to guard intervals.
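
A back-of-the-envelope illustration of why guard intervals help in a single frequency network: a signal from a second transmitter (or a strong echo) does little harm to OFDM demodulation as long as its extra delay fits inside the guard interval, so the guard interval roughly bounds the tolerable path-length difference. The guard-interval values below are example numbers, not figures from the talk.

```python
# Back-of-the-envelope SFN maths: a path-length difference is tolerable if the
# extra propagation delay fits inside the OFDM guard interval. Example values
# only; actual ATSC 3.0 guard intervals are configurable per deployment.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def max_path_difference_km(guard_interval_us: float) -> float:
    """Longest extra path (in km) whose delay still fits in the guard interval."""
    return SPEED_OF_LIGHT * guard_interval_us * 1e-6 / 1000.0

for gi_us in (100, 200, 400):
    print(f"{gi_us} us guard interval -> ~{max_path_difference_km(gi_us):.0f} km path difference")
```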

Additionally, the standard supports HEVC and scalable video coding (SHVC), whereby a single UHD encode can be sent with an HD base layer that every decoder can handle, plus an ‘enhancement layer’ which can optionally be decoded to produce a full UHD output on those decoders/displays which can support it.
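
Conceptually, the receiver-side logic looks something like the Python sketch below: every receiver can use the base layer on its own, while a UHD-capable receiver additionally combines an upsampled base picture with the enhancement data. This is a structural illustration only, not an actual SHVC decoder; the nearest-neighbour upsample stands in for the real inter-layer prediction.

```python
# Conceptual illustration of scalable (SHVC-style) decoding: the base layer is
# always usable on its own; the enhancement layer refines an upsampled copy of
# it into the full-resolution picture. Not a real video decoder.
import numpy as np

def decode_base(base_bitstream: np.ndarray) -> np.ndarray:
    """Stand-in for base-layer decoding: here it just returns an HD frame."""
    return base_bitstream  # pretend this is a decoded 1920x1080 luma plane

def upsample(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbour upsample standing in for inter-layer prediction."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def decode_scalable(base_bitstream, enhancement_residual=None):
    hd = decode_base(base_bitstream)
    if enhancement_residual is None:
        return hd                                # HD-only receiver
    return upsample(hd) + enhancement_residual   # UHD-capable receiver

hd_frame = np.zeros((1080, 1920), dtype=np.int16)
residual = np.zeros((2160, 3840), dtype=np.int16)
print(decode_scalable(hd_frame).shape)            # (1080, 1920)
print(decode_scalable(hd_frame, residual).shape)  # (2160, 3840)
```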

With the move to IP, there is a blurring of broadcast and broadband. Broadband can be used to deliver extra audio tracks to be played alongside the main video, and it can act as a return path to the broadcaster, helping with interactivity and audience measurement.

Dr. Chernock also covers HDR, ‘better pixels’ and Next Generation Audio, as well as improvements to Emergency Alert functionality and accessibility features.

Speaker

Dr. Richard Chernock
Chief Science Officer,
Triveni Digital