The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, which introduce a large number of topics in one place, are far too rare. For those starting out, or for anyone who needs to revise a topic, this really hits the mark, particularly as there are many new topics.
John Mailhot takes the lead on SMPTE 2110 explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite. He talks in more detail about the audio and metadata parts of the standard suite.
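As a rough illustration of how those separate essence flows stay in sync, here is a minimal sketch based on the ST 2110-10 rule that each flow's RTP timestamps are derived from the shared PTP time multiplied by that essence's media clock rate, modulo 2^32 (simplified; real senders must also handle details such as video frame alignment):

```python
# Sketch: deriving RTP timestamps for separate essence flows from one
# shared PTP clock, as ST 2110-10 specifies (simplified illustration).

RTP_WRAP = 2 ** 32

def rtp_timestamp(ptp_seconds: float, media_clock_hz: int) -> int:
    """RTP timestamp = PTP time since epoch * media clock rate, mod 2^32."""
    return int(ptp_seconds * media_clock_hz) % RTP_WRAP

ptp_now = 1_700_000_000.0  # example PTP time in seconds

video_ts = rtp_timestamp(ptp_now, 90_000)  # ST 2110-20 video media clock
audio_ts = rtp_timestamp(ptp_now, 48_000)  # ST 2110-30 audio media clock

# Because both timestamps derive from the same PTP instant, a receiver
# can realign the flows even though they arrive as independent streams.
print(video_ts, audio_ts)
```

The key point is that no flow carries a reference to any other; alignment falls out of every device sharing one PTP-disciplined clock.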
Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use such as 709, 2100 and P3 before turning to file formats. With the advent of HDR video and displays which can show bright video, Eric takes some time to explain why this could represent a problem for visual health as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.
Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity for improving workflows and adding more feedback and iterative refinement into your products and infrastructure.
Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.
Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers and looking at the interesting problem of front speakers in cinemas. They have long been placed behind the screen, which has meant the screens have to be perforated to let the sound through, and this perforation interferes with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move. With them out of the line of sight, how can we keep the sound in the right place for the audience?
This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.
Many of us know how speakers work, but when it comes to phased arrays or object audio we’re losing our footing. Wherever you are in the spectrum, this dive into speakers and sound systems will be beneficial.
Ken Hunold from Dolby Laboratories starts this talk with a short history of sound in both film and TV unveiling the surprising facts that film reverted from stereo back to mono around the 1950s and TV stayed mono right up until the 80s. We follow this history up to now with the latest immersive sound systems and multi-channel sound in broadcasting.
Whilst the basics of speakers are fairly widely known, Ken begins by looking at how a basic speaker is constructed, covering the different shapes and versions of speakers and their enclosures, before moving on to column speakers and line arrays.
Multichannel home audio continues to offer many options for speaker positioning and speaker type including bouncing audio off the ceilings, so Ken explores these options and compares them including the relatively recent sound bars.
Cinema sound has always been critical to the effect of cinema and foundational to the motivation for people to come together and watch films away from their TVs. There have long been many speakers in cinemas and Ken charts how this has changed as immersive audio has arrived and enabled an illusion of infinite speakers with sound all around.
In the live entertainment space, sound, again, is different: the scale is often much bigger and the acoustics very different. Ken talks about the challenges of delivering sound to so many people, keeping the sound even throughout the auditorium and dealing with the delay of the relatively slow-moving sound waves. The talk wraps up with questions and answers.
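To put a number on that delay problem, here is a back-of-envelope sketch (the 343 m/s figure is the approximate speed of sound in air at 20 °C; the 60 m distance is an illustrative venue size, not from the talk):

```python
# Sketch: why delay matters in large venues. Sound travels at roughly
# 343 m/s in air, so listeners far from the stage hear the PA noticeably
# late, and secondary "delay" speakers must be time-aligned to match.

SPEED_OF_SOUND = 343.0  # m/s, approximate at 20 degrees C

def arrival_delay_ms(distance_m: float) -> float:
    """Time in milliseconds for sound to travel distance_m metres."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# A delay tower 60 m from the main PA must be fed audio delayed by
# roughly this much so its output lines up with sound from the stage.
print(round(arrival_delay_ms(60.0), 1))  # ~174.9 ms
```

At these timescales even a few metres of mismatch is audible as an echo, which is why large venues measure and tune these delays carefully.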
ATSC 3.0 is the next sea change in North American broadcasting, shared with South Korea, Mexico and other locations. Depending on your viewpoint, this could be as fundamental as the move to digital, which came in lockstep with the move to HD programming, all those years ago.
ATSC 3.0 takes terrestrial broadcasting into the IP world, meaning everything transmitted over the air is done over IP, and it brings with it the ability to split the bandwidth into separate pipes.
Here, Dr. Richard Chernock presents a detailed description of the available features within ATSC. He explains the new constellations and modulation properties delving into the ability to split your transmission bandwidth into separate ‘pipes’. These pipes can have different modulation parameters, robustness etc. The switch from 8VSB to OFDM allows for Single Frequency Networks which can actually help reception (due to guard intervals).
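A rough sketch of why guard intervals make SFNs workable: a second transmitter's signal is harmless (and can even help) if its extra path delay fits inside the guard interval. The figures below are illustrative, not taken from the ATSC 3.0 parameter tables:

```python
# Back-of-envelope sketch: an OFDM guard interval sets the largest
# path-length difference between SFN transmitters that a receiver can
# tolerate without inter-symbol interference. Figures are illustrative.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def max_path_difference_km(guard_interval_us: float) -> float:
    """Largest extra distance an echo may travel and still arrive
    within the guard interval."""
    return guard_interval_us * 1e-6 * SPEED_OF_LIGHT / 1000.0

# A ~222 microsecond guard interval tolerates roughly a 66 km path
# difference, which bounds how far apart SFN transmitters can usefully be.
print(round(max_path_difference_km(222.0), 1))
```

This is the sense in which echoes "help" reception: within the guard interval, a delayed copy from another transmitter adds signal energy rather than corrupting the symbol.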
Additionally, the standard supports HEVC and scalable video (SHVC), whereby a single UHD encode can be sent which has an HD base layer that can be decoded by every decoder, plus an ‘enhancement layer’ which can be optionally decoded to produce a full UHD output for those decoders/displays which can support it.
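The receiver-side decision can be sketched as follows; this is a hypothetical illustration of the layering idea only, and the names are invented rather than taken from any real decoder API:

```python
# Hypothetical sketch of SHVC-style layer selection: every receiver
# decodes the HD base layer; UHD-capable receivers additionally apply
# the enhancement layer. Names here are illustrative only.

def pick_layers(supports_uhd: bool) -> list:
    layers = ["HD base layer"]                   # decodable by every receiver
    if supports_uhd:
        layers.append("UHD enhancement layer")   # optional extra data
    return layers

print(pick_layers(False))  # legacy receiver: base layer only
print(pick_layers(True))   # UHD receiver: base + enhancement
```

The attraction for broadcasters is that one transmitted bitstream serves both populations of receivers, instead of simulcasting HD and UHD separately.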
With the move to IP, there is a blurring of broadcast and broadband. This can be used to deliver additional audio tracks via broadband to be played alongside the main video, and broadband can also act as a return path to the broadcaster, which helps with interactivity and audience measurement.
Dr. Chernock covers HDR, better pixels and Next Generation Audio as well as Emergency Alerts functionality improvements and accessibility features.
Dr. Richard Chernock
Chief Science Officer,
Webinar Date: 18th March 2019
Time: 14:00 GMT / 15:00 CET
Object oriented audio is a relatively new audio technique which doesn’t simply send audio as one track or two; instead it sends individual audio objects – simplistically, we can think of these as audio samples – which also come with position information.
With non-object-oriented audio, there is very little a speaker system can do to adjust the audio to match the room. The audio was created for 8 speakers, or 6, or 2, and so on; if you have a system that only has 4 speakers, or they are in unusual places, it’s a compromise to make it sound right.
Object oriented audio sends the position information for some of the audio which means that the decoder can work out how much of the sound to put in each speaker to best represent that sound for whatever room and speaker set-up it has.
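A toy rendering of that idea is sketched below. Real object renderers (such as VBAP-based or Dolby’s) are far more sophisticated; this hypothetical version simply weights each speaker by proximity to the object and normalises the gains for constant power, just to show how position metadata plus a known speaker layout yields per-speaker levels:

```python
import math

# Toy object-audio renderer: given an object's position and an arbitrary
# speaker layout, compute a gain per speaker so the object appears to
# come from roughly the right place. Illustrative only.

def object_gains(obj_xy, speakers_xy):
    weights = []
    for sx, sy in speakers_xy:
        d = math.dist(obj_xy, (sx, sy))
        weights.append(1.0 / (d + 1e-6))  # closer speaker -> more level
    norm = math.sqrt(sum(w * w for w in weights))  # constant-power normalise
    return [w / norm for w in weights]

# Four speakers in the room corners; an object near the front-left
# corner draws most of its energy from that speaker.
speakers = [(-1, 1), (1, 1), (-1, -1), (1, -1)]
gains = object_gains((-0.9, 0.9), speakers)
print([round(g, 3) for g in gains])
```

The same object metadata would produce a different, equally sensible set of gains for a 5-speaker or 7-speaker layout, which is exactly the flexibility the paragraph above describes.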
AC-4 from Dolby is one technology which allows objects to be sent with the audio. It still supports conventional 5.1 style sound but can also contain up to 7 audio objects. AC-4 is one NGA technology adopted by DVB for DASH.
In this webinar, Simon Tuff from the BBC discusses what the Audio Video Coding (AVC) experts of DVB have been working on to introduce Next Generation Audio (NGA) to the DVB specifications in recent years. With the latest version of TS 101 154, DVB’s guidelines for the use of video and audio coding in broadcast and broadband applications, now published by ETSI, it seems like a great time to unpack the audio part of the toolbox and share the capabilities of NGA via a webinar.