Video: 5 Myths About Dolby Vision & HDR debunked

There seems to be no let-up in the number of technologies coming to market, and whilst some, like HDR, have been slowly advancing for many years, the technologies that enable them, such as Dolby Vision, HDR10+ and the metadata-handling technologies further upstream, are more recent. So it’s no surprise that there is some confusion over what’s possible and what’s not.

In this video, Bitmovin and Dolby reveal the truth behind 5 myths surrounding the implementation and financial impact of Dolby Vision and HDR in general. Bitmovin sets the scene with Sean McCarthy giving an overview of their research into the market. He explains why quality remains important: simply put, either to keep up with competitors or to be a differentiator. Sean then outlines the ‘better pixels’ principle, underlining that improving the pixels themselves, with technologies such as wide colour gamut (WCG) and HDR, is often more effective than increasing resolution.

David Brooks then explains why HDR looks better, covering the biology and psychology behind the effect as well as the technology itself. The trick with HDR is that there are no extra brightness values for the pixels; rather, the brightness of each pixel is mapped onto a larger range. This mapping is the strength of the technology: altering the mapping gives different results, ultimately allowing you to run SDR and HDR workflows in parallel. David also explains how HDR can be mapped down to low-brightness displays.
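To make the idea of mapping code values onto a larger brightness range concrete, here is a minimal sketch of the PQ (SMPTE ST 2084) transfer function used by Dolby Vision, which maps a normalised 0–1 code value onto an absolute luminance range up to 10,000 nits. The constants come from the ST 2084 specification; the video itself discusses the mapping only in general terms.

```python
def pq_eotf(e: float) -> float:
    """Map a normalised PQ code value e (0..1) to display luminance in nits.

    Constants are those defined in SMPTE ST 2084.
    """
    m1 = 2610 / 16384       # 0.1593017578125
    m2 = 2523 / 4096 * 128  # 78.84375
    c1 = 3424 / 4096        # 0.8359375
    c2 = 2413 / 4096 * 32   # 18.8515625
    c3 = 2392 / 4096 * 32   # 18.6875

    ep = e ** (1 / m2)
    y = (max(ep - c1, 0.0) / (c2 - c3 * ep)) ** (1 / m1)
    return 10000.0 * y      # PQ peak luminance is 10,000 nits
```

Code value 0.0 maps to black and 1.0 to 10,000 nits; an SDR workflow instead uses a relative gamma curve, which is why the same mastered content can be mapped differently for displays of different capability.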

The last half of this video is dedicated to the myths. Each myth gets several slides of explanation; for instance, one suggests that the workflows are very complex. Hagan Last walks through a number of scenarios showing how dual (or even three-way) workflows can be achieved. The other myths, and the questions at the end, cover resolution, licensing cost, metadata, managing dual SDR/HDR assets and live workflows with Dolby Vision.

Watch now!

David Brooks
Senior Director, Professional Solutions,
Dolby Laboratories
Hagan Last
Technology Manager, Content Distribution,
Dolby Laboratories
Sean McCarthy
Senior Technical Product Marketing Manager,
Bitmovin
Moderator: Kieran Farr
VP Marketing,
Bitmovin

Video: Extension to 4K resolution of a Parametric Model for Perceptual Video Quality

Measuring video quality automatically is invaluable and, for many uses, essential. But as video evolves with higher frame rates, HDR, a wider colour gamut (WCG) and higher resolutions, we need to make sure the automatic evaluations evolve too. Known as ‘objective metrics’, these computer-based assessments go by names such as PSNR, DMOS and VMAF. One use for these metrics is to automatically analyse an encoded video to determine whether it looks good enough or should be re-encoded, allowing the bitrate to be optimised for quality. Rafael Sotelo, from the Universidad de Montevideo, explains how his university helped work on an update to Predicted MOS to do just this.
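For a flavour of what the simplest of these objective metrics computes, here is a minimal PSNR sketch (PSNR itself, not the Predicted MOS model the talk covers):

```python
import math

def psnr(reference, degraded, peak=255.0):
    """Peak Signal-to-Noise Ratio, in dB, between two equal-length
    sequences of pixel values. Higher is better; identical inputs
    give infinity."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return math.inf
    return 10 * math.log10(peak ** 2 / mse)
```

Metrics such as VMAF go much further, fusing several perceptual features with a trained model, which is why they track human opinion better than a pure signal-difference measure like PSNR.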

MOS is the Mean Opinion Score, a result derived from a group of people watching some content in a controlled environment. They vote on how they feel about the content and the data, when combined, gives an indication of the quality of the video. The trick is to enable a computer to predict what people will say. Rafael explains how this is done, looking at some of the maths behind the predicted score.
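The raw MOS side of this is straightforward; here is a hedged sketch of combining votes on the usual 5-point scale into a mean score with a 95% confidence interval:

```python
import math

def mos(votes):
    """Mean Opinion Score and its 95% confidence interval from a
    list of 1-5 ratings given by viewers for one clip."""
    n = len(votes)
    mean = sum(votes) / n
    # sample variance (n - 1 in the denominator)
    var = sum((v - mean) ** 2 for v in votes) / (n - 1)
    ci95 = 1.96 * math.sqrt(var) / math.sqrt(n)
    return mean, ci95
```

The hard part, which Rafael covers, is the predictive model that estimates this mean without any viewers at all.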

In order to test any ‘upgrades’ to the objective metric, you need to test it against people’s actual scores. So Rafael explains how he set up his viewing environments, in both Uruguay and Italy, to be compliant with BT.500. BT.500 is a standard which specifies how a room should be arranged so that viewing conditions maximise the viewers’ ability to appreciate the pros and cons of the content. For instance, it specifies how dim the room should be, how reflective the screens can be and how they should be calibrated. The guidelines don’t cover HDR, 4K etc., so the team devised an extension to the standard in order to carry out the testing. This is called ‘subjective testing’.

With all of this work done, Rafael shows us the benefits of using this extended metric and the results achieved.

Watch now!

Rafael Sotelo
Director, ICT Department
Universidad de Montevideo

Video: How many Nits is Color Bars?


Update: This webinar is now available on-demand. Links in this article have been updated to match.

Brightness, luminance, luma, nits and candela: what are the differences between these similar terms? If you haven’t been working closely with displays and video, you may not know, but as HDR grows in adoption, it pays to have at least a passing understanding of the terms in use.
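To anchor the terms: a nit is one candela per square metre, luminance is the physical light output, and luma is a weighted sum of gamma-encoded R’G’B’ used in video signals. A minimal sketch, applying the Rec. 709 weighting to linear-light RGB to estimate displayed luminance on a display of a given peak brightness:

```python
def luminance_nits(r: float, g: float, b: float, peak_nits: float = 100.0) -> float:
    """Relative luminance of a linear-light RGB triple (0..1 per channel),
    weighted per Rec. 709, scaled to a display's peak brightness in
    nits (candela per square metre)."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return y * peak_nits
```

Note the distinction: luma applies similar weights to gamma-encoded values rather than linear light, so it is not a true measure of light output — exactly the kind of subtlety this webinar unpacks.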

Date: Thursday January 23rd – 11am ET / 16:00 GMT

Last week, The Broadcast Knowledge covered the difference between luma and luminance in this video from YouTube channel DisplacedGamers. It’s a wide-ranging video which explains many of the related fundamentals of human vision and analogue video, much of which is relevant to this webinar.

To explain in detail not only what these terms mean, but also how we use them to set up our displays, the IABM have asked Norm Hurst from SRI, often known as Sarnoff, to come in and discuss his work researching test patterns. SRI makes many test patterns which show up how your display is (or isn’t) working and also expose some of the processing the signal has undergone on its journey before it even reaches the display. In many cases these test patterns tell their story without electronic meters or analysers, but where brightness is concerned, there is still a place for photometers, colour analysers and other associated meters.

HDR and its associated Wide Colour Gamut (WCG) bring extra complexity in ensuring your monitor is set up correctly, particularly as many monitors can’t show some brightness levels and have to do their best to accommodate these requests from the incoming signal. Being able to assess, both operationally and academically, how the display is performing and affecting the video is of prime importance. Similarly, colours, as ever, are prone to shifting as they are processed, attenuated and/or clipped.

This free webinar from the IABM is led by CTO Stan Moote.

Watch now!

Norm Hurst
Senior Principal Research Engineer,
SRI International SARNOFF
Stan Moote
CTO, IABM

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, giving an introduction to a large number of topics, are far too rare. For those starting out or who need to revise a topic, this really hits the mark, particularly as many of the topics are new.

John Mailhot takes the lead on SMPTE 2110 explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite. He talks in more detail about the audio and metadata parts of the standard suite.

Eric Gsell discusses digital archiving and the considerations which come with deciding which formats to use. He explains colour space, the CIE model and the colour spaces we use such as BT.709, BT.2100 and P3 before turning to file formats. With the advent of HDR video and displays which can show bright video, Eric takes some time to explain why this could represent a problem for visual health, as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity for improving workflows and adding more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers and looking at the interesting problem of forward speakers in cinemas. They have long been behind the screen, which has meant the screens have to be perforated to let the sound through, which interferes with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move. But with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group