Video: User-Generated HDR is Still Too Hard

HDR and wide colour gamuts are difficult enough in professional settings – how can YouTube get it right with user-generated content?

Steven Robertson from Google explains the difficulties that YouTube has faced in dealing with HDR, both in its original productions and in user-generated content (UGC). These difficulties stem from the Dolby PQ way of looking at the world – fixed brightness levels with the ability to go all the way up to 10,000 nits – and from the world of wider colour gamuts (WCG) such as Display P3 and BT.2020.
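
For readers new to PQ, a minimal Python sketch of the SMPTE ST 2084 transfer function helps show what "fixed brightness" means here. The constants come from the standard, but this scalar version is purely illustrative.

```python
# Minimal sketch of the PQ (SMPTE ST 2084) transfer function.
# PQ encodes absolute luminance, 0 to 10,000 nits, into a 0..1 signal.

M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.84
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.85
C3 = 2392 / 4096 * 32    # ~18.69

def pq_encode(nits: float) -> float:
    """Absolute luminance in nits -> non-linear PQ signal in [0, 1]."""
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

def pq_decode(signal: float) -> float:
    """Non-linear PQ signal in [0, 1] -> absolute luminance in nits."""
    p = signal ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

# Unlike relative, gamma-style SDR curves, PQ code values are absolute:
print(round(pq_encode(203), 3))   # ~0.581: reference white sits near 58%
print(pq_decode(1.0))             # 10000.0 nits at full signal
```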

Viewing conditions have been a challenge right from the beginning of TV, but even more so now that screens of many different shapes and sizes are available, with widely varying abilities to reproduce brightness and colour. Steven spends some time discussing the difficulty of finding a display suitable for colour grading and previewing your work on – particularly for individual creators without a large production budget.

Interestingly, we then see that one of the biggest difficulties is in visual perception itself: after you’ve seen bad colours, the colours you see next look much better than they really are. HDR can deliver colours that are extremely bright and extremely wrong. Steven shows real examples from YouTube where the brain has been tricked into thinking colour and brightness are correct when they clearly are not.

Whilst it’s long been known that HDR and WCG are inextricably linked with human vision, this talk is a great insight into tackling the problem at scale and into the research that has gone into bringing it under automated control.

Watch now!
Free registration required

This talk is from Streaming Tech Sweden, an annual conference run by Eyevinn Technology. Videos from the event are available to paid attendees but are released free of charge after several months. As with all videos on The Broadcast Knowledge, this is available free of charge after registering on the site.

Speaker

Steven Robertson
Software Engineer, YouTube Player Infrastructure
Google

Video: Intro to 4K Video & HDR

With all the talk of IP, you’d be wrong to think SDI is dead. 12G for 4K is alive and well in many places, so there’s plenty of appetite to understand how it works and how to diagnose problems.
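
Some quick arithmetic shows where the "12G" figure comes from (my own back-of-the-envelope, not from the talk):

```python
# Why UHD needs 12G-SDI: a 3G-SDI link (nominally 2.97 Gb/s) carries one
# 1080p50/60 picture, and UHD has four times the pixels of HD.
uhd_pixels = 3840 * 2160
hd_pixels = 1920 * 1080
links = uhd_pixels // hd_pixels   # = 4
print(links * 2.97)               # 11.88 Gb/s, the nominal 12G-SDI rate
```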

In this double-header, Steve Holmes from Tektronix takes us through the ins and outs of HDR and also SDI for HDR at the SMPTE SF section.

Steve starts with his eye on the SMPTE standards for UHD SDI video, looking at video resolutions and showing that a UHD picture can be made up of four HD pictures. This gives rise to two well-known formats: ‘Quad split’ and ‘2SI’ (2 Sample Interleave).
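
The difference between the two is easy to see in a small numpy sketch (my own illustration, not from the talk):

```python
import numpy as np

# Two ways to carve a UHD frame into four HD-sized sub-images for
# quad-link transport.
uhd = np.arange(2160 * 3840).reshape(2160, 3840)  # stand-in UHD frame

# 'Quad split' (square division): four independent HD quadrants.
quad = [uhd[:1080, :1920], uhd[:1080, 1920:],
        uhd[1080:, :1920], uhd[1080:, 1920:]]

# '2SI' (2 Sample Interleave): alternate sample pairs and alternate lines
# are spread across the four links, so every link carries a
# quarter-resolution version of the whole picture.
pair = (np.arange(3840) // 2) % 2   # 0,0,1,1,0,0,1,1,... per column
two_si = [uhd[r::2][:, pair == p] for r in (0, 1) for p in (0, 1)]

assert all(s.shape == (1080, 1920) for s in quad + two_si)
```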

Colour is the next focus, with a discussion of the different colour spaces that UHD is delivered in (spoiler: they’re all in use), what these look like on the vectorscope, and a look at the different primaries. After a roundup and a look at inter-link timing, there’s a short break before hitting the next topic: HDR.
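
For reference, the primaries in question differ as below (my own summary of the published chromaticity coordinates, not Steve’s slides):

```python
# CIE 1931 xy chromaticities of the primaries for the colour spaces in
# play; all three share the D65 white point (0.3127, 0.3290).
PRIMARIES = {
    #            red             green           blue
    "BT.709":  [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "P3-D65":  [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
    "BT.2020": [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
}
# BT.2020's primaries sit much closer to the spectral locus, which is
# why fully saturated BT.2020 colours land on different vectorscope
# targets than BT.709 ones.
```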

High Dynamic Range is an important technology which is still gaining adoption and is often delivered with 4K programmes. Steve defines the two places HDR matters – the acquisition and the display of the video – then provides a handy lookup table of terms such as HDR, WCG, PQ, HDR10, DMCVT and more.

Steve gives us a primer on what HDR is in terms of brightness in nits, how these levels relate to real life and how we talk about them with respect to displays. We then look at HDR on the waveform monitor and at the features of waveform monitors which allow engineers to visualise and check HDR, such as false colour.
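
Conceptually, false colour is just a lookup from luminance to an exposure band. A hedged sketch follows; the nit thresholds here are invented for illustration, as real waveform monitors each define their own bands.

```python
# Conceptual false-colour overlay for HDR exposure checking.
# The thresholds are illustrative only, not any manufacturer's values.
BANDS = [
    (0.0,    "purple"),   # near black
    (1.0,    "blue"),     # shadows
    (100.0,  "green"),    # around SDR white / diffuse white
    (203.0,  "yellow"),   # above HDR reference white
    (1000.0, "orange"),   # specular highlights
    (4000.0, "red"),      # beyond most consumer displays
]

def false_colour(nits: float) -> str:
    """Map a pixel's absolute luminance to an exposure-band colour."""
    colour = BANDS[0][1]
    for threshold, name in BANDS:
        if nits >= threshold:
            colour = name
    return colour

print(false_colour(150.0))   # 'green'
```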

The topic of gamma, EOTFs and colour spaces comes up next and is well explained, building on what came earlier. Before the final demo and Q&A, Steve talks about the different ways to grade pictures when working in HDR.
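
One reason grading in HDR feels different is that code values no longer describe relative light. A small sketch, assuming a gamma 2.4 power law for SDR against the ST 2084 PQ curve:

```python
def pq_decode(v: float) -> float:
    """ST 2084 PQ EOTF: signal in [0, 1] -> absolute luminance in nits."""
    m1, m2 = 0.1593017578125, 78.84375
    c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
    p = v ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

# The same 75% code value:
print(100.0 * 0.75 ** 2.4)   # ~50 nits on a 100-nit SDR display
print(pq_decode(0.75))       # ~983 nits under PQ, on any compliant display
```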

A great intro to the topics at hand – just like Steve’s last one: Uncompressed Video over IP & PTP Timing

Watch now!

Speakers

Steve Holmes
Former Senior Applications Engineer,
Tektronix

Video: Colour

With the advent of digital video, the people in the middle of the broadcast chain have had little to do with colour for the most part. Yet those in post production, acquisition and decoding/display are finding life more and more difficult as we continue to expand colour gamuts and deliver to new displays.

Google’s Steven Robertson takes us comprehensively through the challenges of colour, from the fundamentals of sight to the intricacies of dealing with Rec. 601, Rec. 709, BT.2020, HDR, YUV transforms and all the mistakes people make in between.
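
A classic example of those mistakes is converting with the wrong matrix. The luma coefficients differ between the standards, as this illustrative sketch (mine, not Steven’s) shows:

```python
# RGB -> Y'CbCr using each standard's luma coefficients. Decoding BT.709
# video with the BT.601 matrix (or vice versa) visibly shifts colours.
LUMA_COEFFS = {            # (Kr, Kb); Kg = 1 - Kr - Kb
    "BT.601":  (0.299,  0.114),
    "BT.709":  (0.2126, 0.0722),
    "BT.2020": (0.2627, 0.0593),
}

def rgb_to_ycbcr(r: float, g: float, b: float, standard: str):
    """Non-linear R'G'B' in [0, 1] -> full-range (Y', Cb, Cr)."""
    kr, kb = LUMA_COEFFS[standard]
    y = kr * r + (1 - kr - kb) * g + kb * b
    cb = (b - y) / (2 * (1 - kb))
    cr = (r - y) / (2 * (1 - kr))
    return y, cb, cr

# The same saturated red yields a different triple under each standard:
for std in LUMA_COEFFS:
    print(std, rgb_to_ycbcr(1.0, 0.0, 0.0, std))
```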

An approachable talk which gives a great overview, raises good points and goes into detail where necessary.

An interesting point of view is that colour subsampling should die. After all, we’re now at a point where we could feed an encoder 4:4:4 video and get it to compress the colour channels more than the luminance channel. Steven says that this would give more accurate colour than stripping out a fixed amount of data as 4:2:2 subsampling does.
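
For contrast, here’s a sketch (my own illustration, not from the talk) of what fixed subsampling throws away before the encoder ever sees the picture:

```python
import numpy as np

# Fixed chroma subsampling discards a set fraction of colour data up
# front, however much colour detail the scene actually holds.
h, w = 2160, 3840
cb = np.random.rand(h, w)   # stand-in chroma plane

cb_422 = cb.reshape(h, w // 2, 2).mean(axis=2)               # halve horizontally
cb_420 = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # halve both ways
print(cb_422.size / cb.size, cb_420.size / cb.size)          # 0.5 0.25

# Feeding an encoder 4:4:4 instead lets it decide, scene by scene, how
# much chroma information is worth keeping.
```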

Given at Brightcove HQ as part of the San Francisco Video Tech meet-ups.

Watch now!

Speaker

Steven Robertson
Software Engineer,
Google