Video: User-Generated HDR is Still Too Hard

HDR and wide colour gamuts are difficult enough in professional settings – how can YouTube get it right with user-generated content?

Steven Robertson from Google explains the difficulties YouTube has faced in dealing with HDR, both in its original productions and in user-generated content (UGC). These difficulties stem from the Dolby PQ way of looking at the world – fixed brightnesses that can go all the way up to 10,000 nits – and from the world of wide colour gamuts (WCG) such as Display P3 and BT.2020.

Viewing conditions have been a challenge right from the beginning of TV, but ever more so now that screens come in many shapes and sizes, with widely varying abilities to show brightness and colour. Steven spends some time discussing the difficulty of finding a display suitable for colour grading and previewing your work on – particularly for individual users without a large production budget.

Interestingly, we then see that one of the biggest difficulties is visual perception itself: the brain adapts, so colours seen after bad colours can look much better than they really are. HDR can deliver extremely bright and extremely wrong colours. Steven shows real examples from YouTube where the brain has been tricked into thinking colour and brightness are correct when they clearly are not.

Whilst it’s long been known that HDR and WCG are inextricably linked with human vision, this is a great insight into tackling this at scale and the research that has gone on to bring this under automated control.

Watch now!
Free registration required

This talk is from Streaming Tech Sweden, an annual conference run by Eyevinn Technology. Videos from the event are available to paid attendees but are released free of charge after several months. As with all videos on The Broadcast Knowledge, this is available free of charge after registering on the site.

Speaker

Steven Robertson
Software Engineer, YouTube Player Infrastructure
Google

Video: Intro to 4K Video & HDR

With all the talk of IP, you’d be wrong to think SDI is dead. 12G for 4K is alive and well in many places, so there’s plenty of appetite to understand how it works and how to diagnose problems.

In this double-header, Steve Holmes from Tektronix takes us through the ins and outs of HDR and also SDI for HDR at the SMPTE SF section.

Steve starts with his eye on the SMPTE standards for UHD SDI video, looking at video resolutions and seeing that a UHD picture can be made up of four HD pictures, which gives rise to two well-known formats: 'Quad split' and '2SI' (2 Sample Interleave).
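To make the difference concrete, here is an illustrative sketch (Python with NumPy; the exact sample ordering in the SMPTE mapping documents has more detail than shown here) of how the two divisions carve a UHD frame into four HD sub-images:

```python
import numpy as np

# A UHD luma plane: 2160 lines x 3840 samples.
uhd = np.arange(2160 * 3840, dtype=np.uint32).reshape(2160, 3840)

# 'Quad split' (square division): four contiguous HD quadrants.
quad = [uhd[:1080, :1920], uhd[:1080, 1920:],
        uhd[1080:, :1920], uhd[1080:, 1920:]]

# '2SI' (2 Sample Interleave): pairs of adjacent samples alternate
# between sub-images, as do lines, so every sub-image is a
# quarter-resolution version of the WHOLE picture.
pairs = uhd.reshape(2160, 1920, 2)   # group samples into adjacent pairs
two_si = [pairs[lp::2, pp::2, :].reshape(1080, 1920)
          for lp in (0, 1) for pp in (0, 1)]

# Both divisions yield four 1080x1920 sub-images.
assert all(s.shape == (1080, 1920) for s in quad + two_si)
```

One practical consequence of this difference: if one of the four links fails, quad split loses a whole quadrant of the picture, whereas 2SI degrades the entire image more gracefully.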

Colour is the next focus, with a discussion of the different colour spaces that UHD is delivered in (spoiler: they're all in use), what these look like on the vectorscope, and a look at the different primaries. Finishing up with a roundup and a look at inter-link timing, there's a short break before hitting the next topic: HDR.

Watch now!

High Dynamic Range is an important technology which is still gaining adoption and is often provided with 4K programmes. Steve defines the two places HDR is important: in the acquisition and in the display of the video, then provides a handy lookup table of terms such as HDR, WCG, PQ, HDR10, DMCVT and more.

Steve gives us a primer on what HDR is in terms of brightness in nits, how these levels relate to real life and how we talk about them with respect to displays. We then look at HDR on the waveform monitor and at features of waveform monitors, such as false colour, which allow engineers to visualise and check HDR.
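As a rough illustration of the false-colour idea (the bands below are hypothetical; real waveform monitors make the thresholds configurable and vendors differ), each pixel's luminance is simply bucketed into a colour band so problem areas jump out at a glance:

```python
# Hypothetical false-colour bands, illustrative only: map a pixel's
# luminance in nits to an overlay colour.
FALSE_COLOUR_BANDS = [
    (0.0,    "blue"),    # near black
    (1.0,    "grey"),    # shadows
    (100.0,  "green"),   # around SDR reference white
    (203.0,  "yellow"),  # HDR reference white and above
    (1000.0, "orange"),  # bright highlights
    (4000.0, "red"),     # approaching clip on most displays
]

def false_colour(nits: float) -> str:
    """Return the band colour for a given luminance in nits."""
    colour = FALSE_COLOUR_BANDS[0][1]
    for threshold, band in FALSE_COLOUR_BANDS:
        if nits >= threshold:
            colour = band
    return colour
```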

The topic of gamma, EOTFs and colour spaces comes up next and is well explained, building on what came earlier. Before the final demo and Q&A, Steve talks about different ways to grade pictures when working in HDR.
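For reference, the PQ EOTF itself is compact enough to sketch. A minimal Python version of the SMPTE ST 2084 curve, mapping a normalised code value to display light in nits:

```python
# SMPTE ST 2084 (PQ) EOTF constants.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(code: float) -> float:
    """Map a normalised PQ code value in [0, 1] to luminance in nits."""
    p = code ** (1 / M2)
    y = (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)
    return 10000.0 * y

print(pq_eotf(1.0))    # 10000 nits: the top of the PQ range
print(pq_eotf(0.58))   # ~203 nits: HDR reference white per BT.2408
```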

A great intro to the topics at hand – just like Steve’s last one: Uncompressed Video over IP & PTP Timing

Watch now!

Speakers

Steve Holmes
Senior Applications Engineer,
Tektronix

Video: The ST 2094 Standards Suite For Dynamic Metadata

Lars Borg explains the problems that the SMPTE ST 2094 suite of standards sets out to solve. Looking at the different types of HDR and Wide Colour Gamut (WCG), we quickly see how many permutations there are and how many ways there are to get it wrong.

ST 2094 carries the metadata needed to manage colour, dynamic range and related parameters. In order to understand what's needed, Lars takes us through the details of the HDR implementations, touching on workflows and explaining how the capabilities of your display affect the video.
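As a loose illustration of why per-scene dynamic metadata helps (this is a hypothetical, simplified record, not the actual ST 2094 syntax, which differs between the parts of the suite), a display can use scene statistics to tone-map intelligently rather than clipping blindly:

```python
from dataclasses import dataclass

# Hypothetical per-scene record in the spirit of dynamic metadata;
# the real standards define the exact fields and encodings.
@dataclass
class SceneMetadata:
    start_frame: int
    end_frame: int
    min_luminance_nits: float   # darkest pixel in the scene
    max_luminance_nits: float   # brightest pixel in the scene
    avg_luminance_nits: float   # average luminance of the scene

def tone_map_gain(scene: SceneMetadata, display_peak_nits: float) -> float:
    """Illustrative only: scale highlights so the scene's brightest
    pixel lands at the display's peak instead of hard clipping."""
    if scene.max_luminance_nits <= display_peak_nits:
        return 1.0   # the display can show the whole scene as mastered
    return display_peak_nits / scene.max_luminance_nits
```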

We then look at midtones and dynamic metadata before a Q&A.

This talk is very valuable in understanding the whole HDR and WCG ecosystem as much as ST 2094 itself.

Watch now!

Speaker

Lars Borg
Principal Scientist,
Adobe

Video: Content Production Technology on Hybrid Log-Gamma

‘Better Pixels’ is the continuing refrain from the large number of people who are dissatisfied with simply increasing resolution to 4K or even 8K. Why can’t we have a higher frame rate instead? Why not a wider colour gamut (WCG)? And why not a higher dynamic range (HDR)? Often, they would prefer any of these three options over higher resolution.

Watch the video now to find out more.

Dynamic Range is the term used to describe the difference between the smallest possible signal and the strongest possible signal. In audio, it’s the gap between the quietest thing that can be heard and the loudest thing that can be reproduced (without distortion). In video, it’s the difference between black and white – after all, can your TV fully simulate the brightness and power of our sun? No. What about your car’s headlights? Probably not. Can your TV go as bright as your phone’s flashlight? Well, now that’s realistic.
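To put numbers on it, dynamic range is often counted in photographic ‘stops’, i.e. doublings of luminance between the darkest and brightest levels a display can show. A quick sketch with illustrative display figures (real panels vary widely):

```python
import math

def dynamic_range_stops(peak_nits: float, black_nits: float) -> float:
    """Dynamic range in stops: doublings of luminance between the
    darkest and brightest levels a display can reproduce."""
    return math.log2(peak_nits / black_nits)

print(dynamic_range_stops(100, 0.1))     # SDR-ish panel: ~10 stops
print(dynamic_range_stops(1000, 0.005))  # HDR panel: ~17.6 stops
```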

So let’s say your TV can go from a very dark black to being as bright as a medium-power flashlight; what about the video that you send to your TV? When there’s a white frame, do you want your TV blasting as bright as it can? HDR allows producers to control the brightness of your display so that something which is genuinely very bright – like a star, a bright light or an explosion – can be represented very brightly, whereas something which is simply white can have the right colour but a medium brightness. With Standard Dynamic Range (SDR) video, there isn’t this level of control.

For films, HDR is extremely useful, but for sports too – who hasn’t seen a football game where the sun leaves half the pitch in shadow and half in bright sunlight? With SDR, there’s no choice but to have one half either very dark or very bright (mostly white), so you can’t actually see the game there. HDR enables the production crew to let HDR TVs show detail in both areas of the pitch.

HLG, which stands for Hybrid Log-Gamma, is the name of one way of delivering HDR video. It was pioneered, famously, by Japan’s NHK with the UK’s BBC and has been standardised as ARIB STD-B67. In this talk, NHK’s Yuji Nagata helps us navigate working with multiple formats: converting HLG HDR to SDR, plus converting from HLG to PQ, the HDR format developed by Dolby.
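For the curious, the HLG curve itself is compact. A minimal Python sketch of the OETF as standardised in ARIB STD-B67 and ITU-R BT.2100:

```python
import math

# HLG OETF constants (ARIB STD-B67 / ITU-R BT.2100).
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # 0.55991073

def hlg_oetf(e: float) -> float:
    """Map normalised scene light E in [0, 1] to an HLG signal in [0, 1].

    Below 1/12 of peak the curve is a square root, i.e. gamma-like,
    which is what gives HLG its backwards compatibility with SDR
    displays; above that it becomes logarithmic to make room for
    highlights.
    """
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C
```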

The reality of broadcasting is that anyone who is producing a programme in HDR will have to create an SDR version at some point. The question is how to do that, and when. For live production, some broadcasters may need to fully automate the process; in this talk, we look at a semi-automated way of doing it.

HDR is usually delivered in a Wide Colour Gamut signal such as the ITU’s BT.2020. Converting between this colour space and the more common BT.709 colour space, which is part of the HD video standards, is also needed on top of the dynamic range conversion. So listen to Yuji Nagata’s talk to find out NHK’s approach to this.
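For a flavour of what that colour space conversion involves, here is a minimal sketch using the rounded linear-light matrix given in ITU-R BT.2087 (real converters apply proper gamut mapping rather than the naive clip shown here):

```python
import numpy as np

# Linear-light RGB conversion matrix from BT.2020 primaries to BT.709
# primaries (rounded values per ITU-R BT.2087). This must be applied
# to LINEAR RGB, i.e. after undoing the transfer function, and colours
# outside the BT.709 gamut come out negative or above 1.
BT2020_TO_BT709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def convert_2020_to_709(rgb_linear: np.ndarray) -> np.ndarray:
    """Convert linear BT.2020 RGB pixels, shape (..., 3), to BT.709."""
    rgb_709 = rgb_linear @ BT2020_TO_BT709.T
    return np.clip(rgb_709, 0.0, 1.0)  # naive clip; gamut mapping is better
```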

NHK has pushed very hard for many years to make 8K broadcasts feasible and has in recent times focussed on tooling up in time for the 2020 Olympics. This talk was given at the SMPTE 2017 technical conference, but is all the more relevant now as NHK ramps up the number of 8K broadcasts approaching the opening ceremony. This work on HDR and WCG is part of making sure that the 8K format really delivers an impressive and immersive experience for those lucky enough to experience it. It goes hand in hand with NHK’s tireless work on audio, which can deliver 22.2 multichannel surround.

Watch now!

Speaker

Yuji Nagata
Broadcast Engineer,
NHK