HDR and wide colour gamuts are difficult enough in professional settings – how can YouTube get it right with user-generated content?
Viewing conditions have been a challenge since the beginning of TV, but ever more so now that screens come in many shapes and sizes with widely varying abilities to show brightness and colour. Steven spends some time discussing the difficulty of finding a display suitable for colour grading and previewing your work on – particularly for individual creators without a large production budget.
Interestingly, we then see that one of the biggest difficulties lies in visual perception: the colours you see immediately after bad colours can look much better than they really are. HDR can deliver extremely bright and extremely wrong colours. Steven shows real examples from YouTube where the brain has been tricked into thinking colour and brightness are correct when they clearly are not.
Whilst it’s long been known that HDR and WCG are inextricably linked with human vision, this is a great insight into tackling this at scale and the research that has gone on to bring this under automated control.
This talk is from Streaming Tech Sweden, an annual conference run by Eyevinn Technology. Videos from the event are available to paid attendees but are released free of charge after several months. As with all videos on The Broadcast Knowledge, this is available free of charge after registering on the site.
Software Engineer, YouTube Player Infrastructure
With all the talk of IP, you’d be wrong to think SDI is dead. 12G for 4K is alive and well in many places, so there’s plenty of appetite to understand how it works and how to diagnose problems.
In this double-header, Steve Holmes from Tektronix takes us through the ins and outs of HDR and also SDI for HDR at the SMPTE SF section.
Steve starts with his eye on the SMPTE standards for UHD SDI video, looking at video resolutions and seeing that a UHD picture can be made up of 4 HD pictures, which gives rise to two well-known formats: ‘Quad split’ and ‘2SI’ (2-Sample Interleave).
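To make the difference concrete, here is a toy sketch (my own illustration, not material from the talk – the function names are hypothetical) of how a UHD pixel maps onto the four HD sub-images under each scheme. In a quad split each link carries one spatial quadrant, so a link on its own shows a quarter of the picture; in 2SI each link carries every other sample in both directions, so each link is a complete quarter-resolution picture.

```python
# Map a UHD (3840x2160) pixel coordinate to (link, position-within-HD-image)
# for the two UHD-over-4xHD carriage schemes. Illustrative sketch only.

def quad_split(x, y):
    """Quad split: each of the four links carries one spatial quadrant."""
    link = (1 if x >= 1920 else 0) + (2 if y >= 1080 else 0)
    return link, (x % 1920, y % 1080)

def two_sample_interleave(x, y):
    """2SI: each link carries every other sample horizontally and vertically,
    so each sub-image is a full-frame, quarter-resolution picture."""
    link = (y % 2) * 2 + (x % 2)
    return link, (x // 2, y // 2)
```

This is why a single lost link looks so different in each case: with quad split you lose a quadrant of the picture, with 2SI you lose detail evenly across the whole frame.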
Colour is the next focus, with a discussion of the different colour spaces UHD is delivered in (spoiler: they’re all in use), what these look like on the vectorscope, and a look at the different primaries. After a roundup and a look at inter-link timing, there’s a short break before hitting the next topic… HDR.
High Dynamic Range is an important technology which is still gaining adoption and is often delivered alongside 4K programmes. Steve defines the two places HDR is important: in the acquisition and in the display of the video, then provides a handy glossary of terms such as HDR, WCG, PQ, HDR10, DMCVT and more.
Steve gives us a primer on what HDR is in terms of brightness in nits, how these figures relate to real life and how we talk about them with respect to displays. We then look at HDR on the waveform monitor and at features of waveform monitors, such as false colour, which allow engineers to visualise and check HDR.
The topic of gamma, EOTFs and colour spaces comes up next and is well explained, building on what came earlier. Before the final demo and Q&A, Steve talks about different ways to grade pictures when working in HDR.
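As a concrete example of an HDR EOTF, the PQ curve (SMPTE ST 2084) mentioned above maps a non-linear code value to an absolute display luminance in nits. The constants below are the ones defined in ST 2084; the function itself is a minimal sketch of mine, not code from the talk.

```python
# SMPTE ST 2084 (PQ) EOTF: non-linear signal E' in [0, 1] -> luminance in nits.
m1 = 2610 / 16384        # 0.1593017578125
m2 = 2523 / 4096 * 128   # 78.84375
c1 = 3424 / 4096         # 0.8359375
c2 = 2413 / 4096 * 32    # 18.8515625
c3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(e):
    """Convert a PQ-coded value (0..1) to absolute display luminance in nits."""
    p = e ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)
```

Code value 1.0 corresponds to the full 10,000-nit ceiling of PQ, which is why HDR signals are described in absolute nits rather than relative to a display’s peak, as gamma-based SDR is.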
With the advent of digital video, the people in the middle of the broadcast chain have had little to do with colour for the most part. Yet those in post production, acquisition and decoding/display are finding life more and more difficult as we continue to expand colour gamut and deliver to new displays.
Google’s Steven Robertson takes us comprehensively through the challenges of colour, from the fundamentals of sight to the intricacies of dealing with Rec. 601, Rec. 709, BT.2020, HDR, YUV transforms and all the mistakes people make in between.
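One of the classic mistakes in this area is decoding video with the wrong standard’s luma coefficients, producing a subtle but real colour shift. The coefficients below are the published (Kr, Kb) values for each standard; the conversion function is a hedged sketch of the standard full-range R'G'B' → Y'CbCr transform, not code from the talk.

```python
# Per-standard luma coefficients (Kr, Kb). Mixing these up -- e.g. decoding
# BT.709 video with BT.601 coefficients -- is a classic colour-shift bug.
COEFFS = {
    "BT.601":  (0.299,  0.114),
    "BT.709":  (0.2126, 0.0722),
    "BT.2020": (0.2627, 0.0593),
}

def rgb_to_ycbcr(r, g, b, standard="BT.709"):
    """Full-range R'G'B' (0..1) -> Y' (0..1) and Cb/Cr (-0.5..0.5)."""
    kr, kb = COEFFS[standard]
    y = kr * r + (1 - kr - kb) * g + kb * b
    cb = (b - y) / (2 * (1 - kb))
    cr = (r - y) / (2 * (1 - kr))
    return y, cb, cr
```

Running pure red through the BT.601 and BT.709 matrices gives different Y' values (0.299 vs 0.2126), which is exactly why a decoder that guesses the wrong matrix shifts skin tones and saturated colours.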
An approachable talk which gives a great overview, raises good points and goes into detail where necessary.
An interesting point of view is that colour subsampling should die. After all, we’re now at a point where we could feed an encoder 4:4:4 video and have it compress the colour channels more heavily than the luminance channel. Steven says this would deliver more accurate colour than stripping a fixed amount of data as 4:2:2 subsampling does.
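The fixed loss Steven objects to can be shown in a few lines. This toy sketch of mine (not from the talk) collapses horizontal chroma pairs the way 4:2:2 does: a one-pixel colour detail cannot survive the round trip, no matter how simple the rest of the image is, whereas an encoder given 4:4:4 input could choose where to spend its chroma bits.

```python
# Toy model of 4:2:2 horizontal chroma subsampling on a single chroma row.
def subsample_422(chroma_row):
    """Average each horizontal pair of chroma samples into one."""
    return [(chroma_row[i] + chroma_row[i + 1]) / 2
            for i in range(0, len(chroma_row), 2)]

def upsample_422(subsampled):
    """Nearest-neighbour reconstruction: each stored sample covers two pixels."""
    return [c for c in subsampled for _ in range(2)]

row = [0.0, 0.0, 1.0, 0.0]            # a single-pixel chroma detail
restored = upsample_422(subsample_422(row))   # [0.0, 0.0, 0.5, 0.5]
```

The detail comes back smeared across two pixels at half strength – a loss imposed before the encoder ever sees the picture.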
Given at Brightcove HQ as part of the San Francisco Video Tech meet-ups.
Vimeo’s Vittorio Giovara discusses ways to improve viewer retention by improving videos through attention to colour space and colour volume. This talk covers how and why High Dynamic Range (HDR) works, Wide Color Gamut (WCG), 10-bit vs 8-bit video, and also discusses the importance of frame rate on viewer retention.
After all, 4K has gotten most of the headlines, but there are other ways to improve the quality of your streaming video that have even more visual impact. This talk explores video colorimetry, ranging from video quality concepts to the latest trends in the industry. Vimeo’s experience is used as a practical implementation example and showcases how new compression technologies are deployed for the benefit of creators and their audiences.
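On the 10-bit vs 8-bit point, a minimal sketch (my own, not from the talk) makes the difference tangible: quantising a smooth ramp at 8 bits leaves only a quarter of the distinct levels that 10 bits provides, which appears on screen as banding across smooth gradients.

```python
# Quantise a smooth 0..1 ramp at 8 and 10 bits and count distinct levels.
def quantise(value, bits):
    levels = (1 << bits) - 1
    return round(value * levels) / levels

ramp = [i / 9999 for i in range(10000)]
levels_8 = len({quantise(v, 8) for v in ramp})    # 256 distinct values
levels_10 = len({quantise(v, 10) for v in ramp})  # 1024 distinct values
```

With HDR’s much wider brightness range, those extra levels matter even more, since 8 bits must stretch its 256 steps over a far larger span of luminance.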
This isn’t just about pretty videos, Vittorio shows the economic benefits of producing a better product.
Views and opinions expressed on this website are those of the author(s) and do not necessarily reflect those of SMPTE or SMPTE Members.
This website is presented for informational purposes only. Any reference to specific companies, products or services does not represent promotion, recommendation, or endorsement by SMPTE.