Video: Maintaining Colour Spaces

Getting colours right is tricky. Many of us get away without considering colour spaces in both our professional and personal lives. But if you’ve ever wanted to print a logo which is exactly the right colour, you may have found out the hard way that the colour in your JPEG doesn’t always match the CMYK of the printer. Here, we’re talking, of course, about colour in video: with SD’s 601 and HD’s 709 colour spaces, how do we keep colours correct?

Rec. ITU-R BT.601-7, also known as Rec. 601, is the colour space standardised for SD video, while Rec. ITU-R BT.709-6, also known as Rec. 709, is typically used for HD video. For anyone who wants to brush up on what a colour space is, check out this excellent talk from Vimeo’s Vittorio Giovara; a great communicator, he features in a number of other talks we have covered.

In this talk, starting 28 minutes into the Twitch feed, Matt Szatmary exposes a number of problems. The first is the inconsistent, and sometimes wrong, way that browsers interpret colours in videos. The second is that FFmpeg only maintains colour space information in certain circumstances. Lastly, he shows the colour changes that can occur when you’re not careful about maintaining the ‘chain of custody’ of colour space information.

Matt starts by explaining that the ‘VUI’ information, the Video Usability Information, found in AVC and HEVC conveys colour space information among other things such as aspect ratio. VUI was new to AVC; it is not used by the encoder itself but tells decoders what to take into account during the decoding process. We then see a live demonstration of Matt using FFmpeg to move videos through different colour spaces and the immediate results in different browsers.

This is an illuminating talk for anyone who cares about actually displaying the correct colours and brightnesses, particularly given how many workflows are built on FFmpeg. Matt demonstrates how to ensure FFmpeg is maintaining the correct information.
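
As a rough illustration of the kind of tagging involved, the sketch below builds an FFmpeg command that converts a clip to Rec. 709 and explicitly writes matching colour metadata into the output. The filenames, codec choice and exact filter options are assumptions for illustration only; the commands Matt actually uses are in the downloadable scripts.

```python
import subprocess

# Hypothetical filenames for illustration; the real commands are in the
# scripts that accompany the talk.
src, dst = "input_601.mp4", "output_709.mp4"

cmd = [
    "ffmpeg", "-i", src,
    # Convert the pixel data from Rec. 601 (625-line SD) to Rec. 709...
    "-vf", "colorspace=all=bt709:iall=bt601-6-625",
    # ...and write matching VUI colour metadata so downstream decoders
    # and browsers are told what they are being given.
    "-colorspace", "bt709",
    "-color_primaries", "bt709",
    "-color_trc", "bt709",
    "-c:v", "libx264", "-c:a", "copy",
    dst,
]
subprocess.run(cmd, check=True)
```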

Watch now!
Download the scripts used in the video
Speakers

Matt Szatmary
Senior Video Encoding Engineer,
Mux

Video: Colour Theory

Understanding the way colour is recorded and processed in the broadcast chain is vital to ensuring its safe passage. Whilst there are plenty of people who work in parts of the broadcast chain which shouldn’t touch colour, being purely there for transport, the reality is that if you don’t know how colour is dealt with under the hood, it’s not possible to do any technical validation of the signal beyond ‘it looks alright!’. And if you don’t know what’s involved in displaying it correctly, or how it’s transported, how can you tell?

Ollie Kenchington has dropped into the CPV Common Room for this tutorial on colour which starts at the very basics and works up to four case studies at the end. He starts off by simply talking about how colours mix together. Ollie explains the difference between the world of paints, where mixing is an act of subtracting colours, and the world of mixing light, which is about adding colours together. Whilst this might seem pedantic, it leads to profound differences in the colour produced when two colours are mixed. Pigments such as paints look the way they do because they only reflect the colour(s) you see; they simply don’t reflect the other colours. This is why they are called subtractive: shine a blue light on something that is pure red and you will just see black, because there is no red light to reflect back. Lights, however, look the way they do because they are emitting the light you see, so mixing a red and a blue light will create magenta. This is known as additive colour mixing. Ollie also introduces color.adobe.com which lets you discover new colour palettes.
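
To make the additive point concrete, here is a minimal sketch (with arbitrary example values) of light mixing additively versus a pigment behaving subtractively:

```python
# Additive mixing of light: each (R, G, B) triple is light being emitted.
red_light = (255, 0, 0)
blue_light = (0, 0, 255)
magenta = tuple(min(255, a + b) for a, b in zip(red_light, blue_light))
print(magenta)  # (255, 0, 255): red light plus blue light looks magenta

# Subtractive behaviour of a pigment: a pure red surface reflects only red,
# so under a blue-only light there is nothing for it to reflect back.
def reflected(surface, light):
    return tuple(int(s * l / 255) for s, l in zip(surface, light))

pure_red_surface = (255, 0, 0)
print(reflected(pure_red_surface, blue_light))  # (0, 0, 0): black
```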

The colour wheel is next on the agenda, which Ollie explains allows you to talk about the amplitude of a colour – the distance the colour is from the centre of the circle – and the angle that defines the colour itself. But as important as it is to describe a colour in a document, it’s all the more important to understand how humans see colours. Ollie lays out the way that rods and cones work in the eye: there is a central area that sees the best detail and has most of the cones, and the cones, we learn, are the cells that help us see colour. The fact there aren’t many cones in our periphery is covered up by our brains, which interpolate colour from what they have seen and what they know about our current environment. Everyone is colour blind in their peripheral vision, Ollie explains, but the brain makes up for it from what it knows about what you have seen. Overall, your eye’s sensitivity to blue is far lower than its sensitivity to green and then red. This is because, in evolutionary terms, there is much less important information gained by seeing detail in blue than in green, the colour of plants; red, of course, helps in distinguishing shades of green and brown, both colours native to plants. The upshot of this, Ollie explains, is that when we come to processing light, we have to do it in a way that takes into account the human sensitivity to different wavelengths. This means that we can show three rectangles next to each other, red, green and blue, and see them as similar brightnesses, yet under the hood we’ve reduced the intensity of the blue by 89 per cent, the red by 70 and the green by only 41. When added together, these show the correct greyscale brightness.
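
Those reductions line up with the familiar Rec. 601 luma weights (0.299 for red, 0.587 for green, 0.114 for blue); here is a minimal sketch of the idea, assuming those coefficients:

```python
# Rec. 601 luma weights: green contributes most to perceived brightness,
# blue the least. (Rec. 709 uses 0.2126, 0.7152 and 0.0722 instead.)
KR, KG, KB = 0.299, 0.587, 0.114

def luma(r, g, b):
    """Greyscale brightness from R, G, B values in the 0.0-1.0 range."""
    return KR * r + KG * g + KB * b

# The red contribution is cut by ~70%, green by ~41% and blue by ~89%,
# yet the three weighted contributions still sum to full brightness:
print(luma(1.0, 1.0, 1.0))  # 1.0 -> the correct greyscale value for white
print(luma(0.0, 0.0, 1.0))  # 0.114 -> pure blue looks dim in greyscale
```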

The CIE 1931 colour space is the next topic. The CIE 1931 colour space shows all the colours that the human eye can see. Ollie demonstrates, by overlaying it on the graph, that ITU-R Rec. 709 (broadcast’s most well-known and most widely-used colour space) only provides 35% coverage of what our eyes can see. This makes the call for Rec. 2020 from the proponents of UHD and ‘better pixels’, which covers 75%, all the more relevant.

Ollie next focuses on acquisition, talking about CMOS chips in cameras which are monochromatic by nature. As each pixel of a CMOS sensor only records how many photons it received, it is intrinsically monochrome. Therefore, in order to capture colour, you need to put a Bayer colour filter array in front. Essentially this describes a pattern of red, blue and green filters laid over the pixels. With the filter in place, you know that the value you read from a given pixel represents just that single colour, and by putting red, blue and green filters over a range of pixels on the sensor, you are able to reconstruct the colour of the incoming scene.
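
A toy sketch of the idea, assuming an RGGB layout (real sensors vary, and real demosaicing is far more sophisticated than this):

```python
# Each sensor pixel records a single brightness value; the Bayer filter in
# front decides which primary that value represents. Assume an RGGB tile.
BAYER = [["R", "G"],
         ["G", "B"]]

def channel_at(row, col):
    """Which colour the filter lets through at a given sensor position."""
    return BAYER[row % 2][col % 2]

# A 4x4 patch of raw counts, i.e. monochrome values straight off the sensor.
raw = [
    [120,  40, 118,  39],
    [ 42,  15,  41,  16],
    [119,  41, 121,  40],
    [ 40,  16,  42,  15],
]

# Reconstructing full RGB per pixel (demosaicing) means interpolating the two
# missing channels from neighbouring pixels; here we simply label each sample.
for r in range(len(raw)):
    print([f"{channel_at(r, c)}={raw[r][c]}" for c in range(len(raw[r]))])
```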

Ollie then starts to talk about reducing colour data. We can do this at source by only recording 8, rather than 10, bits of colour, but Ollie shows us a clear demonstration of when that doesn’t look good; typically 8-bit video lets itself down on sunsets, flesh tones or similar subtle gradients. The same principle drives the HDR discussion regarding 10-bit vs. 12-bit. With PQ being built for 12-bit, but realistic live production workflows for the next few years being 10-bit, which HLG expects, there is plenty of water to go under the bridge before we see whether PQ’s 12-bit advantage really comes into its own outside of cinemas. Colour subsampling also gets a thorough explanation, detailing not only 4:4:4 and 4:2:2 but also the less common variants.
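
As a minimal sketch of what horizontal 4:2:2 subsampling does (toy values; real pipelines filter rather than simply averaging):

```python
# 4:2:2 keeps every luma (Y) sample but only one chroma (Cb or Cr) sample
# per pair of pixels horizontally, halving the chroma data.
def subsample_422(chroma_row):
    """Average each horizontal pair of chroma samples into one."""
    return [(chroma_row[i] + chroma_row[i + 1]) / 2
            for i in range(0, len(chroma_row), 2)]

cb = [110, 112, 200, 198, 90, 92, 150, 152]
print(subsample_422(cb))  # [111.0, 199.0, 91.0, 151.0] - half the samples

# 4:4:4 would keep all eight samples; 4:2:0 halves the chroma vertically too.
```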

The next section looks at ‘scopes’, also known as ‘waveform monitors’. Ollie starts with the histogram, which shows you how much of your picture is at a certain brightness, helping you understand how well exposed your picture is overall. With the histogram, the horizontal axis shows brightness, with the left being black and the right being white. The waveform, by contrast, shows brightness on the vertical axis, while the horizontal axis shows you the position in the picture where that brightness occurs. This allows you to directly associate brightness values with objects in the scene. This can be done with the luma signal or the separate RGB channels, which then allows you to understand the colour of that area, and the vectorscope is also shown.
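
To pin down the difference between the two displays, here is a toy sketch using a tiny made-up frame of 8-bit luma values: the histogram throws away position, while the waveform keeps the horizontal position.

```python
# A tiny 4x6 'frame' of 8-bit luma values (made up for illustration).
frame = [
    [16, 16, 180, 180, 90, 90],
    [16, 20, 200, 190, 95, 92],
    [18, 16, 210, 185, 88, 94],
    [16, 17, 190, 195, 91, 90],
]

# Histogram: count pixels per brightness band; where they sit is discarded.
histogram = {}
for row in frame:
    for y in row:
        band = y // 32  # coarse 8-band histogram
        histogram[band] = histogram.get(band, 0) + 1
print(dict(sorted(histogram.items())))  # {0: 8, 2: 8, 5: 5, 6: 3}

# Waveform: for each column (horizontal picture position) list the luma
# values found there, so bright areas can be tied back to where they are.
waveform = [sorted({row[c] for row in frame}) for c in range(len(frame[0]))]
print(waveform)
```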

Ollie then moves on to discussing balancing contrast, looking at lift (lifting the black point), gamma (affecting the midtones) and gain (altering the white point), and mixing that with shadows, midtones and highlights. He then talks about how the surroundings affect your perceived brightness of the picture and shows it with grey boxes in different surrounds. Ollie demonstrates this as part of the slides in the presentation very effectively and talks about the need for standards to control this. When grading, he discusses the different gamma that screens should be set to for different types of work and the standard which says that the ambient light in the surrounding room should be about 10% as bright as the screen displaying pure white.
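
A minimal sketch of those three controls applied to a normalised 0.0–1.0 signal; this is one common formulation, and real grading tools differ in the exact maths:

```python
def lift_gamma_gain(y, lift=0.0, gamma=1.0, gain=1.0):
    """Apply lift (black point), gamma (midtones) and gain (white point)
    to a value y in the 0.0-1.0 range."""
    y = gain * y + lift * (1.0 - y)   # lift raises blacks, gain scales whites
    y = max(0.0, min(1.0, y))
    return y ** (1.0 / gamma)         # gamma bends the midtones the most

# Blacks come up slightly, whites come down a touch, midtones brighten.
for y in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(y, round(lift_gamma_gain(y, lift=0.05, gamma=1.2, gain=0.95), 3))
```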

The last part of the talk presents case studies of programmes and films, looking at the way they used colour, saturation, costume and lighting to enhance and underpin the story being told. The takeaway is the need to think of colour as a narrative element, something that can be informed by and understood alongside wardrobe, the intended visual look and lighting. The conversation about colour and grading should start early in the filming process, and a key point Ollie makes is that this is not a conversation that costs a lot, but having it early in the production is priceless in terms of its impact on the cost and results of the project.

Watch now!
Speakers

Ollie Kenchington
Owner & Creative Director,
Korro Films, Korro Academy

Video: Analog Luma – A History and Explanation of Video

There are many video fundamentals in today’s video, looking at how we see light and how we can represent it in a video signal. Following on from last week’s look at analogue 525-line video, we take a deeper dive into light and colour.

The video starts by examining how white light can be split into its component colours, the primaries, and how these can be re-combined in different amounts to create different colours. It then moves on to examine how the proportions of colours which create ‘white’ light aren’t as even as you might imagine. This helps us understand how to create brighter and dimmer light, which is described by luminance. We’re introduced to the CIE 2D and 3D colour graphs, helping us to understand colour space and colour volume.

Modern video, even if analogue, is acquired with red, green and blue as separate signals. This means that if we want a grey-scale video signal, i.e. luminance only, we need to combine them using the proportions discussed earlier. This biased version of luminance is what is called ‘luma’, explains the video from the Displaced Gamers YouTube channel.

On top of human perception, much of the 20th century was dominated by CRT (Cathode Ray Tube) TVs, which don’t respond linearly to electrical voltage, meaning that if you double the voltage, the brightness doesn’t necessarily double. To compensate for that, ‘gamma correction’ is applied on acquisition so that playback on a CRT produces a linear response.
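
As a rough sketch of the idea, assuming a plain power law with the classic exponent of 2.2 (real transfer characteristics such as Rec. 709’s differ slightly and include a linear toe):

```python
GAMMA = 2.2  # classic CRT-style exponent, assumed here for simplicity

def encode(linear_light):
    """Gamma-correct a linear light value (0.0-1.0) at acquisition."""
    return linear_light ** (1.0 / GAMMA)

def crt_display(signal):
    """The CRT's non-linear response: roughly the signal raised to GAMMA."""
    return signal ** GAMMA

# Encoding then displaying cancels out, giving a linear end-to-end response.
for light in (0.1, 0.25, 0.5, 1.0):
    print(light, round(crt_display(encode(light)), 3))
```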

Pleasantly, an oscilloscope is wheeled out next to look at a real analogue video waveform, demonstrating the shape of not only the sync pulses but the luminance waveform itself and how it corresponds to what is seen on the TV screen. The video then finishes with a brief look at how colour is added in NTSC, PAL and SECAM signals. A prelude, perhaps, to a future video.

Watch now!

Speaker

Chris Kennedy
Displaced Gamers, YouTube Channel

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill whatever their starting point. Videos like this, giving an introduction to a large number of topics, are far too rare. For those starting out or who need to revise a topic, this really hits the mark, particularly as it covers many newer topics.

John Mailhot takes the lead on SMPTE 2110 explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite. He talks in more detail about the audio and metadata parts of the standard suite.

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use such as 709, 2100 and P3 before turning to file formats. With the advent of HDR video and displays which can show bright video, Eric takes some time to explain why this could represent a problem for visual health as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits that it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity for improving workflows and adding more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces video codecs AV1 and VVC both, in their own way, successors to HEVC/h.265 as well as the two transport protocols SRT and RIST which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR who introduces immersive audio, measuring sound output (SPL) from speakers and the interesting problem of forward speakers in cinemas. They have long been placed behind the screen, which has meant the screens have to be perforated to let the sound through, which itself interferes with the sound. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move; with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group