Video: Analog Luma – A History and Explanation of Video

Today's video covers many video fundamentals, looking at how we see light and how we can represent it in a video signal. Following on from last week's look at analogue 525-line video, we take a deeper dive into light and colour.

The video starts by examining how white light can be split into its component colours, the primaries, and how these can be re-combined in different amounts to create different colours. It then moves on to examine how the proportions of the primaries which create 'white' light aren't as even as you might imagine. This allows us to understand how to represent brighter and dimmer light, which is called luminance. We're introduced to the CIE 2D and 3D colour charts, helping us to understand colour space and colour volume.

Modern video, even if analogue, is acquired with red, green and blue as separate signals. This means that if we want a grey-scale video signal, i.e. luminance only, we need to combine them using the proportions discussed earlier. This weighted version of luminance is what is called 'luma', explains the video from the Displaced Gamers YouTube channel.
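As an illustration of those proportions, here is a minimal sketch in Python, assuming the Rec. 601 weights associated with 525-line standard-definition video; the exact coefficients depend on the colour system in use.

def luma_601(r, g, b):
    # r, g and b are gamma-corrected values in the range 0.0 to 1.0.
    # Rec. 601 weights; Rec. 709 (HD) uses 0.2126, 0.7152 and 0.0722 instead.
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green contributes far more to perceived brightness than pure blue:
print(luma_601(0.0, 1.0, 0.0))  # 0.587
print(luma_601(0.0, 0.0, 1.0))  # 0.114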

On top of human perception, much of the 20th century was dominated by CRT (Cathode Ray Tube) TVs, which don't respond linearly to voltage: if you double the voltage, the brightness doesn't necessarily double. To compensate for this, 'gamma correction' is applied on acquisition so that playback on a CRT produces a linear response.
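As a rough sketch of the idea, gamma correction can be modelled as raising the normalised signal to a power; the value of 2.2 below is a commonly quoted approximation of CRT behaviour rather than the exact transfer function of any one standard.

GAMMA = 2.2  # approximate CRT exponent, used here purely for illustration

def gamma_encode(linear_light):
    # Camera side: pre-compensate so CRT playback ends up roughly linear.
    return linear_light ** (1.0 / GAMMA)

def crt_display(voltage):
    # CRT side: light output rises faster than the applied voltage.
    return voltage ** GAMMA

# The pre-correction and the CRT's response cancel out:
print(crt_display(gamma_encode(0.5)))  # ~0.5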

Pleasantly, an oscilloscope is wheeled out next to look at a real analogue video waveform, demonstrating the shape of not only the sync pulses but also the luminance waveform itself, and how it corresponds to what is seen on the TV screen. The video then finishes with a brief look at how colour is added to NTSC, PAL and SECAM signals. A prelude, perhaps, to a future video.
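For context when reading such a waveform, the levels are usually quoted in IRE units; this small sketch lists the classic NTSC figures, assuming the 7.5 IRE black setup used in the US.

# Approximate NTSC composite levels in IRE units (US variant, with setup).
NTSC_LEVELS_IRE = {
    "sync tip": -40,        # bottom of the horizontal/vertical sync pulses
    "blanking": 0,          # reference level between active picture lines
    "black (setup)": 7.5,   # picture black sits slightly above blanking
    "peak white": 100,      # maximum luminance level
}

for name, ire in NTSC_LEVELS_IRE.items():
    print(f"{name:>14}: {ire} IRE")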

Watch now!

Speaker

Chris Kennedy
Displaced Gamers, YouTube Channel

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill whatever their starting point. Videos like this, which give an introduction to a large number of topics, are far too rare. For those starting out or who need to revise a topic, this really hits the mark, particularly as it covers many new topics.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it's built on separate media (essence) flows. He covers how synchronisation is maintained and gives an overview of the many parts of the SMPTE ST 2110 suite, talking in more detail about the audio and metadata parts.

Eric Gsell discusses digital archiving and the considerations which come with deciding which formats to use. He explains colour space, the CIE model and the colour spaces we use, such as 709, 2100 and P3, before turning to file formats. With the advent of HDR video and displays which can show very bright video, Eric takes some time to explain why this could represent a problem for visual health, as we don't fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and add more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers and looking at the interesting problem of front speakers in cinemas. They have long been placed behind the screen, which has meant the screens have to be perforated to let the sound through, which in turn interferes with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move; but with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!

Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group