Video: Analog Luma – A History and Explanation of Video

Today's video is packed with video fundamentals, looking at how we see light and how we can represent it in a video signal. Following on from last week's look at analogue 525-line video, we take a deeper dive into light and colour.

The video starts by examining how white light can be split into its primary colours and how these primaries can be re-combined in different amounts to create other colours. It then moves on to show that the proportions of the primaries which create 'white' light aren't as even as you might imagine. This allows us to understand how to create brighter and dimmer light, a quantity called luminance. We're introduced to the CIE 2D and 3D colour graphs, helping us to understand colour space and colour volume.

Modern video, even if analogue, is acquired with red, green and blue as separate signals. This means that if we want a grey-scale video signal, i.e. luminance only, we need to combine them using the proportions discussed earlier. This weighted version of luminance is what is called 'luma', explains the video from the Displaced Gamers YouTube channel.
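By way of illustration, and not taken from the video itself, here's a minimal Python sketch of that weighted sum using the Rec. 601 coefficients of the NTSC era; other systems, such as Rec. 709, use slightly different weights:

```python
def rec601_luma(r, g, b):
    """Weighted sum of R', G', B' (each 0.0-1.0) giving luma Y'.

    The weights reflect the eye's uneven sensitivity to the primaries:
    green contributes most to perceived brightness, blue the least.
    (Rec. 601 weights, as used in NTSC-era systems; Rec. 709 uses
    0.2126 / 0.7152 / 0.0722 instead.)
    """
    return 0.299 * r + 0.587 * g + 0.114 * b

# Full-strength "white" has luma 1.0, while pure blue looks far
# dimmer than pure green at the same signal level.
print(rec601_luma(1.0, 1.0, 1.0))  # 1.0
print(rec601_luma(0.0, 1.0, 0.0))  # 0.587
print(rec601_luma(0.0, 0.0, 1.0))  # 0.114
```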

On top of human perception, much of the 20th century was dominated by CRT (Cathode Ray Tube) TVs, which don't respond linearly to electrical voltage: if you double the voltage, the brightness doesn't necessarily double. To compensate for that, 'gamma correction' is applied on acquisition so that playback on a CRT produces a linear response.
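A simple power-law model shows the idea; this sketch of ours assumes a textbook gamma of 2.2 and ignores the linear segment near black that real camera transfer functions include:

```python
GAMMA = 2.2  # a typical approximation of a CRT's response exponent

def gamma_encode(linear_light):
    """Camera-side correction: compress linear light (0.0-1.0) for transmission."""
    return linear_light ** (1.0 / GAMMA)

def crt_display(signal_voltage):
    """CRT-side response: light output rises faster than voltage."""
    return signal_voltage ** GAMMA

# The two curves cancel, so the end-to-end response is (near) linear.
for light in (0.25, 0.5, 1.0):
    print(light, crt_display(gamma_encode(light)))
```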

Pleasantly, an oscilloscope is wheeled out next to look at a real analogue video waveform, demonstrating the shape of not only the sync pulses but the luminance waveform itself, and how that waveform corresponds to what we see on the TV screen. The video then finishes with a brief look at how colour is added in NTSC, PAL and SECAM signals. A prelude, perhaps, to a future video.

Watch now!

Speaker

Chris Kennedy
Displaced Gamers, YouTube Channel

Video: What is 525-Line Analog Video?

With an enjoyable retro feel, this accessible video on how analogue video works is useful for anyone who has to work with SDI rasters, interlaced video, black and burst, subtitles and more. It'll remind those of us who once knew a few things since forgotten, and it's an enjoyable primer on the topic for anyone coming in fresh.

Displaced Gamers is a YouTube channel whose focus on video games is an enjoyable addition to this video, which starts by explaining why analogue 525-line video is the same thing as 480i. Using slow-motion footage of a CRT (Cathode Ray Tube) TV, the video explains the interlacing technique and why consoles and computers would often use 240p instead.

We then move on to timing, looking at the time spent drawing a line of video, 52.7 microseconds, and the need for horizontal and vertical blanking. Blanking periods, the video explains, are there to cover the time the CRT spends moving the electron beam back from one side of the screen to the other. As the beam was steered by electromagnets, it had to be turned off, blanked, while their field, and hence the position of the beam, was changing.
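To put numbers on this, here's a short worked example using standard NTSC figures (not all of which are quoted in the video): the total line period follows from the line rate, and whatever is left after the 52.7 microseconds of active picture is the horizontal blanking interval.

```python
# NTSC (525-line) timing, derived from the published line rate.
LINE_RATE_HZ = 15_734.264        # lines per second (525 lines x 29.97 frames/s)
ACTIVE_LINE_US = 52.7            # active picture time per line, as quoted above

line_period_us = 1_000_000 / LINE_RATE_HZ
h_blanking_us = line_period_us - ACTIVE_LINE_US

print(f"Total line period:   {line_period_us:.2f} us")   # ~63.56 us
print(f"Horizontal blanking: {h_blanking_us:.2f} us")    # ~10.9 us

# Interlace arithmetic: two 262.5-line fields make one 525-line frame.
fields_per_s = LINE_RATE_HZ / 262.5
print(f"Field rate:          {fields_per_s:.2f} Hz")      # ~59.94 Hz
print(f"Frame rate:          {fields_per_s / 2:.2f} Hz")  # ~29.97 Hz
```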

The importance of these housekeeping manoeuvres for older computers was that this was time they could use to perform calculations, free from the task of writing data into the video buffer. But blanking was not just useful for computers: broadcasters could use some of it to insert data, and they still do. We see in this video a VHS tape played back with the blanking clearly visible and the data lines flashing away.

For those who still work with this technology, for those who like history, for those who are intellectually curious and for those who like reminiscing, this is an enjoyable video and ideal for sharing with colleagues.

Watch now!
Speaker

Chris Kennedy
Displaced Gamers, YouTube Channel

Webinar: Networking Fundamentals


Date: Thursday 12th December, 1pm EST / 18:00 GMT

Networking is increasingly important throughout the broadcast chain. This webcast picks out the fundamentals that underpin SMPTE ST 2110 and that help deliver video streaming services. We’ll piece them together and explain how they work, leaving you with more confidence in talking about and working with technologies such as multicast video and HTTP Live Streaming (HLS).

Register now!
Speaker

Russell Trafford-Jones
Editor, https://TheBroadcastKnowledge.com
Manager, Support & Services, Techex

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, giving an introduction to a large number of topics, are far too rare. For those starting out, or those who need to revise a topic, this really hits the mark, particularly as so many of the topics covered are new.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it's built on separate media (essence) flows. He covers how synchronisation is maintained and gives an overview of the many parts of the SMPTE ST 2110 suite, talking in more detail about its audio and metadata parts.
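To make the synchronisation idea concrete, here's a small sketch of our own, not drawn from the talk: because every essence stream stamps its RTP packets against the same PTP-derived timeline, a receiver can realign video and audio even though they travel as entirely separate flows. The 90 kHz and 48 kHz media clock rates are the standard ones for ST 2110-20 video and ST 2110-30 audio.

```python
# Every ST 2110 essence stream counts RTP timestamps in its own media
# clock rate, but all clocks are derived from the same PTP wall-clock time.
VIDEO_CLOCK_HZ = 90_000   # ST 2110-20 video media clock
AUDIO_CLOCK_HZ = 48_000   # ST 2110-30 audio media clock (48 kHz sampling)

def media_ticks(ptp_seconds: float, clock_hz: int) -> int:
    """Media-clock ticks since the PTP epoch; a real RTP timestamp
    is just the low 32 bits of this count."""
    return round(ptp_seconds * clock_hz)

capture_time = 1_700_000_000.020  # one instant, seen by both devices via PTP

video_ts = media_ticks(capture_time, VIDEO_CLOCK_HZ)
audio_ts = media_ticks(capture_time, AUDIO_CLOCK_HZ)

# Dividing by each clock rate recovers the same point on the shared
# timeline, which is how separate flows are re-aligned at the receiver.
print(video_ts / VIDEO_CLOCK_HZ)  # ~1700000000.02 s
print(audio_ts / AUDIO_CLOCK_HZ)  # ~1700000000.02 s
```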

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour spaces, the CIE model, and the colour spaces we use such as Rec. 709, Rec. 2100 and P3, before turning to file formats. With the advent of HDR video and displays which can show very bright pictures, Eric takes some time to explain why this could represent a problem for visual health, as we don't yet fully understand how such displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.
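As a flavour of how HDR brightness is actually specified, this is our own sketch (not from the talk) of the SMPTE ST 2084 'PQ' EOTF used by Rec. 2100, which maps a 0-1 signal value to absolute light output in cd/m², i.e. nits:

```python
# SMPTE ST 2084 (PQ) EOTF constants, as published in the standard.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal: float) -> float:
    """Map a non-linear PQ signal value (0.0-1.0) to display light in nits."""
    p = signal ** (1 / M2)
    return 10_000 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

# A code value of 1.0 is defined as 10,000 nits; mid-range values are
# far dimmer, reflecting the eye's non-linear response to brightness.
for v in (0.0, 0.5, 0.75, 1.0):
    print(f"{v:.2f} -> {pq_eotf(v):8.1f} nits")
```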

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and to bring more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own ways, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT overview.
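Both protocols recover losses with negative acknowledgements (NACKs): the receiver spots gaps in the packet sequence numbers and asks the sender to resend just those packets from a short buffer. Here's a toy sketch of our own of that idea, greatly simplified and with no real networking or timing logic:

```python
from collections import deque

class RetransmitBuffer:
    """Sender side: keep recently sent packets so NACKed ones can be resent."""

    def __init__(self, depth: int = 1024):
        self.depth = depth
        self.packets = {}      # sequence number -> payload
        self.order = deque()   # sequence numbers, oldest first

    def send(self, seq: int, payload: bytes) -> None:
        self.packets[seq] = payload
        self.order.append(seq)
        if len(self.order) > self.depth:  # too old to be worth resending
            self.packets.pop(self.order.popleft(), None)

    def retransmit(self, seq: int):
        return self.packets.get(seq)      # None if it has aged out

def find_gaps(received_seqs):
    """Receiver side: sequence numbers missing between the lowest and
    highest seen, i.e. the packets to NACK."""
    have = set(received_seqs)
    return [s for s in range(min(have), max(have)) if s not in have]

# Packets 3 and 4 were lost in transit, so the receiver NACKs them.
print("NACK:", find_gaps([0, 1, 2, 5, 6]))  # NACK: [3, 4]
```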

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers, and the interesting problem of forward speakers in cinemas. They have long been placed behind the screen, which has meant the screens have to be perforated to let the sound through, and that perforation itself interferes with the sound. Now that cinema screens are changing to solid displays, not completely dissimilar to large outdoor video screens, the speakers have to move; with them out of the line of sight, how can we keep the sound in the right place for the audience?
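Since SPL comes up, a quick worked example of ours: sound pressure level is simply the pressure ratio on a logarithmic scale, referenced to 20 micropascals, the nominal threshold of hearing.

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, ~threshold of hearing

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB relative to 20 uPa."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # 0 dB   (threshold of hearing)
print(spl_db(1.0))    # ~94 dB (1 Pa, a common calibrator level)
print(spl_db(20.0))   # 120 dB (approaching the threshold of pain)
```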

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group