Video: How many Nits is Color Bars?

IABM NITS Webinar

Update: This webinar is now available on-demand. Links in this article have been updated to match.

Brightness, luminance, luma, nits and candela: what are the differences between these similar terms? If you've not been working closely with displays and video, you may not know, but as HDR grows in adoption, it pays to have at least a passing understanding of the terms in use.

Date: Thursday January 23rd – 11am ET / 16:00 GMT

Last week, The Broadcast Knowledge covered the difference between luma and luminance in this video from YouTube channel Displaced Gamers. It's a wide-ranging video which explains many of the related fundamentals of human vision and analogue video, much of which is relevant to this webinar.

To explain not only what these terms mean, but also how we use them to set up our displays, the IABM have asked Norm Hurst from SRI, often known as Sarnoff, to come in and discuss his work researching test patterns. SRI makes many test patterns which reveal how your display is, or isn't, working and also expose some of the processing the signal has undergone on its journey before it even reached the display. In many cases these test patterns tell their story without electronic meters or analysers, but where brightness is concerned, there is still a place for photometers, colour analysers and other associated meters.

HDR and its associated Wide Colour Gamut (WCG) bring extra complexity in ensuring your monitor is set up correctly, particularly as many monitors can't reproduce some of the brightness levels requested by the incoming signal and have to do their best to accommodate them. Being able to assess and understand, both operationally and academically, how the display is performing and affecting the video is of prime importance. Similarly, colours, as ever, are prone to shifting as they are processed, attenuated and/or clipped.
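
To make "brightness levels requested by the incoming signal" concrete, here is a minimal sketch of the PQ EOTF defined in SMPTE ST 2084 (as used by HDR10), which maps a normalised signal value to a luminance in nits, i.e. cd/m²:

```python
# Minimal sketch of the PQ (SMPTE ST 2084) EOTF: signal value -> nits.
m1 = 2610 / 16384          # constants as defined in ST 2084
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_to_nits(signal: float) -> float:
    """Convert a normalised PQ signal (0.0-1.0) to luminance in cd/m2."""
    e = signal ** (1 / m2)
    return 10000 * (max(e - c1, 0) / (c2 - c3 * e)) ** (1 / m1)

print(pq_to_nits(1.0))    # 10000 nits: the format's ceiling
print(pq_to_nits(0.5))    # ~92 nits
print(pq_to_nits(0.75))   # ~1000 nits, beyond many consumer displays
```

A signal three quarters of the way up the scale already asks for around 1,000 nits, which is exactly the kind of request a monitor may be unable to honour without clipping or tone-mapping.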

This free webinar from the IABM is led by CTO Stan Moote.

Watch now!
Speaker

Norm Hurst
Senior Principal Research Engineer,
SRI International SARNOFF
Stan Moote
CTO,
IABM

Webinar: ATSC 3.0 Signaling, Delivery, and Security Protocols

ATSC 3.0 is bringing IP delivery to terrestrial broadcast. Streaming data live over the air is no mean feat, but it can nevertheless be achieved with standard protocols such as MPEG DASH. The difficulty is telling the other end what it's receiving and making sure that security is maintained so that no one can insert unintended media or data.

In the second of this webinar series from the IEEE BTS, Adam Goldberg digs deep into two standards which form part of ATSC 3.0 to explain how security, delivery and signalling are achieved. Like other recent standards, such as SMPTE's 2022 and 2110 families, ATSC 3.0 is really a suite of documents. Starting from the root document A/300, there are currently twenty further documents describing the physical layer (as we learnt last week in the IEEE BTS webinar from Sony's Luke Fay), the management and protocol layer, the application and presentation layer, as well as the security layer. In this talk Adam, who is chair of a specialist group on ATSC 3.0 security and vice-chair of one on management and protocols, explains what's in documents A/331 and A/360, which between them define signalling, delivery and security for ATSC 3.0.

Security in ATSC 3.0
One of the benefits of ATSC 3.0's drive into IP and streaming is that it can base itself on widely developed and understood standards which are already in service in other industries. Security is no different, using the same base technology that secure websites use the world over. Still colloquially known by its old name, SSL, encrypted communication with websites has been through several generations since the world first saw 'HTTPS' in the address bar. TLS 1.2 and 1.3 are the encryption protocols used to secure and authenticate data within ATSC 3.0, along with X.509 certificates.
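
As a general illustration (not ATSC-specific), here is a minimal sketch of a Python client insisting on those same TLS versions; certificate verification against trusted roots is on by default:

```python
import socket
import ssl

# The default context verifies the server's certificate chain
# against the system's trusted root store.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # allow only TLS 1.2/1.3

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                  # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])   # the authenticated identity
```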

Authentication vs Encryption
The importance of authentication alongside encryption is hard to overstate. Encryption allows the receiver to ensure that the data wasn’t changed during transport and gives assurance that no one else could have decoded a copy. It provides no assurance that the sender was actually the broadcaster. Certificates are the key to ensuring what’s called a ‘chain of trust’. The certificates, which are also cryptographically signed, match a stored list of ‘trusted parties’ which means that any data arriving can carry a certificate proving it did, indeed, come from the broadcaster or, in the case of apps, a trusted third party.
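
To make one link of that chain concrete, here is a minimal sketch using Python's cryptography package, assuming RSA-signed certificates (ATSC 3.0's exact certificate profile is defined in A/360): it checks that a certificate really was signed by its claimed issuer.

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def is_signed_by(cert_pem: bytes, issuer_pem: bytes) -> bool:
    """One link in a chain of trust: was `cert` signed by `issuer`?"""
    cert = x509.load_pem_x509_certificate(cert_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)
    try:
        issuer.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,   # the signed portion of the cert
            padding.PKCS1v15(),           # assumes an RSA-signed cert
            cert.signature_hash_algorithm,
        )
        return True
    except Exception:
        return False
```

A receiver repeats this check up the chain until it reaches a root it already trusts, which is what turns "this data is encrypted" into "this data came from the broadcaster".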

Signalling and Delivery
Telling the receiver what to expect and what it's getting is a big topic and is dealt with in many places within the ATSC 3.0 suite. The Service List Table (SLT) provides the data needed for the receiver to get a handle on what's available very quickly. This, in turn, points to the correct Service Layer Signaling (SLS) which, for a specific service, provides the detail needed to access the media components within, including the languages available, captions, audio and emergency services.
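
As a loose sketch of the idea, the SLT is carried as XML which a receiver walks to discover services; the element and attribute names below are illustrative rather than quoted from A/331, which defines the real schema:

```python
import xml.etree.ElementTree as ET

# Illustrative SLT-style XML; the real A/331 tables carry namespaces
# and many more attributes than shown here.
slt_xml = """<SLT bsid="50">
  <Service serviceId="5004" shortServiceName="News24">
    <BroadcastSvcSignaling slsProtocol="1"
                           slsDestinationIpAddress="239.255.10.4"/>
  </Service>
</SLT>"""

for service in ET.fromstring(slt_xml).iter("Service"):
    sls = service.find("BroadcastSvcSignaling")
    print("service", service.get("serviceId"),
          service.get("shortServiceName"),
          "-> SLS at", sls.get("slsDestinationIpAddress"))
```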

ATSC 3.0 Receiver Protocol Stack

Media delivery is achieved with two technologies: ROUTE (Real-Time Object Delivery over Unidirectional Transport), an evolution of FLUTE which the 3GPP specified to deliver MPEG DASH over LTE networks, and MMTP (MPEG Media Transport Protocol), an MPEG standard which, like MPEG DASH, is based on the ISO BMFF container format which we covered in a previous video here on The Broadcast Knowledge.
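
Since both paths converge on ISO BMFF, a small sketch helps show why the container is so tractable: a file is simply a sequence of length-prefixed 'boxes'. This is a minimal parser; the file name is hypothetical, but any MP4 file shows the same structure.

```python
import struct

def iter_boxes(data: bytes, offset: int = 0):
    """Walk ISO BMFF boxes: each starts with a 32-bit big-endian size
    and a 4-character type such as 'ftyp', 'moov' or 'mdat'."""
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size == 1:
            # A size of 1 means a 64-bit 'largesize' follows the type.
            size, = struct.unpack_from(">Q", data, offset + 8)
        elif size == 0:
            size = len(data) - offset  # box runs to the end of the file
        yield box_type.decode("ascii"), size
        offset += size

with open("segment.mp4", "rb") as f:   # hypothetical DASH media segment
    for box_type, size in iter_boxes(f.read()):
        print(box_type, size)
```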

Register now for this webinar to find out how this all connects together so that we can have safe, connected television displaying the right media at the right time from the right source!

Speaker

Adam Goldberg
Chair, ATSC 3.0 Specialist Group on ATSC 3.0 Security
Vice-chair, ATSC 3.0 Specialist Group on Management and Protocols
Director Technical Standards, Sony Electronics

Video: Analog Luma – A History and Explanation of Video

Today's video is full of video fundamentals, looking at how we see light and how we can represent it in a video signal. Following on from last week's look at analogue 525-line video, we take a deeper dive into light and colour.

The video starts by examining how white light can be split into colours, known as primaries, and how these can be re-combined in different amounts to create different colours. It then moves on to examine how the proportions of the colours which create 'white' light aren't as even as you might imagine. This allows us to understand how to create brighter and dimmer light, which is called luminance. We're also introduced to the CIE 2D and 3D colour graphs, helping us to understand colour space and colour volume.

Modern video, even if analogue, is acquired with red, green and blue as separate signals. This means that if we want a grey-scale video signal, i.e. luminance only, we need to combine them using the proportions discussed earlier. This weighted version of luminance is what is called 'luma', explains the video from the Displaced Gamers YouTube channel.
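
As a minimal sketch of that weighting, these are the Rec. 601 coefficients used in NTSC-era standard definition (HD's Rec. 709 uses different values):

```python
# Rec. 601 luma weights: green dominates because the eye is most
# sensitive to it, while blue contributes least.
def luma_601(r: float, g: float, b: float) -> float:
    """Luma Y' from gamma-corrected R'G'B' components in 0.0-1.0."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma_601(1.0, 1.0, 1.0))  # white -> 1.0
print(luma_601(0.0, 1.0, 0.0))  # green -> 0.587
print(luma_601(0.0, 0.0, 1.0))  # blue  -> 0.114
```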

On top of human perception, much of the 20th century was dominated by CRT (Cathode Ray Tube) TVs, which don't respond linearly to voltage, meaning that if you double the voltage, the brightness doesn't necessarily double. To compensate for this, 'gamma correction' is applied on acquisition so that playback on a CRT produces a linear response.
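
A minimal sketch, assuming the commonly quoted CRT gamma of roughly 2.2, shows the compensation at work:

```python
GAMMA = 2.2  # a typical approximation of a CRT's response

def gamma_encode(linear: float) -> float:   # applied on acquisition
    return linear ** (1 / GAMMA)

def gamma_decode(encoded: float) -> float:  # what the CRT does
    return encoded ** GAMMA

print(gamma_decode(0.5))                 # ~0.22: half the signal, far
                                         # less than half the light
print(gamma_decode(gamma_encode(0.5)))   # ~0.5: pre-correction restores
                                         # the linear response
```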

Pleasantly, an oscilloscope is wheeled out next to look at a real analogue video waveform, demonstrating the shape of not only the sync pulses but the luminance waveform itself, and how this corresponds to the picture rendered on a TV. The video then finishes with a brief look at how colour is added to NTSC, PAL and SECAM signals. A prelude, perhaps, to a future video.

Watch now!

Speaker

Chris Kennedy
Displaced Gamers, YouTube Channel

Video: Where can SMPTE 2110 and NDI co-exist?

When are two video formats better than one? Broadcasters have long sought 'best of breed' systems, matching equipment as closely as possible to their ideal workflow. In this talk, we look at getting the best of both compressed, low-latency video and uncompressed video. NDI, a lightly compressed, ultra-low latency codec, allows full productions in visually lossless video with a field of latency. SMPTE's ST-2110 allows full productions with uncompressed video and almost zero latency.

Bringing together the EBU's Willem Vermost, who paints a picture from the perspective of public broadcasters planning their moves into the IP realm; Marc Risby from UK distributor and integrator Boxer, who brings a more general view of the market's interest; and Will Waters, who spent many years at Newtek, the company that invented NDI, we hear how the two approaches of compressed and uncompressed complement each other.

This panel took place just after the announcement that Newtek had been bought by VizRT, the graphics vendor, which sees a lot of benefit in being able to work in both types of workflow for clients large and small, and which has made Newtek its own entity under the VizRT umbrella to ensure continued focus.

A key differentiator of NDI is its focus on 1-gigabit networking. Its aim has always been to enable 'normal' companies to deploy IP video easily so they can rapidly gain the advantages that IP workflows bring over SDI and other baseband video technologies. A keystone of this strategy is to enable everything to happen on the normal 1Gbit switches which are prevalent in most companies today, as the rough arithmetic below shows. Other key elements of the codec are: a free software development kit, bi-directionality, resolution independence, audio sample-rate agnosticism, tally support, auto-discovery and more.
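
To see why light compression matters at 1Gbit, here's a rough, assumption-laden calculation of the uncompressed bitrate of 1080p60:

```python
# Rough arithmetic: active picture only, ignoring blanking, audio and
# network overheads, assuming 10-bit 4:2:2 sampling (20 bits/pixel).
width, height, fps, bits_per_pixel = 1920, 1080, 60, 20
uncompressed_bps = width * height * fps * bits_per_pixel
print(f"{uncompressed_bps / 1e9:.2f} Gbps")  # ~2.49 Gbps: several times
                                             # what a 1GbE port can carry
```

Even before overheads, a single uncompressed HD stream is well beyond a 1Gbit link, which is the gap NDI's light compression is designed to close.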

In the talk, we discuss the pros and cons of this approach, where interoperability is assured because everyone uses the same receive and transmit code, against having a standard such as SMPTE ST-2110. SMPTE ST-2110 has the benefit of being uncompressed, assuring the broadcaster that they have captured the best possible quality of video, and promises better management at scale, tighter integration into complex workflows, lower latency and the ability to treat the many different essences separately. Whilst we discuss many of the benefits of SMPTE ST-2110, you can get a more detailed overview from this presentation from the IP Showcase.

Watch now!

This panel was produced by IET Media, a technical network within the IET which runs events, talks and webinars for networking and education within the broadcast industry. More information

Speakers

Willem Vermost
At the time, Senior IP Media Technology Architect, EBU
Now, Design and Engineering Manager, VRT
Marc Risby
CTO,
Boxer Group
Will Waters
Formerly Vice President Of Worldwide Customer Success,
Now Head of Global Product Management,
VizRT
Moderator: Russell Trafford-Jones
Exec Member, IET Media
Manager, Support & Services, Techex
Editor, The Broadcast Knowledge