Brightness, luminance, luma, nits and candela. What are the differences between these similar terms? If you’ve not been working closely with displays and video, you may not know, but as HDR grows in adoption it pays to have at least a passing understanding of the terms in use.
Date: Thursday January 23rd – 11am ET / 16:00 GMT
Last week, The Broadcast Knowledge covered the difference between luma and luminance in this video from the YouTube channel DisplacedGamers. It’s a wide-ranging video which explains many of the related fundamentals of human vision and analogue video, much of which is relevant to this webinar.
To explain in detail not only what these terms mean, but also how we use them to set up our displays, the IABM have asked Norm Hurst from SRI, often known as Sarnoff, to come in and discuss his work researching test patterns. SRI make many test patterns which reveal how your display is, or isn’t, working and also expose some of the processing the signal has gone through on its journey before it even reached the display. In many cases these test patterns tell their story without electronic meters or analysers, but where brightness is concerned there is still a place for photometers, colour analysers and other associated meters.
HDR and its associated Wide Colour Gamut (WCG) bring extra complexity in ensuring your monitor is set up correctly, particularly as many monitors can’t show some brightness levels and have to do their best to accommodate these requests from the incoming signal. Being able to assess and understand, both operationally and academically, how the display is performing and affecting the video is of prime importance. Similarly, colours, as ever, are prone to shifting as they are processed, attenuated and/or clipped.
This free webinar from the IABM is led by CTO Stan Moote.
ATSC 3.0 is bringing IP delivery to terrestrial broadcast. Streaming data live over the air is no mean feat, but it can nevertheless be achieved with standard protocols such as MPEG DASH. The difficulty is telling the other end what it’s receiving and making sure that security is maintained, ensuring that no one can insert unintended media or data.
In the second of this webinar series from the IEEE BTS, Adam Goldberg digs deep into two standards which form part of ATSC 3.0 to explain how security, delivery and signalling are achieved. Like other recent standards, such as SMPTE’s 2022 and 2110, we see that we’re really dealing with a suite of documents. Starting from the root document A/300, there are currently twenty further documents describing the physical layer (which we learnt about last week in the IEEE BTS webinar from Sony’s Luke Fay), the management and protocol layer, the application and presentation layer, as well as the security layer. In this talk Adam, who is chair of a group on ATSC 3.0 security and vice-chair of one on management and protocols, explains what’s in documents A/331 and A/360, which between them define signalling, delivery and security for ATSC 3.0.
Security in ATSC 3.0
One of the benefits of ATSC 3.0’s drive into IP and streaming is that it is able to base itself on widely developed and understood standards which are already in service in other industries. Security is no different, using the same base technology that secure websites use the world over. Still colloquially known by its old name, SSL, encrypted communication with websites has seen several generations since the world first saw ‘HTTPS’ in the address bar. TLS 1.2 and 1.3 are the encryption protocols used to secure and authenticate data within ATSC 3.0, along with X.509 certificates for cryptographic signing.
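As a small illustration of how ubiquitous this machinery is, the same TLS stack is exposed by Python’s standard library; this is a general TLS sketch, not ATSC 3.0-specific code, but it shows that verification of the server’s X.509 certificate chain is the default behaviour of the model ATSC 3.0 reuses.

```python
import ssl

# A default client-side TLS context: certificate verification against
# the system's trusted root store and hostname checking are both on by
# default -- the same chain-of-trust model ATSC 3.0 builds on.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```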
Authentication vs Encryption
The importance of authentication alongside encryption is hard to overstate. Encryption gives assurance that no one else could have decoded a copy of the data, but on its own it provides no assurance that the sender was actually the broadcaster. Certificates are the key to establishing what’s called a ‘chain of trust’. The certificates, which are themselves cryptographically signed, are checked against a stored list of trusted parties, which means that any data arriving can carry a certificate proving it did, indeed, come from the broadcaster or, in the case of apps, a trusted third party.
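The distinction can be made concrete with a toy sketch. ATSC 3.0 itself uses X.509 certificate chains and asymmetric signatures; here a shared-key HMAC stands in as the simplest possible authentication tag, showing that a receiver can confirm both the origin and the integrity of a payload, something encryption alone does not give you. The key and payload names are purely illustrative.

```python
import hashlib
import hmac

# Toy stand-in for ATSC 3.0's signature checks: a keyed tag lets the
# receiver verify that the payload came from a holder of the key and
# was not altered in transit.

def sign(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(key: bytes, payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(key, payload), tag)

broadcaster_key = b"key-known-to-trusted-parties"  # illustrative
segment = b"media segment bytes"
tag = sign(broadcaster_key, segment)

print(verify(broadcaster_key, segment, tag))       # True: genuine
print(verify(broadcaster_key, b"tampered", tag))   # False: rejected
```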
Signalling and Delivery
Telling the receiver what to expect and what it’s getting is a big topic, dealt with in many places within the ATSC 3.0 suite. The Service List Table (SLT) provides the data needed for the receiver to get a handle on what’s available very quickly. This in turn points to the correct Service Layer Signaling (SLS) which, for a specific service, provides the detail needed to access the media components within, including the languages available, captions, audio and emergency services.
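To give a feel for what a receiver does with a service list, here is a hypothetical, simplified table in the spirit of the SLT; the element and attribute names are illustrative stand-ins, not the exact A/331 schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified service list -- names are illustrative,
# not the exact A/331 SLT schema.
SLT_XML = """<SLT>
  <Service serviceId="1001" shortServiceName="NEWS"/>
  <Service serviceId="1002" shortServiceName="MOVIES"/>
</SLT>"""

def list_services(xml_text: str):
    """Return (serviceId, shortServiceName) pairs from the table."""
    return [(s.get("serviceId"), s.get("shortServiceName"))
            for s in ET.fromstring(xml_text).iter("Service")]

print(list_services(SLT_XML))
# [('1001', 'NEWS'), ('1002', 'MOVIES')]
```

A real receiver would then follow each service’s entry to its Service Layer Signaling for the media-level detail.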
ATSC 3.0 Receiver Protocol Stack
Media delivery is achieved with two technologies: ROUTE (Real-Time Object Delivery over Unidirectional Transport), an evolution of FLUTE, which 3GPP specified to deliver MPEG DASH over LTE networks, and MMTP (MPEG Media Transport Protocol), an MPEG standard which, like MPEG DASH, is based on the ISO BMFF container format, which we covered in a previous video here on The Broadcast Knowledge.
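The ISO BMFF container mentioned above has a famously simple outer structure: a sequence of “boxes”, each starting with a 4-byte big-endian size and a 4-byte type. A minimal sketch of walking the top-level boxes, with a tiny hand-built file for demonstration:

```python
import struct

def parse_boxes(data: bytes):
    """Yield (box_type, payload) for each top-level ISO BMFF box."""
    offset = 0
    while offset < len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        header = 8
        if size == 1:   # 64-bit "largesize" follows the type field
            size = struct.unpack(">Q", data[offset + 8:offset + 16])[0]
            header = 16
        elif size == 0:  # box extends to the end of the file
            size = len(data) - offset
        yield box_type.decode("ascii"), data[offset + header:offset + size]
        offset += size

# A minimal hand-built example: an 'ftyp' box followed by an empty 'moov'.
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"isom" + struct.pack(">I", 0)
moov = struct.pack(">I4s", 8, b"moov")
print([t for t, _ in parse_boxes(ftyp + moov)])  # ['ftyp', 'moov']
```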
Register now for this webinar to find out how this all connects together so that we can have safe, connected television displaying the right media at the right time from the right source!
Chair, ATSC 3.0 Specialist Group on ATSC 3.0 Security
Vice-chair, ATSC 3.0 Specialist Group on Management and Protocols
Director Technical Standards, Sony Electronics
ATSC 3.0 brings IP delivery to over-the-air TV, marking a major change in delivery to the home. For the first time video, audio and other data are all delivered as network streams, allowing the services available to TV viewers at home to modernise and merge with online streaming services, better matching the viewing habits of today. ATSC 3.0 deployments are starting in the USA and it has already been rolled out in South Korea, where it was used for the XXIII Olympic Winter Games in 2018.
Whilst the move to IP is transformational, ATSC 3.0 delivers a whole slew of improvements to the ATSC standard in RF, bandwidth, codecs and more. In this, the first of three webinars from the IEEE BTS focussing on ATSC 3.0, we look at the physical layer with Luke Fay, chair of the ATSC 3.0 group and a Senior Manager of Technical Standards at Sony.
What is the Physical Layer?
The physical layer refers to the method by which data gets from one place to another; in this case, we’re talking about transmission over the air by RF. Whilst this isn’t, in some ways, as physical as a copper cable, we have to remember that, at a basic level, communication is about making a voltage change in place A produce a corresponding voltage change in place B. The message physically moves from A to B, and the medium it uses and the way it manipulates that medium are what we refer to as the physical layer.
One of the reasons for COFDM’s wide adoption is its guard intervals: times when the transmitter is guaranteed not to send any data. This gives the receiver time to collect any data which arrives late due to multi-path reflections or any other reason. It also means that, with COFDM, you get better performance by running a network of nearby transmitters on the same frequency, known as a Single Frequency Network (SFN). A more distant transmitter’s signal will arrive later and, if it falls within the guard interval, will be used to reinforce the directly received signal. Counter-intuitively for those used to analogue transmission, running an SFN actually improves reception.
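The link between guard interval and SFN spacing is simple arithmetic: an echo is only useful if its extra path delay fits within the guard interval, so the maximum tolerable path difference is the speed of light times that interval. The 200 µs figure below is illustrative, not a specific ATSC 3.0 mode.

```python
# Back-of-the-envelope: how much farther can an SFN transmitter's
# signal travel and still land inside the guard interval?
# The 200 microsecond guard interval is illustrative only.

SPEED_OF_LIGHT = 299_792_458  # metres per second

def max_sfn_path_difference_km(guard_interval_s: float) -> float:
    """Extra path length an echo can travel and still arrive
    within the guard interval."""
    return SPEED_OF_LIGHT * guard_interval_s / 1000

print(round(max_sfn_path_difference_km(200e-6), 1))  # 60.0 (km)
```

So with a 200 µs guard interval, transmitters roughly 60 km apart on the same frequency still reinforce rather than interfere.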
Multiple operating points to match the business case
Another important feature of ATSC 3.0 at the physical layer is the ability to choose the robustness of the signal and to have multiple transmissions simultaneously using different levels of robustness. These multiple transmissions are called pipes. As many of us will be familiar with, a high-bandwidth signal can be fragile and easily corrupted by interference. Putting resilience into the signal uses up bandwidth, either by using some of the capacity for error checking and error recovery data, or simply by slowing down the rate at which the signal is sent which, of course, means fewer bits can be sent in the same time window.
Because bandwidth and resilience are a balancing act, each fighting against the other, it’s important for stations to be able to choose what’s right for them and their business case. A highly robust signal that penetrates indoors can be very useful for targeting reception on mobile devices, and ATSC 3.0 can actually achieve reception when the signal is below the noise floor, i.e. at a negative signal-to-noise ratio. Alternatively, a higher-bandwidth service delivering UHD at around 20Mbps can be achieved by using 64-QAM instead of 16-QAM.
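The 64-QAM versus 16-QAM trade-off above comes straight from how many bits each constellation point can encode: a constellation of M points carries log2(M) bits per symbol, so denser constellations carry more data but leave less room between points for noise.

```python
import math

# Bits carried per transmitted symbol for an M-point QAM constellation.
# Denser constellations carry more bits but are less robust to noise.

def bits_per_symbol(constellation_points: int) -> int:
    return int(math.log2(constellation_points))

for m in (16, 64, 256):
    print(f"{m}-QAM: {bits_per_symbol(m)} bits/symbol")
# 16-QAM: 4 bits/symbol
# 64-QAM: 6 bits/symbol
# 256-QAM: 8 bits/symbol
```

At the same symbol rate, moving from 16-QAM to 64-QAM therefore raises raw capacity by 50%, which is what makes the higher-bandwidth UHD service possible.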
Networking is increasingly important throughout the broadcast chain. This webcast picks out the fundamentals that underpin SMPTE ST 2110 and that help deliver video streaming services. We’ll piece them together and explain how they work, leaving you with more confidence in talking about and working with technologies such as multicast video and HTTP Live Streaming (HLS).
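As a taste of the HLS side of that webcast, an HLS media playlist is just a text file listing segment durations and URLs. A minimal sketch of reading one, assuming a simple VOD playlist (real players use a full parser and the segment names here are made up):

```python
# Minimal sketch of reading an HLS (M3U8) media playlist.
# Segment names and durations below are illustrative.

PLAYLIST = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
segment0.ts
#EXTINF:6.0,
segment1.ts
#EXT-X-ENDLIST
"""

def parse_segments(playlist: str):
    """Return (uri, duration_seconds) pairs from a media playlist."""
    segments, duration = [], None
    for line in playlist.splitlines():
        if line.startswith("#EXTINF:"):
            # "#EXTINF:<duration>,<optional title>"
            duration = float(line[len("#EXTINF:"):].split(",")[0])
        elif line and not line.startswith("#"):
            segments.append((line, duration))
    return segments

print(parse_segments(PLAYLIST))
# [('segment0.ts', 6.0), ('segment1.ts', 6.0)]
```

A player fetches these segments in order over plain HTTP, which is what lets HLS ride on ordinary web infrastructure.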