Video: Examining the OTT Technology Stack

This video looks at the whole streaming stack, asking what's here now, what trends are coming to the fore, and how things will be done better in the future. Whatever part of the stack you're optimising, it's vital to have a way to measure the viewer's QoE (Quality of Experience). In most workflows, a lot of work goes into implementing redundancy so that the viewer sees no impact despite problems happening upstream.

The Streaming Video Alliance’s Jason Thibeault digs deeper with Harmonic’s Thierry Fautier, Brenton Ough from Touchstream, SSIMWAVE’s Hojatollah Yeganeh and Damien Lucas from Ateme.

Talking about codecs, Thierry makes the point that only 7% of devices currently support AV1 and, with 10 billion devices in the world supporting AVC, he sees a lot of benefit in continuing to optimise AVC rather than waiting for VVC support to become commonplace. When asked to identify trends in the marketplace, the panel points to the move to the cloud as a big influence, driving not only the ability to scale but also the functions themselves. Gone are the days, Brenton says, when vendors ‘lift and shift’ into the cloud. Rather, products are becoming cloud-native, a vital step towards functions and products which take full advantage of the cloud, such as being able to swap the order of functions in a workflow. Just-in-time packaging is cited as one example.

Examining the OTT Technology Stack from Streaming Video Alliance on Vimeo.

Other changes are that server-side ad insertion (SSAI) works a lot better in the cloud, and sub-partitioning of viewers, where you deliver different ads to different people, is more practical. Real-time access to CDN data, giving near-immediate feedback into your streaming process, is another game-changer that is increasingly available.

Open Caching is discussed on the panel as a vital step forward and one of many areas where standardisation is desperately needed. ISPs are fed up, we hear, with each service bringing its own caching box, and it’s time that ISPs took a cloud-based approach to their infrastructure and enabled multi-use, potentially containerised, servers to ease this ‘bring your own box’ mentality and to take back control of their internal infrastructure.

HDR gets a brief mention in light of the Euro soccer championships currently on air and the Tokyo Olympics soon to come. Thierry says 38% of Euro viewership is over OTT, and HDR is increasingly common, though SDR is still in the majority. HDR is more complex than just upping the resolution and requires much more care over the screen on which it’s watched. This makes adopting HDR more difficult, which may be one reason adoption is not yet higher.

The discussion turns to uses for ‘edge’ processing, which the panel agrees is a really important part of cloud delivery, before ending with a Q&A. Processing API requests at the edge, performing SSAI and handling content blackouts are examples of where the lower-latency response of edge compute works really well in the workflow.

Watch now!
Speakers

Thierry Fautier
VP Video Strategy,
Harmonic Inc.
Damien Lucas
CTO,
Ateme
Hojatollah Yeganeh
Research Team Lead
SSIMWAVE
Brenton Ough
CEO & Co-Founder,
Touchstream
Moderator: Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: The Future of Live HDR Production

HDR has long been hailed as the best way to improve the image delivered to viewers because it packs a punch whatever the resolution. Usually combined with a wider colour gamut, it brings brighter highlights and more colours, with the ability for them to be more saturated. Whilst the technology has been in TVs for a long time now, it has continued to evolve, and it turns out that a full, top-tier production in HDR isn’t trivial, so broadcasters have been working for a number of years to understand the best way to deliver HDR material for live sports.

Leader has brought together a panel of people who have all cut their teeth implementing HDR in their own productions and ‘writing the book’ on HDR production. The conversation starts with the feeling that HDR is ‘there’ now and is used much more routinely than before, for massive shows as well as consistent weekly matches.
Pablo Garcia Soriano from CROMORAMA introduces us to light theory, talking about our eyes’ non-linear perception of brightness. This leads to a discussion of what ‘scene-referred’ vs ‘display-referred’ HDR means, i.e. whether you interpret the video as describing the brightness your display should generate or the brightness of the light going into the camera. For more on colour theory, check out this detailed video from CVP or this one from SMPTE.

Pablo finishes by explaining that when you have four different deliverables, including SDR, S-Log3, HLG and PQ, the only way to make this work, in his opinion, is by using scene-referred video.

Next to present is Prin Boon from PHABRIX, who relates his experiences in 2019 working on live football and rugby. These shows had 2160p50 HDR and 1080i25 SDR deliverables for the main BT programme and the world feed, plus feeds for third parties such as the jumbotron, VAR, BT Sport’s studio and the EPL.

2019, Prin explains, was a good year for HDR as TVs and tablets were properly available in the market and, behind the scenes, Steadicam now had compatible HDR rigs, radio links could be 10-bit and replay servers also ran in 10-bit. To produce an HDR programme, it’s important to look at all the elements: if only your main stadium cameras are HDR, you soon find that much of the programme is actually SDR-originated. It’s vital to get HDR into each camera and replay machine.

Prin found that ‘closed-loop SDR shading’ was the only workable approach that allowed them to produce a top-quality SDR product which, as Kevin Salvidge reminds us, is still the one that earns the most money. Prin explains what this looks like, but in summary, all monitoring is done in SDR even though it’s based on the HDR video.

In terms of tips and tricks, Prin warns about being careful with nomenclature, not only in your own operation but also in vendor-specified products, giving the example of ‘gain’, which can be applied either as a percentage or in dB, in either light or code space, with each permutation giving a different result. Additionally, he cautions that multiple trips to and from HDR/SDR will lead to quantisation artefacts and should be avoided when not necessary.
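To see why the light-space vs code-space distinction matters, here is a hedged sketch (it assumes a simple pure-gamma 2.4 transfer curve purely for illustration, not any specific vendor's processing) showing that the ‘same’ 2x gain lands in very different places depending on where it is applied:

```python
# Sketch: a nominal 2x gain applied in light space vs code space diverges
# because the transfer curve is non-linear. Pure gamma 2.4 is assumed here
# for simplicity; real HDR/SDR curves (HLG, PQ) behave analogously.

GAMMA = 2.4

def to_code(light):
    """Encode normalised linear light to a code value (inverse gamma)."""
    return light ** (1 / GAMMA)

def to_light(code):
    """Decode a code value back to normalised linear light."""
    return code ** GAMMA

light = 0.18                          # mid-grey in linear light
gain = 2.0                            # nominally '+6 dB' (20*log10(2) ~ 6.02)

gain_in_light = to_code(light * gain)  # gain applied to the linear light
gain_in_code = to_code(light) * gain   # gain applied to the code values

print(round(gain_in_light, 3))  # ≈ 0.653
print(round(gain_in_code, 3))   # ≈ 0.979
```

The two results differ by a large margin, which is exactly the kind of ambiguity Prin warns can creep in when ‘gain’ is not precisely specified.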
The last presentation is from Chris Seeger and Michael Drazin of NBC Universal, who talk about the upcoming Tokyo Olympics, where they’re taking the view that SDR should look the ‘same’ as HDR. To this end, they’ve done a lot of work creating LUTs (Look-Up Tables) which allow conversion between formats. Created in collaboration with the BBC and other organisations, these LUTs are now being made available to the industry at large.

They use HLG as their interchange format, with camera inputs being scene-referred but delivery to the home being display-referred PQ. They explain that this actually allows them to maintain more than 1,000 nits of HDR detail. Their shaders work in HDR, unlike the UK-based workflow discussed earlier. NBC found that the HDR and SDR out of the CCU didn’t match, so the HDR is converted to SDR using the NBC LUTs. They caution to watch out for the different primaries of BT.709 and BT.2020: some software doesn’t change the primaries, and the colours are therefore shifted.
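To illustrate the primaries issue, here is a hedged sketch (not NBC’s LUTs; it uses the standard linear-light conversion matrix from ITU-R BT.2087, with transfer-function handling omitted for clarity) showing that BT.709 and BT.2020 RGB values are not interchangeable:

```python
# Sketch: converting linear-light RGB between BT.709 and BT.2020 primaries.
# The 3x3 matrix is the standard one from ITU-R BT.2087; a real workflow
# would also linearise/re-encode the transfer function around this step.

# BT.709 RGB -> BT.2020 RGB (linear light)
M_709_TO_2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def convert(matrix, rgb):
    """Apply a 3x3 colour matrix to an (r, g, b) triple."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in matrix)

# Pure BT.709 red sits well inside the BT.2020 gamut, so reinterpreting
# the same code values without converting shifts the colour noticeably.
print(convert(M_709_TO_2020, (1.0, 0.0, 0.0)))  # ≈ (0.6274, 0.0691, 0.0164)
print(convert(M_709_TO_2020, (1.0, 1.0, 1.0)))  # white maps to white ≈ (1, 1, 1)
```

The rows of the matrix each sum to 1, which is why neutral white survives the conversion while saturated colours move.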

NBC Universal put a lot of time into creating their own objective visualisation and measurement system to fully analyse the colours of the video as part of their goal to preserve colour intent, even going as far as to create their own test card.

The video ends with an extensive Q&A session.

Watch now!
Speakers

Chris Seeger
Office of the CTO, Director, Advanced Content Production Technology
NBC Universal
Michael Drazin
Director Production Engineering and Technology,
NBC Olympics
Pablo Garcia Soriano
Colour Supervisor, Managing Director
CROMORAMA
Prinyar Boon
Product Manager, SMPTE Fellow
PHABRIX
Moderator: Ken Kerschbaumer
Editorial Director,
Sports Video Group
Kevin Salvidge
European Regional Development Manager,
Leader

Video: Bit-Rate Evaluation of Compressed HDR using SL-HDR1

HDR video can look vastly better than standard dynamic range (SDR), but much of our broadcast infrastructure is built for SDR delivery. SL-HDR1 allows you to deliver HDR over SDR transmission chains by breaking the HDR signal down into an SDR video plus enhancement metadata which describes how to reconstruct the original HDR signal. Now part of the ATSC 3.0 suite of standards, it prompts the question of whether you get better compression using SL-HDR1 or compressing HDR directly.

HDR works by changing the interpretation of the video samples. As human sight has a non-linear response to luminance, we can take the same 256 or 1,024 possible luminance values (for 8- and 10-bit video) and map them to brightness so that only a few values are spent where the eye isn’t very sensitive and many are spent where we see well. Humans perceive more detail at lower luminosity, so HDR devotes far more of the code values to that region and relatively few to high brightness, where specular highlights tend to sit. HDR therefore not only increases the dynamic range but actually provides more detail in the low-light areas than SDR.
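This allocation of code values can be seen in the PQ curve defined in SMPTE ST 2084. The sketch below (a simplified illustration of the inverse EOTF, not a production implementation) shows that roughly half of the code range is spent on the 0–100 nit region where SDR lives:

```python
import math

# Sketch: the PQ inverse EOTF (SMPTE ST 2084), mapping absolute luminance
# in cd/m^2 (nits) to a normalised code value in [0, 1].
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_code_value(nits):
    """Normalised PQ code value for a luminance between 0 and 10,000 nits."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# About half the code range describes 0-100 nits (the SDR range),
# leaving the other half for highlights all the way up to 10,000 nits.
print(round(pq_code_value(100), 3))    # ≈ 0.508
print(round(pq_code_value(1000), 3))   # ≈ 0.752
print(round(pq_code_value(10000), 3))  # 1.0
```

The steep slope at low luminance is exactly the extra shadow detail described above.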

Ciro Noronha from Cobalt has been examining the encoding question. Video encoders are agnostic to dynamic range: since HDR and SDR only define the meaning of the luminance values, the video encoder sees no difference. Yet there have been a number of papers saying that sending SL-HDR1 can result in bitrate savings over sending HDR directly. SL-HDR1 is defined in ETSI TS 103 433-1 and included in ATSC A/341, with the metadata carried using SMPTE ST 2108-1 or within the video stream using SEI messages. Ciro set out to test whether this was the case, with technology consultant Matt Goldman giving his perspective on HDR and the findings.

Ciro tested three types of 1080p BT.2020 10-bit content with the AVC and HEVC encoders set to 4:2:0, 10-bit, with a 100-frame GOP. Quality was rated using PSNR as well as two special variants of PSNR which look at deviation within the CIE colour space. The findings show that AVC encode chains benefit more from SL-HDR1 than HEVC, and it’s clear that the benefit is content-dependent. Work remains to be done to connect these results with verified subjective tests. With LCEVC and VVC, MPEG has seen that subjective assessments can show up to 10% better results than objective metrics, and PSNR is not well known for correlating well with visual improvements.
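As a reference point for the metric used, standard PSNR for 10-bit video is straightforward to compute. This is a minimal sketch (the CIE-colour-space variants used in the tests are more involved and not shown here):

```python
import math

# Sketch: PSNR between two frames of 10-bit samples (max code value 1023).
def psnr_10bit(ref, test):
    """PSNR in dB between two equal-length sequences of 10-bit samples."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")      # identical frames: no distortion
    return 10 * math.log10(1023 ** 2 / mse)

ref = [100, 512, 900, 1023]
test = [101, 511, 901, 1022]     # every sample off by one code value
print(round(psnr_10bit(ref, test), 1))  # ≈ 60.2 dB
```

A one-code-value error everywhere already scores over 60 dB, which hints at why PSNR alone can diverge from what viewers actually perceive.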

Watch now!
Speakers

Ciro Noronha
Executive Vice President of Engineering, Cobalt Digital
President, RIST Forum
Matthew Goldman Matthew Goldman
Technology Consultant