Video: The Future of Live HDR Production

HDR has long been hailed as the best way to improve the image delivered to viewers because it packs a punch whatever the resolution. Usually combined with a wider colour gamut, it brings brighter highlights and more colours, with the ability to be more saturated. Whilst the technology has been in TVs for a long time now, it’s continued to evolve, and it turns out that a full, top-tier production in HDR isn’t trivial, so broadcasters have been working for a number of years to understand the best way to deliver HDR material for live sports.

Leader has brought together a panel of people who have all cut their teeth implementing HDR in their own productions and ‘writing the book’ on HDR production. The conversation starts with the feeling that HDR has now ‘arrived’: massive shows as well as consistent weekly matches are produced in HDR far more routinely than before.

Pablo Garcia Soriano from CROMORAMA introduces us to light theory, talking about our eyes’ non-linear perception of brightness. This leads to a discussion of what ‘scene-referred’ vs ‘display-referred’ HDR means: whether you interpret the video as describing the brightness your display should generate, or the brightness of the light that went into the camera. For more on colour theory, check out this detailed video from CVP or this one from SMPTE.
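To make that distinction concrete, here is a minimal sketch (my illustration, not Pablo’s material) contrasting the two: PQ’s EOTF is display-referred, turning a code value into an absolute light output in nits, while HLG’s OETF is scene-referred, turning relative light at the camera into a code value. The constants come from BT.2100.

```python
import math

def pq_eotf(signal: float) -> float:
    """Display-referred: PQ (ST 2084) signal (0-1) -> absolute display light in nits."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = signal ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

def hlg_oetf(scene_light: float) -> float:
    """Scene-referred: relative light at the camera (0-1) -> HLG signal (0-1)."""
    a = 0.17883277
    b, c = 1 - 4 * a, 0.5 - a * math.log(4 * a)
    if scene_light <= 1 / 12:
        return math.sqrt(3 * scene_light)
    return a * math.log(12 * scene_light - b) + c

print(pq_eotf(0.58))   # ~201 nits: PQ pins the code value to display brightness
print(hlg_oetf(0.18))  # ~0.67: HLG describes the scene; the display decides the nits
```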

Pablo finishes by explaining that when you have four different deliverables, including SDR, S-Log3, HLG and PQ, the only way to make this work, in his opinion, is by using scene-referred video.
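As a sketch of why that helps (my own illustration using Sony’s published S-Log3 formula and the BT.2100/BT.709 curves, not Pablo’s actual workflow), each deliverable becomes just a different encoding of the same scene-referred light:

```python
import math

def hlg_oetf(e):  # BT.2100 HLG: scene light (0-1) -> signal
    a = 0.17883277
    b, c = 1 - 4 * a, 0.5 - a * math.log(4 * a)
    return math.sqrt(3 * e) if e <= 1 / 12 else a * math.log(12 * e - b) + c

def slog3_oetf(x):  # Sony S-Log3: scene reflectance -> signal
    if x >= 0.01125:
        return (420 + math.log10((x + 0.01) / 0.19) * 261.5) / 1023
    return (x * (171.2102946929 - 95) / 0.01125 + 95) / 1023

def bt709_oetf(e):  # BT.709: scene light (0-1) -> SDR signal
    return 4.5 * e if e < 0.018 else 1.099 * e ** 0.45 - 0.099

scene_grey = 0.18  # one scene-referred value: 18% reference grey
for name, oetf in [("HLG", hlg_oetf), ("S-Log3", slog3_oetf), ("SDR/709", bt709_oetf)]:
    print(f"{name:8s}{oetf(scene_grey):.3f}")
# A display-referred PQ deliverable needs one extra, well-defined tone-mapping
# step from the same scene light, rather than a bespoke conversion per format.
```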

Next to present is Prin Boon from PHABRIX, who relates his experiences in 2019 working on live football and rugby. These shows had 2160p50 HDR and 1080i25 SDR deliverables for the main BT programme and the world feed, plus feeds for third parties like the jumbotron, VAR, BT Sport’s studio and the EPL.

2019, Prin explains, was a good year for HDR: TVs and tablets were properly available in the market and, behind the scenes, Steadicam now had compatible HDR rigs, radio links could now be 10-bit, and replay servers ran in 10-bit as well. In order to produce an HDR programme, it’s important to look at all the elements: if only your main stadium cameras are HDR, you soon find that much of the programme is actually SDR-originated. It’s vital to get HDR into each camera and replay machine.

Prin found that ‘closed-loop SDR shading’ was the only workable approach that allowed them to produce a top-quality SDR product which, as Kevin Salvidge reminds us, is still the one that earns the most money. Prin explains what this looks like, but in summary, all monitoring is done in SDR even though it’s based on the HDR video.

In terms of tips and tricks, Prin warns about being careful with nomenclature, not only in your own operation but also in vendor-specified products, giving the example of ‘gain’, which can be applied either as a percentage or in dB, in either light or code space, with every permutation giving a different result. Additionally, he cautions that multiple trips to and from HDR/SDR will lead to quantisation artefacts and should be avoided when not necessary.
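A toy example (mine, not Prin’s) of how many permutations lurk inside the single word ‘gain’: the same nominal gain gives quite different pictures depending on whether it’s specified in dB or percent and applied to linear light or to the gamma-encoded code value.

```python
def db_to_ratio(db: float) -> float:
    return 10 ** (db / 20)   # the usual video/voltage convention: 6 dB ~ 2x

def bt709_oetf(e: float) -> float:  # light -> code value (BT.709 gamma)
    return 4.5 * e if e < 0.018 else 1.099 * e ** 0.45 - 0.099

light = 0.18                 # scene-linear 18% grey
code = bt709_oetf(light)     # ~0.41

gain = db_to_ratio(6)        # ~1.995; a '200%' gain would be exactly 2.0
print(bt709_oetf(light * gain))  # gain applied in LIGHT, then encoded -> ~0.59
print(code * gain)               # the same gain applied to the CODE value -> ~0.82
# Every HDR<->SDR round trip also re-quantises (e.g. to 10-bit), so repeated,
# unnecessary conversions accumulate rounding errors - Prin's second warning.
```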

The last presentation is from Chris Seeger and Michael Drazin of NBC Universal, who talk about the upcoming Tokyo Olympics, where they’re taking the view that SDR should look the ‘same’ as HDR. To this end, they’ve done a lot of work creating LUTs (Look-Up Tables) which allow conversion between formats. Created in collaboration with the BBC and other organisations, these LUTs are now being made available to the industry at large.

They use HLG as their interchange format, with camera inputs being scene-referred but delivery to the home being display-referred PQ. They explain that this actually allows them to maintain more than 1,000 nits of HDR detail. Their shaders work in HDR, unlike the UK-based work discussed earlier. NBC found that the HDR and SDR outputs of the CCU didn’t match, so the HDR is converted to SDR using the NBC LUTs. They caution to watch out for the different primaries of BT.709 and BT.2020: some software doesn’t convert the primaries, and the colours are therefore shifted.
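To illustrate that last pitfall, here is a short sketch (my own, not one of the NBC LUTs): converting linear RGB between BT.709 and BT.2020 primaries requires a 3×3 matrix (the one below is from ITU-R BT.2087), and software that converts the transfer function while skipping this matrix is what produces the colour shift.

```python
# ITU-R BT.2087 matrix: linear BT.709 RGB -> linear BT.2020 RGB
M_709_TO_2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def bt709_to_bt2020(rgb):
    return [sum(m * c for m, c in zip(row, rgb)) for row in M_709_TO_2020]

# A pure BT.709 red is NOT a pure BT.2020 red:
print(bt709_to_bt2020([1.0, 0.0, 0.0]))  # [0.6274, 0.0691, 0.0164]
# Pass the BT.709 values through unmatrixed and a BT.2020 display will show
# a noticeably more saturated, shifted red - the error NBC warn about.
```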

NBC Universal put a lot of time into creating their own objective visualisation and measurement system to fully analyse the colours of the video as part of their goal to preserve colour intent, even going as far as to create their own test card.

The video ends with an extensive Q&A session.

Watch now!
Speakers

Chris Seeger
Office of the CTO, Director, Advanced Content Production Technology
NBC Universal
Michael Drazin
Director Production Engineering and Technology,
NBC Olympics
Pablo Garcia Soriano
Colour Supervisor, Managing Director
CROMORAMA
Prinyar Boon
Product Manager, SMPTE Fellow
PHABRIX
Moderator: Ken Kerschbaumer
Editorial Director,
Sports Video Group
Kevin Salvidge
European Regional Development Manager,
Leader

Video: It’s not football. It’s LaLiga: How Spain’s top-flight uses graphics and data for fan engagement

Commentary is transformative to any sport: it lets casual viewers understand the significance of what happens in the game, while dyed-in-the-wool fans also benefit from the facts and figures the commentators bring up. Increasingly, commentators have had those facts and figures at their fingertips to weave seamlessly into the narrative. Now the amount of data available is such that companies like LaLiga are looking for other ways to enhance the viewing experience by inserting stats into the on-screen graphics.

Roger Brosel, Head of Content & Programming at LaLiga, explains that Mediacoach is a Spanish company that provides match-analysis tools to teams and their coaches. But just as Formula One’s stats, collected to help team engineers, are now used on-screen in the broadcast, LaLiga realised they could be using this data during the live game.



wTVision’s Willem van Breukelen discusses how they integrated Mediacoach data so they can show player stats and line-ups. When there’s a corner, they can show where, statistically speaking, the ball is likely to end up, and similar facts.

Roger continues that the value for LaLiga in showing these stats is to build on the entertainment proposition. Adding an informative layer to the game adds to the enjoyment, helping people of all levels learn more about the game and the players; it also helps editorially, telling the story of the game.

Willem explains that they can easily take LSM clips from operators and edit them during the game, adding factual graphics on top. These can be immediately re-used or offered on to broadcasters to incorporate into their coverage or social media feeds. After the broadcasters’ exclusivity window, LaLiga can then publish those, or similar, feeds during the week as part of their push to keep LaLiga relevant when there are no games and to extend interest in the weekend’s activities. Doing this has shown a clear increase in social media engagement.

It’s important, cautions Roger, to keep a balance between the on-screen stats and the pure sport underneath, whose story can be overwhelmed if viewers are distracted by too many numbers. This is why the graphics team also have editorial understanding and know the game of football well.

Watch now!
Speakers

Roger Brosel
Head of Content and Programming,
LaLiga
Willem van Breukelen
LaLiga Graphics Lead
wTVision

Video: Football Production Technology: The Verdict

Football coverage of the main game is always advancing, but this year there have been big changes in production as well as a continued drive to bring second screens mainstream. This panel covers the state of the art of football production, bringing together Mark Dennis of Sunset+Vine, Emili Planas from Mediapro and Tim Achberger from Sportcast, moderated by Sky Germany’s Alessandro Reitano for the SVG Europe Football Summit 2021.

The first topic discussed is the use of automation to drive highlights packages. Mark from S+V feels that for the tier-1 shows they do, human curation is still better, but recognises that the creation of secondary and tertiary video from the event could benefit from AI packages. In fact, Mediapro is doing just this, providing a file-based clips package while the match is ongoing. This helps broadcasters use clips more quickly and also avoids post-match linear playouts. Tim suggests that AI has a role to play when dealing with 26 cameras, orchestrating the inputs and outputs of social media clips as well as providing specialised feeds. Sportcast are also using file delivery to facilitate secondary video streams during the match.



Answering the question “What’s missing from the industry?”, Mark asks how they can get more data and then how they can show all of it. His point is that there are still many opportunities to use data, like BT Sport’s current ability to show the speed of players. He feels this works best on the second screen, but also sees a place for increasing the data available to fans in the stadium. Emili wants better data-driven content creation tools and ways to identify which data is relevant. Tim agrees that data is important and, in common with Emili, says that the data feeds provide the basis of much of the AI workflows’ ability to classify and understand clips. He sees this as an important part of filtering through the 26 cameras to find the ones people actually want to see.

Alessandro explains that he feels the focus is moving from the main 90 minutes to the surrounding storylines. Not in a way that detracts from the main game, but in a way that shows production is taking the pre- and post-match stories seriously and harnessing technology to exploit the many avenues available to tell those stories and show footage that otherwise wouldn’t have space to be seen.

The discussion turns to drones and other special camera systems, asking how they fit in. Tim says that drones have been seen as a good way to differentiate your product and, without Covid restrictions, could be further exploited. He feels that special cameras should be used more in post and secondary footage, wondering if there could be two world feeds: one with a more traditional ‘Camera 1’ approach and another which, much more progressively, includes a lot of newer camera types. Emili follows on by talking about Mediapro’s ‘Cinecam’, which uses a Sony Venice camera to switch between normal Steadicam footage during the match and a shallow depth-of-field, DSLR-style look post-match, giving the celebrations a different, more cinematic feel with the focus leading the viewer to the action.

The panel finishes by discussing the role of 5G. Emili sees it as a benefit to production and a way to increase consumer viewing time. He sees opportunities for 5G to replace satellite and to help move production into the cloud for tier-2 and tier-3 sports. Viewers at home may be able to watch matches in better quality, and in stadiums the plan is to offer data-enriched services to fans so they can analyse what’s going on and have a better experience than at home. Mark at S+V sees network slicing as the key technology, giving production the confidence that they will have the bandwidth they need on the day. 5G will reduce costs, and he hopes it may enhance remote production for staff at home whose internet isn’t great quality, bringing more control and assurance to their connectivity.

Watch now!
Speakers

Tim Achberger
Head of Innovation & Technology,
Sportcast
Emili Planas
CTO and Operations Manager
Mediapro
Mark Dennis
Director of Technical Operations
Sunset+Vine
Moderator: Alessandro Reitano
SVP of Sports Production,
Sky Germany

Video: FOX – Uncompressed live sports in the cloud

Is uncompressed video in the cloud, with just six frames of latency to get there and back, ready for production? WebRTC manages sub-second streaming in one direction and can even deliver AV1 in real time. The key to getting down to a 100ms round trip is to move to millisecond encoding and to use uncompressed video in the cloud. This video shows how it can be done.

Fox has a clear direction to move into the cloud, and last year joined AWS to explain how they’ve put their delivery distribution into the cloud, remuxing feeds for ATSC transmitters, satellite uplinks and cable headends, and encoding for internet delivery. In this video, Fox’s Joel Williams and AWS’s Evan Statton explain their work together making this a reality. Joel explains that latency is not a very hot topic for distribution, as there are many distribution delays; the focus has been on getting the contribution feeds into playout and MCR monitoring quickly. After all, when people are counting down to an ad break, it needs to roll exactly on zero.

Evan explains the approach AWS has taken to solving this latency problem, starting with a consideration of SMPTE’s ST 2110 in the cloud. ST 2110 typically has video flows of at least 1 Gbps and, when implemented on premises, is usually built on a dedicated network with very strict timing. Cloud datacentres aren’t like that, and Evan demonstrates this by showing how, across 8 video streams, there are video drops of several seconds, which is clearly not acceptable. Amazon, however, has a product called ‘Scalable Reliable Datagram’ (SRD) which is aimed at moving high-bitrate data through their cloud. Using a very small retransmission buffer, it’s able to use multiple paths across the network to deliver uncompressed video in real time. Keeping the retransmission buffer very small allows just enough healing to redeliver missing packets within the 16.7ms it takes to deliver a frame of 60fps video.
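Some back-of-the-envelope arithmetic (mine, from the figures quoted in the talk) shows how tight that window is: every lost packet has to be detected, re-requested and redelivered within a single frame period.

```python
fps = 60
frame_time_ms = 1000 / fps                 # ~16.7 ms per frame at 60 fps
bitrate_gbps = 1.5                          # ST 2110-20 flows run ~1 Gbps and up
bits_per_frame = bitrate_gbps * 1e9 / fps
packets_per_frame = bits_per_frame / (1500 * 8)  # assuming ~1500-byte datagrams

print(f"{frame_time_ms:.1f} ms per frame")            # 16.7 ms
print(f"{packets_per_frame:,.0f} packets per frame")  # ~2,083
# Hence SRD's tiny retransmission buffer and its spreading of packets across
# many parallel paths: there is no time for multi-frame retry windows.
```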

On top of SRD, AWS have introduced CDI, the Cloud Digital Interface, which describes uncompressed video flows in a way already familiar to software developers. This ‘Audio Video Metadata’ layer handles flows in the same way as 2110, for instance keeping essences separate, and Evan says this has helped vendors react favourably to the new technology. Instead of using UDP, vendors can use SRD with CDI, which gives them not only familiar video data structures but, since SRD is implemented in the Nitro network card, also hides packet processing from the application itself.

The final piece of the puzzle is keeping the journey into and out of the cloud low latency. This is done using JPEG XS, which has an encoding time of a few milliseconds. Rather than using RIST, for instance, to protect this on the way into the cloud, Fox is testing ST 2022-7. 2022-7 takes in two identical streams, typically on two network interfaces, so the receiver ends up with two copies of each packet; where one gets lost, the other is still available. This gives a path redundancy which a single stream can never offer. Overall, the test with Fox’s Arizona-based Technology Center is shown in the video to have only 6 frames of latency for the round trip. Assuming they used a California-based AWS data centre, the ping time may have been as low as two frames, leaving four frames for 2022-7 buffers, XS encoding and uncompressed processing in the cloud.
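For the curious, a minimal sketch of the 2022-7 receive side (illustrative only; real implementations use a bounded buffer and handle RTP sequence-number wrap and reordering): the receiver keeps the first copy of each packet to arrive from either path and discards the duplicate.

```python
from typing import Iterable, Iterator, Tuple

def merge_st2022_7(arrivals: Iterable[Tuple[int, bytes]]) -> Iterator[Tuple[int, bytes]]:
    """Seamless protection switching: 'arrivals' is the interleaved arrival
    order of (rtp_sequence_number, payload) from BOTH network paths."""
    seen = set()  # real receivers bound this window; unbounded here for clarity
    for seq, payload in arrivals:
        if seq not in seen:      # first copy wins, from whichever path
            seen.add(seq)
            yield seq, payload

# Path A drops packet 2 but path B delivers it, so the output is unbroken:
arrivals = [(1, b"a"), (1, b"a"), (2, b"b"), (3, b"c"), (3, b"c")]
print([seq for seq, _ in merge_st2022_7(arrivals)])  # [1, 2, 3]
```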

Watch now!
Speakers

Joel Williams
VP of Architecture & Engineering,
Fox Corporation
Evan Statton
Principal Architect, Media & Entertainment,
AWS