Video: 8K Use Cases

Is there really a reason to move to 8K? Within the industry, UHD/4K continues to gain traction but is far from widespread, having only recently moved out of the ‘trials’ phase and into limited productions such as high-profile sports events and some early adopters. Given we’re at such an early phase, what are the drivers for 8K technology and who’s supporting 8K adoption?

Chris Chinnock from the 8K Association is here to explain these use cases. To encourage the technology and help broadcasters and vendors as they test 8K workflows, Chris explains that the association is working from several angles, such as documenting encoders and decoders, creating 8K content and supporting AI approaches. All these activities fall under the association’s umbrella of educating both professionals and consumers about 8K, encouraging use of the 8K ecosystem, creating a consumer-facing certification system and promoting 8K in a wide number of ways.

It’s important that AI is part of the approach as this, whether as artificial intelligence, machine learning or neural networks, is an increasingly key part of codecs’ approach to compression. Super-resolution has shown that machine learning does a better job of up- and down-conversion than standard mathematical approaches. So whether using ML for pre- and post-processing or to replace a traditional algorithm, pretty much every stage of video manipulation is currently being considered for enhancement or replacement with AI/ML. The 8K Association is supporting this work because these approaches may prove essential for building 8K distribution platforms that are feasible with the available technology.

This brings us back to why we even want 8K. Taking a step back from Chris’s slides, we see the overall picture, one we were reminded of in 2020 with MPEG’s recent batch of three new codecs: not everyone has the same use case, whether in or out of the media & entertainment industry. Chris mentions LCEVC as a good near-term codec that can help compress 8K. LCEVC was built not only to deliver great video for broadcast and streaming, but also with a view to keeping complexity low so it could go into old set-top boxes or, for example, body cameras. It allows devices without much processing power to process higher-resolution video than ever anticipated, which is useful within media and entertainment but can also be very useful in many other industries.

Chris takes us through medical applications, large-format displays in AV, simulation uses and general corporate use cases such as boardrooms and impactful displays around a building. All of these use 8K video on bespoke displays, meaning they avoid the problem M&E has with mass distribution of high-resolution content.

The use cases for film & TV are customer-facing, such as improved picture quality (particularly when paired with high frame rate, HDR and other technologies), but also behind the scenes, where capturing in 8K allows selecting a UHD/HD part of the picture, offers higher zoom and promises a reduction in the number of camera positions. This is best leveraged at major sporting events and will see a lot of use at the Tokyo Olympics.

Chris is good enough to acknowledge there are many challenges with enabling 8K workflows at the moment, and also looks at the difficulties of distribution. Without significant investment in codecs, he says that satellite and OTA are not obvious candidates for delivering 8K, with no current path to delivery over DVB or ATSC 3.0 without use of the internet. The internet, whether as part of hybrid broadcast or pure streaming, is seen as the best vehicle for 8K, but 5G, Chris explains, is no silver bullet for enabling widespread 8K availability. As we’ve seen in other videos, 5G is on its way but covers a massive range of frequencies, only the highest of which, known as millimetre-wave, will actually deliver gigabit bandwidths. The initial 5G offerings are on existing frequencies and some new ones up to 6GHz, which do provide higher bandwidth but whose per-user throughput could easily come down as adoption matures. Chris concludes that, for the masses, 8K delivery to the home may still be best served by fibre.

The complementary element to bandwidth provision is codec availability, where LCEVC, AV1 and China’s AVS3 can already be applied to 8K distribution, with codecs like VVC and EVC becoming available as vendors complete their implementations and bring them to market.

Watch now!
Speakers

Chris Chinnock
Executive Director,
8K Association

Video: AES67 Over Wide Area Networks


AES67 is a widely adopted standard for moving PCM audio from place to place. Being a standard, it’s ideal for connecting together equipment from different vendors, and it delivers almost-zero-latency, lossless audio. This video looks at use cases for moving AES67 from its traditional home on a company’s LAN to the WAN.

Discovery’s Eurosport Technology Transformation (ETT) project is a great example of the compelling use case for moving to operations over the WAN. Eurosport’s Olivier Chambin explains that the idea behind the project is to centralise all the processing technology needed for their productions spread across Europe feeding their 60 playout channels.

Control surfaces and some interface equipment are still necessary in the European production offices and commentary points throughout Europe, but the processing is done in two data centres, one in the Netherlands, the other in the UK. This means audio does need to travel between countries over Discovery’s dual MPLS WAN, using IGMPv3 multicast with SSM (source-specific multicast).
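
To make source-specific multicast a little more concrete, here is a minimal sketch of what an SSM join looks like at the socket level. It assumes Linux (the fallback option value 39 and the field order of `struct ip_mreq_source` are Linux-specific and differ on other platforms), and all addresses and the port are illustrative, not Eurosport’s.

```python
import socket

# Illustrative addresses only.
SOURCE_IP = "192.0.2.10"   # the one sender we want traffic from
GROUP_IP = "232.1.1.1"     # 232/8 is the IPv4 SSM address range
LOCAL_IP = "0.0.0.0"       # let the OS choose the interface
PORT = 5004                # a common RTP port for AES67 streams

# Linux option value; Python exposes the constant on some builds only.
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

def mreq_source(group: str, iface: str, source: str) -> bytes:
    """Pack a Linux struct ip_mreq_source (multiaddr, interface, source)."""
    return (socket.inet_aton(group)
            + socket.inet_aton(iface)
            + socket.inet_aton(source))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((LOCAL_IP, PORT))
try:
    # Ask the network for GROUP_IP traffic *only* from SOURCE_IP,
    # which is what distinguishes an IGMPv3 SSM join from IGMPv2.
    sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                    mreq_source(GROUP_IP, LOCAL_IP, SOURCE_IP))
except OSError as exc:
    # Joining can fail on hosts without multicast routing; a real
    # deployment runs this on the media network.
    print(f"SSM join failed: {exc}")
```

Because the receiver names the source as well as the group, the network can prune traffic from any other sender, which matters when thousands of streams share a WAN.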

From a video perspective, the ETT project has adopted SMPTE ST 2110 for all essences, with NMOS control. Over the WAN, video is sent as JPEG XS, but all audio links are ST 2110-30 with ST 2022-7 redundancy, well over 10,000 audio streams in total. Timing is done using PTP-aware switches and locally GNSS-derived PTP, with unicast PTP over the WAN as a fallback. For more on PTP over WAN, have a look at this RTS webinar and this update from Meinberg’s Daniel Boldt.

Bolstering the push for standards such as AES67 is self-confessed ‘audioholic’ Anthony P. Kuzub from Canada’s CBC. Chair of the local AES section, he makes the point that broadcast workflows have long used AES standards to ensure vendor interoperability, from microphones to analogue connectors, from grounding to MADI (AES10). This is why AES67 is important: it will ensure that the next generation of equipment can also interoperate.

Surrounding these two case studies is a presentation from Nicolas Sturmel all about the AES SC-02-12-M working group, which aims to define best practices for using AES67 over the WAN. The key issue is that AES67 was written expecting short links on a private network that you completely control. Moving to a WAN or the internet, with long-distance links on which your bandwidth or choice of protocols is limited, can make AES67 perform badly if you don’t follow the best practices.

To start with, Nicolas urges anyone to check whether they actually need AES67 over the WAN. Only if you need precise timing (for lip-sync, for example) with PCM quality and latencies from 250 milliseconds down to as little as 5 milliseconds do you really need AES67 rather than other protocols such as ACIP, he explains. The problem is that any ping on the internet, even to something fairly close, can easily take 16 to 40ms for the round trip. A 16ms round trip guarantees at least 8ms of one-way delay, but any one packet could take as long as 20ms; that spread is known as Packet Delay Variation (PDV).
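
The arithmetic above can be sketched in a few lines. The round-trip figures below are invented for illustration, sitting in the 16–40ms range the talk mentions; halving them gives one-way estimates, and the spread between best and worst case is the PDV a receive buffer has to absorb.

```python
# Toy round-trip times (ms), as you might see pinging a nearby host.
rtt_samples_ms = [16.2, 18.9, 40.1, 17.4, 22.8, 16.5, 31.0, 19.2]

# A one-way trip can never beat half the best round trip, so
# min(RTT)/2 is the floor on the delay every packet experiences.
one_way_floor = min(rtt_samples_ms) / 2

# The slowest round trip gives the worst-case one-way estimate.
one_way_worst = max(rtt_samples_ms) / 2

# Packet Delay Variation: the spread the receive buffer must cover.
pdv = one_way_worst - one_way_floor

print(f"guaranteed delay ~ {one_way_floor:.1f} ms")
print(f"worst case       ~ {one_way_worst:.1f} ms")
print(f"PDV              ~ {pdv:.1f} ms")
```

Since audio must wait for the latest packet in the buffer, it is the PDV, not the average delay, that sets how small your latency target can realistically be.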

Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links, where the delay in one direction may differ from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over WAN can be done and is a way to deliver a service, but using a GPS receiver at each location, as Eurosport does, is a much better solution, hampered only by cost and one’s ability to see enough of the sky.
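
To see why asymmetric paths hurt PTP, here is a sketch of the standard delay-request exchange maths with made-up numbers. PTP assumes the forward and reverse paths take the same time; when they don’t, half the asymmetry lands directly in the clock offset estimate.

```python
def ptp_estimates(t1, t2, t3, t4):
    """Standard PTP delay-request maths (all times in ms).

    t1: master sends Sync       t2: slave receives Sync
    t3: slave sends Delay_Req   t4: master receives Delay_Req
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset = ((t2 - t1) - (t4 - t3)) / 2  # assumes symmetric paths!
    return mean_path_delay, offset

# Toy WAN link: the clocks actually agree (true offset = 0), but the
# forward path takes 10 ms and the reverse path only 4 ms.
forward, reverse = 10.0, 4.0
t1 = 0.0
t2 = t1 + forward
t3 = 20.0
t4 = t3 + reverse

delay, offset = ptp_estimates(t1, t2, t3, t4)
print(f"estimated delay  = {delay} ms")   # 7.0: the average of the paths
print(f"estimated offset = {offset} ms")  # 3.0: pure error from asymmetry
```

The 3ms ‘offset’ here is entirely an artefact of the 6ms path asymmetry, which is why a local GNSS reference at each site beats PTP carried over an uncontrolled WAN.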

The internet can lose packets. Given a few hours, the internet will nearly always lose packets. To get around this problem, Nicolas looks at using FEC, whereby you constantly send redundant data. FEC can send up to around 25% extra data so that if any is lost, the extra information can be used to reconstruct the missing values and make the stream whole. Whilst this is a solid approach, computing the FEC adds delay and the extra data adds a fixed uplift on your bandwidth need. For circuits that have very few issues this can seem wasteful, but a fixed percentage can also be advantageous where a predictable bitrate is more important. Nicolas also highlights that RIST, SRT and ST 2022-7 are other methods that can work well, and he talks about these at greater length in his talk with Andreas Hildebrand.
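
As a toy illustration of the principle (a simple single-parity scheme in the spirit of RFC 2733 row FEC, not the specific scheme discussed in the talk): XOR-ing a block of four packets produces one parity packet, a 25% overhead, and any single lost packet in the block can be rebuilt from the survivors.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity(block):
    """XOR parity over a block of equal-length packets."""
    return reduce(xor_bytes, block)

def recover(block_with_one_loss, parity_pkt):
    """Rebuild the single missing packet (marked as None)."""
    present = [p for p in block_with_one_loss if p is not None]
    return reduce(xor_bytes, present, parity_pkt)

# Four 4-byte 'audio' packets -> one parity packet = 25% overhead.
pkts = [b"\x01\x02\x03\x04", b"\x05\x06\x07\x08",
        b"\x09\x0a\x0b\x0c", b"\x0d\x0e\x0f\x10"]
p = parity(pkts)

lost = pkts.copy()
lost[2] = None                # the network drops packet 2
rebuilt = recover(lost, p)
assert rebuilt == pkts[2]     # the stream is made whole again
```

The delay cost is visible here too: the receiver can only run `recover` once the whole block plus parity has arrived, so bigger blocks mean less overhead but more latency.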

The video concludes with a Q&A.

Watch now!
Speakers

Nicolas Sturmel
Product Manager – Senior Technologist,
Merging Technologies
Anthony P. Kuzub
Senior Systems Designer,
CBC/Radio Canada
Olivier Chambin
Audio Broadcast Engineer, AoIP and Voice-over-IP,
Eurosport Discovery

Video: Understanding the World of Ad Tech

Advertising has been the mainstay of TV for many years. Like it or loathe it, ad-supported VoD (AVoD) delivers free-to-watch services that open up content to a much wider range of people than would otherwise be possible, just like ad-supported broadcast TV. Even people who can afford subscriptions have a limit to the number of services they will subscribe to. Having an AVoD offering means you can draw people in, and if you also have SVoD, there’s a path to convincing them to sign up.

To look at where ad tech is today and what problems still exist, Streaming Media contributing editor Nadine Krefetz has brought together Byron Saltysiak from WarnerMedia, Verizon Media’s Roy Firestone, CBS Interactive’s Jarred Wilichinsky and Newsy’s Tony Brown to share their daily experience of working with OTT ad tech.

Nadine is quick to ask the panel what they feel is the weakest link in ad tech. ‘Scaling up’, answered Jarred, who’s seen from massive events how quickly parts of the ad ecosystem fail when millions of people need an ad break at the same time. Byron adds that with the demise of Flash came the loss of an abstraction layer: previously, as long as you got Flash right, it would work on all platforms; now, each platform has to be targeted directly, leading to a lot of complexity. Lastly, redundancy came up as a weakness. Linked to Jarred’s point about the inability to scale easily, the panel’s consensus is that they are far off broadcast’s five-nines uptime targets. In some ways this is to be expected, as IT is a more fragmented, faster-moving market than consumer TVs, making it all the harder to keep up with the changing patterns.

A number of parts of the conversation centred on ad tech as an ecosystem, which is both a benefit and a drawback. Working in an ecosystem means that however much a streaming provider invests in bolstering their own service to cope with millions upon millions of requests, they simply can’t control what the rest of the ecosystem does, and if 2 million people all go to a break at once, it doesn’t take much for an ad provider’s servers to collapse under the weight. On the other hand, points out Byron, the drawback is also a strength: streaming has an advantage of scale which broadcasters don’t. Roy’s service delivered one hundred thousand matches last year; Byron asks how many linear channels you’d need to cover that many.

Speed is a problem given that the ad auction needs to happen in the twenty seconds or so leading up to the ad being shown to the viewer. With so many players, things can go wrong, starting simply with slow responses to requests, but also with ad lengths. Ad breaks are built around 15-second segments, so it’s difficult when companies want 6- or 11-second ads, and it’s particularly bad when five 6-second ads are scheduled for a break: “no-one wants to see that.”

Jarred laments that, despite the standards and guidelines available, “it’s still the wild west” when it comes to ad quality and loudness, with viewers bearing the brunt of these mismatched practices.

Nadine asks about privacy regulations, which are increasingly reducing the access advertisers have to viewer data. Byron points out that they do still need some way to identify a user, if only to avoid showing them the same ad all the time. It turns out that registered/subscribed users can be tracked under some regulations, so there’s a big push to have people sign up.

Other questions covered by the panel include QA processes, the need for more automation in QA, how to go about starting your own service, dealing with Roku boxes and how to deal with AVoD downloaded files which, when brought online, need to update the ad servers about which ads were watched.

Watch now!
Speakers

Tony Brown
Chief of Staff,
Newsy
Jarred Wilichinsky
SVP Global Video Monetization and Operations,
CBS Interactive
Byron Saltysiak
VP of Video and Connected Devices,
WarnerMedia
Roy Firestone
Principal Product Manager,
Verizon Media
Nadine Krefetz
Contributing Editor,
Streaming Media

Video: The Future of Live HDR Production

HDR has long been hailed as the best way to improve the image delivered to viewers because it packs a punch whatever the resolution. Usually combined with a wider colour gamut, it brings brighter highlights and more colours, with the ability for them to be more saturated. Whilst the technology has been in TVs for a long time now, it has continued to evolve, and it turns out that a full, top-tier production in HDR isn’t trivial, so broadcasters have been working for a number of years to understand the best way to deliver HDR material for live sports.

Leader has brought together a panel of people who have all cut their teeth implementing HDR in their own productions and ‘writing the book’ on HDR production. The conversation starts with the feeling that HDR has ‘arrived’: massive shows, as well as consistent weekly matches, are now much more routinely produced in HDR.

Pablo Garcia Soriano from CROMORAMA introduces us to light theory, talking about our eyes’ non-linear perception of brightness. This leads to a discussion of what ‘scene-referred’ vs ‘display-referred’ HDR means: whether you interpret the video as describing the brightness your display should generate, or the brightness of the light that entered the camera. For more on colour theory, check out this detailed video from CVP or this one from SMPTE.
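
A concrete example of a scene-referred, perceptually non-linear encoding is the HLG OETF from ITU-R BT.2100, sketched below: a square-root segment for low light (mimicking classic camera gamma) and a logarithmic segment that compresses highlights, mapping scene-linear light to a signal value.

```python
import math

# BT.2100 HLG OETF constants.
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(e: float) -> float:
    """Map normalised scene-linear light E (0..1) to HLG signal E' (0..1)."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)       # square-root 'camera gamma' segment
    return A * math.log(12 * e - B) + C  # log segment compresses highlights

# The curve's landmarks: E = 1/12 encodes to exactly 0.5, and full
# scene light encodes to (almost exactly) full signal.
print(round(hlg_oetf(1 / 12), 3))  # 0.5
print(round(hlg_oetf(1.0), 3))     # 1.0
```

Because this curve describes the light at the camera rather than any particular display, the same signal can drive displays of very different peak brightness, which is the essence of the scene-referred argument.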

Pablo finishes by explaining that when you have four different deliverables, including SDR, S-Log3, HLG and PQ, the only way to make this work, in his opinion, is by using scene-referred video.

Next to present is Prin Boon from PHABRIX, who relates his experiences in 2019 working on live football and rugby. These shows had 2160p50 HDR and 1080i25 SDR deliverables for the main BT Sport programme and the world feed, plus feeds for third parties like the jumbotron, VAR, BT Sport’s studio and the EPL.

2019, Prin explains, was a good year for HDR: TVs and tablets were properly available in the market and, behind the scenes, Steadicam now had compatible HDR rigs, radio links could be 10-bit and replay servers also ran in 10-bit. In order to produce an HDR programme, it’s important to look at all the elements; if only your main stadium cameras are HDR, you soon find that much of the programme is actually SDR-originated. It’s vital to get HDR into each camera and replay machine.

Prin found that ‘closed-loop SDR shading’ was the only workable approach that allowed them to produce a top-quality SDR product which, as Kevin Salvidge reminds us, is still the one that earns the most money. Prin explains what this looks like but, in summary, all monitoring is done in SDR even though it’s based on the HDR video.

In terms of tips and tricks, Prin warns about being careful with nomenclature, not only in your own operation but also in vendor-specified products, giving the example of ‘gain’, which can be applied either as a percentage or as dB, in either light or code space, with each permutation giving a different result. Additionally, he cautions that multiple trips to and from HDR/SDR will lead to quantisation artefacts and should be avoided when not necessary.
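
The quantisation point is easy to demonstrate. The sketch below is purely illustrative: a simple 2.4 power law stands in for the real HDR/SDR transforms, and each ‘conversion’ re-quantises the signal to 10 bits, as happens every time video crosses a format boundary.

```python
GAMMA = 2.4
LEVELS = 1023  # 10-bit full scale

def q(x: float) -> float:
    """Quantise a 0..1 signal to 10 bits."""
    return round(x * LEVELS) / LEVELS

def round_trip(code: float) -> float:
    """One 'conversion' there and back, quantising at each step."""
    linear = q(code ** GAMMA)          # convert and re-quantise
    return q(linear ** (1 / GAMMA))    # convert back and re-quantise

signal = 0.1  # a dark tone, where quantisation bites hardest
for _ in range(10):
    signal = round_trip(signal)
print(f"0.1 drifts to {signal:.4f} after 10 round trips")
```

The drift here is small and settles quickly, but with real conversions chained across a broadcast plant the errors differ per stage, which is why Prin advises avoiding unnecessary HDR/SDR round trips rather than trusting them to cancel out.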

The last presentation is from Chris Seeger and Michael Drazin of NBC Universal, who talk about the upcoming Tokyo Olympics, where they’re taking the view that SDR should look the ‘same’ as HDR. To this end, they’ve done a lot of work creating LUTs (look-up tables) which allow conversion between formats. Created in collaboration with the BBC and other organisations, these LUTs are now being made available to the industry at large.

They use HLG as their interchange format, with camera inputs being scene-referred but delivery to the home being display-referred PQ. They explain that this actually allows them to maintain more than 1,000 nits of HDR detail. Their shaders work with HDR, unlike the UK-based work discussed earlier. NBC found that the HDR and SDR out of the CCU didn’t match, so the HDR is converted to SDR using the NBC LUTs. They caution to watch out for the different primaries of BT.709 and BT.2020: some software doesn’t convert the primaries, so the colours end up shifted.
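
The primaries conversion they warn about is a small matrix multiply on linear light; the sketch below uses the widely published BT.709-to-BT.2020 matrix from ITU-R BT.2087. Applying it to gamma-encoded values, or skipping it entirely, is exactly the kind of mistake that produces the colour shift described above.

```python
# ITU-R BT.2087 matrix: linear BT.709 RGB -> linear BT.2020 RGB.
M_709_TO_2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def convert(rgb709):
    """Convert a linear BT.709 RGB triple to BT.2020 primaries."""
    return [sum(m * c for m, c in zip(row, rgb709)) for row in M_709_TO_2020]

# Pure 709 red is *not* pure 2020 red: it needs green and blue
# contributions, because 2020's red primary lies outside the 709 gamut.
print(convert([1.0, 0.0, 0.0]))  # [0.6274, 0.0691, 0.0164]
```

Note that each row sums to 1.0, so white maps to white; only the coloured values move, which is why an un-converted signal looks desaturated or hue-shifted rather than uniformly wrong.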

NBC Universal put a lot of time into creating their own objective visualisation and measurement system to be able to fully analyse the colours of the video as part of their goal to preserve colour intent even going as far as to create their own test card.

The video ends with an extensive Q&A session.

Watch now!
Speakers

Chris Seeger
Office of the CTO, Director, Advanced Content Production Technology
NBC Universal
Michael Drazin
Director Production Engineering and Technology,
NBC Olympics
Pablo Garcia Soriano
Colour Supervisor, Managing Director
CROMORAMA
Prinyar Boon
Product Manager, SMPTE Fellow
PHABRIX
Moderator: Ken Kerschbaumer
Editorial Director,
Sports Video Group
Kevin Salvidge
European Regional Development Manager,
Leader