Video: Football Production Technology: The Verdict


Football coverage of the main game is always advancing, but this year has also brought big changes in production, alongside the continued drive to bring second screens mainstream. This conversation covers the state of the art of football production, bringing together Mark Dennis of Sunset+Vine, Emili Planas from Mediapro and Tim Achberger from Sportcast in a discussion moderated by Sky Germany’s Alessandro Reitano for the SVG Europe Football Summit 2021.

The first topic discussed is the use of automation to drive highlights packages. Mark from S+V feels that for the tier 1 shows they do, human curation is still better, but recognises that the creation of secondary and tertiary video from the event could benefit from AI packages. In fact, Mediapro is doing just this, providing a file-based clips package while the match is ongoing. This helps broadcasters use clips more quickly and also avoids post-match linear playouts. Tim suggests that AI has a role to play when dealing with 26 cameras, orchestrating the inputs and outputs of social media clips as well as providing specialised feeds. Sportcast are also using file delivery to facilitate secondary video streams during the match.

Answering the question “What’s missing from the industry?”, Mark asks whether they can get more data and, if so, how they can show it all. His point is that there are still many opportunities to use data, like BT Sport’s current ability to show the speed of players. He feels this works best on the second screen, but also sees a place for increasing the data available to fans in the stadium. Emili wants better data-driven content creation tools and ways to identify which data is relevant. Tim agrees that data is important and, in common with Emili, says that the data feeds provide the basis of a lot of the AI workflows’ ability to classify and understand clips. He sees this as an important part of filtering through the 26 cameras to find the ones people actually want to see.

Alessandro explains that he feels the focus is moving from the main 90 minutes to the surrounding storylines. Not in a way that detracts from the main game, but in a way that shows production is taking the pre- and post-match stories seriously and harnessing technology to exploit the many avenues available to tell those stories and show footage that otherwise wouldn’t have space to be seen.

The discussion turns to drones and other special camera systems, asking how they fit in. Tim says that drones have been seen as a good way to differentiate your product and, without Covid restrictions, could be further exploited. Tim feels that special cameras should be used more in post and secondary footage, wondering if there could be two world feeds: one with a more traditional ‘Camera 1’ approach and another which, much more progressively, includes a lot of newer camera types. Emili follows on by talking about Mediapro’s ‘Cinecam’, which uses a Sony Venice camera to switch between normal Steadicam footage during the match and a shallow depth-of-field, DSLR-style look post-match, which gives the celebrations a different, more cinematic feel with the focus leading the viewer to the action.

The panel finishes by discussing the role of 5G. Emili sees it as a benefit to production and a way to increase consumer viewing time. He sees opportunities for 5G to replace satellite and help move production into the cloud for tier 2 and 3 sports. Viewers at home may be able to watch matches in better quality, and in stadiums the plan is to offer data-enriched services to fans so they can analyse what’s going on and have a better experience than at home. Mark at S+V sees network slicing as the key technology, giving production the confidence that they will have the bandwidth they need on the day. 5G will reduce costs, and he’s hoping it may enhance remote production for staff at home whose internet isn’t great quality, bringing more control and assuredness to their connectivity.

Watch now!
Speakers

Tim Achberger
Head of Innovation & Technology,
Sportcast
Emili Planas
CTO and Operations Manager,
Mediapro
Mark Dennis
Director of Technical Operations,
Sunset+Vine
Moderator: Alessandro Reitano
SVP of Sports Production,
Sky Germany

Video: Deep Neural Networks for Video Coding

We know AI is going to stick around. Whether it’s AI, Machine Learning, Deep Learning or by another name, it all stacks up to the same thing: we’re breaking away from fixed algorithms where one equation ‘does it all’ to a much more nuanced approach with a better result. This is true across all industries. Within the Broadcast industry, one way it can be used is in video and audio compression. Want to make an image smaller? Downsample it with a Convolutional Neural Network and it will look better than Lanczos. No surprise, then, that this is coming in full force to a compression technology near you.
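
As a flavour of what ‘downsampling with a CNN’ means in practice, here’s a minimal Python sketch of the strided convolution that is the basic building block of such a network. In a real network the kernel weights are learned from training data; the fixed blur kernel below is only a stand-in for illustration:

```python
import numpy as np

def conv_downsample(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Stride-2 'valid' convolution: slide the kernel over the image,
    stepping two pixels at a time, so the output is roughly half-size."""
    kh, kw = kernel.shape
    h, w = image.shape
    out_h = (h - kh) // 2 + 1
    out_w = (w - kw) // 2 + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[2 * i:2 * i + kh, 2 * j:2 * j + kw] * kernel)
    return out

blur = np.full((3, 3), 1.0 / 9.0)        # stand-in for learned weights
half = conv_downsample(np.ones((8, 8)), blur)
```

A trained downsampler stacks several such layers with non-linearities between them, which is where it gains the edge over a fixed filter like Lanczos.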

In this talk from Comcast’s Dan Grois, we hear about the ongoing work to super-charge the recently released VVC by replacing functional blocks with neural-network-based technologies. VVC has already achieved 40-50% improvements over HEVC. From the work Dan’s involved with, we hear that further gains look promising using neural networks.

Dan explains that deep neural networks recognise images in layers. The brain does something similar, with one area sensitive to lines and edges, another to objects, another to faces and so on. A deep neural network works in much the same way.

During the development of VVC, Dan explains, neural network techniques were considered but deemed too memory- or computationally-intensive. Six years on from the inception of VVC, these techniques are now practical and are likely to result in a VVC version 2 with further compression improvements.

Dan enumerates the tests so far, swapping out each of the functional blocks in turn: intra- and inter-frame prediction, up- and down-scaling, in-loop filtering and so on. He even shows what it would look like in the encoder. Some blocks show improvements of less than 5%, but added together there are significant gains to be had. Whilst this update to VVC is still in the early stages, it seems clear that it will provide real benefits for those that can implement these improvements, which, Dan highlights at the end, are likely to require more memory and computation than the current version of VVC. For some, this will be well worth the savings.

Watch now!
Speaker

Dan Grois
Principal Researcher,
Comcast

Video: Super Resolution: What’s the buzz and why does it matter?

“Enhance!” the captain shouts as the blurry image on the main screen becomes sharp and crisp again. This was sci-fi – and it still is – but super-resolution techniques show that it’s really not that far-fetched. Able to increase the sharpness of video, machine learning can enable upscaling from HD to UHD as well as increasing the frame rate.

Bitmovin’s Adithyan Ilangovan is here to explain the success they’ve seen with super-resolution and though he concentrates on upscaling, this is just as relevant to improving downscaling. Here are our previous articles covering super resolution.

Adithyan outlines two main enablers of super-resolution which allow it to displace traditional methods such as bicubic and Lanczos. The first is the advent of machine learning, which now has a good foundation of libraries and documentation, making it fairly accessible to a wide audience. The second is the proliferation of GPUs and, particularly for mobile devices, neural engines. Using the GPUs inside CPUs or in desktop PCIe slots allows the analysis to be done locally without transferring great amounts of video to the cloud solely for processing or identification. Furthermore, if your workflow is in the cloud, it’s now easy to rent GPUs and FPGAs to handle such workloads.

Machine learning doesn’t only allow for better upscaling on a frame-by-frame basis; it can also form a view of the whole file, or at least the whole scene. With a better understanding of the type of video it’s analysing (cartoon, sports, computer screen etc.), it can tune the upscaling algorithm to deal with it optimally.

Anime has seen a lot of tuning for super-resolution. Due to anime’s long history, there are a lot of old cartoons which are both noisy and low resolution, still enjoyed now but which would benefit from more resolution to match the screens we now routinely use.

Adithyan finishes by asking how we should best take advantage of super-resolution. Codecs such as LCEVC use it directly within the codec itself, but for systems that have pre- and post-processing around the encoder, Adithyan suggests it’s viable to consider reducing the bitrate to reduce CDN costs, knowing that, by using super-resolution in the decoder, the video quality can actually be maintained.

The video ends with a Q&A.

Watch now!
Download the slides
Speaker

Adithyan Ilangovan
Encoding Engineer,
Bitmovin

Video: IP For Broadcast, Colour Theory, AI, VR, Remote Broadcast & More


Today’s video has a wide array of salient topics from seven speakers at SMPTE Toronto’s meeting in February. Covering Uncompressed IP networking, colour theory & practice, real-time virtual studios and AI, those of us outside of Toronto can be thankful it was recorded.

Ryan Morris from Arista (starting 22m 20s) is the first guest speaker and kicks off with a thought-provoker: looking at the uncompressed bandwidths of video, we see that even 8K video at 43Gb/s is much lower than the high-end network bandwidths of the 400Gb/s switch ports available today, with 800Gb/s arriving within a couple of years. That being said, he gives us an introduction to two of the fundamental technologies enabling uncompressed IP video production: multicast and Software-Defined Networking (SDN).

Multicast, Ryan explains, is the system for efficiently distributing data from one source to many receivers. It allows a sender to send out only one stream even if there are a thousand receivers on the network; the network splits the feed at the nearest common point to each decoder. This is all worked out using the Internet Group Management Protocol (IGMP), which is commonly found in versions 2 and 3. IGMP enables routers to find out which devices are interested in which senders and allows devices to register their interest. This is all expressed through the notion of joining or leaving a multicast group. Each multicast group is assigned an IP address reserved by international agreement for this purpose; for instance, 239.100.200.1 is one such address.
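
To make the joining mechanism concrete, here’s a minimal Python sketch of a receiver asking its operating system to join that example group; the OS then sends the IGMP membership report on the device’s behalf. The port number is hypothetical:

```python
import socket

MCAST_GROUP = "239.100.200.1"  # the example address above
MCAST_PORT = 5004              # hypothetical port for the media stream

def make_membership_request(group: str) -> bytes:
    # The request is 4 bytes of group address plus 4 bytes of local
    # interface address (0.0.0.0 lets the OS pick the default interface).
    return socket.inet_aton(group) + socket.inet_aton("0.0.0.0")

def join_group(group: str = MCAST_GROUP, port: int = MCAST_PORT) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # This setsockopt triggers the IGMP join; the network then starts
    # delivering the group's traffic to this host.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

From then on the receiver simply reads UDP datagrams from the socket; leaving the group (IP_DROP_MEMBERSHIP) or closing the socket tells the network to prune the branch.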

Ryan then explores some of the pros and cons of IGMP. Like most network protocols, each element of the network makes its own decisions based on standardised rules. Though this works well for autonomy, it means that nothing has knowledge of the whole system. IGMP can’t take notice of link capacity and doesn’t know the source bandwidth; you can guess where media will flow, but it’s not deterministic. Broadcasters need more assurance of traffic flows for proper capacity planning, planned maintenance and post-incident root cause analysis.

Reasons to consider SDN over IGMP

SDN is an answer to this problem. Replacing much of IGMP, SDN takes this micro-decision-making away from the switch architecture and replaces it with decisions made looking at the whole picture. It also brings an important abstraction layer back to broadcast networks; engineers are used to seeing X-Y panels and, in an emergency, it’s this simplicity which gets things back on air quickly and effectively. With SDN doing the thinking, it’s a lot more practical to program a panel with human names like ‘Camera 1’ and allow a take button to connect it to a destination.
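
The abstraction Ryan describes can be sketched in a few lines of Python: one controller holds the whole routing picture, so a panel ‘take’ is just a name lookup plus a route update. All the names and multicast addresses below are hypothetical:

```python
# Toy model of the SDN idea, for illustration only.
SOURCES = {
    "Camera 1": "239.100.200.1",
    "Camera 2": "239.100.200.2",
}

class SdnController:
    def __init__(self, sources):
        self.sources = sources
        self.routes = {}  # destination name -> multicast group in use

    def take(self, source_name: str, destination: str) -> str:
        group = self.sources[source_name]
        # A real controller would now program the switch fabric,
        # having checked link capacity along the chosen path first.
        self.routes[destination] = group
        return group

panel = SdnController(SOURCES)
panel.take("Camera 1", "Monitor A")
```

Because the controller records every route, it can answer exactly the questions IGMP can’t: which flows cross which links, and whether a new take will fit.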

Next is Peter Armstrong from THP, who talks about colour in television (starting 40m 40s). Starting back with NTSC, Peter shows the different colour spaces from analogue through SD and then HD with Rec 709, and on to three newer spaces. For archiving, there is an XYZ colour space which can represent any colour humans can see. For digital cinema there is DCI-P3, and with UHD comes BT 2020. These latter colour spaces provide for the display of many more colours, adding to the idea of ‘better pixels’ – improving images by improving the pixels rather than just adding more.

Another ‘better pixels’ idea is HDR. Whilst BT 2020 is about Wide Colour Gamut (WCG), HDR increases the dynamic range so that each pixel can represent a brightness between, say, 0 and 1,000 nits instead of the current standard of 0 to 100. Peter outlines the HLG and PQ standards for delivering HDR. If you’re interested in a deeper dive, check out our library of articles and videos, such as this talk from Amazon Prime Video or this one from Sarnoff’s Norm Hurst.
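
The PQ curve Peter mentions is defined in SMPTE ST 2084 as a fixed mapping from absolute luminance (up to 10,000 nits) to a 0 to 1 signal value. A short Python rendering of that encoding formula:

```python
# PQ (SMPTE ST 2084) constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Map absolute luminance in nits to a 0..1 PQ signal value."""
    y = max(nits, 0.0) / 10000.0          # normalise to the 10,000-nit peak
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2
```

Note how non-linear the curve is: SDR reference white at 100 nits already lands at roughly half the signal range, leaving the upper half of the code values for highlights.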

ScreenAlign device from DSC Labs

SMPTE Fellow and founder of DSC Laboratories, David Corley (56m 50s), continues the colour theme, taking us on an enjoyable history of colour charting from the past 60 years up to the modern day. David explains how he created a colour chart in the early days, when labs were struggling to get colours correct for their new colour film stock. We see how that has developed over the years, being standardised by SMPTE. Recently, he explains, they have created a new test card for digital workflows: the camera shoots a special test card which you also have in a digital format, and in your editing suite, if you overlay that file on the video, you can colour correct the video to match. Furthermore, DSC has developed a self-illuminating physical overlay for your monitor, meaning that when you put it in front of the screen, you can adjust the display’s colour to match what you see on the chart.

Gloria Lee (78m 8s) works for Graymeta, a company whose products are based on AI and machine learning. She sets the scene by explaining how broadly our lives are already supported by AI, and within broadcast highlights benefits such as automating repetitive tasks, increasing monetisation possibilities, allowing real-time facial recognition and creating additional marketing opportunities. Gloria concludes by giving examples of each.

Cliff Lavalée talks about ‘content creation with gaming tools’ (91m 10s), explaining the virtual studio they have created at Groupe Média TFO. He explains the camera tracking and telemetry (zoom etc.) needed to ensure that three cameras can be moved around in real time with the graphics following with the correct perspective shifts. Cliff talks about the pros and cons of the space. Though the hardware limits the software’s capabilities and everything needs to stick to 60fps, he finds that the benefits, which include cost, design freedom and real-time rendering, create an overall positive. This section finishes with a talk from one of the 3D interactive set designers, who talks us through the work he’s done in the studio.

Mary Ellen Carlyle concludes the evening, talking about remote production and esports. She sets the scene by pointing to a ‘shifting landscape’ with people moving away from linear TV to online streaming. Mary discusses the streaming market as a whole, talking about Disney+ and the other competitors currently jostling for position. Reprising Gloria’s position on AI, Mary next looks further into the future, floating the idea of AI directing football matches, creating highlights packages, generating stats about the game, spotting ad insertion opportunities and more.

Famously, Netflix has said that Fortnite is one of its main competitors. And indeed, esports is a major industry unto itself, so whether watching or playing games, there is plenty of opportunity to displace Netflix. Deloitte Insights claims 40% of gamers watch esports events at least once a week, and the media rights are already worth tens and hundreds of millions and likely to continue to grow. Mary concludes by looking at the sports rights changing hands over the next few years. The thrust is that there are several high-profile rights auctions coming up and there is likely to be fervent competition which will increase prices. Some rights are likely to be taken, at least in part, by tech giants; we have already seen Amazon acquire some major sports rights.

Watch now!
Speakers

Ryan Morris
Systems Engineer,
Arista
Gloria Lee
VP, Business Development,
Graymeta Inc.
Mary Ellen Carlyle
SVP & General Manager,
Dome Productions
Cliff Lavalée
Director of LUV studio services,
Groupe Média TFO
Peter Armstrong
Video Production & Post Production Manager,
THP
David Corley
President,
DSC Labs