Video: In Stadium Production Workflow and COVID 19

As COVID-19 has wrought many changes on society, this webinar looks at how it’s changed the way live sport is produced by the in-stadium crews behind the TV shows that cover the ups and downs of our beloved teams. We know that crews have had to be creative, but what has that actually looked like, and has it all been negative?

The SMPTE Philadelphia section first invites Martin Otremsky to talk about how his work within the MLB has changed. Before the season began, Martin and his team weren’t allowed in the stadium and had very little notice of the first game for which they were allowed to prepare.

Crowd noise was a big issue. Not only was there concern that the players would find a silent stadium off-putting, but it was soon realised that swearing and tactical chat could easily be picked up by other players and the mics. Bringing back crowd noise helped mask that and allowed the teams to talk normally.

The crowd noise was run off three iPads: one running a 3-hour loop of general crowd noise and two with touch-screen buttons to trigger different types and moods of effects, covering the noises of anticipation and reaction, when the crowd would be at ease and when they would be ‘pumped’. This was run on the fly by two operators who kept it going throughout the game. The crowd noise required a fair bit of fine-tuning, including getting the in-stadium acoustics right, as the speaker system is set up on the assumption that the stands are lined with absorbent people; without a crowd, the acoustics are much more echoey.

Link to video

The COVID protections divided people into three tiers. Tier 1 covered the players and coaches, all of whom were tested every 48 hours, along with the areas where they needed to be. Tier 3 was for people who could go anywhere not designated Tier 1. Tier 2 was one or two people who were allowed in both, but under strict restrictions. As a result, the crew in Tier 3 found it hard to do a lot of maintenance and configuration in certain places, as they had to leave every time Tier 1 staff needed to be in the area.

The operation itself had been pared down from 28 to 9 people, partly by simplifying the programme itself. The ballpark often had to be flipped to accommodate another team using it as their ‘home’ stadium, which caused a lot of work: graphics had to be reworked to fit the in-stadium graphics boards, the crowd noise had to cheer for a different team and the video and production graphics had to be swapped out. Martin ends his presentation with a Q&A session.

Next up is Carl Mandell from the Philadelphia Union football/soccer team. They had been lucky enough to be at the end of a stadium technical refresh of their LED displays and their control room, moving up to 1080p60 HDR. Alas, the COVID restrictions hit just before the home opener, robbing them of the chance to use all the new equipment. Their new cameras, for instance, remained unused until right near the end of the season.

Link to video

Unlike the MLB, Carl explains, they chose not to play crowd audio in the stadium itself. They ran virtual graphics to fulfil contractual obligations and to brighten up the relatively empty stadium. Unlike MLB, they were able to have a limited crowd in the stadium for most matches.

For the crowd noise used in the broadcast programme, they used audio taken from their ten-year archive of previous games and allowed the chants and reactions to be under the control of fans.

One COVID success was moving press conferences to Zoom. Whilst the spokesperson or spokespeople were actually in the stadium, with a PTZ camera in front of them and an 85″ TV to the side, all the media were brought in over Zoom. Carl says they found this format produced much more engaging conferences, which increased their international appeal and raised viewership of the press conferences.

Watch now!
Speakers

Carl Mandell
Director, Broadcasting & Video Production
Philadelphia Union
Martin Otremsky
Director of Video Engineering,
Philadelphia Phillies

Video: Live Production Forecast: Cloudy for the Foreseeable Future

Our ability to work remotely during the pandemic is thanks to the hard work of the many people who have developed the technologies that have made it possible. Even before the pandemic struck, this work was already ongoing and gaining momentum, overcoming ever more of the challenges of working in IP both within the broadcast facility and in the cloud.

SMPTE’s Paul Briscoe moderates the discussion of these ongoing efforts to make the cloud a better place for broadcasters in this series of presentations from the SMPTE Toronto section. First up is Peter Wharton from TAG V.S., talking about ways to innovate workflows to better suit the cloud.

Peter first outlines the challenges of live cloud production: keeping latency low and signal quality high, and managing the high bandwidths needed, all while keeping a handle on costs. There is an increasing number of cloud-native solutions, but how many are truly innovating? Don’t just move workflows into the cloud, advocates Peter; rather, take this opportunity to embrace the cloud.

Working in the cloud will be built on new transport protocols like RIST and SRT, used within a modular and open architecture. Scalability is the name of the game for ‘the cloud’, but the real trick is building your workflows and technology so that you can scale during a live event.

Source: TAG V.S.

There are still obstacles to be overcome. Bandwidth for uncompressed video is one, with typical signals of up to 3Gbps uncompressed driving very high data-transfer costs. The lack of PTP in the cloud makes ST 2110 workflows difficult, as does the lack of multicast.

Tackling bandwidth, Peter looks at low-latency ways to compress video such as NDI, NDI|HX, JPEG XS and Amazon’s lossless CDI, and talks us through some of the considerations in choosing the right codec for the task in hand.

Finishing his talk, Peter asks whether this isn’t the time for a radical change. Why not rethink the entire process and embrace latency? Peter gives the example of a colour-grading workflow that has been able to move from on-prem grading on very high-spec computers to running the same, incredibly intensive, process in the cloud. The company is able to spin up thousands of CPUs in the cloud and use spot pricing to create temporary, low-cost, extremely powerful computers. This has brought waiting times for jobs down significantly and has reduced the cost of processing by an order of magnitude.

Lastly, Peter looks further into the future, examining how saturating the stadium with cameras could change the way we operate them. With 360-degree coverage of the stadium, the position of the camera can be changed virtually by AI, allowing camera operators to be remote from the stadium. Canon and Intel are already working to develop this. Whilst it may not be able to replace all camera operators, sport is the home of bleeding-edge technology. How long can it resist technology that can create any camera angle?

Source: intoPIX

Jean-Baptiste Lorent is next, from intoPIX, to explain what JPEG XS is. A new, ultra-low-latency codec, it meets the challenges of the industry’s move to IP, its increasing desire to move data rather than people, and the continuing trend for COTS servers and cloud infrastructure to be part of the real-time production chain.

As Peter covered, uncompressed data rates are very high. The Tokyo Olympics will be filmed in 8K which racks up close to 80Gbps for 120fps footage. So with JPEG XS standing for Xtra Small and Xtra Speed, it’s no surprise that this new ISO standard is being leant on to help.
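That 80Gbps figure is easy to sanity-check. Below is a quick back-of-the-envelope calculation, assuming 4:2:2 10-bit sampling and counting active pixels only (no blanking), which lands right around the number quoted for 8K at 120fps.

```python
# Rough sanity check of the "close to 80Gbps" figure for 8K 120fps footage,
# assuming 4:2:2 10-bit sampling and active picture only (no blanking).
width, height, fps = 7680, 4320, 120
bits_per_pixel = 10 * 2          # 4:2:2 -> on average 2 samples per pixel, 10 bits each
bitrate_bps = width * height * bits_per_pixel * fps
print(f"8K 4:2:2 10-bit @ {fps}fps ≈ {bitrate_bps / 1e9:.1f} Gbps")   # ≈ 79.6 Gbps
```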

Tested as visually lossless over 7 or more encode generations and with a latency of only a few lines of video, JPEG XS works well in multi-stage live workflows. Jean-Baptiste explains that it’s low in complexity and works well on FPGAs and CPUs alike.

JPEG XS can support up to 16-bit values, any chroma subsampling and any colour space. It has been standardised for carriage in MPEG transport streams, in SMPTE ST 2110 as 2110-22, over RTP (pending), within HEIF file containers and more. Worst-case bitrates are 200Mbps for 1080i, 390Mbps for 1080p60 and 1.4Gbps for 2160p60.
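To put those worst-case figures in context, here is an illustrative calculation of the compression ratios they imply against uncompressed 4:2:2 10-bit video. The 20 bits-per-pixel assumption and the neglect of blanking are simplifications.

```python
# Illustrative compression ratios for the worst-case JPEG XS bitrates quoted above,
# assuming 4:2:2 10-bit active video (20 bits/pixel) and ignoring blanking.
def uncompressed_gbps(w, h, frame_rate, bits_per_pixel=20):
    return w * h * bits_per_pixel * frame_rate / 1e9

cases = {
    "1080i (30 frames/s)": (uncompressed_gbps(1920, 1080, 30), 0.200),
    "1080p60":             (uncompressed_gbps(1920, 1080, 60), 0.390),
    "2160p60":             (uncompressed_gbps(3840, 2160, 60), 1.400),
}
for name, (raw, xs) in cases.items():
    print(f"{name}: {raw:.2f} Gbps raw -> {xs*1000:.0f} Mbps JPEG XS ≈ {raw/xs:.1f}:1")
```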

Evolution of Standards-Based IP Workflows Ground-To-Cloud

Last among the presentations is John Mailhot from Imagine Communications, also co-chair of an activity group at the VSF working on standardising interfaces for passing media from place to place. Within the data plane, it would be better to avoid vendors repeatedly writing similar drivers. Between ground and cloud, how do we standardise how video arrives and the data that needs to travel with it? Standardising around new technologies like Amazon’s CDI is similarly important.

John outlines the aim of having an interoperability point within the cloud above the low-level data transfer, closer to layer 7 than layer 1 in the OSI model. This work is being done within AIMS, the VSF, SMPTE and other organisations, based on existing technologies.

Q&A
The video finishes with a Q&A and includes comments from AWS’s Evan Statton, whose talk on CDI that evening is not part of this video. The questions cover comparing NDI with JPEG XS, how CDI uses networking to achieve high bandwidth and high reliability, the balance between minimising network usage and minimising CPU depending on the workflow, the increasingly agile nature of broadcast infrastructure, the need for PTP in the cloud, plus the pros and cons of standards versus specifications.

Watch now!
Speakers

Peter Wharton
Director Corporate Strategy, TAG V.S.
President, Happy Robotz
Vice President of Membership, SMPTE
Jean-Baptiste Lorent
Director Marketing & Sales,
intoPIX
John Mailhot
Co-Chair Ground-Cloud-Cloud-Ground Activity Group, VSF
Director & NMOS Steering Member, AMWA
Systems Architect for IP Convergence, Imagine Communications
Moderator: Paul Briscoe
Canadian Regional Governor, SMPTE
Consultant, Televisionary Consulting
Evan Statton
Principal Architect, Media & Entertainment
Amazon Web Services

Video: The New Video Codec Landscape – VVC, EVC, HEVC, LC-EVC, AV1 and more

The codec arena is a lot more complex than it used to be. Gone is the world of five years ago with AVC doing nearly everything. Whilst AVC is still a major force, we now have AV1 and VP9 in global use with billions of plays a year; HEVC has not become the dominant force it was once expected to be, but it is now seeing significant use on iPhones and overall adoption continues to grow. And now, in 2020, we see three new codecs on the scene: VVC, EVC and LCEVC.

To help us make sense of this, SMPTE has invited Walt Husak and Sean McCarthy to take us through what the current codecs are, what makes them different, how well they work, how to compare them and what the future roadmaps hold.

Sean starts by explaining which codecs are maintained by which bodies, with the ISO/IEC, ITU and MPEG all involved, not to mention the corporate codecs (VP8 and VP9 from Google) and the Chinese AVS series. Sean explains that these share major common elements and are each evolutions of their predecessors. But why are all these codecs needed? Next, we see the use cases that have brought them into existence. Granted, AVC and HEVC entered the scene to reduce bitrate, in an effort to make HD and UHD practical respectively, but EVC and LC-EVC have different aims.

Sean gives a brief overview of the basics of encoding, starting with partitioning the image, predicting parts of it, applying transforms, refining the result (also known as applying ‘loop filters’) and finishing with entropy coding. All of these blocks are briefly explained and exist in all the codecs covered in this talk. The improvements that make the newer codecs better are therefore evolutions of each of these elements. For instance, explains Sean, splitting the image into different sections, known as partitioning, has become more sophisticated in recent codecs, allowing larger sections to be considered at once while, at the same time, smaller partitions are created within each.
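As a rough illustration of the order of those stages, here is a toy sketch, in no way a real codec: it partitions a frame into 16×16 blocks, makes a trivial prediction, applies a stand-in transform and quantisation, and counts surviving coefficients in place of genuine entropy coding.

```python
import numpy as np

# Toy illustration (not a real codec) of the stages Sean lists: partition, predict,
# transform, quantise, entropy-code. A flat DC prediction and a crude coefficient
# count stand in for the real tools, just to show the order of operations.

def encode_block(block, q=16):
    prediction = np.full_like(block, block.mean())            # trivial intra "prediction"
    residual = block.astype(np.int32) - prediction.astype(np.int32)
    coeffs = np.round(np.fft.fft2(residual) / q)              # stand-in for a DCT-style transform + quantisation
    return int(np.count_nonzero(coeffs))                      # proxy for entropy-coded bits

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cost = sum(encode_block(frame[y:y+16, x:x+16])                # 16x16 partitioning, as in AVC
           for y in range(0, 64, 16) for x in range(0, 64, 16))
print("toy 'bit cost':", cost)
```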

All codecs have profiles whereby the tools in use, or the complexity of their implementation, are standardised for certain types of video: 8-bit, 10-bit, HDR etc. This allows hardware implementers to understand the upper bounds of computation so they don’t end up over-provisioning hardware resources and increasing cost. Sean looks at how VVC uses the same tools throughout all four of its profiles with only a few exceptions; screen content sees two extra tools arrive for 4:2:2 formats and above. AV1 has the same tools throughout all its profiles but, deliberately, EVC doesn’t: Essential Video Coding has a royalty-free baseline profile that uses techniques not subject to any licensing payments. Using this profile gives you approximately AVC-quality encoding. Using the main profile, however, gets you encoding similar to HEVC, albeit with royalty payments.

The next part of the talk examines two main reasons for the increase in compression over recent codec generations, block size and partitioning, before highlighting some new tools in VVC and AV1. Block size refers to the size of the blocks an image is split into for processing. By using a larger block, the algorithms can spot patterns more efficiently, so the continued increase from 16×16 in AVC to 128×128 in VVC drives an increase in computation but also in compression. Once you have your block, splitting it up to follow the features of the image is the next stage. Called partitioning, the number of ways the codecs can mathematically split a block has grown significantly. VVC can also partition chroma separately from luma, and VVC and AV1 include 64 and 16 ways, respectively, to partition diagonally rather than with the typical vertical and horizontal partitioning modes.
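A minimal sketch of the partitioning idea follows: it recursively splits a 128×128 superblock into quarters wherever the content is ‘busy’, using an arbitrary variance threshold in place of a real rate-distortion decision, and it ignores the binary, ternary and diagonal splits the newer codecs add.

```python
import numpy as np

# A minimal sketch of recursive block partitioning: start from a large block
# (128x128 in VVC, 16x16 in AVC) and keep splitting where the content is "busy".
# The variance threshold is an arbitrary stand-in for a real rate-distortion decision.

def partition(block, x=0, y=0, min_size=8, threshold=500.0):
    size = block.shape[0]
    if size <= min_size or np.var(block) < threshold:
        return [(x, y, size)]                      # keep this block whole
    half = size // 2
    parts = []
    for dy in (0, half):
        for dx in (0, half):
            parts += partition(block[dy:dy+half, dx:dx+half],
                               x + dx, y + dy, min_size, threshold)
    return parts

ctu = np.random.randint(0, 256, (128, 128)).astype(float)
print(f"{len(partition(ctu))} leaf blocks from one 128x128 superblock")
```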

Screen-content coding tools are increasingly important. Pandemics aside, there has long been growth in the amount of computer-generated content being shared online, whether through esports, video-conference screen sharing or elsewhere. Truth be told, HEVC has support for screen-content encoding, but it’s not in the main profile so many implementations don’t support it. VVC not only evolves the screen-content tools, it makes them present by default. AV1, too, was designed to work well with screen content. Sean takes some time to look at intra block copy (IBC), which allows the encoder to relate parts of the current frame to other sections of the same frame. Working at the prediction stage, with screen content that contains, for instance, lots of text, parts of that text will look similar and, to a first approximation, one part of the image can be duplicated from another. This is similar to motion compensation, where a macroblock is ‘copied’ from another frame in a different position, but for intra block copy all the work is done within the present frame. Palette mode is another screen-content tool; it allows the colours of a section of the image to be described as a palette of colours rather than using the full RGB value for each and every pixel.
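Both tools are easy to picture in toy form. The sketch below ‘predicts’ one block of a synthetic screen image by copying an already-coded block from the same frame (intra block copy) and then describes another block as indices into a short colour table (palette mode). It is purely illustrative and not how any real encoder is written.

```python
import numpy as np

# Toy illustrations of two screen-content tools mentioned above.

# Intra Block Copy: predict a block by copying an already-coded area of the SAME frame.
def intra_block_copy(frame, src_xy, dst_xy, size=16):
    sx, sy = src_xy
    dx, dy = dst_xy
    prediction = frame[sy:sy+size, sx:sx+size]
    residual = frame[dy:dy+size, dx:dx+size].astype(int) - prediction.astype(int)
    return residual                                  # ideally near-zero for repeated text/UI

# Palette mode: describe a block as indices into a short colour table instead of
# full per-pixel values - efficient when a block contains only a handful of colours.
def palette_encode(block):
    palette, indices = np.unique(block, return_inverse=True)
    return palette, indices.reshape(block.shape)

screen = np.zeros((64, 64), dtype=np.uint8)
screen[10:26, 10:26] = 200                           # a "character" that repeats
screen[10:26, 40:56] = 200
print(np.abs(intra_block_copy(screen, (10, 10), (40, 10))).sum())   # 0: perfect copy
pal, idx = palette_encode(screen[0:16, 0:16])
print(len(pal), "palette entries")
```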

Sean covers the scaled prediction between resolutions in VVC and super-resolution in AV1, VVC’s 360-degree video optimisations and luma mapping before handing over to Walt Husak who goes into more detail on how the newer codecs work, starting with LCEVC.

LCEVC is a codec that improves the performance of already-deployed codecs and is typically used to enhance spatial resolution. If you wanted to encode HD, the codec would downsample the HD to SD resolution and encode that with AVC, HEVC or another codec. At the same time, it would upsample that encoded video again and generate two correction layers that fix artefacts and add back sharpness. This information is added to the base codec’s bitstream and sent to the decoder. This allows a software-only enhancement to a hardware deployment, fully utilising the hardware that has already been deployed. Walt notes that the enhancement layers use much the same technology as has already been standardised by SMPTE as VC-6 (ST 2117). LCEVC has been found to be computationally efficient, allowing it to address markets such as embedded devices where hardware restrictions would otherwise prohibit the use of higher resolutions than the device was originally designed for. Very low bitrate performance is also very good.
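Here is a minimal sketch of that layered idea, with coarse quantisation standing in for the base AVC/HEVC encode and nearest-neighbour scaling for the up/downsampling. Real LCEVC uses two enhancement sub-layers and proper filters, so treat this only as an illustration of the data flow.

```python
import numpy as np

# A minimal sketch of the LCEVC idea described above, with a crude stand-in for
# the base codec: downsample, "encode" the low-resolution picture (here just coarse
# quantisation), upsample the decoded base, and send the difference as an
# enhancement layer that restores detail at full resolution.

def lcevc_like_split(hd_frame, q=32):
    sd = hd_frame[::2, ::2]                                    # downsample HD -> SD
    sd_decoded = (sd // q) * q                                  # stand-in for AVC/HEVC base layer
    upsampled = np.repeat(np.repeat(sd_decoded, 2, 0), 2, 1)    # back up to HD size
    enhancement = hd_frame.astype(int) - upsampled.astype(int)  # correction layer
    return sd_decoded, enhancement

def lcevc_like_decode(sd_decoded, enhancement):
    upsampled = np.repeat(np.repeat(sd_decoded, 2, 0), 2, 1)
    return upsampled.astype(int) + enhancement                  # full-resolution reconstruction

hd = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
base, enh = lcevc_like_split(hd)
assert np.array_equal(lcevc_like_decode(base, enh), hd)         # lossless in this toy example
```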

Sean introduces us to his “dos and don’ts” of codec comparisons. The theme running through them is to take care that you are comparing like for like. Codecs can be set to run ‘fast’ or ‘slow’, each of which brings its own compromises in encoding time and resulting quality. Similarly, some implementations are made simply to implement the standard as rigorously as possible, which is an invaluable tool when developing the codec or an implementation of it. Such a reference implementation of codec X, clearly, shouldn’t be compared against a production implementation of codec Y, as the encode times are guaranteed to be very different and you will not learn anything useful from the process. Likewise, single-pass and two-pass modes, which give codecs very different amounts of time to optimise, shouldn’t be cross-compared.
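By way of illustration only, a like-for-like test setup might look something like the sketch below: the same source clip, the same ‘effort’ class of preset and the same single-pass rate-control mode for each encoder. The file names and CRF values are placeholders to adjust for your own content, and you would still need a quality metric and bitrate matching before drawing any conclusions.

```python
import subprocess

# An illustrative (not definitive) like-for-like comparison: same source, the same
# preset "effort" class for each encoder and the same single-pass rate-control mode,
# so neither codec gets an unfair speed/quality advantage. Paths and CRF values are
# placeholders for your own test material.
source = "test_clip.y4m"
encodes = {
    "x264": ["ffmpeg", "-y", "-i", source, "-c:v", "libx264",
             "-preset", "medium", "-crf", "23", "out_x264.mp4"],
    "x265": ["ffmpeg", "-y", "-i", source, "-c:v", "libx265",
             "-preset", "medium", "-crf", "28", "out_x265.mp4"],
}
for name, cmd in encodes.items():
    subprocess.run(cmd, check=True)    # compare the outputs with an objective metric afterwards
```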

The talk draws to a close with a look at codec performance. Sean shows a number of graphs comparing VVC against HEVC. Interestingly, the metrics show a 40% increase in efficiency for VVC over HEVC, but in subjective tests the ratings show a 50% improvement. VVC’s encoder is approximately 10x as complex as HEVC’s.

HEVC and AV1 perform similarly at the same bit rate. Overall, Sean says, AV1 is a little blurrier in regions of spatial detail and can show some temporal flickering, while HEVC is more likely to show blocking and ringing artefacts. EVC’s main profile is up to 29% better than HEVC. LCEVC performs up to 8% better than AVC when using an AVC base layer, and also slightly better than HEVC when using an HEVC base layer. Sean makes the point that AVC has been continually updated since its initial release and is now on version 27, so it’s not strictly true to simply call it an ‘old’ codec; HEVC, similarly, is on version 7. Sean runs down part of the roadmap for AVC, which leads on to the use of AI in codecs.

Finishing the video, Walt looks at the use of deep learning in codecs. Deep learning is a form of machine learning and is often referred to as AI (Artificial Intelligence). For most people these terms are interchangeable; they refer to the ability to manipulate a signal not with a fixed equation or algorithm (such as Lanczos scaling) but with a computer that has been trained on many millions of examples to recognise what looks ‘right’ and to replicate that effect in new scenarios.

Walt talks about JPEG’s AI learning research on still images, which aims to complete an ‘end-to-end’ study of compression with AI tools. There’s also MPEG’s Deep Neural Network-based Video Coding, which is looking at which tools within codecs can be replaced with AI. Recently, we have also seen Leonardo Chiariglione found MPAI (Moving Picture, Audio and Data Coding by Artificial Intelligence), an industry body devoted to the use of AI in compression. With all this activity, it’s clear that future advances in compression will be driven by the increasing use of these techniques.

The video ends with a Q&A session.

Watch now!
Find out more on SMPTE’s site
Speakers

Sean McCarthy
Director, Video Strategy and Standards,
Dolby Laboratories
Walt Husak
Director, Image Technologies,
Dolby Laboratories

Video: RAVENNA AM824 & SMPTE ST 2110-31 Applications



Audio has a long heritage in IP compared to video, so there’s plenty of overlap and edge cases abound when working between RAVENNA, AES67 and SMPTE ST 2110-30 and -31. SMPTE’s 2110 suite of standards currently offers two methods of carrying audio, including a way to carry encoded audio such as Dolby AC-4 and Dolby E.

RAVENNA Evangelist Andreas Hildebrand is joined by Dolby Labs architect James Cowdery to discuss the compatibility of -30 and -31 with AES67 and how non-PCM data can be carried in -31, whether that’s lightly compressed audio, object audio for immersive experiences or even pure metadata.

Andreas starts by revising the key differences between AES67 and RAVENNA. The core of AES67 fits neatly within RAVENNA’s capabilities, including the transport of up to 24-bit linear PCM with 48 samples per packet and up to 8 channels of 48kHz audio. RAVENNA offers more sample rates, more channels and adds discovery and redundancy, with modes such as ‘MADI’ and ‘High Performance’ which help constrain and select the relevant parameters.
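Those AES67 defaults are worth turning into numbers. The illustrative calculation below (payload only; RTP/UDP/IP overhead comes on top) shows what 48 samples per packet at 48kHz with 8 channels of 24-bit audio means on the wire.

```python
# A quick, illustrative calculation of what the AES67 defaults quoted above imply
# on the wire (payload only; RTP/UDP/IP/Ethernet overhead comes on top).
sample_rate = 48_000          # Hz
samples_per_packet = 48       # AES67 default -> 1 ms packet time
channels = 8
bytes_per_sample = 3          # 24-bit linear PCM

packet_time_ms = samples_per_packet / sample_rate * 1000
packets_per_second = sample_rate / samples_per_packet
payload_bytes = channels * samples_per_packet * bytes_per_sample
payload_mbps = payload_bytes * 8 * packets_per_second / 1e6

print(f"{packet_time_ms:.1f} ms packets, {packets_per_second:.0f} packets/s, "
      f"{payload_bytes} B payload, {payload_mbps:.3f} Mbps payload")
# -> 1.0 ms packets, 1000 packets/s, 1152 B payload, 9.216 Mbps payload
```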

SMPTE ST 2110-30 is based on AES67 but adds its own constraints such that any -30 stream can be received by an AES67 decoder; however, an AES67 sender needs to be aware of -30’s constraints for its stream to be correctly decoded by a -30 receiver. Andreas says that all AES67 senders now have this capability.


In contrast to 2110-30, 2110-31 is all about AES3 and the ability of AES3 to carry both linear PCM and non-PCM data. We look at the structure of AES3, which is built of audio blocks, each containing 192 frames. Each frame is split into subframes: 2 in the case of stereo, 64 in the case of MADI. Within each of these subframes we finally find the preamble and the 24-bit data. Andreas explains how this is linked to AM824 and the SDP details needed.
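To make that hierarchy easier to picture, here is a simplified model of it in code. The field names are descriptive rather than normative, and details such as the user bit are omitted.

```python
from dataclasses import dataclass
from typing import List

# A simplified model of the AES3 hierarchy Andreas walks through: an audio block of
# 192 frames, each frame holding one subframe per channel (2 for stereo, up to 64 for
# MADI), and each subframe carrying a preamble, up to 24 bits of audio (or non-PCM
# data) plus validity/status bits. Field names are descriptive, not normative.

@dataclass
class Subframe:
    preamble: str        # X/Y/Z sync pattern
    audio_data: int      # up to 24-bit sample, or non-PCM payload bits
    validity: bool
    channel_status: int  # one bit per frame, assembled over the 192-frame block

@dataclass
class Frame:
    subframes: List[Subframe]    # one per channel

@dataclass
class AudioBlock:
    frames: List[Frame]          # always 192 frames per block

stereo_block = AudioBlock(frames=[
    Frame(subframes=[Subframe("Z" if i == 0 else "X", 0, True, 0),
                     Subframe("Y", 0, True, 0)])
    for i in range(192)
])
print(len(stereo_block.frames), "frames,",
      len(stereo_block.frames[0].subframes), "subframes each")
```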

James Cowdery leads the second part of the talk, first covering SMPTE ST 337, which details how to send non-PCM audio and data over an AES3 serial digital audio interface. It can carry AC-3, AC-4 for object audio delivering immersive audio experiences, Dolby E and also the metadata standards KLV and Serial ADM.

‘Why use Dolby E?’ asks James. Dolby E has a number of advantages although, as bandwidth has become more available, it is increasingly being replaced by uncompressed audio. However, legacy workflows may now rely on IP infrastructure between the receiver and the decoder, so it’s important to be able to carry it. Dolby E also packs a whole surround-sound set into a single data stream, removing any problems of relative phase, and can be carried over MPEG-2 transport streams, so it still has plenty of flexibility and use cases.

Its strength can bring fragility, and one way you can destroy a Dolby E feed is by switching between two videos containing Dolby E in the middle of the data rather than waiting for the gap between packets, known as the guardband. Dolby E needs to be aligned to the video so that you can crossfade and switch between videos without breaking the audio. James makes the point that one reason to use -31 rather than -30 to carry Dolby E, or any other non-PCM data, is that -30 assumes a sample-rate converter can be used, and there is usually little control over when an SRC is brought into use. A sample-rate converter, of course, would destroy any non-PCM data.

RAVENNA AM824 and 2110-31 gateways will preserve the line position of Dolby E data, so Dolby E transport can be supported by a vendor without explicit Dolby support. James notes that your Dolby E packets need to be 125 microseconds long to achieve packet-level switching without missing a guardband and corrupting data.
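The arithmetic behind that 125-microsecond figure is straightforward; the illustrative numbers below show how short packets give a switch many opportunities per video frame to land inside the guardband.

```python
# Why 125 microsecond packets help: the shorter the packet, the finer the boundaries
# at which a switch can land inside the Dolby E guardband. Illustrative numbers only.
sample_rate = 48_000
packet_time_us = 125

samples_per_packet = sample_rate * packet_time_us / 1e6       # 6 samples per packet
packets_per_second = 1e6 / packet_time_us                     # 8000 packets/s
frame_time_ms_2997 = 1001 / 30_000 * 1000                     # ~33.37 ms per 29.97fps frame
packets_per_frame = frame_time_ms_2997 * 1000 / packet_time_us

print(f"{samples_per_packet:.0f} samples/packet, {packets_per_second:.0f} packets/s, "
      f"~{packets_per_frame:.0f} packets per 29.97fps video frame")
```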

Immersive audio requires metadata. Serial ADM (sADM) is an open specification for metadata interchange, the aim of which is to help interoperability between vendors. sADM metadata can be embedded in SDI, transported uncompressed as SMPTE ST 302 in MPEG-2 transport streams and, for 2110, carried in -31. It’s based on an XML description of metadata from the Audio Definition Model, and James advises using the GZip compression mode to reduce the bitrate as it can be sent per frame. An alternative metadata standard is SMPTE ST 336 (KLV), an open format with a binary payload, which makes it a lower-latency method of sending metadata. These methods of sending metadata made sense in the past but now, with SMPTE ST 2110 having its own section for metadata essences, we see 2110-41 taking shape to allow data like this to be carried on its own.

Watch now!
Speakers

James Cowdery
Senior Staff Architect
Dolby Laboratories
Andreas Hildebrand
RAVENNA Evangelist,
ALC NetworX