Video: Live Production Forecast: Cloudy for the Foreseeable Future

Our ability to work remotely during the pandemic is thanks to the hard work of many people who developed the technologies that made it possible. Even before the pandemic struck, this work was already under way and gaining momentum, overcoming the challenges and hurdles of working in IP both within the broadcast facility and in the cloud.

SMPTE’s Paul Briscoe moderates the discussion surrounding these on-going efforts to make the cloud a better place for broadcasters in this series of presentations from the SMPTE Toronto section. First up is Peter Wharton from TAG V.S. talking about ways to innovate workflows to better suit the cloud.

Peter first outlines the challenges of live cloud production: keeping latency low and signal quality high while managing the high bandwidth required, all without losing a handle on costs. There is an increasing number of cloud-native solutions, but how many are truly innovating? Don’t just move workflows into the cloud, advocates Peter; rather, take this opportunity to embrace the cloud.

Working with the cloud will be built on new transport interfaces like RIST and SRT using a modular and open architecture. Scalability is the name of the game for ‘the cloud’ but the real trick is in building your workflows and technology so that you can scale during a live event.
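As a taste of how accessible these transports have become, here is a minimal sketch of SRT contribution into the cloud using FFmpeg built with libsrt, driven from Python; the endpoint and input file are hypothetical placeholders, not anything from the talk:

```python
# Minimal sketch: push a stream to a cloud ingest point over SRT using
# FFmpeg (assumes a build with libsrt). Endpoint and input are placeholders.
import subprocess

SRT_URL = "srt://cloud-ingest.example.com:9000?mode=caller"  # caller initiates the connection

subprocess.run([
    "ffmpeg",
    "-re",              # pace reading at the input's native frame rate, as for live
    "-i", "input.ts",   # stand-in for a live source
    "-c", "copy",       # pass the stream through without re-encoding
    "-f", "mpegts",     # SRT payloads are commonly MPEG-TS
    SRT_URL,
], check=True)
```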

Source: TAG V.S.

There are still obstacles to be overcome. Bandwidth for uncompressed video is one: typical HD signals run up to 3Gbps uncompressed, which drives very high data transfer costs. The lack of PTP in the cloud makes ST 2110 workflows difficult, as does the lack of multicast.
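To see where figures like 3Gbps come from, a back-of-the-envelope calculation helps. The sketch below assumes the common 10-bit 4:2:2 sampling and counts only the active picture:

```python
def uncompressed_bps(width, height, fps, bits_per_sample=10, samples_per_pixel=2):
    """Active-picture data rate for 4:2:2 video: two samples per pixel
    (one luma plus one alternating chroma), each bits_per_sample bits."""
    return width * height * fps * samples_per_pixel * bits_per_sample

# 1080p60 10-bit 4:2:2 is ~2.5Gbps of active picture alone; with SDI
# blanking it is carried as a 2.97Gbps (3G-SDI) signal.
print(uncompressed_bps(1920, 1080, 60) / 1e9)  # ~2.49
```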

Tackling bandwidth, Peter looks at low-latency ways to compress or transport video such as NDI, NDI|HX, JPEG XS and Amazon’s uncompressed CDI. Peter talks us through some of the considerations in choosing the right codec for the task in hand.

Finishing his talk, Peter asks if it isn’t time for a radical change. Why not rethink the entire process and embrace latency? Peter gives an example of a colour grading workflow which has been able to switch from on-prem grading on very high-spec computers to running this same, incredibly intensive process in the cloud. The company is able to spin up thousands of CPUs in the cloud and use spot pricing to create temporary, low-cost, extremely powerful computers. This has brought waiting times for jobs to be processed down significantly and has reduced the cost of processing by an order of magnitude.

Lastly, Peter looks further into the future, examining how saturating the stadium with cameras could change the way we operate them. With 360-degree coverage of the stadium, the position of the camera can be changed virtually by AI, allowing camera operators to be remote from the stadium. There is already work from Canon and Intel to develop this. Whilst this may not be able to replace all camera operators, sports is the home of bleeding-edge technology. How long can it resist technology that can create any camera angle?

Source: intoPIX

Jean-Baptiste Lorent is next, from intoPIX, to explain what JPEG XS is. A new, ultra-low-latency codec, it meets the challenges of the industry’s move to IP, its increasing desire to move data rather than people, and the continuing trend for COTS servers and cloud infrastructure to be part of the real-time production chain.

As Peter covered, uncompressed data rates are very high. The Tokyo Olympics will be filmed in 8K which racks up close to 80Gbps for 120fps footage. So with JPEG XS standing for Xtra Small and Xtra Speed, it’s no surprise that this new ISO standard is being leant on to help.
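Reusing the back-of-the-envelope function from earlier, the 8K figure checks out:

```python
# 8K (7680x4320) at 120fps, 10-bit 4:2:2: ~80Gbps before compression
print(uncompressed_bps(7680, 4320, 120) / 1e9)  # ~79.6
```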

Tested as visually lossless to 7 or more encode generations, and with a latency of only a few lines of video, JPEG XS works well in multi-stage live workflows. Jean-Baptiste explains that it’s low in complexity and can work well on FPGAs and on CPUs.

JPEG XS can support up to 16-bit values, any chroma subsampling and any colour space. It’s been standardised to be carried in MPEG transport streams, in SMPTE ST 2110 as ST 2110-22, over RTP (pending), within HEIF file containers and more. Worst-case bitrates are 200Mbps for 1080i, 390Mbps for 1080p60 and 1.4Gbps for 2160p60.
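Set against the uncompressed rates computed above, even these worst-case figures imply a healthy saving. A quick sketch of the implied compression ratios, again assuming 10-bit 4:2:2 sources:

```python
# Implied worst-case compression ratios versus 10-bit 4:2:2 active picture
print(uncompressed_bps(1920, 1080, 60) / 390e6)  # 1080p60: ~6.4:1
print(uncompressed_bps(3840, 2160, 60) / 1.4e9)  # 2160p60: ~7.1:1
```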

Evolution of Standards-Based IP Workflows Ground-To-Cloud

Last in the presentations is John Mailhot from Imagine Communications, also co-chair of an activity group at the VSF working on standardising interfaces for passing media from place to place. Within the data plane, the aim is to avoid vendors repeatedly writing similar drivers. Between ground and cloud, how do we standardise both the video arriving and the data needed around it? Standardising around new technologies like Amazon’s CDI is similarly important.

John outlines the aim of having an interoperability point within the cloud above the low-level data transfer, closer to 7 than to 1 in the OSI model. This work is being done within AIMS, VSF, SMPTE and other organisations based on existing technologies.

Q&A
The video finishes with a Q&A and includes comments from AWS’s Evan Statton, whose talk on CDI that evening is not part of this video. The questions cover comparing NDI with JPEG XS, how CDI uses networking to achieve high bandwidth and high reliability, the balance between minimising network use and minimising CPU use depending on workflow, the increasingly agile nature of broadcast infrastructure, the need for PTP in the cloud, plus the pros and cons of standards versus specifications.

Watch now!
Speakers

Peter Wharton
Director Corporate Strategy, TAG V.S.
President, Happy Robotz
Vice President of Membership, SMPTE
Jean-Baptiste Lorent
Director Marketing & Sales,
intoPIX
John Mailhot
Co-Chair, Ground-Cloud-Cloud-Ground Activity Group, VSF
Director & NMOS Steering Member, AMWA
Systems Architect for IP Convergence, Imagine Communications
Moderator: Paul Briscoe
Canadian Regional Governor, SMPTE
Consultant, Televisionary Consulting
Evan Statton
Principal Architect, Media & Entertainment
Amazon Web Services

Video: Decentralised Production Tips and Best Practices

Live sports production has seen a massive change during COVID. We recently looked at how the MCR has changed on The Broadcast Knowledge, hearing how Sky Sports and Arsenal TV had radically altered their operations. This time we look at how life in the truck has changed. The headline is that most people are staying at home, so how do you keep people at home and still mix a multi-camera event?

Ken Kerschbaumer from Sports Video Group talks to VidOvation’s Jim Jachetta and James Japhet from Hawk-Eye to understand the role they’ve been playing in bringing live sports to the screen where the REMI/Outside Broadcast has been pared down to the minimum and most staff are at home. The conversation starts with the backdrop of The Players Championship, part of the PGA Tour, which was produced by 28 operators in the UK who mixed 120+ camera angles and the audio to produce 25 live streams, including graphics, for broadcasters around the world.

Lip-sync and genlock aren’t optional when it comes to live sports. Jim explains that his equipment can handle up to fifty cameras with genlock synchronisation over bonded cellular, and this is how The Players worked, with a bonded cellular unit on each camera. Jim discusses how audio also has to be frame-accurate, as they had many, many mics always open going back to the sound mixer at home.

James from Hawk-Eye explains that part of their decision to leave equipment on-site was due to lip-sync concerns. Their system worked differently to VidOvation’s, allowing people to ‘remote desktop’ in, using a Hawk-Eye-specific low-latency technology dedicated to video transport. This also works well for events where there isn’t enough connectivity to stream 10, 20 or 50+ feeds from the venue to different locations.

The production has to change to take account of two factors: the chance a camera’s connectivity might go down, and latency. It’s important to plan shots ahead of time with these factors in mind, outlining the backup plan, say going to a wide shot on camera 3 if camera 1 can’t be used. When working with bonded cellular, latency is an unavoidable factor and can be as high as 3 seconds. In this scenario, Jim explains, it’s important to tell the camera operators what you’re looking for in a shot and let them work more autonomously than you traditionally would.

Latency is also very noticeable for the camera shaders, who usually rack cameras with milliseconds of latency. CCUs are not used to waiting a long time for responses, so a lot of faked messages need to be sent to keep the CCU and controller happy. The shader operator then needs to get used to the latency, which won’t be as high as the video latency, and take things a little slower in order to get the job done.

Not travelling everywhere has been received fairly well by freelancers, who can now book in more jobs and don’t need to suffer reduced pay for travel days. There are still people travelling to site, Jim says, but usually people who can drive there and who then sit in the control room with shields. For the PGA Tour, the savings are racking up. Whilst there are a lot of other costs and losses at the moment for so many industries, it’s clear that the reduced travel and hosting will continue to be beneficial after restrictions are lifted.

Watch now!
Speakers

Jim Jachetta
EVP & CTO: Wireless Video & Cellular Uplinks
VidOvation
James Japhet
Managing Director
Hawk-Eye North America
Ken Kerschbaumer
Editorial Director,
Sports Video Group

Video: TV Sport Innovation – Staying Ahead of the Game

Sport has always led innovation in many areas of broadcast, but during COVID broadcasters not only had to adapt nearly every workflow and redeploy staff, they then had to brace to deliver 100 games in 40 days. Gordon Roxburgh sums it up: “I’ve been at Sky twenty years, and I think [these have] been the most challenging six months…we’ve faced.”

In this session from the DTG’s Future Vision 2020 conference, Carl Hibbert from Futuresource Consulting talks to Sky, Arsenal TV and Facebook to find out how their businesses have adapted. Melissa Lawton from Facebook explains how their live streaming, both for user-generated footage and produced sport, has adapted to changing needs. When COVID hit, Facebook lost some very valuable content. Their response was to double down on fan engagement, with challenges for fans to create content, and by staging events which were produced and commentated like real sports events, but where all the shots were of people at home exercising, brought into the narrative of an Ironman competition. Facebook have also invested in their user-facing tools and dashboards to help expose and monitor contribution via live streaming.

Gordon Roxburgh from Sky explains the sea change he’s seen in production. “The first thing was to keep channels on air…and keep staff safe.” They moved rapidly from a fully staffed office to just three or four people on-site and a presenter. In order to mix, they created a Virtual Production suite which allowed people to create content in the cloud.

For content, Gordon says that watch-alongs proved very popular, where key sports personalities talk through what they were thinking during key sporting moments. This was just one of the many content ideas that kept programming going until “Project Restart” commenced, when the whole sports ecosystem asked itself ‘How can we deliver 100 games in 40 days?’ Once they knew the season would start, Gordon says, this opened up a 3-week build period during which BT Media and Broadcast, NEP, NEP Connect and multiple internal departments collaborated to produce rapid turnarounds.

“As an industry, we came together.” The working practices developed at Sky were shared with other major broadcasters who also shared their best practice – always putting staff first. Sky even went to the extent of building a technical space in a large studio floor to keep people apart and co-opted a set of training rooms to become a self-contained graphics unit. These ideas kept graphics operators together but not mixing with the rest of the production.

The view from Arsenal TV is explained by John Dollin. They moved quickly very early on and were able to be back in the office from February. Whilst Arsenal TV doesn’t have the rights to stream live, they produce their programmes live for transmission later. This used to be done in a crowded room but was soon transferred to a virtual mixer in the cloud with remote editors. John highlights the challenge of bringing freelancers into the system and providing them with appropriate supervision. More importantly, he feels that their current ability to maintain pre-COVID production quality is due to the continued dedication of certain personnel who are putting in long hours, which is not a sustainable situation to be in.

Watch now!
Free Registration
Speakers

Gordon Roxburgh
Technical Manager,
Sky Sports
Melissa Lawton
Live Sports Production Strategy,
Facebook
John Dollin
Senior Product & Engineering Manager,
Arsenal Football Club
Carl Hibbert
Head of Consumer Media & Technology,
Futuresource Consulting

Video: 5 Myths About Dolby Vision & HDR debunked

There seems to be no let-up in the number of technologies coming to market, and whilst some, like HDR, have been slowly advancing on us for many years, the technologies that enable them, such as Dolby Vision, HDR10+ and the metadata-handling technologies further upstream, are more recent. So it’s no surprise that there is some confusion over what’s possible and what’s not.

In this video, Bitmovin and Dolby reveal the truth behind 5 myths surrounding the implementation and financial impact of Dolby Vision and HDR in general. Bitmovin’s Sean McCarthy sets the scene with an overview of their research into the market. He explains why quality remains important: simply put, either to keep up with competitors or to be a differentiator. Sean then gives an overview of the ‘better pixels’ principle, underlining that improving the pixels themselves, with technologies such as wide colour gamut (WCG) and HDR, is often more effective than higher resolution.

David Brooks then explains why HDR looks better, covering the biology and psychology behind the effect as well as the technology itself. The trick with HDR is that there are no extra brightness values for the pixels; rather, the brightness of each pixel is mapped onto a larger range. It’s this mapping which is the strength of the technology: altering the mapping gives different results, ultimately allowing you to run SDR and HDR workflows in parallel. David also explains how HDR can be mapped down to low-brightness displays.
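To make that mapping concrete, here is a minimal sketch of the PQ (SMPTE ST 2084) EOTF used by HDR10 and Dolby Vision, which maps a normalised 0–1 code value onto a 0–10,000 nit luminance range; the constants come from the standard:

```python
def pq_eotf(code):
    """SMPTE ST 2084 (PQ) EOTF: normalised code value in [0,1] -> nits."""
    m1 = 2610 / 16384        # ~0.1593
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    p = code ** (1 / m2)
    return 10000 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

print(pq_eotf(0.5))  # mid-range code value -> ~92 nits
print(pq_eotf(1.0))  # full-range code value -> 10000 nits
```

Note how strongly non-linear the curve is: half the code range maps to under 1% of the peak luminance, which is what frees up code values for the highlights.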

The last half of this video is dedicated to the myths. Each myth gets several slides of explanation; for instance, one suggests that the workflows are very complex, and Hagan Last walks through a number of scenarios showing how dual (or even three-way) workflows can be achieved. The other myths, and the questions at the end, cover resolution, licensing cost, metadata, managing dual SDR/HDR assets and live workflows with Dolby Vision.

Watch now!
Speakers

David Brooks
Senior Director, Professional Solutions,
Dolby Laboratories
Hagan Last
Technology Manager, Content Distribution,
Dolby Laboratories
Sean McCarthy
Senior Technical Product Marketing Manager,
Bitmovin
Moderator: Kieran Farr
VP Marketing,
Bitmovin