Video: Esports Production During COVID

Esports continues to harness the best of the IT and broadcast industries to bring large-scale events to half a billion people annually. Naturally, the way this is done has changed with the pandemic, but the 10% annual growth remains on track. The esports market is still maturing and, while it does, the industry is working hard on innovating with the best technology to bring the best quality video to viewers and to drive engagement. Within the broadcast industry, vendors are working hard to understand how best to serve a market segment that is very happy to adopt high-quality, low-latency solutions, while broadcasters are asking whether the content is right for them.

Tackling all of these questions is a panel of experts brought together by SMPTE’s Washington DC section, including Christopher Keath from Blizzard Entertainment, Mark Alston from EA, Scott Adametz from Riot Games, Richard Goldsmith from Deloitte and, speaking in January 2021 while he worked for Twitch, Jonas Bengtson.

First off the bat, Michael introduced the esports market. With 2.9 billion people playing games globally and 10% growth year-on-year, he says that it’s still a relatively immature market, and he outlines some notable trends. Firstly, there is a push to grow into a mainstream audience. To its benefit, esports has a large and highly loyal fanbase, but growth outside of this demographic is still difficult. In this talk and others, we’ve heard of the different types of accompanying, secondary programmes aimed at those who are interested enough to watch a summary and a story being told, but not the blow-by-blow eight-hour tournament.

Another trend outlined by Michael is data sharing. There are many stats available, both about the play itself, similar to traditional sports’ ‘percentage possession’ stats, and factual data which can trigger graphics, such as names, affiliations and locations. Secondary data processing, just like in traditional sports, is also a big revenue opportunity, so the market, explains Michael, is still working on bigger and better ways to share data for mutual benefit. More information on Deloitte’s view of the market is in this article, with a different perspective in this global esports market report.

You can watch with either the Speaker view or the Gallery view.

The panel discusses the different angle that esports has taken on publishing, with many young producers only knowing the free software ‘OBS’, underlined by Scott who says esports can still be scrappy in some places, bringing together unsynchronised video sources in a ‘democratised’ production which has both benefits and downsides. Another difference within esports is that many viewers have played the games, often extensively. They therefore know exactly what the games look like, so watching them streamed can feel a very different experience after going through, potentially, multiple stages of encoding. The panel all spend a lot of time tuning encoders for different games to maintain the look as best as possible.

Christopher Keath explains what observers are. Effectively, these are the in-game camera operators, who report to a head observer who co-ordinates them and has a simple switcher to make some of their feeds available to the production. This leads to a discussion on how best to bring the observers’ video into the programmes during the pandemic. Riot has kitted out the PCs in observers’ homes to bring them up to spec and allow them to stream out, whereas EA has moved the observer PCs into their studio, backed by hefty internet links.

Jonas points out that Twitch brings tens of thousands of streams to the internet constantly and explains that the Twitch angle on streaming is often different to the ‘esports’ angle of big events; rather, streams are personality-driven. The proliferation of streaming onto Twitch, other similar services and esports itself has driven GPU manufacturers, Jonas continues, to include dedicated streaming functionality on their GPUs to stop encoding detracting from in-game performance. During the pandemic, Twitch has seen a big increase in social games, where interaction is key, rather than team-based competition games.

You can watch with either the Speaker view or the Gallery view.

Scott talks about Riot’s global network backbone, which carried 3.2 petabytes of data – just for production traffic – during the League of Legends Worlds event, which they produced in 19 different languages working between Berlin, LA and Shanghai. For him, the pandemic brought a change in the studio, where everything was rendered in real time in the Unreal game engine. This allowed them to use augmented reality and have a much more flexible studio which looked better than the standard ‘VR studios’. He suggests they are likely to keep using this technology.

Agreeing, but advocating a hybrid approach, Christopher says that the reflexes of the gamers are amazing, so there really isn’t a replacement for having them playing side-by-side on a stage. On top of that, you can then unite the excitement of the crowd with lights, smoke and pyrotechnics, so staged events will still have their place for some programmes, but cloud production is still a powerful tool. Mark agrees and adds that EA are exploring the ways in which remote working can improve work-life balance.

The panel concludes by answering questions touching on the relative lack of esports on US linear TV compared to Asia and elsewhere, explaining the franchise/league structures, discussing the vast range of technology-focused jobs in the sector, the unique opportunities for fan engagement, co-streaming and the impact of 5G.

Watch now!
Speakers

Mark Alston
Technical Production Manager,
Electronic Arts (EA)
Christopher Keath
Broadcast Systems Architect,
Blizzard Entertainment
Jonas Bengtson
Senior Engineering Manager, Discord
Formerly Director at Twitch
Scott Adametz
Senior Manager, Esports Engineering,
Riot Games
Richard Goldsmith
Manager,
Deloitte Consulting

Video: As Time Goes by…Precision Time Protocol in the Emerging Broadcast Networks

How much timing accuracy do you need? PTP can get you timing to within nanoseconds, but is that needed, how can you transport it and how does it work? These questions and more are under the microscope in this video from RTS Thames Valley.

SMPTE Standards Vice President Bruce Devlin introduces the two main speakers by reminding us why we need timing and how we dealt with it in the past. Looking back to the genesis of television, points out Bruce, everything was analogue and it was almost impossible to delay a signal at all. An 8cm, tightly wound coil of copper would give you only 450 nanoseconds of delay; alternatively, quartz crystals could be used to create delays. In the analogue world, these delays were used to time signals, and since little could be delayed, only small adjustments were necessary. Bruce’s point is that we’ve now swapped around. Delays are everywhere because IP signals need to be buffered at every interface. It’s easy to find buffers that you didn’t know about, and even small ones really add up. Whereas analogue TV got us from camera to TV within microseconds, it’s now a struggle to get below two seconds.
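As a quick sanity check on Bruce’s figure (my arithmetic, not from the talk): the cable length behind a given delay follows from the signal’s propagation speed, which is why tens of metres of wire end up wound into an 8cm coil. A minimal sketch, assuming a typical velocity factor of 0.66:

```python
# Back-of-the-envelope: how much copper does 450 ns of delay take?
# The 0.66 velocity factor is an assumption (typical for coaxial
# cable); the exact value depends on the cable's dielectric.
C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66   # assumed fraction of c in the cable

delay_s = 450e-9                          # 450 nanoseconds
length_m = C * VELOCITY_FACTOR * delay_s  # distance covered in that time
print(f"{length_m:.1f} m of cable for 450 ns")  # ~89.0 m
```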

Hand in hand with this change is the move from metadata and control data being embedded in the video signal – and hence synchronised with it – to all data being sent separately. This is where PTP, the Precision Time Protocol, comes in: an IP-based timing mechanism which can keep time despite the buffers and allow signals to be synchronised.

Next to speak is Richard Hoptroff, whose company works with broadcasters and financial services to provide accurate time derived from four satellite services (GPS, GLONASS etc.) and the Swedish time authority RISE. They have been working on the problem of delivering time to people who can’t put up antennas, either because they are operating in an AWS datacentre or broadcasting from an underground car park. Delivering time over a wired network, Richard points out, is much more practical as it’s not susceptible to jamming and spoofing, unlike GPS.

Richard outlines SMPTE’s ST 2059-2 standard, which says that a local system should maintain accuracy to within 1 microsecond. The JT-NM TR1001-1 specification calls for a maximum of 100ms between facilities; however, Richard points out that, in practice, 1ms or even 10 microseconds is highly desired. In tests, he shows that with layer 2, PTP unicast looping around western Europe was able to stay within 1 microsecond, and layer 3 within 10 microseconds. Over the internet, with a VPN, Richard says he’s seen around 40 microseconds, which would then feed into a boundary clock at the receiving site.

Summing up, Richard points out that delivering PTP over a wired network can provide great timing on an OPEX budget, without needing timing hardware. On top of that, you can use it to add resilience to any existing GPS timing.

Gerard Phillips from Arista speaks next to explain some of the basics of how PTP works. If you are interested in digging deeper, please check out this talk on PTP from Arista’s Robert Welch.

Already in use by many industries including finance, power and telecoms, PTP is based on IEEE 1588, allowing synchronisation down to tens of nanoseconds. Just sending out a timestamp to the network would be a problem because jitter is inherent in networks; it’s part and parcel of how switches work. Dealing with the timing variations as smaller packets wait for larger packets to get out of the way is part of the job of PTP.
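To see where some of that jitter comes from, consider serialisation delay: a small PTP message queued behind a single full-size frame has to wait for the whole frame to clock onto the wire. A quick sketch (my illustration, not from the talk):

```python
# Serialisation delay: how long one frame occupies the link.
# A PTP message stuck behind a full-size frame waits at least
# this long - one source of the jitter PTP has to remove.
def serialisation_delay_us(frame_bytes: int, link_bps: float) -> float:
    """Microseconds needed to clock a frame onto the link."""
    return frame_bytes * 8 / link_bps * 1e6

for link in (1e9, 10e9):  # 1 GbE and 10 GbE
    print(f"{link/1e9:.0f} GbE: "
          f"{serialisation_delay_us(1500, link):.2f} us per 1500-byte frame")
# 1 GbE: 12.00 us per 1500-byte frame
# 10 GbE: 1.20 us per 1500-byte frame
```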

To do this, the main clock – called the grandmaster – sends out the time to everyone 8 times a second. This means that all the devices on the network, known as endpoints, will know what time it was when the message was sent. They still won’t know the actual time because they don’t know how long the message took to get to them. To determine this, each endpoint has to send a message back to the grandmaster. This is called a delay request. All that happens here is that the grandmaster replies with the time it received the message.

PTP Primary-Secondary Message Exchange.
Source: Meinberg [link]

This gives us four points in time. The first (t1) is when the grandmaster sent out the first message. The second (t2) is when the endpoint received it. t3 is when the endpoint sent out its delay request and t4 is the time when the grandmaster received that request. The difference between t2 and t1 indicates how long the original message took to get there. Similarly, t4-t3 gives that information in the other direction. These can be combined to derive the time, as sketched below. For more info, either check out Arista’s talk on the topic or this talk from RAVENNA and Meinberg, from which the figure above comes.
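In code, combining the four timestamps is straightforward. This sketch (mine, not from the talk) implements the standard PTP arithmetic, which assumes the path delay is the same in both directions:

```python
# Standard PTP arithmetic: assuming a symmetrical path, the four
# timestamps yield both the mean path delay and the endpoint's
# offset from the grandmaster clock.
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: Sync sent (grandmaster), t2: Sync received (endpoint),
    t3: Delay_Req sent (endpoint), t4: Delay_Req received (grandmaster)."""
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset_from_master = ((t2 - t1) - (t4 - t3)) / 2
    return offset_from_master, mean_path_delay

# Example: endpoint clock 1.5 us fast, across a 10 us (each way) path.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=11.5e-6, t3=50.0e-6, t4=58.5e-6)
print(f"offset {offset*1e6:.1f} us, delay {delay*1e6:.1f} us")
# offset 1.5 us, delay 10.0 us
```

Note the symmetry assumption baked into those two lines: it’s exactly this that breaks down over WAN links, as the AES67 talk below discusses.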

Gerard briefly gives an overview of Boundary Clocks, which act as secondary time sources taking pressure off the main grandmaster(s) so they don’t have to deal with thousands of delay requests; they also solve a problem with the jitter of signals passing through switches, as it’s usually the switch itself which is the boundary clock. Alternatively, Transparent Clock switches simply pass on the PTP messages but update the timestamps to take account of how long each message took to travel through the switch (sketched below). Gerard recommends only using one type in a single system.
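A transparent clock’s behaviour can be boiled down to one line: measure how long the message sat inside the switch and add that residence time to the message’s correction field, so endpoints can discount it. A minimal sketch (my illustration, not from the talk):

```python
# Sketch of an end-to-end transparent clock handling a Sync message:
# the residence time (egress minus ingress timestamp) is added to the
# message's correctionField so endpoints can discount queuing delay.
def forward_sync(correction_ns: int, ingress_ns: int, egress_ns: int) -> int:
    """Return the updated correctionField for the outgoing message."""
    residence_ns = egress_ns - ingress_ns
    return correction_ns + residence_ns

# A message that queued for 12 us inside the switch:
print(forward_sync(correction_ns=0, ingress_ns=1_000, egress_ns=13_000))  # 12000
```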

Referring back to Bruce’s opening, Gerard highlights the need to monitor the PTP system. Black and burst timing didn’t need monitoring: as long as the main clock was happy, the DAs downstream just did their job and on occasion needed replacing. PTP is a system with bidirectional communication, and it changes depending on network conditions. Gerard makes a plea to build a monitoring system as part of your solution to provide visibility into how it’s working, because as soon as there’s a problem with PTP, there could quickly be major problems downstream. Network switches themselves can provide a lot of telemetry on this, showing you delay values and allowing you to see grandmaster changes.

Gerard’s ‘Lessons Learnt’ list features locking down PTP so only a few ports are actually allowed to provide time information to the network, dealing carefully with audio protocols like Dante which need PTP version 1 domains, and making sure all switches are PTP-aware.

The video finishes with Q&A after a quick summary of SMPTE RP 2059-15 which is aiming to standardise telemetry reporting on PTP and associated information. Questions from the audience include asking how easy it is to do inter-continental PTP, whether the internet is prone to asymmetrical paths and how to deal with PTP in the cloud.

Watch now!
Speakers

Bruce Devlin
Standards Vice President,
SMPTE
Gerard Phillips
Systems Engineer,
Arista
Richard Hoptroff
Founder and CTO,
Hoptroff London Ltd

Video: Cloud Services for Media and Entertainment: Production and Post-Production

Many content producers and broadcasters have been forced into the cloud. Some have chosen to remote-control their on-prem kit, but many have found that the cloud has brought them benefits beyond simply keeping their existing workflows working during the pandemic.

This video from SMPTE’s New York section looks at how people moved production to the cloud and how they intend to keep it there. The first talk, from WarnerMedia’s Greg Anderson, discusses the engineering skills needed to be up to the task, concluding that there are more areas of knowledge in play than any one engineer can bring to the table, from foundational elements such as security, virtualisation and networking to DevOps skills like continuous integration and deployment (CI/CD), Active Directory and databases.

The good news is that, whichever of the three levels of engineer that Greg introduces you are at, from beginner to expert, the entry points are pretty easy to access to start your journey and upskilling. Within the company, Greg says that leaders can help accelerate the transition to cloud by allowing teams a development/PoC account which provides a ‘modest’ allowance each month for experimentation, learning and proving ideas. Not only does that give engineers good exposure to cloud skills, it gives managers experience in modelling, monitoring and analysing costs.

Greg finishes by talking through their work implementing a cloud workflow for HBO Max, which is currently on a private cloud and on the way to the public cloud. The current system provides for 300 concurrent users doing edit, design, engineering and QC workflows with asset management and ingest. They are looking to the public cloud to consolidate real estate and standardise the tech stack, amongst many other drivers outlined by Greg.

Scott Bounds, Architect at Microsoft Azure, talks about content creation in the cloud. The objectives for Azure are to allow worldwide collaboration, speed up time to market, allow scaling of content creation and bring improvements in the security, reliability and accessibility of data.

This starts for many by using hybrid workflows rather than a full switch to the cloud. After all, Scott says that rough-cut editing, motion graphics and VFX are all fairly easy to implement in the cloud, whereas colour grading, online and finishing are, for most companies, still best kept on-prem. Scott talks about implementing workstations in the cloud, allowing GPU-powered workstations to be accessed using the remote KVM technology PCoIP. This type of workflow can be automated using Azure scripting and Terraform, along the lines sketched below.
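The talk doesn’t show the scripts themselves, so purely as an illustration, here is a minimal sketch of driving the Azure CLI from Python to stand up a GPU workstation. The resource group, VM name, image and size are hypothetical placeholders, not values from the talk:

```python
# Hypothetical sketch of scripted workstation provisioning by driving
# the Azure CLI from Python. All resource names and the VM size are
# placeholders, not from the talk. NV-series sizes carry the GPUs
# used for remote editing/VFX hosts; the CLI prompts for an admin
# password for Windows images.
import subprocess

def create_gpu_workstation(resource_group: str, name: str) -> None:
    subprocess.run([
        "az", "vm", "create",
        "--resource-group", resource_group,
        "--name", name,
        "--image", "Win2019Datacenter",   # Windows editing host
        "--size", "Standard_NV12s_v3",    # assumed GPU VM size
        "--admin-username", "editor",
    ], check=True)

create_gpu_workstation("post-production-rg", "edit-suite-01")
```

In practice the same provisioning would more likely live in Terraform for repeatability, as Scott mentions, with a script like this only for ad-hoc experiments.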

John Whitehead is part of the New York Times’ Multimedia Infrastructure Engineering team, which has recently moved its live production to the cloud. Much of the live output of the NYT is events programming, such as covering press conferences. John introduces their internet-centric microservices architecture, which was already being worked on before the pandemic started.

The standard workflow was to have a stream coming into the MCR, which would then get routed to an Elemental encoder for sending into the cloud and distributed with Fastly. To be production-friendly, they had created some simple-to-use web frontends for routing. For full-time remote production, John explains, they wanted to improve their production quality by adding a vision mixer, graphics and closed captions. John details the solution they chose, which comprised cloud-first services rather than running Windows in the cloud.

The NYT was pushed into the cloud by Covid, but the move was felt to be low risk and something they were considering anyway. The pandemic forced them to consider that perhaps the technologies they were waiting for had already arrived; in the end they saved on CapEx and saw immediate returns on their investment.

Finishing up the presentations is Anshul Kapoor from Google Cloud, who presents market analysis on the current state of cloud adoption and market conditions. He says that one manifestation of the current crisis is that new live-events content is reduced if not postponed, which is making people look to their archives. Some have not yet started their archiving process, whilst others already have a digital archive. Google and other cloud providers can offer vast scale to process and manage archives, but also machine learning to process, make sense of and make searchable all the content.

The video ends with an extensive Q&A with the presenters.

Watch now!
Speakers

Greg Anderson
Senior Systems Engineer,
WarnerMedia
Scott Bounds
Media Cloud Architect,
Microsoft
John Whitehead
Senior Engineer, Multimedia Infrastructure Engineering,
New York Times
Anshul Kapoor
Business Development,
Google Cloud

Video: AES67/ST 2110-30 over WAN

Dealing with professional audio, it’s difficult to escape AES67, particularly as it’s embedded within the SMPTE ST 2110-30 standard. Now, with remote workflows prevalent, moving AES67 over the internet/WAN is needed more and more. This talk brings the good news that it’s certainly possible, but not without some challenges.

Speaking at the SMPTE technical conference, Nicolas Sturmel from Merging Technologies outlines the work being done within the AES SC-02-12M working group to define the best ways of working to enable easy use of AES67 on the WAN. He starts by outlining the fact that AES67 was written to expect short links on a private network that you completely control, which causes problems when using the WAN/internet with long-distance links on which your bandwidth or choice of protocols can be limited.

To start with, Nicolas urges anyone to check they actually need AES67 over the WAN at all. Only if you need precise timing (for lip sync, for example) with PCM quality and low latencies, from 250ms down to as little as 5 milliseconds, do you really need AES67 instead of other protocols such as ACIP, he explains. The problem is that any ping on the internet, even to somewhere fairly close, can easily take 16 to 40ms for the round trip. A 16ms round trip means you’re guaranteed at least 8ms of one-way delay, but any one packet could be as late as 20ms; this spread is known as the Packet Delay Variation (PDV).
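Turning Nicolas’s ping figures into a latency budget is simple arithmetic; a sketch (mine, not from the talk), assuming symmetrical paths:

```python
# From round-trip pings, estimate the guaranteed one-way delay and
# the Packet Delay Variation (PDV) a receive buffer must absorb.
def one_way_budget(rtt_samples_ms):
    min_one_way = min(rtt_samples_ms) / 2   # best case: half the fastest RTT
    max_one_way = max(rtt_samples_ms) / 2   # assuming symmetrical paths
    pdv = max_one_way - min_one_way
    return min_one_way, max_one_way, pdv

lo, hi, pdv = one_way_budget([16, 22, 31, 40])  # ping times in ms
print(f"one-way {lo:.0f}-{hi:.0f} ms, PDV {pdv:.0f} ms")
# one-way 8-20 ms, PDV 12 ms
```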


Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links, where the delay in one direction may be different from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over WAN can be done and is a way to deliver a service, but using a GPS receiver at each location is a much better solution, only hampered by cost and one’s ability to see enough of the sky.

The internet can lose packets. Given a few hours, the internet will nearly always lose packets. To get around this problem, Nicolas looks at using FEC, whereby you constantly send redundant data. FEC can send up to around 25% extra data so that if any is lost, the extra information can be leveraged to determine the lost values and reconstruct the stream; the idea is sketched below. Whilst this is a solid approach, computing the FEC adds delay, and the extra data being constantly sent adds a fixed uplift to your bandwidth need. For circuits that have very few issues, this can seem wasteful, but a fixed percentage can also be advantageous for circuits where a predictable bitrate is much more important. Nicolas also highlights that RIST, SRT or ST 2022-7 are other methods that can work well. He talks about these at greater length in his talk with Andreas Hildebrand.
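To make the FEC idea concrete, here is a toy sketch (my illustration; real schemes such as row/column RTP FEC are more sophisticated) in which one XOR parity packet per group of four media packets, a 25% overhead, recovers any single lost packet in that group:

```python
# Minimal illustration of the FEC principle: XOR-ing a group of
# packets yields a parity packet; XOR-ing the parity with the
# surviving packets reconstructs a single missing one.
from functools import reduce

def parity(packets):
    """XOR all packets together (equal lengths assumed)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

group = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # 4 media packets
fec = parity(group)                             # 25% bandwidth overhead

lost_index = 2                                  # say packet 2 is lost
survivors = [p for i, p in enumerate(group) if i != lost_index]
recovered = parity(survivors + [fec])           # XOR survivors with parity
assert recovered == group[lost_index]           # the lost packet is rebuilt
```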

Watch now!
Speakers

Nicolas Sturmel
Product Manager, Senior Technologist,
Merging Technologies