Video: Esports Production During COVID

Esports continues to harness the best of the IT and broadcast industries to bring large-scale events to half a billion people annually. Naturally, the way this is done has changed with the pandemic, but the 10% annual growth remains on track. The esports market is still maturing and, while it does, the industry is innovating with the best technology to bring the highest quality video to viewers and to drive engagement. Within the broadcast industry, vendors are working hard to understand how best to serve this market segment, which is very happy to adopt high-quality, low-latency solutions, while broadcasters are asking whether the content is right for them.

Tackling all of these questions is a panel of experts brought together by SMPTE’s Washington DC section, including Christopher Keath from Blizzard Entertainment, Mark Alston from EA, Scott Adametz from Riot Games, Richard Goldsmith from Deloitte and, speaking in January 2021 while he worked for Twitch, Jonas Bengtson.

Right off the bat, Michael introduces the esports market. With 2.9 billion people playing games globally and 10% growth year-on-year, he says that it’s still a relatively immature market, and then outlines some notable trends. Firstly, there is a push to grow into a mainstream audience. To its benefit, esports has a large and highly loyal fanbase, but growth outside of this demographic is still difficult. In this talk and others, we’ve heard of the different types of accompanying, secondary programmes aimed more at those who are interested enough to want a summary and watch a story being told, but not interested in watching the blow-by-blow 8-hour tournament.

Another trend outlined by Michael is data sharing. There are many stats available, both in terms of the play itself, similar to traditional sports’ ‘percentage possession’ stats, and factual data which can trigger graphics such as names, affiliations and locations. Secondary data processing, just like in traditional sports, is also a big revenue opportunity, so the market, explains Michael, is still working on bigger and better ways to share data for mutual benefit. More information on Deloitte’s view of the market is in this article, with a different perspective in this global esports market report.

You can watch with either the Speaker view or the Gallery view.

The panel discusses the different angle that esports has taken on publishing, with many young producers only knowing the free software ‘OBS‘, underlined by Scott who says esports can still be scrappy in some places, bringing together unsynchronised video sources in a ‘democratised’ production which has both benefits and downsides. Another difference within esports is that many viewers have played the games, often extensively. They therefore know exactly what the games look like, so watching them streamed can feel a very different experience after going through, potentially, multiple stages of encoding. The panellists all spend a lot of time tuning encoders for different games to maintain the look as well as possible.

Christopher Keath explains what observers are. Effectively, these are the in-game camera operators who talk to the head observer, who co-ordinates them and has a simple switcher to make some of them available to the production. This leads to a discussion of how best to bring the observers’ video into the programmes during the pandemic. Riot has kitted out the PCs in observers’ homes to bring them up to spec and allow them to stream out, whereas EA has moved the observer PCs into their studio, backed by hefty internet links.

Jonas points out that Twitch brings tens of thousands of streams to the internet constantly and outlines that the Twitch angle on streaming is often different to the ‘esports’ angle of big events: rather, streams are personality driven. The proliferation of streaming onto Twitch, other similar services and within esports itself has driven GPU manufacturers, Jonas continues, to include dedicated streaming functionality on their GPUs to stop encoding detracting from in-game performance. During the pandemic, Twitch has seen a big increase in social games, where interaction is key, rather than team-based competition games.

You can watch with either the Speaker view or the Gallery view.

Scott talks about Riot’s global network backbone, which saw 3.2 petabytes of data flow – just for production traffic – during the League of Legends Worlds event, which they produced in 19 different languages working between Berlin, LA and Shanghai. For him, the pandemic brought a change in the studio, where everything was rendered in real time in the Unreal game engine. This allowed them to use augmented reality and have a much more flexible studio which looked better than the standard ‘VR studios’. He suggests they are likely to keep using this technology.

Agreeing with this while advocating a hybrid approach, Christopher says that the reflexes of the gamers are amazing and so there really isn’t a replacement for having them playing side-by-side on a stage. On top of that, you can then unite the excitement of the crowd with lights, smoke and pyrotechnics, so in-person production will still have a place for some programmes, but cloud production remains a powerful tool. Mark agrees and adds that EA are exploring the ways in which remote working can improve work-life balance.

The panel concludes by answering questions touching on the relative lack of esports on US linear TV compared to Asia and elsewhere, explaining the franchise/league structures, discussing the vast range of technology-focused jobs in the sector, the unique opportunities for fan engagement, co-streaming and the impact of 5G.

Watch now!
Speakers

Mark Alston
Technical Production Manager
Electronic Arts (EA)
Christopher Keath
Broadcast Systems Architect
Blizzard Entertainment
Jonas Bengtson
Senior Engineering Manager, Discord
Formerly, Director at Twitch
Scott Adametz
Senior Manager, Esports Engineering,
Riot Games
Richard Goldsmith
Manager,
Deloitte Consulting

SRT – How the hot new UDP video protocol actually works under the hood

It’s been a great year at The Broadcast Knowledge growing to over four thousand followers on social media and packing in 250 new articles. So what better time to look back at 2020’s most popular articles as we head into the new year?

It’s fair to say that SRT has seen a lot of interest this year. This was always going to be the case as top-tier broadcasters are now adopting a ‘code as infrastructure’ approach, whereby transmission chains, post-production and live-production workflows are generated via APIs in the cloud, ready for temporary or permanent use. Seen before as the perfect place to put your streaming service, the cloud is increasingly viewed as a viable option for nearly all parts of the production chain.

Getting video in and out of the cloud can be done without SRT, but SRT is a great option as it seamlessly corrects for packets which can get lost en route. How it does this is the topic of this talk from Alex Converse from Twitch. In the original article on this site, one of the highest-ranking this year, it’s also pitched as an RTMP replacement.
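The heart of that loss recovery is an ARQ scheme: the receiver watches sequence numbers, spots gaps, and sends negative acknowledgements (NAKs) so the sender can retransmit from its buffer before the playback deadline. The toy class below sketches only the receiver-side gap detection; it is an illustration of the idea, not the real libsrt API or wire format.

```python
# Toy sketch of SRT-style loss recovery (ARQ) on the receiver side.
# Illustrative only: real SRT adds timers, periodic NAKs and a latency window.

class SrtLikeReceiver:
    def __init__(self):
        self.expected = 0      # next sequence number due for in-order delivery
        self.received = set()  # packets buffered, possibly out of order
        self.naks = []         # sequence numbers we asked to be retransmitted

    def on_packet(self, seq):
        """Accept a packet; NAK any gap between 'expected' and 'seq'."""
        if seq > self.expected:
            for m in range(self.expected, seq):
                if m not in self.received and m not in self.naks:
                    self.naks.append(m)  # request retransmission of m
        self.received.add(seq)
        # Advance the in-order delivery point as far as possible.
        while self.expected in self.received:
            self.expected += 1


rx = SrtLikeReceiver()
for seq in [0, 1, 4, 2, 5, 3]:  # packets 2 and 3 arrive late (retransmitted)
    rx.on_packet(seq)

print(rx.naks)      # [2, 3] – the gaps that triggered retransmission requests
print(rx.expected)  # 6 – packets 0-5 all delivered in order
```

The fixed receive buffer (SRT's configurable latency) is what bounds how long the receiver will wait for those retransmissions before giving up, which is why SRT's latency is predictable rather than unbounded like TCP's.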

RTMP is still heavily used around the world and like many established technologies, there’s an element of ‘better the devil you know’ mixed in with a reality that much equipment out there will never be updated to do anything else. However, new equipment is being delivered with technologies such as SRT which means that getting from your encoder to the cloud, can now be done with less latency, with better reliability and with a wider choice of codecs than RTMP.

SRT, along with RIST, is helping transform the broadcast industry. To learn more, watch Alex’s video and then look at our other articles and videos on the topic.

Speaker

Alex Converse
Streaming Video Software Engineer,
Twitch

Video: AV1 Commercial Readiness Panel

With two years of development and deployments under its belt, AV1 is still emerging onto the codec scene. That’s not to say that it isn’t used billions of times a year, but compared to the incumbents, there’s still some distance to go. AV1 has been known as very slow to encode and computationally impractical, but today’s panel is here to say that’s old news: AV1 is now a real-time codec.

Brought together by Jill Boyce of Intel, we hear from Amazon, Facebook, Google, Twitch, Netflix and Tencent in this panel. Intel and Netflix have been collaborating for two years on SVT-AV1, an encoder and decoder framework. The SVT-AV1 encoder’s goal is to be high-performance and scalable, using parallelisation to achieve this aim.

Yueshi Shen from Amazon and Twitch is first to present, explaining that, for them, AV1 is a key technology in the 5G era. They have put together a 1440p, 120fps gaming demo which has been enabled by AV1. They feel that this resolution and framerate will be a critical feature for Twitch in the next two years as computer games increasingly extend beyond typical broadcast boundaries. Another key goal is achieving an end-to-end latency of 1.5 seconds which, he says, will partly be achieved using AV1. His company has been working with SoC vendors to accelerate the adoption of AV1 decoders, as their proliferation is key to a successful transition to AV1 across the board. Simultaneously, AWS has been adding AV1 capability to MediaConvert and is planning to continue AV1 integration in other turnkey content solutions.

David Ronca from Facebook says that AV1 gives them the opportunity to reduce video egress bandwidth whilst also helping increase quality. For them, SVT-AV1 has brought using AV1 into the practical domain and they are able to run AV1 payloads in production as well as launch a large-scale decoder test across a large set of mobile devices.

Matt Frost represents Google Chrome and Android’s point of view on AV1. An early adopter, Google has been streaming partly using AV1 since 2018 at resolutions small and large, and has recently added support in Duo, its Android video-conferencing application. As with all such services, the pandemic has shown how important they can be and how important it is that they can scale. The move to AV1 streaming has had favourable results, which is the start of the return on Google’s investment in the technology.

Google’s involvement with the Alliance for Open Media (AOM), along with the other founding companies, was born out of a belief that in order to achieve the scales needed for video applications, the only sensible future was with cheap-to-deploy codecs, so it made a lot of sense to invest time in the royalty-free AV1.

Andrey Norkin from Netflix explains that they believe AV1 will bring a better experience to their members. Netflix has been using AV1 in streaming since February 2020 on Android devices using a software decoder. This has allowed them to get better quality at lower bitrates than VP9, and they are testing AV1 on other platforms. Intent on using only 10-bit encodes across all devices, Andrey explains that this mode gives the best efficiency. As well as being a founding member of AoM, Netflix has also developed AVIF, an image format based on AV1. According to Andrey, they see better performance from it than from most other formats out there. As AVIF works better with text on pictures than other formats, Netflix intends to use it in their UI.

Tencent’s Shan Liu explains that they are part of the AoM because video compression is key for most Tencent businesses in their vast empire. Tencent Cloud has already launched an AV1 transcoding service and supports AV1 in VoD.

The panel discusses low-latency use of AV1, with Dave Ronca explaining that, with the performance improvements of the encoders and decoders, alongside the ability to tune the decode speed of AV1 by turning certain tools on and off, real-time AV1 is now possible. Amazon is paying attention to low-end, sub-$300 handsets, according to Yueshi, as they believe this is where most 5G growth will occur, so he cites recent tests showing AV1 decoding using only 3.5 cores on a mobile SoC as encouraging, since it’s standard to have 8 or more. They have now moved on to researching battery life.

The panel finishes with a Q&A touching on encoding speed, the VVC and LCEVC codecs, the Sisvel AV1 patent pool, the next ramp-up in deployments and the roadmap for SVT-AV1.

Watch now!
Please note: After free registration, this video is located towards the bottom of the page
Speakers

Yueshi Shen
Principal Engineer
AWS & Twitch
David Ronca
Video Infrastructure Team,
Facebook
Matt Frost
Product Manager, Chrome Media Technologies,
Google
Andrey Norkin
Emerging Technologies Team
Netflix
Dr Shan Liu
Chief Scientist & General Manager,
Tencent Media Lab
Jill Boyce
Intel

Video: S-Frame in AV1: Enabling better compression for low latency live streaming.

Streaming is such a success because it manages to deliver video even as your network capacity varies while you are watching. Called ABR (Adaptive Bitrate), this short talk asks how we can allow low-latency streams to nimbly adapt to network conditions whilst keeping the bitrate low in the new AV1 codec.
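In essence, ABR means the player measures its throughput and picks the highest rung of the encoding ladder that safely fits. The sketch below shows that selection logic in its simplest form; the ladder bitrates and the safety margin are illustrative assumptions, not figures from the talk.

```python
# Minimal ABR rung selection: choose the highest ladder bitrate that fits
# under the measured throughput, with a safety margin. Ladder values here
# are hypothetical, for illustration only.

LADDER_KBPS = [400, 1200, 2500, 4500, 8000]  # an example encoding ladder

def pick_rung(throughput_kbps, margin=0.8):
    """Return the highest ladder bitrate within margin * throughput."""
    budget = throughput_kbps * margin
    fitting = [r for r in LADDER_KBPS if r <= budget]
    return fitting[-1] if fitting else LADDER_KBPS[0]

print(pick_rung(6000))  # 4500 – leaves headroom below 6 Mbps
print(pick_rung(1000))  # 400 – only the lowest rung fits under 800 kbps
```

The catch in low-latency streaming is that the player can only switch rungs at a point where the new stream is decodable without prior data, which is exactly the problem S-Frames address.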

Tarek Amara from Twitch explains the idea in AV1 of introducing S-Frames, sometimes called ‘switch frames’, which take the role of the more traditional I or IDR frames. If a frame is marked as an IDR frame, the decoder knows it can start decoding from this frame without worrying that it references data sent earlier. By doing this, you can provide frequent points at which a decoder can enter a stream. IDR frames are typically I frames, which are the highest-bandwidth frames by a large margin. This is because they are a complete rendition of a frame without any of the predictions you find in P and B frames.

Because IDR frames are so large, if you want to keep overall bandwidth down, you should reduce the number of them. However, reducing the number of IDR frames reduces the number of ‘in points’ for the stream, meaning a decoder then has to wait longer before it can start displaying the stream to the viewer. An S-Frame brings the benefit of an IDR in that it still marks a place in the stream where the decoder can join, free of dependencies on previously sent data. But the S-Frame takes up much less space.
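Some back-of-envelope arithmetic shows why this matters. The frame sizes below are made-up illustrative figures, not measurements from the talk, but the shape of the trade-off holds: keeping one join point per second, swapping the IDR for a much smaller S-Frame cuts the average bitrate noticeably.

```python
# Illustrative comparison of average bitrate with IDR vs S-Frame join points.
# All frame sizes are assumed values chosen for the example.

FPS = 30
IDR_BITS = 400_000  # assumed size of a full intra (IDR) frame
P_BITS = 40_000     # assumed size of a predicted (P) frame
S_BITS = 80_000     # assumed size of an S-Frame (inter-coded join point)

def avg_bitrate(join_frame_bits, interval_s):
    """Average bits/second with one join-point frame every interval_s seconds."""
    frames = FPS * interval_s
    total = join_frame_bits + (frames - 1) * P_BITS
    return total * FPS / frames

# Same join frequency (every 1 second), different join-frame types:
print(avg_bitrate(IDR_BITS, 1))  # 1560000.0 bits/s (~1.56 Mbps) with IDRs
print(avg_bitrate(S_BITS, 1))    # 1240000.0 bits/s (~1.24 Mbps) with S-Frames
```

With these assumed numbers, the S-Frame stream offers the same one-second switching granularity for roughly 20% less bandwidth, which is exactly the low-latency ABR win the talk describes.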

Tarek looks at how an S-Frame is created, the parameters it needs to obey, and explains how the frames are signalled. To finish off, he presents test results showing the bitrate improvements achieved.
Watch now!
Speaker

Tarek Amara
Engineering Manager, Video Encoding,
Twitch