Video: Where The Puck Is Going: What’s Next for Esports & Sports Streaming

How is sports streaming changing as the pandemic continues? Esports has an edge over physical sports because players can compete from anywhere, but both benefit from bringing people together in one place and putting the players in front of their fans.

This panel from Streaming Media, moderated by Jeff Jacobs, looks at how producers, publishers, streamers and distributors reacted to 2020 and where they’re positioning themselves to be ahead in 2021. The panel opens by looking at the tools and preferred workflows, since there are so many ways to do remote production. Sam Asfahani from OS Studios explained how they had already adopted some remote workflows to keep costs down, but he has been impressed by the number of innovations released which improve remote production. He explains they have a physical NDI control room where they also use vMix for contribution. The changed workflows during the pandemic have convinced them that the second control room they were planning to build should now be in the cloud.

Aaron Nagler from Cheesehead TV discussed how he’s stopped flying to watch games and instead watches footage synchronised with his co-presenter using LiveX Director. Within a few milliseconds, they see the same footage, so both can present and comment in real time. Intriguingly, Tyler Champley from Poker Central explains that, for them, remote production hasn’t been needed since the tournaments have been cancelled and they use their studio facilities. Their biggest issue is that their players need to be in the same room to play the game, close to each other and without masks.


The panel discusses what will stick after the pandemic. Sam makes the point that where he used to pay $20,000 for a star to travel, stay overnight and be part of the show, the pandemic means sports stars are now happy to be paid $5,000 for two hours on a programme without leaving their house, and the show saves money too. He feels this will continue as an option on an ongoing basis, though the panel notes that technical capability is limited when contributors, even top-dollar talent, have no one on site to help. Tyler says that his studio has been more in demand during Covid, so his team has become better at tear-downs to accommodate multiple uses. And lastly, the panel makes the point that hybrid programme-making models are going to continue.

After some questions from the audience, the panel comments on future strategies. Sean Gardner from Xilinx talks about the need for, and arrival of, newer codecs such as AV1 and LCEVC which can help deliver lower bitrates and/or lower latency. Aaron mentions that he’s seen ways of gamifying streams which he hasn’t used before and which help with monetisation. And Sam leaves us with the thought that game APIs can help create fantastic productions when they’re done well, but he sees an even better future where APIs allow information to be fed back into the game, creating a two-way event between the fans and the game.

Watch now!
Speakers

Moderator: Jeff Jacobs
Executive Vice President & General Manager,
VENN
Aaron Nagler
Co-Founder,
Cheesehead TV
Sam Asfahani
CEO,
OS Studios
Sean Gardner
Snr Manager, Market Development & Strategy, Cloud Video,
Xilinx
Tyler Champley
VP Marketing & Audience Development,
Poker Central

Video: AES67/ST 2110-30 over WAN

When dealing with professional audio, it’s difficult to escape AES67, particularly as it’s embedded within the SMPTE ST 2110-30 standard. Now, with remote workflows prevalent, moving AES67 over the internet/WAN is needed more and more. This talk brings the good news that it’s certainly possible, but not without some challenges.

Speaking at the SMPTE technical conference, Nicolas Sturmel from Merging Technologies outlines the work being done within the AES SC-02-12M working group to define the best ways of working to enable easy use of AES67 on the WAN. He starts by outlining that AES67 was written with short links on a fully controlled private network in mind, which causes problems when using the WAN/internet with long-distance links on which your bandwidth or choice of protocols can be limited.

First of all, Nicolas urges anyone considering AES67 over the WAN to check they actually need it. Only if you need precise timing (for lip sync, for example) with PCM quality and latencies from 250ms down to as little as 5ms do you really need AES67 rather than other protocols such as ACIP, he explains. The problem is that any ping on the internet, even to somewhere fairly close, can easily take 16 to 40ms for the round trip. This means you’re guaranteed at least 8ms of one-way delay, but any one packet could take as long as 20ms; this spread is known as the Packet Delay Variation (PDV).
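
To make that arithmetic concrete, here is a minimal sketch of the calculation, assuming a set of round-trip times in the 16–40ms range Nicolas quotes (the sample values are illustrative, not from the talk):

```python
# Rough sketch of the delay arithmetic described above; the RTT
# samples are illustrative values in the quoted 16-40ms range.
rtt_samples_ms = [16, 18, 22, 31, 40, 17, 25, 38]

min_one_way = min(rtt_samples_ms) / 2   # 8ms: delay you're guaranteed
max_one_way = max(rtt_samples_ms) / 2   # 20ms: worst case for a packet

# The spread a receive buffer must absorb: Packet Delay Variation.
pdv = max_one_way - min_one_way

print(f"guaranteed delay {min_one_way}ms, worst case {max_one_way}ms, "
      f"PDV {pdv}ms")
```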


Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links where the delay in one direction may differ from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over WAN can be done and is a way to deliver a service, but using a GPS receiver at each location is a much better solution, hampered only by cost and one’s ability to see enough of the sky.
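
For illustration, here is the standard IEEE 1588 offset calculation in a minimal Python sketch, showing how path asymmetry corrupts the result; the timestamps are invented for the example:

```python
# Sketch of the standard PTP offset calculation and why WAN
# asymmetry breaks it. Timestamps follow IEEE 1588 naming:
# t1: Sync sent by master, t2: Sync received by slave,
# t3: Delay_Req sent by slave, t4: Delay_Req received by master.

def ptp_offset(t1, t2, t3, t4):
    # Assumes the forward and reverse path delays are equal.
    return ((t2 - t1) - (t4 - t3)) / 2

# Symmetric link: 10ms each way, slave clock actually 5ms ahead.
print(ptp_offset(t1=0, t2=15, t3=20, t4=25))   # -> 5.0ms, correct

# Asymmetric WAN link: 10ms out, 20ms back, same 5ms clock offset.
print(ptp_offset(t1=0, t2=15, t3=20, t4=35))   # -> 0.0ms, wrong by 5ms

# Half of any path asymmetry appears directly as clock error,
# which is why a GPS receiver at each site is preferred.
```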

The internet can lose packets; given a few hours, it nearly always will. To get around this, Nicolas looks at FEC, whereby you constantly send redundant data. FEC can send up to around 25% extra data so that if any is lost, the extra information can be used to determine the lost values and reconstruct the stream. Whilst this is a solid approach, computing the FEC adds delay and the extra data adds a fixed uplift to your bandwidth requirement. For circuits that have very few issues this can seem wasteful, but a fixed percentage can also be advantageous where a predictable bitrate is more important. Nicolas also highlights that RIST, SRT or ST 2022-7 are other methods that can work well; he talks about these at greater length in his talk with Andreas Hildebrand.
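
As a toy illustration of the idea (a one-dimensional XOR parity, far simpler than production FEC schemes), one parity packet per four data packets gives exactly the 25% uplift mentioned:

```python
# Toy one-dimensional XOR FEC: one parity packet per four data
# packets gives the ~25% bandwidth uplift mentioned above.
def make_parity(packets):
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover_single_loss(received, parity):
    # XOR of the parity and the surviving packets rebuilds the
    # single missing packet (marked here with None).
    missing = received.index(None)
    survivors = [p for p in received if p is not None]
    received[missing] = make_parity(survivors + [parity])
    return received

data = [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]
parity = make_parity(data)                    # 1 extra per 4 = 25%
damaged = [b"pkt1", None, b"pkt3", b"pkt4"]
print(recover_single_loss(damaged, parity))   # b"pkt2" reconstructed
```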

Watch now!
Speaker

Nicolas Sturmel
Product Manager, Senior Technologist,
Merging Technologies

Video: Decoder Complexity Aware AV1 Encoding Optimization

AV1’s been famous for very slow encoding but, as we’ve seen from panels like this, AV1 encoding times have dropped into a practical range and it’s starting to gain traction. Zoe Liu, CEO of Visionular, is here to talk at Mile High Video 2020 about how careful use of encoding parameters can deliver faster encodes and smooth decodes, and balance that against codec efficiency.

Zoe starts by outlining the good work that’s been done on the SVT-AV1 encoder, which leaves it ready for deployment, as we heard previously from David Ronca of Facebook. Similarly, the dav1d decoder has recently seen many speed improvements and can now easily decode 24fps on mobiles using between 1.5 and 3 Snapdragon cores, depending on resolution. Power consumption has been measured as higher than AVC decoding but less than HEVC. Further to that, hardware support is arriving in many devices such as TVs.

Zoe then shows ways in which encoding can be sped up by reducing the calculations done which, in turn, also increases decoder speed. Her work has exposed settings that significantly speed up decoding but have very little effect on the compression efficiency of the codec, opening up use cases where decoding was the blocker and a 5% reduction in the ability to compress is a price worth paying. One example cited is ignoring partition sizes of less than 8×8: these small partitions can be numerous and bog down calculations, but their overall contribution to bitrate reduction is very low.
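
As a conceptual sketch of that pruning idea (not Visionular’s actual implementation; the cost function here is a crude stand-in for a real rate-distortion measurement), a recursive partition search that refuses to split below a configurable floor might look like this:

```python
# Conceptual sketch of pruning small partitions in a rate-
# distortion search; the cost model is a stand-in, the pruning
# logic is the point. Raising MIN_PARTITION from 4 to 8 removes
# the deepest (and most numerous) level of the recursion.
MIN_PARTITION = 8  # ignore partitions smaller than 8x8

def rd_cost(block):
    # Stand-in cost: sum of squared deviation from the block mean.
    flat = [px for row in block for px in row]
    mean = sum(flat) / len(flat)
    return sum((px - mean) ** 2 for px in flat)

def split_into_four(block):
    h = len(block) // 2
    return [
        [row[:h] for row in block[:h]], [row[h:] for row in block[:h]],
        [row[:h] for row in block[h:]], [row[h:] for row in block[h:]],
    ]

def search_partition(block):
    size = len(block)
    cost_whole = rd_cost(block)
    if size // 2 < MIN_PARTITION:
        return cost_whole                  # floor reached: don't split
    cost_split = sum(search_partition(q) for q in split_into_four(block))
    return min(cost_whole, cost_split)

# 64x64 block of a simple gradient as demo input.
block = [[(x + y) % 256 for x in range(64)] for y in range(64)]
print(search_partition(block))
```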

All of these techniques are brought together under the heading of Decoder Complexity Aware AV1 Encoding Optimization which, Zoe explains, can result in encoding at over twice real-time on an Intel i5. Zoe concludes that this creates a great opportunity to apply AV1 to VOD use cases.

Watch now!
Speaker

Zoe Liu
CEO,
Visionular

Video: LCEVC, The Compression Enhancement Standard

MPEG released three codecs last year: VVC, LCEVC and EVC. Which one was unlike the others? LCEVC is the only enhancement codec, working in tandem with a second codec running underneath. Each of last year’s MPEG codecs addressed specific needs: VVC aims at comprehensive bitrate savings, while EVC aims to push encoding further whilst offering a patent-free baseline profile.

In this talk, we hear from Guido Meardi of V-Nova, who explains why LCEVC is needed and how it works. LCEVC was made, Guido explains, to cater to an increasingly crowded network environment with more and more devices sending and receiving video, both residential and enterprise. LCEVC helps by reducing the bitrate needed for a given quality level but, crucially, also reduces the computation needed to achieve good quality video, which benefits not only IoT and embedded devices but also general computing.

LCEVC uses a ‘base codec’, which is any other codec, often AVC or HEVC, running at a lower resolution than the source video. By using this hybrid technique, LCEVC aims to get the best out of the base codec: running the base encode at a quarter resolution allows it to be done on low-power hardware. LCEVC then reconstructs the full-resolution picture with two enhancement layers and a relatively simple super-resolution upsample. This is all achieved with a simple toolset, and all of the LCEVC computation can be done on CPU, GPU or other types of compute; it’s not bound to hardware acceleration.
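
A highly simplified sketch of that hybrid structure follows, assuming identity functions in place of a real AVC/HEVC base codec and collapsing LCEVC’s two enhancement layers into a single full-resolution residual for brevity:

```python
import numpy as np

def downsample(frame):
    # Quarter resolution: average each 2x2 block of pixels.
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(frame):
    # Nearest-neighbour stand-in for LCEVC's real upsampler.
    return frame.repeat(2, axis=0).repeat(2, axis=1)

def lcevc_encode(frame, base_encode, base_decode):
    low = downsample(frame)
    base_bitstream = base_encode(low)      # any base codec: AVC, HEVC...
    reconstructed = upsample(base_decode(base_bitstream))
    residual = frame - reconstructed       # the enhancement data
    return base_bitstream, residual

def lcevc_decode(base_bitstream, residual, base_decode):
    return upsample(base_decode(base_bitstream)) + residual

# Identity "codec" stand-ins, just to show the data flow.
frame = np.random.rand(64, 64)
bits, enh = lcevc_encode(frame, base_encode=lambda f: f,
                         base_decode=lambda b: b)
out = lcevc_decode(bits, enh, base_decode=lambda b: b)
print(np.allclose(out, frame))   # lossless with an identity base codec
```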

Guido presents a number of results from tests against a whole range of codecs, from VVC to AV1 to plain old AVC, carried out by a number of people including Jan Ozer. All of these tests point to the ability of LCEVC to extend the bandwidth savings of existing codecs, new and old.

Guido shows an example of a video comprising only edges (apart from mid-grey) and says that LCEVC encodes this not only better than HEVC but with an algorithm two orders of magnitude less complex. We then see a pure upsample compared with an LCEVC encode. Upsampling alone can look good, but it can’t restore information; when there are small textual elements, the benefit of an enhancement layer bringing those back into the upsampled video is clear.

On the decode side, Guido presents tests showing that decoding is also quicker by at least two times, if not more. Because most of the decoding work lies in the base layer, that part is still done with hardware acceleration (for AVC, HEVC and other codecs, depending on the platform), so battery life isn’t impacted.

Watch now!
Speaker

Guido Meardi
CEO & Co-Founder,
V-Nova