Video: Mobile and Wireless Layer 2 – Satellite/ATSC30/M-ABR/5G/LTE-B

Wireless internet is here to stay, and as it improves it opens new opportunities for streaming and broadcasting. With SpaceX delivering between 20 and 40ms of latency, we see that even satellite can be relevant for low-latency streaming. Indeed, radio frequency (RF) delivery is the focus of this talk, which discusses how 5G, LTE, 4G, ATSC and satellite fit into delivering streaming media to everyone.

LTE-B, in the title of this talk, refers to LTE Broadcast, also known as eMBMS (evolved Multimedia Broadcast Multicast Services), delivered over LTE technology. Matt Stagg underlines the importance of LTE-B, saying “Spectrum is finite and you shouldn’t waste it sending unicast”. Using LTE-B, we can achieve a one-to-many push with orchestration on top. Routers do need to support this and UDP transport, but this is a surmountable challenge.
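
To make the one-to-many idea concrete, here is a minimal sketch using plain IP multicast over UDP in Node.js. eMBMS does far more than this (radio bearers, FEC, service announcement), and the group address and port below are arbitrary example values, not anything from the talk:

```typescript
// A minimal sketch of one-to-many UDP delivery using plain IP multicast.
// eMBMS does far more than this, but the principle is the same:
// one send reaches every subscribed receiver.
import dgram from "node:dgram";

const GROUP = "239.255.0.1"; // administratively-scoped multicast group (example)
const PORT = 5004;

// Receiver: join the group; every member host gets each packet once.
const rx = dgram.createSocket({ type: "udp4", reuseAddr: true });
rx.on("message", (chunk) => console.log(`received ${chunk.length} bytes`));
rx.bind(PORT, () => rx.addMembership(GROUP));

// Sender: a single send() fans out via the network to all group members,
// instead of one unicast copy per viewer eating spectrum and capacity.
const tx = dgram.createSocket("udp4");
tx.send(Buffer.from("video chunk"), PORT, GROUP);
```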

Matt explains that BT did a trial of LTE-B with BBC. The major breakthrough was they could ‘immediately’ deliver the output of an EVS direct to the fans in the stadium. For BT, the problem came with hitting critical mass. Matt makes the point that it’s not just sports, Love Island can get the same viewership. But with no support from Apple, the number of compatible devices isn’t high enough.

“Spectrum is finite and you shouldn’t waste it sending unicast”

Matt Stagg

Turning to the rest of the panel, which includes Synamedia’s Mark Myslinski and Jack Arky from Verizon Wireless, Matt says that, in general, bandwidth capacity to the edges in the UK is not a big issue since there is usually dark fibre, but hosting content at the edge alone doesn’t solve the problem because the bottleneck is the RAN (radio access network). 5G has helped us move on beyond that.

Jack from Verizon explains that multi-access edge compute (MEC) is enabled by the low latency of 5G: we need to move as much as is sensible to the edge to keep the delay down. Later in the video, we hear that XR (extended reality) and AR (augmented reality) are two technologies which will likely depend on cloud computation to achieve the level of graphical accuracy necessary. They will, therefore, require a low-latency connection.

For Mark, the most important technology being rolled out is actually ATSC 3.0. Much discussed since NAB 2015, the standard has stabilised and it’s now in use in South Korea and increasingly in the US. ATSC 3.0, as Mark explains, is a complementary, fully IP-based technology that fits alongside 5G. He even talks about how 5G and ATSC 3.0 could co-exist thanks to the open way the standards were created.

The session ends with a Q&A.

Watch now!
Speakers

Mark Myslinski
Broadcast Solutions Manager,
Synamedia
Jack Arky
Senior Engineer, Product Development
Verizon Wireless
Matt Stagg
Director, Mobile Strategy
BT Sport
Dom Robinson
Co-Founder, Director and Creative Firestarter
id3as

Video: Low Latency Live from a Different Vantage Point

Building a low-latency live streaming platform is certainly possible nowadays, but not without challenges and compromises. Traditionally, HLS-style delivery keeps latency high because chunk sizes are between 5 and 10 seconds. Pushing that down to 2 seconds, generally seen as the minimum viable chunk size, can then cause problems estimating bandwidth and thus break ABR.
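
As a back-of-the-envelope illustration (these figures are illustrative, not from the talk), here’s how segment duration dominates glass-to-glass delay when a player buffers the typical three segments:

```typescript
// Rough glass-to-glass latency for segmented HLS delivery, assuming the
// common rule of thumb that players buffer ~3 segments before playing.
// Encode and CDN figures are illustrative, not measurements.
function hlsLatencySeconds(
  segmentSec: number,
  bufferedSegments = 3,
  encodeSec = 1,
  cdnSec = 0.5,
): number {
  return segmentSec * bufferedSegments + encodeSec + cdnSec;
}

for (const seg of [10, 6, 2]) {
  console.log(`${seg}s segments -> ~${hlsLatencySeconds(seg)}s latency`);
}
// 10s segments -> ~31.5s latency
//  6s segments -> ~19.5s latency
//  2s segments -> ~7.5s latency
```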

Tackling these challenges is a host of technologies such as CMAF, Low-Latency HLS (LHLS) and Apple’s LL-HLS, but this talk takes a different approach to deliver streams with only 3-4 seconds of latency.

Michelle Munson from Eluvio explains that, theoretically, you could stream chunks in real-time and the delay would be just the propagation time over the internet. In reality, though, encoding and transcoding delays add up, and the CDN can gradually drift the signal out by as much as 15 seconds. ABR is tricky when delivering chunks in a streamed manner because the standard method of determining available bandwidth, measuring each chunk’s download time, breaks when all chunks arrive in real-time.
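
To see why the estimate collapses, consider the classic bytes-over-time calculation. The numbers below are illustrative, not Eluvio’s:

```typescript
// Why chunk-timing ABR breaks for real-time streamed chunks. The classic
// estimate is bits received divided by time taken. If a 2-second segment
// is streamed as it is produced, the transfer takes ~2 seconds no matter
// how much spare capacity the link has, so the estimate collapses to the
// encoded bitrate and tells you nothing about headroom.
function estimateKbps(segmentBits: number, downloadSec: number): number {
  return segmentBits / downloadSec / 1000;
}

const segmentBits = 2 * 3_000_000; // 2 seconds of 3Mbps video

// Burst download of a complete segment: measures real link capacity.
console.log(estimateKbps(segmentBits, 0.4)); // 15000 kbps
// Real-time streamed delivery: just echoes the encoded bitrate.
console.log(estimateKbps(segmentBits, 2.0)); // 3000 kbps
```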

To tackle this, Michelle introduces the decentralised content fabric which Eluvio has put together. It uses dispersed nodes to hold data, acting in some ways as a CDN, but the trick here is that the nodes work together to share video. Each node can transcode just-in-time and can also create playlists on demand from the distributed metadata in response to client requests. Being able to bring things together dynamically and on the fly removes a lot of latency pinch points from the system.

The result is a system which can deliver content from the encoder to the nodes in around 250ms, with a further 50ms or so for distribution. To make ABR easier, the player works one segment behind live so it always has a whole segment to download as quickly as it can, enabling ABR to work normally.
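
A minimal player-side sketch of that trick might look like the following; the URL scheme and names here are invented for illustration:

```typescript
// Hypothetical sketch: request the newest *complete* segment (one behind
// live) so the download can burst at full link speed, making the usual
// bytes-over-time measurement meaningful again.
async function fetchForAbr(baseUrl: string, liveEdgeSegment: number) {
  const target = liveEdgeSegment - 1; // always a fully-written segment
  const start = performance.now();
  const res = await fetch(`${baseUrl}/seg_${target}.m4s`);
  const body = await res.arrayBuffer();
  const seconds = (performance.now() - start) / 1000;
  const measuredKbps = (body.byteLength * 8) / seconds / 1000;
  return { segment: target, measuredKbps }; // feed into rung selection
}
```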

Michelle finishes by highlighting the results of testing both over time and at scale. The results show that node load stayed low and even in both scenarios, while delivering 3.5 seconds of latency.

Watch now!
Speakers

Michelle Munson
CEO and Founder,
Eluvio

Video: Broadcasting WebRTC over Low Latency Dash

Using sub-second WebRTC with the scalability of CMAF: allowing panelists and presenters to chat in real-time is really important to foster fluid conversation, but broadcasting that out to thousands of people scales more easily with CMAF, based on MPEG DASH. In this talk, Mux’s Dylan Jhaveri (formerly CTO of Crowdcast.io) explains how they combined WebRTC and CMAF to keep latencies low for everyone.

Speaking at the San Francisco VidDev meetup, Dylan explains that the Crowdcast webpage lets you watch a number of participants talking in real-time as a live stream, with live chat down the side of the screen. The chat naturally feeds into the live conversation, so latency needs to be as low for viewers as it is for the on-camera participants. WebRTC is used for the participants as it is one of the very few options that provides reliable sub-second streaming. To keep the interactivity between the chat and the participants, Crowdcast decided to look at ultra-low-latency CMAF, which can deliver between 1 and 5 seconds of latency depending on your risk threshold for rebuffering. So the task became converting a WebRTC call into a low-latency stream that could easily be received by thousands of viewers.

Dylan points out that they were already rendering the WebRTC call in the browser, as that’s how people were using the platform. Therefore, using headless Chrome allowed them to pipe the video from the browser into ffmpeg and create an encode without having to composite individual streams, whilst giving Crowdcast full layout control.

After a few months of tweaking, Dylan and his colleagues had Chrome feeding ffmpeg, which in turn fed a nodejs server delivering CMAF chunks and manifests. In order to scale this, Dylan explains the logic implemented in a CDN to use the nodejs server, running in a docker container, as an origin server. Using HLS they have a 95% cache hit rate and achieve 15 seconds of latency. Tests at the time of the talk, Dylan explains, showed that the CMAF implementation hit 3 seconds of latency and was working as expected.
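
As a rough sketch of this kind of pipeline (the flag values and capture mechanism are assumptions for illustration, not Crowdcast’s actual configuration), ffmpeg can ingest raw frames and emit low-latency DASH/CMAF segments for a node origin to serve:

```typescript
// Sketch: raw frames piped from headless Chrome into ffmpeg, which writes
// low-latency DASH/CMAF segments for the nodejs origin to serve.
import { spawn } from "node:child_process";

const ffmpeg = spawn("ffmpeg", [
  "-f", "rawvideo",                  // uncompressed frames on stdin...
  "-pix_fmt", "rgba", "-s", "1280x720", "-r", "30",
  "-i", "pipe:0",                    // ...piped from the headless browser
  "-c:v", "libx264", "-preset", "veryfast",
  "-g", "60",                        // 2s GOPs: every segment starts on a keyframe
  "-f", "dash",
  "-seg_duration", "2",
  "-streaming", "1", "-ldash", "1",  // chunked, low-latency output
  "public/stream.mpd",
]);

// browserCapture.pipe(ffmpeg.stdin); // capture stream omitted here
ffmpeg.stderr.pipe(process.stderr);   // ffmpeg logs progress to stderr
```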

The talk ends with a Q&A covering how they get the video out of headless Chrome, whether CMAF latency could be improved, and why there are so many docker containers.

Watch now!
Speaker

Dylan Jhaveri
Senior Software Engineer, Mux
Formerly CTO & Co-founder, Crowdcast.io

Video: Where The Puck Is Going: What’s Next for Esports & Sports Streaming

How is sports streaming changing as the pandemic continues? Esports has an edge over physical sports as it allows people to compete from diverse locations, but both physical sports and esports benefit from bringing people into one place and letting the fans see the players.

This panel from Streaming Media, moderated by Jeff Jacobs, looks at how producers, publishers, streamers and distributors reacted to 2020 and where they’re positioning themselves to be ahead in 2021. The panel opens by looking at tools and preferred workflows; there are so many ways to do remote production. Sam Asfahani from OS Studios explained how they had already adopted some remote workflows to keep costs down, but he has been impressed by the number of innovations released which help improve remote production. He explains they have a physical NDI control room where they also use vMix for contribution. The changed workflows during the pandemic have convinced them that the second control room they were planning to build should now be in the cloud.

Aaron Nagler from Cheesehead TV discussed how he’s stopped flying to watch games and instead watches footage synchronised with his co-presenter using LiveX Director. Seeing the same footage to within a few milliseconds, they can both present and comment in real-time. Intriguingly, Tyler Champley from Poker Central explains that, for them, remote production hasn’t been needed since the tournaments have been cancelled and they use their studio facilities. Their biggest issue is that their players need to be in the same room to play the game, close to each other and without masks.

The panel discusses what will stick after the pandemic. Sam makes the point that he’s gone from paying $20,000 for a star to travel, stay overnight and be part of the show, to paying $5,000 for two hours on a programme the star can do without leaving their house, with the show saving money too. He feels this will continue to be an option on an ongoing basis, though the panel notes that technical capability is limited when contributors, even top-dollar talent, have no one else there to help. Tyler says that his studio has been more in demand during Covid, so his team has become better at tear-downs to accommodate multiple uses. And lastly, the panel makes the point that hybrid programme-making models are going to continue.

After some questions from the audience, the panel comments on future strategies. Sean Gardner from Xilinx talks about the need for, and arrival of, newer codecs such as AV1 and LCEVC, which can help deliver lower bitrates and/or lower latency. Aaron mentions that he’s seen ways of gamifying streams which he hasn’t used before, which help with monetisation. And Sam leaves us with the thought that game APIs can help create fantastic productions when they’re done well, but he sees an even better future where APIs allow information to be fed back into the game, creating a two-way event between the fans and the game.

Watch now!
Speakers

Moderator: Jeff Jacobs
Executive Vice President & General Manager,
VENN
Aaron Nagler
Co-Founder,
Cheesehead TV
Sam Asfahani
CEO,
OS Studios
Sean Gardner
Snr Manager, Market Development & Strategy, Cloud Video,
Xilinx
Tyler Champley
VP Marketing & Audience Development,
Poker Central