Video: Preparing for 5G Video Streaming

Will streaming really be any better with 5G? What problems won’t 5G solve? These are just a couple of the questions tackled in this panel from the Streaming Video Alliance. 5G improves on 4G in so many ways that it can be hard to articulate clearly which improvements matter most for a given use case. In this webinar, the use case is clear: streaming to the consumer.

Moderating the session, Dom Robinson kicks off the conversation by asking the panellists to dig below the hype and talk about what 5G means for streaming right now. Brian Stevenson is first up, explaining that low-band 5G is really useful as it allows operators to roll out 5G offerings with the spectrum they already have and, given its low frequency, get a decent propagation distance. Even in the low frequencies, 5G can still give a 20% improvement in bandwidth. Whilst this is a good start, he continues, it’s the mid-band – where bandwidth improves around 6x – that will really start enabling the applications discussed in the rest of the talk.

Humberto La Roche from Cisco says that, in his opinion, the focus needs to be on low latency. Network-level latency drops by around 10x when working in the millimetre-wave bands, and this matters even for video on demand. He points out, though, that delay accrues within the IP network fabric as well as in the 5G protocol itself and the band it operates in. Adding buffers into the network drives down the cost of that infrastructure but works against latency, so it’s important to look at ways of delivering the overall latency needed at a reasonable cost. We also hear from Sanjay Mishra, who explains that some telcos are already deploying millimetre-wave and focussing on advancing edge compute in high-density areas as their differentiator.

The panel discusses the current technical challenges for operators. Thierry Fautier draws on his experience of watching sports in the US on his mobile devices. The US has zero-rating deals, he explains, where a mobile operator waives data charges when you use a certain service but only delivers the video at SD resolution, around 1.5 Mbps. Whilst the benefits of this are obvious, as people buy new, often larger, phones with better screens, they expect to reap the benefits. At SD, Thierry says, you can’t see the ball in tennis, so 5G will offer the over-the-air bandwidth needed to allow the telcos to offer HD as part of these deals.


The panel discusses the problems seen so far in delivering MBMS – multicast for mobile networks. MBMS has been deployed sporadically around the world in current LTE networks (using eMBMS) but has faced a typical chicken-and-egg problem. Because both cell towers and mobile devices need to support the technology, the upgrade hasn’t been worth the cost for the telcos while eMBMS remains unsupported by many chipsets, including Apple’s. Thierry says there is hope for a 5G version of MBMS since Apple is now part of 3GPP.

CMAF faced a similar chicken-and-egg situation when it was finalised: there was hesitance in using it because Apple didn’t support it. Now, with iOS 14 supporting HLS in CMAF, there is much more interest in deploying such services. This is just as well, cautions Thierry, as all the talk of reduced latency in 5G or in the network itself won’t solve the main source of streaming latency, which sits at the application layer. If services don’t move from standard HLS/DASH to LL-HLS and LL-DASH/CMAF, the latency improvements lower down the stack will convey only minimal benefit to the viewer: with, say, 6-second segments and three segments buffered before playback, latency sits near 18 seconds however fast the underlying network becomes.

Sanjay discusses coverage and penetration, which he sees as perennial problems: “All cell towers are not created equal.” The challenge of how far and how wide coverage extends will remain.

The panel finishes by looking at what’s to come, suggesting more ‘federations’ of companies working together, both commercially and technically, to deliver video to users in better ways. Thierry sums up the near future as providing higher-quality experiences, making in-stadia experiences great and enabling immersive video.

Watch now!
Speakers

Brian Stevenson
SME,
Streaming Video Alliance
Humberto La Roche
Principal Engineer,
Cisco
Sanjay Mishra
Associate Fellow,
Verizon
Thierry Fautier
President-Chair, Ultra HD Forum
VP Video Strategy,
Harmonic
Moderator: Dom Robinson
Co-Founder, Director, and Creative Firestarter
id3as

Video: LL-HLS Discussion with THEO, Wowza & Fastly

Roundtable discussion with Fastly, THEO and Wowza

iOS 14 has finally started to hit devices, and with it LL-HLS is now available on millions of them. Low-Latency HLS is Apple’s latest evolution of HLS, a streaming protocol which has been widely used for over a decade. HLS’s typical latency has gradually come down from 60 seconds to between 6 and 15 seconds today. Plenty of companies want to bring that down further, and LL-HLS is Apple’s answer for those who want to operate at around 2-4 seconds of total latency, which matches or beats traditional broadcast.

LL-HLS was introduced last year to a rocky reception. It came after a community-driven low-latency scheme called LHLS and after MPEG DASH announced CMAF’s ability to hit the same 2-4 second window. Famously, this original context, along with the technical questions over the new proposal, was summed up well in Phil Cluff’s blog post, which was soon followed by a series of talks trying to make sense of LL-HLS ahead of implementation: the Apple video introducing LL-HLS in its first form, and the reactions from Al Shenker of CBS Interactive, Marina Kalkanis of M2A Media and Akamai’s Will Law, the last of which also nicely sums up the other two contenders. Apple has since changed some of the spec in response to its own further research and external feedback; the changes were received positively and are summed up in THEO CTO Pieter-Jan Speelmans’ recent webinar.

In this panel, Pieter-Jan is joined by Chris Buckley from Fastly Inc. and Wowza’s Jamie Sherry to discuss pressing LL-HLS into action. Moderator Alison Kolodny hosts the talk, which covers a wide variety of points.

“Wide adoption” is seen as the day-one benefit. If you support LL-HLS, you know you can reach a large number of iPads, iPhones and Macs. Apple typically sees a high percentage of its userbase upgrade swiftly, easily passing 75% of devices updated within four months of a release. The panel then discusses how implementation has become easier since the protocol changed: the use of HTTP/2’s push technology was dropped, which would have made typical CDN techniques, such as hosting the playlists separately from the media, impossible. Overall, CDN implementation has become more practical since, with preload hints, a CDN can hold many, many incoming connections, all waiting for a certain chunk, and collapse them down to a single link to the origin.
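To illustrate that collapsing, here’s a minimal TypeScript sketch of request coalescing at an edge server; the names are hypothetical and this is a sketch of the idea, not any particular CDN’s implementation:

```typescript
// Sketch: collapse many concurrent requests for the same preload-hinted
// part into a single origin fetch. Names are illustrative.
const inFlight = new Map<string, Promise<ArrayBuffer>>();

async function fetchPart(originUrl: string): Promise<ArrayBuffer> {
  // If another client already triggered this origin fetch, share it.
  const existing = inFlight.get(originUrl);
  if (existing) return existing;

  // Otherwise open exactly one connection to the origin for this part.
  const request = fetch(originUrl)
    .then((res) => {
      if (!res.ok) throw new Error(`Origin returned ${res.status}`);
      return res.arrayBuffer();
    })
    .finally(() => inFlight.delete(originUrl)); // allow later re-fetches

  inFlight.set(originUrl, request);
  return request;
}

// Thousands of waiting clients, one origin connection:
// await Promise.all(clients.map(() => fetchPart("https://origin.example/seg12.part3.mp4")));
```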

One aspect of implementation which has improved, we hear from Pieter-Jan, is building effective Adaptive Bit Rate (ABR) switching. With low-latency protocols, you are so close to live that it becomes very hard to download a chunk of video ahead of time and measure the download speed to see if it arrived faster than real time; if it did, you’d infer there was spare bit rate. LL-HLS’s use of rendition reports, however, makes that a lot easier. Pieter-Jan also points out that SSAI (server-side ad insertion) is easier with rendition reports.
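As a rough sketch of why that classic measurement breaks down near the live edge, consider the naive estimator below; the names and the 0.8 threshold are illustrative, not from the talk:

```typescript
// Naive ABR estimate: compare how long a part took to download with how
// much media it contains. Near the live edge the part is delivered as the
// encoder produces it, so the measurement converges on the encoded bitrate
// and reveals nothing about spare capacity.
interface Download {
  bytes: number;        // size of the downloaded part
  durationSec: number;  // media duration of the part
  wallClockSec: number; // time the download actually took
}

function estimateThroughputBps(d: Download): number | null {
  // Download time close to media duration => the link sat idle waiting on
  // the encoder rather than being saturated; the sample is unusable.
  if (d.wallClockSec > 0.8 * d.durationSec) return null;

  return (d.bytes * 8) / d.wallClockSec; // compare with rendition bitrates
}
```

Rendition reports sidestep part of this: the playlist a player is already fetching advertises the most recent segment and part of the other renditions, so it can jump straight to the live edge of a new rendition without a speculative playlist round trip.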

The rest of the discussion covers device support for LL-HLS, subtitle workflows, the benefits of the recommended TLS 1.3, and low-latency business cases.

Watch now!
The webinar is free to watch, on demand, in exchange for your email details. The link is emailed to you immediately.
Speaker

Chris Buckley
Senior Sales Engineer,
Fastly Inc.
Pieter-Jan Speelmans
CTO,
THEO Technologies
Jamie Sherry
Senior Product Manager,
Wowza
Moderator: Alison Kolodny
Senior Product Manager of Media Services,
Frame.io

Video: Doing Better Congestion Control with BBR & Copa

In networking there are many possible bottlenecks, but the most pervasive is congestion caused by links operating at capacity and saturating their buffers. Full buffers can’t absorb bursts in the incoming traffic, increasing the chance of dropped packets, and the extra latency added by each full buffer along the path quickly adds up, degrading the quality of the connection even for the data that does make it through. As a rough illustration, a full 1 MB buffer on a 10 Mbps link holds 0.8 seconds of queued data, so every packet crossing it waits that long.

It’s no surprise, then, that a lot of work goes into finding the best congestion-control algorithms, which allow data senders to back off when a link stops responding well. This talk, from Facebook engineer Nitin Garg, examines old and new approaches to keeping streams fast and responsive with a 4-million-data-point test of three contenders: Cubic, BBR and Copa.


Nitin starts by introducing what we mean by ‘congestion’ and how and why it occurs. The simple example is that your computer can typically send data at up to 1 Gbps, yet your uplink to the internet is likely slower than that. Congestion control is the feedback mechanism that lets your computer realise that sending at 1 Gbps isn’t working and throttle back to a speed which fits within your upload bandwidth. The same is true further down the pipe: if you have a 50 Mbps uplink to the internet but are sending to a server which only has 10 Mbps spare, your computer needs to throttle not just below 50 Mbps but below 10 Mbps.
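As a toy illustration of that feedback mechanism, here is a minimal additive-increase/multiplicative-decrease (AIMD) loop in TypeScript; this is the textbook idea behind loss-based congestion control, a simplification rather than what Cubic, BBR or Copa actually do:

```typescript
// Toy AIMD loop: probe upwards while the network delivers, back off
// sharply when it signals congestion via packet loss.
let congestionWindowPkts = 10; // packets allowed in flight

function onAckReceived(): void {
  // No loss: gently probe for more bandwidth (additive increase,
  // roughly +1 packet per round trip).
  congestionWindowPkts += 1 / congestionWindowPkts;
}

function onPacketLoss(): void {
  // Loss suggests a saturated buffer on the path: halve the window
  // (multiplicative decrease) so queues can drain.
  congestionWindowPkts = Math.max(1, congestionWindowPkts / 2);
}
```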

We then walk through how Cubic, BBR and Copa work, with Nitin explaining the differences. Copa (https://web.mit.edu/copa/), the newest of the three, comes from MIT and has the unique ability to be tuned to your need: throughput or low latency. As discussed above, keeping latency down means minimising buffer occupancy, which stops you aggressively loading up links; latency and throughput sit at opposite ends of a see-saw.
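For reference, the Copa paper captures that see-saw in a single parameter; as a sketch in the paper’s notation, where d_q is the measured queueing delay:

```latex
% Copa's target sending rate: \delta is the throughput/latency knob.
\lambda_{\text{target}} = \frac{1}{\delta \, d_q}
% Larger \delta: back off at the first sign of queueing (lower latency).
% Smaller \delta: tolerate fuller buffers in exchange for throughput.
```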

Nitin’s test ran on mobile phones using Facebook’s Live streaming app on Android and iOS, streaming live with ABR so that the app itself adapts to stream at as high a quality as possible while reducing the bitrate when needed. Testing across global markets, they measured round-trip times and the amount of data delivered. Nitin walks through the results for both latency and throughput and shows that when Copa is optimised for latency, it leads the other two protocols in latency reduction under the worst conditions.

Watch now!
Speakers

Nitin Garg
Software Engineer, Videos Infra,
Facebook

Video: Low Latency Live Streaming At Scale

Low latency can be a differentiator for a live streaming service, or just a way to ensure you’re not beaten to the punch by social media or broadcast TV. Either way, it’s seen as increasingly important for live streaming to be punctual, breaking from the past where latencies of thirty to sixty seconds were not uncommon. As the industry has matured and connectivity has gained enough capacity for video, simply getting motion on the screen isn’t enough anymore.

Steve Heffernan from MUX takes us through the thinking on how we can deliver low-latency video both into the cloud and out to the viewers. He starts by talking about the use cases for sub-second latency – anything involving interaction or conversation – and how they differ from low-latency streaming, which is one-to-many, potentially very large-scale, distribution. If you’re on a video call with ten people, you need sub-second latency or the conversation will suffer. But when distributing to thousands or millions of people, the rebuffering risk that comes with operating sub-second isn’t worth it, and usually 3 seconds is perfectly fine.

Steve talks through the low-latency delivery chain, starting with the camera and encoder before looking at the contribution protocol. RTMP is still often the only option, but increasingly it’s possible to use WebRTC or SRT, the latter usually being best for streaming contribution. Once the video has hit the streaming infrastructure, be that in the cloud or otherwise, it’s time to look at how to build the manifest and send the video out. Steve talks us through the options: the community-driven Low-Latency HLS (LHLS), CMAF DASH and Apple’s LL-HLS. Do note that since the talk, Apple has removed the requirement for HTTP/2 push.
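With push gone, LL-HLS players instead use blocking playlist reloads: the client asks for a playlist version that doesn’t exist yet and the server holds the request until it does. Here is a minimal TypeScript sketch using the spec’s _HLS_msn and _HLS_part query directives; the URL and function name are illustrative:

```typescript
// Blocking playlist reload: a server advertising CAN-BLOCK-RELOAD=YES
// holds this request until the named Media Sequence Number and part exist.
async function nextPlaylist(
  playlistUrl: string,
  msn: number,  // _HLS_msn: the next Media Sequence Number we need
  part: number  // _HLS_part: the next partial segment within that MSN
): Promise<string> {
  const url = `${playlistUrl}?_HLS_msn=${msn}&_HLS_part=${part}`;
  const res = await fetch(url); // resolves once the part is published
  return res.text();            // updated playlist, with new preload hints
}
```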

The talk finishes with Steve looking at the players. If you don’t get the player logic right, you can start off much further behind live than necessary. This is becoming less of a problem as players start to ‘bend time’, speeding up and slowing down playback to bring their latency within a target range, but it only underlines the importance of the quality of your player implementation.
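A minimal sketch of that ‘time bending’ in TypeScript, assuming the player can measure its own live latency; the target and thresholds here are illustrative:

```typescript
// Nudge playbackRate to hold measured latency inside a target window,
// avoiding visible skips or stalls.
function adjustPlaybackRate(video: HTMLVideoElement, latencySec: number): void {
  const TARGET_SEC = 3.0;
  const TOLERANCE_SEC = 0.5;

  if (latencySec > TARGET_SEC + TOLERANCE_SEC) {
    video.playbackRate = 1.05; // slightly fast: reel the latency back in
  } else if (latencySec < TARGET_SEC - TOLERANCE_SEC) {
    video.playbackRate = 0.95; // slightly slow: rebuild safety buffer
  } else {
    video.playbackRate = 1.0;  // inside the window: play normally
  }
}
```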

Watch now!
Speaker

Steve Heffernan
Founder & Head of Product, MUX
Creator of video.js