Video: Layer 4 in the CDN

Caching is a critical element of the streaming video delivery infrastructure, but with the proliferation of streaming services, managing caching has become complex and problematic. Open Caching is an initiative by the Streaming Video Alliance to bring this under control, giving ISPs and service providers a standard way to operate.

By caching objects as close to the viewer as possible, you can reduce round-trip times, which cuts latency and can improve playback. More importantly, moving the point at which content is distributed closer to the customer reduces your bandwidth costs and creates a more efficient delivery chain.
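As a rough illustration of the bandwidth point, only cache misses have to travel back up the delivery chain, so upstream traffic shrinks as the hit ratio of the edge cache rises. The sketch below uses made-up figures purely to show the arithmetic.

```python
# Illustrative sketch only: how an edge cache's hit ratio reduces the traffic
# that must be fetched from the origin or an upstream CDN tier.

def upstream_bandwidth_gbps(peak_demand_gbps: float, cache_hit_ratio: float) -> float:
    """Traffic that still has to come from further up the delivery chain."""
    return peak_demand_gbps * (1.0 - cache_hit_ratio)

peak_demand = 100.0  # Gbps of concurrent viewer demand (assumed figure)
for hit_ratio in (0.0, 0.80, 0.95):
    upstream = upstream_bandwidth_gbps(peak_demand, hit_ratio)
    print(f"hit ratio {hit_ratio:.0%}: {upstream:.0f} Gbps upstream")
```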

This video sees Disney Streaming Services, Viasat and StackPath discussing Open Caching with Jason Thibeault, Executive Director of the Streaming Video Alliance. Eric Klein from Disney explains that one driver for Open Caching comes from content producers, who find it hard to scale delivery of content in a consistent manner across many different networks. Standardising the interfaces will help remove this barrier to scale. Alongside the drive from content producers are the needs of the network operators, who are interested in moving caching onto their own networks, which reduces back-and-forth traffic and helps them cope with peaks.

Dan Newman from Viasat builds on these points, looking at the edge storage project. This extends the original Open Caching concept by moving caching to the very edge of the network – the idea stretches to putting caches directly into the home. One use of this, he explains, is caching UHD content which would otherwise be too big to deliver over lower-bandwidth links.

Josh Chesarek from StackPath says their interest in the Open Caching initiative is to get consistency and interoperability between CDNs. The Open Caching group is looking at creating standard APIs for capacity, configuration and so on. Eric also underlines the interest in interoperability, pointing to the close work the group is doing with the IETF to find better standards on which to base their work.

Looking at the test results, the average bitrate increases by 10% when using Open Caching, and there is also a 20-40% improvement in rebuffer ratio, which shows viewers are seeing an improved experience. Viasat have combined multicast ABR with Open Caching. There's certainly promise behind the work that's ongoing, and the panel finishes by looking towards what's next for the project and for CDN optimisation.

Watch now!
Speakers

Eric Klein
Director, CDN Technology,
Disney+
Dan Newman
Product Manager,
Viasat
Josh Chesarek
VP, Sales Engineering & Support
Stackpath.com
Jason Thibeault
Executive Director, Streaming Video Alliance

Video: Preparing for 5G Video Streaming

Will streaming really be any better with 5G? What problems won't 5G solve? These are just a couple of the questions in this panel from the Streaming Video Alliance. There are so many aspects of 5G which are improvements that it can be very hard to articulate clearly, for a given use case, which are the ones that matter most. In this webinar, the use case is clear: streaming to the consumer.

Moderating the session, Dom Robinson kicks off the conversation by asking the panellists to dig below the hype and talk about what 5G means for streaming right now. Brian Stevenson is first up, explaining that the low-band 5G option is really useful as it allows operators to roll out 5G offerings with the spectrum they already have and, given its low frequency, get a decent propagation distance. In the low frequencies, 5G can still give a 20% improvement in bandwidth. Whilst this is a good start, he continues, it's really delivery in the mid-band – where bandwidth is 6x – that will start enabling the applications discussed in the rest of the talk.

Humberto La Roche from Cisco says that, in his opinion, the focus needs to be on low latency. Latency at the network level drops by around 10x when working in the millimetre-wave bands. This is important even for video on demand. He points out, though, that delay happens within the IP network fabric as well as in the 5G protocol itself and the spectrum it's working on. Adding buffers into the network drives down the cost of that infrastructure, so it's important to look at ways of delivering the overall latency needed at a reasonable cost. We also hear from Sanjay Mishra, who explains that some telcos are already deploying millimetre-wave spectrum and focussing on advancing edge compute in high-density areas as their differentiator.
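To make Humberto's point concrete, the radio link is only one term in the end-to-end delay, so a 10x cut from millimetre wave only pays off if the network-fabric and buffering terms shrink too. The figures in this sketch are illustrative assumptions, not numbers quoted by the panel.

```python
# Rough latency budget showing why cutting only the radio term isn't enough.
# All numbers are illustrative assumptions, not measurements from the panel.

budget_ms = {
    "5G radio link (mmWave)": 2,     # vs. an assumed ~20 ms on low-band
    "IP network fabric":      15,    # routing and queueing across the core
    "in-network buffering":   30,    # buffers traded against infrastructure cost
    "player/application":     2000,  # segment buffering dominates for HLS/DASH
}

total_ms = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:<26}{ms:>6} ms")
print(f"{'total':<26}{total_ms:>6} ms")
```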

The panel discusses the current technical challenges for operators. Thierry Fautier draws on his experience of watching sports in the US on his mobile devices. In the US there are zero-rating deals, he explains, where a mobile operator waives all data charges when you use a certain service but only delivers the video in SD at 1.5 Mbps. Whilst the benefits of this are obvious, as people buy new, often larger phones with better screens, they expect to see the benefit of those screens. At SD, Thierry says, you can't see the ball in tennis, so 5G will offer the over-the-air bandwidth needed to allow the telcos to offer HD as part of these deals.

Preparing for 5G Video Streaming from Streaming Video Alliance on Vimeo.

The panel discusses the problems seen so far in delivering MBMS – multicast for mobile networks. MBMS has been deployed sporadically around the world in current LTE networks (using eMBMS) but has faced a typical chicken and egg problem. Given that both cell towers and mobile devices need to support the technology, it hasn’t been worth the upgrade cost for the telcos given that eMBMS is not yet supported by many chipsets including Apple’s. Thierry says there is hope for a 5G version of MBMS since Apple is now part of the 3GPP.

CMAF faced a similar chicken-and-egg situation when it was finalised: there was hesitance in using it because Apple didn't support it. Now, with iOS 14 supporting HLS in CMAF, there is much more interest in deploying such services. This is just as well, cautions Thierry, as all the talk of reduced latency in 5G or in the network itself won't solve the main problem with streaming latency, which lives at the application layer. If services don't move from standard HLS/DASH to LL-HLS and LL-DASH/CMAF, the latency improvements lower down the stack will bring only minimal benefit to the viewer.
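A back-of-the-envelope sketch of why the application layer dominates: with classic HLS/DASH the player typically buffers a few whole segments before starting, whereas the low-latency variants can start from much smaller CMAF parts. The durations and buffer depths below are common illustrative values, not figures from the talk.

```python
# Illustrative only: latency contributed by the player buffer alone,
# ignoring encode, packaging and CDN delays.

def player_buffer_latency_s(unit_duration_s: float, units_buffered: int) -> float:
    return unit_duration_s * units_buffered

print("Classic HLS, 6 s segments, 3 buffered:", player_buffer_latency_s(6.0, 3), "s")
print("LL-HLS, 0.5 s parts, 3 buffered:      ", player_buffer_latency_s(0.5, 3), "s")
```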

Sanjay discusses coverage and penetration, which will always be a challenge: "All cell towers are not created equal." How far and wide coverage extends will remain the question.

The panel finishes looking at what’s to come and suggests more ‘federations’ of companies working together, both commercially and technically, to deliver video to users in better ways. Thierry sums up the near future as providing higher quality experiences, making in-stadia experiences great and enabling immersive video.

Watch now!
Speakers

Brian Stevenson
SME,
Streaming Video Alliance
Humberto La Roche
Principal Engineer,
Cisco
Sanjay Mishra
Associate Fellow,
Verizon
Thierry Fautier
President-Chair at Ultra HD Forum
VP Video Strategy at Harmonic
Moderator: Dom Robinson
Co-Founder, Director, and Creative Firestarter
id3as

Video: Demystifying Video Delivery Protocols

Let’s face it, there are a lot of streaming protocols out there both for contribution and distribution. Internet ingest in RTMP is being displaced by RIST and SRT, whilst low-latency players such as CMAF and LL-HLS are vying for position as they try to oust HLS and DASH in existing services streaming to the viewer.

This panel, hosted by Jason Thibeault from the Streaming Video Alliance, talks about all of these protocols and attempts to put each in context, both in the broadcast chain and in terms of its features. Two of the main contribution technologies are RIST and SRT, both UDP-based protocols that recover lost packets by re-requesting them from the sender. This gives very high resilience to packet loss – ideal for internet deployments.
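A minimal sketch of that re-request (ARQ/NACK) idea follows; it only illustrates the principle of tracking sequence numbers and asking for the gaps, not the actual wire format of SRT or RIST.

```python
# Illustration of selective re-request: the receiver notes which sequence
# numbers arrived and NACKs only the gaps back to the sender, rather than
# acknowledging every packet as TCP does. Not SRT's or RIST's real protocol.

def sequence_gaps(received: set[int], highest_seen: int) -> list[int]:
    """Sequence numbers to re-request from the sender."""
    return [seq for seq in range(highest_seen + 1) if seq not in received]

received_packets = {0, 1, 2, 4, 5, 8}
print("re-request:", sequence_gaps(received_packets, max(received_packets)))
# -> re-request: [3, 6, 7]
```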

First, we hear about SRT from Maxim Sharabayko. He lists some of the 350 members of the SRT Alliance, a group of companies delivering SRT in their products and collaborating to ensure interoperability. Maxim explains that SRT, based on the UDT protocol, can handle live streaming for contribution as well as optimised file transfer. He also explains that it's free for commercial use and can be found on GitHub. SRT has been featured a number of times on The Broadcast Knowledge; for a deeper dive, have a look at videos such as this one, or the ones under the SRT tag.

Next, Kieran Kunhya explains that RIST was a response to an industry request for a vendor-neutral protocol for reliable delivery over the internet or other dedicated links. Not only does vendor-neutrality help remove reticence among users and vendors to adopt the technology, interoperability is also a key benefit. Kieran calls out hitless switching across multiple ISPs and cellular bonding as important features of RIST. For a summary of all of RIST's features, read this article. For videos with a deeper dive, have a look at the RIST tag here on The Broadcast Knowledge.
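As a simplified illustration of hitless switching, the same RTP stream can be sent over two paths and merged at the receiver by keeping the first copy of each sequence number to arrive (the approach standardised in SMPTE ST 2022-7). The sketch below shows only the deduplication idea; real receivers also reorder packets and handle sequence-number wrap-around.

```python
# Simplified illustration of seamless (hitless) protection switching:
# identical packets arrive over two paths and duplicates are dropped,
# so a loss on one path is masked by the other. Reordering and
# sequence-number wrap-around are ignored here.

from typing import Iterable, Iterator, Tuple

def merge_paths(arrivals: Iterable[Tuple[int, bytes]]) -> Iterator[Tuple[int, bytes]]:
    """arrivals: (sequence_number, payload) pairs interleaved from both paths."""
    seen: set[int] = set()
    for seq, payload in arrivals:
        if seq not in seen:          # first arrival wins; the duplicate is dropped
            seen.add(seq)
            yield seq, payload

path_a = [(1, b"pkt1"), (3, b"pkt3")]                  # packet 2 lost on path A
path_b = [(1, b"pkt1"), (2, b"pkt2"), (3, b"pkt3")]    # all packets on path B
print(list(merge_paths(path_a + path_b)))              # the merged stream is complete
```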

Demystifying Video Delivery Protocols from Streaming Video Alliance on Vimeo.

Barry Owen represents WebRTC in this webinar, though Wowza deal with many protocols in their products. WebRTC's big advantage is sub-second delivery, which is not possible with either CMAF or LL-HLS. Whilst it's heavily used for video conferencing, for which it was invented, a number of companies in the streaming space use it for delivery to the user because of its almost instantaneous delivery speed. Unlike with CMAF and LL-HLS, a perfect rendition of the video isn't guaranteed, but for auctions, gambling and interactive services, latency is always king. For contribution, Barry explains, the flexibility of being able to contribute from a browser can be enough to make this a compelling technology, although it does bring with it quality, profile and codec restrictions.

Josh Pressnell and Ali C. Begen talk about the protocols for delivery to the user. Josh explains how Smooth Streaming has all but exited, leaving the ground to DASH, CMAF and HLS. They discuss the lack of a true CENC – Common Encryption – mechanism, which leads to duplication of assets. Similarly, the discussion moves on to the fact that many streaming services have to keep duplicate assets because of the range of target devices they must support.

Looking ahead, the panel is buoyed by the promise of QUIC. There is concern that QUIC, the Google-invented protocol for HTTP delivery over UDP, is going through standardisation in the IETF while also being modified by Google separately at the same time. But the prospect of UDP-style delivery and the higher efficiency seems to instil hope across all the participants of the panel.

Watch now to hear all the details!
Speakers

Ali C. Begen
Technical Consultant, Comcast
Kieran Kunhya
Founder & CEO, Open Broadcast Systems
Director, RIST Forum
Barry Owen
VP, Solutions Engineering
Wowza Media Systems
Josh Pressnell
CTO,
Penthera Technologies
Maxim Sharabayko
Senior Software Developer,
Haivision
Moderator: Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Latency Still Sucks (and What You Can Do About It)

The streaming industry is on an ever-evolving quest to reduce latency, to bring it in line with, or beat, linear broadcast and to allow business models such as gaming to flourish. When streaming started, latency of a minute or more was not uncommon, and whilst there are some simple ways to improve that, getting down to the latency of digital TV, approximately 5 seconds, is not without challenges. Whilst a target of 5 seconds works for many use cases, it's still not enough for auctions, gambling or 'gamification', which need sub-second latency.

In this panel, Jason Thibeault explores how to reduce latency with Casey Charvet from Gigcasters, Rob Roskin from CenturyLink and Haivision VP of Engineering Marc Cymontkowski. This wide-ranging discussion covers CDN caching, QUIC and HTTP/3, encoder settings, segmented vs. non-segmented streaming, and ingest and distribution protocols.

Key to the discussion is differentiating the ingest protocol from the distribution protocol. Often, just getting the content into the cloud quickly is enough to bring the latency into the customer's budget. Marc from Haivision explains how SRT achieves low-latency contribution. SRT allows unreliable networks like the internet to be used for reliable, encrypted video contribution. Created by Haivision and now an open-source technology with an IETF draft spec, the alliance of SRT users continues to grow as the technology develops and adds features. SRT is a 're-request' technology, meaning it achieves its reliability by re-requesting from the encoder any data it missed. This is in contrast to TCP/IP, which acknowledges every single packet of data, with missing data resent when acknowledgements aren't received. Doing it the SRT way makes the protocol much more efficient and able to cope with real-time media.

SRT can also encrypt all traffic which, when sending over the internet, is extremely important even if you're not sending live sports. In this video, Marc makes the point that SRT also recovers the timing of the stream so that the data comes out of the SRT pipe in the same 'shape' as it went in. Particularly with VBR encoding, your decoder needs to receive the same peaks and troughs as the encoder created to save it having to buffer the input as much. All this included, SRT manages to deliver a transport latency of around 2.5 times the round-trip time.
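As a quick illustration of the figure Marc quotes, the receive buffer has to be deep enough for a loss to be detected, re-requested and resent before playout, which is why the transport latency works out at roughly 2.5 times the round-trip time. The RTT values below are example link conditions only.

```python
# Illustration of the '~2.5x round-trip time' figure quoted for SRT's
# transport latency. The RTT values are example link conditions, not defaults.

def srt_transport_latency_ms(rtt_ms: float, multiplier: float = 2.5) -> float:
    """Approximate transport latency needed to absorb one re-request cycle."""
    return rtt_ms * multiplier

for rtt_ms in (20, 50, 120):   # e.g. metro, cross-country, intercontinental links
    print(f"RTT {rtt_ms:>3} ms -> ~{srt_transport_latency_ms(rtt_ms):.0f} ms transport latency")
```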

Haivision are members of RIST, which is a similar technology to SRT. Marc explains that RIST approaches the problem from a standards perspective, taking IETF RFCs and applying them to RTP. SRT took a more pragmatic way forward, creating an implementation of the features and making it open source for interoperability.

The video finishes with a Q&A covering HTTP Header compression, recommended size of HLS chunks, peer-to-peer streaming and latency requirements for VoD.

Watch now!
Speakers

Rob Roskin
Principal Solutions Architect,
Level3 Communications
Marc Cymontkowski
VP Engineering – Cloud,
Haivision
Casey Charvet
Managing Director,
Gigcasters
Jason Thibeault
Executive Director,
Streaming Video Alliance