Video: Everyone is Streaming; Can the Infrastructure Handle it?

How well is the internet infrastructure dealing with the increase in streaming during the Covid-19 pandemic? What have we learnt in terms of delivering services and have we seen any changes in the way services are consumed? This video brings together carriers, vendors and service providers to answer these questions and give a wider picture.

The video starts off by getting different perspectives on how the pandemic has affected each business, with key data points shared along the way. Jeff Budney from Verizon says that carriers have had a ‘whirlwind’ few weeks. Conviva’s José Jesus says that while they are only seeing 3% more devices, there was a 37% increase in hours of video consumed. Peaks due to live sports have gone, but primetime viewing is now more spread out and more stable, a point made by both Jeff Gilbert from Qwilt and José.

“We’ve seen a whole year’s worth of traffic growth…it’s really been incredible” — Jeff Budney, Verizon

So while it’s clear that growth has happened, the conversation turns to whether this has caused problems. We hear that some countries did see reductions in quality of experience while others saw none. This experience is showing where the bottlenecks are, whether they are part of the ISP infrastructure or in individual players/services which haven’t been well optimised. Indeed, explains Jason Thibeault, Executive Director of the Streaming Video Alliance, the situation seems to be shining a light on the operational resilience, rather than the technical capacity, of ISPs.

Thierry Fautier from Harmonic emphasises the benefits of content-aware encoding, whereby services could reduce bandwidth by “30 to 40 percent”, before talking about codec choice. AVC (a.k.a. H.264) accounts for 90%+ of all HD traffic. Thierry contends that by switching to both HEVC and content-aware encoding, services could reduce their bandwidth by up to a factor of four.
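As a back-of-envelope check on those figures, the two savings multiply rather than add. The sketch below assumes HEVC roughly halves the bitrate relative to AVC for the same quality, which is a commonly quoted rule of thumb, not a number from the talk:

```python
# Back-of-envelope combination of the savings quoted in the talk.
# Assumed: content-aware encoding (CAE) saves 30-40%, and HEVC roughly
# halves the bitrate versus AVC at the same quality (rule of thumb).

def remaining_bandwidth(cae_saving: float, hevc_saving: float) -> float:
    """Fraction of the original AVC bitrate left after applying both savings."""
    return (1 - cae_saving) * (1 - hevc_saving)

for cae in (0.30, 0.40):
    left = remaining_bandwidth(cae, 0.50)
    print(f"CAE {cae:.0%} + HEVC 50% -> {left:.0%} of original bitrate, "
          f"a factor of {1 / left:.1f} reduction")
```

Under these assumptions the combined reduction lands around a factor of three; getting to Thierry’s “factor of four” needs content where HEVC or CAE saves rather more than the rule-of-thumb figures used here.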

Open Caching is a working group creating specifications to standardise the interface that lets ISPs pull content from service providers into a local cache. Moving content to the edge in this way helps avoid bottlenecks by locating content as close to viewers as possible.

The elephant in the room is that Netflix reduced quality/bitrate in order to help some areas cope. Verizon’s Jeff Budney points out that this runs contrary to the industry’s approach to deployment, which has assumed there is always the capacity to provide the needed scale. If that’s true, how can one tweet from a European Commissioner have had such an impact? The follow-on point is that if YouTube and Netflix are now sending 25% less data, as reports suggest, other providers’ players will simply take up the slack; that is the intent-free way ABR works. If the rest of the industry benefits from the big providers ‘dialling back’, is this an effective measure and is it fair?
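That ‘intent-free’ behaviour is easy to picture with a toy model. The ladder, link capacity and player count below are invented for illustration, and real ABR heuristics are far more sophisticated than this:

```python
# Toy model of throughput-driven ABR on a shared link. Each player picks the
# highest ladder rung its measured share supports; nothing in that choice
# expresses intent, so bandwidth freed by one provider is absorbed by the rest.
LADDER = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # Mbps rungs (illustrative)

def pick(throughput: float) -> float:
    """Highest rung at or below the measured throughput (else the lowest rung)."""
    eligible = [r for r in LADDER if r <= throughput]
    return max(eligible) if eligible else LADDER[0]

link = 16.0                    # shared last-mile capacity, Mbps (assumed)
print([pick(link / 4)] * 4)    # four equal players: [4.0, 4.0, 4.0, 4.0]

capped = 1.0                   # one provider 'dials back' to 1 Mbps
share = (link - capped) / 3    # the other three each now measure 5 Mbps
print([capped] + [pick(share)] * 3)   # [1.0, 5.0, 5.0, 5.0] -- slack taken up
```

The link stays just as full after the cap; only who fills it changes, which is exactly the fairness question the panel raises.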

The talk concludes by touching on ABR multicast, more intelligent ways to manage large-scale capacity issues, more on Open Caching, and delivery protocols.

Watch now!
Speakers

Thierry Fautier
VP Video Strategy, Harmonic Inc.
President-Chair, Ultra HD Forum
Eric Klein
Director, Content Distribution – Disney+/ESPN+, Disney Streaming Services
Co-Chair, Open Cache Working Group, Streaming Video Alliance
José Jesus
Senior Product Manager,
Conviva
Jeff Budney
Manager,
Verizon
Jeffrey Gilbert
VP Strategy and Business Development, CP,
Qwilt
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Video Caching Best Practices

Caching is a critical element of the streaming video delivery infrastructure. By storing objects as close to the viewer as possible, you can reduce round-trip times, cut bandwidth costs, and create a more efficient delivery chain.

This video brings together Disney, Qwilt and Verizon to share their best practices and look at the Streaming Video Alliance’s new Open Caching Network (OCN) working group. This recorded webinar is a discussion of the different aspects of caching and the way the OCN addresses them.

The talk starts simply by answering “What is a caching server and how does it work?”, which gets everyone on to the same page, before moving on to “What are some of the data points to collect from the cache?”. Cache hit-ratio, latency, cache misses, and how much data comes from the CDN versus the origin server are among the answers.
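Those counters are easy to picture. A minimal sketch of how they relate, with field names that are illustrative rather than taken from any particular cache’s API:

```python
# Minimal sketch of the data points mentioned: hits, misses, and where
# misses were filled from (parent CDN vs origin). Names are illustrative.
from dataclasses import dataclass

@dataclass
class CacheStats:
    hits: int = 0
    misses_from_cdn: int = 0      # miss filled from a parent CDN cache
    misses_from_origin: int = 0   # miss filled from the origin server

    @property
    def misses(self) -> int:
        return self.misses_from_cdn + self.misses_from_origin

    @property
    def hit_ratio(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

stats = CacheStats(hits=900, misses_from_cdn=80, misses_from_origin=20)
print(f"hit ratio: {stats.hit_ratio:.1%}")          # hit ratio: 90.0%
offload = 1 - stats.misses_from_origin / (stats.hits + stats.misses)
print(f"origin offload: {offload:.1%}")             # origin offload: 98.0%
```

Note the two headline numbers answer different questions: hit ratio measures the edge cache itself, while origin offload measures how well the whole chain (edge plus parent CDN) shields the origin.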

The video continues by exploring how caching nodes are built, how different caching solutions are optimised, how a cache connects to the Open Caching Network, and how improving cache performance and interoperability can improve your overall viewer experience.

The Live Streaming Working Group is also covered, as it is working out parameters such as the memory needed for live streaming servers. The discussion then moves quickly on to some tricks of the trade which often lead to a better cache.

There are lots of best practices which can be shared, and an open caching network is one great way to do this. The aim is to create interoperability between companies, allowing small-scale start-up CDNs to talk to larger CDNs, and giving a streaming company confidence that it can interact with ‘any’ CDN. As ever, it comes down to interoperability. Have a listen and judge for yourself!

Watch now!
Speakers

Eric Klein
Director, Content Distribution – Disney+/ESPN+, Disney Streaming Services
Co-Chair, Open Cache Working Group, Streaming Video Alliance
Yoav Gressel
Vice President of R&D,
Qwilt
Sanjay Mishra
Director, Technology
Verizon
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Mitigating Online Video Delivery Latency

Real-world solutions to real-world streaming latency in this panel from the Content Delivery Summit at Streaming Media East. With everyone chasing reductions in latency, many with the goal of matching traditional broadcast latencies, there are a heap of tricks and techniques at each stage of the distribution chain to get things done quicker.

The panel starts by surveying the way these companies are already serving video. Comcast, for example, are reducing latency by extending their network to edge CDNs. Anevia identified encoding as the number-one introducer of latency, with packaging number two.

Bitmovin’s Igor Oreper talks about Periscope’s work with low-latency HLS (LHLS) explaining how Bitmovin deployed their player with Twitter and worked closely with them to ensure LHLS worked seamlessly. Periscope’s LHLS is documented in this blog post.

The panel shares techniques for avoiding latency, such as keeping ABR ladders small to ensure CDNs cache all the segments. Damien from Anevia points out that low latency can quickly become pointless if a low-latency stream arrives on an iPhone before it does on Android; relative latency is really important and can matter more than absolute latency.
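The caching argument for small ladders is simple arithmetic: every rendition in the ladder adds a full set of segments that the edge cache must keep hot. The segment duration and window below are assumed for illustration, not figures from the panel:

```python
# Each extra rendition multiplies the number of distinct segments competing
# for edge-cache space; trimming the ladder shrinks the working set directly.
segment_duration = 6                 # seconds per segment (assumed)
window_hours = 4                     # live/catch-up window the edge holds (assumed)

segments_per_rendition = window_hours * 3600 // segment_duration   # 2400
for renditions in (4, 8, 12):
    print(f"{renditions:>2} renditions -> "
          f"{renditions * segments_per_rendition:,} segments to keep hot")
```

A cache that can hold the whole working set serves every request from the edge; once the set outgrows the cache, evictions start turning segment requests into misses back to the origin.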

The version of HTTP in use is next up for discussion. HTTP/1.1 is still widely used, but there’s increasing interest in HTTP/2 and QUIC, which both handle connections better and reduce overheads, thus reducing latency, though often only slightly.

The panel finishes with a Q&A after discussing how to operate in multi-CDN environments.

Watch now!
Speakers

Damien Lucas
CTO & Co-Founder,
Anevia
Ryan Durfey
CDN Senior Product Manager,
Comcast Technology Solutions
Igor Oreper
Vice President, Solutions
Bitmovin
Eric Klein
Director, Content Distribution,
Disney Streaming Services (formerly BAMTECH Media)
Dom Robinson
Director,
id3as