Video: Edge Compute

Delivering personalised video at scale, live or otherwise, is a tradeoff between speed and complexity. In this lightning talk at Demuxed 2019, Kyle Boutette from Cloudflare explains the benefits of running code on the ‘edge’.

Kyle starts by highlighting the reason to use CDNs: they take the management of a whole fleet of servers off your hands, allowing you to concentrate on delivering a video service and deploying the technology to do just that. This works really well, and CDNs are the backbone of most of the large sites on the internet. Some companies build their own whilst others use Cloudflare or Amazon CloudFront, among the many CDNs out there. Beyond dealing with the admin of the servers, CDNs are careful to provide servers as close to your users as practical, which helps reduce latency.

The problem that Kyle exposes is that any personalisation needs to be done either on the player itself or on the server. The former requires implementing the same features on many platforms; the latter undermines the value of the CDN, since the central server(s) must calculate the new information and send it to the CDN, bringing us back to square one.

The solution that Cloudflare has developed allows JavaScript to run on the CDN’s computers, referred to as the ‘edge’. This allows much of the logic to run close to the consumer, gives the highest chance of reusing CDN assets, and reduces the latency of requests compared to talking to the central server infrastructure. Doing this in JavaScript provides a well-understood environment for web developers, and Kyle provides examples of how this can be done with relatively simple code.
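
As a rough sketch of the idea (not Kyle’s actual code), a Cloudflare Worker can intercept manifest requests and personalise them at the edge; the manifest path handling and the mobile rule below are illustrative assumptions:

```javascript
// A minimal sketch of edge personalisation with a Cloudflare Worker.
// Capping the ABR ladder for mobile clients is an illustrative
// assumption, not an example from the talk.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);

  // Pass everything except manifests straight through to the cache/origin.
  if (!url.pathname.endsWith('.m3u8')) {
    return fetch(request);
  }

  // Fetch the shared manifest (cacheable, reusable across users)...
  const response = await fetch(request);
  let manifest = await response.text();

  // ...then personalise it per request at the edge: here, drop renditions
  // above 720p for clients that identify as mobile.
  const ua = request.headers.get('User-Agent') || '';
  if (/Mobile/i.test(ua)) {
    manifest = dropHighRenditions(manifest, 720);
  }

  return new Response(manifest, {
    headers: { 'Content-Type': 'application/vnd.apple.mpegurl' },
  });
}

// Remove variant streams whose RESOLUTION height exceeds maxHeight.
function dropHighRenditions(manifest, maxHeight) {
  const out = [];
  let skipNext = false;
  for (const line of manifest.split('\n')) {
    if (skipNext) { skipNext = false; continue; } // skip the variant URI
    const match = line.match(/RESOLUTION=\d+x(\d+)/);
    if (match && Number(match[1]) > maxHeight) {
      skipNext = true; // drop this #EXT-X-STREAM-INF line and its URI
      continue;
    }
    out.push(line);
  }
  return out.join('\n');
}
```

Because the segments themselves are untouched, the CDN can still cache them once and reuse them for every viewer.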

Watch now!
Speaker

Kyle Boutette
Systems Engineer,
Cloudflare

Video: Video Caching Best Practices

Caching is a critical element of the streaming video delivery infrastructure. By storing objects as close to the viewer as possible, you can reduce round-trip times, cut bandwidth costs, and create a more efficient delivery chain.

This video brings together Disney, Qwilt and Verizon to understand their best practices and look at the new Open Caching Network (OCN) working group from the Streaming Video Alliance. This recorded webinar is a discussion of the different aspects of caching and the way the OCN addresses them.

The talk starts simply by answering “What is a caching server and how does it work?”, which helps everyone get on to the same page, before moving on to “What are some of the data points to collect from the cache?” with answers including cache hit ratio, latency, cache misses, and how much data comes from the CDN versus the origin server.
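
As a back-of-the-envelope illustration of how such metrics are derived (a sketch of ours, not code from the webinar), a cache only needs a couple of counters to report its hit ratio:

```javascript
// A minimal sketch of cache instrumentation, assuming a simple in-memory
// cache; the metric names mirror those mentioned in the discussion.
class InstrumentedCache {
  constructor() {
    this.store = new Map();
    this.hits = 0;
    this.misses = 0; // requests that had to go on to the CDN or origin
  }

  get(key) {
    if (this.store.has(key)) {
      this.hits++;
      return this.store.get(key);
    }
    this.misses++;
    return undefined; // caller fetches from origin and then calls set()
  }

  set(key, value) {
    this.store.set(key, value);
  }

  // Hit ratio = hits / (hits + misses): a key health indicator.
  hitRatio() {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}
```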

This video continues by exploring how caching nodes are built, how different caching solutions are optimised, how a cache connects to the Open Caching Network, and how better cache performance and interoperability can improve your overall viewer experience.

The Live Streaming Working Group is also covered, as it is working out parameters such as the memory needed for live streaming servers (see the sketch below), before the conversation moves quickly into some tricks of the trade which often lead to a better cache.
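
As a rough feel for that sizing exercise (the ladder and window below are our own illustrative assumptions, not the working group’s figures), the memory needed to hold one channel’s live window can be estimated from the bitrate ladder:

```javascript
// A rough, illustrative estimate of the memory a live-streaming cache
// needs to hold one channel's sliding window.
function liveCacheBytes({ ladderKbps, windowSeconds }) {
  // Sum of all renditions' bitrates (bits per second)...
  const totalBps = ladderKbps.reduce((sum, kbps) => sum + kbps * 1000, 0);
  // ...held for the whole DVR/live window, converted to bytes.
  return (totalBps * windowSeconds) / 8;
}

// Example: a 5-rung ladder kept for a 60-second live window.
const bytes = liveCacheBytes({
  ladderKbps: [6000, 3500, 2000, 1000, 500],
  windowSeconds: 60,
});
console.log(`${(bytes / 1e6).toFixed(0)} MB per channel`); // ~98 MB
```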

There are lots of best practices which can be shared, and an open caching network is one great way to do this. The aim is to create some interoperability between companies, allowing small-scale start-up CDNs to talk to larger CDNs, and giving a streaming company confidence that it can interact with ‘any’ CDN. As ever, the idea comes down to ‘interoperability’. Have a listen and judge for yourself!

Watch now!
Speakers

Eric Klein
Director, Content Distribution – Disney+/ESPN+, Disney Streaming Services
Co-Chair, Open Cache Working Group, Streaming Video Alliance
Yoav Gressel
Vice President of R&D,
Qwilt
Sanjay Mishra
Director, Technology
Verizon
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Mitigating Online Video Delivery Latency

Real-world solutions to real-world streaming latency in this panel from the Content Delivery Summit at Streaming Media East. With everyone chasing reductions in latency, many with the goal of matching traditional broadcast latencies, there are a heap of tricks and techniques at each stage of the distribution chain to get things done quicker.

The panel starts by surveying the way these companies are already serving video. Comcast, for example, are reducing latency by extending their network to edge CDNs, while Anevia identified encoding as the number one introducer of latency, with packaging at number two.

Bitmovin’s Igor Oreper talks about Periscope’s work with low-latency HLS (LHLS), explaining how Bitmovin deployed their player with Twitter and worked closely with them to ensure LHLS worked seamlessly. Periscope’s LHLS is documented in this blog post.

The panel shares techniques for avoiding latency, such as keeping ABR ladders small to ensure CDNs cache all the segments. Damien from Anevia points out that low latency quickly becomes pointless if a low-latency stream arrives on an iPhone before it does on Android; relative latency is really important and can matter more than absolute latency.

The importance of HTTP, and which version you use, is next up for discussion. HTTP/1.1 is still widely used, but there’s increasing interest in HTTP/2 and QUIC, which both handle connections better and reduce overheads, thus reducing latency, though often only slightly.

The panel finishes with a Q&A after discussing how to operate in multi-CDN environments.

Watch now!
Speakers

Damien Lucas
CTO & Co-Founder,
Anevia
Ryan Durfey
CDN Senior Product Manager,
Comcast Technology Solutions
Igor Oreper
Vice President, Solutions
Bitmovin
Eric Klein
Director, Content Distribution,
Disney Streaming Services (was BAMTECH Media)
Dom Robinson
Director,
id3as

Video: Engineering a Live Streaming Workflow for Super Bowl LIII

Super Bowl LIII has come and gone with another victory for the New England Patriots. CBS Interactive, responsible for streaming the event, built a new system to deal with all the online viewers. Previously they used one vendor for acquisition and encoding, and another vendor for origin storage, service delivery and security. This time the encoders were located in the CBS Broadcast Center in New York and all other systems moved to the AWS cloud, an approach which gave CBS full control over the streams.

Due to the very high volume of traffic (between 30 and 35 terabits per second), four different CDN vendors had to be engaged. A cloud storage service optimised for live streaming video not only provided performance, consistency and low latency, but also allowed multi-CDN delivery to be managed effectively.
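
As a rough illustration of one common approach to multi-CDN delivery (our own sketch, not CBS’s actual implementation), traffic can be split across vendors with weighted random selection per session; the vendor names, weights and hostnames below are placeholders:

```javascript
// A minimal sketch of weighted multi-CDN selection. The vendors,
// weights, and hostnames are placeholders, not CBS's actual setup.
const cdns = [
  { name: 'cdn-a', host: 'a.example-cdn.com', weight: 40 },
  { name: 'cdn-b', host: 'b.example-cdn.com', weight: 30 },
  { name: 'cdn-c', host: 'c.example-cdn.com', weight: 20 },
  { name: 'cdn-d', host: 'd.example-cdn.com', weight: 10 },
];

// Pick a CDN in proportion to its weight.
function pickCdn() {
  const total = cdns.reduce((sum, c) => sum + c.weight, 0);
  let r = Math.random() * total;
  for (const cdn of cdns) {
    r -= cdn.weight;
    if (r <= 0) return cdn;
  }
  return cdns[cdns.length - 1]; // guard against floating-point drift
}

// Rewrite a segment URL onto the chosen CDN for this session.
const session = pickCdn();
const segmentUrl = `https://${session.host}/live/superbowl/seg_001.ts`;
console.log(`Serving from ${session.name}: ${segmentUrl}`);
```

In practice, the weights would be adjusted live from monitoring data, shifting traffic away from an underperforming vendor.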

In this video, Krystal presents a step-by-step approach to creating a hybrid cloud/on-premise infrastructure for the Super Bowl, including ad insertion, multi-CDN delivery, monitoring and operational visibility. She emphasises the importance of scaling infrastructure to meet audience demand, taking ownership of the end-to-end workflow, performing rigorous testing, and handling communication across multiple teams and vendors.

You can download the slides from here.

Watch now!

Speaker

Krystal Mejia
Software Engineer,
CBS Interactive