Video: Workflow Evolution Within the CDN

The pandemic has shone a light on CDNs as they are the backbone of much of what we do with video for streaming and broadcast. CDNs aim to scale up quickly and in a sophisticated way so you don’t have to do the research and engineering to achieve this yourself. This panel from the Content Delivery Summit sees Dom Robinson bringing together Jim Hall from Fastly with Akamai’s Peter Chave, Ted Middleton from Amazon and Paul Tweedy from BBC Design + Engineering.

The panel discusses the fact that although much video conferencing traffic is WebRTC, which CDNs don’t carry, a lot of the associated API calls are handled by the CDN. In fact, over 300 trillion API calls were made to Amazon last year. Zoom and other solutions do have an HLS streaming option that has been used and can benefit from CDN scaling. Dom asks whether people’s expectations have changed during the pandemic, and then we hear from Paul as he talks a little about the BBC’s response to Covid.

The CTA’s Common Media Client Data standard, also known as CTA-5004, is a way for a video player to pass information back to the CDN. This is powerful enough to provide highly granular real-time reports for customers, but it also enables hints to be handed back from the players so the CDNs can pre-fetch content that is likely to be needed. Furthermore, having a standard for logging will be a great help to customers who are multi-CDN and need a way to match logs and analyse their system in its entirety. This work is also being extended, under a separate specification, to look upstream in a CDN workflow and understand the status of other systems like edge servers.
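
As a rough illustration of the idea, the sketch below builds a CMCD query string that a player might attach to a segment request. The keys chosen, the URL and the values are illustrative only, not a complete implementation of CTA-5004.

```python
from urllib.parse import quote

def build_cmcd_query(bitrate_kbps, buffer_ms, throughput_kbps, session_id, next_object):
    # A handful of CMCD keys: string values are quoted, integers are not.
    pairs = {
        "bl": buffer_ms,                   # current buffer length (ms)
        "br": bitrate_kbps,                # encoded bitrate of the requested rendition (kbps)
        "mtp": throughput_kbps,            # measured throughput (kbps)
        "nor": f'"{quote(next_object)}"',  # next likely object -- the pre-fetch hint
        "sid": f'"{session_id}"',          # session id so the CDN can group one viewer's requests
    }
    payload = ",".join(f"{k}={v}" for k, v in sorted(pairs.items()))
    return "CMCD=" + quote(payload)

# Hypothetical segment request carrying the CMCD data as a query argument.
segment_url = ("https://cdn.example.com/live/video_5000k/seg_1042.m4s?"
               + build_cmcd_query(5000, 9200, 24000, "4f8a9b2c", "seg_1043.m4s"))
print(segment_url)
```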

The panel touches on custom-made analytics, low-latency streaming such as Apple’s LL-HLS and why it’s not yet been adopted, current attempts in the wild to bring HLS latency down, edge computing and piracy.

Watch now!
Speakers

Peter Chave
Principal Architect,
Akamai Technologies
Paul Tweedy
Lead Architect, Online Technology Group,
BBC Design + Engineering
Ted Middleton
Global Leader – Specialized Solution Architects, Edge Services
Amazon
Jim Hall
Principal Sales Engineer,
Fastly
Moderator: Dom Robinson
Director and Creative Firestarter, id3as
Contributing Editor, StreamingMedia.com, UK

Video: CDNs: Delivering a Seamless and Engaging Viewing Experience

This video brings together broadcasters, telcos and CDNs to talk about the challenges of delivering a perfect streaming experience to large audiences. Eric Klein from Disney+ addresses the issues along with Fastly’s Gonzalo de la Vega, Jorge Hernandez from Telefonica and Adriaan Bloem from Shahid, with Robert Ambrose moderating.

Eric starts by talking from the perspective of Disney+. Robert asks whether scaling up quickly enough to meet Disney+’s extreme growth has been a challenge. Eric replies that scale is built by having multiple routes to market using multiple CDNs, so the main challenge is making sure they can quickly move to the next new market as it is announced. Before launching, they do a lot of research to work out which bitrates are likely to be streamed and on what devices in that market, and will consider offering ABR ladders to match. They work with ISPs and CDNs using Open Caching. Eric has spoken previously about Open Caching, which is a specification from the Streaming Video Alliance to standardise the API between CDNs and ISPs. Disney+ currently uses 7-8 different providers and never relies on only one method to get content to the CDN. Eric and his team have built their own equipment to manage cache fill.
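
As a loose sketch of how that per-market research might be encoded, the snippet below maps hypothetical markets to hypothetical ABR ladders. The market names, resolutions and bitrates are invented for illustration and are not Disney+’s actual figures.

```python
# Hypothetical per-market ABR ladders: a market dominated by mobile devices on
# constrained networks gets a ladder that tops out lower and steps more finely
# at the low end than a market full of 4K TVs on fibre.
ABR_LADDERS = {
    "market-fibre-tv": [(2160, 16000), (1080, 7500), (720, 4500), (540, 2200), (360, 900)],
    "market-mobile":   [(1080, 4500), (720, 2800), (540, 1600), (432, 900), (270, 400)],
}

def ladder_for(market: str):
    # Fall back to the most conservative ladder if a market hasn't been profiled yet.
    return ABR_LADDERS.get(market, ABR_LADDERS["market-mobile"])

for height, kbps in ladder_for("market-mobile"):
    print(f"{height}p @ {kbps} kbps")
```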

Adriaan serves the MENA market and whilst the Gulf is fairly easy to address, North Africa is very difficult as internet bandwidths are low and telcos don’t peer except in Marseille. Adriaan sees streaming in Europe and North America as ‘a commodity’ since, relatively, it’s so much easier compared to North Africa. Shahid has had to build its own CDN to reach these markets, but because they are not in competition with the telcos, unlike commercial CDNs, they find it relatively easy to strike the deals needed for it. Shahid has a very large library, so getting assets in the right place can be difficult. They see an irony in that their AVOD services are very popular and get many hits on a lot of the popular content, meaning it is well cached. Their SVOD content has a very long tail, meaning that despite viewers paying for the service, they risk getting a worse experience because most of that content isn’t being cached.
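
To see why the long tail hurts cache hit rates, the toy simulation below runs an LRU cache against two invented popularity curves: one head-heavy, like popular AVOD content, and one flat, like a long-tail SVOD library. All of the numbers are illustrative.

```python
import random
from collections import OrderedDict

def hit_rate(popularity_exponent, catalogue_size=10_000, cache_size=500, requests=50_000):
    """Simulate an LRU edge cache against a Zipf-like popularity curve.
    A high exponent means a few titles dominate (AVOD hits); a low exponent
    approximates a long-tail SVOD library."""
    weights = [1 / (rank ** popularity_exponent) for rank in range(1, catalogue_size + 1)]
    cache = OrderedDict()
    hits = 0
    for title in random.choices(range(catalogue_size), weights=weights, k=requests):
        if title in cache:
            hits += 1
            cache.move_to_end(title)       # mark as most recently used
        else:
            cache[title] = None
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used title
    return hits / requests

print(f"head-heavy (AVOD-like) hit rate: {hit_rate(1.2):.0%}")
print(f"long-tail  (SVOD-like) hit rate: {hit_rate(0.4):.0%}")
```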

Jorge presents his view as both a streaming provider, Movistar, and a telco, Telefonica, which serves Spain and South America. With over 100 PoPs, Telefonica delivers a lot of streaming over its IPTV infrastructure but also over the open internet. They have their own CDN, TCDN, which delivers most of their traffic, bursting to commercial CDNs when necessary. Telefonica also supports Open Caching.

Eric explains that the benefit of Open Caching is that, because certain markets are hard to reach, you’re going to need a variety of approaches to get to them. This means you’ll have a lot of different companies involved, but to have stability in your platform you need to be interfacing with them all in the same way. With Open Caching, one purge command can be sent to everyone at once. For Adriaan, this is “almost like a dream” as he has six different dashboards and is living through the antithesis of Open Caching. He says it can be very difficult to track the different failovers on the CDNs and react.
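
The sketch below shows the general shape of what a single logical purge fanned out to several delivery partners might look like. The endpoints, payloads and provider names are entirely hypothetical; the real Open Caching APIs are defined by the Streaming Video Alliance.

```python
import requests  # assumes the third-party 'requests' package is installed

# Hypothetical purge endpoints, one per delivery partner.
PROVIDERS = {
    "cdn-a": "https://api.cdn-a.example/purge",
    "cdn-b": "https://api.cdn-b.example/purge",
    "isp-cache": "https://cache.isp.example/purge",
}

def purge_everywhere(content_path: str) -> dict:
    """Send one logical purge to every delivery partner and collect the results,
    instead of driving a separate dashboard per provider."""
    results = {}
    for name, endpoint in PROVIDERS.items():
        try:
            resp = requests.post(endpoint, json={"path": content_path}, timeout=5)
            results[name] = resp.status_code
        except requests.RequestException as err:
            results[name] = f"failed: {err}"
    return results

print(purge_everywhere("/vod/episode-42/*"))
```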

Gonzalo points out how far CDNs like Fastly have come. Recently they had 48 hours’ notice to enable resources for 1 million concurrent views, which is the same size as the whole of the Fastly CDN some years ago. Fastly are happy to be part of customers’ multi-CDN solutions and, when their customers do live video, Fastly recommend that they have more than one CDN simply for protection against major problems. Thinking about live video, Eric says that everything at Disney+ is designed ‘live first’ because if it works for live, it will work for VoD.

The panel finishes by answering questions from the audience.

Watch now!
Free registration required

Speakers

Eric Klein
Director, Media Distribution, CDN Technology,
Disney+
Jorge Hernandez
Head of CDN Development and Deployment,
Telefonica/Movistar
Adriaan Bloem
Head of Infrastructure,
Shahid
Gonzalo de la Vega
VP Strategic Projects,
Fastly
Robert Ambrose
Co-Founder and Research Director,
Caretta Research

Video: LL-HLS Discussion with THEO, Wowza & Fastly

Roundtable discussion with Fastly, THEO and Wowza

iOS 14 has finally started to hit devices and with it, LL-HLS is now available on millions of devices. Low-Latency HLS is Apple’s latest evolution of HLS, a streaming protocol which has been widely used for over a decade. Its typical latency has gradually come down from 60 seconds to between 6 and 15 seconds now. There are still a lot of companies that want to bring that down further, and LL-HLS is Apple’s answer for people who want to operate at around 2-4 seconds total latency, which matches or beats traditional broadcast.

LL-HLS was introduced last year and had a rocky reception. It came after a community-driven low-latency scheme called LHLS and after MPEG-DASH announced CMAF’s ability to hit the same 2-4 second window. Famously, this original context, as well as the technical questions over the new proposal, was summed up well in Phil Cluff’s blog post, which was soon followed by a series of talks trying to make sense of LL-HLS ahead of implementation. This is the Apple video introducing LL-HLS in its first form, and the reactions from Al Shenker of CBS Interactive, Marina Kalkanis of M2A Media and Akamai’s Will Law, which also nicely sum up the other two contenders. Apple have now changed some of the spec in response to their own further research and external feedback; the changes were received positively and are summed up in THEO CTO Pieter-Jan Speelmans’ recent webinar bringing us the updates.

In this panel, Pieter-Jan is joined by Chris Buckley from Fastly Inc. and Wowza’s Jamie Sherry to discuss pressing LL-HLS into action. Moderator Alison Kolodny hosts the talk, which covers a wide variety of points.

“Wide adoption” is seen as the day-one benefit. If you support LL-HLS then you know you’re able to hit a large number of iPads, iPhones and Macs. Apple typically sees a high percentage of the userbase upgrade fairly swiftly, easily passing 75% of devices updated within four months of release. The panel then discusses how implementation has become easier given the change in the protocol that dropped HTTP/2 push, which would have made typical CDN techniques, such as hosting the playlists separately from the media, impossible. Overall, CDN implementation has become more practical since, with preload hints, a CDN can hold many, many connections all waiting for a certain chunk and collapse them down to a single request to the origin.
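
Below is a toy model of that collapsing behaviour, assuming a made-up in-memory edge cache: many clients wait on the same not-yet-published part while only one fetch goes to the origin. Real CDNs implement this in their own request-coalescing layers.

```python
import asyncio

class CollapsingCache:
    """Toy edge cache: hold many blocking requests for the same URL and
    satisfy them all from a single origin fetch."""
    def __init__(self):
        self._inflight = {}  # url -> in-progress origin fetch

    async def get(self, url, fetch_origin):
        if url not in self._inflight:
            # The first requester triggers the origin fetch; everyone else awaits it.
            self._inflight[url] = asyncio.ensure_future(fetch_origin(url))
        return await self._inflight[url]

async def fetch_origin(url):
    print(f"origin fetch: {url}")   # should print exactly once
    await asyncio.sleep(0.5)        # pretend the part isn't published yet
    return f"bytes-of-{url}"

async def main():
    cache = CollapsingCache()
    # 1,000 players all hint at the same upcoming part; one request reaches the origin.
    parts = await asyncio.gather(
        *(cache.get("seg1042.part3.m4s", fetch_origin) for _ in range(1000)))
    print(len(parts), "responses served from one origin fetch")

asyncio.run(main())
```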

One aspect of implementation which has improved, we hear from Pieter-Jan, is building effective Adaptive Bit Rate (ABR) switching. With low-latency protocols, you are so close to live that it becomes very hard to download a chunk of video ahead of time and measure the download speed to see if it arrived quicker than real time; if it did, you’d infer there was spare bitrate. LL-HLS’s use of rendition reports, however, makes that a lot easier. Pieter-Jan also points out that SSAI is easier with rendition reports.
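
To make the mechanism concrete, here is a small sketch that reads EXT-X-RENDITION-REPORT tags from a media playlist and builds a blocking playlist request for the rendition being switched to. The playlist excerpt is invented and the “next part” arithmetic is simplified; a real player also has to handle the roll-over to the next segment.

```python
import re

playlist_excerpt = '''
#EXT-X-RENDITION-REPORT:URI="../2M/playlist.m3u8",LAST-MSN=1042,LAST-PART=2
#EXT-X-RENDITION-REPORT:URI="../8M/playlist.m3u8",LAST-MSN=1042,LAST-PART=2
'''

def parse_rendition_reports(playlist):
    """Pull each rendition's reported live edge out of EXT-X-RENDITION-REPORT tags."""
    reports = {}
    for line in playlist.splitlines():
        if line.startswith("#EXT-X-RENDITION-REPORT:"):
            attrs = dict(re.findall(r'([A-Z-]+)=("?[^",]+"?)', line.split(":", 1)[1]))
            uri = attrs["URI"].strip('"')
            reports[uri] = (int(attrs["LAST-MSN"]), int(attrs["LAST-PART"]))
    return reports

def switch_request(uri, reports):
    # Ask the target rendition's playlist to block until the part after its
    # reported live edge, so the switch lands right at live without an extra
    # round trip to discover where that rendition currently is.
    msn, part = reports[uri]
    return f"{uri}?_HLS_msn={msn}&_HLS_part={part + 1}"

reports = parse_rendition_reports(playlist_excerpt)
print(switch_request("../8M/playlist.m3u8", reports))
```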

The rest of the discussion covers device support for LL-HLS, subtitles workflows, the benefits of TLS 1.3 being recommended, and low-latency business cases.

Watch now!
The webinar is free to watch, on demand, in exchange for your email details. The link is emailed to you immediately.
Speakers

Chris Buckley
Senior Sales Engineer,
Fastly Inc.
Pieter-Jan Speelmans
CTO,
THEO Technologies
Jamie Sherry
Senior Product Manager,
Wowza
Moderator: Alison Kolodny
Senior Product Manager of Media Services,
Frame.io

Video: How CBS Sports Digital streams live events at scale

Delivering streaming at high scale really exposes the weaknesses of every point in your workflow, so even for those of us who are not streaming at maximum scale, there are many lessons to be learnt. CBS Sports Digital delivered the Super Bowl using the principles of ‘practice, practice, practice’, keeping the solution as simple as possible, and prioritising the mitigation of problems over solving them.

Taylor Busch walks us through their solution, explaining how it supported these key principles and highlighting the technology used. Starting with acquisition, he covers the SDI fibre delivery to a backup facility as well as the AWS Direct Connect links for their Elemental Live encoders. The origin servers were in two different regions and both received data from both sets of encoders.

CBS used ‘output locking’, which ensures that the TS segments are aligned even across different encoders; this is done by respecting the timecode in the SDI and helps in encoder failover situations. QVBR encoding is a method of encoding up to a quality level rather than simply saying ‘7000 kbps’. QVBR makes more efficient use of bandwidth since, when a scene doesn’t require a lot of bits, they won’t be sent. This variability, even if you run in capped mode to limit the bandwidth of particularly complex scenes, can look like a failing encoder to some systems, so the fact that the stream is now effectively VBR needs to be understood by all the departments and companies who monitor your feed.
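
A simplified sketch of the idea behind output locking: if the segment number is derived purely from the SDI timecode, independent encoders fed the same source will cut segment N at exactly the same frame, so a player can fail over between them mid-stream. The frame rate, segment length and timecode handling below are illustrative (non-drop-frame only).

```python
FPS = 30                 # frames per second of the SDI feed
SEGMENT_SECONDS = 6      # target TS segment duration
FRAMES_PER_SEGMENT = FPS * SEGMENT_SECONDS

def timecode_to_frames(tc, fps=FPS):
    """Convert an HH:MM:SS:FF timecode into a frame count since midnight."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def segment_number(tc):
    # The segment index depends only on the embedded timecode, never on when
    # an individual encoder started, so every encoder agrees on the boundaries.
    return timecode_to_frames(tc) // FRAMES_PER_SEGMENT

print(segment_number("14:30:05:12"))   # encoder A
print(segment_number("14:30:05:12"))   # encoder B produces the same index
```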

Advertising is famously important for the Super Bowl, so Taylor gives an overview of how they used the CableLabs ESAM protocol and SCTE markers to receive information about, and trigger, the adverts. This combined SCTE-104, ESAM and SCTE-35, as well as allowing clients to use VAST for tracking. Extra caching was provided by Fastly’s Media Shield, which tests for problems with manifests, origin servers and encoders. This fed a multi-CDN setup using four CDNs which could be switched between, with a decision point for requests to determine which CDN should answer.
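
The snippet below sketches what such a decision point might look like: a weighted choice across the CDNs that are currently healthy, with each viewer session pinned to the chosen CDN. The weights, hostnames and health flags are invented for illustration; a real decision service would feed them from live monitoring data.

```python
import random

# Hypothetical CDN weights and health flags.
CDNS = {
    "cdn-a": {"weight": 40, "healthy": True},
    "cdn-b": {"weight": 30, "healthy": True},
    "cdn-c": {"weight": 20, "healthy": False},  # currently failing health checks
    "cdn-d": {"weight": 10, "healthy": True},
}

def pick_cdn():
    """Weighted random choice across the CDNs that are currently healthy."""
    candidates = {name: c["weight"] for name, c in CDNS.items() if c["healthy"]}
    names, weights = zip(*candidates.items())
    return random.choices(names, weights=weights, k=1)[0]

def manifest_url_for(viewer_session):
    cdn = pick_cdn()
    # Pin the session to one CDN so a viewer's segments all come from the same host.
    return f"https://{cdn}.example.com/superbowl/master.m3u8?session={viewer_session}"

print(manifest_url_for("abc123"))
```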

Taylor then looks at the tools, such as Mux’s dashboard, which they used to spot problems in the system: both NOC-style tools and multiviewers. They set up three war rooms, each looking at different aspects of the system: connectivity, APIs and so on. This allowed them to focus on what should be communicated, keeping ‘noise’ down so people had the space they needed to do their work while still getting the information required. Taylor then opens up to questions from the floor.

Watch now!
Speaker

Taylor Busch
Senior Director Engineering,
CBS Sports Digital