Video: CDNs: Delivering a Seamless and Engaging Viewing Experience

This video brings together broadcasters, telcos and CDNs to talk about the challenges of delivering a perfect streaming experience to large audiences. Eric Klein from Disney+ addresses the issues along with Fastly’s Gonzalo de la Vega, Jorge Hernandez from Telefonica and Adriaan Bloem from Shahid, moderated by Robert Ambrose.

Eric starts by talking from the perspective of Disney+. Robert asks whether scaling up quickly enough to meet Disney+’s extreme growth has been a challenge. Eric replies that scale is built by having multiple routes to market using multiple CDNs, so the main challenge is making sure they can quickly move into each new market as it is announced. Before launching, they do a lot of research to work out which bitrates are likely to be streamed and on which devices in that market, and will consider offering ABR ladders to match. They work with ISPs and CDNs using Open Caching. Eric has spoken previously about Open Caching, a specification from the Streaming Video Alliance to standardise the API between CDNs and ISPs. Disney+ currently uses 7-8 different providers and never relies on only one method to get content to the CDN. Eric and his team have built their own equipment to manage cache fill.
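Eric’s point about matching ABR ladders to a market’s typical bandwidth and devices can be sketched as a simple lookup. All the figures, market names and rung choices below are invented for illustration; they are not Disney+’s actual research or ladders.

```python
# Hypothetical sketch: trimming an ABR ladder per market based on
# pre-launch research. All numbers below are invented for illustration.

MARKET_RESEARCH = {
    # market: (typical downstream Mbps, dominant device class)
    "market_a": (25.0, "tv"),
    "market_b": (4.0, "mobile"),
}

# Candidate rungs as (height_px, bitrate_kbps)
FULL_LADDER = [(2160, 15000), (1080, 6000), (720, 3000),
               (480, 1500), (360, 800), (240, 400)]

def ladder_for_market(market: str) -> list[tuple[int, int]]:
    """Keep only rungs whose bitrate fits comfortably (with ~50%
    headroom) in the market's typical bandwidth."""
    mbps, device = MARKET_RESEARCH[market]
    budget_kbps = mbps * 1000 * 0.5
    ladder = [rung for rung in FULL_LADDER if rung[1] <= budget_kbps]
    if device == "mobile":
        # No point shipping 4K rungs to phone-dominated markets
        ladder = [r for r in ladder if r[0] <= 1080]
    return ladder

print(ladder_for_market("market_b"))  # only the low rungs survive
```

The headroom factor and device cut-off are arbitrary; the point is only that the ladder offered is a function of per-market research rather than one global list.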

Adriaan serves the MENA market and, whilst the Gulf is fairly easy to address, North Africa is very difficult as internet bandwidths are low and telcos don’t peer except in Marseille. Adriaan sees streaming in Europe and North America as ‘a commodity’ as, relatively, it’s so much easier compared to North Africa. Shahid has had to build its own CDN to reach its markets but, because it is not in competition with the telcos, unlike CDNs, it finds it relatively easy to strike the deals needed for the CDN. Shahid has a very large library, so getting assets in the right place can be difficult. They see an irony in that their AVOD service is very popular, with so many hits on the popular content that it is well cached. Their SVOD content has a very long tail, meaning that despite viewers paying for the service, they risk getting a worse experience because most of the content isn’t being cached.
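Adriaan’s observation, that heavily-watched AVOD titles cache well while a long-tail SVOD library largely misses the cache, can be demonstrated with a toy LRU cache. The catalogue sizes and request patterns below are made up purely to show the effect.

```python
# Toy demonstration (invented numbers): the same cache gives a far worse
# hit ratio on a long-tail catalogue than on a hit-driven one.
from collections import OrderedDict
import random

def hit_ratio(requests, cache_size):
    cache, hits = OrderedDict(), 0
    for asset in requests:
        if asset in cache:
            hits += 1
            cache.move_to_end(asset)        # LRU: mark as recently used
        else:
            cache[asset] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / len(requests)

random.seed(42)
# AVOD-like traffic: requests concentrate on a few popular titles
avod = [random.randint(0, 49) for _ in range(10_000)]
# SVOD-like traffic: requests spread thinly over a huge back catalogue
svod = [random.randint(0, 9_999) for _ in range(10_000)]

print(f"AVOD hit ratio: {hit_ratio(avod, 100):.2f}")   # near 1.0
print(f"SVOD hit ratio: {hit_ratio(svod, 100):.2f}")   # near 0.0
```

Real popularity follows something closer to a Zipf distribution, but even this crude uniform model shows why paying SVOD viewers can see more cache misses than free AVOD viewers.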

Jorge presents his view as both a streaming provider, Movistar, and a telco, Telefonica, which serves Spain and South America. With over 100 PoPs, Telefonica provides a lot of IPTV infrastructure for streaming, but also delivers over the open internet. It has its own CDN, TCDN, which delivers most of its traffic, bursting to commercial CDNs when necessary. Telefonica also supports Open Caching.

Eric explains that the benefit of Open Caching is that, because certain markets are hard to reach, you’re going to need a variety of approaches to get to these markets. This means you’ll have a lot of different companies involved but to have stability in your platform you need to be interfacing with them in the same way. With Open Caching, one command for purge can be sent to everyone at once. For Adriaan, this is “almost like a dream” as he has 6 different dashboards and is living through the antithesis of Open Caching. He says it can be very difficult to track the different failovers on the CDNs and react.
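The ‘one purge command sent to everyone’ idea can be sketched as a fan-out behind a single interface. The class and method names below are invented stand-ins, not the actual Streaming Video Alliance Open Caching API.

```python
# Hypothetical sketch of the single-interface purge Eric describes:
# because every provider is driven through the same standardised API,
# one call fans out to all of them. Endpoint shapes are invented here.

class OpenCachingNode:
    """Stand-in for one CDN or ISP cache exposing a standardised purge API."""
    def __init__(self, name):
        self.name = name
        self.purged = []

    def purge(self, url):
        # In reality this would be an authenticated HTTP call to the
        # provider's Open Caching endpoint.
        self.purged.append(url)
        return {"provider": self.name, "url": url, "status": "accepted"}

def purge_everywhere(nodes, url):
    """One command, delivered to every provider through the same interface."""
    return [node.purge(url) for node in nodes]

nodes = [OpenCachingNode(n) for n in ("cdn-a", "cdn-b", "isp-cache-c")]
results = purge_everywhere(nodes, "https://example.com/asset/manifest.m3u8")
print(results)
```

Contrast this with Adriaan’s situation: without a common API, the same purge means logging in to six different dashboards.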

Gonzalo points out how far CDNs like Fastly have come. Recently they had 48 hours’ notice to enable resources for 1-million concurrent views, which is the same size as the whole of the Fastly CDN some years ago. Fastly are happy to be part of customers’ multi-CDN solutions and, when their customers do live video, Fastly recommend that they have more than one CDN simply for protection against major problems. Thinking about live video, Eric says that everything at Disney+ is designed ‘live first’ because if it works for live, it will work for VoD.

The panel finishes by answering questions from the audience.

Watch now!
Free registration required

Speakers

Eric Klein
Director, Media Distribution, CDN Technology,
Disney+
Jorge Hernandez
Head of CDN Development and Deployment,
Telefonica/Movistar
Adriaan Bloem
Head of Infrastructure,
Shahid
Gonzalo de la Vega
VP Strategic Projects,
Fastly
Robert Ambrose
Co-Founder and Research Director,
Caretta Research

Video: Standardising Microservices on the Basis of MCMA

Microservices are a way of splitting up large programs and systems into many, many smaller parts. Building up complex workflows from these single-function modules has many benefits, including simpler programming and testing, seamless upgrades with no downtime, scalability and the ability to run in the cloud. Microservices were featured last week on The Broadcast Knowledge. Microservices do present challenges, such as orchestrating hundreds of processes into a coherent media workflow.

The EBU is working with SMPTE and the Open Services Alliance for Media on a cloud-agnostic open source project called MCMA, Media Cloud Microservice Architecture. The MCMA project isn’t a specification; rather it is a set of software providing tools to enable a move to microservices. We hear from Alexandre Rouxel from the EBU and Loïc Barbou from Bloomberg that this project started out of a need from some broadcasters to create a scalable infrastructure that could sit on a variety of cloud platforms.

Direct link

What is a service? MCMA creates a standard idea of a service that contains standard operations. Part of the project is a set of libraries for NodeJS and .NET which deal with the code needed time and time again, such as logging, handling data repositories and security. Joost Rovers explains how the Job Processor and Service Registry work together to orchestrate the media workflows and ensure there’s a list of every microservice available, and how to communicate with it. MCMA places shims in front of cloud services on GCP, AWS, Azure etc. so that each service looks the same. Joost outlines the libraries and modules available for MCMA and how they could be used.
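The Service Registry / Job Processor interplay Joost describes can be sketched roughly as below. This is a Python stand-in with invented class and method names; the real MCMA libraries are NodeJS and .NET and differ in detail.

```python
# Rough sketch of the Service Registry idea: a catalogue of every
# available microservice, keyed by the job type it performs, which a
# job processor queries to route work. Names here are invented; the
# real MCMA libraries (NodeJS/.NET) differ.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # job_type -> endpoint

    def register(self, job_type, endpoint):
        self._services[job_type] = endpoint

    def discover(self, job_type):
        return self._services.get(job_type)

class JobProcessor:
    def __init__(self, registry):
        self.registry = registry

    def submit(self, job_type, payload):
        endpoint = self.registry.discover(job_type)
        if endpoint is None:
            raise LookupError(f"no service registered for {job_type!r}")
        # A real implementation would POST the job to the endpoint and
        # track its lifecycle; here we just return the routing decision.
        return {"routed_to": endpoint, "job_type": job_type, "payload": payload}

registry = ServiceRegistry()
registry.register("transcode", "https://transcode.example/api")
processor = JobProcessor(registry)
print(processor.submit("transcode", {"input": "s3://bucket/master.mxf"}))
```

The shims in front of GCP, AWS and Azure serve the same goal as the registry: every service, wherever it runs, is addressed the same way.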

Watch now!
Speakers

Loic Barbou Loïc Barbou
Consultant,
Bloomberg
Alexandre Rouxel Alexandre Rouxel
Data Scientist & Project Coordinator,
EBU
Joost Rovers Joost Rovers
Managing Director,
Rovers IT

Video: Overview of MPEG’s Network-Based Media Processing

Building complex services from microservices is not simple. While making a static workflow can be practical, though time-consuming, making one that can easily be changed to match a business’s changing needs is another matter. If an abstraction layer could be placed over the top of the microservices themselves, it would allow people to concentrate on making the workflow correct and leave the abstraction layer to orchestrate the microservices below. This is what MPEG’s Network-Based Media Processing (NBMP) standard achieves.

Developed to counteract the fragmentation in cloud and single-vendor deployments, NBMP delivers a unified way to describe a workflow with the platform controlled below. Iraj Sodagar spoke at Mile High Video 2020 to introduce NBMP, now published as ISO/IEC 23090-8. NBMP provides a framework that allows you to deploy and control media processing using existing building blocks called functions fed by sources and sinks, also known as inputs and outputs. A Workflow Manager process is used to actually start and control the media processing, fed with a workflow description that describes the processing wanted as well as the I/O formats to use. This is complemented by a Function Discovery API and a Function Repository to discover and get hold of the functions needed. The Workflow Manager gets the function and uses the Task API to initiate the processing of media. The Workflow Manager also deals with finding storage and understanding networking.
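As a rough illustration, a workflow description is a document naming the functions to chain and the I/O formats to use, which the Workflow Manager then resolves against the Function Repository. The structure below is a simplified, invented sketch, not the normative schema from ISO/IEC 23090-8.

```python
# Simplified, invented sketch of what an NBMP-style workflow description
# conveys: which functions to chain between a source and a sink, and the
# input/output formats. This is NOT the normative ISO/IEC 23090-8 schema.

workflow_description = {
    "name": "live-abr-pipeline",
    "input": {"protocol": "rtmp", "codec": "h264"},      # the source
    "output": {"protocol": "hls", "codecs": ["h264", "aac"]},  # the sink
    "tasks": [
        {"function": "decoder"},
        {"function": "scaler", "config": {"heights": [1080, 720, 480]}},
        {"function": "encoder", "config": {"codec": "h264"}},
        {"function": "packager", "config": {"format": "hls"}},
    ],
}

def connected_functions(desc):
    """The Workflow Manager would look up each function name in the
    Function Repository and wire them source -> ... -> sink."""
    return [t["function"] for t in desc["tasks"]]

print(connected_functions(workflow_description))
```

The Workflow Manager, not the author of this document, is responsible for instantiating each function as a task, finding storage and wiring the network between them.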

Next, Iraj takes us through the framework APIs which allow the abstraction layer to operate, in principle, across multiple cloud providers. The standard contains 3 APIs: Workflow, Task and Function. The APIs follow a CRUD pattern, each offering Create, Update, Discover, Delete and similar actions which apply to workflows, tasks and functions, e.g. CreateWorkflow. The APIs can operate synchronously or asynchronously.
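The CRUD pattern across the three APIs might look something like the client sketch below. The operation-per-resource naming (e.g. CreateWorkflow) follows the pattern described in the talk, but the client class itself and all transport details are hypothetical.

```python
# Hypothetical client sketch of the NBMP-style CRUD pattern: each of the
# Workflow, Task and Function resources gets Create/Update/Discover/Delete
# style operations, e.g. CreateWorkflow. Transport details are omitted;
# only the Workflow resource is shown for brevity.

class NBMPWorkflowClient:
    def __init__(self):
        self._workflows = {}

    def create_workflow(self, wf_id, description):   # cf. CreateWorkflow
        self._workflows[wf_id] = description
        return wf_id

    def update_workflow(self, wf_id, description):   # cf. UpdateWorkflow
        if wf_id not in self._workflows:
            raise KeyError(wf_id)
        self._workflows[wf_id] = description

    def discover_workflow(self, wf_id):              # cf. Discover
        return self._workflows.get(wf_id)

    def delete_workflow(self, wf_id):                # cf. DeleteWorkflow
        self._workflows.pop(wf_id, None)

client = NBMPWorkflowClient()
client.create_workflow("wf1", {"tasks": ["decode", "encode"]})
print(client.discover_workflow("wf1"))
client.delete_workflow("wf1")
print(client.discover_workflow("wf1"))  # None once deleted
```

A real deployment would make these HTTP calls, and could issue them synchronously or asynchronously as the standard allows.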

Split rendering is possible by splitting up the workflow into sub workflows which allows you to run certain tasks nearer to certain resources, say storage, or in certain locations like in the case of edge computing where you want to maintain low-latency by processing close to the user. In fact, NBMP has been created with a view to being able to be used by 5G operators and is the subject of two study items in 3GPP.

Watch now!
Speaker

Iraj Sodagar
Principal Researcher,
Tencent America

Video: The Fenix Project: Cloud-Based Disaster Recovery

“Moving to the cloud” means something different for each broadcaster: some are using it for live production, some for their archives, some just for streaming. While confidence in the cloud is increasing and the products are maturing, many companies are choosing to put their ‘second MCR’ in the cloud or, say, tier-2 playout to test the waters, gain experience and wait for a fuller feature set. Sky Italia has chosen to put all its disaster recovery transmission capability in the cloud.

Davide Gandino joins us from Mile High Video 2020 to show – and demo – their disaster recovery deployment which covers playout, processing, distribution and delivery to the end-user. Davide explains this was all driven by a major fire at their facility in Rome. At the time, they managed to move their services to Milan with minimal on-air impact but, with destroyed equipment, they were left to rebuild. It wasn’t long before that rebuild was planned for the cloud.

This is no insignificant project: of its 117 channels, only 39 are third-party pass-through, they go out to four platforms, and the full deployment uses 800 cloud encoders. This amounts to 4Gbps being sent up to the cloud and 8Gbps returning. Davide highlights that the design uses both Google and Amazon cloud infrastructure, with 3 availability zones in use on each.

A vital part of this project design is that the 800 encoders do not all run 24×7; keeping them all running would miss the point of the cloud. The only scalable alternative is fully automated deployment, which is exactly what Sky chose to do. The key tenets of the project are:

  • Everything automated – Deployment and configuration are automatic
  • Software Defined – All Applications to be software defined
  • Distributed – Distributed solution to absorb the loss of one site
  • Synchronised – All BAU (business as usual) changes to automatically update the DR configuration. This is done with what Sky call the ‘Service Control Layer’.
  • Observed – Monitoring of the DR system will be as good or better than usual operation

To activate the DR, Davide tells us that there is a first-stage script which launches a Kubernetes cluster on which the management software sits, plus 13 Kubernetes clusters across Google and AWS which run the infrastructure itself. The second script uses Jenkins jobs to deploy and configure the infrastructure, such as encoders and DRM modules. Davide finishes the talk by showing us a video of the deployment of the infrastructure, explaining what is happening as we see the platform being built.

Watch now!
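The two-stage activation Davide describes can be sketched as below. The cluster counts match the talk (one management cluster plus 13 infrastructure clusters); the function names, workload names and round-robin placement are invented for illustration.

```python
# Invented sketch of the two-stage DR activation: stage 1 brings up the
# Kubernetes clusters, stage 2 runs the deployment jobs (Jenkins, in
# Sky's case) that install and configure the workloads. The cluster
# counts match the talk; everything else is illustrative.

def stage_one():
    """Launch the management cluster plus 13 infrastructure clusters
    spread across the two cloud providers."""
    clusters = ["mgmt"]
    clusters += [f"infra-{i}" for i in range(1, 14)]
    return clusters

def stage_two(clusters, workloads):
    """Deploy workloads (encoders, DRM modules, ...) onto the
    infrastructure clusters, skipping the management cluster."""
    targets = [c for c in clusters if c != "mgmt"]
    plan = []
    for i, workload in enumerate(workloads):
        plan.append((workload, targets[i % len(targets)]))  # simple round-robin
    return plan

clusters = stage_one()
plan = stage_two(clusters, ["encoder", "drm", "packager"])
print(len(clusters), plan)
```

Because both stages are scripts rather than manual runbooks, the same automation that keeps BAU changes synchronised can rebuild the entire DR platform on demand.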
Speaker

Davide Gandino
Head of Streaming, Cloud & Computing Systems,
Sky Italia