Video: ABR Streaming and CDN Performance

Hot on the heels of yesterday’s video all about Adaptive Bitrate (ABR) streaming, we have Yuriy Reznik, VP of Research at Brightcove, looking at the subject in detail. Yesterday we outlined the use of ABR, showing how it is fundamental to online streaming.

Brightcove, an online video hosting platform with its own video player, has a lot of experience of delivery over CDNs. We saw yesterday the principles that the player, and to an extent the server, can use to deal with changing network conditions (and, to an extent, changing client CPU load) by moving up and down the ABR ladder. This talk, however, focusses on how the CDN in the middle complicates matters as it tries its best to get the right chunks to the right place at the right time.
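
To make the player side of that concrete, here’s a minimal sketch of throughput-based rendition selection from an ABR ladder. This is not Brightcove’s actual player logic; the ladder bitrates and the safety margin are illustrative assumptions.

```python
# Illustrative sketch of simple throughput-based ABR rendition selection.
# The ladder and the safety margin are made-up values for the example.

LADDER_KBPS = [400, 800, 1600, 3200, 6000]  # available renditions, lowest first

def pick_rendition(measured_throughput_kbps: float, safety_margin: float = 0.8) -> int:
    """Return the highest ladder bitrate that fits within a fraction
    of the measured throughput; fall back to the lowest rendition."""
    budget = measured_throughput_kbps * safety_margin
    viable = [b for b in LADDER_KBPS if b <= budget]
    return max(viable) if viable else LADDER_KBPS[0]

# e.g. a 3 Mbps connection with 20% headroom selects the 1600 kbps rendition
print(pick_rendition(3000))  # -> 1600
```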

How often are there ‘cache misses’ where the right file isn’t already in place? And how can you predict what’s necessary?
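
As a rough illustration of why misses happen, the sketch below simulates an LRU edge cache serving segment requests; the cache size and the request sequence are invented purely to show how a hit ratio is measured.

```python
from collections import OrderedDict

# Toy LRU edge cache to illustrate how a cache hit ratio might be measured.
# The cache size and request pattern are invented for illustration.

def hit_ratio(requests, cache_size):
    cache, hits = OrderedDict(), 0
    for segment in requests:
        if segment in cache:
            hits += 1
            cache.move_to_end(segment)      # refresh recency on a hit
        else:
            cache[segment] = True           # fetch from origin on a miss
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict the least recently used
    return hits / len(requests)

# Popular segments requested repeatedly give a high hit ratio;
# a long tail of one-off requests drives it down.
print(hit_ratio(["s1", "s2", "s1", "s3", "s1", "s4", "s2"], cache_size=2))
```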

Yuriy even goes into detail about how to work out when HEVC deployment makes sense for you. After all, even if you do deploy HEVC, do you need to do it for all assets? And if you only deploy it for some assets, how do you know which? Also, when does it make sense to deploy CMAF? In this talk, we hear the answers.
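
The talk presents Yuriy’s own analysis; as a loose illustration of the kind of trade-off involved, the sketch below weighs the one-off cost of an extra HEVC encode against the CDN savings from HEVC’s lower bitrate. Every number here is an assumption for the example, not the talk’s data.

```python
# Back-of-envelope model for per-asset HEVC deployment: encode the asset
# in HEVC only if projected CDN savings exceed the extra encoding cost.
# All figures below are illustrative assumptions, not the talk's data.

def hevc_worthwhile(views, avg_watch_hours, avc_mbps=5.0,
                    hevc_saving=0.4, cdn_cost_per_gb=0.02,
                    extra_encode_cost=50.0):
    gb_per_hour_avc = avc_mbps * 3600 / 8 / 1000          # Mbps -> GB/hour
    gb_saved = views * avg_watch_hours * gb_per_hour_avc * hevc_saving
    return gb_saved * cdn_cost_per_gb > extra_encode_cost

# A popular asset easily justifies the extra encode...
print(hevc_worthwhile(views=100_000, avg_watch_hours=0.5))   # True
# ...while a long-tail asset may not.
print(hevc_worthwhile(views=500, avg_watch_hours=0.5))       # False
```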

The slides for this talk

Watch the video now!

Speaker

Yuriy Reznik
VP, Research
Brightcove

Video: Adaptive Bitrate Algorithms: How They Work

Streaming on the net relies on delivering video at a bitrate your connection can handle. Called ‘Adaptive Bitrate’ or ABR, it’s hardly possible to think of streaming without it. While the idea might seem simple at first – just send several versions of your video – it quickly gets nuanced.

Streaming experts Streamroot take us through how ABR works in this talk from Streaming Media East 2016. While the talk is a few years old, the fundamentals haven’t changed, so this remains a useful talk which not only introduces the topic but goes into detail on how to implement ABR.

The most common streaming format is HLS, which relies on the player downloading the video in sections – small files each representing around 3 to 10 seconds of video. For HLS and similar technologies, the idea is simply to allow the player, when it’s time to download the next part of the video, to choose from a selection of files, each with the same video content but each at a different bitrate.
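
For a concrete picture, here’s a minimal sketch that parses the variant list out of an HLS master playlist and picks the best rendition for a given bandwidth. The playlist below is a simplified, hand-written example, not a real stream.

```python
import re

# Simplified, hand-written master playlist; real ones carry more attributes.
MASTER_PLAYLIST = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def parse_variants(playlist: str):
    """Yield (bandwidth, uri) pairs from #EXT-X-STREAM-INF entries."""
    lines = playlist.strip().splitlines()
    for i, line in enumerate(lines):
        m = re.search(r"BANDWIDTH=(\d+)", line)
        if m and i + 1 < len(lines):
            yield int(m.group(1)), lines[i + 1]

def choose_variant(playlist: str, available_bps: int) -> str:
    """Pick the highest-bandwidth variant that fits, else the lowest."""
    variants = sorted(parse_variants(playlist))
    fitting = [uri for bw, uri in variants if bw <= available_bps]
    return fitting[-1] if fitting else variants[0][1]

print(choose_variant(MASTER_PLAYLIST, 3_000_000))  # -> mid/index.m3u8
```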

Allowing a player to choose which chunk it downloads means it can adapt to changing network conditions, but it does imply that each file has to contain exactly the same frames of video, else there would be a jump when the next file is played. So we have met our first complication. Furthermore, each encoded stream needs to be segmented in the same way and, in MPEG codecs where you can only cut files on I-frame boundaries, it means the encoders need to synchronise their GOP structure, giving us our second complication.
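
In practice, that alignment comes down to forcing the same fixed keyframe interval in every rendition’s encoder; the sketch below shows the arithmetic, with illustrative frame rates and segment length, for choosing a GOP size that lands I-frames exactly on segment boundaries.

```python
# To cut every rendition at identical points, each encoder must place an
# I-frame exactly on segment boundaries, i.e. the GOP length (in frames)
# must fit evenly into the segment duration. Values are illustrative.

def gop_frames(fps: float, segment_seconds: float) -> int:
    frames = fps * segment_seconds
    if frames != int(frames):
        raise ValueError("segment duration must be a whole number of frames")
    return int(frames)

# A 6-second segment at 30 fps needs an I-frame every 180 frames (or a
# divisor of 180, e.g. 90 for a 3-second GOP) in *every* rendition.
for fps in (25, 30, 50, 60):
    print(fps, "fps ->", gop_frames(fps, segment_seconds=6), "frames per segment")
```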

These difficulties, and many more, along with Streamroot’s solutions, are presented by Erica Beavers and Nikolay Rodionov, including experiments and proofs of concept they have carried out to demonstrate their efficacy.

Watch now!

Speakers

Erica Beavers
Head of Marketing & Partnerships,
Streamroot
Nikolay Rodionov
Co-Founder, CPO
Streamroot

Video: Google Next 19 – Building a Next-Generation Streaming Platform with Sky

Google Cloud, also called GCP – Google Cloud Platform, continues to invest in Media & Entertainment at a time when many broadcasters, having completed their first cloud projects, are considering ways to ensure they are not beholden to any one cloud provider.

Google’s commitment is evident in their still-recent appointment of ex-Discovery CTO John Honeycutt, this month’s announcement of Viacom’s adoption of Google Cloud and the launch of their ‘deploy on any cloud platform’ service called Anthos (official site).

So it’s no surprise that, here, Google asked UK broadcaster Sky and their technology partner for the project, Harmonic Inc., to explain how they’ve been delivering channels in the cloud and cutting costs.

Melika Golkaram from Google Cloud sets the scene by explaining some of the benefits of Google Cloud for Media and Entertainment, making it clear that, for them, M&E business isn’t simply a ‘nice to have’ on the side of being a cloud platform. Highlighting their investment in undersea cables and globally-distributed edge servers, among others, Melika hands over to Sky’s Jeff Webb to talk about how Sky have leveraged the platform.

Jeff explains some of the ways that Sky deals with live sports. Whilst sports require high quality video and low latency workflows, and have high peak live-streaming audiences, they can also be cyclical, with equipment left unused between events. High peak workloads and long periods of equipment left fallow play directly into the benefits of cloud. So we’re not surprised when Jeff says it halved the replacement cost of an ageing system; rather, we want to know more about how they did it.
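
A toy utilisation model makes the economics intuitive: pay for peak capacity all year on-premise, or pay only for the hours events actually run in the cloud. All the prices and hours below are invented for the example, not Sky’s figures.

```python
# Toy cost comparison for cyclical sports workloads. On-prem capacity is
# paid for all year; cloud capacity is paid only while events run.
# Every figure here is an invented assumption, not Sky's numbers.

def onprem_cost(peak_units, annual_cost_per_unit=2000.0):
    return peak_units * annual_cost_per_unit          # sized for peak, always on

def cloud_cost(peak_units, event_hours, hourly_cost_per_unit=0.90):
    return peak_units * event_hours * hourly_cost_per_unit

peak, event_hours = 40, 1200   # e.g. units needed at peak, hours/year in use
print("on-prem:", onprem_cost(peak))                  # 80000.0
print("cloud:  ", cloud_cost(peak, event_hours))      # 43200.0 -> roughly halved
```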

The benefits that Sky saw revolve around fault healing, geographic resilience, devops, speed of deployment and improved monitoring, including more options to leverage open source. Jeff describes these, and other, drivers before mentioning the importance of being able to move this system between on-premise and different cloud providers.

Before handing over to Harmonic’s Moore Macauley, we’re shown the building blocks of the Sky Sports F1 channel in the cloud and the ways that fault healing happens. Moore then goes on to show how Harmonic harnessed their ‘VOS’ microservices platform, which deals with ingest, compression, encryption, packaging and origin servers. Harmonic delivered this using GKE, Google Cloud’s managed Kubernetes platform, in multiple regions for fault testing, to allow for A/B testing and much more.
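
As a flavour of the fault-healing idea (this is not Harmonic’s VOS internals), here’s a minimal sketch that probes a list of regional endpoints and routes to the first healthy one; the endpoint URLs and health-check scheme are assumptions for illustration.

```python
# Minimal illustration of region failover: probe each regional endpoint
# and route traffic to the first healthy one. URLs are hypothetical.
import urllib.request

REGIONS = [
    "https://origin-europe-west2.example.com/healthz",
    "https://origin-europe-west1.example.com/healthz",
    "https://origin-us-east1.example.com/healthz",
]

def healthy(url: str, timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:                     # covers URLError, timeouts, HTTP errors
        return False

def pick_region() -> str:
    for url in REGIONS:
        if healthy(url):
            return url                  # a real system would also drain traffic
    raise RuntimeError("no healthy region available")
```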

Let’s face it, even after all this time, it can still be tricky getting past the hype of cloud. Here we get a glimpse of a deployed-in-real-life system which not only gives an insight into how these services can (and do) work, but it also plots another point on the graph showing major broadcasters embracing cloud, each in their own way.

Watch now!

Speakers

Jeff Webb
Principal Streaming Architect
Sky
Moore Macauley
Director, Product Architecture
Harmonic
Melika Golkaram
Customer Engineer,
Google Cloud Media

Video: Everything You Wanted to Know About ATSC 3.0

ATSC 3.0 is the next sea change in North American broadcasting, shared with South Korea, Mexico and other locations. Depending on your viewpoint, this could be as fundamental as the move to digital, in lockstep with the move to HD programming, all those years ago.

ATSC 3.0 takes terrestrial broadcasting into the IP world, meaning everything transmitted over the air is carried over IP, and it brings with it the ability to split the bandwidth into separate pipes.

Here, Dr. Richard Chernock presents a detailed description of the features available within ATSC 3.0. He explains the new constellations and modulation properties, delving into the ability to split your transmission bandwidth into separate ‘pipes’. These pipes can have different modulation parameters, robustness and so on. The switch from 8VSB to OFDM allows for Single Frequency Networks, which can actually help reception (thanks to guard intervals).
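
To illustrate the pipes idea, the sketch below models how a channel’s capacity might be divided among pipes with different robustness settings; the modulation orders, code rates and symbol budget are simplified, illustrative numbers, not real ATSC 3.0 parameters.

```python
# Simplified illustration of splitting one RF channel into 'pipes' with
# different robustness. Figures are illustrative, not real ATSC 3.0 values.
import math

TOTAL_SYMBOL_RATE = 5_000_000  # channel symbols/second shared by all pipes

pipes = [
    # (name, share of symbols, modulation order, code rate)
    ("robust mobile pipe", 0.30, 4,    1/2),   # QPSK, heavy protection
    ("main UHD pipe",      0.70, 1024, 3/4),   # 1024-QAM, high throughput
]

for name, share, order, code_rate in pipes:
    bits_per_symbol = math.log2(order)
    mbps = TOTAL_SYMBOL_RATE * share * bits_per_symbol * code_rate / 1e6
    print(f"{name}: {mbps:.1f} Mbit/s")   # robust: 1.5, main: 26.3
```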

Additionally, the standard supports HEVC and scalable video (SHVC), whereby a single UHD encode can be sent comprising an HD base layer, which can be decoded by every decoder, plus an ‘enhancement layer’ which can optionally be decoded to produce a full UHD output for those decoders/displays which can support it.
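
Here’s a minimal sketch of the layered idea (schematic only, not a real SHVC decoder): every receiver decodes the HD base layer, and capable ones additionally apply the enhancement layer to reach UHD.

```python
# Schematic of scalable (SHVC-style) decoding: the base layer alone yields
# HD; combining it with the enhancement layer yields UHD. Not a real decoder.
from typing import Optional

def decode(base_layer: bytes, enhancement_layer: Optional[bytes] = None,
           supports_uhd: bool = False) -> str:
    picture = f"HD picture from {len(base_layer)} base-layer bytes"
    if enhancement_layer is not None and supports_uhd:
        picture = (f"UHD picture from base + "
                   f"{len(enhancement_layer)} enhancement-layer bytes")
    return picture

# A legacy HD set ignores the enhancement layer; a UHD set uses both.
print(decode(b"\x00" * 1000, b"\x00" * 400, supports_uhd=False))  # HD
print(decode(b"\x00" * 1000, b"\x00" * 400, supports_uhd=True))   # UHD
```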

With the move to IP, there is a blurring of broadcast and broadband. Broadband can be used to deliver extra audio tracks to be played alongside the main video, and it provides a return path to the broadcaster which can help with interactivity and audience measurement.

Dr. Chernock covers HDR, ‘better pixels’ and Next Generation Audio, as well as improvements to Emergency Alert functionality and accessibility features.

Speaker

Dr. Richard Chernock Dr. Richard Chernock
Chief Science Officer,
Triveni Digital