Latency seems to be the new battleground for streaming services. While optimising bandwidth and quality remain highly important, they are becoming mature parts of the business of streaming, whereas latency, and the technologies to minimise it – as Apple showed this month – are still developing and vying for position.
Here, the Streaming Video Alliance brings together people from large streaming services to explore this topic, finding out what they’ve been doing to reduce latency, the problems they’ve faced and the solutions on the table.
Low-latency streaming is always a compromise, but what can be done to keep QoE high?
This on-demand webinar looks at CMAF and presents some real-world data on this low-latency technique. It starts by explaining that CMAF, like HLS and other HTTP streaming protocols, delivers video as a series of small files – and, for low latency, breaks each segment into still-smaller chunks that can be sent as soon as they are encoded. Olivier and Alain from Harmonic explain how this is done, look at some of the trade-offs and compromises needed, and introduce techniques to keep QoE high. They also compare deployment in the cloud with on-premises.
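To make the latency benefit concrete, here is a minimal back-of-the-envelope sketch (not from the webinar – the segment, chunk and buffer sizes are illustrative assumptions) of why delivering chunks as they are encoded, rather than waiting for whole segments, lowers the floor on live latency:

```python
# Illustrative sketch: how CMAF chunking reduces the minimum live latency
# compared with waiting for whole segments. All numbers are assumptions
# chosen for illustration, not figures from the webinar.

def min_live_latency(segment_s: float, chunk_s: float, buffered_segments: int) -> float:
    """Rough lower bound on the packaging-side contribution to live delay.

    With classic segment-based delivery the encoder must finish a whole
    segment before the CDN can serve it; with CMAF chunked encoding only one
    chunk needs to be complete, so the initial wait shrinks to one chunk.
    """
    wait_for_first_data = chunk_s if chunk_s < segment_s else segment_s
    return wait_for_first_data + buffered_segments * segment_s

# Classic: 6 s segments, player buffers 3 segments.
classic = min_live_latency(segment_s=6.0, chunk_s=6.0, buffered_segments=3)
# CMAF low latency: same 6 s segments, but 0.5 s chunks and a 1-segment buffer.
cmaf = min_live_latency(segment_s=6.0, chunk_s=0.5, buffered_segments=1)
print(classic, cmaf)  # 24.0 vs 6.5
```

The compromise the speakers discuss falls out of the same arithmetic: the smaller the chunks and buffer, the less margin the player has to absorb network variation, which is where QoE suffers.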
Pieter-Jan Speelmans talks about playback trade-offs and optimisations within the player. CMAF allows the buffer to be reduced: on a bad network the buffer may stay similar to ‘normal’, but on good networks it can be brought down significantly. He also talks about how ABR switching is impacted by GOP length even in CMAF.
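A rough sketch of the two ideas above – this is an assumption-laden illustration, not Speelmans’ actual player logic; the thresholds and the 2-sigma headroom rule are invented for the example:

```python
# Sketch (assumed logic, not THEOplayer's implementation): choose a player
# buffer target from how comfortably the network exceeds the stream bitrate,
# and show why an ABR switch only takes effect at the next GOP boundary.

def buffer_target_s(throughput_mbps: float, throughput_stddev_mbps: float,
                    bitrate_mbps: float) -> float:
    """Smaller buffer when the link is fast and stable; 'normal' otherwise."""
    headroom = throughput_mbps - 2 * throughput_stddev_mbps  # pessimistic estimate
    if headroom > 2 * bitrate_mbps:
        return 1.0   # stable, fast link: run a ~1 s low-latency buffer
    if headroom > bitrate_mbps:
        return 3.0
    return 6.0       # poor link: fall back to a 'normal' buffer

def switch_delay_s(position_s: float, gop_s: float) -> float:
    """Time until the next GOP boundary, the earliest point a quality
    switch can apply - which is why GOP length still matters in CMAF."""
    return gop_s - (position_s % gop_s)

print(buffer_target_s(20.0, 1.0, 5.0))           # 1.0
print(switch_delay_s(position_s=7.0, gop_s=2.0)) # 1.0
```

The second function captures the point about GOPs: even with sub-second chunks, a new rendition can only start cleanly at an IDR frame, so long GOPs delay how quickly ABR can react.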
Viaccess-Orca explains how DRM works with CMAF and looks at some of the challenges, including licence acquisition times and overloading licence servers at the beginning of events. Akamai’s Will Law explains some benefits of CMAF, the near-real-time delivery that HTTP/1.1 chunked transfer provides, and how downloading chunks at full speed causes problems when several clients share the same broadband link.
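One common mitigation for the licence-server spike at the start of an event is to spread client requests with random jitter. The sketch below is an assumption for illustration, not Viaccess-Orca’s implementation; the 30-second window and client count are invented numbers:

```python
# Sketch (assumed technique, not Viaccess-Orca's code): jitter each client's
# DRM licence fetch so an event start doesn't hit the licence server with
# every request in the same instant.

import random

def licence_request_offset_s(max_jitter_s: float, rng: random.Random) -> float:
    """Delay the licence fetch by a random offset ahead of playback start,
    flattening the request spike across the jitter window."""
    return rng.uniform(0.0, max_jitter_s)

rng = random.Random(42)  # seeded for a reproducible example
offsets = [licence_request_offset_s(30.0, rng) for _ in range(10_000)]

# 10,000 clients spread over 30 s gives roughly 10000/30 ~ 333 requests in the
# busiest second, instead of 10,000 simultaneous ones.
peak_per_second = max(sum(1 for o in offsets if int(o) == s) for s in range(30))
print(peak_per_second)
```

The same thundering-herd concern applies to key rotation mid-event, which is part of why licence acquisition time matters so much in low-latency workflows.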
There are lots of good talks on CMAF, but this is one of the few that treats CMAF not as theory, but as something deployable today.
Google Cloud, also called GCP – Google Cloud Platform, continues to invest in Media & Entertainment at a time when many broadcasters, having completed their first cloud projects, are considering ways to ensure they are not beholden to any one cloud provider.
So it’s no surprise that, here, Google asked UK broadcaster Sky and their technology partner for the project, Harmonic Inc., to explain how they’ve been delivering channels in the cloud and cutting costs.
Melika Golkaram from Google Cloud sets the scene by explaining some of the benefits of Google Cloud for Media and Entertainment, making it clear that, for them, the M&E business isn’t simply a ‘nice to have’ on the side of being a cloud platform. Highlighting their investment in undersea cables and globally distributed edge servers, among other things, Melika hands over to Sky’s Jeff Webb to talk about how Sky have leveraged the platform.
Jeff explains some of the ways that Sky deals with live sports. Whilst sports require high-quality video, low-latency workflows and draw high peak live-streaming audiences, they are also cyclical, leaving equipment unused between events. High peak workloads and long periods of kit lying fallow play directly into the strengths of cloud. So we’re not surprised when Jeff says the move halved the replacement cost of an ageing system; rather, we want to know more about how they did it.
The benefits Sky saw revolve around fault healing, geographic resilience, DevOps, speed of deployment and improved monitoring, including more options to leverage open source. Jeff describes these and other drivers before mentioning the importance of being able to move the system between on-premises and different cloud providers.
Before handing over to Harmonic’s Moore Macauley, Jeff shows the building blocks of the Sky Sports F1 channel in the cloud and discusses how fault healing happens. Moore then goes on to show how Harmonic harnessed their ‘VOS’ microservices platform, which handles ingest, compression, encryption, packaging and origin serving. Harmonic delivered this using GKE, Google Cloud’s managed Kubernetes platform, deployed in multiple regions to allow for fault testing, A/B testing and much more.
Let’s face it, even after all this time, it can still be tricky getting past the hype of cloud. Here we get a glimpse of a deployed-in-real-life system which not only gives an insight into how these services can (and do) work, but it also plots another point on the graph showing major broadcasters embracing cloud, each in their own way.
Tuesday, March 12th, 2019. 8am PT / 11am ET / 16:00 GMT
ATSC 3.0 is a big change from previous ATSC and DVB transmission standards due to its ability to mix IP with traditional broadcast signals. By merging the best of IP with the best of DTH transmission, ATSC 3.0 enables new business models and helps broadcasters bring their current offerings up to date.
But what about the reality? Weigel Broadcast joined forces with top-tier companies to build out the station, including Rohde & Schwarz, Harmonic Inc., Triveni Digital, Enensys, Alive Telecommunications and Sony. Each partner contributed essential equipment and resources for the sign-on of the ATSC 3.0 roll-out dubbed ‘Chicago 3.0.’
In this webinar, Harmonic’s Jean Macher is joined by Kyle Walker, VP of Technology at Weigel Broadcast, to take us through why native IP transport is such a benefit and how they managed the experience across all viewers.
The webinar covers what was deployed, how it worked and the results. It also covers the principles of ATSC 3.0 services and the use cases involved in Chicago 3.0.