Video: WAVE (Web Application Video Ecosystem) Update

With wide membership including Apple, Comcast, Google, Disney, Bitmovin, Akamai and many others, the WAVE interoperability effort is tackling web media encoding, playback and platform issues using global standards.

John Simmons from Microsoft takes us through the history of WAVE, looking at the changes in the industry since 2008 and WAVE’s involvement in them. CMAF, backed by over 60 major companies, is an important recent technology milestone which is closely entwined with WAVE’s activity.

The WAVE Content Specification is derived from the ISO/IEC standard, “Common media application format (CMAF) for segmented media”. CMAF is the container for the audio, video and other content. It’s not a protocol like DASH, HLS or RTMP; rather, it’s more like an MPEG-2 transport stream. Much of the current interest in CMAF stems from its ability to deliver very low latency streaming of less than 4 seconds, but it’s also important because it represents a standardisation of fMP4 (fragmented MP4) practices.
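To give a concrete feel for what “standardised fMP4” means, here is a minimal sketch (TypeScript, with a purely illustrative file name) that lists the top-level ISO BMFF boxes in a CMAF track file. A CMAF header (init segment) normally carries ftyp and moov boxes, and each CMAF fragment is essentially a moof/mdat pair; this box structure is what the media profiles discussed below build on.

```typescript
// Minimal sketch: list the top-level ISO BMFF boxes in a CMAF track file.
// A CMAF header (init segment) normally contains ftyp + moov, and each
// CMAF fragment is a moof + mdat pair. The file name is illustrative only.
import { readFileSync } from "fs";

interface Box { type: string; size: number; offset: number; }

function listTopLevelBoxes(buf: Buffer): Box[] {
  const boxes: Box[] = [];
  let offset = 0;
  while (offset + 8 <= buf.length) {
    let size = buf.readUInt32BE(offset);                         // 32-bit box size
    const type = buf.toString("ascii", offset + 4, offset + 8);  // four-character box type
    if (size === 1) size = Number(buf.readBigUInt64BE(offset + 8)); // 64-bit "largesize"
    else if (size === 0) size = buf.length - offset;             // box runs to end of file
    boxes.push({ type, size, offset });
    offset += size;
  }
  return boxes;
}

// A CMAF video segment should show a repeating moof/mdat pattern.
for (const box of listTopLevelBoxes(readFileSync("video_segment.cmfv"))) {
  console.log(`${box.type} @ ${box.offset}, ${box.size} bytes`);
}
```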

Standardising on CMAF allows media profiles to be defined which specify how to encapsulate certain codecs (AV1, HEVC etc.) into the stream. As it’s a published specification, vendors are able to interoperate. Proof of the value of the WAVE project is the three amendments John mentions, issued by MPEG against the CMAF standard, which have come directly from WAVE’s work in validating user requirements.

Whilst defining streaming is important in helping in-cloud vendors work together and in allowing broadcasters to build systems more easily, it’s vital that the decoder devices are on board too, and much work goes into the decoder-device side of things.

On top of dealing with encoding and distribution, WAVE also specifies HTML5 API interoperability, with the aim of defining baseline web APIs to support media web apps and creating guidelines for media web app developers.

This talk was given at the Seattle Video Tech meetup.

Watch now!
Slides from the presentation
Check out the free CTA specs

Speaker

John Simmons
Media Platform Architect,
Microsoft

Video: Deploying CMAF In 2019

It’s all very well saying “let’s implement CMAF”, but what’s been implemented so far and what can you expect in the real world, away from hype and promises? RealEyes took the podium at the Video Engineering Summit to explain.

CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability and built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. CMAF stands for Common Media Application Format, and it was created to allow both HLS and DASH to be implemented against one common standard. But the push to reduce latency further and further means CMAF is now better known for its low-latency form, which can be used to deliver streams with five to ten times lower latency.

John Gainfort tackles explaining CMAF and highlights all the non-latency-related features before turning to its low-latency form. We look at what it is (a container format) and where it came from (ISO BMFF) before diving into the current possibilities and the ‘to do’ list of DRM.

Before the Q&A, John moves on to how CMAF is implemented to deliver low-latency streams: what to expect in terms of latency and the future items which, when achieved, will deliver the full low-latency experience.

Watch now!

Speaker

John Gainfort
Development Manager,
RealEyes

Video: Making Live Streaming More ‘Live’ with LL-CMAF

Squeezing streaming latency down to just a few seconds is possible with CMAF. Bitmovin guides us through what’s possible now and what’s yet to come.

CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability and built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. But the push to reduce latency further and further has resulted in CMAF, which can be used to deliver streams with five to ten times lower latency.

Paul MacDougall is a Solutions Architect with Bitmovin, so he is well placed to explain the application of CMAF. Starting with a look at what we mean by low latency, he shows that it’s still quite possible to find HLS latencies of up to a minute, though more common latencies are now closer to 30 seconds. But 5 seconds is the golden latency, matching many broadcast mechanisms including digital terrestrial, so it’s no surprise that this is where low-latency CMAF is aimed.

CMAF itself is simply a format which unites HLS and DASH under one standard. It doesn’t, in and of itself, mean your stream will be low latency. In fact, CMAF was born out of MPEG’s MP4 standard, officially called ISO BMFF. But you can use CMAF in a low-latency mode, which is what this talk focusses on.

Paul looks at what makes up the latency of a typical feed, discussing encoding times, playback latency and the other key contributors. With this groundwork laid, it’s time to look at the way CMAF is chunked and formatted, showing that the smaller chunk sizes give the encoder and player more flexibility, reducing several types of latency to only a few seconds.
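As a rough back-of-the-envelope illustration (the durations below are assumptions, not figures from the talk), the difference comes down to how much media must exist before it can be delivered: with whole segments, each stage waits for a full segment, whereas chunked CMAF publishes each chunk as soon as it is encoded.

```typescript
// Back-of-the-envelope latency floors; all durations are illustrative assumptions.
const segmentDurationS = 6;     // a typical HLS/DASH segment
const chunkDurationS = 0.5;     // a hypothetical CMAF chunk of a few frames
const bufferedSegments = 3;     // the classic "hold three segments" rule of thumb

// Whole-segment delivery: the player can't fetch a segment until the encoder
// has finished it, and then it buffers several segments before playing.
const segmentedFloorS = segmentDurationS + bufferedSegments * segmentDurationS; // 24 s

// Chunked CMAF: each moof/mdat chunk is published as soon as it's encoded, so
// the encode-and-publish wait and the player buffer shrink to a few chunks.
const chunkedFloorS = chunkDurationS + 4 * chunkDurationS;                      // 2.5 s

console.log(`segmented ≈ ${segmentedFloorS}s, chunked CMAF ≈ ${chunkedFloorS}s`);
```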

In order to take full advantage of CMAF, the player needs to understand it, and Paul explains these adaptations before moving on to the limitations and challenges of using CMAF today. One important change, for instance, is that segment-based players (i.e. HLS) have always timed the download of each segment to get a feel for whether bandwidth was plentiful (the download was quicker than the time taken to play the segment) or constrained (the segment arrived slower than real-time). Based on this, the player could choose to increase or decrease the bandwidth of the stream it was accessing which, in HLS, means requesting segments from a different playlist. With small chunks delivered using real-time transfer techniques such as HTTP/1.1 chunked transfer, the data arrives as it is produced, so the measured download rate reflects the encoder’s real-time output rather than the available network bandwidth. This makes it very hard to make ABR work for LL-CMAF, though there are approaches being tested and trialled which aren’t mentioned in the talk.
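As a minimal sketch of that measurement (the function name, URL and numbers are illustrative, not taken from any particular player), the classic estimate times a whole segment download and divides the bytes received by the time taken; under low-latency chunked transfer that figure collapses towards the encoding bitrate.

```typescript
// Minimal sketch of the classic per-segment throughput estimate used for ABR.
// The function name and URL are illustrative; thresholds vary per player.
async function estimateThroughputBps(segmentUrl: string): Promise<number> {
  const start = performance.now();
  const response = await fetch(segmentUrl);
  const body = await response.arrayBuffer();          // wait for the whole segment
  const seconds = (performance.now() - start) / 1000;
  return (body.byteLength * 8) / seconds;             // bits per second
}

// Whole segments: a 6-second, 3 Mbps segment that downloads in 2 seconds
// implies roughly 9 Mbps of headroom, so the player can step up a rendition.
// Chunked transfer: the segment is still being encoded while it downloads,
// so the transfer takes ~6 seconds whatever the network can do, and the
// estimate above collapses towards the encoding bitrate, telling the ABR
// logic very little about real capacity.
```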

Watch now!

Speaker

Paul MacDougall
Solutions Architect,
Bitmovin

Webinar: Reducing Stream Latency


Latency seems to be the new battleground for streaming services. While optimising bandwidth and quality is still highly important, these are becoming mature parts of the business of streaming, whereas latency, and the technologies to minimise it (as Apple showed this month), are still developing and vying for position.

Thursday June 27th 2019, 10am PDT / 1pm EDT / 18:00 GMT

Here, the Streaming Video Alliance brings together people from large streaming services to explore this topic, finding out what they’ve been doing to reduce latency, the problems they’ve faced and the solutions on the table.

Register now!
Speakers

Kevin Johns
Distinguished Network Architect, Content and Media
CenturyLink
Chris Sammoury
Principal Engineer II,
Charter Communications
Richard Oesterreicher
CEO,
Streaming Global/Hellastorm
Patrick Gendron
Director, Innovation
Harmonic
Johan Bolin
Chief Product and Technology Officer,
Edgeware
Steve Miller-Jones
Vice President of Product Strategy,
Limelight Networks
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Low Latency and High QOE for Live Streaming


Low latency streaming is always a compromise, but what can be done to keep QOE high?

This on-demand webinar looks at CMAF and presents some real-world data on this low-latency technique. It starts by explaining that CMAF is a low-latency streaming technology which, like HLS and other streaming protocols, delivers the video as a series of small files. Olivier and Alain from Harmonic explain how this is done, look at some of the trade-offs and compromises that are needed, and introduce techniques to keep QOE high. They also compare deployment in the cloud vs. on premise.

Pieter-Jan Speelmans talks about playback tradeoffs and optimisations within the player. CMAF allows the buffer to be reduced: whilst a bad network may mean your buffer is similar to ‘normal’, on good networks the buffer can be brought down significantly. He also talks about how ABR switching is impacted by GOP length, even in CMAF.
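A tiny illustrative sketch of that GOP point (the numbers are assumptions, not from the webinar): a player can only switch renditions cleanly where the new rendition has an IDR frame, which in practice means a GOP boundary, so even with small CMAF chunks a switch can’t take effect faster than the remainder of the current GOP.

```typescript
// Illustrative only: worst-case wait before an ABR rendition switch can take
// effect, assuming switches happen at GOP (IDR) boundaries.
function switchDelayS(gopDurationS: number, positionInGopS: number): number {
  return gopDurationS - positionInGopS;   // time until the next switchable boundary
}

console.log(switchDelayS(4, 0.2));  // 4 s GOPs: up to ~3.8 s before the switch shows
console.log(switchDelayS(1, 0.2));  // shorter GOPs let the player react sooner
```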

Viaccess-Orca explains how DRM works with CMAF and looks at some of the challenges, including licence acquisition times and overloading licence servers at the beginning of events. Akamai’s Will Law explains some benefits of CMAF and the near-real-time nature of chunked transfer (HTTP/1.1), and how downloading chunks at full speed leads to problems when the same broadband link is shared by several clients.

There are lots of good talks on CMAF, but this is one of the few which treats CMAF not as theory, but as something deployable today.

Watch now!

Speakers

Olivier Karra
SaaS Business Development Director,
Harmonic Inc.
Alain Pellen
Sr. Manager, OTT & IPTV Solutions,
Harmonic Inc.
Will Law
Chief Architect – Media Division,
Akamai
Pieter-Jan Speelmans
Founder & CTO,
THEOplayer
Nicolas Delahaye
VP Engineering Player,
Viaccess-Orca