Video: Integrating CMAF Into A VOD Workflow

CMAF is often seen as streaming's best hope of matching the latency of broadcast. Being fully standards-based, many see it as a better route than Apple's LL-HLS, with the further advantage that it is already a completed standard with growing support.

This talk from Tomas Bacik starts by explaining CMAF to us. Standing for the Common Media Application Format, it is based on the standardised ISO BMFF container and, whilst CMAF isn't low-latency by default, it is flexible enough to deliver just that. As Tomas, from CDN77, points out, there are other major benefits too, such as its use of the Common Encryption format and reduced storage costs.

MPEG DASH is a commonly found streaming format based on ISO BMFF. It has long had the benefit over HLS of supporting codecs such as HEVC and AV1, whereas HLS in its traditional MPEG-TS form was tied to AVC. CMAF builds on the same ISO BMFF foundation and goes one step further: the same CMAF segments can be referenced from both HLS-style playlists (.m3u8) and MPEG DASH manifests (.mpd), inheriting, of course, the multi-codec ability of DASH itself.
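
As a rough illustration of that dual-manifest idea, here is a minimal Python sketch that writes an HLS media playlist and a DASH SegmentList fragment which both point at the same CMAF segments. The segment names (init.mp4, seg_0001.m4s and so on) and durations are hypothetical, and real manifests carry far more detail than this.

# Sketch: reference one set of CMAF (fMP4) segments from both HLS and DASH.
# Segment names and durations are illustrative assumptions.
SEGMENTS = [("seg_0001.m4s", 4.0), ("seg_0002.m4s", 4.0), ("seg_0003.m4s", 4.0)]
INIT = "init.mp4"

def hls_playlist(segments, init):
    """Minimal HLS media playlist (.m3u8) for fMP4/CMAF segments."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:7",
             f"#EXT-X-TARGETDURATION:{int(max(d for _, d in segments))}",
             f'#EXT-X-MAP:URI="{init}"']
    for name, dur in segments:
        lines += [f"#EXTINF:{dur:.3f},", name]
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

def dash_segment_list(segments, init):
    """Minimal body of a DASH (.mpd) Representation using a SegmentList."""
    entries = "\n".join(f'  <SegmentURL media="{name}"/>' for name, _ in segments)
    return (f'<SegmentList duration="{int(segments[0][1])}" timescale="1">\n'
            f'  <Initialization sourceURL="{init}"/>\n'
            f"{entries}\n"
            "</SegmentList>")

if __name__ == "__main__":
    print(hls_playlist(SEGMENTS, INIT))
    print(dash_segment_list(SEGMENTS, INIT))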

Next is the central theme of the talk, looking at VoD workflows to show how CMAF fits in and, indeed, changes workflows for the better. CMAF directly impacts packaging, storage and the CDN, which is where we focus now. Given that some devices can play HLS and some can play DASH, if you try to serve both you double your packaging and storage requirements. Dynamic packaging allows chunks to be repackaged on the fly into either HLS or DASH as needed. Whilst this reduces the storage requirement, it increases processing and also increases the time to first byte. As you might expect, Tomas explains in this talk that using CMAF throughout allows you to package once and store once, which solves these problems.

Tomas continues by explaining the DRM abilities of CMAF including AES-CBC and finishes by taking questions from the audience.

Watch now!
See Streamflow’s blog post supporting the talk
Speakers

Tomas Bacik
VP of Product Development, Streamflow by CDN77

Webinar: Securing Live Streams

Piracy in France cost €1.2bn in 2017 and, worldwide, the loss has been valued at up to US$52 billion. Even if these numbers are inflated or over-counted, it's clear there is a lot of money at stake in online streaming. There are a number of ways to protect your content; encryption, Digital Rights Management (DRM) and tokenisation are three key ones, and this webinar will examine what works best in the real world.

Even used together, these technologies don't always stop piracy outright, but they can significantly reduce the ease of pirating and the quality of the pirated material.

Date: Thursday, January 30th – 10 a.m. PT / 1 p.m. ET / 18:00 GMT

It’s important to understand the difference between encryption and Digital Rights Management. In general, DRM relies on encryption: encryption is a way of making sure that decodable video only lands in the hands of people who have been given the decryption key. This means that people snooping on traffic between the video provider and the consumer can’t see what the video is, and it can be accomplished in a similar way to web pages which are secured against eavesdroppers. The problem with encryption, however, is that it doesn’t intrinsically decide who is allowed to decode the video, meaning anyone with the decryption key can view the content. Often this is fine, but if you want to run a pay-TV service, it’s much better to control, customer by customer, who can view the video. And this is where DRM comes in.
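
To make that distinction concrete, here is a minimal Python sketch of AES-CBC encryption of a segment, using the third-party cryptography package. The segment bytes and the whole-buffer approach are simplifications (CMAF's Common Encryption 'cbcs' scheme encrypts per sample with a pattern); the point is simply that the key is the only gate, and deciding who receives the key, and what they may do with the content, is DRM's job.

# Sketch: AES-CBC encryption of a media segment, showing that possession of the
# key is the only gate; there is no notion of *which* client may decrypt.
# Uses the third-party 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_segment(segment: bytes, key: bytes, iv: bytes) -> bytes:
    padder = padding.PKCS7(128).padder()          # CBC needs 16-byte blocks
    padded = padder.update(segment) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()

def decrypt_segment(blob: bytes, key: bytes, iv: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(blob) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

if __name__ == "__main__":
    key, iv = os.urandom(16), os.urandom(16)
    clear = b"\x00\x00\x00\x18ftypcmfc..."        # stand-in for fMP4 bytes
    blob = encrypt_segment(clear, key, iv)
    # Anyone holding 'key' can do this; controlling who gets the key and how
    # the content may be used is what DRM adds on top.
    assert decrypt_segment(blob, key, iv) == clear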

DRM is multi-faceted and controls the way in which consumers can view/use the content as much as whether they can access it to start with. DRM, for instance, can determine that a display device can show the work, but a recorder is not allowed to make a recording. It can also determine access based on location. Another aspect of DRM is tracking in the form of insertion of watermarks and metadata which mean that if a work is pirated, there is a way to work back to the original subscriber to determine the source of the leak.

Tokenisation is a method in which the player requests access to the material and is passed a token in the server’s response, once the server has checked that the player is allowed access. Because of the way this token is created, it is not possible for another player to use it to access the content, which means that sharing a URI won’t give another user access to the video. Without some form of access control, once one subscriber has received a URI for the video, they could pass it to another user who could then also access it.
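
A minimal Python sketch of one common approach, an expiring HMAC-signed token bound to a client identifier, is shown below. The field names, the shared secret and the choice of binding the token to the client's IP address are illustrative assumptions; production CDN token schemes differ in their details.

# Sketch: URL tokenisation with an expiring, client-bound HMAC token.
import hashlib
import hmac
import time

SECRET = b"shared-secret-between-origin-and-edge"   # hypothetical

def make_token(path: str, client_id: str, ttl: int = 300) -> str:
    expires = int(time.time()) + ttl
    msg = f"{path}|{client_id}|{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"exp={expires}&sig={sig}"

def check_token(path: str, client_id: str, token: str) -> bool:
    fields = dict(kv.split("=", 1) for kv in token.split("&"))
    expires = int(fields["exp"])
    if expires < time.time():
        return False                                  # token has lapsed
    msg = f"{path}|{client_id}|{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, fields["sig"])

if __name__ == "__main__":
    token = make_token("/vod/movie/master.m3u8", client_id="198.51.100.7")
    print(check_token("/vod/movie/master.m3u8", "198.51.100.7", token))  # True
    print(check_token("/vod/movie/master.m3u8", "203.0.113.9", token))   # False: shared URL, different client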

What’s the best way to use these technologies? What are the pros and cons and what are the other methods of securing media? These questions and more will be discussed in this Streaming Video Alliance webinar on January 30th.

Register now!
Speakers

Peter Cossack
Vice President Cybersecurity services,
Irdeto
Kei Foo
Director of Advanced Video Engineering,
Charter Communications
Orly Amsalem
Product Manager, AI/ML based video security and anti-piracy solutions,
Synamedia
Marvin Van Schalkwyk
Senior Solutions Architect,
FriendMTS
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: WAVE (Web Application Video Ecosystem) Update

With a wide membership including Apple, Comcast, Google, Disney, Bitmovin, Akamai and many others, the WAVE interoperability effort is tackling the difficulties of web media encoding, playback and platform interoperability using global standards.

John Simmons from Microsoft takes us through the history of WAVE, looking at the changes in the industry since 2008 and WAVE’s involvement. CMAF represents an important recent milestone in the technology, one which is entwined with WAVE’s activity and backed by over 60 major companies.

The WAVE Content Specification is derived from the ISO/IEC standard, “Common media application format (CMAF) for segmented media”. CMAF is the container for the audio, video and other content. It’s not a protocol like DASH, HLS or RTMP; rather, it’s more akin to an MPEG-2 transport stream. CMAF attracts a lot of interest nowadays due to its ability to deliver very low-latency streaming of less than 4 seconds, but it’s also important because it represents a standardisation of fMP4 (fragmented MP4) practices.
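
To make the "container, not protocol" point concrete, the Python sketch below walks the top-level ISO BMFF boxes of a CMAF/fMP4 segment, i.e. the structure (ftyp, moov, moof, mdat and so on) that CMAF standardises. The file name is a hypothetical example and error handling is kept minimal.

# Sketch: list the top-level ISO BMFF boxes of a CMAF/fMP4 file.
import struct

def top_level_boxes(path: str):
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:                       # 64-bit "largesize" box
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)            # skip the box payload
            elif size == 0:                     # box extends to end of file
                yield box_type.decode("ascii", "replace"), None
                break
            else:
                f.seek(size - 8, 1)             # skip the box payload
            yield box_type.decode("ascii", "replace"), size

if __name__ == "__main__":
    # "seg_0001.m4s" is a hypothetical segment name; a media segment typically
    # shows styp/moof/mdat, while an init segment shows ftyp/moov.
    for name, size in top_level_boxes("seg_0001.m4s"):
        print(name, size)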

Standardising on CMAF allows media profiles to be defined which specify how to encapsulate particular codecs (AV1, HEVC etc.) in the stream. Because it is a published specification, vendors are able to interoperate. Proof of the value of the WAVE project lies in the three amendments to the CMAF standard which John mentions MPEG has issued, coming directly from WAVE’s work in validating user requirements.

Whilst defining streaming is important in helping in-cloud vendors work together and allowing broadcasters to build systems more easily, it’s vital that decoder devices are on board too, and much work goes into the decoder-device side of things.

On top of dealing with encoding and distribution, WAVE also addresses HTML5 API interoperability, with the aim of defining baseline web APIs to support media web apps and creating guidelines for media web app developers.

This talk was given at the Seattle Video Tech meetup.

Watch now!
Slides from the presentation
Check out the free CTA specs

Speaker

John Simmons
Media Platform Architect,
Microsoft

Video: Deploying CMAF In 2019

It’s all very well saying “let’s implement CMAF”, but what has been implemented so far and what can you expect in the real world, away from the hype and promises? RealEyes took the podium at the Video Engineering Summit to explain.

CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability and built upon the well-worn foundations of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. CMAF stands for the Common Media Application Format because it was created to allow both HLS and DASH to be implemented with one common segment format. But the push to reduce latency further and further has resulted in CMAF being better known for its low-latency form, which can be used to deliver streams with five to ten times lower latency.

John Gainfort tackles explaining CMAF and highlights all the non-latency-related features before turning to its low-latency form. We look at what it is (a container format) and where it came from (ISO BMFF) before diving into the current possibilities and the ‘to do’ list of DRM.

Before the Q&A, John moves on to how CMAF is implemented to deliver low-latency streams: what to expect in terms of latency and the future items which, when achieved, will deliver the full low-latency experience.

Watch now!

Speaker

John Gainfort
Development Manager,
RealEyes