Video: Specification of Live Media Ingest

“Standardisation is more than just a player format”. There’s so much more to a streaming service than the video; a whole ecosystem needs to work together. In this talk from Comcast’s Mile High Video 2019, we see how different parts of the ecosystem are being standardised for live ingest.

RTMP and Smooth Streaming are being phased out: without proper support for HEVC, VVC, HDR etc. they are losing relevance and, in the case of RTMP, the format itself is no longer actively supported. Indeed, it’s clear that fragmented MP4 (fMP4) and CMAF are taking hold in their place, so it makes sense for a new ingest standard to coalesce around these formats.

Rufael Mekuria from Unified Streaming explains this effort to create a live media ingest specification, which is happening as part of the DASH Industry Forum (DASH-IF). The work itself started at the end of 2017 with the aim of publishing in summer 2019, supporting both CMAF and DASH/HLS ingest interfaces.

Rufael explains that CMAF ingest uses HTTP POST to move each media stream to the origin packager. The media is separated into video, audio, timed text, subtitle and timed metadata tracks, each transferred separately, an approach which remains compatible with future codecs. He also covers security and timed text before moving on to DASH/HLS ingest, which can also carry CMAF since HLS supports CMAF media.
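As a rough illustration of this push model, here is a minimal Python sketch that POSTs an init segment followed by media fragments for two separate tracks. The endpoint URL, file names and the use of the requests library are assumptions for illustration, not details taken from the DASH-IF specification.

```python
# A minimal sketch of the CMAF ingest idea: each track (video, audio,
# timed text, metadata) is pushed to the origin/packager as its own
# stream of HTTP POSTs. URL layout and file names are hypothetical.
import requests

ORIGIN = "https://origin.example.com/ingest/channel1"  # hypothetical endpoint

def push_track(track_name, init_segment, media_segments):
    url = f"{ORIGIN}/{track_name}"
    # The init segment (ftyp + moov) describes the track and is sent first.
    with open(init_segment, "rb") as f:
        requests.post(url, data=f)
    # Each CMAF fragment (moof + mdat) is then POSTed as it is produced.
    for segment in media_segments:
        with open(segment, "rb") as f:
            requests.post(url, data=f)

# Video and audio travel as independent tracks:
push_track("video.cmfv", "video_init.cmfv", ["video_0001.cmfv"])
push_track("audio.cmfa", "audio_init.cmfa", ["audio_0001.cmfa"])
```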

Reference software is available along with the specification: https://dashif-documents.azurewebsites.net/Ingest/master/DASH-IF-Ingest.pdf

Watch now!
Speaker

Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming

Video: DASH Updates

MPEG DASH is a standardised method for encapsulating media for streaming, similar to Apple’s HLS. Delivered over HTTP, MPEG DASH is a widely compatible way of streaming video and other media over the internet.

MPEG DASH is now on its 3rd edition, the first having been published in 2011. This talk starts by explaining what’s new in this edition as of July 2019. Furthermore, amendments are already being worked on which will soon add more features.

Iraj Sodagar explains the upcoming Service Descriptors, which allow the server to send the player metadata describing how the publisher intended the media to be shown; maximum and minimum latency and quality, for instance, can be specified. The talk explains how these are used and why they are useful.
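As a rough illustration, the sketch below generates a ServiceDescription element of the kind Iraj describes. The attribute values are invented, and the element and attribute names follow the published DASH schema as best understood here, so treat it as a sketch rather than a definitive example.

```python
# Minimal sketch: generating a DASH MPD ServiceDescription element with
# latency and playback-rate bounds. Values are illustrative only.
import xml.etree.ElementTree as ET

sd = ET.Element("ServiceDescription", {"id": "0"})
# Latency values are in milliseconds; 'target' is what the publisher
# intends, while 'min'/'max' bound what the player may drift to.
ET.SubElement(sd, "Latency", {"min": "2000", "max": "6000", "target": "3500"})
# The player may speed up or slow down within these bounds to chase the target.
ET.SubElement(sd, "PlaybackRate", {"min": "0.96", "max": "1.04"})

print(ET.tostring(sd, encoding="unicode"))
```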

Another powerful metadata feature is the Initialization Set, Group and Presentation, which gives the decoder a ‘heads up’ on what the upcoming media will need for playback. This allows the player to politely decline the media if it can’t display it. For instance, if a decoder doesn’t support AV1, this can be identified before attempting to download or decode a chunk.
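A minimal sketch of that ‘heads up’ logic follows. The codec strings and the device’s capability list are hypothetical; the point is the shape of the check, comparing the codecs an Initialization Set declares against what the device supports before anything is downloaded.

```python
# Hypothetical capability check against declared Initialization Sets.
SUPPORTED_CODECS = {"avc1", "hvc1", "mp4a"}  # what this device can decode

def can_play(initialization_sets):
    """Return True only if every declared codec is supported."""
    for init_set in initialization_sets:
        # 'codecs' carries an RFC 6381 string such as "av01.0.08M.08";
        # the leading four characters identify the codec family.
        family = init_set["codecs"].split(".")[0]
        if family not in SUPPORTED_CODECS:
            return False
    return True

# A presentation signalling AV1 video would be declined by this device:
print(can_play([{"codecs": "av01.0.08M.08"}, {"codecs": "mp4a.40.2"}]))  # False
```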

Iraj then explains what will be in the 4th edition, including the above features, the signalling of leap seconds and much more. This should be published over the next few months.

Amendment 1 is working towards a more accurate timing model for events and defining a specific DASH profile for CMAF (the segmented media container format on which low-latency DASH delivery is built), which Iraj explains in detail.

Finishing off with session-based DASH operations, a look over the DASH workplan/roadmap, ad insertion, and event and timed metadata processing, this is a great, detailed look at the DASH of today and of 2020.

Watch now!
Speaker

Iraj Sodagar
Independent Consultant

Video: WAVE (Web Application Video Ecosystem) Update

With wide membership including Apple, Comcast, Google, Disney, Bitmovin, Akamai and many others, the WAVE interoperability effort is tackling the difficulties of web media encoding, playback and platform interoperability using global standards.

John Simmons from Microsoft takes us through the history of WAVE, looking at the changes in the industry since 2008 and WAVE’s involvement. The recent standardisation of CMAF represents an important technology milestone, one closely entwined with WAVE’s activity, which is backed by over 60 major companies.

The WAVE Content Specification is derived from the ISO/IEC standard, “Common media application format (CMAF) for segmented media”. CMAF is the container for the audio, video and other content. It’s not a protocol like DASH, HLS or RTMP; rather, it’s more like an MPEG-2 transport stream. There is now a lot of interest in CMAF thanks to its ability to deliver low-latency streaming of less than 4 seconds, but it’s also important because it represents a standardisation of fragmented MP4 (fMP4) practices.
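To make “fragmented MP4” concrete, here is a minimal Python sketch that walks the top-level ISO BMFF boxes of a CMAF segment; the repeating moof/mdat pairs are what make it “fragmented”. The file name is hypothetical, and 64-bit box sizes are ignored for brevity.

```python
# A CMAF segment is a sequence of top-level ISO BMFF boxes: an init part
# (ftyp + moov) or a media part (e.g. styp, then repeating moof + mdat).
import struct

def list_boxes(path):
    with open(path, "rb") as f:
        while header := f.read(8):
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size < 8:
                break  # skips 64-bit 'largesize' handling for brevity
            print(box_type.decode("ascii", "replace"), size)
            f.seek(size - 8, 1)  # skip the box payload

list_boxes("segment.cmfv")  # hypothetical file; typically prints styp, moof, mdat, ...
```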

The idea of standardising on CMAF allows media profiles to be defined which specify how to encapsulate certain codecs (AV1, HEVC etc.) into the stream. Given it’s a published specification, other vendors are able to interoperate. Proof of the value of the WAVE project is the three amendments to the CMAF standard, mentioned by John, which MPEG issued as a direct result of WAVE’s work in validating user requirements.

Whilst defining streaming formats is important for helping cloud vendors work together and for allowing broadcasters to build systems more easily, it’s vital that decoder devices are on board too, and much of WAVE’s work goes into the decoder-device side of things.

On top of dealing with encoding and distribution, WAVE also specifies HTML5 API interoperability, with the aim of defining baseline web APIs to support media web apps and creating guidelines for media web app developers.

This talk was given at the Seattle Video Tech meetup.

Watch now!
Slides from the presentation
Check out the free CTA specs

Speaker

John Simmons
Media Platform Architect,
Microsoft

Video: How to Identify Real-World Playout Options

There are so many ways to stream video; how can you find the one that suits you best? Weighing up the pros and cons in this talk is Robert Reinhardt from videoRX.

Taking each of the main protocols in turn, Robert explains the prevalence of each technology, from HLS and DASH through to WebRTC and even WebSockets. Drawing on his personal experience of implementing them with clients, he builds up a picture of the situations in which each is best used.

Speakers

Robert Reinhardt
CTO,
videoRX