Video: CMAF Live Media Ingest Protocol Masterclass

We’ve heard before on The Broadcast Knowledge about CMAF’s success at bringing down the latency of live streaming to around 3 seconds. CMAF is standards-based and works with Apple devices, Android, Windows and much more. And while it’s gaining traction for delivery to the home, many are asking whether it could be a replacement technology for contribution into the cloud.

Rufael Mekuria from Unified Streaming has been working on bringing CMAF to encoders and packagers. All the work in the DASH Industry Forum has centred around two key points in the streaming architecture: the first is the interface from the encoder’s output to the packager’s input; the second sits between the packager and the origin. This work has been ongoing for over a year and a half, so let’s pause to ask why we need a new protocol for ingest.

RTMP and Smooth Streaming have not been deprecated, but neither has been specified to carry the latest codecs, and while people have been looking for alternatives, they have started to use fragmented MP4 and CMAF-style technologies for contribution in their own, non-interoperable ways. Push-based DASH and HLS are common but in need of standardisation, and the same work could also address support for timed metadata such as splice information for ads.

The result of the work is a method of using a separate TCP connection for each essence track: there is an HTTP POST for each subtitle stream, metadata track, video stream and so on. This can be done with fixed-length POSTs, but is better achieved with chunked transfer encoding.
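
To make that concrete, here’s a minimal sketch of what one per-track ingest POST could look like, assuming a Node 18+ runtime where a streaming fetch body is sent with chunked transfer encoding. The origin URL, track paths and helper names are illustrative, not taken from the specification.

```typescript
// A minimal sketch of one per-track ingest POST, assuming a Node 18+
// runtime; the origin URL, track path and helper names are illustrative.
async function ingestTrack(
  origin: string,                        // e.g. "https://ingest.example.com"
  trackPath: string,                     // e.g. "/live/channel1/video.cmfv"
  fragments: AsyncIterable<Uint8Array>,  // CMAF header first, then fragments
): Promise<void> {
  // A streaming request body is sent with chunked transfer encoding,
  // so each CMAF fragment goes out as soon as the encoder produces it.
  const body = new ReadableStream<Uint8Array>({
    async start(controller) {
      for await (const fragment of fragments) {
        controller.enqueue(fragment);
      }
      controller.close();
    },
  });

  await fetch(origin + trackPath, {
    method: "POST",
    headers: { "Content-Type": "video/mp4" }, // CMAF tracks are fragmented MP4
    body,
    duplex: "half",                           // required for streaming bodies
  } as RequestInit & { duplex: "half" });
}

// One long-running POST (and TCP connection) per essence track:
// ingestTrack(origin, "/live/channel1/video.cmfv", videoFragments);
// ingestTrack(origin, "/live/channel1/audio.cmfa", audioFragments);
```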

Rufael next shows us an example of a CMAF track. Based on the ISO BMFF standard, CMAF specifies which ‘boxes’ can be used, and the CMAF specification provides for optional boxes which would be used in the CMAF fragments. Time is important, so it is carried in ‘baseMediaDecodeTime’, a UNIX-style timestamp that can be inserted into both the fragment and the CMAF header.
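
As a rough illustration of that structure, the sketch below walks the boxes of a CMAF fragment and reads the decode time from the ‘tfdt’ (TrackFragmentBaseMediaDecodeTime) box, where baseMediaDecodeTime lives in ISO BMFF. It’s a guide to the layout rather than a spec-complete parser; it ignores 64-bit box sizes, for instance.

```typescript
// Walk ISO BMFF boxes in a CMAF fragment and print the decode time
// found in the 'tfdt' box, which sits inside moof > traf.
function readBoxes(buf: Uint8Array, start = 0, end = buf.length): void {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  let offset = start;
  while (offset + 8 <= end) {
    const size = view.getUint32(offset); // 32-bit box size (64-bit sizes ignored)
    const type = String.fromCharCode(...buf.subarray(offset + 4, offset + 8));
    if (size < 8) break;                 // malformed box; stop rather than loop
    if (type === "moof" || type === "traf") {
      readBoxes(buf, offset + 8, offset + size);  // recurse into containers
    } else if (type === "tfdt") {
      const version = buf[offset + 8];             // FullBox version byte
      const baseMediaDecodeTime = version === 1
        ? view.getBigUint64(offset + 12)           // version 1: 64-bit time
        : BigInt(view.getUint32(offset + 12));     // version 0: 32-bit time
      console.log("baseMediaDecodeTime:", baseMediaDecodeTime);
    }
    offset += size;
  }
}
```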

With all media being sent separately, the standard provides a way to define groups of essences, both implicitly and explicitly. Redundancy and hot failover have been provided for with multiple sources ingesting to multiple origins; using the timestamp synchronisation, identical fragments can be detected.
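
The failover idea can be sketched in a few lines: because redundant encoders produce timestamp-aligned fragments, an origin could deduplicate by keying on the track and the fragment’s decode time. The helper below is purely illustrative; the protocol enables this through synchronised timestamps but doesn’t mandate an implementation.

```typescript
// Illustrative duplicate detection for hot failover: first arrival wins,
// identical fragments from the redundant source are dropped.
const seen = new Set<string>();

function isDuplicateFragment(track: string, baseMediaDecodeTime: bigint): boolean {
  const key = `${track}/${baseMediaDecodeTime}`;
  if (seen.has(key)) return true;  // already received from another source
  seen.add(key);
  return false;
}
```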

The additional timed metadata track is based on the ISO BMFF standard and can be fragmented just like other media. This work has extended the standard to allow the carrying of the DASH EventMessageBox in the timed metadata track in order to reuse existing specifications like ID3 and SCTE 214 for carrying SCTE 35 messages.
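
For a feel of what that looks like on the wire, here’s a sketch of building a version-1 ‘emsg’ box around a binary SCTE 35 payload, following the box layout in ISO/IEC 23009-1. The scheme URI shown is the one SCTE 214 uses for binary SCTE 35 carriage; check the specifications before relying on the details here.

```typescript
// Sketch of a version-1 'emsg' (DASH EventMessageBox) wrapping a binary
// SCTE 35 payload; layout per ISO/IEC 23009-1, scheme URI per SCTE 214.
function buildEmsg(
  timescale: number,
  presentationTime: bigint,
  eventDuration: number,
  id: number,
  scte35: Uint8Array,
): Uint8Array {
  const enc = new TextEncoder();
  const scheme = enc.encode("urn:scte:scte35:2013:bin\0"); // null-terminated
  const value = enc.encode("\0");                          // empty value string
  const size = 32 + scheme.length + value.length + scte35.length;
  const box = new Uint8Array(size);
  const view = new DataView(box.buffer);
  view.setUint32(0, size);                 // box size
  box.set(enc.encode("emsg"), 4);          // box type
  view.setUint32(8, 0x01000000);           // version 1, flags 0
  view.setUint32(12, timescale);
  view.setBigUint64(16, presentationTime); // on the media timeline
  view.setUint32(24, eventDuration);
  view.setUint32(28, id);
  box.set(scheme, 32);
  box.set(value, 32 + scheme.length);
  box.set(scte35, 32 + scheme.length + value.length);
  return box;
}
```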

Rufael finishes by explaining how SCTE messages are inserted with reference to IDR frames, outlines how the DASH/HLS ingest interface between the packager and the origin server works, and shows a demo.

Watch now!
Speaker

Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming

From WebRTC to RTMP

Continuing our look at the most popular videos of 2020, and in common with the previous post on SRT, today we look at replacing RTMP for ingest. This time, WebRTC is demonstrated as an option: with sub-second latency, it’s a compelling replacement for RTMP.

Read what we said about it the first time in the original article, but you’ll see that Nick Chadwick from Mux takes us through how RTMP works and where the gaps are as it’s phased out. He steps through the alternatives, showing how even the low-latency delivery formats don’t fit the bill for contribution, and shows how WebRTC can be a sub-second solution.

RIST and SRT saw significant and continued growth in use throughout 2020 as contribution protocols and appear to be more commonly used than WebRTC, though that’s not to say that WebRTC isn’t continuing to grow within the broadcast community. SRT and RIST are both designed for contribution in that they actively manage packet loss, allow any codec to be used and provide for other data to be sent, too. Overall, this tends to give them the edge, particularly for hardware products, but WebRTC’s wide availability on computers can be a bonus in some circumstances. Have a listen and come to your own conclusion.

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux

Video: The fundamentals of online video & getting started with live streaming

There are plenty of videos detailing the latest streaming protocols, but not many which teach you how to actually put a service together, let alone ones that build it during the talk. A streaming system has many components, so there are countless permutations of how you could go about building one. How can you work out which parts you need, and is there an easier way?

Mux’s Phil Cluff presents this talk for WeAreDevelopers to explain streaming and implement it as we watch. He begins by helping us think through exactly what we’re looking to get out of our service, using the budget we have to steer us towards, or away from, free services like YouTube and Twitch, the alternatives being OVPs such as Brightcove or services that support your self-sufficiency.

With motivations out of the way, Phil examines the whole chain, starting with ‘Capture’. Whilst you’ll need a camera, he recommends the open-source project OBS to provide easy web-page integration and a system which can be used for general operation or for emergencies. Next is processing, which for streaming typically means transcoding into the renditions needed for delivery. For distribution, Phil spends a couple of minutes describing the CDN in use.

Phil looks at why simply using the HTML <video> element isn’t a solution for most streaming applications, quickly moving on to discuss the large amount of ingest which still happens via RTMP and explaining the information needed to ensure the RTMP stream can connect. Phil next discusses ABR (adaptive bitrate) streaming, showing how it works with different resolutions and chunks. We then look further afield to MPEG-DASH, ‘Dynamic Adaptive Streaming over HTTP’, and examine the internals of manifest files.
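
The core idea of ABR can be boiled down to a few lines: measure recent throughput and pick the highest rendition that fits with some headroom. The sketch below is illustrative, with a made-up bitrate ladder and a simplistic 20% safety margin; real players use far more sophisticated heuristics.

```typescript
// Toy ABR rendition selection: highest rung that fits the measured
// throughput, keeping roughly 20% headroom.
interface Rendition { height: number; bitrate: number }  // bitrate in bits/s

function pickRendition(ladder: Rendition[], measuredBps: number): Rendition {
  const sorted = [...ladder].sort((a, b) => a.bitrate - b.bitrate);
  let choice = sorted[0];                            // lowest rung as fallback
  for (const r of sorted) {
    if (r.bitrate * 1.2 <= measuredBps) choice = r;  // highest rung that fits
  }
  return choice;
}

// pickRendition([{ height: 360, bitrate: 800_000 },
//                { height: 720, bitrate: 3_000_000 },
//                { height: 1080, bitrate: 6_000_000 }], 4_000_000)
// -> the 720p rendition: 3 Mbit/s fits within 4 Mbit/s, 6 Mbit/s doesn't.
```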

In the next part of the talk, Phil shows us how to put together a page which delivers ABR streaming from an OBS camera, which he also sets up and adds graphics to. Streaming into the cloud using RTMP, we see the way Phil sets up OBS and configures it with a stream key. He then shows us how to create a player with HLS.js by prototyping a page, as we watch, in codesandbox.io. Finally, he looks at some of the more advanced things you can do, such as watermarking and getting credentials for social media simulcasts, before fielding questions from the audience on topics such as streaming from the browser, realtime engagement APIs, low-latency delivery (including Apple’s LL-HLS) and data privacy.
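
The HLS.js part of the demo needs surprisingly little code. Here’s a minimal sketch along the lines of what Phil prototypes, with a placeholder playlist URL; HLS.js feeds segments to the video element via Media Source Extensions, with a fallback to Safari’s native HLS support.

```typescript
import Hls from "hls.js";

const video = document.querySelector("video") as HTMLVideoElement;
const src = "https://example.com/live/playlist.m3u8";  // placeholder URL

if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(src);     // fetch and parse the HLS playlist
  hls.attachMedia(video);  // feed segments to the element via MSE
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  video.src = src;         // Safari plays HLS natively without MSE
}
```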

Watch now!
Speakers

Phil Cluff
Streaming Architect,
MUX
Moderator: Stefan Steinbauer
Director, Developer Experience,
WeAreDevelopers GmbH

Video: RTMP: A Quick Deep-Dive

RTMP hasn’t left us yet, though between HLS, DASH, SRT and RIST, the industry is doing its best to get rid of it. In its day, RTMP’s latency was seen as low and it became a de facto standard. But as it hasn’t gone away, it pays to take a little time to understand how it works.

Nick Chadwick from Mux is our guide in this ‘quick deep-dive’ into the protocol itself. To start off, he explains the history of the Adobe-created protocol to help put into context why it was useful and how the specification that Adobe published wasn’t quite as helpful as it could have been.

Nick then gives us an overview of the protocol, explaining that it’s TCP-based and allows for multiple, bi-directional streams. He explains that RTMP multiplexes large messages, such as video, alongside very short data requests, such as RPCs, by breaking the messages into chunks which can be multiplexed over just the one TCP connection. Multiplexing at the chunk level allows RTMP to be asking the other end a question at the same time as delivering a long message.
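
As a rough illustration of that chunking, the generator below splits one message payload into chunks of the default 128-byte chunk size; between any two of those chunks, the connection is free to interleave chunks belonging to other streams.

```typescript
// Split one RTMP message into chunks of the negotiated chunk size
// (128 bytes by default); chunks from other chunk streams can be
// interleaved between any two of them.
function* chunkMessage(payload: Uint8Array, chunkSize = 128): Generator<Uint8Array> {
  for (let offset = 0; offset < payload.length; offset += chunkSize) {
    yield payload.subarray(offset, offset + chunkSize); // last chunk may be short
  }
}
// A 10 KB video message becomes ~80 chunks, so a tiny RPC message on a
// different chunk stream never waits long for a gap.
```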

Nick has a great ability to make describing the protocol and showing ASCII tables accessible and interesting. We quickly start looking at the chunk headers, what the different chunk types are and how you can compress the headers to save bitrate. He also describes how the RTMP timestamp works and the control message and command message mechanisms. Before answering Q&A questions, Nick outlines the difficulty in extending RTMP to new codecs, due to the hard-coded list of codecs that can be used, and recommends improvements to the protocol. It’s worth noting that this talk is from 2017; whilst everything about RTMP itself will still be correct, SRT, RIST and Zixi have since taken the place of many RTMP workflows.
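
To see the header compression Nick describes, here’s a sketch of parsing the chunk basic header. The 2-bit fmt field selects how much of the message header follows, from the full 11 bytes (fmt 0) down to nothing at all (fmt 3, where everything is inherited from the previous chunk on the same chunk stream).

```typescript
// Parse the RTMP chunk basic header. fmt 0 precedes a full 11-byte
// message header; fmt 1, 2 and 3 reuse fields from the previous chunk
// on the same chunk stream (7, 3 and 0 bytes of header respectively).
function parseBasicHeader(buf: Uint8Array): { fmt: number; csid: number; next: number } {
  const fmt = buf[0] >> 6;        // chunk/message header type, 0..3
  let csid = buf[0] & 0x3f;       // 6-bit chunk stream id
  let next = 1;                   // where the message header starts
  if (csid === 0) {               // 2-byte form for ids 64..319
    csid = buf[1] + 64;
    next = 2;
  } else if (csid === 1) {        // 3-byte form for ids 64..65599
    csid = buf[2] * 256 + buf[1] + 64;
    next = 3;
  }
  return { fmt, csid, next };
}
```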

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux