It’s all very well saying “let’s implement CMAF”, but what’s implemented so far and what can you expect in the real world, away from hype and promises? RealEyes took the podium at the Video Engineering Summit to explain.
CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability and built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. CMAF stands for the Common Media Application Format because it was created to allow both HLS and DASH to be implemented in one common standard. But the push to reduce latency further and further has resulted in CMAF being better known for its low-latency form, which can be used to deliver streams with five to ten times lower latencies.
John Gainfort explains CMAF, highlighting all the non-latency-related features before turning to its low-latency form. We look at what it is (a media format) and where it came from (ISO BMFF) before diving into the current possibilities and the ‘to do list’ of DRM.
Before the Q&A, John then moves on to how CMAF is implemented to deliver low-latency streams: what to expect in terms of latency and the future items which, when achieved, will deliver the full low-latency experience.
Squeezing streaming latency down to just a few seconds is possible with CMAF. Bitmovin guides us through what’s possible now and what’s yet to come.
CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability and built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. But the push to reduce latency further and further has resulted in CMAF, which can be used to deliver streams with five to ten times lower latencies.
Paul MacDougall is a Solutions Architect with Bitmovin, so he is well placed to explain the application of CMAF. Starting with a look at what we mean by low latency, he shows that it’s still quite possible to find HLS latencies of up to a minute, though more common latencies now are closer to 30 seconds. But 5 seconds is the golden latency which matches many broadcast mechanisms including digital terrestrial, so it’s no surprise that this is where low-latency CMAF is aimed.
CMAF itself is simply a format which unites HLS and DASH under one standard. It doesn’t, in and of itself, mean your stream will be low latency. In fact, CMAF was born out of MPEG’s MP4 standard – officially called ISO BMFF. But you can use CMAF in a low-latency mode, which is what this talk focuses on.
Paul looks at what makes up the latency of a typical feed, discussing encoding times, playback buffering and the other key contributors. With this groundwork laid, it’s time to look at the way CMAF is chunked and formatted, showing that the smaller chunk sizes allow the encoder and player to be more flexible, reducing several types of latency down to only a few seconds.
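To see why smaller chunks matter, it helps to add up a latency budget end to end. The sketch below uses illustrative figures (the specific numbers are assumptions for the example, not values from the talk): a legacy HLS pipeline waits for whole segments and buffers several of them, while a low-latency CMAF pipeline works in sub-second chunks.

```python
# Illustrative end-to-end latency budgets in seconds.
# All figures are assumptions chosen for the example, not measurements from the talk.
legacy_hls = {
    "encode": 2.0,
    "segmenting (wait for a full 6 s segment)": 6.0,
    "CDN propagation": 2.0,
    "player buffer (three 6 s segments)": 18.0,
}

ll_cmaf = {
    "encode": 1.0,
    "chunking (wait for a 0.5 s chunk)": 0.5,
    "CDN propagation": 1.0,
    "player buffer (a few chunks)": 1.5,
}

print(sum(legacy_hls.values()))  # 28.0 — in line with the ~30 s latencies mentioned above
print(sum(ll_cmaf.values()))     # 4.0 — close to the 5 s broadcast target
```

The biggest single saving comes from the player buffer: once the unit of delivery shrinks from a 6-second segment to a 0.5-second chunk, the player no longer needs many seconds of media in hand before it dares to start playback.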
In order to take full advantage of CMAF, the player needs to understand CMAF, and Paul explains these adaptations before moving on to the limitations and challenges of using CMAF today. One important change, for instance, concerns bandwidth estimation: segmented streaming players (such as HLS players) have always timed the download of each segment to get a feel for whether bandwidth was plentiful (the download was quicker than the time taken to play the segment) or constrained (the segment arrived slower than real-time). Based on this, the player could choose to increase or decrease the bitrate of the stream it was accessing which, in HLS, means requesting segments from a different playlist. But with smaller chunks delivered using real-time transfer techniques such as HTTP/1.1 Chunked Transfer Encoding, chunks at the live edge all arrive at the speed they are produced rather than the speed the link could carry them. This makes it very hard to make ABR work for low-latency CMAF, though there are approaches being tested and trialled which are not covered in the talk.
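The ABR problem described above can be shown with a few lines of arithmetic. This is a minimal sketch of the classic segment-timing estimator (the function name and the bitrate/link figures are my own illustration, not from the talk): with full segments the fetch is limited by the link, so the estimate recovers link capacity; at the live edge of a chunked-transfer stream the fetch takes roughly the segment's own duration, so the estimate collapses to the encoded bitrate and tells the player nothing about headroom.

```python
def estimate_bandwidth(bytes_received: int, download_seconds: float) -> float:
    """Classic segment-timing estimate: bits per second over the whole fetch."""
    return bytes_received * 8 / download_seconds

# A 6 s segment of a 3 Mb/s rendition = 18,000,000 bits = 2,250,000 bytes
seg_bytes = 6 * 3_000_000 // 8

# Full-segment HLS on a 20 Mb/s link: the fetch is limited by the link (0.9 s)
fast_fetch = (seg_bytes * 8) / 20_000_000
print(estimate_bandwidth(seg_bytes, fast_fetch) / 1e6)  # 20.0 — the link capacity

# LL-CMAF at the live edge: chunks trickle in as they are encoded,
# so the whole fetch takes about the segment's duration (6 s)
slow_fetch = 6.0
print(estimate_bandwidth(seg_bytes, slow_fetch) / 1e6)  # 3.0 — just the encode rate
```

In the second case the player measures 3 Mb/s on a 20 Mb/s link, so it never discovers it could safely switch up a rendition; this is why low-latency ABR schemes have to look at per-chunk arrival patterns or idle time on the connection instead of whole-segment timing.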
There are so many ways to stream video, so how can you find the one that suits you best? Weighing up the pros and cons in this talk is Robert Reinhardt from videoRx.
Taking each of the main protocols in turn, Robert explains the prevalence of each technology, from HLS and DASH through to WebRTC and even WebSockets. Commenting on each from his personal experience of implementing them with clients, he builds up a picture of the best situations in which to use each of them.
Server-Side Ad Insertion (SSAI) is the best defence against ad-blockers, but switching in an ad at source can be tricky, particularly in low-latency streams. This talk at the OTT Leadership Summit at Streaming Media East brings together leaders in the field to explain where they’re up to in delivering this technology and the benefits they see.
Magnus Svensson tells us about the instrumental role Eyevinn Technology, the consultancy which runs the technical conference Streaming Tech Sweden, has played in Sweden in creating an open standard for all the broadcasters to work to, agreeing how to track SSAI so that the correct payments can be made. Magnus also talks about aligning SCTE insertion with the MPEG structure and the importance of correct preparation of the source video.
Tony Brown from Newsy talks about the centralised nature of SSAI making management easier and gives an overview of decisioning, which brings together buyers and sellers of ads. Tony also discusses other analytics such as adjacency and targeting.
Jason Justman of Sinclair Broadcasting Group explains SCTE insertion and talks about the technical difficulties in reacting to live changes in programming.
Geir Magnusson, Jr. from fuboTV covers the difficulty of preparing ads quickly enough for thousands or millions of streams to receive customised SSAI ads at the same time, and discusses his strategy of pre-fetching ads from the ad server to prepare them ahead of time. Geir also highlights a misunderstanding that can exist: streaming provides the same video and programme experience as traditional broadcast, but ad buyers don’t all understand how much more targeting is possible – even with SSAI.