Video: Scaling Live OTT with DASH


MPEG DASH is an open standard for streaming which provides a stable, vendor-neutral distribution chain, covering aspects such as packaging and DRM, and it forms the basis for low-latency CMAF streaming.

DASH manifest files (.mpd) are text files which list the many small media segments that make up the stream. As Hulu’s Zachary Cava demonstrates, they can become complicated, long and slow to parse. As a live event continues, the number of segments to describe keeps increasing, so manifests can easily grow to hundreds of kilobytes and eventually to megabytes, meaning the standard way of producing these .mpd files will end up slowing the player down to the point it can’t keep up with the stream.
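
To picture the problem, here is a hedged sketch (element names from DASH, values invented for illustration) of how a SegmentTimeline in a live MPD keeps accumulating entries; every manifest refresh re-sends the whole, ever-longer list:

```xml
<!-- Illustrative only: a live MPD's SegmentTimeline grows with the event.
     Each <S> describes a segment (t = start time, d = duration, both in
     timescale units); every refresh repeats the entire history so far. -->
<SegmentTimeline>
  <S t="0" d="540000"/>
  <S t="540000" d="540000"/>
  <S t="1080000" d="539460"/>
  <!-- ...thousands more entries after a few hours of live playout... -->
</SegmentTimeline>
```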

Zachary goes over some initial optimisations which help a lot in reducing the size of the manifests before introducing a method of solving the scalability issue itself: patching the .mpd file, so that each update need only reference the values that have changed since the last one.
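
As a hedged illustration of the idea (a simplified sketch rather than a normative 4th-edition example; ids and values invented), a patch document carries only the delta, here appending one new segment entry to the timeline above:

```xml
<!-- Illustrative only: an MPD patch referencing just what changed.
     The player fetches this small document instead of the full manifest
     and applies the operation to its in-memory copy of the MPD. -->
<Patch mpdId="live-channel-1"
       originalPublishTime="2019-08-01T10:00:00Z"
       publishTime="2019-08-01T10:00:06Z">
  <!-- An XPath-style selector targets the timeline to extend -->
  <add sel="/MPD/Period[@id='p0']/AdaptationSet[@id='1']/SegmentTemplate/SegmentTimeline">
    <S t="1619460" d="540000"/>
  </add>
</Patch>
```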

With on-screen examples of manifest files, we see clearly how this works, and that the method remains compatible with branching of the playback, e.g. for regionalisation of advertising or programming.

Zachary finishes by explaining that this technique is arriving in the 4th edition of MPEG-DASH and by answering questions from the audience.

Watch now!

Speaker

Zachary Cava
Video Platform Architect,
Hulu

Video: Introducing Low-Latency HLS

HLS has taken the world by storm since its first release 10 years ago. Capitalising on technologies already widely understood and deployed to underpin websites at the time, it brought with it great scalability and the ability to seamlessly move between different bitrate streams to help deal with varying network performance (and computer performance!)

HLS has continued to evolve over the years, with new versions documented as RFC drafts under the IETF. Its biggest problem for today’s market is latency. As originally specified, with roughly ten-second segments and players starting several segments behind the live edge, you were guaranteed at least 30 seconds of latency and many viewers would see a minute. This has improved over the years, but only so far.

Low-Latency HLS (LL-HLS) is Apple’s answer to the latency problem: a way of bringing latency down to be comparable with broadcast television for those live broadcasts where immediacy really matters.

Please note: Since this video was recorded, Apple has released a new draft of LL-HLS. As described in this great article from Mux, the update’s changes are:

  • “Delivering shorter sub-segments of the video stream (Apple call these parts) more frequently (every 0.3 – 0.5s)
  • Using HTTP/2 PUSH to deliver these smaller parts, pushed in response to a blocking playlist request
  • Blocking playlist requests, eliminating the current speculative manifest request polling behaviour in HLS
  • Smaller, delta rendition playlists, which reduces playlist size, which is important since playlists are requested more frequently
  • Faster rendition switching, enabled by rendition reports, which allows clients to see what is happening in another playlist without requesting it in its entirety”[0]
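
To make those concepts concrete, here is a hedged, hypothetical fragment of an LL-HLS media playlist (tag names are from the updated draft; URIs and values are invented for illustration):

```
# Illustrative only: partial segments, server control and a rendition
# report as introduced by the LL-HLS draft.
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0,CAN-SKIP-UNTIL=12.0
#EXT-X-PART-INF:PART-TARGET=0.333
#EXTINF:6.0,
segment100.mp4
#EXT-X-PART:DURATION=0.333,INDEPENDENT=YES,URI="segment101.part1.mp4"
#EXT-X-PART:DURATION=0.333,URI="segment101.part2.mp4"
#EXT-X-RENDITION-REPORT:URI="../1M/media.m3u8",LAST-MSN=101,LAST-PART=1
```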

Read the full article for the details and implications, some of which address some points made in the talk.

Furthermore, THEOplayer have released this talk explaining the changes and discussing implementation.

This talk from Apple’s HLS Technical Lead, Roger Pantos, given at Apple’s WWDC conference this year, goes through the problems and the solution, clearly describing LL-HLS. Over the following weeks here on The Broadcast Knowledge we will follow up with some more talks discussing real-world implementations of LL-HLS, but to understand them, we really need to understand the fundamental proposition.

Apple has always been the gatekeeper to HLS, and this is one reason MPEG DASH exists: a streaming standard separate from any one corporation, with the benefit of being ratified by a standards body (MPEG). So who better than Apple to give the initial introduction?

HLS is a chunk-based streaming protocol, meaning the illusion of a continuous stream is created by downloading many separate files in quick succession. It’s the need to maintain a pipeline of these files, both creating them and stacking them up for playback, which causes much of the delay. LL-HLS uses techniques such as reducing chunk length and transferring only parts of chunks in order to drastically reduce this intrinsic latency.
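
As a rough worked example (illustrative numbers only): with six-second segments and a player buffering three of them before starting, you sit at least 18 seconds behind live before any encoding or delivery delay is counted. Shrink the units of transfer to 0.3 – 0.5 second parts delivered as soon as they are produced, and that same pipeline can, in principle, come down to a couple of seconds.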

Another requirement of LL-HLS is HTTP/2, an advance on HTTP which brings benefits such as multiplexing multiple requests over a single connection, thereby reducing overheads, and pushing resources to the client.
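
For example (hypothetical URL; the query directives are from the LL-HLS draft, shown in illustrative request-line notation), a client can ask for a playlist update that does not exist yet and the server holds the response until it does:

```
# Illustrative only: a blocking playlist reload. The server withholds
# its response until Media Sequence Number 102, part 0 is available,
# replacing the old poll-and-retry behaviour.
GET /live/stream_1M/media.m3u8?_HLS_msn=102&_HLS_part=0 HTTP/2
```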

Roger carefully paints the whole picture and shows how this is intended to work. So while the industry is still in the midst of implementing this protocol, take some time to understand it from the source – from Apple.

Watch now!
Download the presentation

Speaker

Roger Pantos
HLS Technical Lead,
Apple

Video: A Standard for Video QoE Metrics

A standard for Quality of Experience (QoE) metrics, covering things like rebuffering time, is in progress under the CTA standards body. The goal of the group is to come up with a standard set of player events, metrics and terminology around QoE in streaming. Even ‘concurrent viewers’ isn’t as easy to define as it sounds: if a user is paused, are they concurrently viewing the video? Buffer underruns are variously called rebuffering, stalling or waiting. The group is intentionally focussing on what viewers actually see and experience; QoS, by contrast, measures how well the platform is performing, which is not necessarily the same as what viewers experience.

The standard has the idea of different levels. There are player properties and events, which are standardised ways of signalling that certain things are happening, and Session Metrics, which can then feed into Aggregate Metrics. The first set of metrics includes things such as playback failure percentage, average playback stalled rate, average startup time and playback rate, with the aim of setting a baseline and starting to get feedback from companies as they implement these seemingly simple metrics.
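
As a hedged worked example (illustrative numbers; the standard’s precise definitions govern): if a session takes 1.2 seconds from the play request to the first rendered frame, plays for 600 seconds and stalls for a total of 12 seconds, its startup time is 1.2 s and its playback stalled rate works out at 12/600 = 2%. Averaging those per-session values across all sessions would then produce the aggregate metrics.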

This first release can be found on GitHub.

Watch now!
Speaker

Steve Heffernan
Co-Founder, Head of Product,
Mux

Video: Specification of Live Media Ingest

“Standardisation is more than just a player format.” There’s so much more to a streaming service than the video; a whole ecosystem needs to work together. In this talk from Comcast’s Mile High Video 2019, we see how different parts of the ecosystem are being standardised for live ingest.

RTMP and Smooth Streaming are being phased out. Without proper support for HEVC, VVC, HDR and the like, they are losing relevance and, in the case of RTMP, support for the format itself is waning. Indeed, it’s clear that fragmented MP4 (fMP4) and CMAF are taking hold in their place, so it makes sense for a new ingest standard to coalesce around these formats.

Rufael Mekuria from Unified Streaming explains this effort to create a specification for live media ingest, happening as part of the DASH Industry Forum (DASH-IF). The work itself started at the end of 2017 with the aim of publishing in summer 2019, supporting both CMAF and DASH/HLS interfaces.

Rufael explains that CMAF ingest uses HTTP POST to move each media stream to the origin packager. The tracks are separated into video, audio, timed text, subtitles and timed metadata, each transferred independently, an approach which remains compatible with future codecs. He also covers security and timed text before moving on to DASH/HLS ingest, which can itself carry CMAF media since HLS supports CMAF.
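
As a hedged sketch of what that looks like on the wire (hostname and path invented; the mechanism, a long-running chunked POST per track, is what the talk describes):

```
POST /ingest/channel1/video_3000k.cmfv HTTP/1.1
Host: origin-packager.example.com
Content-Type: video/mp4
Transfer-Encoding: chunked

[CMAF header box, then a continuous series of CMAF fragments;
 one such POST body per track: video, audio, timed text, metadata]
```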

Reference software is available along with the specification: https://dashif-documents.azurewebsites.net/Ingest/master/DASH-IF-Ingest.pdf

Watch now!
Speaker

Rufael Mekuria Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming