Video: Scaling Live OTT with DASH


MPEG DASH is an open streaming standard which provides a stable distribution chain, detailing aspects such as packaging and DRM, and also forms the basis of low-latency CMAF streaming.

DASH manifest files, text files which list the many small media files that make up the stream, can be long, complicated and slow to parse, as Hulu's Zachary Cava demonstrates. As a live event continues, the number of chunks to describe grows, so manifest files can easily reach hundreds of kilobytes and eventually megabytes. At that point, the standard way of producing these .mpd files slows the player down until it can no longer keep up with the stream.
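To get a feel for the scale of the problem, a rough back-of-envelope calculation (the bytes-per-entry figure is an illustrative assumption, not a number from the talk) shows how quickly a naive live manifest grows:

```python
# A back-of-envelope model of how a live DASH manifest grows, assuming
# (hypothetically) ~120 bytes of XML per segment entry and 2 s segments.
def manifest_entries(event_hours: float, segment_s: float) -> int:
    """Number of segment entries a naive .mpd must list for a live event."""
    return int(event_hours * 3600 / segment_s)

def manifest_size_kb(entries: int, bytes_per_entry: int = 120) -> float:
    """Approximate manifest size if every segment gets its own entry."""
    return entries * bytes_per_entry / 1024

entries = manifest_entries(24, 2)
print(entries)                    # 43200 entries after 24 hours
print(manifest_size_kb(entries))  # 5062.5 KB -- megabytes of text to re-parse
```

Every one of those kilobytes is re-fetched and re-parsed at each manifest refresh, which is exactly the scaling problem the talk addresses.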

Zachary goes over some initial optimisations which help a lot in reducing the size of the manifests before introducing a method of solving the scalability issue itself. He explains that patching the .mpd file is the way to go, meaning the player need only fetch the values that have changed since the previous manifest.
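As a hedged illustration of the idea, a patch document might carry only the new segment-timeline entries rather than the whole manifest. The element names, namespace and attribute values below are my sketch of the 4th-edition patch framework, not taken from the talk:

```xml
<!-- Illustrative only: a patch that references the original MPD by id and
     publish time, and adds just the newest SegmentTimeline entry. -->
<Patch xmlns="urn:mpeg:dash:schema:mpd-patch:2020"
       mpdId="live-stream-1"
       originalPublishTime="2019-10-01T12:00:00Z"
       publishTime="2019-10-01T12:00:06Z">
  <add sel="/MPD/Period[@id='p0']/AdaptationSet[@id='0']/SegmentTemplate/SegmentTimeline">
    <S t="900000" d="180000"/>
  </add>
</Patch>
```

A patch like this stays a few hundred bytes no matter how long the event runs, which is what restores scalability.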

With on-screen examples of manifest files, we clearly see how this works, and we see that this method is still compatible with branching of the playback, e.g. for regionalisation of advertising or programming.

Zachary finishes by explaining that this technique is arriving in the 4th edition of MPEG-DASH and by answering questions from the audience.

Watch now!

Speaker

Zachary Cava
Video Platform Architect,
Hulu

Video: Broadcast 101 – Audio in an IP Infrastructure

Uncompressed audio has been in the IP game a lot longer than uncompressed video. Because of this long history, it's had the chance to spawn a fair number of formats ahead of the current standard, AES67. Since many people were trying to achieve the same thing, we find that some formats are partly compatible with AES67, whilst others are not compatible at all.

To navigate this difficult world of compatibility, Axon CTO Peter Schut continues the Broadcast 101 webinar series with this video recorded this month.

Peter starts by explaining the different audio formats available today, including Dante, RAVENNA and others, and outlines the ways in which they do and don't interoperate. After spending a couple of minutes summarising each format individually, including the two SMPTE audio formats, -30 and -31, he shows a helpful table comparing them.

Timing is next on the list, with a discussion of PTP and the way SMPTE ST 2059 is used. Packet time is then covered, explaining how the RTP payload fits into the equation. The payload size directly determines the duration of audio you can fit into a packet; keeping this duration short is important for low latency, and SMPTE ST 2110-30 restricts it to either 1ms or 125 microseconds.
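The arithmetic behind those two packet times is simple enough to sketch. This is a hedged example; the 24-bit, 8-channel figures are illustrative assumptions, not numbers from the webinar:

```python
def samples_per_packet(sample_rate_hz: float, packet_time_s: float) -> int:
    """Audio samples per channel carried in one RTP packet."""
    return round(sample_rate_hz * packet_time_s)

def payload_bytes(sample_rate_hz: float, packet_time_s: float,
                  channels: int, bits_per_sample: int = 24) -> int:
    """RTP payload size for linear PCM at the given packet time."""
    return (samples_per_packet(sample_rate_hz, packet_time_s)
            * channels * bits_per_sample // 8)

# 1 ms at 48 kHz -> 48 samples per channel
print(samples_per_packet(48_000, 0.001))     # 48
# 125 us at 48 kHz -> 6 samples per channel
print(samples_per_packet(48_000, 0.000125))  # 6
# e.g. 8 channels of 24-bit audio at 1 ms packet time
print(payload_bytes(48_000, 0.001, channels=8))  # 1152 bytes
```

The shorter 125-microsecond packet time trades more packets per second for less audio held up in each packet, which is why it is the low-latency option.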

Peter finishes up this webinar talking about some further details about the interoperability problems between the formats.

Watch now!

Speaker

Peter Schut
CTO,
Axon

Video: Introducing Low-Latency HLS

HLS has taken the world by storm since its first release 10 years ago. Capitalising on the widely understood and deployed technologies already underpinning websites at the time, it brought with it great scalability and the ability to seamlessly move between different bitrate streams to help deal with varying network performance (and computer performance!).

HLS has continued to evolve over the years with the new versions being documented as RFC drafts under the IETF. Its biggest problem for today’s market is its latency. As originally specified, you were guaranteed at least 30 seconds latency and many viewers would see a minute. This has improved over the years, but only so far.

Low-Latency HLS (LL-HLS) is Apple's answer to the latency problem: a way of bringing latency down to be comparable with broadcast television for those live broadcasts where immediacy really matters.

Please note: Since this video was recorded, Apple has released a new draft of LL-HLS. As described in this great article from Mux, the update’s changes are

  • “Delivering shorter sub-segments of the video stream (Apple call these parts) more frequently (every 0.3 – 0.5s)
  • Using HTTP/2 PUSH to deliver these smaller parts, pushed in response to a blocking playlist request
  • Blocking playlist requests, eliminating the current speculative manifest request polling behaviour in HLS
  • Smaller, delta rendition playlists, which reduces playlist size, which is important since playlists are requested more frequently
  • Faster rendition switching, enabled by rendition reports, which allows clients to see what is happening in another playlist without requesting it in its entirety”[0]
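To make the quoted points concrete, here is a hypothetical fragment of an LL-HLS media playlist. The tag names follow Apple's draft as I understand it; the URIs, durations and sequence numbers are invented:

```
#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:6
#EXT-X-PART-INF:PART-TARGET=0.33334
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0
#EXTINF:6.00000,
segment266.mp4
#EXT-X-PART:DURATION=0.33334,URI="segment267.part0.mp4",INDEPENDENT=YES
#EXT-X-PART:DURATION=0.33334,URI="segment267.part1.mp4"
#EXT-X-RENDITION-REPORT:URI="../720p/media.m3u8",LAST-MSN=267,LAST-PART=1
```

The rendition report at the end is what lets a player switch bitrates without first fetching the other playlist in full, matching the last bullet above.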

Read the full article for the details and implications, some of which address some points made in the talk.

Furthermore, THEOplayer have released this talk explaining the changes and discussing implementation.

This talk from Apple’s HLS Technical Lead, Roger Pantos, given at Apple’s WWDC conference this year goes through the problems and the solution, clearly describing LL-HLS. Over the following weeks here on The Broadcast Knowledge we will follow up with some more talks discussing real-world implementations of LL-HLS, but to understand them, we really need to understand the fundamental proposition.

Apple has always been the gatekeeper of HLS, and this is one reason that MPEG DASH exists: a streaming standard that is separate from any one corporation and has the benefit of being ratified by a standards body (MPEG). So who better to give the initial introduction?

HLS is a chunk-based streaming protocol meaning that the illusion of a perfect stream of data is given by downloading in quick succession many different files and it’s the need to have a pipeline of these files which causes much of the delay, both in creating them and in stacking them up for playback. LL-HLS uses techniques such as reducing chunk length and moving only parts of them in order to drastically reduce this intrinsic latency.
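A simplistic back-of-envelope model shows why shrinking and splitting chunks cuts latency so sharply. The buffer counts and encoding delay below are illustrative assumptions, not figures from the talk:

```python
def glass_to_glass_estimate(chunk_s: float, buffered_chunks: int,
                            encode_s: float = 1.0) -> float:
    """Rough latency: encoding delay plus the media the player buffers.
    A simplistic model -- real latency also includes CDN and network delays."""
    return encode_s + chunk_s * buffered_chunks

# Classic HLS: 6 s segments with 3 buffered before playback starts
print(glass_to_glass_estimate(6.0, 3))   # 19.0 s before network overheads
# LL-HLS style: 0.33 s parts with 3 buffered
print(glass_to_glass_estimate(0.33, 3))  # under 2 s
```

The pipeline of whole files dominates the classic figure; moving sub-segment parts as soon as they exist is what removes most of that intrinsic delay.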

Another requirement of LL-HLS is HTTP/2, an advance on HTTP bringing with it benefits such as carrying multiple requests over a single HTTP connection, thereby reducing overheads, and request pipelining.

Roger carefully paints the whole picture and shows how this is intended to work. So while the industry is still in the midst of implementing this protocol, take some time to understand it from the source – from Apple.

Watch now!
Download the presentation

Speaker

Roger Pantos
HLS Technical Lead,
Apple

Video: M6 France – Master Control and Playout IP Migration

French broadcast company M6 Group has recently moved to an all-IP workflow, employing the SMPTE ST 2110 suite of standards for professional media delivery over IP networks. The two main playout channels and the MCR have already been upgraded, and the next few channels will be transitioned to the new core soon.

The M6 system comprises equipment from five different vendors (Evertz, Tektronix, Harmonic, Ross and TSL), all managed and controlled using the AMWA NMOS IS-04 and IS-05 specifications. Such interoperability is an inherent aim of the SMPTE ST 2110 suite of standards, allowing customers to focus on the operational workflows and flexibility that IP brings them. Centralised management and configuration of the system is provided through web interfaces, which also allow for easy, automated addition of new equipment.

Thanks to Software Defined Orchestration and intuitive touch-screen interfaces, information such as source paths, link bandwidth and status, and device details can be quickly accessed via a web GUI. As the system is based on an IP network, it is possible to come in and out of the fabric numerous times without the cost implications you would have in the SDI world. Every point of the signal chain can be easily visualised, which enables broadcast engineers to maintain and configure the system with ease.

You can see the slides here.

Watch now!

Speaker

Slavisa Gruborovic
Solution Architect
Evertz Microsystems Inc.
Fernando Solanes
Director Solutions Engineering
Evertz Microsystems Inc.