Video: Introducing Low-Latency HLS

HLS has taken the world by storm since its first release 10 years ago. Capitalising on the widely understood and deployed technologies already underpinning websites at the time, it brought with it great scalability and the ability to move seamlessly between different bitrate streams to help deal with varying network performance (and computer performance!).

HLS has continued to evolve over the years, with new versions documented as drafts under the IETF. Its biggest problem for today’s market is its latency. As originally specified, you were guaranteed at least 30 seconds of latency, and many viewers would see a minute. This has improved over the years, but only so far.

Low-Latency HLS (LL-HLS) is Apple’s answer to the latency problem: a way of bringing latency down to be comparable with broadcast television for those live broadcasts where immediacy really matters.

Please note: since this video was recorded, Apple has released a new draft of LL-HLS. As described in this great article from Mux, the update’s changes are:

  • “Delivering shorter sub-segments of the video stream (Apple call these parts) more frequently (every 0.3 – 0.5s)
  • Using HTTP/2 PUSH to deliver these smaller parts, pushed in response to a blocking playlist request
  • Blocking playlist requests, eliminating the current speculative manifest request polling behaviour in HLS
  • Smaller, delta rendition playlists, which reduces playlist size, which is important since playlists are requested more frequently
  • Faster rendition switching, enabled by rendition reports, which allows clients to see what is happening in another playlist without requesting it in its entirety”[0]
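
To make the blocking playlist request concrete, here’s a minimal sketch in Python of how a client might fetch updates without polling. The _HLS_msn and _HLS_part query directives come from Apple’s LL-HLS draft; the URL and sequence numbers are hypothetical.

    # Minimal sketch of an LL-HLS blocking playlist request (hypothetical URL).
    # The client names the next Media Sequence Number (MSN) and part it wants;
    # the server holds the response until that part actually exists, removing
    # the need for speculative polling. Adding _HLS_skip=YES would request the
    # smaller delta playlist mentioned above.
    import urllib.request

    RENDITION = "https://example.com/live/720p.m3u8"  # hypothetical rendition

    def fetch_blocking(next_msn: int, next_part: int) -> str:
        url = f"{RENDITION}?_HLS_msn={next_msn}&_HLS_part={next_part}"
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    # Block until part 2 of media sequence 100 is available, then parse it.
    playlist = fetch_blocking(100, 2)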

Read the full article for the details and implications, some of which address points made in the talk.

Furthermore, THEOplayer have released this talk explaining the changes and discussing implementation.

This talk from Apple’s HLS Technical Lead, Roger Pantos, given at Apple’s WWDC conference this year, goes through the problems and the solution, clearly describing LL-HLS. Over the following weeks here on The Broadcast Knowledge we will follow up with more talks discussing real-world implementations of LL-HLS, but to understand them, we really need to understand the fundamental proposition.

Apple has always been the gatekeeper to HLS, and this is one reason MPEG-DASH exists: a streaming standard that is separate from any one corporation and has the benefit of being ratified by a standards body (MPEG). So who better to give the initial introduction?

HLS is a chunk-based streaming protocol, meaning that the illusion of a continuous stream is created by downloading many separate files in quick succession. It’s the need to maintain a pipeline of these files, both in creating them and in stacking them up for playback, which causes much of the delay. LL-HLS drastically reduces this intrinsic latency with techniques such as shortening chunks and transferring partial chunks as soon as they are ready.
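
As a rough, illustrative model (these figures aren’t from the talk): a player typically buffers a few chunks before starting, so the minimum delay from buffering alone scales with chunk duration times buffer depth. Shrinking the unit of transfer is therefore the biggest single lever.

    # Back-of-envelope model of the latency intrinsic to chunked streaming.
    def min_buffering_latency(chunk_seconds: float, chunks_buffered: int) -> float:
        return chunk_seconds * chunks_buffered

    print(min_buffering_latency(6.0, 3))   # classic HLS: 6 s segments -> 18 s
    print(min_buffering_latency(0.33, 3))  # LL-HLS parts: ~0.33 s parts -> ~1 s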

Another requirement of LL-HLS is HTTP/2, an advance on HTTP/1.1 that brings benefits such as multiplexing multiple requests over a single connection, thereby reducing overheads.
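
As a sketch of what that multiplexing looks like in practice, the snippet below uses the third-party httpx library (installed with pip install httpx[http2]); the URLs are placeholders. All the requests share one connection rather than opening one each, which matters when a client is fetching playlists and parts several times a second.

    # Sketch: multiple fetches multiplexed over a single HTTP/2 connection.
    # Requires `pip install httpx[http2]`; URLs are placeholders.
    import httpx

    with httpx.Client(http2=True) as client:
        playlist = client.get("https://example.com/live/playlist.m3u8")
        part = client.get("https://example.com/live/seg100.part2.mp4")
        print(playlist.http_version)  # "HTTP/2" when the server negotiates it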

Roger carefully paints the whole picture and shows how this is intended to work. So while the industry is still in the midst of implementing this protocol, take some time to understand it from the source – from Apple.

Watch now!
Download the presentation

Speaker

Roger Pantos
HLS Technical Lead,
Apple

Webinar: AWS – Behind the Stream

Date: November 14, 2019 / 8am PST / 11am EST / 16:00 GMT

Behind The Stream is an online show containing three webinars designed for sports media broadcasters, athletic teams, and digital rights holders.

The first of the three sessions covers creating the right experience for the service. Particularly in sports, there are different ways to present graphics and stats, to offer interactivity and to innovate in order to keep the audience with you and interested.

The second session is an intriguing look into using machine learning to analyse the video and create metadata, including player tracking, and then how to process and display that data to add an extra layer of interest for the audience.

Last, and the longest session of the three, is an hour spent whiteboarding the streaming system itself: how the different elements in the cloud work together and what to look out for when implementing this for yourself.

Whilst these sessions are specifically about AWS services, many of the principles carry over to other cloud providers. And with AWS being synonymous, for many, with ‘cloud’, learning the AWS way of doing things is a fantastic way to learn about operating in the cloud in general.

Register now!

Video: Mitigating Online Video Delivery Latency

Real-world solutions to real-world streaming latency in this panel from the Content Delivery Summit at Streaming Media East. With everyone chasing reductions in latency, many with the goal of matching traditional broadcast latencies, there are a heap of tricks and techniques at each stage of the distribution chain to get things done quicker.

The panel starts by surveying the way these companies are already serving video. Comcast, for example, are reducing latency by extending their network to edge CDNs. Anevia identified encoding as the biggest introducer of latency, with packaging second.

Bitmovin’s Igor Oreper talks about Periscope’s work with low-latency HLS (LHLS), explaining how Bitmovin deployed their player with Twitter and worked closely with them to ensure LHLS worked seamlessly. Periscope’s LHLS is documented in this blog post.

The panel shares techniques for avoiding latency, such as keeping ABR ladders small to ensure CDNs cache all the segments. Damien from Anevia points out that low latency can quickly become pointless if a stream arrives on an iPhone before it does on an Android device; relative latency really matters and can matter more than absolute latency.

The importance of HTTP, and which version, is next up for discussion. HTTP/1.1 is still widely used, but there’s increasing interest in HTTP/2 and QUIC, which both handle connections better and reduce overheads, thus reducing latency, though often only slightly.

The panel finishes with a Q&A after discussing how to operate in multi-CDN environments.

Watch now!
Speakers

Damien Lucas
CTO & Co-Founder,
Anevia
Ryan Durfey
CDN Senior Product Manager,
Comcast Technology Solutions
Igor Oreper
Vice President, Solutions
Bitmovin
Eric Klein
Director, Content Distribution,
Disney Streaming Services (formerly BAMTECH Media)
Dom Robinson
Director,
id3as

Video: Understanding esports production

Esports is here to stay, bringing a new dimension to big events: they combine the usual challenges of producing and broadcasting events at scale with less usual challenges such as non-standard resolutions and frame rates. This session from the IBC 2019 conference looks at the reality of bringing such events to life.

The talk starts with a brief introduction to some esports-only terms before heading into the discussion, starting with Simon Eicher, who talks about his switch towards typical broadcast tools for esports, which has helped drive better production values and storytelling. Maxwell Trauss from Riot Games explains how they incubated a group of great producers and were able to keep production values high by having them work on shows remotely worldwide.

Blizzard produces a clean ‘world feed’ which is shared worldwide for regions to regionalise with graphics and language before broadcasting it onwards. In terms of creating better storytelling, Blizzard have their own software which interprets the game data and presents it in a more consumable way to the production staff.

Observers are people who control in-game cameras, and a producer can call out to any one of them. The panel talks about how separating the players from the observers and the crowd allows them to change the delay between what’s happening in the game and each of these groups seeing it. At the beginning of the event, this creates the opportunity to move the crowd backwards in time so that players don’t get tipped off. Similarly, the players can be isolated from the observers for the same effect. By the end of the game, however, the delays have been wound down to bring everyone back into present time for a tense finale.
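
Purely as an illustration of that idea (this is not any broadcaster’s actual tooling, and the numbers are made up), the staggered delays might be modelled like this:

    # Hypothetical model of staggered audience delays across a 40-minute match.
    def audience_delays(minute: float, match_minutes: float = 40.0) -> dict:
        remaining = max(0.0, match_minutes - minute) / match_minutes
        return {
            "players": 0.0,                # always in real time
            "observers": 5.0 * remaining,  # in-game camera operators
            "crowd": 120.0 * remaining,    # venue screens lag the most
        }

    print(audience_delays(0.0))   # start: crowd two minutes behind the players
    print(audience_delays(40.0))  # finale: everyone back in present time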

Corey Smith from Blizzard explains the cloud setup, in which clean feeds have graphics added in the cloud; the end goal is a single clean feed coming out of the venue. ESL, on the other hand, choose to create their streams locally.

Ryan Chaply from Twitch explains their engagement models, some of which reward viewers for watching. Twitch’s real-time chat also changes the way productions are made because the producers have direct feedback from the viewers. This leads, day by day, to tweaks to the format: a production may stop doing a certain thing by day three if it’s not well received; conversely, when something is a hit, they can capitalise on it.

Ryan also talks about what they are weighing up in deciding when to start using UHD. Riot’s Maxwell questions whether fans really want 4K at the moment; while acknowledging it’s an inevitability, he asks whether the priority is actually having more and better stats.

The panel finishes with a look to the future (the continued adoption of broadcast techniques into esports, timing in the cloud, and dealing with end-to-end metadata) and a video giving a taste of the esports event.

Watch now!
Speakers

Simon Eicher
Executive Producer, Director of Broadcast, eSports Services,
ESL
Ryan Chaply
Senior Esports Program Manager,
Twitch
Corey Smith
Director, Live Operations Broadcast Technology Group,
Blizzard
Maxwell Trauss
Broadcast Architect,
Riot Games
Jens Fischer
Global Esport Specialist and Account Manager D.A.CH,
EVS