Video: Low Latency Streaming

There are two phases to reducing streaming latency: optimising the system you already have, and moving to a new protocol. This talk looks at both approaches, achieving parity with traditional broadcast through optimisation and going ‘better than broadcast’ by using CMAF.

In this video from the Northern Waves 2019 conference, Koen van Benschop from Deutsche Telekom examines the large, low-cost latency savings you can achieve by optimising your current HLS delivery. With Apple’s original recommendation being 10-second segments, there are still many services out there starting from a very high latency, so there are savings to be had.

Koen explains how the total latency is made up by looking at the encode, packaging, distribution and decode latencies. We quickly see that the player buffer is one of the largest contributors, the second being the encode latency. We explore the pros and cons of reducing these and see that the overall latency can fall to, or even below, traditional broadcast latency depending, of course, on which type (and which country’s) you are comparing it to.
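This kind of breakdown can be sketched as a simple additive budget. The component values below are illustrative assumptions, not figures from the talk, but they show why the player buffer (typically three full segments) dominates and why shortening segments pays off so quickly:

```python
# Illustrative HLS glass-to-glass latency budget.
# All component values are assumptions for illustration, not from the talk.

def hls_latency(segment_s, buffered_segments, encode_s, package_s, cdn_s):
    """Rough end-to-end latency estimate in seconds."""
    player_buffer_s = segment_s * buffered_segments  # usually the biggest term
    return encode_s + package_s + cdn_s + player_buffer_s

# Classic 10-second segments with a three-segment player buffer:
classic = hls_latency(segment_s=10, buffered_segments=3,
                      encode_s=4, package_s=1, cdn_s=1)   # 36 s

# The same pipeline after moving to 2-second segments and a faster encode:
optimised = hls_latency(segment_s=2, buffered_segments=3,
                        encode_s=2, package_s=1, cdn_s=1)  # 10 s

print(classic, optimised)
```

Even with every other stage untouched, cutting the segment duration removes tens of seconds because the buffer scales with it.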

While optimising HLS/DASH gets you down to a few seconds, there’s a strong desire for some services to beat that. Whilst broadcasters themselves may be reluctant to do this, not wanting to deliver online services quicker than their over-the-air offerings, online sports services such as DAZN can make latency a USP and deliver better value to fans. After all, DAZN and similar services benefit from low-second latency as it helps bring them in line with social media, which can have very low latency when it comes to key events such as goals and points being scored in live matches.

Stefan Arbanowski from Fraunhofer leads us through CMAF, covering what it is, the upcoming second edition and how it works. He covers its ability to be referenced by both .m3u8 (HLS) and .mpd (DASH) playlist/manifest files, and that its media format is fragmented MP4 (fMP4), built on ISO BMFF. One benefit inherited from DASH is its Common Encryption (CENC) standard, which allows it to work with PlayReady, FairPlay and other DRM systems.

Stefan then takes a moment to consider WebRTC. Given that it promises latency of less than one second, it can sound like a much better idea. Stefan outlines the concerns he has about its ability to scale beyond 200,000 users. He then turns his attention back to CMAF and outlines how the stream is composed and how the player logic works in order to successfully play at low latency.

Watch now!
Speakers

Koen van Benschop
Senior Manager TV Headend and DRM,
Deutsche Telekom
Stefan Arbanowski
Director Future Applications and Media,
Fraunhofer FOKUS

Video: Introducing Low-Latency HLS

HLS has taken the world by storm since its first release 10 years ago. Capitalising on the widely understood and deployed technologies already underpinning websites at the time, it brought with it great scalability and the ability to seamlessly move between different bitrate streams to help deal with varying network performance (and computer performance!).

HLS has continued to evolve over the years, with new versions documented as RFC drafts under the IETF. Its biggest problem for today’s market is latency. As originally specified, players buffered three 10-second segments before starting playback, so you were guaranteed at least 30 seconds of latency, and many viewers would see a minute. This has improved over the years, but only so far.

Low-Latency HLS (LL-HLS) is Apple’s answer to the latency problem: a way of bringing latency down to be comparable with broadcast television for those live broadcasts where immediacy really matters.

This talk from Apple’s HLS Technical Lead, Roger Pantos, given at Apple’s WWDC conference this year goes through the problems and the solution, clearly describing LL-HLS. Over the following weeks here on The Broadcast Knowledge we will follow up with some more talks discussing real-world implementations of LL-HLS, but to understand them, we really need to understand the fundamental proposition.

Apple has always been the gatekeeper to HLS, and this is one reason MPEG-DASH exists: a streaming standard that is separate from any one corporation and has the benefit of being ratified by a standards body (MPEG). So who better to give the initial introduction?

HLS is a chunk-based streaming protocol, meaning that the illusion of a continuous stream is created by downloading many separate files in quick succession. It’s the need to maintain a pipeline of these files which causes much of the delay, both in creating them and in stacking them up for playback. LL-HLS drastically reduces this intrinsic latency with techniques such as shortening the chunks and transferring only parts of them.
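To illustrate how those parts are advertised, here is a schematic LL-HLS Media Playlist excerpt. The tags come from Apple’s LL-HLS specification, but the filenames, sequence numbers and durations are invented for illustration:

```
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:4
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0
#EXT-X-PART-INF:PART-TARGET=0.33334
#EXT-X-MEDIA-SEQUENCE:266
#EXTINF:4.00000,
fileSequence266.mp4
#EXT-X-PART:DURATION=0.33334,URI="filePart267.0.mp4"
#EXT-X-PART:DURATION=0.33334,URI="filePart267.1.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="filePart267.2.mp4"
```

Rather than waiting for the whole of the next 4-second segment, the player can fetch each sub-second part as soon as it is advertised – and the preload hint lets it request the next part before it even exists – which is where the latency saving comes from.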

Another requirement of LL-HLS is HTTP/2, an advance on HTTP which brings benefits such as multiplexing multiple requests over a single connection, thereby reducing overheads.

Roger carefully paints the whole picture and shows how this is intended to work. So while the industry is still in the midst of implementing this protocol, take some time to understand it from the source – from Apple.

Watch now!
Download the presentation

Speaker

Roger Pantos
HLS Technical Lead,
Apple

Video: Understanding esports production

Esports is here to stay and brings a new dimension to big events, combining the usual challenges of producing and broadcasting events at scale with less usual challenges such as non-standard resolutions and frame rates. This session from the IBC 2019 conference looks at the reality of bringing such events to life.

The talk starts with a brief introduction to some Esports-only terms before heading into the discussion, starting with Simon Eicher, who talks about his switch toward typical broadcast tools for Esports, which has helped drive better production values and storytelling. Maxwell Trauss from Riot Games explains how they incubated a group of great producers and were able to keep production values high by having them work on shows remotely worldwide.

Blizzard uses a clean ‘world feed’ which is shared worldwide for regions to regionalise with graphics and language before broadcasting to their audiences. In terms of creating better storytelling, Blizzard has its own software which interprets the game data and presents it in a more consumable way to the production staff.

Observers are the people who control the in-game cameras, and a producer can call out to any one of them. The panel talks about how separating the players, the observers and the crowd allows them to change the delay between what’s happening in the game and each of these groups seeing it. At the beginning of the event, this creates the opportunity to move the crowd backwards in time so that players don’t get tipped off; similarly, the players can be isolated from the observers for the same effect. By the end of the game, however, the delays have been wound back to bring everyone into the present for a tense finale.

Corey Smith from Blizzard explains their cloud setup, in which graphics are added to clean feeds in the cloud, ultimately allowing a single clean feed to leave the venue. ESL, on the other hand, chooses to create its streams locally.

Ryan Chaply from Twitch explains their engagement models, some of which reward viewers simply for watching. Twitch’s real-time chat also changes the way productions are made, because producers have direct feedback from viewers. This leads, day by day, to tweaks to the format: a production may stop doing a certain thing by day three if it’s not well received; conversely, when something is a hit, they can capitalise on it.

Ryan also talks about what Twitch is weighing up in terms of when to start using UHD. Riot’s Maxwell raises the question of whether fans really want 4K at the moment; acknowledging it’s an inevitability, he asks whether the priority is actually having more and better stats.

The panel finishes with a look to the future: the continued adoption of broadcast technology into Esports, timing in the cloud and dealing with end-to-end metadata, followed by a video giving a taste of the Esports event.

Watch now!
Speakers

Simon Eicher
Executive Producer, Director of Broadcast, eSports Services,
ESL
Ryan Chaply
Senior Esports Program Manager,
Twitch
Corey Smith
Director, Live Operations Broadcast Technology Group,
Blizzard
Maxwell Trauss
Broadcast Architect,
Riot Games
Jens Fischer
Global Esport Specialist and Account Manager D.A.CH,
EVS

Video: Streaming Live Events: When it must be alright on the night

Live streaming is an important part not only of online viewing, but increasingly of broadcast in general. It’s well documented that live programming is key to keeping linear broadcast’s tradition of ‘everyone watching at once’, which has been diluted – with both pros and cons – by the rise of non-linear viewing in recent years.

This panel, part of IBC’s Content Everywhere, looks at the drivers behind live streaming, how it’s evolving and its future. Bringing together ultra-low-latency platform nanocosmos, managed service provider M2A Media and video player specialists VisualOn, Russell Trafford-Jones, Editor of The Broadcast Knowledge, starts the conversation by asking what gamification is and how it plays in to live streaming.

nanocosmos’s Oliver Lietz explains how gamification is an increasing trend, not only for monetising existing content but as a genre in its own right, providing content which is either entirely a game or has a significant interactive element. With such services, it’s clear that latency needs to be almost zero, which is why his company’s ability to deliver one-second latency has given him experience in these projects.

We also hear from VisualOn’s Michael Jones, who explains the low-latency service they were involved in delivering. Here, low-latency CMAF was used in conjunction with local synced-screen technology to ensure that not only was latency low, but second-screen devices were not showing video any earlier or later than the main screen. The panel then discusses the importance of latency compared to synchronised viewing, and the cases where ultra-low latency is unnecessary.

Valentijn Siebrands from M2A talks about using live streaming and cloud production to deliver lower-cost sports events, but also to deliver new types of programming. Valentijn then takes us into the topic of analytics, underlining the importance of streaming analytics which reveal the health of your platform and infrastructure as much as the analytics which are more usually talked about: those which tell you the quality of experience your viewers are having and their activities within your app.

The talk concludes with a look to the future, talking about the key evolving technologies of the moment and how they will help us move forward between now and IBC’s Content Everywhere Hub in 2021.

Watch now!

Speakers

Oliver Lietz
CEO & Founder,
nanocosmos
Michael Jones
SVP and Head of Business Development,
VisualOn Inc
Valentijn Siebrands
Solutions Architect,
M2A Media
Russell Trafford-Jones – Moderator
Manager, Support & Services – Techex
Executive Member – IET Media Technical Network