Video: There and back again: reinventing UDP streaming with QUIC

QUIC is an encrypted transport protocol designed for better performance than TCP-based HTTP, but will it help video streaming platforms? Often conflated with HTTP/3, QUIC is a UDP-based evolution of HTTP/2 which, in turn, was a shake-up of HTTP/1.1, the standard delivery method for websites. HTTP/3 uses the well-known TLS 1.3 security handshake, now widely adopted across the web, to provide encryption by default. Importantly, QUIC creates a single connection between the two endpoints into which data streams, known as QUIC streams, are multiplexed. This avoids the constant negotiation of new connections found in HTTP/1.x, helping with speed and efficiency.

QUIC streams provide reliable delivery, explains Lucas Pardue from Cloudflare, meaning lost packets are recovered. Moreover, says Lucas, this is done in an extensible way: the standard specifies a basic model which can be built upon. Indeed, a key benefit of basing the technology on UDP is that changes can be made programmatically in user space, rather than requiring the kernel changes typically needed to improve the handling of TCP, on which HTTP/1.1, for example, is based.

QUIC hails from a project of the same name created by Google, since taken on by the IETF and, in the open community, honed into the QUIC we are hearing about today: notably different from the original, but maintaining the improvements proven in the first release. HTTP/3 is the syntax that develops on from HTTP/2 and uses the QUIC transport protocol underneath, or as Lucas puts it, “HTTP/3 is the HTTP application mapping to the QUIC transport layer.” Lucas is heavily involved in the IETF effort to standardise HTTP/3 and QUIC, so he continues in this talk to explain how QUIC streams are managed, identified and used.
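Stream identification in QUIC is deliberately simple: the two least significant bits of a stream ID encode who opened the stream and whether it is bidirectional, as defined in RFC 9000. A minimal sketch of that decoding:

```python
# Decode the role of a QUIC stream from its ID (per RFC 9000, section 2.1).
# Bit 0 identifies the initiator; bit 1 identifies the directionality.
# Successive streams of the same type therefore differ by 4 (0, 4, 8, ...).

def stream_type(stream_id: int) -> str:
    initiator = "client" if stream_id & 0x1 == 0 else "server"
    direction = "bidirectional" if stream_id & 0x2 == 0 else "unidirectional"
    return f"{initiator}-initiated {direction}"

for sid in (0, 1, 2, 3, 4):
    print(sid, stream_type(sid))
```

Stream 0, for instance, is the first client-initiated bidirectional stream, which is why it carries the first request in a typical HTTP/3 exchange.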

It’s clear that QUIC and HTTP/3 are being carefully created as tools for future, unforeseen applications, in the clear knowledge that they have wide applicability. For that reason we are already seeing projects to add datagrams, RTP, multiparty and multicast into the mix, in many ways mimicking what we already have on our local networks. Putting these capabilities on QUIC can enable them to work across the internet and open up new ways of delivering streamed video.

The talk finishes with a nod to the fact that SRT and RIST also deliver many of the things QUIC delivers and Lucas leaves open the question of which will prosper in which segments of the broadcast market.

The Broadcast Knowledge has well over 500 talks/videos on many topics so to delve further into anything discussed above, just type into the search bar on the right. Or, for those who like URLs, just add your search query to the end of this URL http://thebroadcastknowledge.com/tag/.

Lucas has already written in detail about his work and what HTTP/3 is in a post on the Cloudflare blog.

Watch now!
Speaker

Lucas Pardue
Senior Software Engineer,
Cloudflare

Video: Low Latency Streaming

There are two phases to reducing streaming latency. One is to optimise the system you already have; the other is to move to a new protocol. This talk looks at both approaches: achieving parity with traditional broadcast through optimisation, and going ‘better than’ broadcast by using CMAF.

In this video from the Northern Waves 2019 conference, Koen van Benschop from Deutsche Telekom examines the large, low-cost latency savings you can achieve by optimising your current HLS delivery. With the chunk sizes originally recommended by Apple being 10 seconds, there are still many services out there starting from a very high latency, so there are savings to be had.

Koen explains how the total latency is made up by breaking it into its parts: encode, packaging, decode and other contributions. We quickly see that the player buffer is the largest, with encode latency second. He explores the pros and cons of reducing each and shows that the overall latency can fall to, or even below, traditional broadcast latency depending, of course, on which type of broadcast (and which country’s) you are comparing it to.
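A latency budget like the one Koen walks through can be sketched as a simple sum. The figures below are illustrative assumptions for a conventional HLS chain, not numbers from the talk, but they show how the player buffer comes to dominate:

```python
# Illustrative end-to-end HLS latency budget (all values are assumptions).
# The player buffer, typically several segments deep, dominates the total.
budget_s = {
    "capture + encode": 2.0,
    "packaging": 1.0,
    "CDN propagation": 0.5,
    "player buffer (3 x 2 s segments)": 6.0,
    "decode + render": 0.5,
}

total = sum(budget_s.values())
for stage, seconds in budget_s.items():
    print(f"{stage:35s} {seconds:4.1f} s")
print(f"{'end-to-end':35s} {total:4.1f} s")
```

Shrinking the buffer and the encoder pipeline attacks the two largest terms, which is exactly where the optimisation effort in the talk is focused.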

While optimising HLS/DASH gets you down to a few seconds, some services have a strong desire to beat that. Whilst broadcasters themselves may be reluctant to deliver online services quicker than their over-the-air offerings, online sports services such as DAZN can make low latency a USP and deliver better value to fans. After all, DAZN and similar services benefit from low-second latency as it brings them in line with social media, which can be very quick to report key events such as goals and points being scored in live matches.

Stefan Arbanowski from Fraunhofer leads us through CMAF: what it is, the upcoming second edition and how it works. He covers its ability to be referenced by both .m3u8 (from HLS) and .mpd (from DASH) playlist/manifest files, and its basis in the fMP4 flavour of ISO BMFF. One benefit inherited from DASH is the Common Encryption standard, through which it can work with PlayReady, FairPlay and other DRM systems.

Stefan then takes a moment to consider WebRTC. Given it proposes latency of less than one second, it can sound like a much better idea, but Stefan outlines his concerns about its ability to scale above 200,000 users. He then turns his attention back to CMAF, outlining how the stream is composed and how the player logic works in order to play successfully at low latency.

Watch now!
Speakers

Koen van Benschop
Senior Manager TV Headend and DRM,
Deutsche Telekom
Stefan Arbanowski
Director Future Applications and Media,
Fraunhofer FOKUS

Video: Introducing Low-Latency HLS

HLS has taken the world by storm since its first release 10 years ago. Capitalising on the widely understood and deployed technologies already underpinning websites at the time, it brought with it great scalability and the ability to move seamlessly between different bitrate streams to help deal with varying network performance (and computer performance!).

HLS has continued to evolve over the years, with new versions documented as RFC drafts under the IETF. Its biggest problem for today’s market is latency. As originally specified, you were guaranteed at least 30 seconds of latency, and many viewers would see a minute. This has improved over the years, but only so far.
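That 30-second floor follows from simple arithmetic: players traditionally buffer around three segments before starting playback, so the minimum latency scales with segment duration. A minimal sketch (the three-segment buffering convention comes from long-standing HLS player practice; the 2-second figure is an assumed example of an optimised configuration):

```python
# Minimum HLS latency as a function of segment duration.
# Players conventionally hold ~3 segments before playback begins,
# so latency is at least segment_duration * buffered_segments.

def min_hls_latency(segment_duration_s: float, buffered_segments: int = 3) -> float:
    return segment_duration_s * buffered_segments

print(min_hls_latency(10))  # Apple's original 10 s segments -> 30.0 s floor
print(min_hls_latency(2))   # optimised 2 s segments -> 6.0 s floor
```

This is why shrinking segments was the first optimisation everyone reached for, and also why it only gets you so far: below a couple of seconds per segment, request overhead and encoder constraints begin to bite.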

Low-Latency HLS (LL-HLS) is Apple’s answer to the latency problem: a way of bringing latency down to be comparable with broadcast television for those live broadcasts where immediacy really matters.

This talk from Apple’s HLS Technical Lead, Roger Pantos, given at Apple’s WWDC conference this year goes through the problems and the solution, clearly describing LL-HLS. Over the following weeks here on The Broadcast Knowledge we will follow up with some more talks discussing real-world implementations of LL-HLS, but to understand them, we really need to understand the fundamental proposition.

Apple has always been the gatekeeper to HLS, and this is one reason MPEG DASH exists: a streaming standard independent of any one corporation, with the benefit of being ratified by a standards body (MPEG). So who better than Apple to give the initial introduction?

HLS is a chunk-based streaming protocol, meaning that the illusion of a continuous stream is created by downloading many separate files in quick succession. It’s the need to maintain a pipeline of these files, both in creating them and in stacking them up for playback, which causes much of the delay. LL-HLS uses techniques such as reducing chunk length and transferring only parts of chunks in order to drastically reduce this intrinsic latency.
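The gain from transferring parts of chunks can be seen with a back-of-the-envelope comparison. The durations below are assumptions for illustration (LL-HLS deployments commonly use sub-second parts, but the exact figures are ours, not the talk's):

```python
# Comparing publish granularity: whole segments vs LL-HLS partial segments.
# A conventional player must wait for a complete segment to be published;
# an LL-HLS player can fetch each part as soon as it is ready.

SEGMENT_S = 6.0   # assumed full-segment duration
PART_S = 0.3      # assumed LL-HLS part duration

# Worst case: the player just missed the latest publish.
conventional_wait = SEGMENT_S
ll_hls_wait = PART_S

print(f"conventional worst-case wait: {conventional_wait:.1f} s")
print(f"LL-HLS worst-case wait:       {ll_hls_wait:.1f} s")
print(f"granularity improvement:      {conventional_wait / ll_hls_wait:.0f}x")
```

The buffering arithmetic is unchanged; what shrinks is the unit being buffered, which is the core idea behind the protocol's partial-segment delivery.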

Another requirement of LL-HLS is HTTP/2, an advance on HTTP/1.1 which brings with it benefits such as multiplexing multiple requests over a single HTTP connection, thereby reducing connection overheads.

Roger carefully paints the whole picture and shows how this is intended to work. So while the industry is still in the midst of implementing this protocol, take some time to understand it from the source – from Apple.

Watch now!
Download the presentation

Speaker

Roger Pantos
HLS Technical Lead,
Apple

Video: Understanding esports production

Esports is here to stay and brings a new dimension to big events, combining the usual challenges of producing and broadcasting events at scale with less usual ones such as non-standard resolutions and frame rates. This session from the IBC 2019 conference looks at the reality of bringing such events to life.

The talk starts with a brief introduction to some Esports-only terms before heading into the discussions, starting with Simon Eicher, who talks about his switch towards typical broadcast tools for Esports, which has helped drive better production values and storytelling. Maxwell Trauss from Riot Games explains how they incubated a group of great producers and were able to keep production values high by having them work on shows remotely worldwide.

Blizzard produces a clean ‘world feed’ which is shared worldwide for regions to regionalise with graphics and language before broadcasting to their audiences. To improve storytelling, Blizzard has its own software which interprets the game data and presents it to the production staff in a more consumable form.

Observers are the people who control the in-game cameras, and a producer can call out to any one of them. The panel discusses how separating the players, the observers and the crowd allows the production to vary the delay between what’s happening in the game and when each group sees it. At the beginning of an event, this creates the opportunity to move the crowd backwards in time so that players don’t get tipped off; similarly, the observers can be isolated for the same effect. By the end of the game, however, the delays have been wound down to bring everyone back into the present for a tense finale.

Corey Smith from Blizzard explains their cloud setup, in which graphics are added to clean feeds in the cloud, ultimately leading to a single clean feed from the venue. ESL, on the other hand, choose to create their streams locally.

Ryan Chaply from Twitch explains their engagement models, some of which reward viewers simply for watching. Twitch’s real-time chat also changes the way productions are made, because the producers have direct feedback from the viewers. This leads to day-by-day tweaks to the format: a production may drop something by day three if it’s not well received and, conversely, capitalise on whatever proves a hit.

Ryan also talks about what they are weighing up in deciding when to start using UHD. Riot’s Maxwell raises the question of whether fans really want 4K at the moment; while acknowledging it’s an inevitability, he asks whether the real priority is having more and better stats.

The panel finishes with a look to the future: the continued adoption of broadcast techniques in Esports, timing in the cloud, dealing with end-to-end metadata, and a video giving a taste of an Esports event.

Watch now!
Speakers

Simon Eicher
Executive Producer, Director of Broadcast, eSports Services,
ESL
Ryan Chaply
Senior Esports Program Manager,
Twitch
Corey Smith
Director, Live Operations Broadcast Technology Group,
Blizzard
Maxwell Trauss
Broadcast Architect,
Riot Games
Jens Fischer
Global Esport Specialist and Account Manager D.A.CH,
EVS