Video: There and back again: reinventing UDP streaming with QUIC

QUIC is an encrypted transport protocol that promises increased performance compared to TCP, but will this help video streaming platforms? Often conflated with HTTP/3, QUIC is the UDP-based transport underneath it; HTTP/3 itself is the evolution of HTTP/2 which, in turn, was a shake-up of the standard HTTP/1.1 method of delivering websites. HTTP/3 uses the well-known TLS 1.3 security handshake, now widely adopted across the web, to provide encryption by default. Importantly, QUIC creates a single connection between the two endpoints into which data streams, known as QUIC streams, are multiplexed. This avoids the constant negotiation of new connections found in HTTP/1.x, helping with speed and efficiency.

QUIC streams provide reliable delivery, explains Lucas Pardue from Cloudflare, meaning lost packets will be recovered. Moreover, says Lucas, this is done in an extensible way: the standard specifies a basic model which can be built upon. Indeed, a benefit of basing this technology on UDP is that changes can be made programmatically in user space, in lieu of the kernel changes typically needed to improve the handling of TCP, on which HTTP/1.1, for example, is based.
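
To make the stream model concrete, here is a minimal sketch using the Python aioquic library. This is only one possible implementation, not anything prescribed by the talk, and the host, port and ALPN value are placeholders.

```python
import asyncio
import ssl

from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def main():
    # One TLS 1.3 handshake sets up the encrypted connection
    config = QuicConfiguration(alpn_protocols=["hq-interop"])  # placeholder ALPN
    config.verify_mode = ssl.CERT_NONE  # demo only: skip certificate checks

    async with connect("quic.example.com", 4433, configuration=config) as client:
        # Two independent, reliable QUIC streams multiplexed on the same
        # connection; no per-stream connection negotiation is needed
        reader1, writer1 = await client.create_stream()
        reader2, writer2 = await client.create_stream()

        # Loss recovery and retransmission happen inside the library,
        # in user space, not in the kernel's TCP stack
        writer1.write(b"stream one payload")
        writer2.write(b"stream two payload")
        writer1.write_eof()
        writer2.write_eof()

asyncio.run(main())
```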

QUIC grew out of a project of the same name created by Google, which was taken in by the IETF and, in the open community, honed into the QUIC we are hearing about today: notably different from the original, but maintaining the improvements proven in that first release. HTTP/3 is the syntax, a development on from HTTP/2, which uses the QUIC transport protocol underneath or, as Lucas would say, “HTTP/3 is the HTTP application mapping to the QUIC transport layer.” Lucas is heavily involved in the IETF effort to standardise HTTP/3 and QUIC, so he continues in this talk to explain how QUIC streams are managed, identified and used.

It’s clear that QUIC and HTTP/3 are being carefully created as tools for future, unforeseen applications, in the clear knowledge that they have wide applicability. For that reason, we are already seeing projects to add datagrams and RTP into the mix, and to add multiparty or multicast delivery, in many ways mimicking what we already have in our local networks. Putting these capabilities on QUIC can enable them to work across the internet and open up new ways of delivering streamed video.
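
As a flavour of where this is heading, the QUIC DATAGRAM extension (RFC 9221) adds unreliable delivery alongside the reliable streams, which is what the RTP-over-QUIC proposals build on. A hedged sketch with aioquic; the endpoint and ALPN are placeholders, and `_quic` is the library's internal connection object.

```python
import asyncio
import ssl

from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def main():
    # Negotiate support for unreliable datagrams alongside reliable streams
    config = QuicConfiguration(alpn_protocols=["rtp-mux"],  # placeholder ALPN
                               max_datagram_frame_size=65536)
    config.verify_mode = ssl.CERT_NONE  # demo only

    async with connect("quic.example.com", 4433, configuration=config) as client:
        # Datagrams are not retransmitted when lost, trading reliability
        # for the low latency that real-time media such as RTP prefers
        client._quic.send_datagram_frame(b"rtp-like payload")
        client.transmit()  # flush the frame onto the wire

asyncio.run(main())
```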

The talk finishes with a nod to the fact that SRT and RIST deliver many of the same things as QUIC, and Lucas leaves open the question of which will prosper in which segments of the broadcast market.

The Broadcast Knowledge has well over 500 talks/videos on many topics, so to delve further into anything discussed above, just type it into the search bar on the right. Or, for those who like URLs, add your search query to the end of this URL https://thebroadcastknowledge.com/tag/.

Lucas has already written in detail about his work and what HTTP/3 is in his Cloudflare blog post.

Watch now!
Speaker

Lucas Pardue
Senior Software Engineer,
Cloudflare

Video: 2019 What did I miss? – Introducing Reliable Internet Streaming Transport

By far the most visited video of 2019 was Merrick Ackermans’ review of the first RIST release. RIST, the Reliable Internet Stream Transport protocol, aims to be an interoperable protocol allowing even lossy networks to be used for mission-critical broadcast contribution. Using RIST can turn a bad internet link into a reliable circuit for live programme material, so it’s quite a game changer in terms of the cost of links.

An increasing amount of broadcast video is travelling over the public internet which is currently enabled by SRT, Zixi and other protocols. Here, Merrick Ackermans explains the new RIST specification which aims to allow interoperable internet-based video contribution. RIST, which stands for Reliable Internet Stream Transport, ensures reliable transmission of video and other data over lossy networks. This enables broadcast-grade contribution at a much lower cost as well as a number of other benefits.

Many of the protocols which do a similar job are based on ARQ (Automatic Repeat-reQuest) which, as you can read on Wikipedia, allows for the recovery of lost data. This is the core functionality needed to make unreliable or lossy connections usable for broadcast contribution. Indeed, RIST is an interesting merging of technologies from around the industry. Many people use Zixi, SRT and VideoFlow, all of which allow safe contribution of media, safe meaning it gets to the other end intact and uncorrupted. However, if your encoder only supports Zixi and you use it to deliver to a decoder which only supports SRT, it’s not going to work out. The industry has accepted that these formats should be reconciled into a shared standard. This is RIST.
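
Since ARQ is the heart of all these protocols, a toy sketch may help. This is purely conceptual Python, not RIST's actual implementation: real RIST receivers track RTP sequence numbers and request retransmissions with RTCP feedback messages, and all names below are illustrative only.

```python
# Toy NACK-based ARQ: the receiver watches sequence numbers and asks the
# sender to retransmit anything that went missing. Sequence number
# wraparound is ignored for brevity.

class ArqReceiver:
    def __init__(self, send_nack):
        self.next_seq = 0           # next sequence number we expect
        self.send_nack = send_nack  # callback requesting a retransmission

    def on_packet(self, seq, payload):
        if seq > self.next_seq:
            # A gap means packets were lost in transit: NACK each one
            for missing in range(self.next_seq, seq):
                self.send_nack(missing)
        self.next_seq = max(self.next_seq, seq + 1)
        return payload

receiver = ArqReceiver(send_nack=lambda seq: print(f"retransmit {seq} please"))
receiver.on_packet(0, b"frame-0")
receiver.on_packet(3, b"frame-3")  # packets 1 and 2 were lost: two NACKs sent
```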

File-based workflows are mainly based on TCP (Transmission Control Protocol) although, notably, some file transfer services such as Aspera are based on UDP, where packet recovery, not unlike RIST, is managed as part of the protocol. This is unlike websites, where all data is transferred using TCP, which sends an acknowledgement for each packet that arrives. Whilst this is great for ensuring files are uncorrupted, it can impact arrival times, which can lead to live media being corrupted.

RIST is being created by the VSF – the Video Services Forum – who were key in introducing TR-03 and TR-04 into the AIMS group, the work on which SMPTE ST 2110 was then based. So their move now into a specification for reliable transmission of media over the internet has many anticipating great things. At the point this talk was given, the simple profile had been formed. Whilst Merrick gives the details, it’s worth pointing out that this doesn’t include intrinsic encryption. It can, of course, be delivered over a separately encrypted tunnel whereas, by contrast, an intrinsic part of SRT is the security provided from within the protocol.

Despite Zixi, a proprietary solution, and Haivision’s open source SRT being in competition, they are both part of the VSF working group creating RIST along with VideoFlow. This is because they see the benefit of having a widely accepted, interoperable method of exchanging media data. This can’t be achieved by any single company alone but can benefit all players in the market.

This talk remains true for the simple profile, which just aims to recover packets. The main profile, as opposed to ‘simple’, has since been released and you can hear about it in a separate video here. It adds FEC, encryption and other capabilities. Those who are familiar with the basics may wish to start there.

Speaker

Merrick Ackermans
Chair,
VSF RIST Activity Group

Video: 2019 What did I miss? HDR Formats and Trends

The second most popular video of 2019 looked at HDR, a long-promised format which routinely wows spectators at conferences and in shops alike and is increasingly seen, albeit tentatively, in the wild. For instance, this Christmas UK viewers were able to watch Premiership football in HDR with Amazon Prime, but only a third of the matches benefitted from the format. Whilst there are many reasons for this, most of them commercial and practical rather than technical, it’s an important part of the story.

Brian Alvarez from Amazon Prime Video goes into detail on the background and practicalities of HDR in this talk given at the Video Tech Seattle meetup in August, part of the worldwide movement of streaming video engineers who meet to openly swap ideas and experiences in making streaming work. We are left not only understanding HDR better, but with a great insight into the state of the consumer market – who can watch HDR and in what format – as well as who’s transmitting HDR.

Read more about the video or just hit play below!

If you want to start from the beginning on HDR, check out the other videos on the topic. HDR relies on an understanding of how people see, the way we describe colour and light, how we implement it and how workflows are modified to suit. Fortunately, you’re already at the one place that brings all this together! Explore, learn and enjoy.

Speaker

Brian Alvarez
Principal Product Manager,
Amazon Prime Video

Video: Wide Area Facilities Interconnect with SMPTE ST 2110

Adoption of SMPTE’s ST 2110 suite of standards for the transport of professional media is growing, with broadcasters increasingly choosing it for use within their broadcast facilities. Andy Rayner takes the stage at SMPTE 2019 to discuss the work being undertaken to manage the use of ST 2110 between facilities. In order to do this, he looks at how to manage the data out of the facility, the potential use of JPEG XS, timing and control.

Long-established practices of using path protection and FEC are already catered for, with ST 2022-7 providing seamless path protection and ST 2022-5 providing FEC. New to ST 2110 is the ability to send the separate essences bundled together in a virtual trunk. This has the benefit of preventing the streams from being split up during transport and hence potentially suffering different delays. It also helps with FEC efficiency and allows the transport of other types of traffic.
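
As a sketch of the ST 2022-7 idea: the same RTP stream is sent over two diverse paths and the receiver delivers the first copy of each sequence number to arrive, so a loss on one path is masked by the other. This is illustrative Python only; real receivers also align the delay between the paths and handle sequence number wraparound.

```python
class SeamlessMerger:
    """Toy ST 2022-7 style receiver: deliver each sequence number once,
    whichever of the two redundant paths it arrives on first."""

    def __init__(self):
        self.delivered = set()  # real receivers bound this window and
                                # handle 16-bit RTP sequence wraparound

    def on_packet(self, seq, payload):
        if seq in self.delivered:
            return None          # duplicate from the other path: discard
        self.delivered.add(seq)
        return payload           # first arrival wins, from either path

merger = SeamlessMerger()
merger.on_packet(100, b"video")  # delivered (say, via path A)
merger.on_packet(100, b"video")  # path B's copy: dropped as duplicate
merger.on_packet(101, b"video")  # path A lost it; path B's copy covers
```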

Timing is key for ST 2110, which is why it natively uses the Precision Time Protocol, PTP, which has been formalised for use in broadcast under ST 2059. Andy highlights the problem of reconciling timing at the far end, but also the ‘missed opportunity’ that the timing will usually be regenerated, meaning the time of media ingest is lost. This may change over the next year.
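
To give a feel for how ST 2059 ties media to PTP: an RTP timestamp is, in essence, the media clock count since the PTP epoch truncated to 32 bits, so any two devices sharing the PTP reference compute the same timestamp for the same instant. A minimal sketch, with 90 kHz as the typical ST 2110 video clock rate:

```python
def rtp_timestamp(ptp_seconds: float, clock_rate: int = 90_000) -> int:
    """RTP timestamp per the ST 2059-1 alignment: the media clock count
    since the PTP epoch (TAI), truncated to the 32-bit RTP field."""
    return int(ptp_seconds * clock_rate) % 2**32

# Two devices reading the same PTP time compute the same timestamp
print(rtp_timestamp(1_577_836_800.0))  # an arbitrary example PTP time
```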

The creation of ST 2110-22 brings compressed media into ST 2110 for the first time. Andy mentions that JPEG XS can be used – and is already being deployed. Control is the next topic, with Andy focussing on the secure sharing of NMOS IS-04 and IS-05 between facilities, covering registration, control and the security needed.
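
For a flavour of the control side, this is roughly what driving a sender through the NMOS IS-05 Connection API looks like. The node address and sender UUID below are invented placeholders, and the inter-facility case Andy describes would layer TLS and authorisation on top of this basic exchange.

```python
import requests

# IS-05 exposes a 'staged' endpoint per sender which is PATCHed and
# then activated; host and UUID here are placeholders
url = ("https://node.example.com/x-nmos/connection/v1.1/"
       "single/senders/aabbccdd-0000-1111-2222-334455667788/staged")

resp = requests.patch(url, json={
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
})
resp.raise_for_status()
print(resp.json())  # the node echoes the staged parameters back
```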

The talk ends with questions on FEC Latency, RIST and potential downsides of GRE trunking.

Watch now!
Speaker

Andy Rayner
Chief Technologist,
Nevion