Video: HTTP over QUIC is the next generation

There’s a lot to like about HTTP/3: encryption as standard, faster set-up times, better compression and the promise of better throughput by removing head-of-line blocking. A new protocol making its way through the IETF and based on QUIC, it could have a real impact on anyone involved in streaming.

Daniel Stenberg of wolfSSL, curl’s main author and maintainer, talks to us about HTTP/3 but starts at the beginning with HTTP/1.0 and 1.1. He outlines some of the issues we had in 1997, such as head-of-line blocking and ephemeral TCP connections. Zooming forward to 2015, HTTP/2 comes on the scene with a single HTTP connection, removing the significant overhead of ephemeral TCP connections. HTTP/2 went with a ‘streamed’ connection and could carry multiple such streams, but one thing it didn’t solve was head-of-line blocking.

Before moving beyond HTTP/2, Daniel describes the problems that have set in due to ‘ossification’: the routers that time forgot, still running very old and often buggy TCP implementations. Innovating is very difficult when changing TCP on even a subset of those boxes could mean your website no longer reaches everyone.

Addressing this ‘ossification’ issue, QUIC has stepped in. Developed on UDP instead of TCP, QUIC solves a number of problems. First off, moving from TCP to UDP allows the protocol to live in userspace, making it easier to update. Working over UDP also means the protocol regains control of retransmissions, allowing for something more efficient than TCP’s strict acknowledgement rules.
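To make the ‘userspace over UDP’ point concrete, here is a minimal Python sketch (the address and payload are hypothetical, and a real QUIC stack would be a library sitting on top of a socket like this): as far as the kernel is concerned, a QUIC endpoint is just an ordinary UDP socket, and decisions such as when to retransmit are made entirely in application code.

```python
import socket

# Hypothetical endpoint; a real QUIC library would own a socket just like this.
SERVER = ("192.0.2.10", 4433)
payload = b"client hello (stand-in for a real QUIC Initial packet)"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # plain UDP, no kernel TCP state
sock.settimeout(0.3)  # the retransmission timer lives in userspace, not the kernel

# UDP gives no delivery guarantees, so the protocol above it decides when and
# how to retransmit - this is the control QUIC regains from TCP.
for attempt in range(3):
    sock.sendto(payload, SERVER)
    try:
        reply, _ = sock.recvfrom(65535)
        print("got reply:", reply)
        break
    except socket.timeout:
        print(f"no reply, retransmitting (attempt {attempt + 1})")
```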

So QUIC becomes the transport layer of HTTP/3. Freeing ourselves from TCP, Daniel explains, allows us to remove the TCP head-of-line blocking problem. HTTP/3 on QUIC brings with it faster handshakes and a connection ID. This connection ID allows you to change IP address and still maintain your connection, which is a significant improvement on what has gone before. Daniel continues by explaining more benefits of QUIC and HTTP/3, such as its encryption and the ability to have multiple streams.

Daniel finishes up by outlining eight challenges for HTTP/3. These include the fact that up to 7% of QUIC attempts fail, the need for ‘fall back’ algorithms, UDP having seen historically low usage and therefore being less optimised, and the downside of userland protocol stacks: it’s harder to converge on a standard implementation.

Watch now!
Download the presentation
Speaker

Daniel Stenberg
Main author, curl
wolfSSL

Video: HTTP/3?

There’s a lot to like about HTTP/3: encryption as standard, faster set-up times, better compression and the promise of better throughput by removing head-of-line blocking. A new protocol making its way through the IETF and based on QUIC, it could have a real impact on anyone involved in streaming.

Nick Shadrin from F5, focussing on NGINX, explains what the trade-offs are, the benefits, how it works and the likely implementation timelines. Nick starts with a look at HTTP from version 0.9 in the early 90s through to HTTP/2. This lays the groundwork for understanding HTTP/3. We quickly move on to looking at the latency improvements that HTTP/3 brings, particularly for high-latency connections where an improvement of up to four times can be seen.
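As a rough illustration of where part of that gain comes from (the round-trip counts below are the commonly quoted textbook figures, not numbers taken from Nick’s talk): a classic HTTPS setup spends one round trip on the TCP handshake and one or two more on TLS before the first request can be sent, while QUIC folds transport and crypto setup into a single round trip. On a high-latency link that difference dominates time to first byte:

```python
# Time spent before the first HTTP request can even be sent, using commonly
# quoted handshake round-trip counts (an approximation; TLS 1.3 and QUIC
# 0-RTT resumption can do better still).
rtt_ms = 200  # e.g. a high-latency mobile or satellite link

handshake_rtts = {
    "TCP + TLS 1.2": 1 + 2,  # TCP handshake, then two TLS round trips
    "TCP + TLS 1.3": 1 + 1,  # TCP handshake, then one TLS round trip
    "QUIC (HTTP/3)": 1,      # transport and crypto setup combined
}

for name, rtts in handshake_rtts.items():
    print(f"{name}: {rtts} RTTs = {rtts * rtt_ms} ms before the first request")
```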

Nick outlines the benefits of the protocol, such as moving the transport layer out of the kernel. TCP is baked into the kernel, but QUIC is not, which allows the protocol to evolve and improve at a faster pace. Although TCP has changed since its inception, the rate of change is very slow because there are so many TCP devices, many of which can’t be updated or for which updates are very difficult. Built-in encryption is also welcome; browsers have mandated security with HTTP/2 over and above the specification, but HTTP/3 goes one step further and also encrypts sequence numbers, which helps reduce side-channel attacks.

Another useful addition for modern uses is connections based on a connection ID. This means that even if the IP address changes mid-connection, the server can immediately continue responding to requests from the new IP, as the client is still identifying itself with the same connection ID.

Nick talks through the different types of protocol negotiation, starting with HTTP. It’s easy to upgrade HTTP to HTTPS with a simple 30x redirect. He discusses HSTS, the use of upgrade headers for WebSockets, the way HTTP/1.x negotiates up to HTTP/2 and, finally, the ‘Alt-Svc’ header. The difficulty with moving from HTTP/2 to 3 is that it’s not just a change in flavour of HTTP; a lot of the network stack changes with it.
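As a small illustration of Alt-Svc discovery (a sketch only, using the third-party requests library and example.com as a stand-in host that may or may not advertise HTTP/3): the first contact still happens over TCP and TLS, and a server that supports HTTP/3 advertises it in an Alt-Svc response header, which the client can then act on for subsequent requests.

```python
import requests  # third-party library, used here purely to inspect response headers

# First contact happens over TCP/TLS; the server advertises HTTP/3
# availability in the Alt-Svc header of its response.
resp = requests.get("https://example.com/")
alt_svc = resp.headers.get("Alt-Svc")

if alt_svc and "h3" in alt_svc:
    # A typical value looks like: h3=":443"; ma=86400
    print("HTTP/3 advertised, the client may switch to QUIC:", alt_svc)
else:
    print("No HTTP/3 advertisement; keep using the current connection")
```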

Looking towards the challenges, Nick points to the need for all boxes to understand HTTP/3 for full support to be practical on the internet at large, citing HTTP/2 adoption being only at 40% after three years – and HTTP/2 is TCP-based. Another teething issue is that UDP has had less attention than TCP in terms of optimisation, so there are currently cases where it’s much faster and others where it’s slower.

In practical terms, life is made harder by not having a plaintext version, since all tools will have to be able to handle the encrypted data and, at this stage in its evolution, the toolset is still basic.

Watch now!
Speaker

Nick Shadrin
Software Architect, NGINX
F5

Video: HTTP/2 – Abstraction, protocol design, and practical use

HTTP/2 is an evolution of what most people know as HTTP with the aim of increasing the speed of websites by streamlining the request and delivery of resources. Apple have mandated the use of HTTP/2 for their LL-HLS protocol. Within a typical web page there can easily be 100 requests to the web server so it’s easy to see how increased efficiency could be a benefit. For low latency streaming such as LL-HLS, there are many requests each second so again, even small gains in efficiency can add up.

Rolf W. Rasmussen from VizRT explains in this talk the benefits of HTTP/2, taking us through the differences from HTTP/1.1. He starts simply by looking at HTTP/1.1, with the messages sent between the client and the server, and shows how the requests and responses are sent. Rolf then looks at how the messages are carried at each layer of the OSI model, showing how, on the wire, the messages end up as binary.
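For readers who want to see those HTTP/1.1 messages for themselves, here is a minimal sketch (example.com used as a stand-in host) that writes one request by hand over a raw TCP socket and prints the start of the response; the request line and headers are just lines of text, one exchange per request unless the connection is kept alive.

```python
import socket

# One HTTP/1.1 request, written by hand, to show the plain-text message format.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The status line and headers are readable text; the body follows the blank line.
print(response.decode("utf-8", errors="replace")[:400])
```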

Binary framing and header compression are ways in which the data to be sent is minimised. We also see that HTTP/2 multiplexes different streams over the same connection. Maintaining the same connection for multiple data streams again reduces the amount of negotiation needed, and multiplexing helps make more efficient use of that connection. Unlike before, small requests are now cheap, whereas with HTTP/1.1 a lot of work has traditionally gone into reducing the number of small requests.
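A quick way to see that multiplexed, long-lived connection from Python is the third-party httpx client (a sketch, assuming httpx is installed with its HTTP/2 extra, `pip install httpx[http2]`, and that the target server – example.com is only a stand-in – actually negotiates HTTP/2): several requests share one connection, and the response reports which protocol version was used.

```python
import httpx  # third-party client; install with: pip install httpx[http2]

# One client object holds one connection pool; with http2=True, requests to
# the same origin can be multiplexed as streams on a single connection.
with httpx.Client(http2=True) as client:
    for path in ("/", "/", "/"):
        resp = client.get(f"https://example.com{path}")
        print(resp.http_version, resp.status_code, path)
```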

Server Push is another key improvement, where the server itself can push data into the open connection without a corresponding request. This was originally a requirement of the LL-HLS protocol but has since been made optional. For web pages, there are times when, if a page needs resource A, the server knows it will also require resource B later; it’s in these situations that server push is used. Clearly for online streaming it’s known when the client will need certain chunks or playlist files, hence the potential use of server push.
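As a rough sketch of what that decision looks like on the server side (purely illustrative: the Quart framework and its make_push_promise helper are this article’s assumptions, as are the route, segment name and playlist text, and pushes only take effect when the app is served over HTTP/2, for example by Hypercorn):

```python
from quart import Quart, make_push_promise  # third-party async web framework

app = Quart(__name__)

# Hypothetical playlist body, a stand-in for a real LL-HLS media playlist.
PLAYLIST_TEXT = "#EXTM3U\n#EXT-X-VERSION:7\n"

@app.route("/live/playlist.m3u8")
async def playlist():
    # The server knows a player fetching the playlist will want the newest
    # segment next, so it offers that resource without waiting for a request.
    await make_push_promise("/live/segment_001234.m4s")
    return PLAYLIST_TEXT, 200, {"Content-Type": "application/vnd.apple.mpegurl"}
```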

Rolf concludes with questions from the floor and a look at some practical examples of debugging with curl, using proxies and Wireshark, as well as dealing with encryption.

Watch now!
Speaker

Rolf W. Rasmussen
Chief Software Architect,
VizRT

Video: QUIC in Theory and Practice


Most online video streaming uses HTTP to deliver the video to the player in the same way web pages are delivered to the browser. So QUIC – the transport underneath HTTP/3, the next version of HTTP – will affect us professionally and personally.

This video explains how HTTP works and takes us on the journey to seeing why QUIC (HTTP over QUIC is now known as HTTP/3) speeds up the process of requesting and delivering files. Simply put, there are ways to reduce the number of times messages have to be passed between the player and the server, which reduces overall overhead. But one big win is the move away from TCP to UDP.

Robin Marx delivers these explanations by reference to superheroes, with very clear diagrams that make this low-level topic pleasantly accessible and interesting.

There are plenty of examples showing easy-to-see gains in website speed using QUIC over both HTTP/1.1 and HTTP/2, but QUIC’s worth in the realm of live streaming is not yet clear. There are studies showing it makes streaming worse, but also ones showing it helps. Video players have a lot of logic in them and are the result of much analysis, so it wouldn’t surprise me at all to see the state of the art move forward, for players to optimise for QUIC delivery, and then for tests across the board to show an improvement with QUIC streaming.

QUIC is coming, one way or another, so find out more.
Watch now!

Speaker

Robin Marx
Web Performance Researcher,
Hasselt University