Video: HTTP/3?

There’s a lot to like about HTTP/3: encryption as standard, faster set-up times, better compression, and the promise of better throughput by removing head-of-line blocking. A new protocol making its way through the IETF, built on QUIC, it could have a real impact on anyone involved in streaming.

Nick Shadrin from F5, focussed on NGINX, explains the trade-offs, the benefits, how it works and the likely implementation timelines. Nick starts with a look at HTTP from version 0.9 in the early 90s through to HTTP/2, laying the groundwork for understanding HTTP/3. We then move on to the latency improvements that HTTP/3 brings, particularly for high-latency connections where an improvement of up to four times could be seen.

Nick outlines the benefits of the protocol, such as moving the transport layer out of the kernel. TCP is baked into the kernel, but QUIC is not, which allows the protocol to evolve and improve at a much faster pace. Although TCP has changed since its inception, the rate of change is very slow because there are so many TCP devices, many of which can’t be updated, or for which updates are very difficult. Built-in encryption is also welcome; browsers already mandated encryption for HTTP/2 over and above the specification, but HTTP/3 goes one step further and also encrypts sequence numbers, which helps reduce side-channel attacks.

Another useful addition for modern uses is identifying connections by connection ID. This means that even if the client’s IP address changes mid-connection, the server can immediately continue responding to requests from the new IP, as the client is still identifying itself with the same connection ID.
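The idea can be sketched in a few lines: the server keys its per-connection state by the connection ID rather than the client’s address, so a change of network doesn’t break the session. This is a toy illustration, not a real QUIC API; all names here are made up.

```python
# Toy illustration of QUIC-style connection IDs: the server keys state by
# connection ID, not by (IP, port), so a client that changes network keeps
# its session. All names are illustrative, not a real QUIC implementation.

sessions = {}

def handle_packet(conn_id, client_addr, payload):
    # Look up session state by connection ID alone; the source address
    # may change (e.g. Wi-Fi to mobile) without breaking the connection.
    state = sessions.setdefault(conn_id, {"bytes": 0, "last_addr": None})
    state["bytes"] += len(payload)
    state["last_addr"] = client_addr
    return state

handle_packet("abc123", ("192.0.2.1", 5000), b"hello")
state = handle_packet("abc123", ("198.51.100.7", 6000), b"world")  # new IP, same session
print(state["bytes"])  # 10
```

A TCP server, by contrast, would see the second packet as belonging to an entirely new connection.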

Nick talks through the different types of protocol negotiation, starting with HTTP. It’s easy to upgrade HTTP to HTTPS with a simple 30x redirect. He discusses HSTS, WebSockets’ use of upgrade headers, the way HTTP/1.x negotiates up to HTTP/2 and, finally, the ‘Alt-Svc’ header. The difficulty with moving from HTTP/2 to HTTP/3 is that it’s not just a change in flavour of HTTP: much of the network stack changes too.
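To make the ‘Alt-Svc’ mechanism concrete, here is a small sketch of how a client might parse such a header, whose value format follows RFC 7838: a server on HTTP/1.1 or HTTP/2 can advertise that the same resource is also available over HTTP/3. The parsing code is illustrative, not taken from any real client.

```python
# Sketch of parsing an Alt-Svc header value (format per RFC 7838).
# A server can use this header to advertise an HTTP/3 ("h3") endpoint
# while still serving the current request over HTTP/1.1 or HTTP/2.

def parse_alt_svc(value):
    """Parse an Alt-Svc value into (protocol, authority, params) tuples."""
    services = []
    for entry in value.split(","):
        parts = [p.strip() for p in entry.split(";")]
        proto, _, authority = parts[0].partition("=")
        params = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
        services.append((proto, authority.strip('"'), params))
    return services

# e.g. a response header: Alt-Svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
header = 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
for proto, authority, params in parse_alt_svc(header):
    print(proto, authority, params.get("ma"))
```

The `ma` (max-age) parameter tells the client how long it may remember the alternative service.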

Looking at the challenges, Nick points to the need for all boxes on the path to understand HTTP/3 for full support to be practical on the internet at large, citing HTTP/2 adoption being only at 40% after three years – and HTTP/2 is TCP-based. Another teething issue is that UDP has had less optimisation attention than TCP, so there are currently cases where it’s much faster and times when it’s slower.

In practical terms, life is made harder by not having a plaintext version, since all tools will have to support the encrypted data and, at this stage in its evolution, the toolset is still basic.

Watch now!
Speaker

Nick Shadrin
Software Architect, NGINX
F5

Video: RTMP: A Quick Deep-Dive

RTMP hasn’t left us yet, though between HLS, DASH, SRT and RIST, the industry is doing its best to get rid of it. In its day, RTMP’s latency was seen as low and it became a de facto standard. But as it hasn’t gone away, it pays to take a little time to understand how it works.

Nick Chadwick from Mux is our guide in this ‘quick deep-dive’ into the protocol itself. To start off, he explains the history of the Adobe-created protocol to put into context why it was useful and how the specification that Adobe published wasn’t quite as helpful as it could have been.

Nick then gives us an overview of the protocol, explaining that it’s TCP-based and allows for multiple, bi-directional streams. He explains that RTMP multiplexes larger messages, say video, along with very short data requests, such as RPCs, by breaking the messages down into chunks which can be multiplexed over a single TCP connection. Multiplexing at the chunk level allows RTMP to ask the other end a question at the same time as delivering a long message.
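The chunking idea described above can be sketched simply: a large message is cut into fixed-size chunks so a short control message can be slotted in between them on the same connection. The sizes and field names below are illustrative; real RTMP chunk headers carry more fields (chunk stream ID, timestamp, message type and so on).

```python
# Conceptual sketch of RTMP chunking: a large message (e.g. video) is split
# into fixed-size chunks so short control/RPC messages can be interleaved
# on the same TCP connection. Illustrative only, not wire-accurate.

CHUNK_SIZE = 128  # RTMP's default maximum chunk payload size

def chunk_message(stream_id, payload, chunk_size=CHUNK_SIZE):
    """Yield (stream_id, chunk_payload) pieces of one message."""
    for offset in range(0, len(payload), chunk_size):
        yield (stream_id, payload[offset:offset + chunk_size])

video = bytes(300)     # a 300-byte "video" message
rpc = b"onStatus"      # a short control/RPC message

# Interleave: two video chunks, then the RPC, then the last video chunk.
video_chunks = list(chunk_message(4, video))
wire = video_chunks[:2] + list(chunk_message(3, rpc)) + video_chunks[2:]
print([len(p) for _, p in wire])  # [128, 128, 8, 44]
```

Without chunking, the 300-byte message would block the short RPC until it had been sent in full, which is exactly the head-of-line problem the chunk layer avoids within a connection.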

Nick has a great ability to make describing the protocol, and showing ASCII tables, accessible and interesting. We quickly move on to the chunk header, what the different chunk types are and how the headers can be compressed to save bit rate. He also describes how RTMP timestamps work and the control-message and command-message mechanisms. Before taking questions, Nick outlines the difficulty of extending RTMP to new codecs, due to the hard-coded list of codecs that can be used, as well as recommending improvements to the protocol. It’s worth noting that this talk is from 2017. Whilst everything about RTMP itself will still be correct, it’s worth remembering that SRT, RIST and Zixi have since taken the place of many RTMP workflows.
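The header-compression idea mentioned above works by omitting fields that repeat from the previous chunk on the same chunk stream: fmt 0 is the full message header, while fmt 3 carries no message header at all. The sketch below picks a format the way an encoder might; it is a simplified illustration of the rule, not a full RTMP encoder.

```python
# Sketch of RTMP chunk-header compression: when fields repeat from the
# previous chunk on a chunk stream, a smaller header format can be used
# (fmt 0 = full header ... fmt 3 = no message header). Simplified.

def pick_fmt(prev, cur):
    """prev/cur: (msg_stream_id, length, type_id, timestamp_delta) tuples."""
    if prev is None or prev[0] != cur[0]:
        return 0  # full header: new or changed message stream
    if prev[1:3] != cur[1:3]:
        return 1  # same stream; length/type changed, so resend those
    if prev[3] != cur[3]:
        return 2  # only the timestamp delta changed
    return 3      # everything repeats: header-less continuation chunk

prev = None
for cur in [(1, 300, 9, 40), (1, 300, 9, 40), (1, 300, 9, 40)]:
    print(pick_fmt(prev, cur))
    prev = cur
# the first chunk needs fmt 0; identical follow-ups can use fmt 3
```

For a steady video stream, most chunks end up as fmt 3, which is where the bit-rate saving comes from.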

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux

Video: Scalable Per-User Ad Insertion in Live OTT

Targeted ads are the most valuable ads, but making sure the right person gets the right ad is tricky: not only in deciding which ad to show to whom, but in scaling, and keeping track of, the ad infrastructure for thousands or millions of viewers. This video explains how this complexity arises and the techniques that Hulu has implemented to improve the situation.

Zachary Cava from Hulu lays out the way standard advertising works for live streams. Whilst he uses MPEG-DASH as an example, much the same is true of HLS. This starts with cutting the video into sections, each starting with an IDR frame to allow seeking. SCTE 35 markers are used to indicate the times when ads can be inserted. As DASH has the principle of defining a period (exactly as it sounds, simply a way of marking a section of time), we can define periods of ‘programme’ and periods of ‘ads’. This opens up the possibility of swapping out a whole period for a section of several ads.
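The period-splitting idea can be sketched as follows: each SCTE 35 marker carves an ad break out of the live timeline, leaving alternating ‘programme’ and ‘ad’ periods that can be addressed independently. This is a conceptual illustration; a real DASH MPD expresses these as XML Period elements.

```python
# Illustrative sketch of period splitting: SCTE 35 markers (start, length)
# carve ad breaks out of a live timeline, leaving alternating 'programme'
# and 'ad' periods. An ad period can later be swapped for different ads.
# Times are in seconds; a real MPD would express this as XML Periods.

def split_periods(duration, ad_breaks):
    """ad_breaks: list of (start, length) pairs from SCTE 35 markers."""
    periods, cursor = [], 0
    for start, length in sorted(ad_breaks):
        if start > cursor:
            periods.append(("programme", cursor, start))
        periods.append(("ad", start, start + length))
        cursor = start + length
    if cursor < duration:
        periods.append(("programme", cursor, duration))
    return periods

print(split_periods(600, [(120, 30), (300, 60)]))
# [('programme', 0, 120), ('ad', 120, 150), ('programme', 150, 300),
#  ('ad', 300, 360), ('programme', 360, 600)]
```

Because each ad period is a self-contained slice of the timeline, replacing its contents per user doesn’t disturb the programme periods around it.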

If it were as simple as swapping out whole periods, that would be plain server-side ad insertion. For per-user targeted ads, the streaming service has to keep track of every ad served to each user so that when they rewind, they get a consistent experience. This can mean remembering millions of ads for services with a large rewind buffer. Moreover, traffic can become overwhelming: since the requests are unique, a CDN can’t help with caching. Whilst you can scale your system, the cost can spiral beyond what the ad revenue justifies.

Enter MPD patch requests. This addition to MPEG-DASH requires the client to remember the whole of the manifest. Where the client has a gap in its knowledge, it can simply request that section from the server, which generates a ‘diff’, returning only the changes, which the client then assimilates into memory. The benefit is that all the clients converge on requesting only what’s happening ‘now’, so CDNs come back into play. Zachary explains how this works in more detail and shows examples, before explaining how URLQueryInfo helps reduce the complexity of URL parameters, again to interoperate better with CDNs, and allows the ad system to be scaled separately from the main video assets.
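The patch mechanism can be sketched conceptually: the client tells the server which manifest version it already holds, and the server returns only the segments added since, which the client merges into its local copy. This mimics the idea only; the actual MPEG-DASH patch format is XML-based, and the names below are made up for illustration.

```python
# Conceptual sketch of an MPD patch request: the client reports the
# manifest version it holds; the server returns only what was added since,
# and the client merges the diff locally. Not the real DASH patch syntax.

def make_patch(server_segments, client_version):
    """Return (new_version, segments added since client_version)."""
    return len(server_segments), server_segments[client_version:]

server = ["seg1.m4s", "seg2.m4s", "seg3.m4s", "seg4.m4s"]
client = ["seg1.m4s", "seg2.m4s"]

version, added = make_patch(server, len(client))
client.extend(added)
print(client == server, added)  # True ['seg3.m4s', 'seg4.m4s']
```

Since up-to-date clients all ask for the same small ‘what changed just now’ response, those responses are identical across users and therefore cacheable by a CDN.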

Finally, Zachary looks at coming back from an ad break, where you may find that your ads were longer than the ad period allotted, or that the programme hasn’t returned by the time the ads finished. During the ad break, the client is still polling for updates, so it’s possible to quickly update the manifest and swap back to programme video early. Similarly, at the end of a break, if there is still no content, the server can start issuing its own ad or content, effectively moving back to server-side ad insertion. However, this is not plain server-side ad insertion, explains Zachary; rather, Hulu call it ‘server-guided’ ad insertion: there is no stitching on the server, but the server tells you where to get the next video from. It also allows for some level of user separation, where larger geographies can see different ads from those in other areas.

Zachary finishes by outlining the work Hulu is doing to feed this learning back into the DASH spec via the DASH Industry Forum, and their work with the industry at large to bring more consistency to SCTE 35 markers.

Watch now!
Speaker

Zachary Cava
Software Architect,
Hulu

Video: Media’s Brave New World of Interop Microservices

‘Microservices’ can have several meanings, but the term centres on the ability to create a workflow from individual building blocks: very simple, individual services/programs running across a number of computers. Microservices are generally understood to improve interoperability, which is one of the many benefits of a microservices environment that this panel explores.

Splitting your work into microservices promises to let your products be deployed in a more automated way and may help them work within a decentralised structure (where such a structure makes sense). Because microservices are intended to be very simple, self-contained programs, you can be very specific about what you run and, in a cloud context, pay only for the compute you need.


Indeed, the cloud is pushing software architects in the right direction. Whilst the cloud isn’t intrinsically microservices-based, it’s highly modular, which encourages developers toward the same coding practices they would need when working directly with native microservices. For instance, many programs have an Amazon S3 interface; working to this type of standard API is exactly what is needed for microservice architectures.

One of the benefits of splitting everything into the simplest building blocks is time to market. This can be considered in two ways: how long it takes to update or change an existing workflow, and how quickly you can iterate. The two are linked: being flexible in the workflow means you can iterate quickly when necessary; you don’t need a two-year project to update your way of working, and the cost of failure is low.

What’s the alternative to microservices? Often referred to as a monolith, it’s really about having a mono-workflow. When your workflow is wrapped up in one product or binary, you can’t easily integrate new elements into it. Microservices allow data to flow in the open and allow the workflow to be rerouted; data at every part of the chain is available to any program that needs it.

The aim of the OSA is to look at fundamental issues that can’t be fixed unilaterally by one customer leading the roadmap with a single vendor; rather, it seeks wider agreement on how to interoperate between all of these services.

Watch now! Free registration required
Speakers

Loic Barbou
Bloomberg Television
Wes Rosenberg
CTO,
Levels Beyond
Ankur Jain
Prime Focus Technologies
Shawn Maynard
SVP & General Manager,
Florical Systems
Moderator: Chris Lennon
Executive Director,
Open Services Alliance