Video: What’s the Deal with LL-HLS?

Low latency streaming was moving forward without Apple’s help – but they’ve published their specification now, so what does that mean for the community efforts that were already underway and, in some places, in use?

Apple is responsible for HLS, the most prevalent protocol for streaming video online today. In itself, it’s a great success story as HLS was ideal for its time. It relied on HTTP, a tried and trusted technology of the day, and the fact it was file-based, rather than a stream pushed from the origin, was a key factor in its wide adoption.

As life has moved on and demands have shifted from “I’d love to see some video – any video – on the internet!” to “Why is my HD stream arriving after the same picture on my flatmate’s TV?”, we see that HLS isn’t quite up to the task of low-latency delivery. Using pure HLS as originally specified, a latency of less than 20 seconds was an achievement.

Various methods were, therefore, employed to improve HLS. These ideas included cutting the duration of each piece of the video, introducing HTTP/1.1’s Chunked Transfer Encoding, the early announcement of chunks, and many others. Using these and other techniques, community Low Latency HLS (LHLS) was able to deliver streams with latencies from 9 down to 4 seconds.
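To recap one of those techniques: HTTP/1.1 chunked transfer encoding lets the origin start delivering a segment while it is still being encoded, sending it as a series of length-prefixed chunks rather than waiting to declare a final Content-Length. On the wire, a response looks roughly like this sketch (chunk sizes are hexadecimal and the values here are invented):

    HTTP/1.1 200 OK
    Content-Type: video/mp4
    Transfer-Encoding: chunked

    2000
    ...first 8 KiB of the still-growing segment...
    2000
    ...the next 8 KiB, sent as soon as it exists...
    0

The player can begin decoding as soon as the first chunks arrive instead of waiting for the whole segment to be written.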

Come WWDC this year, Apple announced their specification for achieving low-latency streaming, which the community is calling ALHLS (Apple Low-latency HLS). There are notable differences between Apple’s approach and that already adopted by the community at large. Given the estimated 1.4 billion active iOS devices, and the fact that Apple will use adherence to this specification to certify apps as ‘low latency’, this is something the community can’t ignore.

Zac Shenker from CBS Interactive explains some of this backstory and helps us unravel what it means for us all. Zac first explains what LHLS is and then goes into detail on Apple’s version, which includes interesting, mandatory, elements like using HTTP/2. Using HTTP/2 and the newer QUIC (which will effectively become HTTP/3) is very tempting for streaming applications, but it requires work on both the server and the player side. Recent tests using QUIC have been, taken as a whole, inconclusive as to whether it has a positive or a negative impact on streaming performance; experiments have shown both results.

The talk is a detailed look at the large array of requirements in this specification. The conclusion is a general surprise at the number of ‘moving parts’, given there is significant work to be done on both the server and the player. The server will have to remember state and, due to the use of HTTP/2, it’s not clear that the very small playlist.m3u8 files can be served from a playlist-optimised CDN separately from the video, as is often the case today.
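To see where that state comes from, here is a sketch of a media playlist using the partial-segment tags Apple’s specification defines; the durations, sequence numbers and filenames are invented for illustration:

    #EXTM3U
    #EXT-X-VERSION:9
    #EXT-X-TARGETDURATION:4
    #EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0
    #EXT-X-PART-INF:PART-TARGET=0.333
    #EXT-X-MEDIA-SEQUENCE:266
    #EXTINF:4.000,
    segment266.mp4
    #EXT-X-PART:DURATION=0.333,INDEPENDENT=YES,URI="segment267.part0.mp4"
    #EXT-X-PART:DURATION=0.333,URI="segment267.part1.mp4"
    #EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment267.part2.mp4"

A new #EXT-X-PART line appears every few hundred milliseconds, and CAN-BLOCK-RELOAD obliges the server to hold a client’s playlist request open until the requested update exists – hence the server-side state.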

There’s a whole heap of difference between serving a flood of large files and delivering a small, though continually updated, file to thousands of endpoints. As such, CDNs are currently optimised separately for the text playlists and the media files they serve; they may even be delivered by totally separate infrastructures.

Zac explains why this changes with LL-HLS both in terms of separation but also in the frequency of updating the playlist files. He goes on to explore the other open questions like how easy it will be to integrate Server-Side Ad Insertion (SSAI) and even the appetite for adoption of HTTP/2.

Watch now!
Speaker

Zac Shenker
Director of Engineering, Video Experience & Optimization,
CBS Interactive

Video: Deployment of Ultra HD Services Around the Globe

In some parts of the industry UHD is entirely absent. Thierry Fautier is here to shine a light on the progress being made around the globe in deploying UHD.

Thierry starts off by defining terms – important because several, often unmentioned, formats hide behind the term ‘UHD’. This also shows how the different aspects of UHD – wide colour gamut (WCG), HDR, next-generation audio (NGA) and higher frame rates, to name only a few – fit together.

There’s then a look at the stats: where is HDR deployed? How is UHD typically delivered? And there’s the famed HDR Venn diagram showing which TVs support which formats.

As ever, live sport is a major testing ground, so the talk examines lessons learnt from the 2018 World Cup, including a BBC case study. Not unrelated, there is a discussion of the state of UHD streaming, including CMAF.

This leads nicely on to Content-Aware Encoding (CAE), which was also in use at the World Cup.

Watch now!
Free registration required

Speaker

Thierry Fautier
President-Chair, Ultra HD Forum
VP Video Strategy, Harmonic

Video: Network Automation Using Python and Google Sheets

“I’m lazy and I’m a master procrastinator.” If you sympathise, learn how to automate network configuration with some code and spreadsheets.

In this video, the EBU’s Ievgen Kostiukevych presents a simple way to automate basic operations on Arista switches working in a SMPTE ST 2110 environment. This is done with a Python script which retrieves parameters stored in Google Sheets and uses Arista’s eAPI to implement changes to the switch.

The Python script was created as a proof of concept for the EBU’s test lab, where frequent changes of VLAN configuration on the switches were required. Google Sheets was selected as a collaborative tool which allows multiple people to modify settings and keep track of changes at the same time. This approach also makes repetitive tasks, like adding or changing port descriptions, easier.

Functionality currently supported:

  • Creating VLANs and modifying their descriptions based on the data in a Google Sheet
  • Changing access VLANs and interface descriptions for ports based on the data in a Google Sheet
  • Reading the interface status and MAC address table from the switch and writing the data back to the spreadsheet

The script can be downloaded from GitHub.
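For a feel of the approach, here is a minimal sketch (not the EBU’s actual code) of how gspread and pyeapi can be combined in this way; the sheet name, column headings, switch address and credentials are all hypothetical placeholders:

    import gspread
    import pyeapi

    # Authenticate to Google Sheets with a service-account key file
    gc = gspread.service_account(filename="service-account.json")
    worksheet = gc.open("lab-switch-config").sheet1

    # Hypothetical sheet layout: each row is read as, e.g.,
    # {"interface": "Ethernet1", "vlan": 110, "description": "Camera 1"}
    rows = worksheet.get_all_records()

    # Connect to the switch's eAPI endpoint over HTTPS
    node = pyeapi.connect(
        transport="https",
        host="10.0.0.1",
        username="admin",
        password="admin",
        return_node=True,
    )

    # Push the access VLAN and description to each listed port
    for row in rows:
        node.config([
            f"interface {row['interface']}",
            f"switchport access vlan {row['vlan']}",
            f"description {row['description']}",
        ])

    # Read the live interface status back and write it into column D
    # so the spreadsheet reflects the actual state of the switch
    status = node.enable("show interfaces status")[0]["result"]
    for i, row in enumerate(rows, start=2):  # row 1 holds the headings
        state = status["interfaceStatuses"][row["interface"]]["linkStatus"]
        worksheet.update_cell(i, 4, state)

Error handling, API scopes and the EBU’s real sheet layout are omitted here; the script on GitHub implements the full functionality listed above.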

Speaker

Ievgen Kostiukevych
Senior IP Media Technology Architect and Trainer
EBU

Video: A Survey Of Per-Title Encoding Technologies

Optimising encoding on a per-title basis is very common nowadays, though per-scene encoding is slowly pushing it aside. But with so many companies offering per-title encoding, how do we determine which way to turn?

Jan Ozer experimented with them, so we didn’t have to. Jan starts by explaining the principles of per-title encoding and giving an overview of the market. He then explains some of the ways in which it works, including the importance of changing resolution as much as changing bitrate.

As well as discussing the results, with Bitmovin coming out the winner, Jan explains ‘Capped CRF’: how it works, how it differs from CBR and VBR, and why it’s useful.
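As a rough illustration of the idea (these are not Jan’s settings), capped CRF in an encoder such as x264 pairs a constant-quality target with a bitrate ceiling; the CRF value and caps below are arbitrary examples:

    ffmpeg -i input.mp4 -c:v libx264 -crf 23 -maxrate 4000k -bufsize 8000k -c:a copy output.mp4

The encoder pursues constant quality (CRF 23), but the VBV constraint (-maxrate and -bufsize) stops complex scenes exceeding roughly 4 Mbps. Unlike CBR, it doesn’t pad easy content with wasted bits; unlike unconstrained VBR, it can’t blow the delivery budget.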

Finally, we are left with some questions to ask when evaluating a per-title technology for our own use case, such as “Can it adjust rung resolutions?” and “Can you apply traditional data rate controls?”, amongst others.

Watch now!

Speaker

Jan Ozer
Principal,
Streaming Learning Center