Video: Introducing Low-Latency HLS

HLS has taken the world by storm since its first release 10 years ago. Capitalising on the widely understood and deployed technologies that already underpinned websites at the time, it brought great scalability and the ability to move seamlessly between streams of different bitrates to cope with varying network (and computer!) performance.

HLS has continued to evolve over the years, with new versions documented as RFC drafts under the IETF. Its biggest problem for today’s market is latency. As originally specified, you were guaranteed at least 30 seconds of latency, and many viewers would see a minute. This has improved over the years, but only so far.

Low-Latency HLS (LL-HLS) is Apple’s answer to the latency problem: a way of bringing latency down to be comparable with broadcast television for those live broadcasts where immediacy really matters.

Please note: Since this video was recorded, Apple has released a new draft of LL-HLS. As described in this great article from Mux, the update’s changes are:

  • “Delivering shorter sub-segments of the video stream (Apple call these parts) more frequently (every 0.3 – 0.5s)
  • Using HTTP/2 PUSH to deliver these smaller parts, pushed in response to a blocking playlist request
  • Blocking playlist requests, eliminating the current speculative manifest request polling behaviour in HLS
  • Smaller, delta rendition playlists, which reduces playlist size, which is important since playlists are requested more frequently
  • Faster rendition switching, enabled by rendition reports, which allows clients to see what is happening in another playlist without requesting it in its entirety”[0]

Read the full article for the details and implications, several of which address points made in the talk.
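To make the ‘parts’ idea concrete, here is a hypothetical fragment of a low-latency media playlist, loosely based on the tags in Apple’s draft specification. The segment names, durations and sequence numbers are invented for illustration:

```
#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:4
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0
#EXT-X-PART-INF:PART-TARGET=0.333
#EXT-X-MEDIA-SEQUENCE:266
#EXTINF:4.0,
fileSequence266.mp4
# Sub-segments ("parts") of the still-in-progress segment 267,
# published as soon as each one is encoded:
#EXT-X-PART:DURATION=0.333,URI="filePart267.0.mp4"
#EXT-X-PART:DURATION=0.333,URI="filePart267.1.mp4",INDEPENDENT=YES
# A rendition report lets the client see the state of a sibling
# bitrate's playlist without requesting it in its entirety:
#EXT-X-RENDITION-REPORT:URI="../1M/playlist.m3u8",LAST-MSN=267,LAST-PART=1
```

Each EXT-X-PART advertises a fraction of a segment that can be fetched immediately, CAN-BLOCK-RELOAD signals support for blocking playlist requests, and the rendition report supports fast switching, matching the bullet points above.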

Furthermore, THEOplayer have released this talk explaining the changes and discussing implementation.

This talk from Apple’s HLS Technical Lead, Roger Pantos, given at Apple’s WWDC conference this year, goes through the problems and the solution, clearly describing LL-HLS. Over the following weeks here on The Broadcast Knowledge we will follow up with more talks discussing real-world implementations of LL-HLS, but to understand them, we really need to understand the fundamental proposition.

Apple has always been the gatekeeper to HLS, and this is one reason MPEG-DASH exists: a streaming standard separate from any one corporation, with the benefit of being ratified by a standards body (MPEG). So who better than Apple to give the initial introduction?

HLS is a chunk-based streaming protocol, meaning the illusion of a continuous stream is created by downloading many separate files in quick succession. The need to maintain a pipeline of these files, both in creating them and in stacking them up for playback, causes much of the delay. LL-HLS uses techniques such as reducing chunk length and transferring only parts of chunks to drastically reduce this intrinsic latency.
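As a rough back-of-the-envelope sketch (exact buffering policies vary by player, so treat the numbers as illustrative), the intrinsic latency scales with chunk duration and the number of chunks a player holds back:

```python
# Rough model of chunked-streaming latency: the current chunk must
# finish encoding, then the player buffers several chunks before
# starting playback, so the glass-to-glass delay is bounded below
# by roughly (chunks held back + 1) x chunk duration.

def min_latency(chunk_duration_s: float, chunks_held_back: int) -> float:
    return chunk_duration_s * (chunks_held_back + 1)

# Classic HLS: 6-second segments, ~3 segments held back by the player.
print(min_latency(6.0, 3))    # 24.0 seconds, before any network delay

# LL-HLS-style parts: ~0.33-second parts, 3 parts held back.
print(min_latency(0.333, 3))  # ~1.3 seconds
```

Shrinking the unit of transfer from whole segments to sub-second parts is what collapses that first term; much of the rest of LL-HLS exists to keep the request overhead of all those small files under control.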

Another requirement of LL-HLS is HTTP/2, an advance on HTTP that brings benefits such as multiplexing multiple requests over a single connection, thereby reducing overheads, and request pipelining.
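As an illustration of that connection reuse, here is a minimal sketch using Python’s httpx library (one of several HTTP clients with HTTP/2 support, installed with `pip install "httpx[http2]"`); the host and file names are placeholders:

```python
# Sketch of HTTP/2 connection reuse: a playlist request and a media
# part request are multiplexed over one connection rather than each
# paying the connection-setup cost of HTTP/1.1.
import httpx

with httpx.Client(http2=True, base_url="https://example.com/stream") as client:
    playlist = client.get("/lowlatency.m3u8")   # playlist poll...
    part = client.get("/filePart267.0.mp4")     # ...and a media part
    print(playlist.http_version, part.http_version)  # "HTTP/2" if negotiated
```

In a real LL-HLS session the same connection would also carry the HTTP/2 pushed parts described in the bullet points above.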

Roger carefully paints the whole picture and shows how this is intended to work. So while the industry is still in the midst of implementing this protocol, take some time to understand it from the source – from Apple.

Watch now!
Download the presentation

Speaker

Roger Pantos
HLS Technical Lead,
Apple

Video: M6 France – Master Control and Playout IP Migration

French broadcast company M6 Group has recently moved to an all-IP workflow, employing the SMPTE ST 2110 suite of standards for professional media delivery over IP networks. The two main playout channels and the MCR have already been upgraded, and the next few channels will be transitioned to the new core soon.

The M6 system comprises equipment from five different vendors (Evertz, Tektronix, Harmonic, Ross and TSL), all managed and controlled using the AMWA NMOS IS-04 and IS-05 specifications. Such interoperability is an inherent feature of the SMPTE ST 2110 suite of standards, allowing customers to focus on the operational workflows and flexibility that IP brings them. Centralised management and configuration of the system is provided through web interfaces, which also allows new equipment to be added easily and automatically.
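To give a flavour of that control layer, an IS-05 connection is made by PATCHing a receiver’s ‘staged’ endpoint over its REST API. The sketch below uses Python’s requests library; the node address and UUIDs are hypothetical placeholders:

```python
# Sketch of an AMWA NMOS IS-05 connection: route a sender's ST 2110
# stream to a receiver by PATCHing the receiver's staged parameters.
import requests

BASE = "https://node.example.com/x-nmos/connection/v1.0/single/receivers"
receiver_id = "5b5f0a2a-1111-2222-3333-444444444444"  # hypothetical UUID

patch = {
    "sender_id": "8a3e0b1c-5555-6666-7777-888888888888",  # hypothetical
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},  # switch now, unscheduled
}

resp = requests.patch(f"{BASE}/{receiver_id}/staged", json=patch, timeout=5)
resp.raise_for_status()
print(resp.json())  # echoes the staged parameters and activation state
```

Because every vendor’s device exposes the same endpoints, the same request works regardless of whose box is on the other end, which is precisely the interoperability the talk highlights.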

Thanks to Software Defined Orchestration and intuitive touchscreen interfaces, information such as source paths, link bandwidth and status, and device details can be quickly accessed via a web GUI. As the system is based on an IP network, signals can come in and out of the fabric numerous times without the cost implications this would have in the SDI world. Every point of the signal chain can be easily visualised, which enables broadcast engineers to maintain and configure the system with ease.

You can see the slides here.

Watch now!

Speaker

Slavisa Gruborovic
Solution Architect
Evertz Microsystems Inc.
Fernando Solanes
Director Solutions Engineering
Evertz Microsystems Inc.

 

Video: Tech Talk: Production case studies – the gain after the pain

Technology has always been harnessed to improve, change and reinvent production. Automated cameras, LED walls, AR and LED lighting, among many other technologies, have enabled productions to be done differently, creating new styles and even new types of programming.

In this Tech Talk from IBC 2019, we look at disruptive new technologies that change production, explained by the people who are implementing them and pushing the methods forward.

TV2 Norway’s Kjell Ove Skarsbø explains how they have developed a complete IP production flow and playout facility, giving them more flexibility and scalability. They did this by creating their own ESB (Enterprise Service Bus) to decouple the equipment from direct integrations, and, working in an agile fashion, they delivered incremental improvements. This means that Provys, Mayam, Viz and Mediator, amongst other equipment, communicate with each other by delivering messages into a system framework which passes messages on their behalf in a standard format.
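TV2’s actual framework isn’t public, but the decoupling pattern itself is easy to sketch: each product gets an adapter that translates its native API into a shared message format, and the bus routes messages to whoever subscribed. Everything below, topics and payloads included, is hypothetical:

```python
# Hypothetical sketch of the ESB decoupling pattern: systems never call
# each other directly; adapters publish standardised messages to a bus,
# which delivers them to all subscribers of the topic.
from collections import defaultdict
from typing import Callable

class Bus:
    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(message)

bus = Bus()
# A playout adapter reacts to schedule changes without knowing which
# scheduling product produced the message, so either side can be
# swapped out without touching the other.
bus.subscribe("schedule.updated", lambda msg: print("playout sees:", msg))
bus.publish("schedule.updated", {"channel": "TV2", "event_id": 42})
```

Swapping a product then means writing one new adapter rather than reworking every point-to-point integration.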

Importantly, Kjell shares with us some mistakes that were made along the way: for instance, the difficulty of managing a project of this size and the importance of programmers understanding broadcast. “Make no compromise” is one of the lessons learnt that he discusses.

Olie Baumann from MediaKind presents live 360º video delivery. “Experiences that people have in VR embed themselves more like memories than experiences like television,” he explains. Olie starts by explaining the lay of the land in today’s VR equipment market, then looks at some applications of 360º video, such as looking around from an on-car camera in racing.

Olie talks us through a case study in which he worked with Tiledmedia to deliver an 8K viewport, served at full resolution only in the direction the 360º viewer is facing, with a lower resolution for the rest. When you move your head, the full-resolution area moves to match. We then look through the system diagram to understand which parts sit in the cloud and what happens where.
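The core idea of this viewport-dependent, tiled delivery can be sketched in a few lines: split the 360º frame into tiles and request high-resolution versions only of the tiles inside the viewer’s current field of view. The tile count and field of view below are invented for illustration:

```python
# Hypothetical sketch of viewport-dependent tile selection for 360º
# video: only tiles overlapping the viewer's gaze are fetched in
# high resolution; the rest come from a low-resolution fallback.

TILES = 16          # tiles across the equirectangular 360-degree width
FOV_DEGREES = 110   # horizontal field of view of the headset

def tiles_in_view(yaw_degrees: float) -> set[int]:
    """Indices of the tiles overlapping the current viewport."""
    tile_width = 360 / TILES
    half_fov = FOV_DEGREES // 2
    return {
        int(((yaw_degrees + offset) % 360) // tile_width)
        for offset in range(-half_fov, half_fov + 1)
    }

# As the viewer turns, the high-resolution set follows their gaze:
print(sorted(tiles_in_view(0)))   # e.g. [0, 1, 2, 13, 14, 15]
print(sorted(tiles_in_view(90)))  # e.g. [1, 2, 3, 4, 5, 6]
```

The bandwidth saving comes from never paying the 8K bitrate for the parts of the sphere the viewer cannot currently see.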

Matthew Brooks and Thomas Preece from BBC R&D explain their work in taking object-based media from the research environment into mainstream production. Object-based delivery means the receiving device can display the objects in the way best suited to its screen. In today’s world of second screens, screen sizes vary, and small screens can benefit from larger, or less, text. It also allows for interactivity, where programmes fork and adapt to the viewer’s tastes, opinions and/or choices. Finally, they have delivered a tool to help productions manage this themselves, and it can even produce a linear version of the programme to maximise the value gained from the time and effort spent creating these unique productions.
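BBC R&D’s tooling is their own, but the underlying composition idea can be sketched as a renderer deciding per device how to present each object. The objects and rules here are hypothetical:

```python
# Hypothetical sketch of object-based media: the programme arrives as a
# set of objects, and the receiving device decides their presentation.

programme = [
    {"type": "video", "id": "main"},
    {"type": "subtitle", "id": "subs", "text": "Hello"},
    {"type": "graphic", "id": "stats-panel", "optional": True},
]

def render(objects: list[dict], screen_width_px: int) -> list[str]:
    """Compose the same programme differently for different displays."""
    small_screen = screen_width_px < 800
    composed = []
    for obj in objects:
        if small_screen and obj.get("optional"):
            continue  # drop non-essential objects on small screens
        if obj["type"] == "subtitle":
            size = 48 if small_screen else 28  # bigger text on phones
            composed.append(f"subtitle '{obj['text']}' at {size}px")
        else:
            composed.append(obj["id"])
    return composed

print(render(programme, 480))   # phone: fewer objects, larger text
print(render(programme, 1920))  # TV: everything, standard text size
```

The same object graph could equally be flattened broadcaster-side into the linear version mentioned above.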

Watch now!

Speakers

Kjell Ove Skarsbø
Chief Technology Architect,
TV2 Norway
Olie Baumann
Senior Technical Specialist,
MediaKind
Matthew Brooks
Lead Engineer,
BBC Research & Development
Thomas Preece
Research Engineer,
BBC Research & Development
Stephan Heimbecher

Video: Real World IP – PTP

PTP, Precision Time Protocol, underpins the recent uncompressed video and audio over IP standards. It takes over the role of facility-wide synchronisation from black and burst signals. So it’s no surprise that The Broadcast Bridge invited Meinberg to speak at their ‘Real World IP’ event exploring all aspects of video over IP.

Daniel Boldt, Head of Software Engineering at Meinberg, explains how you can accurately transmit time over a network. He summarises the way that PTP accounts for the time taken for messages to move from A to B. Daniel covers the different types of clock, explaining the often-heard terms ‘boundary clock’ and ‘transparent clock’ and exploring their pros and cons.
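At the heart of that accounting is a four-timestamp exchange between master and slave. Assuming a symmetric network path, the clock offset and the mean path delay both fall out of simple arithmetic:

```python
# The standard PTP delay request-response calculation:
#   t1: Sync sent by master       t2: Sync received by slave
#   t3: Delay_Req sent by slave   t4: Delay_Req received by master
# Assuming the path delay is the same in both directions, two
# unknowns (offset and delay) are solved from the four timestamps.

def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way mean path delay
    return offset, delay

# Example: slave runs 1.5 units fast and the true path delay is 10.
print(ptp_offset_and_delay(t1=100.0, t2=111.5, t3=120.0, t4=128.5))
# -> (1.5, 10.0)
```

Asymmetric paths break the symmetry assumption, which is exactly why the boundary and transparent clocks Daniel describes matter in real networks.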

Unlike black and burst, which is a distributed signal, PTP is a system with bi-directional communication, which makes redundancy all the more critical and, in some ways, complicated. Daniel talks about different ways to attack the main/reserve problem.
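One standard mechanism behind main/reserve behaviour is PTP’s Best Master Clock Algorithm (BMCA), which nodes run continuously on the datasets that candidate grandmasters advertise. The sketch below is heavily simplified; the real algorithm has more fields and topology rules:

```python
# Simplified sketch of PTP's Best Master Clock Algorithm: candidate
# grandmasters are compared field by field, and the lowest value wins
# at the first field where they differ.

def bmca_key(clock: dict) -> tuple:
    return (
        clock["priority1"],    # operator-set preference
        clock["clock_class"],  # traceability, e.g. locked to GNSS
        clock["accuracy"],     # advertised accuracy class
        clock["variance"],     # advertised stability
        clock["priority2"],    # operator-set tie-break
        clock["identity"],     # unique ID, the final tie-break
    )

candidates = [
    {"identity": "gm-main", "priority1": 128, "clock_class": 6,
     "accuracy": 0x21, "variance": 0x4E5D, "priority2": 127},
    {"identity": "gm-reserve", "priority1": 128, "clock_class": 6,
     "accuracy": 0x21, "variance": 0x4E5D, "priority2": 128},
]
print(min(candidates, key=bmca_key)["identity"])  # "gm-main"
# If gm-main's announcements stop arriving, the reserve wins the next
# election automatically and the network fails over.
```

This automatic election is what makes PTP redundancy both powerful and subtle: the network chooses, so the operator must design the priorities so that it chooses correctly.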

PTP is a cross-industry standard, so its time needs to be interpreted by devices and mapped onto an understanding of how each signal should look in order for everything to be in time. SMPTE ST 2059 defines this mapping, which Daniel covers.
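The principle of ST 2059 can be shown with a little arithmetic: all signals are defined to have been aligned at the SMPTE epoch, so any device holding PTP time can compute the current signal phase on its own. The sketch below assumes an exact 50 Hz rate; real implementations must also handle non-integer (1000/1001) frame rates:

```python
# Sketch of the SMPTE ST 2059 idea: every signal is defined as aligned
# at the SMPTE epoch (1970-01-01 00:00:00 TAI), so its phase at any
# instant is derivable from PTP time alone, with no reference signal.

FRAME_RATE = 50.0  # frames per second (exact-rate case only)

def time_to_next_frame(ptp_time_s: float) -> float:
    """Seconds until the next frame boundary, given PTP (TAI) time."""
    period = 1.0 / FRAME_RATE
    return (period - ptp_time_s % period) % period

# Two devices that share nothing but PTP time agree on the boundary:
t = 1_700_000_000.012345
print(time_to_next_frame(t))  # ~0.007655 s for both devices
```

This is how a PTP-locked camera and a PTP-locked switcher end up frame-aligned without ever exchanging a sync signal.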

PTP over WAN: Daniel looks at a case study of delivering PTP over a WAN, something many assume to be impractical, and shows how it was done without using a GPS antenna at the destination. To finish off the talk, there’s a teaser of the new features coming in the backwards-compatible PTP version 2.1, before a Q&A.

This is part of a series of videos from The Broadcast Bridge.

Watch now!
Speakers

Daniel Boldt
Head of Software Engineering
Meinberg