Video: Three Roads to Jerusalem

With his usual entertaining vigour, Will Law explains the differences between the three approaches to low-latency streaming: DASH, LHLS and Apple's LL-HLS. Likening them to religions that all get you to the same destination, we see how they differ and some of the reasons why.

Please note: Since this video was recorded, Apple has released a new draft of LL-HLS. As described in this great article from Mux, the update's changes are:

  • “Delivering shorter sub-segments of the video stream (Apple call these parts) more frequently (every 0.3 – 0.5s)
  • Using HTTP/2 PUSH to deliver these smaller parts, pushed in response to a blocking playlist request
  • Blocking playlist requests, eliminating the current speculative manifest request polling behaviour in HLS
  • Smaller, delta rendition playlists, which reduces playlist size, which is important since playlists are requested more frequently
  • Faster rendition switching, enabled by rendition reports, which allows clients to see what is happening in another playlist without requesting it in its entirety”[0]

Read the full article for the details and implications, some of which address points made in the talk.

Furthermore, THEOplayer have released this talk explaining the changes and discussing implementation.

Anyone who saw last year's Chunky Monkey video will recognise Will's near-Oscar-winning animation style as he sets the scene, explaining the contenders for the low-latency streaming crown.

We then look at a bullet list of features across each of the three low-latency technologies (note Apple's recent update), which leads on to a discussion of chunked transfer delivery and the challenges of line-rate delivery. A simple view of the universe would say that the ideal way to deliver a live stream encoded at a constant bitrate is to stream it constantly at that bitrate to the receiver. Whilst this is, indeed, the best way to go, when we stream we're also keeping one eye on whether we need to change bitrate. If more bandwidth becomes available, it might be best to upgrade to a better quality; if we suddenly find ourselves on congested, slow wifi, it might be time for an emergency drop down to the lowest-bitrate stream.
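As a minimal sketch of that decision logic (the rendition ladder and safety margin below are invented for illustration, not taken from the talk), an ABR client might pick its next quality level like this:

    # Hypothetical rendition ladder in bits per second, lowest first.
    RENDITIONS_BPS = [400_000, 1_200_000, 3_000_000, 6_000_000]

    def choose_rendition(estimated_bps: float, safety: float = 0.8) -> int:
        """Pick the highest rendition that fits within a safety margin
        of the estimated bandwidth, falling back to the lowest."""
        budget = estimated_bps * safety
        best = RENDITIONS_BPS[0]
        for bitrate in RENDITIONS_BPS:
            if bitrate <= budget:
                best = bitrate
        return best

    print(choose_rendition(2_500_000))  # -> 1200000; 3 Mbps won't fit the budget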

When you are delivered a stream as individual files, you can measure how long each takes to download to estimate your available bandwidth. If a file can be downloaded at 1Gbps, then it should always arrive at 1Gbps; if it arrives at less than 1Gbps, we know there is a bandwidth restriction and can make adjustments. Will explains that for streams delivered with chunked transfer, or in real time as in LL-HLS, this estimation no longer works as the files are simply never available at 1Gbps. He then explains some of the work that has been undertaken to develop more nuanced ways of estimating available bandwidth. It's well worth noting that the smaller the files you transfer, the less accurate the bandwidth estimation, as TCP takes time to ramp up to line rate, so small 320ms video segments are not ideal for maximising throughput.
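The naive estimator that breaks down here is simple to picture. In this sketch (the URL is illustrative), the measurement is only meaningful when the whole file already exists on the server; with chunked transfer the bytes trickle in at the encoder's rate, so the result approximates the encoding bitrate rather than the network capacity:

    import time
    import urllib.request

    def estimate_bps(segment_url: str) -> float:
        """Naive ABR estimate: bits transferred / wall-clock seconds."""
        start = time.monotonic()
        with urllib.request.urlopen(segment_url) as response:
            size_bytes = len(response.read())
        elapsed = time.monotonic() - start
        return size_bytes * 8 / elapsed

    # bw = estimate_bps("https://example.com/live/seg_0001.m4s")  # illustrative URL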

Continuing through the differences, we next compare request rates, with DASH at 20 requests per second against LL-HLS at 720. This leads naturally to an analysis of the benefits of the HTTP/2 PUSH technology used in LL-HLS and the savings it can offer. Will explores the implications of, and some of the problems with, last year's version of the LL-HLS spec, some of which have since been mitigated.

The talk concludes with some work Akamai has done to try to establish a single, common workflow, with examples and a GitHub repository. Will shows how this works and the limitations of the approach, and finishes with a look at the commonalities between the approaches.

[0] From “Low Latency HLS 2: Judgment Day” https://mux.com/blog/low-latency-hls-part-2/

Watch now!
Speakers

Will Law
Chief Architect,
Akamai

Video: Online Streaming Primer

A trip down memory lane for some, a great intro to the basics of streaming for others, this video from IET Media looks at the history of broadcasting and how it has moved over the years to online streaming, posing the question: with so many people now watching online, is that broad enough to be considered broadcast?

The first of a series of talks from IET Media, the video starts by highlighting that recording video only became practical some 20 years after the first television broadcasts, then looks at how television has moved on to add colour, increase resolution and go digital. The ability to record video is critical to almost all of our use of media today. Whilst film worked well as an archival medium, it didn't work well, at scale, for recording live broadcasts. So in the beginning, broadcasting from one, or a few, transmitters was all there was.

Russell Trafford-Jones, from IET Media, then discusses the advent of streaming, from its predecessor, file-based music on portable players, through the rise of online radio, to how this naturally evolved into the urge to stream video in much the same way.

This being an IET Media video, Russell then looks at the technology behind getting video onto a network and over the internet. He talks about cutting the stream into chunks, i.e. small files, and how sending a sequence of files can create a seamless stream of data. One key advantage of this method is Adaptive Bitrate (ABR): the ability to change from one quality level to another, typically by changing bitrate, to adapt to changing network conditions.
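To make the 'files become a stream' idea concrete, here is a minimal sketch of a player's fetch loop (the URL template, segment numbering and decoder hand-off are all invented for illustration; real players get segment names from a manifest or playlist):

    import urllib.request

    # Hypothetical URL template; real segment names come from a manifest.
    SEGMENT_URL = "https://example.com/vod/segment_{:05d}.ts"

    def feed_decoder(data: bytes) -> None:
        print(f"decoded {len(data)} bytes")  # stand-in for a real decoder

    def play(first: int, last: int) -> None:
        # Fetch consecutive small files and hand each to the decoder in
        # order; played back-to-back they form one continuous stream.
        for number in range(first, last + 1):
            with urllib.request.urlopen(SEGMENT_URL.format(number)) as seg:
                feed_decoder(seg.read())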

Finishing by talking about the standards available for online streaming, this talk is a great introduction to streaming and an important part of anyone’s foundational understanding of broadcast and streaming.

Watch now!

This video was produced by IET Media, a technical network within the IET which runs events, talks and webinars for networking and education within the broadcast industry.

Speakers

Russell Trafford-Jones
Exec Member, IET Media
Manager, Support & Services, Techex
Editor, The Broadcast Knowledge

Webinar: ATSC 3.0 Signaling, Delivery, and Security Protocols

ATSC 3.0 is bringing IP delivery to terrestrial broadcast. Streaming data live over the air is no mean feat, but it can nevertheless be achieved with standard protocols such as MPEG DASH. The difficulty is telling the other end what it's receiving and making sure that security is maintained, ensuring that no one can insert unintended media or data.

In the second of this webinar series from the IEEE BTS, Adam Goldberg digs deep into two standards which form part of ATSC 3.0 to explain how security, delivery and signalling are achieved. As with other recent standards, such as SMPTE's 2022 and 2110, we're really dealing with a suite of documents. Starting from the root document A/300, there are currently twenty further documents describing the physical layer (as we learnt last week in the IEEE BTS webinar from Sony's Luke Fay), the management and protocol layer, the application and presentation layer, and the security layer. In this talk Adam, who is chair of a specialist group on ATSC 3.0 security and vice-chair of one on management and protocols, explains what's in documents A/331 and A/360, which between them define signalling, delivery and security for ATSC 3.0.

Security in ATSC 3.0
One of the benefits of ATSC 3.0's move into IP and streaming is that it can build on widely deployed and well-understood standards already in service in other industries. Security is no different, using the same base technology that secure websites the world over rely on. Still colloquially known by its old name, SSL, encrypted communication with websites has been through several generations since the world first saw 'HTTPS' in the address bar. TLS 1.2 and 1.3 are the encryption protocols used to secure and authenticate data within ATSC 3.0, along with X.509 certificates.
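The machinery involved is the same one any HTTPS client exercises. As a point of reference (an ordinary web handshake, not an ATSC 3.0 receiver), Python's standard library shows the moving parts: a negotiated TLS version and an X.509 certificate presented by the far end:

    import socket
    import ssl

    # A default context enforces certificate validation against trusted roots.
    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())        # e.g. 'TLSv1.3'
            cert = tls.getpeercert()    # the peer's X.509 certificate
            print(cert["subject"])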

Authentication vs Encryption
The importance of authentication alongside encryption is hard to overstate. Encryption allows the receiver to ensure that the data wasn't changed during transport and gives assurance that no one else could have decoded a copy, but it provides no assurance that the sender was actually the broadcaster. Certificates are the key to establishing what's called a 'chain of trust'. The certificates, which are themselves cryptographically signed, are checked against a stored list of trusted parties, which means that any data arriving can carry a certificate proving it did, indeed, come from the broadcaster or, in the case of apps, a trusted third party.
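To see authentication working on its own, here is a generic sign-then-verify sketch (plain RSA-PSS using the widely used cryptography package; ATSC 3.0's actual certificate profile and algorithms are defined in A/360, not here). Trust in the public key is exactly what the certificate chain provides in practice:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The broadcaster's key pair; in the real system the receiver learns
    # the public key from an X.509 certificate chained to a trusted root.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"signalling table bytes"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())

    try:
        # Fails if the data or the signature was altered in transit.
        public_key.verify(signature, message, pss, hashes.SHA256())
        print("authentic: signed by the holder of the private key")
    except InvalidSignature:
        print("rejected: not signed by the trusted key")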

Signalling and Delivery
Telling the receiver what to expect and what it's getting is a big topic, dealt with in many places within the ATSC 3.0 suite. The Service List Table (SLT) provides the data the receiver needs to get a handle on what's available very quickly, and in turn points to the correct Service Layer Signaling (SLS) which, for a specific service, provides the detail needed to access the media components within, including the languages available, captions, audio and emergency services.

[Image: ATSC 3.0 Receiver Protocol Stack]
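As a rough illustration of the kind of bootstrap the SLT gives a receiver (the XML below is loosely modelled on A/331 but heavily simplified; the element and attribute names are illustrative, not normative):

    import xml.etree.ElementTree as ET

    # Simplified, illustrative SLT; real A/331 tables carry many more
    # fields, including the signalling needed to locate each service's SLS.
    SLT_XML = """
    <SLT bsid="8086">
      <Service serviceId="1001" shortServiceName="NewsHD"/>
      <Service serviceId="1002" shortServiceName="Kids"/>
    </SLT>
    """

    # A receiver scans the SLT to learn very quickly which services exist,
    # then fetches each service's Service Layer Signaling for the detail.
    for service in ET.fromstring(SLT_XML).findall("Service"):
        print(service.get("serviceId"), service.get("shortServiceName"))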

Media delivery is achieved with two technologies: ROUTE (Real-time Object delivery over Unidirectional Transport), an evolution of FLUTE, which the 3GPP specified to deliver MPEG DASH over LTE networks, and MMTP (MPEG Media Transport Protocol), an MPEG standard which, like MPEG DASH, is based on the ISO BMFF container format which we covered in a previous video here on The Broadcast Knowledge.

Register now for this webinar to find out how this all connects together so that we can have safe, connected television displaying the right media at the right time from the right source!

Speaker

Adam Goldberg
Chair, ATSC 3.0 Specialist Group on ATSC 3.0 Security
Vice-chair, ATSC 3.0 Specialist Group on Management and Protocols
Director Technical Standards, Sony Electronics

Video: Networking Fundamentals


First aired: Thursday 12th December, 1pm EST / 18:00 GMT

Networking is increasingly important throughout the broadcast chain. This recorded webcast picks out the fundamentals that underpin SMPTE ST 2110 and that help deliver video streaming services. We’ll piece them together and explain how they work, leaving you with more confidence in talking about and working with technologies such as multicast video and HTTP Live Streaming (HLS).
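One of those fundamentals, multicast reception, comes down to joining a group address and reading datagrams. A minimal sketch (the group and port are illustrative; real deployments take these from, for example, an ST 2110 SDP file):

    import socket
    import struct

    MCAST_GRP, MCAST_PORT = "239.1.1.1", 5004   # illustrative group/port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))

    # Ask the OS (and, via IGMP, the network) to join the multicast group.
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, addr = sock.recvfrom(2048)   # one UDP datagram from the stream
    print(f"{len(data)} bytes from {addr}")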

Watch now!
Speaker

Russell Trafford-Jones
Editor, The Broadcast Knowledge
Manager, Support & Services, Techex