Video: Panel Discussion on RIST

RIST is a streaming protocol which allows unreliable/lossy networks such as the internet to be used for critical streaming applications. Called Reliable Internet Stream Protocol, it uses a light-touch retransmission mechanism to request any data that's lost by the network. As losses are often temporary and sporadic, the chances are that the data will get through on the second or, perhaps, third attempt. For a more in-depth explanation of RIST, check out this talk from Merrick Ackermans.
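To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of the receiver-side idea: track RTP sequence numbers and request retransmission (a NACK) of any gap. This illustrates the principle only; it is not code from the RIST specification, and all names are invented.

```python
# Illustrative only: gap detection of the kind a RIST receiver performs
# before sending a retransmission request (NACK) back to the sender.
def detect_gaps(expected_seq: int, received_seq: int) -> list[int]:
    """Return the sequence numbers missed between the packet we
    expected and the packet that actually arrived."""
    missed = []
    seq = expected_seq
    # RTP sequence numbers are 16-bit, so they wrap at 65536.
    while seq != received_seq:
        missed.append(seq)
        seq = (seq + 1) & 0xFFFF
    return missed

# We expected packet 1000 next, but packet 1003 arrived:
print(detect_gaps(1000, 1003))  # [1000, 1001, 1002] -> NACK these
```

If the retransmitted packets are themselves lost, the request is simply repeated, which is why sporadic loss is usually recovered within a round trip or two.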

The panel here at the IBC 2019 IP Showcase gives a brief definition of RIST and then examines how far they've got with RIST's 'Simple Profile', calling out the things that are yet to be done. Still on the to-do list are such things as 'pull' streams, encryption, simplifying the port structure and embedding control.

Fixed-key encryption comes under the microscope next, asking whether there's a practical threat in terms of finding the key, but also whether any side-channel attacks are possible against a 'non-standard' encryption scheme. The fixed-key encryption has been implemented in line with NIST standards but, as Kieran highlights, getting enough eyes on the detail is difficult with the specification being created outside of an open forum.

The panel covers the recent interop testing, which showed overall positive results, and then discusses whether RIST is appropriate for uncompressed video. Already, Kieran points out, Amazon Direct Connect is available with links of hundreds of gigabits per second, so it's completely possible to send uncompressed video to the cloud. RTP is over 20 years old and is being used for much more than was ever imagined at the time; as technology develops, use of RIST will develop with it.
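As a back-of-the-envelope check of that claim (my arithmetic, not the panel's), uncompressed bitrates are easy to estimate from resolution, frame rate and pixel depth:

```python
# Rough uncompressed video bitrates; active picture only, ignoring
# blanking and RTP/ST 2110 packet overheads.
def uncompressed_gbps(width: int, height: int, fps: float,
                      bits_per_pixel: int) -> float:
    return width * height * fps * bits_per_pixel / 1e9

# 4:2:2 10-bit sampling averages 20 bits per pixel.
hd = uncompressed_gbps(1920, 1080, 60, 20)   # ~2.5 Gb/s
uhd = uncompressed_gbps(3840, 2160, 60, 20)  # ~10 Gb/s
print(f"1080p60: {hd:.1f} Gb/s, 2160p60: {uhd:.1f} Gb/s")
# A 100 Gb/s link could therefore carry dozens of HD streams.
```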

What are the other uses for RIST? Videoconferencing is one possibility; creating a generally secure link to remote equipment and ingest into the cloud are the others offered.

The panel finishes by looking to the future, asking how, for instance, the encoder could react to reduced link quality, how much of the technology needed should be standardised, and what features could be added. Sergio Ammirata suggests opening up the protocol so that bandwidth estimation can be requested by any interested device.

This session, bringing together DVEO, Open Broadcast Systems, Zixi, Net Insight and Cobalt Digital, finishes with questions from the audience.

Watch now!

Speakers

Sergio Ammirata
Deployments and Future Development,
DVEO
Kieran Kunhya
Founder,
Open Broadcast Systems
Uri Avni
Founder,
Zixi
Mikael Wånggren
Senior Software Engineer,
Net Insight
Ciro Noronha
Executive Vice President of Engineering,
Cobalt Digital

Video: Scaling Live OTT with DASH

MPEG DASH is an open standard for streaming which provides a stable distribution chain, detailing aspects like packaging and DRM, as well as being the basis for low-latency CMAF streaming.

DASH manifest files, text files which list the many small files that make up the stream, can be complicated, long and slow to parse, demonstrates Hulu's Zachary Cava. As a live event continues, the number of chunks to describe increases, so manifest files can easily grow to hundreds of kilobytes and eventually to megabytes. This means the standard way of producing these .mpd files will end up slowing the player down to the point where it can't keep up with the stream.

Zachary goes over some initial optimisations which help a lot in reducing the size of the manifests, before introducing a method of solving the scalability issue. He explains that patching the .mpd file is the way to go, meaning you can reference just the updated values in the latest .mpd.
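As a rough sketch of how this looks from the player's side (my illustration of the 4th edition's MPD patch mechanism, with invented URLs; a real player must apply the patch operations as the spec defines them):

```python
# Sketch: fetch the full manifest once, then poll only the small patch
# document instead of re-downloading the ever-growing .mpd.
import urllib.request
import xml.etree.ElementTree as ET

MPD_NS = "{urn:mpeg:dash:schema:mpd:2011}"

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

mpd = ET.fromstring(fetch("https://example.com/live/stream.mpd"))

# The manifest advertises where its patches can be fetched.
patch_location = mpd.find(f"{MPD_NS}PatchLocation")

if patch_location is not None:
    # Each refresh now transfers only the elements added or removed
    # since the last publish time, so the download stays small even
    # after hours of live content.
    patch_doc = ET.fromstring(fetch(patch_location.text.strip()))
    # ...apply the patch's add/replace/remove operations to `mpd` here.
```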

With on-screen examples of manifest files, we clearly see how this works, and we see that this method is still compatible with branching of the playback, e.g. for regionalisation of advertising or programming.

Zachary finishes by explaining that this technique is arriving in the 4th edition of MPEG-DASH and by answering questions from the audience.

Watch now!

Speaker

Zachary Cava
Video Platform Architect,
Hulu

Video: Broadcast 101 – Audio in an IP Infrastructure

Uncompressed audio has been in the IP game a lot longer than uncompressed video. Because of this long history, it's had the chance to spawn a fair number of formats ahead of the current standard, AES67. Since many people were trying to achieve the same thing, we find that some formats are partially compatible with AES67, whilst others are not compatible at all.

To navigate this difficult world of compatibility, Axon CTO Peter Schut continues the Broadcast 101 webinar series with this video recorded this month.

Peter starts by explaining the different audio formats available today, including Dante, RAVENNA and others, and outlines the ways in which they do and don't interoperate. After spending a couple of minutes summarising each format individually, including the two SMPTE ST 2110 audio formats, -30 and -31, he shows a helpful table comparing them.

Timing is next on the list, with a discussion of PTP and the way SMPTE ST 2059 is used. Packet time is covered next, explaining how the RTP payload fits into the equation. The payload size directly affects the duration of audio you can fit into a packet; that duration is important for keeping latency low and is restricted to either 1ms or 125 microseconds by SMPTE ST 2110-30.
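To put numbers on that (a worked example of mine, not from the webinar), consider 48 kHz, 24-bit audio as carried by AES67/ST 2110-30:

```python
# RTP payload sizes for linear PCM audio at the two packet times
# ST 2110-30 allows. Assumes 48 kHz sampling, 24-bit (L24) samples.
SAMPLE_RATE_HZ = 48_000
BYTES_PER_SAMPLE = 3  # 24-bit linear PCM

def rtp_payload_bytes(packet_time_s: float, channels: int) -> int:
    samples_per_packet = round(SAMPLE_RATE_HZ * packet_time_s)
    return samples_per_packet * channels * BYTES_PER_SAMPLE

print(rtp_payload_bytes(0.001, channels=8))     # 1 ms   -> 1152 bytes
print(rtp_payload_bytes(0.000125, channels=8))  # 125 us -> 144 bytes
# The shorter packet time cuts packetisation latency by 8x, at the
# cost of 8x as many packets per second for the same stream.
```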

Peter finishes up this webinar with some further details of the interoperability problems between the formats.

Watch now!

Speaker

Peter Schut
CTO,
Axon

Video: Introducing Low-Latency HLS

HLS has taken the world by storm since its first release 10 years ago. Capitalising on the widely understood and deployed technologies already underpinning websites at the time, it brought with it great scalability and the ability to seamlessly move between different bitrate streams to help deal with varying network performance (and computer performance!).

HLS has continued to evolve over the years, with new versions documented as drafts under the IETF. Its biggest problem for today's market is its latency. As originally specified, you were guaranteed at least 30 seconds of latency, and many viewers would see a minute. This has improved over the years, but only so far.

Low-Latency HLS (LL-HLS) is Apple's answer to the latency problem: a way of bringing latency down to be comparable with broadcast television for those live broadcasts where immediacy really matters.

Please note: Since this video was recorded, Apple has released a new draft of LL-HLS. As described in this great article from Mux, the update's changes are:

  • “Delivering shorter sub-segments of the video stream (Apple call these parts) more frequently (every 0.3 – 0.5s)
  • Using HTTP/2 PUSH to deliver these smaller parts, pushed in response to a blocking playlist request
  • Blocking playlist requests, eliminating the current speculative manifest request polling behaviour in HLS
  • Smaller, delta rendition playlists, which reduces playlist size, which is important since playlists are requested more frequently
  • Faster rendition switching, enabled by rendition reports, which allows clients to see what is happening in another playlist without requesting it in its entirety”[0]

Read the full article for the details and implications, some of which address some points made in the talk.

Furthermore, THEOplayer have released this talk explaining the changes and discussing implementation.

This talk from Apple’s HLS Technical Lead, Roger Pantos, given at Apple’s WWDC conference this year goes through the problems and the solution, clearly describing LL-HLS. Over the following weeks here on The Broadcast Knowledge we will follow up with some more talks discussing real-world implementations of LL-HLS, but to understand them, we really need to understand the fundamental proposition.

Apple has always been the gatekeeper to HLS, and this is one reason that MPEG DASH exists: a streaming standard that is separate from any one corporation and has the benefit of being ratified by a standards body (MPEG). So who better to give the initial introduction?

HLS is a chunk-based streaming protocol, meaning that the illusion of a continuous stream of data is given by downloading many separate files in quick succession. It's the need to maintain a pipeline of these files which causes much of the delay, both in creating them and in stacking them up for playback. LL-HLS uses techniques such as reducing chunk length and transferring only parts of chunks in order to drastically reduce this intrinsic latency.

Another requirement of LL-HLS is HTTP/2, an advance on HTTP which brings benefits such as carrying multiple requests over a single HTTP connection, thereby reducing overheads and replacing HTTP/1.1's request pipelining with full multiplexing.
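As a hedged illustration (not Apple's sample code) of how these pieces fit together, here is a blocking playlist request over HTTP/2 using the _HLS_msn and _HLS_part query parameters from the LL-HLS spec; the URL is invented:

```python
# Requires: pip install "httpx[http2]"
import httpx

# One HTTP/2 connection carries all playlist and part requests.
with httpx.Client(http2=True) as client:
    # Block until media sequence 1800, part 2 has been published;
    # the server holds the response rather than the client polling.
    resp = client.get(
        "https://example.com/live/rendition.m3u8",
        params={"_HLS_msn": 1800, "_HLS_part": 2},
    )
    resp.raise_for_status()
    playlist = resp.text  # includes the newly published partial segments
    print(playlist.splitlines()[0])  # "#EXTM3U"
```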

Roger carefully paints the whole picture and shows how this is intended to work. So while the industry is still in the midst of implementing this protocol, take some time to understand it from the source – from Apple.

Watch now!
Download the presentation

Speaker

Roger Pantos
HLS Technical Lead,
Apple