Video: Getting Your Virtual Hands On RIST

RIST is one of a number of streaming protocols that provide backward error correction, whereby the receiver requests retransmission of lost packets. These are commonly used to transport media streams into content providers, but are increasingly finding use in other parts of the broadcast workflow, including making production feeds such as multiviewers and autocues available to staff at internet-connected locations such as the home.

RIST (Reliable Internet Stream Transport) is being created by a working group in the VSF (Video Services Forum) to provide an open, interoperable specification available for the whole industry to adopt. This article provides a brief summary, whereas this talk from FOSDEM20 goes into some detail.

We’re led through the topic by Sergio Ammirata, CTO of DVEO, which is a member of the RIST Forum and collaborates on the protocol. What’s remarkable about RIST is that several companies that have created their own error-correcting streaming protocols, such as DVEO’s Dozer, which Sergio created, have come together to share their experience and best practices.

Press play to watch:

Sergio starts by explaining why RIST is based on UDP – a topic explored further in this article about RIST, SRT and QUIC – and moves on to explaining how it works through ‘NACK’ messages, also known as ‘Negative Acknowledgement’ messages.
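The NACK mechanism can be illustrated with a toy sketch: the receiver tracks RTP sequence numbers and, on spotting a gap, sends a Negative Acknowledgement listing the missing packets for the sender to retransmit. This is only the logic, not the RIST wire format.

```python
# Toy sketch of NACK-based recovery (the logic only, not RIST's
# actual wire format): the receiver tracks RTP sequence numbers and,
# on spotting a gap, asks the sender to resend the missing packets.

def detect_gaps(received_seqs, expected_start):
    """Return the sequence numbers missing from a contiguous run."""
    missing = []
    expected = expected_start
    for seq in sorted(received_seqs):
        while expected < seq:
            missing.append(expected)
            expected += 1
        expected = seq + 1
    return missing

def build_nack(missing):
    """A NACK is simply a request to retransmit specific packets."""
    return {"type": "NACK", "seqs": missing}

# Packets 3 and 5 were lost in transit:
arrived = [1, 2, 4, 6, 7]
print(build_nack(detect_gaps(arrived, 1)))
# → {'type': 'NACK', 'seqs': [3, 5]}
```

Because the receiver only speaks up when something is missing, the return channel carries almost no traffic on a clean network, which is the key efficiency argument for NACK over positive acknowledgement.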

We hear next about the principles of RIST, the main one being interoperability. There are two profiles: Simple and Main. Sergio outlines the Simple profile, which provides RTP, error correction and channel bonding. The Main profile, published as VSF TR-06-2, adds encryption, NULL packet removal, FEC and GRE tunnelling. RIST uses a tunnel to multiplex many feeds into one stream: using Cisco’s Generic Routing Encapsulation (GRE), RIST can bring together multiple RIST streams and other arbitrary data streams in one tunnel. The idea of a tunnel is to hide complexity from the network infrastructure.
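GRE encapsulation itself is simple: a small header identifying the payload type is prepended to each inner packet, so the network only ever sees the outer tunnel. A minimal sketch of the standard 4-byte GRE header (per RFC 2784; this is generic GRE, not the full RIST Main profile framing):

```python
import struct

# Minimal sketch of GRE encapsulation (RFC 2784): a 4-byte header
# (flags/version, then the EtherType of the payload) prepended to
# each inner packet. RIST Main profile carries several streams over
# one connection inside a tunnel built this way.

GRE_FLAGS_VERSION = 0x0000   # no checksum/key/sequence bits, version 0
ETHERTYPE_IPV4 = 0x0800      # inner payload is an IPv4 packet

def gre_encapsulate(inner_packet: bytes) -> bytes:
    header = struct.pack("!HH", GRE_FLAGS_VERSION, ETHERTYPE_IPV4)
    return header + inner_packet

def gre_decapsulate(outer: bytes) -> bytes:
    flags, proto = struct.unpack("!HH", outer[:4])
    assert proto == ETHERTYPE_IPV4, "unexpected payload type"
    return outer[4:]

pkt = b"\x45\x00..."  # would be a full IPv4/UDP/RTP packet in practice
assert gre_decapsulate(gre_encapsulate(pkt)) == pkt
```

Because every inner packet, whatever its stream, travels inside the same outer UDP flow, firewalls and NATs only need to pass one connection.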

Tunnelling allows for bidirectional data flow under one connection. This means you can create your tunnel in one direction and send data in the opposite direction. This gets around many firewall problems since you can create your tunnel in the direction which is easiest to achieve without having to worry about the direction of dataflow. Setting up GRE tunnels is outside of the scope of RIST.

Sergio finishes by introducing librist, demoing applications and answering questions from the audience.

Watch now!
Speaker

Sergio Ammirata
Chief Technical Officer of DVEO
Managing Partner of SipRadius LLC.

Video: Multicast ABR opens the door to a new DVB era

Multicast ABR is an interesting hybrid technique that distributes video streams to the home by multicast, converting them into conventional point-to-point streams at a multicast gateway close to the home. Whilst the internet at large and home networks can’t be assumed to support multicast, we can use multicast for video distribution within a managed network such as one run by an ISP.

ISPs are interested in using multicast because it can drastically reduce the bandwidth in use within the network. Currently, each device watching a video requires its own feed. With multicast, if 1000 people are watching the same stream in one local area, the multicast gateway need only pull one copy of the stream from the ISP’s nationwide network, then send out the 1000 individual feeds from the local headend.
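The saving in the 1000-viewer example is easy to put numbers on. The bitrate below is an assumption for illustration:

```python
# Back-of-envelope saving from the 1000-viewer example above: with
# unicast the national network carries one copy per viewer; with
# multicast it carries a single copy to the local gateway, which
# fans out the 1000 individual feeds locally.

stream_mbps = 8          # assumed bitrate of one HD rendition
viewers = 1000

unicast_core_mbps = stream_mbps * viewers   # one copy per viewer
multicast_core_mbps = stream_mbps * 1       # one shared copy

print(f"Core saving: {unicast_core_mbps - multicast_core_mbps} Mbps")
# → Core saving: 7992 Mbps
```

The last-mile bandwidth to each home is unchanged; the saving is entirely in the core and regional network, which is exactly where ISPs feel the load of live events.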

Guillaume Bichot from Broadpeak explains how this would work with a multicast server that picks up the streaming files from a CDN/the internet and converts them into multicast. This then needs a gateway at the other end to convert back into unicast. The gateway can run on a set-top box in the home, as long as multicast can be carried over the last mile to the box. Alternatively, it can be upstream at a local headend or similar.

At the beginning of the talk, we hear from BBC R&D’s Richard Bradbury who explains the current state of the work. Published as DVB Bluebook A176, this is currently written to account for live streaming, but will be extended in the future to deal with video on demand. The gateway is able to respond with a standard HTTP redirect if it becomes overloaded, which seamlessly pushes the player’s request directly to the relevant CDN endpoint.
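That overload behaviour can be sketched as a small routing decision. The CDN URL and capacity limit here are invented for illustration:

```python
# Sketch of the gateway's overload behaviour described above: answer
# a segment request with a standard HTTP 302 redirect to the CDN
# when it cannot serve it locally. CDN_ORIGIN and MAX_SESSIONS are
# hypothetical values, not from the DVB specification.

CDN_ORIGIN = "https://cdn.example.com"   # hypothetical CDN endpoint
MAX_SESSIONS = 100                       # hypothetical capacity limit

def route_request(path, active_sessions, max_sessions=MAX_SESSIONS):
    """Return (status, location) for a player's segment request."""
    if active_sessions >= max_sessions:
        # Overloaded: push the player straight to the CDN.
        return 302, CDN_ORIGIN + path
    # Normal case: serve the segment from the multicast cache.
    return 200, None

print(route_request("/live/seg42.m4s", active_sessions=150))
# → (302, 'https://cdn.example.com/live/seg42.m4s')
```

Because the redirect is plain HTTP, the player needs no special logic: a standard DASH or HLS client follows the 302 and keeps playing from the CDN.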

DVB also outlines how players can contact the CDN for missing data or for video streams that are not currently provided via the gateway. Guillaume outlines which parts of the ecosystem are specified and which are not. For instance, the function of the server is explained but not how it achieves this. He then shows where all this fits into the network stack and highlights that it is protocol-agnostic as far as media delivery is concerned. Whilst they have used DVB-DASH as their assumed target, this could as easily work with HLS or other formats.

Guillaume finishes by showing deployment examples. We see that this can work with uni-directional satellite feeds with a return channel over the internet. It can also work with multiple gateways accessible to a single consumer.

The webinar ends with questions, though throughout the session Richard Bradbury was also answering questions in the chat. DVB has provided a transcript of these questions.

Watch now!
Download the slides from this presentation
Speakers

Richard Bradbury
Lead Research Engineer,
BBC R&D
Guillaume Bichot
Principal Engineer, Head of Exploration,
Broadpeak

Video: The Case To Caption Everything

To paraphrase a cliché, “you are free to put black and silence to air, but if you do it without captions, you’ll go to prison.” Captions are useful to the deaf and the hard of hearing, as well as those who aren’t. And in many places, not captioning videos is seen as so discriminatory that there is a mandatory quota. The saying at the beginning alludes to the US federal and local laws which lay down fines for lack of compliance – though whether it’s truly possible to go to prison is not clear.

The case for captioning:
“13.3 Million Americans watch British drama”

In many parts of the world, ‘subtitles’ means the same as ‘captions’ does in countries such as the US. In this article, I shall use the word captions to match the terms used in the video. As Bill Bennett from ENCO Systems explains, Closed Captions are sent as data along with the video, meaning you can ask your receiver to turn display of the text on or off.

In this talk from the Midwest Broadcast Multimedia Technology Conference, we hear not only why you should caption, but get introduced to the techniques for both creating and transmitting them. Bill starts by introducing us to stenography, the technique of typing on special machines to do real-time transcripts. This is to help explain how resource-intensive creating captions is when using humans. It’s a highly specialist skill which, alone, makes it difficult for broadcasters to deliver captions en masse.

The alternative, naturally, is to have computers do the task. Whilst they are cheaper, they have problems understanding audio over noise and with multiple people speaking at once. The compromise which is often used, for instance by BBC Sport, is to have someone re-speaking the audio into the computer. This harnesses the best aspects of the human brain with the speed of computing. The re-speaker can enunciate and emphasise to get around idiosyncrasies in recognition.

Bill re-visits the numerous motivations to caption content. He talks about the legal reasons, particularly within the US, but also mentions the usefulness of captions for situations where you don’t want audio from TVs, such as receptions and shop windows as well as in noisy environments. But he also makes the point that once you have this data, the broadcaster can take the opportunity to use that data for search, sentiment analysis and archive retrieval among other things.

Watch now!
Download the presentation
Speaker

Bill Bennett
Media Solutions Account Manager
ENCO Systems

Video: ST 2110 Testing Fundamentals

When you’ve chosen to go IP in your facility using ST 2110, you’ll need to know how to verify it’s working correctly, how to diagnose problems and have the right tools available. Vendors participate in several interop tests a year, so we can learn from how they set up their tests and the best practices they develop.

In this talk, Jean Lapierre explains what to test for and the types of things that typically go wrong in ST 2110 systems with PTP. Jean starts by talking about the parts of 2110 which are tested and the network and timing infrastructure which forms the basis of the testing. He then starts to go through problems to look for in deployments.

Jean talks about testing that IGMPv3 multicasts can be joined and then looks at checking the validity of SDP files, which can be done by visual inspection and also with SDPoker. A visual inspection is still important because whilst SDPoker checks the syntax, there can be basic issues in the content. ST 2022-7 testing is next. The simplest test is to turn one path off and check for disturbances, but this should be followed up by using a network emulator to deliver a variety of different types of errors of varying magnitudes to ensure there are no edge cases.
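The point about content versus syntax can be made concrete with a toy check: an SDP file can parse cleanly yet still carry nonsense, such as a connection line pointing at a unicast address. This is purely illustrative and no substitute for SDPoker:

```python
import ipaddress

# Illustrative complement to syntax checking: a syntactically valid
# SDP can still carry bad content. This toy check verifies the c=
# line really holds a multicast address, one "basic issue" a visual
# inspection would otherwise have to catch.

def connection_is_multicast(sdp_text: str) -> bool:
    for line in sdp_text.splitlines():
        if line.startswith("c=IN IP4 "):
            addr = line.split()[2].split("/")[0]  # strip TTL suffix
            return ipaddress.ip_address(addr).is_multicast
    return False

good = "v=0\r\nc=IN IP4 239.100.1.1/64\r\nm=video 5004 RTP/AVP 96"
bad  = "v=0\r\nc=IN IP4 192.168.1.1\r\nm=video 5004 RTP/AVP 96"
print(connection_is_multicast(good), connection_is_multicast(bad))
# → True False
```

Checks like this are cheap to automate, which matters when a facility has hundreds of SDP files describing its 2110 flows.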

ST 2110 uses PTP for timing so, naturally, the timing system also needs to be tested. PTP is a bi-directional system for providing time to all parts of the network instead of a simple waterfall distribution of a centrally created time signal like black and burst. Whilst this system needs monitoring during normal operation, it’s important to check for proper grandmaster failover of your equipment.
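Grandmaster failover hinges on the Best Master Clock Algorithm: when the primary GM disappears, every device re-runs a field-by-field comparison of the remaining clocks and locks to the winner. A toy sketch of that comparison, with invented clock values (the field order follows IEEE 1588's dataset comparison):

```python
# Toy sketch of the PTP Best Master Clock comparison (IEEE 1588):
# during a grandmaster failover test you expect devices to re-run
# this comparison and lock to the next-best clock. Field order is
# the standard dataset comparison: priority1, clockClass,
# clockAccuracy, offsetScaledLogVariance, priority2, clockIdentity.
# Clock values below are invented for illustration.

def better_master(a, b):
    """Return whichever clock should win the BMCA comparison."""
    keys = ("priority1", "clockClass", "clockAccuracy",
            "variance", "priority2", "identity")
    return a if tuple(a[k] for k in keys) < tuple(b[k] for k in keys) else b

primary = {"priority1": 128, "clockClass": 6,   # 6 = locked to GPS
           "clockAccuracy": 0x21, "variance": 0x4E5D,
           "priority2": 128, "identity": "clock-a"}
backup  = {"priority1": 128, "clockClass": 7,   # 7 = in holdover
           "clockAccuracy": 0x21, "variance": 0x4E5D,
           "priority2": 128, "identity": "clock-b"}

assert better_master(primary, backup) is primary
# Pull the primary GM and the backup should be elected network-wide.
```

A failover test, then, is checking that every device in the facility reaches the same conclusion as this comparison, and does so without disturbing the media.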

PTP is also important when doing 2110 PCAPs in order to have accurate timing and to enable analysis with the EBU’s LIST project. Jean gives some guidelines on using and installing LIST and finishes his talk outlining some of the difficulties he has faced, providing tips on what to look out for.

Watch now!
Speaker

Jean Lapierre
Senior Director of Engineering,
Matrox