Video: RIST: What is the Future?

Many see RIST as the new kid on the block, but the group has worked quickly since its formation three years ago, producing two specifications and now working on a third. RIST makes sending video over the internet reliable as it corrects for missing data. The protocol, which aims at multi-vendor interoperability, continues to gather interest, with the RIST Forum now counting over 80 companies.

“What does RIST do today?” and “what’s next?” are the two questions Rick Ackermans, Chair of the RIST Activity Group at the VSF, is here to answer. Firstly, then, Rick looks at the documents already published, TR-06-1 and TR-06-2. TR-06-1, also known as the Simple Profile, has already received an update to allow for continuous measurement of the round-trip time (RTT) of the link. Rick makes it clear that these are living specifications and the VSF won’t shy away from updating them when doing so keeps the protocol relevant and responsive to the industry. TR-06-2 is the Main Profile, which was released last year.

The Simple and Main Profiles are summarised in this article and by Rick in the video. The Simple Profile provides a sender or receiver which can speak plain RTP but can also run with high-performance packet recovery and seamless switching.
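The packet recovery at the heart of the Simple Profile is NACK-based: the receiver watches RTP sequence numbers and asks the sender to retransmit anything that never arrived. A minimal sketch of that gap detection (the function name and simplifications, including ignoring 16-bit sequence wrap-around, are ours, not TR-06-1’s):

```python
# Sketch of the receiver-side loss detection that underpins RIST's
# NACK-based recovery: track RTP sequence numbers and identify gaps
# to re-request. Ignores 16-bit wrap-around for simplicity.

def find_missing(received_seqs):
    """Return the sequence numbers missing between the lowest and
    highest sequence numbers seen so far."""
    if not received_seqs:
        return []
    have = set(received_seqs)
    return [s for s in range(min(have), max(have) + 1) if s not in have]

# A receiver that saw packets 100, 101, 104 and 105 would re-request 102 and 103:
print(find_missing([100, 101, 104, 105]))  # [102, 103]
```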

Main Profile brings in encryption and a powerful tool, GRE. As we wrote about last week, the idea of a tunnel is to hide complexity from the network infrastructure. Tunnelling allows for bidirectional data flow under one connection which is transparent to the network carrying the tunnel and to the endpoints. This enables a lot of flexibility. Not only does it allow the connection to be set up in either direction, to suit whichever end is easiest for firewall reasons, but it also allows generic data to be sent, meaning you could send PTZ camera control data along with the video and audio.
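For a sense of how lightweight the tunnel framing is, the base GRE header defined in RFC 2784 is just four bytes. The sketch below builds only that base header; it is illustrative and omits the rest of the RIST Main Profile framing:

```python
import struct

# The 4-byte base GRE header from RFC 2784: 16 bits of flags/version
# (C bit, reserved bits, 3-bit version = 0) followed by a 16-bit
# protocol type (an EtherType). Layout only, not full RIST framing.

def gre_header(protocol_type, checksum_present=False):
    flags_version = 0x8000 if checksum_present else 0x0000
    return struct.pack("!HH", flags_version, protocol_type)

# 0x0800 is the EtherType for an IPv4 payload
print(gre_header(0x0800).hex())  # 00000800
```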

But the highlight of this presentation is looking to the future and hearing about the Advanced Profile, which is still in progress. Planned, though not promised, are features such as auto-configuration, where a receiver works out many of the parameters of the link itself, and dynamic reconfiguration, where the sender and receiver respond to changing conditions of the link/network. Also in the works is a hybrid mode of operation for satellites, allowing an internet connection to be used in addition to the satellite feed to send and fulfil retransmission requests.

Watch now!
Speakers

Rick Ackermans
RIST Activity Group Chair
Director of RF & Transmissions Engineering, CBS
Wes Simpson
Co-Chair, RIST Activity Group,
Owner, LearnIPVideo.com

Video: The Case To Caption Everything

To paraphrase a cliché, “you are free to put black and silence to air, but if you do it without captions, you’ll go to prison.” Captions are useful to the deaf and hard of hearing, as well as those who aren’t. And in many places, failing to caption videos is seen as so discriminatory that there are mandatory quotas. The saying at the beginning alludes to the US federal and local laws which lay down fines for non-compliance, though whether it’s truly possible to go to prison is not clear.

The case for captioning:
“13.3 Million Americans watch British drama”

In many parts of the world ‘subtitles’ means the same as ‘captions’ does in countries such as the US. In this article, I shall use the word captions to match the terms used in the video. As Bill Bennett from ENCO Systems explains, Closed Captions are sent as data along with the video, meaning you can ask your receiver to turn the display of the text on or off.
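In North American digital broadcasts, that data travels as CEA-608 byte pairs (wrapped in CEA-708). Each CEA-608 byte protects itself with odd parity in its top bit, which the decoder checks and strips before interpreting the character. A minimal sketch, with an illustrative function name:

```python
# Each CEA-608 caption byte uses odd parity in bit 7: the decoder
# accepts the byte only if it has an odd number of set bits, then
# strips the parity bit to recover the 7-bit character.

def decode_608_byte(b):
    """Return the 7-bit value if the byte has valid odd parity, else None."""
    if bin(b).count("1") % 2 == 1:
        return b & 0x7F
    return None

# 'A' (0x41) has two set bits, so it's transmitted with bit 7 set: 0xC1
print(decode_608_byte(0xC1))  # 65, i.e. 'A'
print(decode_608_byte(0x41))  # None: even parity means corruption
```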

In this talk from the Midwest Broadcast Multimedia Technology Conference, we hear not only why you should caption, but also get introduced to the techniques for both creating and transmitting them. Bill starts by introducing us to stenography, the technique of typing on special machines to produce real-time transcripts. This helps explain how resource-intensive creating captions is when using humans. It’s a highly specialist skill which, alone, makes it difficult for broadcasters to deliver captions en masse.

The alternative, naturally, is to have computers do the task. Whilst they are cheaper, they have problems understanding audio over noise and with multiple people speaking at once. The compromise which is often used, for instance by BBC Sport, is to have someone re-speaking the audio into the computer. This harnesses the best aspects of the human brain with the speed of computing. The re-speaker can enunciate and emphasise to get around idiosyncrasies in recognition.

Bill revisits the numerous motivations to caption content. He talks about the legal reasons, particularly within the US, but also mentions the usefulness of captions in situations where you don’t want audio from TVs, such as receptions and shop windows, as well as in noisy environments. But he also makes the point that once a broadcaster has this data, it can take the opportunity to use it for search, sentiment analysis and archive retrieval, among other things.

Watch now!
Download the presentation
Speaker

Bill Bennett
Media Solutions Account Manager
ENCO Systems

Video: Case Study on a Large Scale Distributed ST 2110 Deployment

We’re “past the early-adopter stage” of SMPTE ST 2110, notes Andy Rayner from Nevion as he introduces this case study of a multi-national broadcaster which has created a 2110-based live production network spanning ten countries.

This isn’t the first IP project that Nevion have worked on, but it’s doubtless the biggest to date. And it’s in the context of these projects that Andy says he’s seen the maturing of the IP market in terms of how broadcasters want to use it and, to an extent, the solutions on the market.

Fully engaging with the benefits of IP drives the demand for scale, as people are freer to define a workflow that works best for the business without the constraint of staying within one facility. Part of the point of this whole project is to centralise all the equipment in two shared facilities with everyone working remotely. This isn’t remote production of an individual show; this is remote production of whole buildings.

SMPTE ST 2110, famously, sends all essences separately, so where a 1024×1024 SDI router might have carried 70% of the media between two locations, we’re now seeing tens of thousands of streams. In fact, the project as a whole is managing in the order of 100,000 connections.
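Some illustrative arithmetic (our numbers, not figures from the talk) shows how quickly separated essences multiply:

```python
# Hypothetical figures: each source that used to be one SDI feed
# becomes a video stream, 16 mono audio streams and an ancillary
# data stream, and ST 2022-7 protection then duplicates every
# stream across two diverse paths.

signals = 1024                   # sources, matching the router size above
streams_per_signal = 1 + 16 + 1  # video + audio channels + ancillary
protection_copies = 2            # ST 2022-7 dual-path duplication

total_streams = signals * streams_per_signal * protection_copies
print(total_streams)  # 36864: tens of thousands, from ~1,000 sources
```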

With so many connections, many of which are linked, manual management isn’t practical. The only sensible way to manage them is through an abstraction layer. For instance, if you abstract the IP connections from the control, you can still give an engineer or operator a panel with a button that says ‘Playout Server O/P 3’ and let them route it to one that says ‘Prod Mon 2’. Behind the scenes, that single route may require 18 connections across 5 separate switches.
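A toy sketch of that abstraction: the operator routes by logical names while the control layer expands the route into the per-stream connections it must actually make (all names and the route plan are invented for illustration):

```python
# Hypothetical route plan: one logical source expands into the
# individual essence streams that must each be routed to follow it.

ROUTE_PLAN = {
    "Playout Server O/P 3": ["video_3", "audio_3a", "audio_3b", "anc_3"],
}

def route(source, destination):
    """Expand a logical route into per-stream connection requests."""
    return [f"{stream} -> {destination}" for stream in ROUTE_PLAN[source]]

for connection in route("Playout Server O/P 3", "Prod Mon 2"):
    print(connection)
```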

This orchestration is possible using SDN (Software Defined Networking), where routing decisions are taken away from the individual routers and switches. The problem is that if a switch has to decide how to send some traffic, all it can do is look at its small part of the network and do its best. SDN allows you to have a controller, or orchestrator, which understands the network as a whole and can make much more efficient decisions. For instance, it can make absolutely sure that ST 2022-7 traffic is routed over diverse paths. It can also do bandwidth calculations to stop links from being oversubscribed.
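One such whole-network decision, sketched below with made-up link names and capacities, is path-wide bandwidth admission: admit a new flow only if every link along its path has headroom, something no single switch can see on its own:

```python
# Made-up topology: capacity and current usage per link, in Mbps.
link_capacity_mbps = {"A-B": 100_000, "B-C": 40_000}
link_used_mbps = {"A-B": 95_000, "B-C": 10_000}

def admit_flow(path, flow_mbps):
    """Admit a flow only if every link on its path has headroom,
    then reserve the bandwidth on each link."""
    if any(link_used_mbps[l] + flow_mbps > link_capacity_mbps[l] for l in path):
        return False
    for l in path:
        link_used_mbps[l] += flow_mbps
    return True

print(admit_flow(["A-B", "B-C"], 10_000))  # False: A-B would be oversubscribed
print(admit_flow(["B-C"], 10_000))         # True: bandwidth reserved on B-C
```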

Whilst the network is, indeed, based on SMPTE ST 2110, one of the key enablers is JPEG XS for international links. JPEG XS provides a similar compression level to JPEG 2000 but with much less latency: the encode itself requires less than 1ms, unlike JPEG 2000’s 60ms. Whilst 60ms may seem small, when video needs to move 4 or even 10 times as part of a production workflow, it soon adds up to a latency that humans can’t work with. JPEG XS promises to allow such international production to feel responsive and natural. Making this possible was the extension of SMPTE ST 2110, for the first time, to allow carriage of compressed video in ST 2110-22.
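The cumulative effect is easy to check with back-of-envelope arithmetic using the ~60ms and <1ms per-hop figures quoted above:

```python
# Codec latency accumulates with every compress/decompress hop in a
# production chain; per-hop figures are those quoted for each codec.

def total_codec_latency_ms(hops, per_hop_ms):
    return hops * per_hop_ms

for hops in (4, 10):
    print(f"{hops} hops: JPEG 2000 ~{total_codec_latency_ms(hops, 60)} ms, "
          f"JPEG XS ~{total_codec_latency_ms(hops, 1)} ms")
# 10 hops of JPEG 2000 is already ~600 ms, far beyond what feels live,
# while JPEG XS stays around 10 ms.
```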

Andy finishes his overview of this uniquely large case study by talking about conversion between types of audio, operating SDN with IGMP multicast islands, and NMOS Control. In fact, it’s NMOS which is the answer to the final question: what is the biggest challenge in putting this type of project together? Clearly, in a project of this magnitude, there are challenges around every corner, but problems of quantity can be measured and managed. Andy points out that NMOS adoption among manufacturers still needs to be pushed higher, and he lays down a challenge to AMWA to develop NMOS further so that it’s extended to describe more aspects of the equipment; to date, there are not enough data points.

Watch now!
Speakers

Andy Rayner
Chief Technologist,
Nevion