Video: RIST and Open Broadcast Systems

RIST (Reliable Internet Stream Transport) is a streaming protocol that allows lossy networks such as the internet to be used for critical streaming applications. It uses ARQ (Automatic Repeat reQuest) retransmission to request any data lost by the network, creating reliable paths for video contribution.
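The core ARQ idea is simple: the receiver watches packet sequence numbers, spots gaps, and asks for just the missing packets. The sketch below is illustrative only; RIST's actual wire format is RTP/RTCP-based, and the function name here is hypothetical.

```python
# Minimal sketch of ARQ-style loss detection, as used by RIST-like protocols.
# The plain-integer sequence numbers and list "NACK" are illustrative, not
# the real RTP/RTCP wire format.

def find_missing(received_seqs, expected_start, expected_end):
    """Return the sequence numbers in [expected_start, expected_end] that
    never arrived; a receiver would report these in a NACK so the sender
    can retransmit only the lost packets."""
    seen = set(received_seqs)
    return [s for s in range(expected_start, expected_end + 1) if s not in seen]

# Example: packets 3 and 7 were dropped by the network.
print(find_missing([1, 2, 4, 5, 6, 8], 1, 8))  # -> [3, 7]
```

Because only lost packets are resent, ARQ's bandwidth overhead scales with actual loss, unlike FEC's fixed redundancy.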

In this presentation, Kieran Kunhya from Open Broadcast Systems explains why his company has chosen the RIST protocol for its software-based encoders and decoders. Their initial solution for contributing news, sports and linear channels over the public internet was based on FEC (Forward Error Correction), a technique for controlling transmission errors by sending redundant data generated with an error-correcting code. However, FEC couldn’t cope with large burst losses, interoperability was limited and the implementation was complex. Protecting the stream by sending the same feed over multiple paths, and/or a delayed copy of the stream on the same path, carried a heavy bandwidth penalty. This prompted them instead to implement an ARQ technique based on RFC 4585 (Extended RTP Profile for Real-time Transport Control Protocol-Based Feedback), which gave them functionality quite similar to basic RIST.
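The burst-loss weakness of simple FEC is easy to demonstrate with a toy XOR-parity scheme (this is a generic illustration, not the specific FEC Open Broadcast Systems used): one parity packet per group can rebuild exactly one missing packet, so a burst that removes two packets from the same group is unrecoverable.

```python
# Toy single-parity FEC, illustrating why simple FEC fails on burst loss.
from functools import reduce

def make_parity(packets):
    """XOR all packets (equal-length byte strings) into one parity packet."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets))

def recover_one(received, parity):
    """Rebuild the single missing packet by XORing survivors with parity."""
    return make_parity(received + [parity])

group = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = make_parity(group)

# One loss in the group: fully recoverable.
assert recover_one([group[0], group[1], group[3]], parity) == b"CCCC"

# A burst that takes out two packets leaves one equation with two unknowns:
# nothing can be recovered. This is the failure mode that pushed the move
# to ARQ retransmission.
```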

Key to the discussion, Kieran explains why they decided not to adopt the SRT protocol. As SRT is based on a file-transfer protocol, it’s difficult or impossible to add features like bonding, multi-network and multi-point support, which have been available in RIST from day one. Moreover, RIST has a large IETF heritage from other industries and is vendor-independent. In Kieran’s opinion, SRT will become a prosumer solution (similar to RTMP today for streaming) and RIST the professional solution (analogous to MPEG-2 Transport Streams).

Different applications of the RIST protocol are discussed, including 24/7 linear channels for satellite uplink from playout, interactive (two-way) talking heads for news, high-bitrate live events and reverse vision lines for monitoring purposes. There is also great potential for using RIST in cloud solutions for live broadcast production workflows. Kieran hopes that more broadcasters will start using spin-up and spin-down cloud workflows, which help save space and money on infrastructure.

Interestingly, Open Broadcast Systems is not currently interested in the RIST Main Profile (whose main advantages are support for encryption, authentication and in-band data). Kieran explains that to control devices in remote locations you need some kind of off-the-shelf VPN anyway. These systems provide encryption and NAT traversal, so the problem is solved at a different layer of the OSI model, and this gives customers more control over the type of encryption they want.

Watch now!

Speaker

Kieran Kunhya
Founder and CEO,
Open Broadcast Systems

Video: RIST in the Cloud

Cloud workflows are starting to become an integral part of broadcasters’ live production. However, video quality is often not sufficient for high-end broadcast applications when cloud infrastructure providers such as Google, Oracle or AWS are accessed over the public Internet or leased lines.

A number of protocols based on ARQ (Automatic Repeat reQuest) retransmission technology have been created (including SRT, Zixi, VideoFlow and RIST) to solve the challenge of moving professional media over the Internet, which is fraught with dropped packets and unwanted delays. Protocols such as SRT and RIST enable broadcast-grade video delivery at a much lower cost than fibre or satellite links.

The RIST (Reliable Internet Stream Transport) protocol has been created as an open alternative to commercial options such as Zixi. It is a merging of technologies from around the industry, built upon current standards in IETF RFCs, providing an open, interoperable and technically robust solution for low-latency live video over unmanaged networks.

In this presentation, David Griggs from Amazon Web Services (AWS) talks about how the RIST protocol, combined with cloud technology, is transforming broadcast content distribution. He explains that delivery of live content is essential for broadcasters, who look for ways to deliver it without expensive private fibre or satellite links. With unmanaged networks you can get content from one side of the world to the other with very little investment in time and infrastructure, but this is only possible with ARQ-based protocols like RIST.

Next, David discusses the major advantages of cloud technology: it is dynamic and flexible. Historically, dimensioning the entire production environment for peak utilisation was financially challenging. Now it is possible to dimension it for average use while leveraging cloud resources for peak demand, providing a more elastic cost model. Moreover, the cloud is a good place to innovate and experiment because the barrier to entry in terms of cost is low. It encourages both customers and vendors to experiment, innovate and ultimately build better, more compelling solutions.

David believes that open and interoperable QoS protocols like RIST will be instrumental in building complex distribution networks in the cloud. He hopes that AWS, by working together with Net Insight, Zixi and Cobalt Digital, can start to build innovative and interoperable cloud solutions for live sports.

Watch now!

Speaker

David Griggs
Senior Product Manager, Media Services
AWS Elemental

Video: Real-Time Remote Production For The FIFA Women’s World Cup

We hear about so many new and improved cloud products and solutions to improve production that, once in a while, you really just need to step back and hear how people have put them together. This session is just that: a look at the whole post-production workflow for FOX Sports’ production of the Women’s World Cup.

This panel from the Live Streaming Summit at Streaming Media West is led by FOX Sports’ Director of Post Production, Brandon Potter, as he talks through the event with three of his key vendors: IBM Aspera, Telestream and Levels Beyond.

Brandon starts by explaining that this production stood on the back of the work they did at the Men’s World Cup in Russia, both having SDI delivery of media in PAL at the IBC. For this event, all the edit crew was in LA, which created problems with some fixed frame-rate products still in use in the US facility.

Data transfer, naturally, is the underpinning of any event like this, with a total of a petabyte of data being created. Network connectivity for international events is always tricky: with so many miles of cable, whether on land or under the sea, there is a very high chance of the fibre being cut. At the very least, the data can be switched to a different path, and in that moment there will be data loss. All of this means you can’t assume the scale of an outage; it could be seconds, minutes or hours. On top of creating, and affording, redundant data circuits, the time needed to transfer all the data has to be considered and managed.
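A back-of-envelope calculation shows why transfer time has to be planned for a petabyte-scale event. The link speeds and the 80% line-rate efficiency below are illustrative assumptions, not figures from the talk:

```python
# Rough transfer-time planning for ~1 PB of event media.
# Bandwidth and efficiency figures are assumptions for illustration.

def transfer_hours(size_bytes, link_bps, efficiency=0.8):
    """Hours to move size_bytes over a link of link_bps bits/second,
    assuming the protocol achieves the given fraction of line rate."""
    return size_bytes * 8 / (link_bps * efficiency) / 3600

petabyte = 10**15
for gbps in (1, 10):
    print(f"{gbps} Gbit/s: {transfer_hours(petabyte, gbps * 10**9):,.0f} h")
# 1 Gbit/s -> ~2,778 h; 10 Gbit/s -> ~278 h
```

Even at 10 Gbit/s the raw transfer takes well over a week of link time, which is why archiving in real time during the event, rather than in bulk afterwards, matters so much.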

Ensuring complete transfer of files in a timely fashion drove the production to auto-archive all content in real time into Amazon S3, avoiding post-match ingest times of multiple hours; “every bit of high-res content was uploaded,” stated Michael Flathers, CTO of IBM Aspera.

Dave Norman, from Telestream, explains how the live workflows stayed on-prem with the high-performance media and encoders and then, “as the match ended, we would then transition…into AWS”. In the cloud, the HLS proxies would then be rendered into single MP4 proxy files for editing.

Daniel Gonzales explains the benefits of the full API integrations they chose to build their multi-vendor solution around, rather than simple watch-folders. Having every platform know where the errors were was very valuable, and particularly useful for remote users, who could see in detail where their files were. This reduced the number of times they needed to ask someone for help, and meant that when they did, they had enough detail to specify exactly what the problem was.

The talk comes to a close with a broad analysis of the different ways files were moved and cached to optimise the workflow, using a mix of TCP-style workflows and Aspera’s UDP-based transfer technology. Worth noting, also, that HLS manifests needed to be carefully created to reference only chunks that had been transferred, rather than simply any that had been created. Live creation of clips from growing files was also an important tool: the in- and out-points were chosen by viewing a low-latency proxy stream, then the final file was clipped from the growing file in France and delivered to LA within minutes.
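The manifest-gating idea can be sketched as follows: generate an HLS media playlist that stops at the first segment whose transfer has not completed, so a player never requests a chunk that isn't there yet. Segment names, durations and the transferred-set source are hypothetical, and a real playlist carries more tags than shown:

```python
# Sketch: emit an HLS media playlist listing only fully transferred segments.
# Names and the "transferred" set are illustrative assumptions.

def build_playlist(all_segments, transferred, target_duration=6):
    """all_segments: ordered list of (name, duration_seconds) tuples.
    transferred: set of segment names whose transfer has completed."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{target_duration}"]
    for name, duration in all_segments:
        if name not in transferred:
            break  # stop at the first gap so playback never hits a missing file
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(name)
    return "\n".join(lines)

segments = [("seg0.ts", 6.0), ("seg1.ts", 6.0), ("seg2.ts", 6.0)]
done = {"seg0.ts", "seg1.ts"}          # seg2.ts still in flight
print(build_playlist(segments, done))  # lists only seg0.ts and seg1.ts
```

Regenerating the playlist as transfers complete gives players a view that always trails the transfer frontier, trading a little latency for guaranteed playability.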

Overall, this case study gives a good feel for the problems and good practices which go hand in hand with multi-day events with international connectivity and shows that large-scale productions can successfully, and quickly, provide full access to all media to their production teams to maximise the material available for creative uses.

Watch now!
Speakers

Mike Flathers
CTO,
IBM Aspera
Brandon Potter
Director of Post Production,
FOX Sports
Dave Norman
Principal Sales Engineer,
Telestream
Daniel Gonzales
Senior Solutions Architect,
Levels Beyond

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this are far too rare, giving an introduction to a large number of topics. For those starting out, or those who need to revise a topic, this really hits the mark, particularly as many of the topics are new.

John Mailhot takes the lead on SMPTE 2110 explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite. He talks in more detail about the audio and metadata parts of the standard suite.

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use such as 709, 2100 and P3 before turning to file formats. With the advent of HDR video and displays which can show bright video, Eric takes some time to explain why this could represent a problem for visual health as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and add more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces video codecs AV1 and VVC both, in their own way, successors to HEVC/h.265 as well as the two transport protocols SRT and RIST which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers and the interesting problem of forward speakers in cinemas. They have long been behind the screen, which has meant the screens have to be perforated to let the sound through, interfering with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move. But with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group