Video: How to Deploy an IP-Based Infrastructure

An industry-wide move to any new technology takes time, so there is a steady flow of people new to it. This video is a launchpad for anyone just coming to IP infrastructures, whether because their company is starting or completing an IP project, or because people around them are starting to ask, “Should we go IP too?”

Key Code Media’s Steve Dupaix starts with an overview of how SMPTE’s ST 2110 suite of standards differs from other IP-based video and audio technologies such as NDI, SRT, RIST and Dante. The key takeaways are that NDI provides compressed video with a low delay of around 100ms, along with a suite of free tools to help you get started. SRT and RIST are similar technologies that are usually used to get AVC or HEVC video from A to B despite packet loss, something that NDI and ST 2110 don’t protect against without FEC. This is because SRT and RIST are aimed at moving data over lossy networks like the internet. Find out more about SRT in this SMPTE video. For more on NDI, this video from SMPTE and VizRT gives the detail.

ST 2110’s purpose is to carry high-quality, usually lossless, video and audio around a local area network. It was originally envisaged as a way of displacing baseband SDI and was specified to work flawlessly in live production environments such as studios. It brings with it some advantages, such as separating the essences: video, audio, timing and ancillary data travel as separate streams. It also brings the promise of higher density for routing operations, lower-cost infrastructure since the routers and switches are standard IT products, and increased flexibility due to the much-reduced need to move and add cables.
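
To make the separation of essences concrete, here is a minimal Python sketch of a receiver subscribing to each essence as its own multicast stream. The group addresses and port are hypothetical examples for illustration, not values taken from the standard.

```python
# Minimal sketch: each ST 2110 essence is an independent RTP multicast
# stream, so a receiver joins only the groups it needs. The addresses
# and port below are hypothetical examples.
import socket
import struct

ESSENCES = {
    "video (ST 2110-20)": ("239.1.1.1", 5004),
    "audio (ST 2110-30)": ("239.1.1.2", 5004),
    "anc (ST 2110-40)":   ("239.1.1.3", 5004),
}

def join_essence(group: str, port: int) -> socket.socket:
    """Open a socket and join one multicast group carrying one essence."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    membership = struct.pack("4s4s", socket.inet_aton(group),
                             socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    return sock

# An audio-only device subscribes to the audio group alone; nothing
# forces it to receive the video bandwidth, as an SDI feed would.
sockets = {name: join_essence(group, port)
           for name, (group, port) in ESSENCES.items()}
```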

Robert Erickson from Grass Valley explains that they have worked hard to move all of their product lines to ‘native IP’ as they believe all workflows will move to IP, whether on-premise or in the cloud. The next step, as he sees it, is enabling more workflows that move video in and out of the cloud, and for that they need to move to JPEG XS, which can be carried in ST 2110-22. Thomas Edwards from AWS adds their perspective, agreeing that customers are increasingly using JPEG XS for this purpose, but within the cloud they expect the new CDI, a specification for moving high-bandwidth traffic, such as ST 2110-20-style streams of uncompressed video, from point to point within the cloud.

John Mailhot from Imagine Communications is also the chair of the VSF activity group for ground-cloud-cloud-ground workflows. This aims to harmonise the ways in which vendors provide movement of media, whatever the bandwidth, into and out of the cloud as well as from point to point within it. From the Imagine side, he says that ST 2110 is now embedded in all products, but the key is to choose the most appropriate transport: within AWS that is often CDI, and he agrees that JPEG XS is the most appropriate for cloud-to-ground operations.

The panel takes a moment to look at the way the pandemic has impacted the use of video over IP. As we heard earlier this year, the New York Times had been holding off on their move to IP, and the pandemic forced them to look at the market earlier than planned. When they looked, they found the products they needed and moved to a full IP workflow. This has been the theme and, if anything, has driven, and will continue to drive, innovation. The immediate need provided the motivation to consider new workflows, and now that the workflow is IP, it’s quicker, cheaper and easier to test new variations. Thomas Edwards points out that many of the current workflows are still heavily reliant on AVC or HEVC despite the desire to use JPEG XS for the broadcast content. For people at home, JPEG XS bandwidths aren’t practical, but RIST with AVC works fine for most applications.

Interoperability between vendors has long been the industry’s focus for ST 2110 and, in John’s opinion, is now pretty reliable for inter-vendor essence exchanges. Recently the focus has been on doing the same with NMOS, which both he and Robert report is working well in the recent multi-vendor projects they have been involved in. John’s interest is in working out ways for the cloud and the ground to find out about each other, a use case not yet covered by AMWA’s NMOS IS-04.

The video ends with a Q&A covering the following:

  • Where to start in your transition to IP
  • What to look for in an ST 2110-capable switch
  • Multi-Level routing support
  • Using multicast in AWS
  • Whether IT equipment lifecycles conflict with Broadcast refresh cycles
Watch now!
Speakers

John Mailhot
CTO & Director of Product Management, Infrastructure & Networking,
Imagine Communications
Ciro Noronha
Executive Vice-President of Engineering,
Cobalt Digital
Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Robert Erickson
Strategic Account Manager Sports and Venues,
Grass Valley
Steve Dupaix
Senior Account Executive,
Key Code Media

    Video: Public Internet Transport of Live Broadcast Video – SRT, NDI and RIST for Compressed Video

    Getting video over the internet and around the cloud has well-established solutions, but not only are they continuing to evolve, they are still new to some. This video looks at workflows made possible by teaming up SRT, RIST and NDI, giving a glimpse into projects that went live in 2020. We also get a deeper look at RIST’s features with a Q&A.

    This video from SMPTE’s New York section starts with Bryan Nelson from Alpha Video, who’s been involved in many cloud-based NDI projects, many of which also use SRT to get in and out of the cloud. NDI’s a lightly compressed, low-delay codec suitable for production that works well on 1GbE networks. Not dependent on multicast, it’s a technology that lends itself to cloud-based production, where it’s found many uses. Bryan looks at a number of workflows that are also enabled by the Sienna production system, which can use many video formats including NDI.

    For more information on SRT and RIST, have a look at this SMPTE video outlining how they work and how they differ. For a deeper dive into NDI, this SMPTE webinar with VizRT explains how it works and also gives demos of the same software that Bryan uses. To get a feel for how NDI fits in with live production compared to SMPTE’s uncompressed ST 2110, this IBC panel discussion, ‘Where can SMPTE ST 2110 and NDI Co-exist?’, explores the topic further.

    Bryan’s first example is the 2020 NFL Draft, which used remote contribution from iPhones streaming over SRT. All streams were aggregated in AWS, converted to NDI, fed into NDI multiviewers and routed. These were passed down to on-prem NDI processors, running on HP ProLiant servers, to be output as SDI for handoff to other broadcast workflows. The router could be controlled by soft panels as well as hardware panels on-prem. Bryan explores an extension to this idea where multiple cloud domains can be used, with NDI being the handoff between them. In one cloud system, VizRT vision mixing and graphics can be added, with multiviewers and other outputs being sent via SRT to remote directors, producers and the like. Another cloud system could be controlled by a third party running other processing before the feed is sent to site and decoded to SDI on-prem. This can be totally separate from acquisition, with SDI & NDI cameras located elsewhere. SRT & NDI become the mediators of this decentralised production environment.

    Bryan finishes off by talking about remote NLE monitoring and various types of MCR monitoring. NLE editing is made easy through NDI integration within Adobe Premiere and Avid Media Composer. It’s possible to bring all of these feeds into a processing engine and move them over the public internet for viewing elsewhere, whether on an Apple TV or otherwise.

    Ciro Noronha from Cobalt Digital takes the last half of the video to talk about RIST. In addition to the talks mentioned above, Ciro recently gave a talk exploring the many RIST use cases. A good written overview of RIST can be found here.

    Ciro looks at the two published profiles that make up RIST so far: the Simple Profile and the Main Profile. The Simple Profile defines RTP interoperability with error correction, using re-requested packets, with the option of bonding links. Ciro covers its use of RTCP for maintaining the channel and handling the negative acknowledgements (NACKs), which are based on RFC 4585. RIST can bond multiple links or use ST 2022-7 seamless switching.
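
    As an illustration of the NACK mechanism, here is a sketch of the RFC 4585 Generic NACK packet a Simple Profile receiver can send; the SSRC and sequence numbers are made up for the example.

```python
# Sketch of an RFC 4585 Generic NACK: an RTCP transport-layer feedback
# packet (PT=205, FMT=1). Each FCI entry names one lost packet (PID)
# plus a 16-bit mask (BLP) flagging further losses among the next 16.
import struct

def generic_nack(sender_ssrc: int, media_ssrc: int,
                 pid: int, blp: int = 0) -> bytes:
    header = struct.pack(
        "!BBH",
        (2 << 6) | 1,  # version 2, no padding, FMT 1 = Generic NACK
        205,           # PT 205 = RTPFB (transport-layer feedback)
        3,             # length: (16 bytes / 4) - 1 = 3 32-bit words
    )
    return header + struct.pack("!IIHH", sender_ssrc, media_ssrc, pid, blp)

# Illustrative values: request retransmission of sequence number 4711,
# and of 4713 (bit 1 of the BLP covers pid + 2).
packet = generic_nack(0x12345678, 0x9ABCDEF0, 4711, 0b10)
```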

    The Main Profile builds on the Simple Profile by adding encryption, authentication and tunnelling. Tunnels allow multiple flows down one connection, which simplifies firewall configuration and encryption, and allows either end to initiate the bi-directional link. The tunnel can also carry non-RIST traffic for any other purpose. The tunnels are GRE over UDP (RFC 8086). DTLS is used for encryption, which is almost identical to the TLS used to secure websites. DTLS uses certificates, meaning you get to authenticate the other end, not just encrypt the data. Alternatively, you can use a pre-shared passphrase, which avoids the need for certificates when that level of authentication isn’t needed or for one-to-many distribution. Ciro concludes by showing that RIST can work with up to 50% packet loss and answers many questions in the Q&A.
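
    For a feel of the framing, this sketch (my illustration, not code from the talk) shows the RFC 8086 encapsulation: each inner packet gets a basic 4-byte GRE header and rides inside the one UDP flow, which is why a single open port suffices.

```python
# Sketch of RFC 8086 GRE-in-UDP framing as used by the Main Profile
# tunnel: a basic 4-byte GRE header in front of the inner packet, all
# carried in one UDP flow. 4754 is the GRE-in-UDP port from RFC 8086.
import socket
import struct

GRE_IN_UDP_PORT = 4754

def gre_encapsulate(inner: bytes, protocol_type: int) -> bytes:
    # Flags/version word of zero: no checksum, key or sequence number.
    # protocol_type identifies the inner payload, e.g. 0x0800 for IPv4.
    return struct.pack("!HH", 0x0000, protocol_type) + inner

# Illustrative use (192.0.2.1 is a documentation address): wrap an
# already-built IPv4 datagram and send it down the tunnel. Media,
# control and any other traffic share this one UDP flow, so only one
# firewall pinhole is needed.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(gre_encapsulate(b"...inner IPv4 datagram...", 0x0800),
            ("192.0.2.1", GRE_IN_UDP_PORT))
```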

    Watch now!
    Speakers

    Bryan Nelson
    Sales Account Executive,
    Alpha Video
    Ciro Noronha
    President, RIST Forum
    Executive Vice President of Engineering, Cobalt Digital

    Video: Creating Interoperable Hybrid Workflows with RIST

    TV isn’t made in one place anymore. Throughout media and entertainment, workflows increasingly involve many third parties and the cloud. Content may be king, but getting it from place to place is foundational to our ability to do great work. RIST is a protocol that can move video very reliably and flexibly between buildings and into, out of and through the cloud. Leveraging that flexibility, there are many ways to use it. This video reviews where RIST is up to in its development and the many ways in which it can be used to solve your workflow problems.

    Starting the RIST overview is Ciro Noronha, chair of the RIST Forum. Whilst we have delved into the detail here before, in talks like this one from SMPTE and this talk, also from Ciro, this is a good refresher on the main points: RIST is published in three parts, known as profiles. First was the Simple Profile, which defined the basics, namely that it’s based on RTP and uses an ARQ technology to dynamically request any missing packets in a timely way that doesn’t trip the stream up if there are problems. The Main Profile was published second and includes encryption and authentication. Lastly comes the Advanced Profile, which will be released later this year.

    Ciro outlines the importance of the Simple Profile: it guarantees compatibility with RTP-only decoders, albeit without error correction. When you can use the error correction, you’ll benefit from recovery even when 50% of the traffic is being lost, unlike similar protocols such as SRT. Another useful feature for many is multi-link support, allowing you to use RIST over bonded LTE modems as well as with SMPTE ST 2022-7.
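
    A quick back-of-envelope model (my simplifying assumption of independent losses, not a figure from the talk) shows why 50% loss is workable given enough headroom:

```python
# If every transmission, including retransmissions, is lost
# independently with probability p, a delivered packet costs an
# average of 1 / (1 - p) sends. An illustrative model only.
def sends_per_delivered_packet(p: float) -> float:
    return 1.0 / (1.0 - p)

for p in (0.01, 0.10, 0.25, 0.50):
    print(f"{p:.0%} loss -> {sends_per_delivered_packet(p):.2f}x bitrate")
# 50% loss needs roughly 2x the nominal bitrate, plus enough receiver
# buffer to ride out the extra retransmission round trips.
```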

    The Main Profile brings with it support for tunnelling, meaning you can set up one connection between two locations and put multiple streams of data through it. This is great for simplifying connectivity because only one port needs to be opened in order to deliver many streams, and it doesn’t matter in which direction you establish the tunnel: once established, it is bi-directional. The tunnel can also carry general data such as control traffic or miscellaneous IT.

    Encryption made its debut with the publication of the Main Profile. RIST can use DTLS, a version of the familiar TLS security used on websites that runs over UDP rather than TCP. The big advantage of using it is that it brings authentication as well as encryption, ensuring that the endpoint is allowed to receive your stream, and it is built on strong encryption that has been tested and hardened over the years. Certificate distribution can be difficult and disproportionate to the needs of the workflow, so RIST also allows encryption using pre-shared keys.

    Handing over now to David Griggs and Tim Baldwin, we hear about the use cases enabled by RIST, which is already found in encoders, decoders and gateways on the market. One use case on the rise is satellite replacement. There are many companies that have been using satellite for years and for whom the lack of operational agility hasn’t been a problem. In fact, they’ve been able to make a business model work even for occasional use, though, in a pure sense, satellite isn’t perfectly suited to occasional-use work. However, with access to C-band closing in many parts of the world, companies have been forced to look elsewhere for their links, and RIST is one solution that works well.

    David runs through a number of others, including primary and secondary distribution, link aggregation, premium sports syndication with the handoff between the host broadcaster and the multiple rights-holding broadcasters being in the cloud, and a workflow for OTT where RIST is used for ingest.

    RIST is available as an open-source library called libRIST, which can be downloaded from VideoLAN and is documented in the open specifications TR-06-1 and TR-06-2. libRIST can be found in GStreamer, Upipe, VLC, Wireshark and FFmpeg.

    The video finishes with questions about how RIST compares with SRT, RTMP, CMAF and WebRTC.

    Watch now!
    Speakers

    Tim Baldwin
    Head of Product,
    Zixi
    David Griggs
    Senior Product Manager, Distribution Platforms
    Disney Streaming Services
    Ciro Noronha
    President, RIST Forum
    Executive Vice President of Engineering, Cobalt Digital

    Video: Bit-Rate Evaluation of Compressed HDR using SL-HDR1

    HDR video can look vastly better than standard dynamic range (SDR), but much of our broadcast infrastructure is made for SDR delivery. SL-HDR1 allows you to deliver HDR over SDR transmission chains by breaking HDR signals down into an SDR video plus enhancement metadata which describes how to reconstruct the original HDR signal. Now that it is part of the ATSC 3.0 suite of standards, people are asking whether you get better compression using SL-HDR1 or by compressing HDR directly.

    HDR works by changing the interpretation of the video samples. As human sight has a non-linear response to luminance, the same 256 or 1024 possible sample values can be mapped to brightness so that only a few values are used where the eye isn’t very sensitive, leaving plenty of detail where we see well. Humans perceive more detail at lower luminosity, so HDR devotes many more of its values to describing that region and relatively few to high brightness, where specular highlights tend to be. HDR therefore has the benefit of not only increasing the dynamic range but actually providing more detail in the low-light areas than SDR.
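
    To make that mapping concrete, here is the PQ (perceptual quantizer) transfer function from SMPTE ST 2084, one of the HDR curves in common use; the constants are those published in the standard.

```python
# SMPTE ST 2084 (PQ) EOTF: normalised code value n in [0, 1] mapped to
# absolute luminance in cd/m^2 (nits), up to 10,000 nits at n = 1.
m1 = 2610 / 16384        # 0.1593017578125
m2 = 2523 / 4096 * 128   # 78.84375
c1 = 3424 / 4096         # 0.8359375
c2 = 2413 / 4096 * 32    # 18.8515625
c3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(n: float) -> float:
    """Luminance in nits for a normalised PQ code value n."""
    x = n ** (1 / m2)
    return 10000 * (max(x - c1, 0.0) / (c2 - c3 * x)) ** (1 / m1)

# Half of the code range only reaches about 92 nits: the bulk of the
# values describe the darker region where the eye sees the most detail.
print(round(pq_eotf(0.5)))  # ~92
```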

    Ciro Noronha from Cobalt has been examining the question of encoding. Video encoders are agnostic to dynamic range: since HDR and SDR only define the meaning of the luminance values, the video encoder sees no difference. Yet there have been a number of papers saying that sending SL-HDR1 can result in bitrate savings over sending HDR directly. SL-HDR1 is defined in ETSI TS 103 433-1 and included in ATSC A/341. The metadata is carried using SMPTE ST 2108-1 or within the video stream using SEI messages. Ciro set out to run some tests to see if this was the case, with technology consultant Matt Goldman giving his perspective on HDR and the findings.

    Ciro tested three types of 1080p BT.2020 10-bit content with the AVC and HEVC encoders set to 4:2:0, 10-bit, with a 100-frame GOP. Quality was rated using PSNR as well as two special variants of PSNR which look at distortion/deviation in the CIE colour space. The findings show that AVC encode chains benefit more from SL-HDR1 than HEVC and that the benefit is clearly content-dependent. Work remains to be done to connect these results with verified subjective tests: with LCEVC and VVC, MPEG has seen that subjective assessments can show up to 10% better results than objective metrics, and PSNR is not well known for correlating well with visual improvements.
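
    For reference, the baseline metric here is plain PSNR; a minimal version for 10-bit frames is sketched below (the CIE-based variants used in the tests weight the error differently but share this shape).

```python
# Plain PSNR between two 10-bit frames, the baseline metric mentioned
# above; peak is 1023 for 10-bit samples.
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray,
         peak: float = 1023.0) -> float:
    """PSNR in decibels; higher means the frames are closer."""
    mse = np.mean((reference.astype(np.float64) -
                   test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak * peak / mse)
```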

    Watch now!
    Speakers

    Ciro Noronha
    Executive Vice President of Engineering, Cobalt Digital
    President, RIST Forum
    Matthew Goldman
    Technology Consultant