Video: How to Deploy an IP-Based Infrastructure

An industry-wide move to any new technology takes time, so there is a steady flow of people new to it. This video is a launchpad for anyone just coming to IP infrastructures, whether because their company is starting or completing an IP project, or because people are starting to ask “Should we go IP too?”.

Key Code Media’s Steve Dupaix starts with an overview of how SMPTE’s suite of standards called ST 2110 differs from other IP-based video and audio technologies such as NDI, SRT, RIST and Dante. The key takeaways are that NDI provides compressed video with a low delay of around 100ms along with a suite of free tools to help you get started. SRT and RIST are similar technologies usually used to get AVC or HEVC video from A to B while working around packet loss, something that NDI and ST 2110 don’t protect against without FEC. This is because SRT and RIST are aimed at moving data over lossy networks like the internet. Find out more about SRT in this SMPTE video. For more on NDI, this video from SMPTE and VizRT gives the detail.

ST 2110’s purpose is to carry high-quality, usually lossless, video and audio around a local area network. It was originally envisaged as a way of displacing baseband SDI and was specified to work flawlessly in live production environments such as studios. It brings with it some advantages, such as separating the essences, i.e. video, audio, timing and ancillary data travel as separate streams. It also brings the promise of higher density for routing operations, lower-cost infrastructure, since the routers and switches are standard IT products, and increased flexibility thanks to the much-reduced need to move or add cables.
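
To make the essence separation concrete, here is a minimal Python sketch of a receiver subscribing to each essence as its own multicast stream. The group addresses and port are hypothetical; real deployments take these from SDP files and, typically, an NMOS control layer.

```python
import socket
import struct

def join_essence(group: str, port: int) -> socket.socket:
    """Join one multicast group; in ST 2110 each essence is its own RTP stream."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    membership = struct.pack("4s4s", socket.inet_aton(group),
                             socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    return sock

# Hypothetical addresses: video, audio and ancillary data arrive separately,
# so each can be routed, switched or processed independently.
video = join_essence("239.10.0.1", 5004)  # ST 2110-20 uncompressed video
audio = join_essence("239.20.0.1", 5004)  # ST 2110-30 PCM audio
anc   = join_essence("239.30.0.1", 5004)  # ST 2110-40 ancillary data
```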

Robert Erickson from Grass Valley explains that they have worked hard to move all of their product lines to ‘native IP’ as they believe all workflows will move to IP, whether on-premises or in the cloud. The next step, he sees, is enabling more workflows that move video in and out of the cloud, and for that they need to move to JPEG XS, which can be carried in ST 2110-22. Thomas Edwards from AWS adds their perspective, agreeing that customers are increasingly using JPEG XS for this purpose, but within the cloud they expect the new CDI to be used, a specification for moving high-bandwidth traffic, like ST 2110-20-style streams of uncompressed video, from point to point within the cloud.

John Mailhot from Imagine Communications is also the chair of the VSF activity group for Ground-Cloud-Cloud-Ground. This aims to harmonise the ways in which vendors move media, at whatever bandwidth, into and out of the cloud as well as from point to point within it. From the Imagine side, he says that ST 2110 is now embedded in all products, but the key is to choose the most appropriate transport: within AWS, CDI is often that transport, and he agrees that JPEG XS is the most appropriate between cloud and ground.

The panel takes a moment to look at the way the pandemic has impacted the use of video over IP. As we heard earlier this year, the New York Times had been holding off their move to IP, and the pandemic forced them to look at the market earlier than planned. When they looked, they found the products they needed and moved to a full IP workflow. This has been the theme, and if anything the pandemic has driven, and will continue to drive, innovation. The immediate need provided the motivation to consider new workflows, and now that the workflow is IP, it’s quicker, cheaper and easier to test new variations. Thomas Edwards points out that many current workflows still rely heavily on AVC or HEVC despite the desire to use JPEG XS for the broadcast content. For people at home, JPEG XS bandwidths aren’t practical, but RIST with AVC works fine for most applications.

Interoperability between vendors has long been the industry’s focus for ST 2110 and, in John’s opinion, is now pretty reliable for inter-vendor essence exchanges. Recently the focus has been on doing the same with NMOS, which both he and Robert report is working well in recent multi-vendor projects they have been involved in. John’s interest is in working out ways for the cloud and the ground to discover each other, a use case not yet covered by AMWA’s NMOS IS-04.
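
As a rough illustration of what IS-04 does cover today, here is a sketch of a node registering itself with an IS-04 Registration API using Python’s requests library. The registry URL is hypothetical and the node resource is trimmed to a few fields; a real node sends the full resource set (node, devices, senders, receivers) and keeps its registration alive with heartbeats.

```python
import requests

REGISTRY = "http://registry.example.com"  # hypothetical registry address

node = {
    # Trimmed, illustrative IS-04 node resource; a real one carries more
    # required fields (caps, api, clocks, interfaces, services, ...).
    "id": "67c25159-ce25-4000-a66c-f31fff890265",
    "version": "1441700172:318426300",
    "label": "studio-gateway-01",
    "href": "http://192.0.2.10:12345/",
}

# IS-04 Registration API: resources are POSTed as {"type": ..., "data": ...}
resp = requests.post(
    f"{REGISTRY}/x-nmos/registration/v1.3/resource",
    json={"type": "node", "data": node},
    timeout=5,
)
resp.raise_for_status()
```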

The video ends with a Q&A covering the following:

  • Where to start in your transition to IP
  • What to look for in an ST 2110-capable switch
  • Multi-Level routing support
  • Using multicast in AWS
  • Whether IT equipment lifecycles conflict with Broadcast refresh cycles
Watch now!
Speakers

John Mailhot
CTO & Director of Product Management, Infrastructure & Networking,
Imagine Communications
Ciro Noronha
Executive Vice-President of Engineering,
Cobalt Digital
Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Robert Erickson
Strategic Account Manager Sports and Venues,
Grass Valley
Steve Dupaix
Senior Account Executive,
Key Code Media

Video: Public Internet Transport of Live Broadcast Video – SRT, NDI and RIST for Compressed Video

Getting video over the internet and around the cloud has well-established solutions, but not only are they continuing to evolve, they are still new to some. This video looks at workflows made possible by teaming up SRT, RIST and NDI, with a glimpse into projects that went live in 2020. We also get a deeper look at RIST’s features in a Q&A.

This video from SMPTE’s New York section starts with Bryan Nelson from Alpha Video, who’s been involved in many cloud-based NDI projects, many of which also use SRT to get in and out of the cloud. NDI is a lightly compressed, low-delay format suitable for production, and it works well on 1GbE networks. Not dependent on multicast, it’s a technology that lends itself to cloud-based production, where it’s found many uses. Bryan looks at a number of workflows that are also enabled by the Sienna production system, which can use many video formats including NDI.

For more information on SRT and RIST, have a look at this SMPTE video outlining how they work and the differences between them. For a deeper dive into NDI, this SMPTE webinar with VizRT explains how it works and also gives demos of the same software that Bryan uses. To get a feel for how NDI fits in with live production compared to SMPTE’s uncompressed ST 2110, this IBC panel discussion, ‘Where can SMPTE ST 2110 and NDI Co-exist?’, explores the topic further.

Bryan’s first example is the 2020 NFL Draft, which used remote contribution from iPhones streaming over SRT. All streams were aggregated in AWS, converted to NDI, fed into NDI multiviewers and routed. These were passed down to on-prem NDI processors running on HP ProLiant servers for output as SDI and handoff to other broadcast workflows. The router could be controlled by soft panels as well as hardware panels on-prem. Bryan explores an extension to this idea where multiple cloud domains can be used, with NDI as the handoff between them. In one cloud system, VizRT vision mixing and graphics can be added, with multiviewers and other outputs sent via SRT to remote directors, producers and the like. Another cloud system could be controlled by a third party, carrying out further processing before the result is sent onward and decoded to SDI on-prem. All of this can be totally separate from acquisition, with SDI & NDI cameras located elsewhere. SRT & NDI become the mediators of this decentralised production environment.

Bryan finishes off by talking about remote NLE monitoring and various types of MCR monitoring. NLE editing is made easy through NDI integration within Adobe Premiere and Avid Media Composer. It’s possible to bring all of these into a processing engine and move them over the public internet for viewing elsewhere, via Apple TV or otherwise.

Ciro Noronha from Cobalt Digital takes the last half of the video to talk about RIST. In addition to the talks mentioned above, Ciro recently gave a talk exploring the many RIST use cases. A good written overview of RIST can be found here.

Ciro looks at the two published profiles that form RIST: the Simple Profile and the Main Profile. The Simple Profile defines RTP interoperability with error correction, using re-requested packets, with the option of bonding links. Ciro covers its use of RTCP for maintaining the channel and handling the negative acknowledgements (NACKs), which are based on RFC 4585. RIST can bond multiple links or use SMPTE ST 2022-7 seamless switching.
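
For a flavour of how those NACKs work, here is a minimal Python sketch that packs a set of lost RTP sequence numbers into the (PID, BLP) pairs an RFC 4585 Generic NACK carries: PID names one missing packet and the 16-bit BLP flags further losses among the 16 packets that follow it. Sequence-number wraparound and the surrounding RTCP packet format are ignored for brevity.

```python
def generic_nack_pairs(missing_seqs: set[int]) -> list[tuple[int, int]]:
    """Pack lost RTP sequence numbers into RFC 4585 (PID, BLP) pairs."""
    pairs = []
    remaining = sorted(missing_seqs)
    while remaining:
        pid = remaining[0]                 # first lost packet in this pair
        blp, rest = 0, []
        for seq in remaining[1:]:
            offset = seq - pid
            if 1 <= offset <= 16:
                blp |= 1 << (offset - 1)   # flag the loss of packet pid+offset
            else:
                rest.append(seq)           # too far ahead; goes in the next pair
        pairs.append((pid, blp))
        remaining = rest
    return pairs

# Packets 100, 101 and 105 fit in one pair; 130 needs its own.
print(generic_nack_pairs({100, 101, 105, 130}))  # [(100, 17), (130, 0)]
```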

The Main Profile builds on the Simple Profile by adding encryption, authentication and tunnelling. Tunnels allow multiple flows down one connection, which simplifies firewall configuration and encryption, and allows either end to initiate the bi-directional link. The tunnel can also carry non-RIST traffic for any other purpose. The tunnels are GRE over UDP (RFC 8086). DTLS is used for encryption, which is almost identical to the TLS used to secure websites. DTLS uses certificates, meaning you get to authenticate the other end, not just encrypt the data. Alternatively, you can use a pre-shared passphrase, which avoids the need for certificates when authentication isn’t needed or for one-to-many distribution. Ciro concludes by showing that RIST can work with up to 50% packet loss, and answers many questions in the Q&A.
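
To sketch what that tunnelling looks like on the wire, the snippet below wraps an inner packet in the minimal 4-byte GRE header from RFC 2784 and notes the IANA port RFC 8086 assigns to GRE-in-UDP. This is only the encapsulation layer; a Main Profile implementation would run DTLS on top and follow the full RIST specification, which is not reproduced here.

```python
import struct

GRE_IN_UDP_PORT = 4754   # IANA-assigned port for GRE-in-UDP (RFC 8086)
ETHERTYPE_IPV4 = 0x0800  # protocol type for an encapsulated IPv4 packet

def gre_encapsulate(inner: bytes, protocol_type: int = ETHERTYPE_IPV4) -> bytes:
    """Prepend a minimal GRE header (RFC 2784): no optional fields,
    version 0, followed by the payload's EtherType."""
    return struct.pack("!HH", 0x0000, protocol_type) + inner

# The result travels as the payload of a UDP datagram to port 4754;
# many media flows can share the one tunnel.
datagram_payload = gre_encapsulate(b"...inner IP packet...")
```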

Watch now!
Speakers

Bryan Nelson
Sales Account Executive,
Alpha Video
Ciro Noronha
President, RIST Forum
Executive Vice President of Engineering, Cobalt Digital

Video: Bit-Rate Evaluation of Compressed HDR using SL-HDR1

HDR video can look vastly better than standard dynamic range (SDR), but much of our broadcast infrastructure is made for SDR delivery. SL-HDR1 allows you to deliver HDR over SDR transmission chains by breaking an HDR signal down into an SDR video plus enhancement metadata which describes how to reconstruct the original HDR signal. Now that it is part of the ATSC 3.0 suite of standards, people are asking whether you get better compression using SL-HDR1 or compressing the HDR directly.
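
As a toy illustration of the idea, not of SL-HDR1’s actual curves, the sketch below splits an HDR frame into an SDR-like image plus a single metadata parameter and reconstructs it exactly; the real standard, ETSI TS 103 433-1, defines far more sophisticated luminance and colour mappings.

```python
import numpy as np

def tone_map(hdr: np.ndarray, k: float) -> np.ndarray:
    """Invertible companding curve: compress HDR into a 0..1 'SDR' range."""
    return hdr / (hdr + k)

def inverse_tone_map(sdr: np.ndarray, k: float) -> np.ndarray:
    """Exact inverse of tone_map, driven by the metadata parameter k."""
    return k * sdr / (1.0 - sdr)

hdr_frame = np.random.uniform(0, 1000, size=(1080, 1920))  # linear nits
k = 100.0                               # the per-frame "metadata"
sdr_frame = tone_map(hdr_frame, k)      # what travels down the SDR chain
restored = inverse_tone_map(sdr_frame, k)
assert np.allclose(restored, hdr_frame)
```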

HDR works by changing the interpretation of the video samples. As human sight has a non-linear response to luminance, we can take the same 256 or 1024 possible luminance values and map them to brightness so that only a few values are used where the eye isn’t very sensitive, while there is plenty of detail where we see well. Humans perceive more detail at lower luminosity, so HDR devotes far more of the luminance values to describing that region and relatively few to high brightness, where specular highlights tend to be. HDR therefore not only increases the dynamic range but actually provides more detail in the low-light areas than SDR.
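
The PQ transfer function from SMPTE ST 2084, one of the HDR curves in common use, shows this allocation clearly. A short Python sketch:

```python
# SMPTE ST 2084 (PQ) inverse-EOTF constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_code(luminance_nits: float) -> int:
    """Map absolute luminance (cd/m2) to a 10-bit PQ code value."""
    y = max(luminance_nits, 0.0) / 10000.0  # normalise to PQ's 10,000-nit peak
    p = y ** M1
    return round(1023 * ((C1 + C2 * p) / (1 + C3 * p)) ** M2)

# Roughly half of all 10-bit codes sit below 100 nits, where we see best:
for nits in (0.1, 1, 10, 100, 1000, 10000):
    print(f"{nits:>7} nits -> code {pq_code(nits)}")
```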

Ciro Noronha from Cobalt has been examining the question of encoding. Video encoders are agnostic to dynamic range: since HDR and SDR only define the meaning of the luminance values, the video encoder sees no difference. Yet there have been a number of papers saying that sending SL-HDR1 can result in bitrate savings over sending HDR directly. SL-HDR1 is defined in ETSI TS 103 433-1 and included in ATSC A/341. The metadata is carried using SMPTE ST 2108-1 or within the video stream using SEI. Ciro set out to test whether this was the case, with technology consultant Matt Goldman giving his perspective on HDR and the findings.

Ciro tested three types of 1080p BT.2020 10-bit content, with the AVC and HEVC encoders set to 4:2:0 10-bit and a 100-frame GOP. Quality was rated using PSNR as well as two variants of PSNR that look at distortion/deviation in the CIE colour space. The findings show that AVC encode chains benefit more from SL-HDR1 than HEVC, and it’s clear that the benefit is content-dependent. Work remains to be done to connect these results with verified subjective tests. With LCEVC and VVC, MPEG has seen that subjective assessments can show up to 10% better results than objective metrics; moreover, PSNR is not well known for correlating well with visual improvements.
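
For reference, conventional PSNR, the baseline metric in these tests, is straightforward to compute; the CIE-based variants Ciro uses differ mainly in the space the error is measured in. A minimal sketch on hypothetical random 10-bit frames:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 1023.0) -> float:
    """PSNR in dB between two same-shaped arrays of 10-bit samples."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 1024, size=(1080, 1920), dtype=np.uint16)
noisy = np.clip(ref + np.random.randint(-4, 5, size=ref.shape), 0, 1023).astype(np.uint16)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```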

Watch now!
Speakers

Ciro Noronha
Executive Vice President of Engineering, Cobalt Digital
President, RIST Forum
Matthew Goldman
Technology Consultant

Video: RIST Unfiltered – Q&A Session

RIST is a protocol that allows reliable streaming over lossy networks like the internet. Whilst many people know that much, they may not know more and may have questions. Today’s video aims to answer the most common ones. For a technical presentation of RIST, look no further than this talk and this article.

Kieran Kunhya deals out the questions to the panel, drawn from the RIST Forum, RIST members and AWS, asking:

  • Does RIST need third-party equipment?
  • Is there an open-source implementation of RIST?
  • Are there any RIST learning courses?
  • Why should companies use RIST over SRT?
RIST, we hear, is based on RTP, a very widely deployed technology for real-time media transport used for SMPTE 2022-2 and -6 streams, SMPTE ST 2110, AES67 and other audio protocols. So not only is it proven, it is also based on RFCs, along with much of the rest of RIST. SRT, the panel says, is based on the UDT file transfer protocol, which is not an RFC and wasn’t designed for live media transport, although SRT does perform very well for live media.
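
RTP’s ubiquity comes partly from its simple fixed header (RFC 3550), which is all a receiver needs to spot losses and reorder packets. A minimal Python parser:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header defined in RFC 3550."""
    if len(packet) < 12:
        raise ValueError("too short to be an RTP packet")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,             # always 2 for RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,
        "sequence_number": seq,         # gaps here trigger RIST's NACKs
        "timestamp": timestamp,
        "ssrc": ssrc,                   # identifies the stream's source
    }
```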

“Why are there so many competing vendors in RIST?” is another common question, answered by talking about the need for interoperability. Fostering widespread interoperability will grow the market for these products much more than having many smaller protocols would. “What new traction is RIST getting?” is answered by David Griggs from AWS, who says they are committed to the protocol and find that customers like its openness and are thus willing to invest their time in creating workflows based on it. Adi Rozenberg lists many examples of customers using the technology today. You can hear David Griggs explain RIST from his perspective in this talk.

Other questions handled are the licence RIST is available under and its open-source implementations, the latency involved in using RIST, and whether it can carry NDI. Sergio explains that NDI is a TCP-based protocol, so you can transmit it by extracting the UDP from it, using multicast, or using a VizRT tool to extract the media without recompressing. Finally, the panel looks at how to join the RIST Activity Group in the VSF and the RIST Forum. They talk about the origin of RIST in an open request to the industry from ESPN, and about what is coming in the upcoming Advanced Profile.

Watch now!
Speakers

Rick Ackermans
RIST AG Chair,
Director of RF & Transmission Engineering, CBS Television
David Griggs
Senior Product Manager, Media Services,
AWS Elemental
Sergio Ammirata
RIST AG Member,
Chief Science Officer, SipRadius
Adi Rozenberg
RIST Forum Director,
AG Member, Co-Founder & CTO, VideoFlow
Ciro Noronha
RIST Forum President and AG Member,
EVP of Engineering, Cobalt Digital
Paul Atwell
RIST Forum Director,
President, Media Transport Solutions
Wes Simpson
RIST AG Co-Chair,
President & Founder, LearnIPvideo.com
Kieran Kunhya
RIST Forum Director,
Founder & CEO, Open Broadcast Systems