Released in 2015, NDI is a royalty-free technology that provides low-latency video, audio, tally and control over gigabit networks. Since then, NDI's popularity has grown immensely and it is now very widely used in AV, live streaming production and cloud workflows. Developed by Newtek, now part of VizRT, it allows computers to easily push video from programs onto the local network, whether from Teams or Skype, a video editor, OBS or anything else. Many vendors have integrated the NDI binaries into their products.
On The Broadcast Knowledge we've explored how NDI complements SMPTE's ST 2110 suite of standards, which primarily helps move uncompressed payloads around a broadcast facility. In this panel put on by the IET at IBC we explored the benefits of 2110 and NDI to understand where the overlap lies. And for a deeper dive into NDI, including some demos, we also featured this talk by SMPTE and VizRT.
In today's video from Key Code Media, we hear from Newtek's Dr. Andrew Cross, creator of NDI, on what's new in NDI's latest release, 5.0. Jeff Sengpiehl from Key Code Media explains that NDI 5.0 brings with it improvements in multi-platform support, including native support for Apple Silicon. 5.0 also includes plugins that let any program share its audio over NDI, as well as a way of sending someone a link so they can share their video and audio over NDI. The big changes in this latest version, though, are the additions of 'reliable UDP' and 'NDI Bridge'. Based on Google's QUIC, reliable UDP provides a way of sending NDI over lossy networks, allowing it to recover lost data and deal with network congestion. This ties in nicely with 'NDI Bridge', which allows two or three separate locations to share their NDI sources and destinations seamlessly. These additional features take NDI outside the LAN. Built around local network discovery, it has always been seen as a local protocol, even when deployed in the cloud.
The majority of the video features Dr. Cross answering questions from Jeff and viewers. These questions include:
What are the pain points with audio? Is NDI Audio Direct a replacement for Dante? How is synchronisation maintained in multi-location systems? Also covered are the significance of support for ARM chips, bidirectional use of NDI, NDI Bridge security, 10-bit colour support, NDI's place in the ProAV market and the possibility of NDI becoming open source or a standard.
Getting video over the internet and around the cloud has well-established solutions, but not only are they continuing to evolve, they are still new to some. This video looks at workflows that are possible teaming up SRT, RIST and NDI by getting a glimpse into projects that have gone live in 2020. We also get a deeper look at RIST’s features with a Q&A.
This video from SMPTE's New York section starts with Bryan Nelson from Alpha Video, who's been involved in many cloud-based NDI projects, many of which also use SRT to get in and out of the cloud. NDI's a lightly compressed, low-delay codec suitable for production that works well on 1GbE networks. Not dependent on multicast, it's a technology that lends itself to cloud-based production, where it's found many uses. Bryan looks at a number of workflows that are also enabled by the Sienna production system, which can use many video formats including NDI.
Bryan's first example is the 2020 NFL Draft, which used remote contribution from iPhones streaming over SRT. All streams were aggregated in AWS, converted to NDI, fed into NDI multiviewers and routed. These were passed down to on-prem NDI processors, HP ProLiant servers which output SDI for handoff to other broadcast workflows. The router could be controlled by soft panels as well as hardware panels on-prem. Bryan explores an extension of this idea where multiple cloud domains can be used, with NDI as the handoff between them. In one cloud system, VizRT vision mixing and graphics can be added, with multiviewers and other outputs sent via SRT to remote directors, producers and others. Another cloud system could be controlled by a third party, applying other processing before the output is sent on and decoded to SDI on-prem. This can be totally separate from acquisition from SDI & NDI cameras located elsewhere. SRT & NDI become the mediators in this decentralised production environment.
Bryan finishes off by talking about remote NLE monitoring and various types of MCR monitoring. NLE editing is made easy through NDI integration within Adobe Premiere and Avid Media Composer. It’s possible to bring all of these into a processing engine and move them over the public internet for viewing elsewhere via Apple TV or otherwise.
Ciro Noronha from Cobalt Digital takes the last half of the video to talk about RIST. In addition to the talks mentioned above, Ciro recently gave a talk exploring the many RIST use cases. A good written overview of RIST can be found here.
Ciro looks at the two published profiles that form RIST: the Simple profile and the Main profile. The Simple profile defines RTP interoperability with error correction, using retransmission of lost packets, with the option of bonding links. Ciro covers its use of RTCP for maintaining the channel and carrying the negative acknowledgements (NACKs), which are based on RFC 4585. RIST can bond multiple links or use SMPTE ST 2022-7 seamless switching.
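To make those RFC 4585 NACKs concrete, the sketch below packs lost RTP sequence numbers into the Generic NACK feedback items that the Simple profile uses for retransmission requests: each item is a 16-bit Packet ID (PID) plus a 16-bit bitmask of following lost packets (BLP). This is a simplified illustration, not code from RIST or from the talk.

```python
import struct

def make_nack_items(lost_seqs):
    """Group lost RTP sequence numbers into RFC 4585 Generic NACK items.

    Each (PID, BLP) pair covers the lost packet `pid` plus up to 16 further
    losses in the window pid+1 .. pid+16, one BLP bit per sequence number.
    """
    items = []
    lost = sorted(set(lost_seqs))
    while lost:
        pid = lost.pop(0)
        blp = 0
        remaining = []
        for seq in lost:
            offset = seq - pid
            if 1 <= offset <= 16:
                # Bit (offset - 1) flags the loss of packet pid + offset
                blp |= 1 << (offset - 1)
            else:
                remaining.append(seq)
        lost = remaining
        items.append(struct.pack("!HH", pid & 0xFFFF, blp))
    return b"".join(items)

# Losses at 100, 101, 105 fit one item; 120 is outside the 16-packet
# window so it starts a second item.
payload = make_nack_items([100, 101, 105, 120])
```

In a real receiver these items would be carried in an RTCP transport-layer feedback packet; sequence-number wraparound handling is also omitted here for brevity.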
The Main profile builds on the Simple profile by adding encryption, authentication and tunnelling. Tunnels allow multiple flows down one connection, which simplifies firewall configuration and encryption, and allows either end to initiate the bi-directional link. The tunnel can also carry non-RIST traffic for any other purpose. The tunnels are GRE over UDP (RFC 8086). Encryption is provided by DTLS, which is almost identical to the TLS used to secure websites. DTLS uses certificates, meaning you can authenticate the other end, not just encrypt the data. Alternatively, a pre-shared passphrase can be used, avoiding certificates where that level of authentication isn't needed or for one-to-many distribution. Ciro concludes by showing that RIST can work with up to 50% packet loss, and answers many questions in the Q&A.
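For a sense of the tunnel framing, here is a minimal sketch of GRE-in-UDP encapsulation per RFC 8086: a basic GRE header (RFC 2784) carried as the payload of a UDP datagram. The DTLS layer RIST adds on top, and all keepalive and session logic, are omitted; this is illustrative only.

```python
import struct

GRE_IN_UDP_PORT = 4754   # UDP destination port assigned by RFC 8086
ETHERTYPE_IPV4 = 0x0800  # GRE protocol type for an inner IPv4 packet

def gre_encapsulate(inner_packet: bytes, proto: int = ETHERTYPE_IPV4) -> bytes:
    """Prepend a basic GRE header: 16 flag/version bits (all zero)
    followed by the 16-bit protocol type of the inner packet."""
    return struct.pack("!HH", 0, proto) + inner_packet

# An inner IP packet would be wrapped like this and sent as the UDP
# payload to port 4754:
#   udp_payload = gre_encapsulate(inner_ip_packet)
```

Because everything rides inside one UDP flow, only a single port needs opening on the firewall, which is the simplification Ciro describes.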
Our final look back at the most viewed articles of 2020 covers a very prescient topic: live IP production. As we all know, this has very much come into focus during the pandemic. Those that already had an IP infrastructure found managing it remotely easier than those that needed to go in and move SDI cables whenever they provisioned new video links. Moreover, putting together live remote workflows is all the easier with IP video, and traditionally the decision on whether to use SMPTE ST 2110 has come down to whether or not you need to be in the cloud.
This article and video brought together Will Waters, an NDI expert from VizRT, Marc Risby from UK systems integrator Boxer, and Willem Vermost, who was then with the EBU. The conversation, pre-pandemic, focused on how to choose the video-over-IP technology that was best for you, and tried to find ways in which broadcasters could benefit from both at the same time.
The Broadcast Knowledge Editor, Russell Trafford-Jones also moderated a SMPTE Webcast with VizRT going into the detail of NDI and how it can be deployed in the cloud.
Our ability to work remotely during the pandemic is thanks to the hard work of many people who have developed the technologies which have made it possible. Even before the pandemic struck, this work was ongoing and gaining momentum to overcome more of the challenges and hurdles of working in IP, both within the broadcast facility and in the cloud.
SMPTE's Paul Briscoe moderates the discussion of these ongoing efforts to make the cloud a better place for broadcasters in this series of presentations from the SMPTE Toronto section. First up is Peter Wharton from TAG V.S., talking about ways to innovate workflows to better suit the cloud.
Peter first outlines the challenges of live cloud production, namely keeping latency low, signal quality high and managing the high bandwidths needed at the same time as keeping a handle on the costs. There is an increasing number of cloud-native solutions but how many are truly innovating? Don’t just move workflows into the cloud, advocates Peter, rather take this opportunity to embrace the cloud.
Working with the cloud will be built on new transport interfaces like RIST and SRT using a modular and open architecture. Scalability is the name of the game for ‘the cloud’ but the real trick is in building your workflows and technology so that you can scale during a live event.
Source: TAG V.S.
There are still obstacles to be overcome. Bandwidth for uncompressed video is one, with typical signals up to 3Gbps uncompressed, which drives very high data transfer costs. The lack of PTP in the cloud makes ST 2110 workflows difficult, as does the lack of multicast.
Tackling bandwidth, Peter looks at the low-latency ways to compress video such as NDI, NDI|HX, JPEG XS and Amazon’s lossless CDI. Peter talks us through some of the considerations in choosing the right codec for the task in hand.
Finishing his talk, Peter asks if this isn't the time for a radical change. Why not rethink the entire process and embrace latency? Peter gives an example of a colour grading workflow which has been able to switch from on-prem colour grading on very high-spec computers to running this same, incredibly intensive process in the cloud. The company is able to spin up thousands of CPUs in the cloud and use spot pricing to create temporary, low-cost, extremely powerful computers. This has significantly reduced waiting times for jobs to be processed and cut the cost of processing by an order of magnitude.
Lastly Peter looks further to the future examining how saturating the stadium with cameras could change the way we operate cameras. With 360-degree coverage of the stadium, the position of the camera can be changed virtually by AI allowing camera operators to be remote from the stadium. There is already work to develop this from Canon and Intel. Whilst this may not be able to replace all camera operators, sports is the home of bleeding-edge technology. How long can it resist the technology to create any camera angle?
Jean-Baptiste Lorent from intoPIX is next, explaining what JPEG XS is. A new, ultra-low-latency codec, it meets the challenges of the industry's move to IP, its increasing desire to move data rather than people, and the continuing trend for COTS servers and cloud infrastructure to be part of the real-time production chain.
As Peter covered, uncompressed data rates are very high. The Tokyo Olympics will be filmed in 8K which racks up close to 80Gbps for 120fps footage. So with JPEG XS standing for Xtra Small and Xtra Speed, it’s no surprise that this new ISO standard is being leant on to help.
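That close-to-80Gbps figure is easy to sanity-check. Assuming 7680x4320 resolution at 10-bit 4:2:2 sampling (20 bits per pixel of active picture, an assumption rather than a figure from the talk), the arithmetic lands right where the article says:

```python
def uncompressed_gbps(width, height, bits_per_pixel, fps):
    """Active-picture data rate in Gbps, ignoring blanking and audio."""
    return width * height * bits_per_pixel * fps / 1e9

# 8K (7680x4320), 10-bit 4:2:2 => 20 bits/pixel, at 120fps
rate = uncompressed_gbps(7680, 4320, 20, 120)
print(f"{rate:.1f} Gbps")  # prints "79.6 Gbps"
```

A full SDI-style rate including blanking would be somewhat higher, but the order of magnitude is the point.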
Tested as visually lossless over 7 or more encode generations, and with a latency of only a few lines of video, JPEG XS works well in multi-stage live workflows. Jean-Baptiste explains that it's low in complexity and works well on FPGAs and CPUs alike.
JPEG XS can support up to 16-bit values, any chroma subsampling and any colour space. It has been standardised for carriage in MPEG transport streams, in SMPTE ST 2110 as ST 2110-22, over RTP (pending), within HEIF file containers, and more. Worst-case bitrates are 200Mbps for 1080i, 390Mbps for 1080p60 and 1.4Gbps for 2160p60.
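Taking a 10-bit 4:2:2 active-picture rate as the uncompressed baseline (an assumption for illustration, not a figure from the talk), those worst-case bitrates imply compression ratios of only around 6-7:1, which is why the codec can stay visually lossless:

```python
def compression_ratio(width, height, fps, xs_mbps, bits_per_pixel=20):
    """Ratio of uncompressed active-picture rate to a JPEG XS bitrate."""
    uncompressed_mbps = width * height * bits_per_pixel * fps / 1e6
    return uncompressed_mbps / xs_mbps

print(f"1080p60: {compression_ratio(1920, 1080, 60, 390):.1f}:1")   # prints "1080p60: 6.4:1"
print(f"2160p60: {compression_ratio(3840, 2160, 60, 1400):.1f}:1")  # prints "2160p60: 7.1:1"
```

Compare that with the hundreds-to-one ratios of distribution codecs like H.264, and the trade-off JPEG XS makes, mild compression in exchange for line-level latency, is clear.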
Evolution of Standards-Based IP Workflows Ground-To-Cloud
Last of the presentations is John Mailhot from Imagine Communications, also co-chair of an activity group at the VSF working on standardising interfaces for passing media from place to place. Within the data plane, it would be better to avoid vendors repeatedly writing similar drivers. Between ground and cloud, how do we standardise the arrival of video and the data you need around it? Standardising around new technologies like Amazon's CDI is similarly important.
John outlines the aim of having an interoperability point within the cloud above the low-level data transfer, closer to 7 than to 1 in the OSI model. This work is being done within AIMS, VSF, SMPTE and other organisations based on existing technologies.
The video finishes with a Q&A and includes comments from AWS’s Evan Statton whose talk on CDI that evening is not part of this video. The questions cover comparing NDI with JPEG XS, how CDI uses networking to achieve high bandwidths and high reliability, the balance between minimising network and minimising CPU depending on workflow, the increasingly agile nature of broadcast infrastructure, the need for PTP in the cloud plus the pros and cons of standards versus specifications.
Director Corporate Strategy, TAG V.S.
President, Happy Robotz
Vice President of Membership, SMPTE
Director Marketing & Sales,
Co-Chair Cloud-Ground-Cloud-Ground Activity Group, VSF
Director & NMOS Steering Member, AMWA
Systems Architect for IP Convergence, Imagine Communications
Moderator: Paul Briscoe
Canadian Regional Governor, SMPTE
Consultant, Televisionary Consulting
Principal Architect, Media & Entertainment
Amazon Web Services
Views and opinions expressed on this website are those of the author(s) and do not necessarily reflect those of SMPTE or SMPTE Members.
This website is presented for informational purposes only. Any reference to specific companies, products or services does not represent promotion, recommendation, or endorsement by SMPTE