Video: IPMX – The Need for a New ProAV Standard

IPMX is an IP specification for interoperability between Pro AV equipment. As the broadcast industry moves towards increasing IP deployments based on SMPTE ST 2110 and AMWA’s NMOS protocols, there’s been a recognition that the Pro AV market needs to do many of the same things broadcast wants to do. However, there is no open standard in Pro AV to achieve this transformation. Whilst there are a number of proprietary alliances which enable widespread use of a single chip or software core, this interoperability comes at a cost and is ultimately underpinned by one company, or a group of companies.

Dave Chiappini from Matrox discusses the work of the AIMS Pro AV working group with Wes Simpson from the VSF. Dave underlines the fact that this is a drive to unify the Pro AV industry, helping people avoid investing over and over again in reinventing protocols or reworking their products to interoperate. He feels that ‘open standards help propel markets forward’, adding energy and avoiding vendor lock-in. This is one reason for the inclusion of NMOS, allowing any vendor to make a control system by working to the same open specification, opening up the market to both small and large companies.

Dave is the first to acknowledge that the Pro AV market’s needs are different to broadcast’s, and explains that they have calibrated settings, added some and ‘carefully relaxed’ parts of the standards. The aim is a specification which allows a single piece of equipment, should the vendor wish to design it this way, to be used in either an IPMX or an ST 2110 system. He explains that relaxing some aspects of the ST 2110 ecosystem helps simplify implementation, which in turn reduces cost.

One key relaxation has been in PTP. A lot of time and effort goes into making the PTP infrastructure work properly within a SMPTE ST 2110 installation. Having to do this at an event whilst setting up in a short timespan is not helpful to anyone and, elaborates Dave, a point-to-point video link simply doesn’t need high-precision timing. IPMX, therefore, is lenient in its need for PTP: it will use it when it can, but will gracefully reduce accuracy and, when there is no grandmaster, will still continue to function.
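The graceful fallback Dave describes can be sketched in a few lines. This is purely illustrative, with made-up function and state names; IPMX specifies the behaviour (use PTP when it is there, keep working when it is not), not this API:

```python
def choose_time_source(grandmaster_locked: bool, boundary_clock_present: bool) -> str:
    """Pick the best available clock, degrading gracefully.

    Hypothetical sketch: the names here are illustrative, not from
    the IPMX specification.
    """
    if grandmaster_locked:
        return "ptp-grandmaster"       # full-precision PTP lock
    if boundary_clock_present:
        return "ptp-reduced-accuracy"  # still locked, lower precision
    return "local-free-run"            # no grandmaster: keep passing video

# A point-to-point link with no PTP infrastructure still functions:
assert choose_time_source(False, False) == "local-free-run"
```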

Another difference in the Pro AV market is the need for compression. Whilst there are times when zero compression is needed in both Pro AV and broadcast, Pro AV needs the ability to throw some preview video out to an iPad or similar. This isn’t going to work with JPEG XS, the preferred ‘minimal compression’ codec for IPMX, so a system for including H.264 or H.265 is being investigated, which could have knock-on benefits for broadcast.

HDMI is essential for a Pro AV solution and needs its own treatment. Unlike SDI, it has a multitude of resolutions and frame rates. It also has HDCP, so AIMS is now working with DCP on creating a method of carrying HDCP over ST 2110. It’s hoped that this work will also help broadcast use cases: TVs are already replacing SDI monitors, and such interoperability with HDMI should bring down the cost of monitoring in non-picture-critical environments.

Watch now!
Speakers

David Chiappini
Chair, Pro AV Working Group, AIMS
Executive Vice President, Research & Development,
Matrox Graphics Inc.
Wes Simpson
RIST AG Co-Chair, VSF
President & Founder, LearnIPvideo.com

Video: Delivering Quality Video Over IP with RIST

RIST (Reliable Internet Stream Transport) continues to gain traction as a way to deliver video reliably over the internet. It finds uses both as part of the on-air signal chain and in enabling broadcast workflows, by ensuring that any packet loss is mitigated before a decoder gets around to decoding the stream.

In this video, AWS Elemental’s David Griggs explains why AWS uses RIST and how RIST works, introduced by LearnIPvideo.com’s Wes Simpson, who is also the co-chair of the RIST Activity Group at the VSF. Wes starts off by explaining the differences between consumer and business use-cases for video streaming and broadcast workflows, two of the pertinent ones being one-directional video and the need for a fixed delay. David explains that one motivator for broadcasters looking to the internet is the need to replace C-Band satellite links.

RIST’s original goals were to deliver video reliably over the internet and to ensure interoperability between vendors, something which, in the purest sense of the word, has been missing to date. Along with this, RIST also aimed to have a low, deterministic latency, which is vital to make most broadcast workflows practical. RIST was also designed to be agnostic to the carrier type, be that internet, satellite or cellular.

Wes outlines how important it is to compensate for packet loss, showing that even in what might seem like low packet loss situations, you’ll still observe a glitch in the audio or video every twenty minutes. But RIST is more than just a way of ensuring your video and audio arrive without gaps: it can also carry other signals such as PTZ control for cameras, intercom feeds, ad-insertion signalling such as SCTE 35, subtitling and timecode. This is one strength which makes RIST ideal for broadcast over using, say, RTMP for delivering a live stream.
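That twenty-minute figure is easy to sanity-check with back-of-the-envelope arithmetic. The bitrate, packet size and loss rate below are illustrative assumptions, not figures from the talk:

```python
# Sanity-check: how often does a "low" packet loss rate bite?
bitrate_bps = 10_000_000   # assumed 10 Mbit/s contribution stream
payload_bytes = 1316       # typical RTP payload of 7 x 188-byte TS packets
loss_rate = 1e-6           # one packet lost per million

packets_per_sec = bitrate_bps / (payload_bytes * 8)
seconds_between_losses = 1 / (packets_per_sec * loss_rate)

print(f"{packets_per_sec:.0f} packets/s")
print(f"roughly one lost packet every {seconds_between_losses / 60:.0f} minutes")
```

With these assumptions a one-in-a-million loss rate still hits a packet around every eighteen minutes, in line with the figure Wes quotes.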

Wes covers the main and simple profiles, which are also explained in more detail in this video from SMPTE and this article. One way in which RIST differs from other technologies is GRE tunnelling, which allows the carriage of any data type alongside RIST and also allows the bundling of several RIST streams down a single connection. This provides a great amount of flexibility to support new workflows as they arise.

David closes the video by explaining why RIST is important to AWS. It allows for a single protocol to support media transfers to, from and within the AWS network. Also important, David explains, is RIST’s standards-based approach. RIST is created out of many standards and RFCs with very little bespoke technology. Moreover, the RIST specification is being formally created by the VSF, and many VSF specifications have gone on to be standardised by bodies such as SMPTE, ST 2110 being a good example. AWS offers RIST simple profile within MediaConnect, with plans to implement the main profile in the near future.

Watch now!
Speakers

David Griggs
Senior Product Manager, Media Services,
AWS Elemental
Wes Simpson
RIST AG Co-Chair,
President & Founder, LearnIPvideo.com

Video: RIST Unfiltered – Q&A Session

RIST is a protocol which allows for reliable streaming over lossy networks like the internet. Whilst many people know that much, they may not know more and may well have questions. Today’s video aims to answer the most common ones. For a technical presentation of RIST, look no further than this talk and this article.

Kieran Kunhya deals out the questions to the panel, drawn from the RIST Forum, RIST members and AWS, asking:
Does RIST need 3rd-party equipment?
Is there an open-source implementation of RIST?
Are there any RIST learning courses?
as well as why companies should use RIST over SRT.
RIST, we hear, is based on RTP, which is a very widely deployed technology for real-time media transport and is widely used for SMPTE ST 2022-2 and -6 streams, SMPTE ST 2110, AES67 and other audio protocols. So not only is it proven, but it’s also based on RFCs, along with much of RIST. SRT, the panel says, is based on the UDT file transfer protocol, which is not an RFC and wasn’t designed for live media transport, although SRT does perform very well for live media.
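Because RTP underpins all of these transports, a receiver’s first job is the same in each case: parse the 12-byte fixed RTP header defined in RFC 3550, whose sequence numbers are what let RIST detect and repair losses. A minimal sketch, ignoring header extensions and CSRC lists:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550). Simplified sketch."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # always 2 for RTP
        "payload_type": b1 & 0x7F,   # e.g. 33 for an MPEG transport stream
        "sequence": seq,             # gaps here reveal the losses RIST repairs
        "timestamp": ts,
        "ssrc": ssrc,                # identifies the stream source
    }

# A hand-built example packet: version 2, payload type 33, sequence 1000
pkt = struct.pack("!BBHII", 0x80, 33, 1000, 123456, 0xDEADBEEF) + b"\x47" * 188
hdr = parse_rtp_header(pkt)
assert hdr["version"] == 2 and hdr["payload_type"] == 33 and hdr["sequence"] == 1000
```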

“Why are there so many competitors in RIST?” is another common question, which is answered by talking about the need for interoperability: fostering widespread interoperability will grow the market for these products much more than many smaller, competing protocols would. “What new traction is RIST getting?” is answered by David Griggs from AWS, who says they are committed to the protocol and find that customers like its openness and are thus willing to invest their time in creating workflows based on it. Adi Rozenberg lists many examples of customers who are using the technology today. You can hear David Griggs explain RIST from his perspective in this talk.

Other questions handled are the licence that RIST is available under and the open-source implementations, the latency involved in using RIST and whether it can carry NDI. Sergio explains that NDI is a TCP-based protocol, so to transmit it you can extract the UDP out of it, use multicast, or use a VizRT tool to extract the media without recompressing. Finally, the panel looks at how to join the RIST Activity Group in the VSF and the RIST Forum. They talk about the origin of RIST being an open request to the industry from ESPN, and what is coming in the upcoming Advanced Profile.

Watch now!
Speakers

Rick Ackermans
RIST AG Chair,
Director of RF & Transmission Engineering, CBS Television
David Griggs
Senior Product Manager, Media Services,
AWS Elemental
Sergio Ammirata
RIST AG Member,
Chief Science Officer, SipRadius
Adi Rozenberg
RIST Forum Director
AG Member, Co-Founder & CTO, VideoFlow
Ciro Noronha
RIST Forum President and AG Member
EVP of Engineering, Cobalt Digital
Paul Atwell
RIST Forum Director,
President, Media Transport Solutions
Wes Simpson
RIST AG Co-Chair,
President & Founder, LearnIPvideo.com
Kieran Kunhya
RIST Forum Director
Founder & CEO, Open Broadcast Systems

Video: Progress Update for the ST 2110 WAN VSF Activity Group

Is SMPTE ST 2110 suitable for inter-site connectivity over the WAN? ST 2110 is moving past the early-adopter phase, with more and more installations and OB vans bringing 2110 into daily use, but today each site works independently. What if we could maintain a 2110 environment between sites? There are a number of challenges still to be overcome, and moving a large number of essence flows long distances and between PTP time domains is one of them.

Nevion’s Andy Rayner is chair of the VSF Activity Group looking into transporting SMPTE ST 2110 over WAN, and is here to give an update on the work in progress, which started 18 months ago. The presentation looks at how to move media between locations, which has been the primary focus to date. It then discusses how control over which media are shared will be handled, as this is a new aspect of the work. Andy starts by outlining the protection offered by the scheme, which supports both 2022-7 and FEC, then explains that though FEC is valuable for single links where 2022-7 isn’t viable, only some of the possible ST 2022-5 FEC configurations are supported, in part to keep latency low.

The headline to carrying 2110 over the WAN is that it will be done over a trunk, using GRE, a widely used trunking technology originally developed by Cisco. Trunking, also known as tunnelling, is a technique for carrying ‘private’ traffic over a network such that a device sending into the trunk doesn’t see any of the infrastructure between the entrance and the exit. It allows, for instance, IPv6 traffic to be carried over IPv4 equipment where the IPv4 equipment has no idea about the IPv6 data, since it’s been wrapped in an IPv4 envelope. Similarly, the IPv6 equipment has no idea that its data is being wrapped and carried by routers which don’t understand IPv6, since the wrapping and unwrapping of the data is done transparently at the handoff.
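The IPv6-over-IPv4 example can be shown concretely. The sketch below wraps a payload in the minimal 4-byte GRE header from RFC 2784; it is a simplification, since a real deployment also needs the outer IPv4 header and may add keys or checksums:

```python
import struct

ETHERTYPE_IPV6 = 0x86DD  # protocol type GRE uses to label an IPv6 payload

def gre_encapsulate(payload: bytes, protocol_type: int) -> bytes:
    """Prepend a minimal GRE header: 2 bytes of flags/version, 2 of protocol."""
    flags_and_version = 0x0000  # no checksum, key or sequence; version 0
    return struct.pack("!HH", flags_and_version, protocol_type) + payload

def gre_decapsulate(frame: bytes):
    """Strip the GRE header; the inner traffic never sees the wrapper."""
    _flags, protocol_type = struct.unpack("!HH", frame[:4])
    return protocol_type, frame[4:]

inner = b"\x60" + b"\x00" * 39  # stand-in for an IPv6 packet (version nibble 6)
proto, recovered = gre_decapsulate(gre_encapsulate(inner, ETHERTYPE_IPV6))
assert proto == ETHERTYPE_IPV6 and recovered == inner
```

The round trip shows the key property the talk relies on: the inner packet comes out byte-for-byte identical, so neither end of the trunk needs to understand what it is carrying.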

In the context of SMPTE ST 2110, a trunk allows one port to be used to create a single connection to the destination, yet carry many individual media streams within. This has the big benefit of simplifying the inter-site connectivity at the IT level, but importantly it also means that the single connection is quite high bandwidth. When FEC is applied to a connection, the latency introduced increases as the bit rate reduces. Since ST 2110 carries audio and metadata separately, each FEC-protected stream would have a different latency depending on the type of traffic. Bundling them into one large data stream allows FEC to be applied once, and all traffic then suffers the same latency increase. The third reason for trunking is to ensure all essences take the same network path: if each connection were separate, it would be possible for some to be routed on a physically different route and therefore be subject to a different latency.
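The latency argument is easy to put numbers on. Assuming, purely for illustration, a 100-packet FEC block (e.g. a 10 × 10 matrix) and a 1,316-byte payload, the time a receiver must buffer to collect one block scales inversely with bitrate:

```python
def fec_block_latency_ms(bitrate_bps: float, packets_in_block: int,
                         payload_bytes: int = 1316) -> float:
    """Time to accumulate one FEC block, i.e. the buffering a receiver needs.

    Illustrative model only: latency = block size / packet rate.
    """
    packets_per_sec = bitrate_bps / (payload_bytes * 8)
    return 1000 * packets_in_block / packets_per_sec

# A 1 Gbit/s bundled trunk vs a lone 1 Mbit/s audio essence,
# both protected by the same 100-packet FEC block:
trunk = fec_block_latency_ms(1_000_000_000, 100)
audio = fec_block_latency_ms(1_000_000, 100)
print(f"trunk: {trunk:.1f} ms, audio alone: {audio:.0f} ms")
```

Under these assumptions the bundled trunk pays around a millisecond while a lone audio essence would pay around a second, which is why applying FEC once to the whole trunk keeps every essence at the same low latency.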

Entering the last part of the talk, Andy switches gears to talk about how site A can control streams in site B. The answer is that it doesn’t ‘control’; rather, there is the concept of requesting streams. Site A will declare what is available and site B can state what it would like to connect to and when. In response, site A can accept and promise to have those sources available at the WAN interface at the right time. When the time comes, they are released over the WAN. This protects the WAN connectivity from being filled with media which isn’t actually being used. These exchanges are mediated and carried out with NMOS IS-04 and IS-05.
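As a rough illustration of what such a request could look like, here is a sketch of the kind of staged connection body IS-05 uses; the field names follow AMWA IS-05’s connection API, but the sender ID and timestamp are invented for the example:

```python
import json

# Hypothetical IS-05 staged request: site B asks for a sender discovered
# via IS-04 to be connected at a scheduled time rather than immediately.
staged_request = {
    "sender_id": "8a2c5d7e-0000-0000-0000-000000000001",  # invented UUID
    "master_enable": True,
    "activation": {
        "mode": "activate_scheduled_absolute",  # release at an agreed time
        "requested_time": "1700000000:0",       # TAI seconds:nanoseconds (invented)
    },
}

body = json.dumps(staged_request)
assert json.loads(body)["activation"]["mode"] == "activate_scheduled_absolute"
```

The scheduled activation mode is what allows site A to promise a source “at the right time” rather than committing WAN bandwidth the moment the request arrives.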

Watch now!
Speakers

Andy Rayner
Chief Technologist, Nevion,
Chair, WAN IP Activity Group, VSF
Moderator: Wes Simpson
Founder, LearnIPvideo.com
Co-chair, RIST Activity Group, VSF