Video: IP-based Networks for UHD and HDR Video

If you were handed a video signal, would you know what type it was? Life used to be simple: an SD signal would decode in a waveform monitor and you’d see which type it was. Now, with UHD and HDR, that’s not all the information you need. Arguably identification is one of the few things that actually gets easier with IP. This video from AIMS helps to clear up why IP’s the best choice for UHD and HDR.

John Mailhot from Imagine Communications joins Wes Simpson from LearnIPvideo.com to introduce us to the difficulties of wrangling UHD and HDR video. Reflecting on in-home displays’ continually improving ability to show brighter and better pictures, as well as broadcast cameras’ ability to capture much more dynamic range, John’s work at Imagine is focussed on helping broadcasters ensure their infrastructure can deliver these high dynamic range experiences. Streaming services have a slightly easier time delivering HDR to end-users as they are in complete control of the distribution chain, whereas in broadcast, particularly with affiliates, there are many points in the chain which all need to be HDR/UHD capable.

John starts by looking at how UHD was implemented in its early stages. UHD, being twice the horizontal and twice the vertical resolution of HD, is usually seen as 4xHD. Importantly, though, John points out that while this is true for resolution, most HD is 1080i, so UHD also represents a move to 1080p, 3Gbps signals. John’s point is that this puts a strain on infrastructure which wasn’t necessarily tested for it initially. And since the UHD signal was initially carried on four cables, there is now four times the chance of a signal impairment due to cabling.
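
As a rough illustration of the jump John describes, here’s a back-of-envelope calculation (a sketch assuming 4:2:2, 10-bit sampling; real SDI rates are higher because they include blanking and ancillary data, hence the higher nominal link speeds):

```python
# Back-of-envelope payload rates (mine, not John's maths): active picture
# only, 4:2:2 sampling at 10 bits, so 2 samples of 10 bits per pixel.
def payload_gbps(width, height, fps, bits_per_sample=10, samples_per_pixel=2):
    """Approximate active-picture rate in Gbps."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

print(f"1080i30 : {payload_gbps(1920, 1080, 30):.2f} Gbps (1.5G HD-SDI)")
print(f"1080p60 : {payload_gbps(1920, 1080, 60):.2f} Gbps (3G-SDI)")
print(f"UHDp60  : {payload_gbps(3840, 2160, 60):.2f} Gbps (4x 3G, or 1x 12G)")
```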

Square Division Multiplexing (SQD) is the ‘most obvious’ way to carry UHD signals over existing HD infrastructure. The picture is simply cut into four quarters and each quarter is sent down one cable. The benefit is that it’s easy to see which order the cables need to be connected to the equipment. The downsides include a frame-buffer delay (half a frame) each time the signal is received, and the difficulty of preventing the quadrants drifting apart if they are treated differently by the infrastructure (i.e. there is a non-synced hand-off). One important problem is that there is no way to know whether an HD feed is one quarter of a UHD set or just a lone 3G signal.
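
Conceptually the SQD mapping is just an image crop. A minimal sketch (using NumPy purely for illustration; real SQD operates on the SDI rasters themselves):

```python
import numpy as np

def sqd_split(frame: np.ndarray):
    """Square Division: crop a UHD frame into four HD quadrants, one per
    3G link. No single link carries a viewable version of the whole image,
    and the receiver must buffer to reassemble them, hence the latency."""
    h, w = frame.shape[:2]
    return [frame[:h//2, :w//2],   # link 1: top-left
            frame[:h//2, w//2:],   # link 2: top-right
            frame[h//2:, :w//2],   # link 3: bottom-left
            frame[h//2:, w//2:]]   # link 4: bottom-right

quadrants = sqd_split(np.zeros((2160, 3840), dtype=np.uint16))
print([q.shape for q in quadrants])  # four 1080x1920 quarter-pictures
```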

2SI, two-sample interleave, is another method of splitting up the signal, this one standardised by SMPTE. It works by taking a pair of samples and sending it down cable 1, then the next pair down cable 2; the pair of samples directly beneath the first goes down cable 3, and the pair beneath the second goes down cable 4. This has the happy benefit that each cable holds a complete picture, albeit very crudely downsampled. For monitoring applications that’s a win, as you can DA one feed and send it to a monitor. Well, that would have been possible except that each signal has to maintain 400ns timing with the others, which means DAs often break the timing budget if they reclock. 2SI did, however, remove the half-field latency burden which SQD carries. The main confounding factor with this mechanism is that looking at the video from any one cable on a monitor isn’t enough to tell which of the four feeds you are looking at; mis-cabling equipment leads to subtle visual errors which are hard to spot and hard to correct.
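
The pair-wise mapping is easier to see in code. A conceptual sketch of the distribution just described (SMPTE ST 425-5 defines the real SDI-level mapping; this only mirrors the description above):

```python
import numpy as np

def two_sample_interleave(frame: np.ndarray):
    """2SI as described above: on even lines, alternate sample *pairs*
    between links 1 and 2; on the lines beneath, between links 3 and 4.
    Each link ends up a quarter-resolution version of the whole picture."""
    w = frame.shape[1]
    odd_pair = ((np.arange(w) // 2) % 2).astype(bool)  # True for pairs 1, 3, 5...
    even_lines, odd_lines = frame[0::2], frame[1::2]
    return [even_lines[:, ~odd_pair], even_lines[:, odd_pair],   # links 1 & 2
            odd_lines[:, ~odd_pair],  odd_lines[:, odd_pair]]    # links 3 & 4

links = two_sample_interleave(np.zeros((2160, 3840), dtype=np.uint16))
print([l.shape for l in links])  # four 1080x1920 images, each a whole picture
```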

Enter the VPID, the Video Payload ID. SD SDI didn’t require it, HD often had it, but for UHD it became essential. SMPTE ST 425-5:2019 is the latest document explaining payload ID for UHD. As it’s the fifth version, you should be aware that older equipment may not parse the information correctly, whether a) through a bug or b) through implementing an older revision of the standard. The VPID carries information such as interlaced/progressive, aspect ratio, transfer characteristics (HLG, SDR etc.) and frame rate. John talks through some of the common mismatches in interpretation and implementation of VPID.
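
To picture what’s at stake, here’s the information a VPID conveys modelled as a plain data structure (conceptual only; the real four-byte bit packing is defined in SMPTE ST 352 and ST 425-5 and isn’t reproduced here):

```python
from dataclasses import dataclass

@dataclass
class VideoPayloadID:
    """A conceptual model of what a 4-byte ST 352 VPID conveys; field names
    here are illustrative, not the standard's bit assignments."""
    payload_standard: str   # which mapping, e.g. quad-link UHD per ST 425-5
    progressive: bool       # interlaced vs progressive transport
    transfer: str           # transfer characteristics: "SDR", "HLG", "PQ"
    aspect_ratio: str       # e.g. "16:9"
    frame_rate: float       # e.g. 50.0 or 59.94

# Two devices disagreeing on any one field - say, HLG read as SDR - produces
# exactly the subtle mismatches John describes.
```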

12G is the obvious baseband solution to UHD’s four-wires problem. Nowadays the cost of a 12G transceiver is only slightly more than a 3G one, so 12G is a very reasonable solution for many. It does require careful cabling to ensure the cable is in good condition and not too long. For OB trucks and small projects, 12G can work well; for larger installations, optical connections are needed, one signal per fibre.

The move to IP initially went to SMPTE ST 2022-6, which is a mapping of SDI onto IP. This was still quite restrictive, as we were still living within the SDI-described world: 12G was difficult to do, and getting four IP streams correctly aligned, and all switched on time, was also impractical. For UHD, therefore, SMPTE ST 2110 is the natural home. ST 2110 can support video up to 32K, so UHD fits in comfortably. ST 2110-22 allows the use of JPEG XS, so if the 9-11Gbps bitrate of UHDp50/60 is too much it can be squeezed down to around 1.5Gbps with almost no latency. Being carried as a single video flow removes any switch timing problems, and as ST 2110 doesn’t use VPID, there is much more flexibility to fully describe the signal, allowing future growth. We don’t know what’s to come, but whether it’s different shapes of video raster, new colour spaces or extensions needed for IPMX, all are possible.
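
A quick sanity check of those numbers (simple arithmetic on the figures quoted above):

```python
# Trivial arithmetic behind the ST 2110-22 / JPEG XS figures quoted above.
uncompressed_gbps = 10.0   # UHDp50/60 over ST 2110-20 sits around 9-11 Gbps
jpegxs_gbps = 1.5          # the compressed operating point mentioned above
print(f"Compression ratio: roughly {uncompressed_gbps / jpegxs_gbps:.0f}:1")
# Roughly 7:1 - a ratio commonly described as visually lossless for JPEG XS,
# and light enough to add almost no latency.
```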

John finishes his conversation with Wes by mentioning two big benefits of moving to IT-based infrastructure. One is the ability to use free tools such as Wireshark or the EBU’s LIST to analyse video. Whilst there are still good reasons to buy test equipment, the fact that many checks can be done without expensive kit like waveform monitors is good news. The second big benefit is that whilst these standards were being written, the available network infrastructure has moved from 25 to 100 to 400Gbps links, with 800Gbps coming in the next year or two. None of these changes has required any change to the standards, unlike with SDI, where improvements in the signal required improvements in the baseband interface. Rather, the industry is able to take advantage of this new infrastructure with no effort on our part to develop it or to modify the standards.

Watch now!
Speakers

John Mailhot
Systems Architect, IP Convergence,
Imagine Communications
Wes Simpson
RIST AG Co-Chair, VSF
President & Founder, LearnIPvideo.com

Video: IPMX – The Need for a New ProAV Standard

IPMX is an IP specification for interoperability between Pro AV equipment. As the broadcast industry moves towards increasing IP deployments based on SMPTE ST 2110 and AMWA’s NMOS protocols, there’s been a recognition that the Pro AV market needs to do many of the same things broadcast wants to do, and yet there is no open standard in Pro AV to achieve this transformation. Whilst there are a number of proprietary alliances which enable widespread use of a single chip or software core, this interoperability comes at a cost and is ultimately underpinned by one company, or a group of companies.

Dave Chiappini from Matrox discusses the work of the AIMS Pro AV working group with Wes Simpson from the VSF. Dave underlines the fact that this is a pull to unify the Pro AV industry to help people avoid investing over and over again in reinventing protocols or reworking their products to interoperate. He feels that ‘open standards help propel markets forward’ adding energy and avoiding vendor lock-in. This is one reason for the inclusion of NMOS, allowing any vendor to make a control system by working to the same open specification, opening up the market to both small and large companies.

Dave is the first to acknowledge that the Pro AV market’s needs are different to broadcast’s, and explains that the working group has calibrated settings, added some, and ‘carefully relaxed’ other parts of the standards. The aim is a specification which allows one piece of equipment, should the vendor wish to design it this way, to be used in either an IPMX or an ST 2110 system. He explains that relaxing some aspects of the ST 2110 ecosystem helps simplify implementation, which in turn reduces cost.

One key relaxation has been in PTP. A lot of time and effort goes into making PTP work properly within an ST 2110 infrastructure. Having to do this at an event whilst setting up in a short timespan helps no one and, elaborates Dave, a point-to-point video link simply doesn’t need high-precision timing. IPMX, therefore, is lenient about the need for PTP: it will use it when it can, but it will gracefully reduce accuracy and, when there is no grandmaster, will still continue to function.
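
The behaviour Dave describes amounts to a graceful fallback. A toy sketch of that decision (my illustration of the talk’s description, not logic taken from the IPMX specification):

```python
def select_timing_mode(ptp_locked: bool, grandmaster_present: bool) -> str:
    """Conceptual reading of IPMX's relaxed timing: use PTP at full
    precision when locked, tolerate reduced accuracy when a grandmaster
    exists but lock is poor, and free-run when there is none at all."""
    if ptp_locked:
        return "locked"      # full ST 2110-style precision
    if grandmaster_present:
        return "degraded"    # gracefully reduced accuracy, still working
    return "free-run"        # no grandmaster: keep passing video anyway

print(select_timing_mode(ptp_locked=False, grandmaster_present=False))  # free-run
```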

Another difference in the Pro AV market is the need for compression. Whilst there are times when zero compression is needed in both Pro AV and broadcast, Pro AV needs the ability to throw some preview video out to an iPad or similar. This isn’t going to work with JPEG XS, the preferred ‘minimal compression’ codec for IPMX, so a system for including H.264 or H.265 is being investigated, which could have knock-on benefits for broadcast.

HDMI is essential for a Pro AV solution and needs its own treatment. Unlike SDI, it has lots of resolutions and frame rates. It also has HDCP, so AIMS is now working with DCP on creating a method of carrying HDCP over ST 2110. It’s hoped that this work will help broadcast use cases too: TVs are already replacing SDI monitors, and such interoperability with HDMI should bring down the cost of monitoring for non-picture-critical environments.

Watch now!
Speakers

David Chiappini
Chair, Pro AV Working Group, AIMS
Executive Vice President, Research & Development,
Matrox Graphics Inc.
Wes Simpson
RIST AG Co-Chair, VSF
President & Founder, LearnIPvideo.com

Video: Delivering Quality Video Over IP with RIST

RIST continues to gain traction as a way to deliver video reliably over the internet. Reliable Internet Stream Transport is finding uses both as part of the on-air signal chain and in enabling broadcast workflows, ensuring that any packet loss is mitigated before a decoder gets around to decoding the stream.

In this video, AWS Elemental’s David Griggs explains why AWS uses RIST and how RIST works. He’s introduced by LearnIPvideo.com’s Wes Simpson, who is also the co-chair of the RIST Activity Group at the VSF. Wes starts off by explaining how broadcast workflows differ from consumer and business video streaming use-cases, two of the pertinent differences being one-directional video and the need for a fixed delay. David explains that one motivator for broadcasters looking to the internet is the need to replace C-Band satellite links.

RIST’s original goal was to deliver video reliably over the internet while ensuring interoperability between vendors, something which, in the purest sense of the word, has been missing to date. Along with this, RIST also aimed for low, deterministic latency, which is vital to make most broadcast workflows practical. RIST was also designed to be agnostic to the carrier type, be that internet, satellite or cellular.

Wes outlines how important it is to compensate for packet loss, showing that even in what might seem like low packet loss situations, you’ll still observe a glitch in the audio or video every twenty minutes. But RIST is more than just a way of ensuring your video and audio arrive without gaps; it can also support other control signals such as PTZ for cameras, intercom feeds, ad insertion signalling such as SCTE 35, subtitling and timecode. This is one strength that makes RIST better suited to broadcast than using, say, RTMP to deliver a live stream.
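
To see why even ‘low’ loss matters, consider a quick worked example (my own assumed numbers, in the spirit of Wes’s point):

```python
# Rough arithmetic behind "a glitch every twenty minutes" (assumed figures,
# not numbers taken from the video).
bitrate_bps = 10_000_000                 # assume a 10 Mbps contribution stream
payload_bytes = 1316                     # 7 x 188-byte TS packets per datagram
pkts_per_sec = bitrate_bps / (payload_bytes * 8)   # ~950 packets/s
loss_rate = 1e-6                         # "low" loss: one packet in a million
minutes_between_glitches = 1 / (pkts_per_sec * loss_rate) / 60
print(f"~{minutes_between_glitches:.0f} minutes between glitches")  # ~18
```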

Wes covers the Main and Simple Profiles, which are also explained in more detail in this video from SMPTE and this article. One way in which RIST differs from other technologies is GRE tunnelling, which allows the carriage of any data type alongside RIST and also allows bundling of several RIST streams down a single connection. This provides a great amount of flexibility to support new workflows as they arise.
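
For a feel of what that looks like on the wire, here’s a minimal sketch of GRE encapsulation (a simplified illustration; RIST Main Profile, VSF TR-06-2, specifies GRE over UDP with considerably more machinery than shown):

```python
import struct

def gre_encapsulate(inner_ip_packet: bytes, protocol_type: int = 0x0800) -> bytes:
    """Prepend a basic 4-byte GRE header (RFC 2784 style: flags/version,
    then EtherType, 0x0800 for IPv4) to an inner packet before it is sent
    down the single UDP tunnel. Real tunnels also carry keepalives,
    optional encryption and multiple bundled flows."""
    flags_and_version = 0x0000                 # no checksum/key/seq, version 0
    header = struct.pack("!HH", flags_and_version, protocol_type)
    return header + inner_ip_packet

# Anything expressible as an IP packet - more RIST streams, intercom,
# camera control - can ride the same tunnel, which is the flexibility
# described above.
```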

David closes the video by explaining why RIST is important to AWS. It allows a single protocol to support media transfers to, from and within the AWS network. Also important, David explains, is RIST’s standards-based approach: RIST is built out of many standards and RFCs with very little bespoke technology. Moreover, the RIST specification is being formally created by the VSF, and many VSF specifications have gone on to be standardised by bodies such as SMPTE, ST 2110 being a good example. AWS offers the RIST Simple Profile within MediaConnect, with plans to implement the Main Profile in the near future.

Watch now!
Speakers

David Griggs
Senior Product Manager, Media Services,
AWS Elemental
Wes Simpson
RIST AG Co-Chair,
President & Founder, LearnIPvideo.com

Video: RIST Unfiltered – Q&A Session

RIST is a protocol which allows for reliable streaming over lossy networks like the internet. Whilst many people know that much, they may not know more and may well have questions. Today’s video aims to answer the most common ones. For a technical presentation of RIST, look no further than this talk and this article.

Kieran Kunhya deals out the questions to the panel, drawn from the RIST Forum, RIST members and AWS, asking:
Does RIST need third-party equipment?
Is there an open-source implementation of RIST?
Are there any RIST learning courses?
Why should companies use RIST over SRT?
RIST, we hear, is based on RTP, a very widely deployed technology for real-time media transport which is used for SMPTE 2022-2 and -6 streams, SMPTE ST 2110, AES67 and other audio protocols. So not only is it proven, it’s also based on RFCs, as is much of the rest of RIST. SRT, the panel says, is based on the UDT file transfer protocol, which is not an RFC and wasn’t designed for live media transport, although SRT does perform very well for live media.
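
That foundation is concrete enough to show: the fixed RTP header (RFC 3550) is just twelve well-known bytes, and its sequence numbers are what let receivers spot loss in the first place. A minimal parser sketch:

```python
import struct

def parse_rtp_header(datagram: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550) that RIST builds on;
    retransmission requests then travel over the companion RTCP channel."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", datagram[:12])
    return {"version": b0 >> 6,
            "payload_type": b1 & 0x7F,
            "sequence_number": seq,   # gaps here reveal lost packets
            "timestamp": ts,
            "ssrc": ssrc}
```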

“Why are there so many competitors in RIST?” is another common question, answered by talking about the need for interoperability: fostering widespread interoperability will grow the market for these products much more than many smaller protocols would. “What new traction is RIST getting?” is answered by David Griggs from AWS, who says they are committed to the protocol and find that customers like its openness and are thus willing to invest their time in creating workflows based on it. Adi Rozenberg lists many examples of customers who are using the technology today. You can hear David Griggs explain RIST from his perspective in this talk.

Other questions handled include the licence RIST is available under and its open-source implementations, the latency involved in using RIST, and whether it can carry NDI. Sergio explains that NDI is a TCP-based protocol, so you can transmit it by extracting the UDP out of it, using multicast, or using a VizRT tool to extract the media without recompressing. Finally, the panel looks at how to join the RIST Activity Group in the VSF and the RIST Forum, talks about the origin of RIST in an open request to the industry from ESPN, and previews what is coming in the upcoming Advanced Profile.

Watch now!
Speakers

Rick Ackermans
RIST AG Chair,
Director of RF & Transmission Engineering, CBS Television
David Griggs
Senior Product Manager, Media Services,
AWS Elemental
Sergio Ammirata
RIST AG Member,
Chief Science Officer, SipRadius
Adi Rozenberg
RIST Forum Director
AG Member, Co-Founder & CTO, VideoFlow
Ciro Noronha
RIST Forum President and AG Member
EVP of Engineering, Cobalt Digital
Paul Atwell
RIST Forum Director,
President, Media Transport Solutions
Wes Simpson
RIST AG Co-Chair,
President & Founder, LearnIPvideo.com
Kieran Kunhya
RIST Forum Director
Founder & CEO, Open Broadcast Systems