Video: IPMX – The Need for a New ProAV Standard

IPMX is an IP specification for interoperable Pro AV equipment. As the broadcast industry moves towards ever more IP deployments based on SMPTE ST 2110 and AMWA’s NMOS protocols, there’s been a recognition that the Pro AV market needs to do many of the same things broadcast wants to do. However, there is no open standard in Pro AV to achieve this transformation. Whilst there are a number of proprietary alliances which enable widespread use of a single chip or software core, this interoperability comes at a cost and is ultimately underpinned by one company or a group of companies.

Dave Chiappini from Matrox discusses the work of the AIMS Pro AV working group with Wes Simpson from the VSF. Dave underlines that this is a pull to unify the Pro AV industry, helping vendors avoid investing over and over again in reinventing protocols or reworking their products to interoperate. He feels that ‘open standards help propel markets forward’, adding energy and avoiding vendor lock-in. This is one reason for the inclusion of NMOS: allowing any vendor to build a control system against the same open specification opens up the market to both small and large companies.

Dave is the first to acknowledge that the Pro AV market’s needs are different to broadcast’s, and explains that they have calibrated settings, added some, and ‘carefully relaxed’ parts of the standards. The aim is a specification which allows a single piece of equipment, should the vendor wish to design it this way, to be used in either an IPMX or ST 2110 system. He explains that relaxing some aspects of the ST 2110 ecosystem helps simplify implementation, which in turn reduces cost.

One key relaxation has been in PTP. A lot of time and effort goes into making PTP work properly within SMPTE ST 2110 infrastructure. Having to do this at an event whilst setting up in a short timespan is not helpful to anyone and, elaborates Dave, a point-to-point video link simply doesn’t need high-precision timing. IPMX, therefore, is lenient in its need for PTP. It will use it when it can, but will gracefully reduce accuracy and, when there is no grandmaster, will continue to function.
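
To illustrate the idea, here’s a minimal sketch of that fallback behaviour in Python. The `ptp_client` object and its attributes are hypothetical stand-ins, not part of any IPMX API; the point is simply that losing the grandmaster degrades accuracy rather than stopping playback.

```python
import time

class MediaClock:
    """Sketch of IPMX-style timing fallback: track PTP while a
    grandmaster is visible, free-run on the local clock otherwise."""

    def __init__(self, ptp_client):
        self.ptp = ptp_client      # hypothetical: exposes .locked and .time()
        self.last_offset = 0.0     # last known PTP-to-local offset

    def now(self):
        if self.ptp.locked:
            # Grandmaster present: follow it and remember our offset.
            self.last_offset = self.ptp.time() - time.monotonic()
            return self.ptp.time()
        # No grandmaster: keep running on the local oscillator, carrying
        # the last known offset so timestamps stay continuous.
        return time.monotonic() + self.last_offset
```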

Another difference in the Pro AV market is the need for compression. Whilst there are times when zero compression is needed in both AV and broadcast, Pro AV needs the ability to throw some preview video out to an iPad or similar. This isn’t going to work with JPEG XS, the preferred ‘minimal compression’ codec for IPMX, so a system for including H.264 or H.265 is being investigated, which could have knock-on benefits for broadcast.

HDMI is essential for a Pro AV solution and needs its own treatment. Unlike SDI, it has many resolutions and frame rates. It also has HDCP, so AIMS is now working with the DCP on creating a method of carrying HDCP over ST 2110. It’s hoped that this work will help broadcast use cases too: TVs are already replacing SDI monitors, and interoperability with HDMI should bring down the cost of monitoring for non-picture-critical environments.

Watch now!
Speakers

David Chiappini
Chair, Pro AV Working Group, AIMS
Executive Vice President, Research & Development,
Matrox Graphics Inc.
Wes Simpson
RIST AG Co-Chair, VSF
President & Founder, LearnIPvideo.com

Video: Line by Line Processing of Video on IT Hardware

If the tyranny of frame buffers is allowed to continue, line-latency I/O is rendered impossible without increasing frame rate to 60fps or, preferably, beyond. In SDI, hardware was able to process video line by line. Now, with uncompressed video over IP, is the same possible with IT hardware?

Kieran Kunhya from Open Broadcast Systems explains how he has been able to develop line-latency video I/O with SMPTE 2110, how he’s coupled that with low-latency AVC and HEVC encoding and the challenges his company has had to overcome.

The commercial drivers for reducing latency are fairly well known. Firstly, for standard 1080i50, typically treated as 25fps, a single frame buffer gives you a 40ms delay. If a workflow needs multiple buffers, this soon stacks up, so whatever the latency of your codec – uncompressed or JPEG XS, for example – the overall latency will be far above it. In today’s Covid world, companies are looking to cut latency so people can work remotely. This has only intensified the interest, already there for remote production (REMIs), in low-latency feeds. Low latency allows full engagement in conversations, which is vital for news anchors to conduct interviews as well as they would in person.

IP itself has come into its own during recent times: with no-one around to move an SDI cable, being able to log in and scale SMPTE ST 2110 infrastructure up or down remotely is a major benefit. IT equipment has also proven fairly resilient to supply chain disruption during the pandemic, says Kieran, the industry being larger and used to scaling up.

Kieran’s approach to receiving ST 2110 deals in chunks of 5 to 10 lines. This gives you time to process the last few lines whilst you are waiting for the next to arrive. This processing could be de-encapsulation, converting pixel values to another format, or modifying the values to key on graphics.
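
As a rough sketch of that approach – the helper names here are hypothetical, not from Kieran’s implementation – the receive loop hands off small groups of lines as soon as they are complete:

```python
CHUNK_LINES = 8  # within the 5-10 line range mentioned in the talk

def receive_frame(lines_arriving, total_lines, handle_chunk):
    """Hand lines off for processing as they arrive rather than
    buffering a whole frame. lines_arriving is assumed to yield
    (line_number, pixels) in order, e.g. from de-encapsulated
    ST 2110-20 RTP packets; handle_chunk might convert pixel formats
    or key graphics onto the lines."""
    chunk = []
    for line_no, pixels in lines_arriving:
        chunk.append(pixels)
        if len(chunk) == CHUNK_LINES or line_no == total_lines - 1:
            handle_chunk(chunk)  # process while the next packets arrive
            chunk = []
```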

As the world is focussed on delivering in and out of unusual and residential places, low-bitrate is the name of the game. So Kieran looks at low-latency HEVC/AVC encoding as part of an example workflow which takes in ST 2110 video at the broadcaster and encodes to MPEG to deliver to the home. In the home, the video is likely to be decoded natively on a computer, but Kieran shows an SDI card which can be used to deliver in traditional baseband if necessary.

Kieran talks about the dos and don’ts of encoding and decoding AVC and HEVC with low latency, targeting an end-to-end budget of 100ms. The name of the game is to avoid waiting for whole frames, so refreshing the screen with I-frame information in small slices is one way of keeping the decoder supplied with fresh information without taking the full-frame hit of 40ms (for 1080i50). Audio is best sent uncompressed to ensure its latency stays below that of the video.
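
A back-of-the-envelope illustration of why slicing matters – the encode, network and decode figures below are made up for the example, not numbers from the talk:

```python
FPS = 25                # 1080i50 treated as 25 frames per second
FRAME_MS = 1000 / FPS   # 40 ms per full frame

def end_to_end_ms(slices_per_frame, encode_ms, network_ms, decode_ms):
    # Waiting for a whole frame costs FRAME_MS at each end; with
    # slices, each slice can be sent and displayed as soon as ready.
    capture_wait = FRAME_MS / slices_per_frame
    display_wait = FRAME_MS / slices_per_frame
    return capture_wait + encode_ms + network_ms + decode_ms + display_wait

print(end_to_end_ms(1, 10, 20, 10))  # whole frames: 120 ms, budget blown
print(end_to_end_ms(8, 10, 20, 10))  # 8 slices: 50 ms, inside 100 ms
```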

Decoding requires careful handling of slice boundaries, ensuring deblocking is used so that no artefacts are seen. Compressed video is often not PTP-locked, which means that delivery into most ST 2110 infrastructures requires frame synchronisation and audio resampling.

Kieran foresees increasing use of 2110 to MPEG Transport Stream back to 2110 workflows during the pandemic and finishes by discussing the tradeoffs in delivering during Covid.

Watch now!
Speaker

Kieran Kunhya
CEO & Founder, Open Broadcast Systems

Video: Keeping Time with PTP

The audio world has been using PTP for years, but now there is renewed interest thanks to its inclusion in SMPTE ST 2110. Replacing the black and burst timing signal (and, for those that used it, tri-level sync), PTP changes the way we distribute time. B&B was a waterfall distribution; PTP is a bi-directional conversation which, as a system, needs to be monitored and actively maintained.

Michael Waidson from Telestream (who now own Tektronix’s video business) brings us the foundational basics of PTP as well as tips and tricks to troubleshoot your PTP system. He starts by explaining the types of messages exchanged between the clock and the device, and why all these different messages are necessary. We see that we can set the frequency at which the announce, sync and follow-up messages are sent. The sync and follow-up messages actually carry the time. When a device receives one of these, it responds with a ‘delay request’ in order to work out how much delay there is between it and the grandmaster clock, and this results in it receiving a delay response. On top of these basic messages, there is a periodic management message which can contain further information such as daylight saving time or drop-frame information.
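
The arithmetic behind that exchange is simple and worth seeing. With t1 the Sync departure time, t2 its arrival at the device, t3 the delay request’s departure and t4 its arrival back at the grandmaster, the standard PTP calculation (assuming a symmetric path) is:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent by the grandmaster (time carried in the follow-up),
    t2: Sync received by the device,
    t3: delay request sent by the device,
    t4: delay request received at the grandmaster (in the delay response).
    Assumes the path delay is the same in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # device clock error vs grandmaster
    delay  = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Example: device clock is fast by 3 units over a path delay of 5.
print(ptp_offset_and_delay(100, 108, 120, 122))  # -> (3.0, 5.0)
```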

Michael moves on to troubleshooting, highlighting the four main numbers to check: the domain value, grandmaster ID, message rates and communication mode. PTP is a global standard used in many industries. To make PTP most useful to the broadcast industry, SMPTE ST 2059 defines the message repetition rates to use (4 per second for announce messages, 8 for sync, delay request and delay response). ST 2059 also defines how devices can determine the phase of any broadcast signal at any given time, which is the fundamental link needed to keep all devices in sync.
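
The essence of that phase calculation is that every signal is deemed to have been running continuously since the SMPTE epoch, so its phase is fully determined by the current time. A simplified sketch (ignoring the TAI/UTC distinction and 1000/1001 rates, which the standard handles precisely):

```python
from fractions import Fraction

def frame_phase(ptp_seconds, frame_rate=Fraction(25)):
    """Where does the current frame boundary fall? Signals are treated
    as if they had run continuously since the epoch, so time alone
    determines phase. Returns (time into current frame, next boundary)."""
    period = Fraction(1) / frame_rate      # e.g. 40 ms for 25 fps
    elapsed = Fraction(ptp_seconds)
    phase = elapsed % period
    return float(phase), float(elapsed - phase + period)

print(frame_phase(Fraction(123456789, 1000)))  # arbitrary example instant
```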

Another good tip from Michael: if you see the grandmaster MAC flipping between the grandmasters on the system, this indicates the device is not receiving announce messages, so it is re-running the Best Master Clock Algorithm (BMCA) and trying the next grandmaster. Some PTP monitoring equipment, including from Meinberg and from Telestream, can show the phase lag of the PTP timing as well as the delay between the primary and secondary grandmaster – the lower the better.

A talk on PTP can’t avoid mentioning boundary clocks and transparent switches. Boundary clocks take on much of the two-way traffic in PTP, protecting the grandmasters from having to speak directly to the, potentially, thousands of devices. Transparent switches simply update the time announcements with the delay the message incurs in moving through the switch. Whilst this is useful in keeping the timing accurate, it provides no protection for the grandmasters. The video ends with a look at how to check PTP messages on the switch.
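
Conceptually, the transparent switch’s job is a single addition – a sketch, with the message represented as a plain dict rather than a real PTP packet:

```python
def forward_sync(sync_msg, ingress_ns, egress_ns):
    """A transparent switch doesn't answer PTP itself; it adds the
    packet's residence time to the correctionField so downstream
    devices can discount the time spent inside the switch."""
    sync_msg["correction_field_ns"] += egress_ns - ingress_ns
    return sync_msg

msg = {"correction_field_ns": 0}
print(forward_sync(msg, ingress_ns=1_000, egress_ns=9_500))  # +8,500 ns
```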

Watch now!
Speakers

Michael Waidson
Application Engineer
Telestream (formerly Tektronix)

Video: Progress Update for the ST 2110 WAN VSF Activity Group

Is SMPTE ST 2110 suitable for inter-site connectivity over the WAN? ST 2110 is moving past the early-adopter phase, with more and more installations and OB vans bringing 2110 into daily use, but today each site works independently. What if we could maintain a 2110 environment between sites? There are a number of challenges still to be overcome, and moving a large number of essence flows long distances and between PTP time domains is one of them.

Nevion’s Andy Rayner is chair of the VSF Activity Group looking into transporting SMPTE ST 2110 over the WAN and is here to give an update on the work in progress, which started 18 months ago. The presentation looks at how to move media between locations, which has been the primary focus to date, then discusses how control over which media are shared will be handled, as this is a new aspect of the work. Andy starts by outlining the protection offered in the scheme, which supports both 2022-7 and FEC, then explains that though FEC is valuable for single links where 2022-7 isn’t viable, only some of the possible ST 2022-5 FEC configurations are supported, in part to keep latency low.

The headline of carrying 2110 over the WAN is that it will be done over a trunk, using GRE, a widely used trunking technology originally from Cisco. Trunking, also known as tunnelling, is a technique for carrying ‘private’ traffic over a network such that a device sending into the trunk doesn’t see any of the infrastructure between the entrance and the exit. It allows, for instance, IPv6 traffic to be carried over IPv4 equipment: the IPv4 equipment has no idea about the IPv6 data since it’s been wrapped in an IPv4 envelope, and the IPv6 equipment has no idea its data is being carried by routers which don’t understand IPv6, since the wrapping and unwrapping are done transparently at the handoff.
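
The scapy snippet below illustrates the idea (the addresses are documentation examples, not from the talk): the WAN only ever routes on the outer header, while the inner site-to-site addressing travels through untouched.

```python
from scapy.all import IP, UDP, GRE, Raw

# Inner packet: an RTP media flow between the sites' private addresses.
inner = (IP(src="10.10.0.1", dst="10.20.0.2")
         / UDP(sport=5004, dport=5004)
         / Raw(b"rtp payload..."))

# Outer packet: the trunk between the two sites' WAN gateways. Routers
# along the WAN route only ever look at this outer header.
trunk = IP(src="198.51.100.1", dst="203.0.113.1") / GRE() / inner

trunk.show()  # the inner addressing is opaque to everything en route
```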

In the context of SMPTE ST 2110, a trunk allows one port to carry a single connection to the destination containing many individual media streams. The first big benefit is simplifying the inter-site connectivity at the IT level, but importantly it also means the single connection is quite high bandwidth. When FEC is applied to a connection, the latency introduced grows as the bit rate falls. Since ST 2110 carries audio and metadata separately, FEC-protected streams would otherwise suffer variable latency depending on the type of traffic; bundling them into one large data stream allows FEC to be applied once, and all traffic then suffers the same latency increase. The third reason is to ensure all essences take the same network path: if each connection were separate, some could be routed along a physically different route and therefore be subject to a different latency.
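
To see why low-bitrate streams suffer, consider the time it takes to fill one FEC matrix – roughly the buffering a receiver needs before it can repair losses. The matrix dimensions and bitrates below are illustrative, not figures from the talk:

```python
def fec_matrix_fill_ms(l_cols, d_rows, packet_bytes, stream_mbps):
    """Time to accumulate one L x D FEC matrix of packets: roughly the
    buffering delay FEC adds at the receiver before repair is possible."""
    matrix_bits = l_cols * d_rows * packet_bytes * 8
    return matrix_bits / (stream_mbps * 1e6) * 1000

print(fec_matrix_fill_ms(10, 10, 1400, 2))     # 2 Mb/s audio alone: 560 ms
print(fec_matrix_fill_ms(10, 10, 1400, 1500))  # 1.5 Gb/s bundle: ~0.75 ms
```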

Entering the last part of the talk, Andy switches gears to discuss how site A can control streams in site B. The answer is that it doesn’t ‘control’; rather, there is the concept of requesting streams. Site A declares what is available, and site B can state what it would like to connect to and when. In response, site A can accept and promise to have those sources available at the WAN interface at the right time. When the time comes, they are released over the WAN. This protects the WAN connectivity from being filled with media which isn’t actually being used. These exchanges are mediated and carried out with NMOS IS-04 and IS-05.
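
In IS-05 terms, ‘state what it would like to connect to and when’ maps naturally onto a scheduled activation staged against a receiver. A hedged sketch follows – the URL, IDs and timestamp are invented for illustration, and a real deployment would also stage transport parameters:

```python
import requests

# Hypothetical IS-05 Connection API endpoint on the site-B gateway; the
# URL shape follows AMWA IS-05, but the host and IDs are invented here.
staged = ("https://siteb-gw.example/x-nmos/connection/v1.1/"
          "single/receivers/0d6cb8a2-9f3e-4b1d-8c5a-2f7e913a44d0/staged")

patch = {
    "sender_id": "7fbe3c1a-52d4-4e9b-a6f0-8d21c09b5e77",  # stream found via IS-04
    "master_enable": True,
    "activation": {
        "mode": "activate_scheduled_absolute",
        "requested_time": "1700000000:0",  # TAI <seconds>:<nanoseconds>
    },
}

resp = requests.patch(staged, json=patch, timeout=5)
resp.raise_for_status()  # site A commits to releasing the stream on time
```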

Watch now!
Speakers

Andy Rayner
Chief Technologist, Nevion,
Chair, WAN IP Activity Group, VSF
Moderator: Wes Simpson
Founder, LearnIPVideo.com
Co-chair RIST Activity Group, VSF