Video: IPMX for Broadcast Installations?

IPMX, the new ProAV IP challenger spec, is taking shape, promising to tame SMPTE’s ST 2110 standards, make PTP usable and extend AMWA NMOS into managing HDCP. Is this a tall order, and can it actually deliver? Taking us through the ins and outs is Jean Lapierre from Matrox.

With or without IPMX, ProAV is moving to IP, whether with SDVoE, ZeeVee or something else. There are a number of competing technologies, but we hear from Jean that IPMX is the only software-defined one. This matters because no dedicated chip is needed to be an IPMX product and participate in ProAV workflows, so anything can support IPMX: PCs, laptops, even mobile phones.

IPMX is based on RTP, ST 2110, ST 2059 PTP and the AMWA specifications IS-04 and IS-05, plus IS-08 for audio channel mapping and IS-11 for EDID handling, as well as NMOS security and best-practice guidance. This seems like a lot, but to cover media transport, registration, control, security and interfacing with display screens, this is the range of technology needed.
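
To make this concrete, here’s a minimal sketch of what an IS-05 connection request looks like: a controller PATCHes the staged endpoint of a receiver to subscribe it to a sender. The node URL and UUIDs below are hypothetical; in a real system they’d be discovered through the IS-04 registry.

```python
# Hedged sketch of an AMWA NMOS IS-05 connection request.
# node_url and the UUIDs are illustrative placeholders.
import json
import urllib.request

node_url = "http://10.0.0.20/x-nmos/connection/v1.1"    # hypothetical node
receiver_id = "7f9d4a2c-0000-0000-0000-000000000001"    # hypothetical UUID

staged = {
    "sender_id": "3b8c1e5f-0000-0000-0000-000000000002",  # hypothetical
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
}

req = urllib.request.Request(
    f"{node_url}/single/receivers/{receiver_id}/staged",
    data=json.dumps(staged).encode(),
    headers={"Content-Type": "application/json"},
    method="PATCH",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, json.loads(resp.read()))
```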

Compared to SMPTE ST 2110, the PTP profile is easier to deploy and produces less traffic, explains Jean, and IPMX even works without PTP thanks to its support for asynchronous signals. Support for HDCP is included, along with a lower-latency FEC mode for those who find 2022-7 too costly or impractical to deploy. Lastly, Jean points out that thanks to the built-in support for JPEG XS, IPMX can support UHD workflows within a 1GbE infrastructure.
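
As a rough sanity check of that 1GbE claim, here’s the back-of-envelope arithmetic, assuming 4:2:2 10-bit UHD at 60fps and an illustrative 12:1 JPEG XS compression ratio (actual ratios depend on the profile chosen):

```python
# Back-of-envelope check that JPEG XS can fit UHD into 1GbE.
# 4:2:2 10-bit is 20 bits/pixel; the 12:1 ratio is an assumption.
width, height, fps, bits_per_pixel = 3840, 2160, 60, 20
uncompressed_bps = width * height * fps * bits_per_pixel
ratio = 12
compressed_bps = uncompressed_bps / ratio
print(f"Uncompressed: {uncompressed_bps / 1e9:.1f} Gb/s")      # ~10.0 Gb/s
print(f"JPEG XS @ {ratio}:1: {compressed_bps / 1e6:.0f} Mb/s") # ~830 Mb/s
```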

Jean continues by discussing compatibility between 2110 and IPMX. In principle, IPMX and 2110 senders and receivers are interchangeable. Jean goes into more detail, but an example would be IPMX managing the HDCP encryption of a source using AMWA NMOS IS-11. IS-11 is, naturally, available to any other technology including ST 2110. If it’s adopted, HDCP-protected material can flow between the two systems.

Watch now!
Speaker

Jean Lapierre
Senior Director, Advanced Technologies,
Matrox

Video: AES67 Over Wide Area Networks

AES67 is a widely adopted standard for moving PCM audio from place to place. Being a standard, it’s ideal for connecting together equipment from different vendors, delivering lossless audio with almost zero latency. This video looks at use cases for moving AES67 from its traditional home on a company’s LAN out onto the WAN.

Discovery’s Eurosport Technology Transformation (ETT) project is a great example of a compelling use case for moving to operations over the WAN. Eurosport’s Olivier Chambin explains that the idea behind the project is to centralise all the processing technology needed for their productions spread across Europe, feeding their 60 playout channels.

Control surfaces and some interface equipment are still necessary in the European production offices and commentary points, but the processing is done in two data centres, one in the Netherlands, the other in the UK. This means audio does need to travel between countries over Discovery’s dual MPLS WAN, using IGMPv3 multicast with SSM (source-specific multicast).
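
For the curious, an SSM subscription is just a (source, group) join at the receiver. A minimal sketch on Linux might look like this, with illustrative addresses (IP_ADD_SOURCE_MEMBERSHIP and the struct layout are Linux-specific):

```python
# Sketch of an IGMPv3 source-specific multicast (SSM) join on Linux.
# Addresses are illustrative placeholders.
import socket

GROUP = "232.10.10.1"   # SSM address range is 232/8 (illustrative)
SOURCE = "10.1.1.50"    # the one sender we accept traffic from (illustrative)
PORT = 5004

# Python doesn't expose this constant on every platform; 39 is the Linux value
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Linux struct ip_mreq_source: multicast group, local interface, source address
mreq = (socket.inet_aton(GROUP)
        + socket.inet_aton("0.0.0.0")
        + socket.inet_aton(SOURCE))
sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)   # first RTP packet from the (S,G) pair
print(len(data), addr)
```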

From a video perspective, the ETT project has adopted 2110 for all essences with NMOS control. Over the WAN, video is sent as JPEG XS, but all audio links are ST 2110-30 protected with 2022-7, with well over 10,000 audio streams in total. Timing is done using PTP-aware switches and local GNSS-derived PTP, with unicast PTP over the WAN as a fallback. For more on PTP over WAN, have a look at this RTS webinar and this update from Meinberg’s Daniel Boldt.

Bolstering the push for standards such as AES67 is self-confessed ‘audioholic’ Anthony P. Kuzub from Canada’s CBC. Chair of the local AES section, he makes the point that broadcast workflows have long used AES standards to ensure vendor interoperability, from microphones to analogue connectors, from grounding to MADI (AES10). This is why AES67 is important: it will ensure that the next generation of equipment can also interoperate.

Surrounding these two case studies is a presentation from Nicolas Sturmel all about the AES SC-02-12-M working group, which aims to define the best ways of working to enable easy use of AES67 on the WAN. The key issue here is that AES67 was written expecting short links on a private network that you completely control. Moving to a WAN or the internet, with long-distance links on which your bandwidth or choice of protocols is limited, can make AES67 perform badly if you don’t follow the best practices.

To start with, Nicolas urges anyone to check they actually need AES67 over the WAN at all. Only if you need precise timing (for lip-sync, for example) with PCM quality and low latencies, from 250ms down to as little as 5ms, do you really need AES67 instead of other protocols such as ACIP, he explains. The problem is that any ping on the internet, even to somewhere fairly close, can easily take 16 to 40ms for the round trip. That means you’re guaranteed at least 8ms of one-way delay, but any one packet could take as long as 20ms; this variation is known as Packet Delay Variation (PDV).
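
The arithmetic behind that example is worth making explicit; it assumes the forward and return paths are symmetric:

```python
# Worked version of Nicolas's ping example: a 16-40 ms round trip
# implies roughly 8-20 ms one way, so arrival delays can vary by 12 ms,
# the Packet Delay Variation the receiver has to absorb.
rtt_min_ms, rtt_max_ms = 16, 40
one_way_min = rtt_min_ms / 2        # 8 ms: the best any packet can do
one_way_max = rtt_max_ms / 2        # 20 ms: the worst case observed
pdv = one_way_max - one_way_min     # 12 ms of variation to buffer out
print(f"one-way delay {one_way_min}-{one_way_max} ms, PDV {pdv} ms")
```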

Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links, where the delay in one direction may differ from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over the WAN can be done and is a way to deliver a service, but using a GPS receiver at each location, as Eurosport does, is a much better solution, hampered only by cost and one’s ability to see enough of the sky.
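
A small worked example shows why asymmetry hurts: PTP’s offset calculation assumes the two directions take equally long, so any asymmetry shows up directly as a clock error of half the difference. The timestamps below are illustrative:

```python
# Why asymmetric WAN paths break PTP. Times in milliseconds; the delays
# and offset are assumed values for illustration only.
t1 = 0.0           # Sync message sent by the grandmaster (master time)
fwd_delay = 12.0   # master -> slave path delay (assumed)
rev_delay = 4.0    # slave -> master path delay (assumed, asymmetric!)
true_offset = 5.0  # the slave clock really runs 5 ms ahead

t2 = t1 + fwd_delay + true_offset    # Sync arrival (slave time)
t3 = t2 + 1.0                        # Delay_Req sent shortly after (slave time)
t4 = t3 - true_offset + rev_delay    # Delay_Req arrival (master time)

# Standard PTP offset estimate, which assumes symmetric delay
estimated_offset = ((t2 - t1) - (t4 - t3)) / 2
print(estimated_offset)              # 9.0 ms, not the true 5.0 ms
print((fwd_delay - rev_delay) / 2)   # the 4.0 ms error = asymmetry / 2
```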

The internet can lose packets. Given a few hours, the internet will nearly always lose packets. To get around this problem, Nicolas looks at using FEC, whereby you are constantly sending redundant data. FEC typically sends up to around 25% extra data so that if any is lost, the extra information can be used to reconstruct the missing packets. Whilst this is a solid approach, computing the FEC adds delay and the constantly transmitted extra data adds a fixed uplift to your bandwidth needs. For circuits that have very few issues this can seem wasteful, but a fixed percentage can also be advantageous on circuits where a predictable bitrate is more important. Nicolas also highlights that RIST, SRT and ST 2022-7 are other methods that can work well; he discusses these at greater length in his talk with Andreas Hildebrand.
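
To illustrate the principle, here’s a toy XOR-parity FEC: one parity packet per block of four (a 25% overhead) lets the receiver rebuild any single lost packet in that block. Real schemes, such as the row/column FEC of SMPTE ST 2022-1, are more elaborate:

```python
# Toy FEC: XOR parity over a block of equal-sized packets. One parity
# packet per four media packets = 25% overhead, recovers one loss/block.
from functools import reduce

def parity(packets: list[bytes]) -> bytes:
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

block = [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]  # equal-sized media packets
fec = parity(block)                           # sent alongside the block

# Receiver side: packet index 2 ("pkt3") was lost in transit.
received = [block[0], block[1], block[3]]
recovered = parity(received + [fec])          # XOR of survivors + parity
print(recovered == block[2])                  # True: the loss is rebuilt
```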

The video concludes with a Q&A.

Watch now!
Speakers

Nicolas Sturmel
Product Manager – Senior Technologist,
Merging Technologies
Anthony P. Kuzub
Senior Systems Designer,
CBC/Radio Canada
Olivier Chambin
Audio Broadcast Engineer, AoIP and Voice-over-IP
Eurosport Discovery

Video: AES67 Beyond the LAN

It can be tempting to treat a good-quality WAN connection like a LAN. But even if it has a low ping time and doesn’t drop packets, when it comes to professional audio like AES67, you can’t help but uncover the differences. AES67 was designed for transmission over short distances, meaning extremely low latency and low jitter. However, there are ways to deal with this.

Nicolas Sturmel from Merging Technologies is working as part of the AES SC-02-12-M working group, which has been defining the best ways of working to enable easy use of AES67 on the WAN since the summer. The aims of the group are to define what you should expect to work with AES67, how you can improve your network connection, and to give guidance to manufacturers on further features needed.

WANs come in a number of flavours: at one end is the fully controlled WAN that many larger broadcasters run themselves. Other WANs are operated under SLA by third parties, which provide less control but may come at a reduced operating cost. The lowest-cost option is the internet.

He starts by outlining that AES67 was written expecting short links on a private network you can completely control, which causes problems on the WAN or internet with long-distance links where your bandwidth or choice of protocols can be limited. If you’re contributing into the cloud, there’s an extra layer of complication on top of the WAN: virtualised computers are another place where jitter and uncertain timing can creep in.

The good news is that you may not need to use AES67 over the WAN. Only if you need precise timing (for lip-sync, for example) with PCM quality and low latencies, from 250ms down to as little as 5ms, do you really need AES67 instead of other protocols such as ACIP, he explains. The problem is that any ping on the internet, even to somewhere fairly close, can easily have a varying round-trip time of, say, 16 to 40ms. This means you’re guaranteed at least 8ms of one-way delay, but any one packet could take as long as 20ms. This variation in timing is known as Packet Delay Variation (PDV).
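
That PDV figure translates directly into receiver buffering: the buffer must be at least as long as the PDV, so that even the slowest packet has arrived by its playout deadline. A sketch, with an assumed 50% safety margin:

```python
# Sizing a receive buffer against PDV, using the 16-40 ms ping example
# and assuming symmetric paths. The safety margin is an assumption.
one_way_min_ms, one_way_max_ms = 8.0, 20.0
pdv_ms = one_way_max_ms - one_way_min_ms      # 12 ms of delay variation

safety_margin = 1.5                           # assumed 50% headroom
buffer_ms = pdv_ms * safety_margin            # 18 ms of buffering
end_to_end_ms = one_way_min_ms + buffer_ms    # 26 ms total latency
print(f"buffer {buffer_ms} ms -> end-to-end latency {end_to_end_ms} ms")
```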

Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links, where the delay in one direction may differ from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over the WAN can be done and is a way to deliver a service, but using a GPS receiver at each location is a much better solution, hampered only by cost and one’s ability to see enough of the sky.

The internet can lose packets. Given a few hours, the internet will nearly always lose packets. To get around this problem, Nicolas looks at using FEC, whereby you are constantly sending redundant data. FEC typically sends up to around 25% extra data so that if any is lost, the extra information can be used to reconstruct the missing packets. Whilst this is a solid approach, computing the FEC adds delay and the constantly transmitted extra data adds a fixed uplift to your bandwidth needs. For circuits that have very few issues this can seem wasteful, but a fixed percentage can also be advantageous on circuits where a predictable bitrate is more important. Nicolas also highlights that RIST, SRT and ST 2022-7 are other methods that can work well; he discusses these at greater length in his talk with Andreas Hildebrand.
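
To put that bandwidth uplift into context, here’s what 25% FEC overhead costs on a fairly typical AES67 stream: eight channels of 48kHz/24-bit PCM, ignoring IP/UDP/RTP header overhead:

```python
# Cost of a 25% FEC uplift on an illustrative AES67 stream:
# 8 channels of 48 kHz / 24-bit PCM, headers ignored for simplicity.
channels, sample_rate, bytes_per_sample = 8, 48_000, 3
payload_bps = channels * sample_rate * bytes_per_sample * 8   # ~9.2 Mb/s
fec_overhead = 0.25
total_bps = payload_bps * (1 + fec_overhead)
print(f"{payload_bps / 1e6:.1f} Mb/s audio -> "
      f"{total_bps / 1e6:.1f} Mb/s with FEC")                 # ~11.5 Mb/s
```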

Nicolas finishes by summarising that your solution will need to be sent over unicast IP, possibly in a tunnel, with each end locked to GNSS, large buffers to cope with jitter and, perhaps most importantly, a workflow analysis to find out which tools you need to deploy to meet your actual needs.

Watch now!
Speaker

Nicolas Sturmel
Network Specialist,
Merging Technologies

Video: Public Internet Transport of Live Broadcast Video – SRT, NDI and RIST for Compressed Video

Getting video over the internet and around the cloud has well-established solutions, but not only are they continuing to evolve, they are still new to some. This video looks at workflows made possible by teaming up SRT, RIST and NDI, with a glimpse into projects that went live in 2020. We also get a deeper look at RIST’s features with a Q&A.

This video from SMPTE’s New York section starts with Bryan Nelson from Alpha Video, who’s been involved in many cloud-based NDI projects, many of which also use SRT to get in and out of the cloud. NDI is a lightly compressed, low-delay codec suitable for production that works well on 1GbE networks. Not dependent on multicast, it’s a technology that lends itself to cloud-based production, where it’s found many uses. Bryan looks at a number of workflows enabled by the Sienna production system, which can use many video formats including NDI.

For more information on SRT and RIST, have a look at this SMPTE video outlining how they work and the differences. For a deeper dive into NDI, this SMPTE webinar with VizRT explains how it works and also gives demos of the same software that Bryan uses. To get a feel for how NDI fits in with live production compared to SMPTE’s uncompressed ST 2110, the IBC panel discussion ‘Where can SMPTE ST 2110 and NDI Co-exist?’ explores the topic further.

Bryan’s first example is the 2020 NFL Draft, which used remote contribution from iPhones streaming over SRT. All streams were aggregated in AWS, converted to NDI, fed into NDI multiviewers and routed. They were then passed down to on-prem NDI processors, HP ProLiant servers, outputting SDI for handoff to other broadcast workflows. The router could be controlled by soft panels but also by hardware panels on-prem. Bryan explores an extension to this idea where multiple cloud domains can be used, with NDI as the handoff between them. In one cloud system, VizRT vision mixing and graphics can be added, with multiviewers and other outputs sent via SRT to remote directors, producers and the like. Another cloud system could be controlled by a third party doing further processing before the result is sent onward and decoded to SDI on-prem. All this can be entirely separate from acquisition, with SDI and NDI cameras located elsewhere. SRT and NDI become the mediators of this decentralised production environment.

Bryan finishes off by talking about remote NLE monitoring and various types of MCR monitoring. NLE editing is made easy through NDI integration within Adobe Premiere and Avid Media Composer. It’s possible to bring all of these into a processing engine and move them over the public internet for viewing elsewhere via Apple TV or otherwise.

Ciro Noronha from Cobalt Digital takes the last half of the video to talk about RIST. In addition to the talks mentioned above, Ciro recently gave a talk exploring the many RIST use cases. A good written overview of RIST can be found here.

Ciro looks at the two published profiles that form RIST, the simple and the main profile. The simple profile delivers RTP interoperability with error correction based on re-requested packets, with the option of bonding links. Ciro covers its use of RTCP for maintaining the channel and carrying the negative acknowledgements (NACKs), which are based on RFC 4585. RIST can bond multiple links or use 2022-7 seamless switching.
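
The receiver-side logic behind those NACKs is conceptually simple: watch the RTP sequence numbers, spot the gaps and ask for the missing packets again. The sketch below is purely illustrative; the actual feedback messages follow RFC 4585 and the RIST specification:

```python
# Illustrative sketch of NACK generation: detect gaps in the 16-bit RTP
# sequence numbers and report them for retransmission.
def find_missing(expected: int, received_seq: int, modulo: int = 1 << 16):
    """Return the sequence numbers skipped between expected and received."""
    return [(expected + i) % modulo
            for i in range((received_seq - expected) % modulo)]

expected = 1000
for seq in [1000, 1001, 1004, 1005]:   # packets 1002 and 1003 were lost
    gap = find_missing(expected, seq)
    if gap:
        print(f"NACK {gap}")           # -> NACK [1002, 1003]
    expected = (seq + 1) % (1 << 16)
```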

The main profile builds on the simple profile by adding encryption, authentication and tunnelling. Tunnels allow multiple flows down one connection, which simplifies firewall configuration and encryption and allows either end to initiate the bi-directional link. The tunnel can also carry non-RIST traffic for any other purpose. The tunnels are GRE over UDP (RFC 8086). DTLS is used for encryption, which is almost identical to the TLS used to secure websites. DTLS uses certificates, meaning you get to authenticate the other end, not just encrypt the data. Alternatively, you can use a pre-shared passphrase, which avoids the need for certificates where they’re not wanted or for one-to-many distribution. Ciro concludes by showing that it can work with up to 50% packet loss and answers many questions in the Q&A.
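
As a conceptual sketch of that encapsulation, the GRE-in-UDP framing of RFC 8086 amounts to a small GRE header in front of the inner packet, carried in an ordinary UDP datagram; a real RIST main profile tunnel adds DTLS or pre-shared-key encryption on top. The address and payload here are illustrative:

```python
# Conceptual sketch of GRE-over-UDP framing (RFC 8086), as used by the
# RIST main profile's tunnels. Destination address/payload are placeholders.
import socket
import struct

GRE_IN_UDP_PORT = 4754     # IANA-assigned UDP port for GRE-in-UDP
ETHERTYPE_IPV4 = 0x0800    # inner payload here is an IPv4 packet

def gre_wrap(inner_packet: bytes) -> bytes:
    # Minimal 4-byte GRE header: flags/version = 0, then the protocol type
    return struct.pack("!HH", 0, ETHERTYPE_IPV4) + inner_packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(gre_wrap(b"...inner IP packet..."),
            ("192.0.2.1", GRE_IN_UDP_PORT))   # documentation address
```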

Watch now!
Speakers

Bryan Nelson
Sales Account Executive,
Alpha Video
Ciro Noronha
President, RIST Forum
Executive Vice President of Engineering, Cobalt Digital