Video: AES67 Beyond the LAN

It can be tempting to treat a good quality WAN connection like a LAN. But even if it has a low ping time and doesn’t drop packets, professional audio like AES67 will soon uncover the differences. AES67 was designed for transmission over short distances, meaning extremely low latency and low jitter. However, there are ways to deal with this.

Nicolas Sturmel from Merging Technologies is part of the AES SC-02-12M working group, which since the summer has been defining the best ways of working to enable easy use of AES67 on the WAN. The aims of the group are to define what you should expect to work with AES67, how you can improve your network connection and to give guidance to manufacturers on further features needed.

WANs come in a number of flavours: some larger broadcasters have a fully managed WAN which they control end to end. Other WANs are operated under SLA by third parties, which provides less control but may come at a reduced operating cost. The lowest-cost option is the internet.

He starts by outlining the fact that AES67 was written to expect short links on a private network that you completely control, which causes problems when using the WAN/internet with long-distance links on which your bandwidth or choice of protocols can be limited. If you’re contributing into the cloud, there’s an extra layer of complication on top of the WAN: virtualised computers are another place where jitter and uncertain timing can creep in.

The good news is that you may not need to use AES67 over the WAN. If you don’t need precise timing (for lip-sync, for example) with PCM quality and low latencies, from 250ms down to as little as 5ms, do you really need AES67 instead of other protocols such as ACIP, he asks. The problem is that any ping on the internet, even to somewhere fairly close, can easily have a round-trip time varying between, say, 16 and 40ms. This means you’re guaranteed 8ms of one-way delay, but any one packet could be as late as 20ms. This variation in timing is known as Packet Delay Variation (PDV).
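The arithmetic above can be made concrete with a small sketch (illustrative only, not from the talk): given a set of measured round-trip times, and assuming a symmetric path, the one-way delay bounds and the PDV fall straight out.

```python
# Illustrative sketch: estimating one-way delay bounds and Packet Delay
# Variation (PDV) from a set of round-trip (ping) times in milliseconds.
def delay_stats(rtt_ms):
    """Return (min one-way, max one-way, PDV), assuming a symmetric path
    so that one-way delay is roughly RTT / 2."""
    one_way = [rtt / 2 for rtt in rtt_ms]
    return min(one_way), max(one_way), max(one_way) - min(one_way)

# RTTs varying between 16ms and 40ms, as in the example above:
lo, hi, pdv = delay_stats([16, 22, 31, 40, 25])
print(lo, hi, pdv)  # 8.0 20.0 12.0
```

Any receive buffer has to be sized for the PDV, not the minimum delay, which is why jittery links force latency up even when the average delay is low.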

Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links where the delay in one direction may differ from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over the WAN can be done and is one way to deliver a service, but using a GPS receiver at each location is a much better solution, hampered only by cost and one’s ability to see enough of the sky.
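To see why asymmetric delay is such a problem, consider the standard PTP offset calculation from the four Sync/Delay_Req timestamps. This sketch (an illustration of the maths, not code from the talk) shows that a path asymmetry of 20ms puts a 10ms error on a clock that is actually perfectly aligned:

```python
# Hedged sketch: why asymmetric WAN paths break PTP's offset estimate.
# PTP assumes the master->slave and slave->master delays are equal.
def ptp_offset(t1, t2, t3, t4):
    """Standard PTP offset estimate: t1 = master sends Sync, t2 = slave
    receives it, t3 = slave sends Delay_Req, t4 = master receives it."""
    return ((t2 - t1) - (t4 - t3)) / 2

true_offset = 0.0                      # slave clock is actually aligned
d_forward, d_reverse = 30e-3, 10e-3    # asymmetric delays: 30ms vs 10ms

t1 = 100.0
t2 = t1 + true_offset + d_forward
t3 = 200.0
t4 = t3 - true_offset + d_reverse

est = ptp_offset(t1, t2, t3, t4)
print(est)  # ~0.01 -> a 10ms error, exactly (d_forward - d_reverse) / 2
```

A GNSS receiver at each site sidesteps this entirely because neither end needs to exchange timing packets over the asymmetric path.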

The internet can lose packets; given a few hours, it nearly always will. To get around this problem, Nicolas looks at using FEC, whereby you constantly send redundant data. FEC can add up to around 25% extra data so that if any packets are lost, the extra information can be used to reconstruct the missing values and repair the stream. Whilst this is a solid approach, computing the FEC adds delay and the extra data being constantly sent adds a fixed uplift on your bandwidth needs. For circuits that have very few issues this can seem wasteful, but a fixed percentage can also be an advantage on circuits where a predictable bitrate is more important. Nicolas also highlights that RIST, SRT and ST 2022-7 are other methods that can work well. He discusses these at more length in his talk with Andreas Hildebrand.
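The core FEC idea can be shown with the simplest possible scheme, a sketch under stated assumptions rather than any real wire format: one XOR parity packet per group of four data packets (the 25% overhead mentioned above) lets the receiver rebuild any single lost packet in that group. Real schemes such as ST 2022-5 are considerably more elaborate.

```python
# Illustrative row-parity FEC: XOR four equal-length packets into one
# parity packet; any single lost packet can then be reconstructed.
def xor_parity(packets):
    """Return the byte-wise XOR of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

group = [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]
parity = xor_parity(group)   # the 25% redundancy sent alongside the data

# Suppose packet index 2 is lost in transit: XOR the survivors with the
# parity packet to recover it.
received = [group[0], group[1], group[3], parity]
recovered = xor_parity(received)
print(recovered)  # b'pkt3'
```

Note the trade-off the talk describes: the sender must hold a whole group before emitting parity, which is exactly where the added delay comes from.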

Nicolas finishes by summarising that your solution will need to be sent over unicast IP, possibly in a tunnel, with each end locked to GNSS, high buffers to cope with jitter and, perhaps most importantly, the output of a workflow analysis to find out which tools you need to deploy to meet your actual needs.

Watch now!
Speaker

Nicolas Sturmel
Network Specialist,
Merging Technologies

Video: Creating Interoperable Hybrid Workflows with RIST

TV isn’t made in one place anymore. Throughout media and entertainment, workflows increasingly involve many third parties and the cloud. Content may be king, but getting it from place to place is foundational to our ability to do great work. RIST is a protocol that can move video very reliably and flexibly between buildings and into, out of and through the cloud. Leveraging its flexibility, there are many ways to use it. This video reviews where RIST is up to in its development and the many ways in which it can be used to solve your workflow problems.

Starting the RIST overview is Ciro Noronha, chair of the RIST Forum. Whilst we have delved into the detail here before in talks like this from SMPTE and this talk also from Ciro, this is a good refresher on the main points: RIST is published in three parts, known as profiles. First was the Simple Profile, which defined the basics: it’s based on RTP and uses ARQ to dynamically request any missing packets in a timely way which doesn’t trip up the stream when there are problems. The Main Profile was published second and adds encryption and authentication. Last is the Advanced Profile, which will be released later this year.

Ciro outlines the importance of the Simple Profile: it guarantees compatibility with RTP-only decoders, albeit without error correction. When you can use the error correction, you’ll benefit from recovery even when 50% of the traffic is being lost, unlike similar protocols such as SRT. Another useful feature for many is multi-link support, allowing you to use RIST over bonded LTE modems as well as with SMPTE ST 2022-7.

The Main Profile brings with it support for tunnelling, meaning you can set up one connection between two locations and put multiple streams of data through it. This is great for simplifying connectivity because only one port needs to be opened in order to deliver many streams, and it doesn’t matter in which direction you establish the tunnel: once established, it is bi-directional. The tunnel can also carry general data such as control traffic or miscellaneous IT.
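The multiplexing idea behind a tunnel can be sketched in a few lines. This is purely conceptual, with an invented two-byte header, and is not the RIST wire format (the Main Profile actually uses a GRE-based tunnel): each packet is tagged with a stream identifier on the way in and sorted by it on the way out.

```python
# Conceptual mux/demux: several streams share one tunnel port by carrying
# a stream identifier in front of each payload. Header format is invented
# for illustration only.
import struct

def mux(stream_id, payload):
    # 2-byte big-endian stream id, then the payload
    return struct.pack("!H", stream_id) + payload

def demux(packet):
    (stream_id,) = struct.unpack("!H", packet[:2])
    return stream_id, packet[2:]

tunnel = [mux(1, b"video"), mux(2, b"audio"), mux(3, b"control")]
for pkt in tunnel:
    sid, data = demux(pkt)
    print(sid, data)
```

The operational win is exactly what the paragraph above describes: one firewall hole, many streams, regardless of which side dialled out first.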

Encryption made its debut with the publishing of the Main Profile. RIST can use DTLS, a version of the well-known TLS security used on websites that runs over UDP rather than TCP. The big advantage of using this is that it brings authentication as well as encryption, ensuring that the endpoint is allowed to receive your stream, and it is based on strong encryption we are familiar with, tested and hardened over the years. Certificate distribution can be difficult and disproportionate to the needs of the workflow, so RIST also allows encryption using pre-shared keys.

Handing over now to David Griggs and Tim Baldwin, the discussion moves to the use cases enabled by RIST, which is already found in encoders, decoders and gateways on the market. One use case on the rise is satellite replacement. Many companies have been using satellite for years, and for them the lack of operational agility hasn’t been a problem; indeed, they’ve been able to make a business model work for occasional use even though, in a pure sense, satellite isn’t perfectly suited to it. However, with access to C-band closing in many parts of the world, companies have been forced to look elsewhere for their links, and RIST is one solution that works well.

David runs through a number of others including primary and secondary distribution, link aggregation, premium sports syndication with the handoff between the host broadcaster and the multiple rights-holding broadcasters being in the cloud, and also a workflow for OTT where RIST is used for ingest.

RIST is available as an open source library called libRIST, which can be downloaded from VideoLAN and is documented in the open specifications TR-06-1 and TR-06-2. libRIST can be found in GStreamer, Upipe, VLC, Wireshark and FFmpeg.

The video finishes with questions about how RIST compares with SRT, RTMP, CMAF and WebRTC.

Watch now!
Speakers

Tim Baldwin
Head of Product,
Zixi
David Griggs
Senior Product Manager, Distribution Platforms
Disney Streaming Services
Ciro Noronha
President, RIST Forum
Executive Vice President of Engineering, Cobalt Digital

Video: Time and timing at VidTrans21

Timing is both everything and nothing. Although much fuss is made of timing, often it’s not important. But when it is important, it can be absolutely critical. Helping us navigate the broadcast chain’s varying dependence on a central co-ordinated time source is Nevion’s Andy Rayner in this talk at the VSF’s VidTrans21. When it comes down to it, you need time for coordination. In the 1840s, the UK introduced ‘Railway time’, bringing each station’s clock into line with GMT to coordinate people and trains.

For broadcast, working with multiple signals in a low-latency workflow is when we’re most likely to need synchronisation, such as in a vision or audio mixer. Andy shows us some of the original television technology where the camera had to be directly synchronised to the display. This is the era timing came from, built on by analogue video and RF transmission systems whose components relied on the timing of those earlier in the chain. Andy brings us into the digital world, reminding us of the ever-useful blanking areas of the video raster which we packed with non-video data. Now, as many people move to SMPTE’s ST 2110, there is still a timing legacy: some devices still generate data with gaps where the blanking of the video would be, even though 2110 has no blanking. This means we have to have timing modes for linear and non-linear delivery of video.

In ST 2110 every packet is marked with a reduced-resolution timestamp from PTP, the Precision Time Protocol (see all our PTP articles). This allows highly accurate alignment of essences when bringing them together, as even a slight offset between audio streams can create comb filtering and destroy the sound. The idea of the PTP timestamp is to stamp the time the source was acquired, but Andy laments that in ST 2110 it’s hard to preserve this timestamp, since interim functions (e.g. graphics generators) may restamp the PTP, breaking the association.
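The “reduced resolution” aspect can be sketched briefly (a simplified illustration, ignoring the TAI epoch details): the 32-bit RTP timestamp is PTP time sampled at the media clock rate, 90 kHz for video and typically 48 kHz for audio, taken modulo 2^32, so it wraps rather than carrying absolute time.

```python
# Simplified sketch of deriving an ST 2110-style RTP timestamp from PTP:
# PTP time (in seconds) is scaled by the media clock rate and truncated
# to 32 bits, which is why the timestamp has reduced resolution and wraps.
def rtp_timestamp(ptp_seconds, media_clock_hz):
    return int(ptp_seconds * media_clock_hz) % (2 ** 32)

ptp_now = 1_600_000_000.5            # example PTP time in seconds
print(rtp_timestamp(ptp_now, 90_000))  # video (90 kHz) timestamp
print(rtp_timestamp(ptp_now, 48_000))  # audio (48 kHz) timestamp
```

Because both essences are stamped from the same PTP clock, a receiver can line them up to sample accuracy, which is exactly what restamping in an interim device destroys.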

Taking a step back, though, content can now arrive in the home up to a minute late, which underlines that relative timing is what matters most. It’s a lesson learnt many years back when VR/AR was first used in studios, where whole sections of the gallery ran several frames behind the rest of the facility to account for the processing delay. Today this is more common, as is remote production, which takes this fixed time offset to the next level. Andy highlights NMOS IS-07, which lets you timestamp button presses and other tally info, allowing this type of time-offset working to succeed.

The talk finishes with the work of the GCCG Activity Group at the VSF, of which Andy is the co-chair. This group is looking at how to get essences into and out of the cloud. Andy spends some time talking about the tests done to date and the fact that PTP doesn’t exist in the cloud (it may be available for select customers); in fact, you may have to live with NTP-derived time. Dealing with this is still a lively discussion in progress and Andy welcomes participants.

Watch now!
Speakers

Andy Rayner
Co-Chair, Ground-Cloud-Cloud-Ground Activity Group, VSF
Chief Technologist, Nevion

Video: Bit-Rate Evaluation of Compressed HDR using SL-HDR1

HDR video can look vastly better than standard dynamic range (SDR), but much of our broadcast infrastructure is made for SDR delivery. SL-HDR1 allows you to deliver HDR over SDR transmission chains by breaking down HDR signals into an SDR video plus enhancement metadata which describes how to reconstruct the original HDR signal. SL-HDR1 is now part of the ATSC 3.0 suite of standards, and people are asking whether you get better compression using SL-HDR1 or compressing HDR directly.

HDR works by changing the interpretation of the video samples. As human sight has a non-linear response to luminance, we can take the same 256 or 1024 possible luminance values and map them to brightness so that only a few values are used where the eye isn’t very sensitive, but there is a lot of detail where we see well. Humans perceive more detail at lower luminosity, so HDR devotes many more of the luminance values to describing that area and relatively few to high brightness, where specular highlights tend to be. HDR, therefore, has the benefit of not only increasing the dynamic range but also providing more detail in the lower light areas than SDR.
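This allocation of code values can be seen directly in the SMPTE ST 2084 (PQ) inverse EOTF, one of the HDR transfer functions. The sketch below (illustrative, not from the talk) shows that roughly half of the signal range is spent below 100 nits, around the peak level of SDR:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits (0-10,000)
# to a normalised signal value in 0..1. Constants are from the standard.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Map luminance to a PQ code value; most of the range goes to the
    low-luminance region where the eye is most sensitive."""
    y = nits / 10000.0
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

print(round(pq_encode(100), 3))    # ~0.508: half the range below 100 nits
print(round(pq_encode(10000), 3))  # 1.0 at peak luminance
```

In a 10-bit system that means around 500 of the 1024 code values describe the 0-100 nit region, which is where the extra shadow detail comes from.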

Ciro Noronha from Cobalt has been examining this encoding question. Video encoders are agnostic to dynamic range: since HDR and SDR only define the meaning of the luminance values, the encoder sees no difference. Yet a number of papers have claimed that sending SL-HDR1 can result in bitrate savings over encoding HDR directly. SL-HDR1 is defined in ETSI TS 103 433-1 and included in ATSC A/341. The metadata is carried using SMPTE ST 2108-1 or within the video stream using SEI messages. Ciro set out to test whether the claimed savings hold, with technology consultant Matt Goldman giving his perspective on HDR and the findings.

Ciro tested three types of 1080p BT.2020 10-bit content with the AVC and HEVC encoders set to 4:2:0, 10-bit with a 100-frame GOP. Quality was rated using PSNR as well as two variants of PSNR that look at deviation within the CIE colour space. The findings show that AVC encode chains benefit more from SL-HDR1 than HEVC, and that the benefit is clearly content-dependent. Work remains to be done to connect these results with verified subjective tests: with LCEVC and VVC, MPEG has seen subjective assessments show up to 10% better results than objective metrics, and PSNR is not well known for correlating with visual improvements.
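For reference, the objective metric used here is straightforward to state. This is the standard PSNR definition (an assumption about the metric itself, not about Ciro’s test rig), computed from the mean squared error against a 10-bit peak value:

```python
# Standard PSNR between a reference and a test signal, for 10-bit video
# (peak sample value 1023). Higher is better; identical signals -> infinity.
import math

def psnr(ref, test, max_val=1023):
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

print(round(psnr([100, 200, 300], [101, 199, 302]), 1))  # ~57.2 dB
```

Its weakness, noted above, is that equal MSE can look very different to a viewer, which is why subjective confirmation of the SL-HDR1 results still matters.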

Watch now!
Speakers

Ciro Noronha
Executive Vice President of Engineering, Cobalt Digital
President, RIST Forum
Matthew Goldman
Technology Consultant