Video: Time and timing at VidTrans21

Timing is both everything and nothing. Although much fuss is made of timing, often it’s not important, but when it is, it can be absolutely critical. Helping us navigate the broadcast chain’s varying dependence on a central, co-ordinated time source is Nevion’s Andy Rayner in this talk from the VSF’s VidTrans21. When it comes down to it, you need time for coordination. In the 1840s, the UK introduced ‘Railway time’, bringing each station’s clock into line with GMT to coordinate people and trains.

For broadcast, it’s when working with multiple signals in a low-latency workflow, such as in a vision or audio mixer, that we’re most likely to need synchronisation. Andy shows us some of the original television technology where the camera had to be directly synchronised to the display. This is the era our timing practice came from, built on by analogue video and RF transmission systems whose components relied on the timing of those earlier in the chain. Andy then brings us into the digital world, reminding us of the ever-useful blanking areas of the video raster which we packed with non-video data. Now, as many people move to SMPTE’s ST 2110, there is still a timing legacy: some devices still generate data with gaps where the blanking of the video would have been, even though 2110 carries no blanking. This is why we need timing modes for both linear and non-linear delivery of video.
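
To picture the difference, here’s a minimal Python sketch, not tied to any particular implementation, of how packets for one frame might be paced in a gapped fashion (bunched into the active-video window, idle where blanking used to be) versus linearly across the whole frame period, in the spirit of ST 2110-21’s sender models. The frame rate, packet count and active fraction are all assumed figures.

```python
# Illustrative sketch: packet pacing for one video frame, in the spirit of
# ST 2110-21's "gapped" vs "linear" sender behaviour. Numbers are arbitrary.

FRAME_PERIOD = 1 / 50          # seconds per frame (50p assumed for simplicity)
ACTIVE_FRACTION = 0.9          # rough share of the frame period carrying active video
PACKETS_PER_FRAME = 4000       # arbitrary packet count for one frame

def departure_times(gapped: bool):
    """Return the send time (seconds from the start of the frame) of each packet."""
    window = FRAME_PERIOD * ACTIVE_FRACTION if gapped else FRAME_PERIOD
    spacing = window / PACKETS_PER_FRAME
    return [i * spacing for i in range(PACKETS_PER_FRAME)]

gapped = departure_times(gapped=True)    # packets bunched into the 'active' window,
                                         # then silence where blanking would have been
linear = departure_times(gapped=False)   # packets spread evenly across the whole frame

print(f"gapped: last packet at {gapped[-1]*1000:.2f} ms, "
      f"idle for {(FRAME_PERIOD - gapped[-1])*1000:.2f} ms")
print(f"linear: last packet at {linear[-1]*1000:.2f} ms")
```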

In ST 2110, every packet is marked with a reduced-resolution timestamp derived from PTP, the Precision Time Protocol (see all our PTP articles). This allows highly accurate alignment of essences when bringing them together, as even a slight offset between audio feeds can create comb filtering and destroy the sound. The idea of the PTP-derived timestamp is to record the time the source was acquired, but Andy laments that in ST 2110 it’s hard to preserve this, since interim functions (e.g. graphics generators) may restamp the time, breaking the association with the original acquisition.
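
As a rough illustration of how that reduced-resolution timestamp works, the sketch below derives a wrapped 32-bit RTP timestamp from a PTP time using the 90 kHz media clock used for video (48 kHz for audio). The helper name and the example times are our own.

```python
# Rough sketch: deriving the 32-bit RTP timestamp in an ST 2110 packet from PTP time.
# Video uses a 90 kHz media clock; audio typically uses its sample rate (e.g. 48 kHz).

def rtp_timestamp(ptp_seconds: float, clock_rate: int = 90_000) -> int:
    """Convert a PTP (TAI) time in seconds to a wrapped 32-bit RTP timestamp."""
    return int(ptp_seconds * clock_rate) & 0xFFFFFFFF

# Two essences stamped from the same acquisition instant line up exactly...
t_acquire = 1_650_000_000.000000            # hypothetical PTP time of acquisition
video_ts = rtp_timestamp(t_acquire, 90_000)
audio_ts = rtp_timestamp(t_acquire, 48_000)

# ...whereas a 1 ms offset between two audio feeds (~48 ticks at 48 kHz) is already
# enough to cause audible comb filtering when they are mixed.
offset_ts = rtp_timestamp(t_acquire + 0.001, 48_000)
print(video_ts, audio_ts, offset_ts - audio_ts)
```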

Taking a step back, though, content can now arrive in the home up to a minute late, which underlines that relative timing is what matters most. This is a lesson learnt many years back when VR/AR was first used in studios: whole sections of the gallery ran several frames behind the rest of the facility to account for the processing delay. Today this is more common, as is remote production, which takes this fixed time offset to the next level. Andy highlights NMOS IS-07, which allows you to timestamp button presses and other tally information, allowing this type of time-offset working to succeed.
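
The principle is straightforward to sketch, though the message fields below are simplified stand-ins rather than the real IS-07 schema: a button press is stamped with the time it happened, so a part of the chain running a known amount behind can act on it at the matching point in its own delayed timeline.

```python
import time

PROCESSING_DELAY = 0.200   # assumed fixed delay (seconds) of the remote/delayed gallery

def tally_event(source_id: str, pressed: bool) -> dict:
    """Build a simplified, IS-07-style event: a boolean state plus a creation timestamp.
    (Field names here are illustrative, not the actual IS-07 schema.)"""
    now = time.time()      # stand-in for TAI time taken from PTP
    return {"source": source_id, "state": pressed, "creation_time": now}

def apply_at_delayed_timeline(event: dict) -> float:
    """Return the local time at which the delayed chain should act on the event."""
    return event["creation_time"] + PROCESSING_DELAY

evt = tally_event("cut-to-camera-2", True)
print(f"act on '{evt['source']}' at {apply_at_delayed_timeline(evt):.3f}")
```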

The talk finishes with the work of the GCCG Activity Group at the VSF, of which Andy is the co-chair. This group is looking at how to get essences into and out of the cloud. Andy spends some time on the tests done to date and the fact that PTP doesn’t exist in the cloud (it may be available for select customers); in fact, you may have to live with NTP-derived time. Dealing with this is still a lively discussion in progress and Andy welcomes participants.

Watch now!
Speakers

Andy Rayner
Co-Chair, Ground-Cloud-Cloud-Ground Activity Group, VSF
Chief Technologist, Nevion

Video: Bit-Rate Evaluation of Compressed HDR using SL-HDR1

HDR video can look vastly better than standard dynamic range (SDR), but much of our broadcast infrastructure is built for SDR delivery. SL-HDR1 allows you to deliver HDR over SDR transmission chains by breaking the HDR signal down into an SDR video plus enhancement metadata which describes how to reconstruct the original HDR. Now part of the ATSC 3.0 suite of standards, the question being asked is whether you get better compression using SL-HDR1 or by compressing the HDR directly.

HDR works by changing the interpretation of the video samples. As human sight has a non-linear response to luminance, we can take the same 256 or 1024 possible luminance values and map them to brightness so that only a few values are used where the eye isn’t very sensitive, leaving plenty of detail where we see well. Humans perceive more detail at lower luminosity, so HDR devotes far more of the luminance values to describing that area and relatively few to high brightness, where specular highlights tend to be. HDR therefore not only increases the dynamic range but actually provides more detail in the lower-light areas than SDR.
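
The PQ transfer function (SMPTE ST 2084) used by much HDR content makes this concrete: a short sketch of its inverse EOTF shows how much of the code range sits below 100 nits. The sample luminances are just illustrative.

```python
# PQ (SMPTE ST 2084) inverse EOTF: absolute luminance (cd/m^2) -> normalised code value.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance_nits: float) -> float:
    y = max(luminance_nits, 0.0) / 10000.0          # PQ is defined up to 10,000 nits
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2

# Roughly half the code range is spent below 100 nits (the SDR reference-white region),
# leaving comparatively few values for specular highlights.
for nits in (0.1, 1, 10, 100, 1000, 10000):
    print(f"{nits:>7} nits -> code value {pq_encode(nits):.3f}")
```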

Ciro Noronha from Cobalt has been examining the question of encoding. Video encoders are agnostic to dynamic range: since HDR and SDR only define the meaning of the luminance values, the encoder sees no difference. Yet a number of papers have claimed that sending SL-HDR1 can result in bitrate savings over sending HDR directly. SL-HDR1 is defined in ETSI TS 103 433-1 and included in ATSC A/341; the metadata is carried using SMPTE ST 2108-1 or within the video stream as SEI messages. Ciro set out to test whether this was the case, with technology consultant Matt Goldman giving his perspective on HDR and the findings.
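
Conceptually, SL-HDR1 is a round trip: tone-map the HDR picture down to SDR, describe that mapping in metadata, and invert it at the receiver. The sketch below is only a caricature of that idea, a toy power-law in place of the real ETSI TS 103 433-1 curves, to show why an SDR picture plus a small amount of metadata is enough to rebuild the HDR signal.

```python
# Caricature of the SL-HDR1 idea, NOT the ETSI TS 103 433-1 maths: tone-map HDR down to
# an SDR range, keep the mapping parameters as metadata, and invert the mapping at the
# receiver to reconstruct the HDR signal.

SDR_PEAK = 100.0     # nominal SDR peak in nits

def decompose(hdr_nits, hdr_peak=1000.0, exponent=0.5):
    """Return (sdr_nits, metadata): a toy power-law tone-map plus what's needed to undo it."""
    sdr = SDR_PEAK * (hdr_nits / hdr_peak) ** exponent
    metadata = {"hdr_peak": hdr_peak, "exponent": exponent}
    return sdr, metadata

def reconstruct(sdr_nits, metadata):
    """Invert the tone-map using the metadata carried alongside the SDR stream."""
    return metadata["hdr_peak"] * (sdr_nits / SDR_PEAK) ** (1 / metadata["exponent"])

sdr, meta = decompose(800.0)               # an 800-nit highlight fits into the SDR range...
print(sdr, reconstruct(sdr, meta))         # ...and round-trips back to 800 nits
```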

Ciro tested three types of 1080p BT.2020 10-bit content with the AVC and HEVC encoders set to 4:2:0 10-bit and a 100-frame GOP. Quality was rated using PSNR as well as two special types of PSNR which look at deviation in the CIE colour space. The findings show that AVC encode chains benefit more from SL-HDR1 than HEVC, and it’s clear that the benefit is content-dependent. Work remains to be done to connect these results with verified subjective tests: with LCEVC and VVC, MPEG has seen subjective assessments show up to 10% better results than objective metrics, and PSNR is not known for correlating well with perceived visual improvement.
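
For reference, plain PSNR is a simple calculation; the CIE-based variants used in the tests weight the error perceptually, but the principle is the same. A minimal sketch for 10-bit samples (the sample values are invented):

```python
# Minimal PSNR calculation for 10-bit samples; the CIE-based variants mentioned in the
# talk weight errors in a perceptual colour space, but the basic maths is the same.
import math

def psnr(reference, decoded, bit_depth=10):
    peak = (1 << bit_depth) - 1                     # 1023 for 10-bit video
    mse = sum((r - d) ** 2 for r, d in zip(reference, decoded)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

ref = [64, 512, 940, 700]          # hypothetical 10-bit luma samples
dec = [66, 509, 941, 695]          # the same samples after an encode/decode round trip
print(f"PSNR: {psnr(ref, dec):.2f} dB")
```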

Watch now!
Speakers

Ciro Noronha
Executive Vice President of Engineering, Cobalt Digital
President, RIST Forum
Matthew Goldman
Technology Consultant

Video: Tweaking Error Correction Protocol Performance: A libRIST Deep Dive

There’s a false assumption that if you send video with one of the new error-correcting protocols like RIST or SRT, you just need to send the stream, it’ll get healed and everything will be good. But people often don’t consider what actually happens when things go wrong. To heal the stream, more data needs to be sent: do you have enough headroom to cope with these resends? And what happens if part of your circuit becomes temporarily saturated, how will the feed cope? The reality is that a re-request storm could kill it permanently.

In this video from VidTrans21, Sergio Ammirata from SipRadius talks about how the error-correction protocol within RIST works and how it’s been improved to cope even better in a crisis. Joined by Adi Rozenberg, he reminds us of the key points of RIST and libRIST. As a reminder, RIST is one of many protocols which allow the receiver to let the sender know which packets it has missed so that they can be resent. For a proper overview of RIST and SRT, have a look at this talk explaining RIST and SRT or the multitude of talks here on The Broadcast Knowledge on RIST or SRT. Today’s video is not so much about why people use RIST, but how to make it performant over difficult circuits.

libRIST is a free, open-source library which implements the RIST specification. The aim of libRIST is to allow companies to easily implement RIST within their own commercial and free programs. Sergio points out that it’s an active project with over 675 commits in the last year, bringing RIST to many platforms including ARM, AWS, Darwin, iOS, Windows and more. It is now at version 0.2.0 and is soon to be in VLC 4.0 and FFmpeg 4.3.

To understand why getting error correction right is important, we can look at the effects of a simplistic implementation of negative-acknowledgement error recovery. When the receiver doesn’t receive a packet, it sends back a request for that packet to be resent. The sender resends it and, hopefully, it is received. Imagine, though, that you’re in a data centre sending to someone on a 100 Mbps leased line. If the incoming bitrate of the receiver’s internet connection gets close to 100 Mbps due to the aggregate traffic coming into the site, the receiver may start missing occasional packets, leading it to ask the sender for more resends. The sender’s bitrate then increases, which reduces the margin available on the incoming circuit, resulting in more lost packets. This cycle continues until the line is saturated. It’s important to remember that saturating an incoming link doesn’t mean traffic can’t get out: it’s quite possible there are hundreds of megabits available outgoing, so there’s plenty of bandwidth to shout for more and more re-requests. The sender is quite happy to honour them as it’s on a 10GbE link with plenty of headroom left. Only by stopping the receiver would you be able to break this positive-feedback loop.
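
A crude model makes the runaway behaviour easy to see. In the sketch below, with entirely invented numbers and no relation to any real implementation, a naive receiver re-requests every missing packet ten times a second; a brief congestion burst triggers some loss, the resends push the offered load past the line rate, and the loss keeps growing even after the burst has passed.

```python
# Crude model of a re-request storm. All figures are invented for illustration only.
CAPACITY  = 100.0   # Mbps into the receiver's site
OTHER     = 86.0    # Mbps of unrelated traffic sharing the same line
STREAM    = 12.0    # Mbps video stream
NACK_RATE = 10      # naive receiver re-requests every missing packet 10x per second

missing = 0.0       # Mbit of stream data currently outstanding (lost, awaiting resend)
for t in range(12):
    burst = 8.0 if 2 <= t <= 4 else 0.0                     # a brief congestion burst
    offered = OTHER + burst + STREAM + NACK_RATE * missing  # resends inflate the load
    loss = max(0.0, 1 - CAPACITY / offered)                 # share of packets dropped
    missing = (STREAM + missing) * loss                     # lost data queues up again
    print(f"t={t:2d}s  offered={offered:6.1f} Mbps  loss={loss:5.1%}  "
          f"outstanding={missing:4.2f} Mbit")
```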

Now, all protocols deliver some form of control over what’s re-requested to try to manage difficult situations. Sergio agrees that other implementations of RIST work well in normal situations with less than 10% packet loss, for example. But where bursts of packet loss exceed 20% or the circuit headroom dips below 20%, Sergio says implementations tend to struggle.

As a lead-up to the recent improvements made in congestion management, Sergio outlines how libRIST uses internal QoS to maintain a bandwidth cap. It also monitors the RTT every tenth of a second to help spread retries over time. By checking how the RTT is changing in these extreme conditions, libRIST is able to throw away redundant re-requests, leaving more bandwidth for useful ones. Because the sender is doing this work, even if the receiver is on an older version of libRIST or another implementation, the link still benefits from the checks libRIST 0.2.0 is doing. The upshot of all this work is that libRIST is no longer limited to coping with 50% packet loss; it can now deliver an unblemished stream at just shy of 70% packet loss.
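
The talk doesn’t go through libRIST’s internals line by line, but the principle of discarding redundant re-requests can be sketched as follows: don’t ask for a packet again while an earlier request for it could still be in flight (i.e. within one measured RTT), and keep retransmissions under a bandwidth cap. The class and parameters below are our own illustration, not libRIST’s API.

```python
# Sketch of the principle only (not libRIST's actual code): suppress a re-request for a
# packet if a previous request is likely still in flight (less than one measured RTT old)
# and respect an overall bandwidth cap for retransmissions.
import time

class RetryFilter:
    def __init__(self, rtt_seconds: float, max_retry_bits_per_sec: float):
        self.rtt = rtt_seconds                  # updated elsewhere, e.g. every 100 ms
        self.max_retry_bits_per_sec = max_retry_bits_per_sec
        self.last_request = {}                  # sequence number -> time of last NACK
        self.bits_this_second = 0.0             # per-second reset omitted for brevity

    def should_request(self, seq: int, packet_bits: int, now: float) -> bool:
        if now - self.last_request.get(seq, 0.0) < self.rtt:
            return False                        # redundant: earlier NACK still in flight
        if self.bits_this_second + packet_bits > self.max_retry_bits_per_sec:
            return False                        # would blow the retransmission cap
        self.last_request[seq] = now
        self.bits_this_second += packet_bits
        return True

f = RetryFilter(rtt_seconds=0.08, max_retry_bits_per_sec=2_000_000)
now = time.time()
print(f.should_request(1001, 10_000, now))        # True: first request goes out
print(f.should_request(1001, 10_000, now + 0.02)) # False: asked 20 ms ago, RTT is 80 ms
```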

Watch now!
Speaker

Sergio Ammirata Ph.D.
Chief Scientist,
SipRadius LLC
Adi Rozenberg
CTO & Co-founder,
VideoFlow

Video: JPEG XS Interoperability Activity Group Update


JPEG XS is a low-latency, light-compression codec often called a ‘mezzanine’ codec. Encoding within milliseconds, JPEG XS can compress full-bandwidth signals by 4x or more, allowing scope for several generations of compression without significant degradation. The low latency and resilience to generation loss make it ideal for enabling remote production.
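
To put that 4x in context, a quick back-of-the-envelope calculation (active picture only, our own figures) shows the sort of bitrates involved:

```python
# Back-of-the-envelope active-picture bitrates and what 4:1 JPEG XS compression leaves.
def active_bitrate_mbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e6

formats = {
    # 4:2:2 10-bit: 10-bit luma plus two half-resolution 10-bit chroma = 20 bits/pixel
    "1080p60 10-bit 4:2:2": (1920, 1080, 60, 20),
    "2160p60 10-bit 4:2:2": (3840, 2160, 60, 20),
}
for name, fmt in formats.items():
    raw = active_bitrate_mbps(*fmt)
    print(f"{name}: ~{raw:,.0f} Mbps uncompressed, ~{raw / 4:,.0f} Mbps at 4:1 JPEG XS")
```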

John Dale from Media Links joins us to look at what’s being done within the Video Services Forum (VSF) to ensure interoperability. As a new standard, JPEG XS is yet to be, or is still being, implemented in many companies’ products, so this is the perfect time to be looking at how to standardise interconnects.

Running JPEG XS over MPEG TS is one approach, and it is being written up in ‘VSF TR-07’ (Technical Recommendation 7), which will be completed imminently. It defines capabilities for 2K, 4K and 8K video, with and without HDR. The video formats have been split into capability sets, meaning a vendor can comply with the specification by stating which subset(s) it supports: all formats up to 1080p60 fall under capability set ‘A’, with ‘B’ covering UHD resolutions. After this, the group will look at carrying JPEG XS over ST 2110-22 instead of MPEG TS; that work is yet to start but will build on much of what has already been done.

Watch now!
Speaker

John Dale
Company Director and CMO,
Media Links.