Video: AES67 & SMPTE ST 2110 Timing and Synchronization

Good timing is essential in production for AES67 audio and SMPTE ST 2110. Delivering timing is no longer a matter of sending a signal throughout your facility; over IP, timing is bidirectional and forms a system which should be monitored and managed. Timing distribution has always needed design and architecture, but the level of detail and understanding now required is much greater. At the beginning of this talk, Andreas Hildebrand explains why we need to bother with such complexity; after all, we got along very well for many years without it! Non-IP timing signals are distributed on their own cables as part of their own system. There are some parts of the chain which can get away without timing signals, but when they are needed, they arrive on a separate cable. With IP, having a separate network for the distribution of timing doesn’t make sense, so whether you have an analogue or digital timing signal, it needs to move into the IP domain. But how much accuracy in timing do you need? Network devices already widely use NTP, which can achieve an accuracy of better than a millisecond. Andreas explains that this isn’t enough for professional audio. At 48 kHz, AES samples need an accuracy of plus or minus 10 microseconds, with 192 kHz going down to 2.5 microseconds. As your timing signal has to be more accurate than the tolerance you need to hit, this means we need to achieve nanosecond precision.
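To get a feel for the scale, a quick back-of-the-envelope calculation shows how the tolerance tightens as the sample rate rises. This is only a sketch: treating the tolerance as roughly half a sample period is our simplifying assumption, not a figure from the talk.

```python
# Rough sample-period arithmetic behind the figures above.
# The "half a sample period" tolerance is an assumption for illustration.

for rate_hz in (48_000, 96_000, 192_000):
    period_us = 1_000_000 / rate_hz      # one sample period in microseconds
    tolerance_us = period_us / 2         # roughly +/- half a sample
    print(f"{rate_hz // 1000} kHz: period {period_us:.2f} us, "
          f"tolerance approx +/- {tolerance_us:.1f} us")
```

At 48 kHz that comes out at roughly plus or minus 10 microseconds, and at 192 kHz around 2.5 microseconds, which is why the reference itself needs to be good to nanoseconds.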

Daniel Boldt from timing specialists Meinberg is the focus of this talk, explaining how we achieve this nanosecond precision. Enter PTP, the Precision Time Protocol. This is a cross-industry standard from the IEEE used in telecoms, power, finance and many other sectors wherever a network and its devices need to understand the time. It’s not a static standard, Daniel explains, and it’s just about to see its third revision which, like the last, adds features.

Before finding out about the latest changes, Daniel explains how PTP works in the first place; how is it possible to accurately derive time down to the nanosecond over a network which will have variable propagation times? We see how timestamps are introduced into the network interface controller (NIC) at the last moment, allowing the timestamps to be created in hardware, which removes some of the variable delays that are typical in software. This happens, Daniel shows, in the switch as well as in the server network cards. This article uses the terms primary clock and grandmaster interchangeably. Daniel steps us through the messages exchanged between the primary and secondary clock, which is the interaction at the heart of the protocol. The key is that after the primary has sent a timestamp, the secondary sends its own timestamp to the primary, which replies with the time at which it received the secondary’s message. The secondary ends up with four timestamps that it can combine to determine its offset from the primary’s time and the delay in receiving messages. Applying this information allows it to correct its clock very accurately.

PTP Primary-Secondary Message Exchange.
Source: Meinberg
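The arithmetic the secondary performs with those four timestamps is straightforward. The minimal sketch below uses the standard PTP equations and assumes the network delay is symmetric in both directions; the variable names are ours.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP arithmetic, assuming a symmetric path delay.

    t1: primary sends Sync (primary clock time)
    t2: secondary receives Sync (secondary clock time)
    t3: secondary sends Delay_Req (secondary clock time)
    t4: primary receives Delay_Req (primary clock time)
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2   # secondary clock minus primary clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # mean one-way path delay
    return offset, delay

# Times in nanoseconds: secondary runs 500 ns ahead, each direction takes 2000 ns.
offset, delay = ptp_offset_and_delay(t1=0, t2=2_500, t3=10_000, t4=11_500)
print(offset, delay)  # 500.0 2000.0 -> the secondary steps itself back by 500 ns
```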

Most broadcasters would prefer to have more than one grandmaster clock, but if there are multiple clocks, how do you choose which to sync from? Timing systems have long used strata whereby clocks are rated based on accuracy, either on their internal accuracy and stability or on what they are synced to. This is also true for PTP and is part of the considerations in the ‘Best Master Clock Algorithm’ (BMCA). The BMCA starts by allowing a time source to assess its own accuracy and then search for better options on the network. Clocks announce themselves to the network and, by listening to other announcements, a clock can decide whether it should become a primary clock, for instance if it hears no Announce messages at all. For devices which should never become the grandmaster, you can force them never to take on that role. This is a requisite for audio devices participating in ST 2110-3x.
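A simplified sketch of the dataset comparison at the heart of the BMCA looks like the code below. It ignores the topology (‘steps removed’) part of the full algorithm, and the field values are only illustrative.

```python
from dataclasses import dataclass

@dataclass
class ClockDataset:
    """Fields compared by the BMCA, in order; lower values win at each step."""
    priority1: int
    clock_class: int        # e.g. locked to GPS vs free-running
    clock_accuracy: int
    variance: int           # offsetScaledLogVariance (stability estimate)
    priority2: int
    clock_identity: str     # final tie-breaker, usually derived from the MAC address

    def rank(self):
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.variance, self.priority2, self.clock_identity)

def best_clock(candidates):
    # Each clock runs this over the Announce messages it hears, plus its own data.
    return min(candidates, key=ClockDataset.rank)

gps_gm = ClockDataset(128, 6, 0x21, 0x4E5D, 128, "00:1c:73:aa:aa:aa")
holdover_gm = ClockDataset(128, 7, 0x21, 0x4E5D, 128, "00:1c:73:bb:bb:bb")
# An endpoint demoted so it never wins; in real devices this is usually a
# dedicated "never become grandmaster" setting rather than just a low priority.
audio_node = ClockDataset(255, 248, 0xFE, 0xFFFF, 255, "00:1c:73:cc:cc:cc")

print(best_clock([gps_gm, holdover_gm, audio_node]).clock_identity)  # the GPS-locked GM
```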

Passing PTP around the network takes some care and is most easily done by using switches which understand PTP. These switches either run a ‘boundary clock’ or act as ‘transparent clocks’. Daniel explores both of these scenarios, explaining how a boundary clock switch is able to run multiple primary and secondary clocks depending on what is connected to each interface. We also see what work the switches have to do behind the scenes to maintain timing precision in transparent mode. In summary, Daniel characterises boundary clocks as good for hierarchical systems, scaling well but requiring continuous monitoring, whereas transparent clocks are simpler to deploy and require minimal monitoring. The main issue with transparent clocks is that they don’t scale well, as all your timing messages still go back to one main clock, which could get overwhelmed.

SMPTE ST 2022-7 has been a very successful standard as its reliance only on RTP has allowed it to be widely applicable to compressed and uncompressed IP flows. It is often used in 2110 networks, too, where two separate networks are run and brought together at the receiving device. That device, on a packet-by-packet basis, is free to derive its audio/video stream from either network. This requires, however, exactly the same timing on both networks, so Daniel looks at an example diagram showing how PTP is shared across both.
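The receiving end of that arrangement can be pictured as a simple de-duplication by RTP sequence number. The toy sketch below shows the idea; real receivers also bound how long they wait and handle sequence-number wrap-around.

```python
# Toy sketch of ST 2022-7 style hitless merging: keep whichever copy of each
# RTP sequence number arrives first, from either network, and drop duplicates.

seen = set()
reconstructed = []

def on_packet(network: str, seq: int, payload: bytes):
    if seq in seen:
        return                      # already have this packet from the other leg
    seen.add(seq)
    reconstructed.append((seq, payload))

on_packet("red", 1000, b"...")
on_packet("blue", 1000, b"...")     # duplicate of the red copy, ignored
on_packet("blue", 1001, b"...")     # red copy lost or late; blue fills the gap
print([seq for seq, _ in reconstructed])   # [1000, 1001]
```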

PTP’s still evolving and in this next section, Daniel takes us through some of the coming improvements which are also outlined at Meinberg’s blog. These are profile isolation, multi-domain clocks, security improvements and more.

Andreas takes the final section of the webinar to explain how we use PTP in media networks. All receivers will have the same clock, which could be derived from GPS, removing the need to distribute PTP between sites. 2110 is based on RTP, which requires a timestamp to be added to every packet delivered to the network. RTP is a wrapper carried inside UDP/IP packets which includes a timestamp derived from the media clock counter.
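As a rough sketch of that relationship, in the spirit of ST 2110-10, the media clock counts at a fixed rate from the same epoch as PTP and the RTP timestamp is that count modulo 2^32; the exact rules live in the standards, so treat this only as an illustration.

```python
def rtp_timestamp(ptp_time_ns: int, media_clock_hz: int) -> int:
    # Media clock ticks since the PTP/SMPTE epoch, truncated to the
    # 32-bit RTP timestamp field.
    ticks = (ptp_time_ns * media_clock_hz) // 1_000_000_000
    return ticks % (2 ** 32)

# The same sampling instant expressed against the two common media clocks.
t_ns = 1_700_000_000_000_000_000   # illustrative PTP time in nanoseconds
print(rtp_timestamp(t_ns, 90_000))  # video (90 kHz clock)
print(rtp_timestamp(t_ns, 48_000))  # 2110-30 audio (48 kHz clock)
```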

Andreas looks at how accurate RTP delivery is achieved: dealing with offset values, populating the timestamp from the PTP clock for real-time streams, and how the playout delay is calculated from the link offset. Finally, he shows the relatively simple process of synchronisation at the playout device. With all the timestamps in the system, synchronising playback of audio, video and metadata using buffers can be achieved fairly easily. Unfortunately, timestamps are easily destroyed by secondary processing (for instance, loudness adjustment of an audio stream). Clearly, if this happened, synchronisation at the receiver would be broken. Whilst this will be addressed by out-of-band messaging in future standards, for now it is managed by a broadcast controller which can take delay information from processing stages and distribute it to receivers.

Watch now!
Speakers

Daniel Boldt
Head of Software Development,
Meinberg
Andreas Hildebrand
RAVENNA Technology Evangelist,
ALC NetworX

Video: How to Successfully Commission a SMPTE ST 2059/PTP System

PTP is the beating heart of video- and audio-over-IP installations. As critical as black and burst reference, it pays to get it right. But PTP is a system, not a monolithic signal distributed around the facility. Unlike genlock, it’s a two-way conversation over networked infrastructure and, whilst that brings great benefits, it changes how we deal with it. The system should be monitored, at both the ST 2059 layer and the network layer. But before we even get to that point, implementation requires care, particularly as the industry is still in the early phases of developing tools and best practices for project deployments.

Leigh Whitcomb from Imagine Communications has stepped up to bring us his experiences and best practices as part of the Broadcast Engineering and IT Conference at NAB. This talk assumes an existing level of knowledge of PTP. If you would like to start at the beginning, then please look at this talk from Meinberg and this from Tektronix.

Leigh starts by explaining that, typically, the best architecture is to have a red and a blue network. A grandmaster would then be on both networks and both would be set to lock to GPS. He explains how to deal with prioritisation and how to prevent other devices from becoming grandmasters. He also explains some of the basic PTP parameter values, such as setting the Announce timeouts. Other good design practices he discusses are where to use boundary clocks, avoiding PTP domain numbers of 0 and 127, and using QoS and DSCP.

As part of the commissioning piece, Leigh goes through some frequently seen problems, such as devices locking slowly due to an incorrect Delay Request setting, or the grandmaster announce rate being the same as the timeout. To understand when your system isn’t working properly, Leigh makes the point that it’s vital to understand in detail how you expect the system to behave. Use checklists to ensure all parameters and configuration have been applied correctly, but also to verify the PTP packets themselves leaving the GM. Leigh then highlights checklists for other parts of the network, such as the switches and media nodes.
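The kind of check such a list captures can be as simple as the sketch below; the parameter names and thresholds here are our own illustration, not Leigh’s.

```python
# Illustrative checklist-style sanity checks on planned PTP parameters.
# Parameter names and thresholds are our own, for illustration only.

def check_ptp_plan(domain: int, announce_receipt_timeout: int) -> list:
    issues = []
    if domain in (0, 127):
        issues.append("avoid PTP domain numbers 0 and 127")
    if announce_receipt_timeout < 3:
        # If the timeout is only one or two announce intervals, a single late
        # Announce message can trigger an unwanted grandmaster change.
        issues.append("announce receipt timeout should span several announce intervals")
    return issues

print(check_ptp_plan(domain=127, announce_receipt_timeout=2))
```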

There are a number of tools available for fault finding and checking compliance. As part of commissioning, the first port of call is the device’s GUI and API, which will obviously give most of the parameters needed but will often go further and help with fault finding. Wireshark can help verify the fields in the packets, the timing and the message rates, whilst Meinberg’s Track Hound is a free program which allows you to verify the PTP protocol and grandmasters. The EBU LIST project also covers PTP/ST 2059. Helpfully, Leigh talks through how to use Wireshark to verify fields and message rates.

In terms of testing, Leigh suggests running a packet capture (pcap) for 48 hours after commissioning to catch any issues. He then highlights the need for redundancy testing. This is where understanding how you intend the network to work is important, as redundancy testing should be combined with network testing where you deliberately pull down part of your network and check that the GMs change over as intended. This changeover will be managed by the Best Master Clock Algorithm (BMCA). When troubleshooting, you should use your monitoring system to help you visualise what’s happening; a good system should enable you to see the devices on the network and their status. Many companies will also want to test how the system recovers from a full failure, as this will represent the maximum traffic load on the PTP system.

Watch now!
Speakers

Leigh Whitcomb
Architect,
Imagine Communications

Video: 5 PTP Implementation Challenges & Best Practices

PTP is an underlying technology enabling the whole SMPTE ST 2110 uncompressed ecosystem to work. Using PTP, the Precision Time Protocol, the time at which a frame of video or audio was captured is recorded, so when it is decoded it can be synchronised with other media captured at the same time. Though parts of 2110 can function without it, when it comes to bringing together media which need synchronisation, vision mixing for instance, PTP is the way to go.

PTP is a standard for time distribution developed by the IEEE which, like its forerunner NTP, is a cross-industry standard. Now at version IEEE 1588-2019, it defines not only how to send time onto a network, but also how a receiver can work out what the time actually is. After all, if you had a letter in the post telling you the time, you’d know that time – and the date for that matter – was old. PTP defines a way of working out how long the letter took to arrive so that you can know the date and time based on the letter and your new-found knowledge of the delivery time.

Knowing the time of day is all very well, but to truly synchronise media, SMPTE ST 2059 is used to interpret PTP for professional media. Video and audio are made from repeating data structures; ST 2059 relates these repeating structures back to a common time in the past so that, at any time in the future, you can calculate the phase of the signal.
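A minimal sketch of that calculation, assuming the common reference is the SMPTE Epoch (1970-01-01 00:00:00 TAI, the same epoch PTP uses) and ignoring the standard’s exact rounding rules, might look like this:

```python
from fractions import Fraction

def frame_phase(ptp_time_ns: int, frame_rate: Fraction):
    """Where are we within the current video frame, and when does the next one start?"""
    frame_period_ns = Fraction(1_000_000_000) / frame_rate
    frames_since_epoch = int(Fraction(ptp_time_ns) / frame_period_ns)
    phase_ns = Fraction(ptp_time_ns) - frames_since_epoch * frame_period_ns
    next_alignment_ns = (frames_since_epoch + 1) * frame_period_ns
    return int(phase_ns), int(next_alignment_ns)

# 29.97 fps video at an arbitrary PTP time (nanoseconds since the epoch).
phase, next_edge = frame_phase(1_700_000_000_123_456_789, Fraction(30_000, 1_001))
print(f"{phase} ns into the current frame; next frame boundary at {next_edge} ns")
```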

Karl Kuhn from Tektronix starts by laying out the problems to be solved, such as managing jitter and the precision needed. This leads into a look at how timestamps are used to note when video and audio were each captured. He also covers the network needed to implement PTP, particularly for redundancy, and how GPS allows buildings to be co-timed without being connected.

Troubleshooting PTP will be tricky for many, but learning the IT side of this is only part of the solution. Karl looks at some best practices and tips on fault finding PTP errors, which leads on to a discussion of PTP domains and profiles. An important aspect of PTP is that it is bidirectional. Not only that, but it’s much more than the distribution of a signal like the previous black and burst infrastructure; it is a system which needs to be managed and deserves to be monitored. Karl shows how graphs can help show the stability of the network and how RTP/CC errors can reveal network packet loss and corruption.

Watch now!
Speakers

Karl J. Kuhn
Principal Solutions Architect
Telestream/Tektronix

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill whatever their starting point. Videos like this, which give an introduction to a large number of topics, are far too rare. For those starting out or who need to revise a topic, this really hits the mark, particularly as there are many new topics covered.

John Mailhot takes the lead on SMPTE 2110 explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite. He talks in more detail about the audio and metadata parts of the standard suite.

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use such as 709, 2100 and P3 before turning to file formats. With the advent of HDR video and displays which can show bright video, Eric takes some time to explain why this could represent a problem for visual health as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and add more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers, and the interesting problem of forward speakers in cinemas. They have long been placed behind the screen, which has meant screens have to be perforated to let the sound through, which interferes with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move. But with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group