Video: How to Successfully Commission a SMPTE ST 2059/PTP System

PTP is the beating heart of video- and audio-over-IP installations. It's as critical as black and burst reference, so it pays to get it right. But PTP is a system, not a monolithic signal distributed around the facility. Unlike genlock, it's a two-way conversation over networked infrastructure and, whilst that brings great benefits, it changes how we deal with it. The system should be monitored at both the ST 2059 layer and the network layer. But before we even get to that point, implementation requires care, particularly as the industry is still in the early phases of developing tools and best practices for project deployments.

Leigh Whitcomb from Imagine Communications has stepped up to bring us his experiences and best practices as part of the Broadcast Engineering and IT Conference at NAB. This talk assumes an existing level of knowledge of PTP. If you would like to start at the beginning, then please look at this talk from Meinberg and this from Tektronix.

Leigh starts by explaining that, typically, the best architecture is to have a red and a blue network. A grandmaster would then be on both networks, with both locked to GPS. He explains how to deal with prioritisation and how to prevent other devices from becoming grandmasters. He also explains some of the basic PTP parameter values, such as setting the Announce timeout. Other good design practices he discusses are where to use Boundary Clocks, avoiding PTP Domain numbers of 0 and 127, and using QoS and DSCP.
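To make those parameters a little more concrete, here's a rough sketch of the kind of grandmaster settings being discussed, written as a Python dictionary. It's not taken from the talk, and the values are assumptions loosely in the spirit of ST 2059-2 deployments, so take the real numbers from the profile and from your own design.

```python
# Hypothetical grandmaster settings, expressed as a Python dict purely for
# illustration. Values are assumptions, not figures from the talk or the standard.
gm_settings = {
    "domainNumber": 100,          # avoid 0 and 127, per Leigh's advice
    "priority1": 10,              # lower wins the BMCA; stops media nodes becoming GM
    "priority2": 10,
    "logAnnounceInterval": -2,    # 2^-2 s, i.e. 4 Announce messages per second
    "announceReceiptTimeout": 3,  # GM declared lost after 3 missed Announce intervals
    "logSyncInterval": -3,        # 2^-3 s, i.e. 8 Sync messages per second
    "logMinDelayReqInterval": -3,
    "dscp": 46,                   # EF marking so switches prioritise PTP traffic
}

def announce_timeout_seconds(settings: dict) -> float:
    """How long a follower waits before declaring the grandmaster lost."""
    announce_interval = 2.0 ** settings["logAnnounceInterval"]
    return settings["announceReceiptTimeout"] * announce_interval

print(announce_timeout_seconds(gm_settings))  # 0.75 s with the values above
```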

As part of the commissioning piece, Leigh goes through some frequently seen problems, such as locking up slowly due to an incorrect Delay Request setting, or the grandmaster's Announce rate being the same as the timeout. To understand when your system isn't working properly, Leigh makes the point that it's vital to understand in detail how you expect the system to behave. Use checklists to ensure all parameters and configuration have been applied correctly, but also to verify the PTP packets themselves as they leave the GM. Leigh then highlights checklists for other parts of the network, such as the switches and Media Nodes.

There are a number of tools available for faultfinding and checking compliance. As part of commissioning, the first port of call is the device's GUI and API, which will obviously give most of the parameters needed but often go further and help with fault finding. Wireshark can help verify the fields in the packets along with the timing and message rates, whilst Meinberg's PTP Track Hound is a free program which allows you to verify the PTP protocol and grandmasters. The EBU LIST project also covers PTP/ST 2059. Helpfully, Leigh talks through how to use Wireshark to verify fields and message rates.
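If you'd rather script some of those checks, the sketch below shows one way of decoding the PTPv2 common header in Python. It assumes you've already extracted the raw UDP payload bytes (ports 319 and 320) from your capture, for example with Wireshark's 'Export Packet Bytes', and the field offsets come from IEEE 1588 rather than the talk.

```python
import struct

# A minimal sketch of pulling the fields worth checking out of a raw PTPv2
# message (the UDP payload on ports 319/320).

MESSAGE_TYPES = {0x0: "Sync", 0x1: "Delay_Req", 0x8: "Follow_Up",
                 0x9: "Delay_Resp", 0xB: "Announce"}

def parse_ptp_header(payload: bytes) -> dict:
    """Decode the 34-byte PTPv2 common header."""
    msg_type = payload[0] & 0x0F
    version = payload[1] & 0x0F
    msg_len, domain = struct.unpack_from(">HB", payload, 2)
    seq_id = struct.unpack_from(">H", payload, 30)[0]
    log_interval = struct.unpack_from(">b", payload, 33)[0]
    return {
        "type": MESSAGE_TYPES.get(msg_type, hex(msg_type)),
        "version": version,
        "length": msg_len,
        "domain": domain,
        "sequenceId": seq_id,
        "logMessageInterval": log_interval,  # 2^value seconds between messages
    }

# Synthetic Announce header just to exercise the parser:
sample = bytearray(34)
sample[0], sample[1], sample[4], sample[33] = 0x0B, 0x02, 100, 0xFE  # Announce, v2, domain 100, log interval -2
sample[30:32] = (4660).to_bytes(2, "big")
print(parse_ptp_header(bytes(sample)))

# Tallying message types, domains and sequence IDs across a capture quickly
# shows whether the GM is announcing in the domain and at the rate you designed.
```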

In terms of testing, Leigh suggests running a packet capture (PCAP) for 48 hours after commissioning to catch any issues. He then highlights the need for redundancy testing. This is where understanding how you intend the network to work is important: redundancy testing should be combined with network testing, where you deliberately pull down part of your network and check that the GMs change over as intended. This changeover is managed by the Best Master Clock Algorithm (BMCA). When troubleshooting, you should use your monitoring system to help you visualise what's happening; a good system should let you see the devices on the network and their status. Many companies will also want to test how the system recovers from a full failure, as this represents the maximum traffic load on the PTP system.
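When planning those redundancy tests, it helps to be able to predict which grandmaster should win. The snippet below is a simplified, illustrative sketch of the dataset comparison behind the BMCA; the full IEEE 1588 algorithm has extra steps (stepsRemoved, announcements from the same clock) that are omitted here.

```python
from dataclasses import dataclass

# A simplified, illustrative sketch of the dataset comparison at the heart of
# the BMCA, handy for predicting which GM should win during redundancy tests.

@dataclass
class AnnounceData:
    priority1: int
    clock_class: int          # e.g. 6 = locked to GPS, 248 = free-running default
    clock_accuracy: int
    variance: int             # offsetScaledLogVariance
    priority2: int
    clock_identity: bytes     # final tie-breaker

    def rank(self):
        # Compared field by field in this order; lower values win.
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.variance, self.priority2, self.clock_identity)

def best_master(candidates):
    return min(candidates, key=lambda c: c.rank())

# With equal priorities, the GPS-locked GM beats the free-running one:
gm_red = AnnounceData(128, 6, 0x21, 0x4E5D, 128, bytes.fromhex("001B19FFFE000001"))
gm_blue = AnnounceData(128, 248, 0xFE, 0xFFFF, 128, bytes.fromhex("001B19FFFE000002"))
assert best_master([gm_red, gm_blue]) is gm_red
```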

How to watch
1) Click on ‘Add to favourites’
2) Register for free – or log in if you are already part of NAB Express
3) You will then see the video on the left of the screen.

Watch now!
Speakers

Leigh Whitcomb
Architect,
Imagine Communications

Video: 5 PTP Implementation Challenges & Best Practices

PTP is an underlying technology that enables the whole SMPTE ST 2110 uncompressed ecosystem to work. Using PTP, the Precision Time Protocol, the time at which a frame of video or audio was captured is recorded, so that when it is decoded it can be synchronised with other media captured at the same time. Though parts of 2110 can function without it, when it comes to bringing together media which need synchronisation, vision mixing for instance, PTP is the way to go.

PTP is actually a cross-industry standard for time distribution, developed by the IEEE and now on version IEEE 1588-2019. Like NTP before it, it defines not only how to send time onto a network, but also how a receiver can work out what the time actually is. After all, if you received a letter in the post telling you the time, you'd know that the time – and date, for that matter – was already old. PTP defines a way of working out how long the letter took to arrive, so that you can know the date and time based on the letter and your new-found knowledge of the delivery time.
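For those who prefer the sums to the analogy, here's a minimal sketch of the two-way exchange in Python, with purely illustrative timestamps.

```python
# A minimal sketch of the arithmetic behind "how long did the letter take".
# t1: master sends Sync, t2: follower receives it, t3: follower sends a
# Delay_Req, t4: master receives it. Values are illustrative, in nanoseconds.

def ptp_offset_and_delay(t1: int, t2: int, t3: int, t4: int) -> tuple:
    mean_path_delay = ((t2 - t1) + (t4 - t3)) // 2
    offset_from_master = ((t2 - t1) - (t4 - t3)) // 2
    return offset_from_master, mean_path_delay

# Here the follower's clock reads 500 ns ahead and the path takes ~1 µs each way:
offset, delay = ptp_offset_and_delay(t1=0, t2=1_500, t3=10_000, t4=10_500)
print(offset, delay)  # 500 1000
```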

Knowing the time of day is all very well, but to truly synchronise signals, SMPTE ST 2059 is used to interpret PTP for professional media. Video and audio are made from repeating data structures, and ST 2059 relates these repeating structures back to a common time in the past – the epoch – so that at any time in the future you can calculate the phase of the signal.
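As a rough illustration of that idea, the sketch below works out the phase of a video signal from a PTP time. It assumes the SMPTE epoch coincides with the PTP epoch and leaves out the per-format alignment details a real implementation would need.

```python
from fractions import Fraction

# A rough sketch of the ST 2059 idea: if every signal is defined as having been
# aligned at a common epoch, then the phase at any PTP time is simply the time
# since the epoch modulo the frame period.

def frame_phase(ptp_seconds: int, ptp_nanoseconds: int,
                frame_rate: Fraction) -> Fraction:
    """Return how far (0..1) we are through the current video frame."""
    t = Fraction(ptp_seconds) + Fraction(ptp_nanoseconds, 1_000_000_000)
    frames_since_epoch = t * frame_rate
    return frames_since_epoch - int(frames_since_epoch)  # fractional part

# 29.97 Hz video, one billion seconds after the epoch; every device doing this
# same sum lands on the same phase, which is the point of ST 2059.
print(float(frame_phase(1_000_000_000, 0, Fraction(30_000, 1_001))))
```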

Karl Kuhn from Tektronix starts by laying out the problems to be solved, such as managing jitter and the precision needed. This leads into a look at how timestamps are used to record when video and audio were each captured. He also covers the network needed to implement PTP, particularly for redundancy, and how GPS allows buildings to be co-timed without being directly connected.

Troubleshooting PTP will be tricky for many, but learning the IT side of this is only part of the solution. Karl looks at some best practices and tips on faultfinding PTP errors, which leads on to a discussion of PTP domains and profiles. An important aspect of PTP is that it is bi-directional. Not only that, but it's much more than the one-way distribution of a signal that the previous black and burst infrastructure provided. It is a system which needs to be managed and deserves to be monitored. Karl shows how graphs can help show the stability of the network and how RTP/CC errors can reveal network packet losses and corruption.
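As a simple illustration of the packet-loss side of that monitoring, here's a small sketch which counts RTP sequence-number discontinuities; the input is a made-up example, but the wrap-around handling is the crux of the check.

```python
# A small sketch of the kind of check behind the packet-loss counters Karl
# mentions: walk the 16-bit RTP sequence numbers seen on a flow and count any
# discontinuities, remembering that the counter wraps at 65535.

def count_sequence_gaps(seq_numbers):
    gaps = 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        if cur != (prev + 1) & 0xFFFF:
            gaps += 1
    return gaps

print(count_sequence_gaps([65534, 65535, 0, 1, 3]))  # 1 – packet 2 never arrived
```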

Watch now!
Speakers

Karl J. Kuhn
Principal Solutions Architect
Telestream/Tektronix

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, which give an introduction to a large number of topics, are far too rare. For those starting out, or anyone who needs to revise a topic, this really hits the mark, particularly as it covers many newer topics.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it's built on separate media (essence) flows. He covers how synchronisation is maintained and gives an overview of the many parts of the SMPTE ST 2110 suite, before going into more detail on the audio and metadata parts.

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use, such as BT.709, BT.2100 and P3, before turning to file formats. With the advent of HDR video and displays which can show very bright material, Eric takes some time to explain why this could represent a problem for visual health, as we don't yet fully understand how such displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and to build more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own ways, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to send video reliably and with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT overview.

Rounding off the primer, Linda Gedemer from Source Sound VR introduces immersive audio, measuring sound output (SPL) from speakers and the interesting problem of front speakers in cinemas. These have long been placed behind the screen, which means the screen has to be perforated to let the sound through, and that in turn interferes with the sound itself. Now that cinema screens are changing to solid displays, not completely dissimilar to large outdoor video displays, the speakers are having to move; with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group

Video: The 7th Circle of Hell; Making Facility-Wide Audio-over-IP Work


When it comes to IP, audio has always been ahead of video. Whilst audio often makes up for it in scale, its relatively low bandwidth requirements meant computing was up to the task of audio-over-IP long before uncompressed video-over-IP. Despite that early lead, audio-over-IP isn't necessarily trivial, so this talk aims to give you a heads-up on the main hurdles so you can address them right from the beginning.

Matt Ward, Head of Audio at UK-based Jigsaw24, starts this talk by revisiting the reasons to go audio-over-IP (AoIP). The benefits vary for each company: for some, reducing cabling is the draw, many are hoping it will be cheaper, and for others achievable scale is key. Matt's quick to point out the drawbacks we should be cautious of, not least complexity and skills gaps.

Matt fast-tracks us to better installations by running through a list of easy wins, some of which are basic but disproportionately important as the project continues, e.g. naming paths and devices and keeping IP addresses in logical groups. Others are more nuanced, like ensuring cable performance: for Cat 6 cabling, it's easy to have each cable tested to confirm the cable and all its terminations are still performing at their best.

Planning your timing system is highlighted as the next step on the road to success, with smaller facilities more susceptible to problems if they only have one clock. Any facility's timing has to be carefully considered, and Matt points out the role the Best Master Clock Algorithm (BMCA) plays in deciding which clock the system follows.

Network considerations are the final stop on the tour, underlining that audio doesn't have to run on its own network as long as QoS is used to maintain performance. Matt details his reasons for keeping Spanning Tree Protocol off unless you explicitly know that you need it on. The talk finishes by discussing multicast distribution and IGMP snooping.
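As a small illustration of the IGMP snooping point, the sketch below shows what a receiver does when it subscribes to a multicast AoIP flow; the group address and port are hypothetical examples.

```python
import socket
import struct

# A minimal sketch of what an AoIP receiver does when it subscribes to a flow:
# joining the multicast group triggers the IGMP membership report that a
# snooping switch uses to decide which ports actually carry the traffic.

GROUP = "239.69.1.10"  # hypothetical group address
PORT = 5004            # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group on the default interface; this is what generates the IGMP report.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(2048)  # audio packets arrive once the switch forwards the group
```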

Watch now!
Speaker

Matt Ward
Head of Audio,
Jigsaw24