Video: PTP/ST 2059 Best Practices developed from PTP deployments and experiences

PTP is foundational for SMPTE ST 2110 systems. It provides the accurate timing needed to make the most of almost-zero-latency professional video systems. Strictly speaking, some ST 2110 workflows can work without PTP where they're not combining signals, but for live production this is almost never the case. This is why a lot of time and effort goes into getting PTP right from the outset; making it work perfectly from day one gives you the bedrock on which to build your most valuable infrastructure.

In this video, Gerard Phillips from Arista, Leigh Whitcomb from Imagine Communications and Telestream's Mike Waidson join forces to run down their top 15 best practices for building a PTP infrastructure you can rely on.

Gerard kicks off by underlining the importance of PTP, but with the reassuring message that if you 'bake it in' to your underlying network, with PTP-aware equipment that can support the scale you need, you'll have the timing system you need. Thinking about scale is important because PTP is a bi-directional protocol. That is, it's not like the black and burst and tri-level sync (TLS) that it replaces, which are simply one-way 'waterfall' signals; each endpoint needs to speak to a clock, so understanding how many devices you'll have, and where, is important to consider. For a look at PTP itself, rather than best practices, have a look at this talk (free registration required) or this video with Meinberg.

Gerard’s best practices advice continues as he recommends using a routed network, meaning multiple layer 2 networks with layer 3 routing between them. This reduces the broadcast domain size which, in turn, increases stability and resilience. JT-NM TR-1001 can assist in deployments using this network architecture. Gerard next cautions that an IGMP snooping querier should exist on every VLAN; since multicast traffic is flooded towards the snooping querier at layer 2, it’s important to consider traffic flows.

When Gerard says PTP should be ‘baked in’, it’s partly boundary clocks he’s referring to. Use them ‘everywhere you can’ is the advice, as they bring simplicity to your design and allow for easier debugging. Part of the simplicity they bring lies in scalability: they shed load from your GM by taking the brunt of the bi-directional traffic, and they can reduce load on the endpoints too.
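To see why boundary clocks matter for scale, here's a rough, back-of-envelope Python sketch of the packet load on a GM serving every endpoint directly. The message rates are assumptions based on typical SMPTE ST 2059-2 defaults (8 Sync/s, 4 Announce/s, 8 Delay Requests/s per endpoint), so check them against your own profile settings:

```python
def gm_packet_rate(endpoints: int, sync_rate: int = 8, announce_rate: int = 4,
                   delay_req_rate: int = 8, two_step: bool = True) -> int:
    """Approximate packets/second handled by a GM with no boundary clocks.

    Sync, Follow_Up and Announce are multicast (sent once and replicated by
    the network), but each endpoint's Delay_Req must be answered individually.
    """
    multicast_out = sync_rate + announce_rate + (sync_rate if two_step else 0)
    unicast_per_endpoint = 2 * delay_req_rate   # Delay_Req in + Delay_Resp out
    return multicast_out + endpoints * unicast_per_endpoint

# 1,000 endpoints talking directly to one GM is roughly 16,000 packets/s;
# boundary clocks absorb this load, each answering only its own ports.
print(gm_packet_rate(1000))
```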

It’s long been known that some audio devices, for instance older versions of Dante before v4.2, use version 1 of PTP, which isn’t compatible with SMPTE ST 2059’s requirement to use PTP v2. Gerard says that, if necessary, you should buy a version 1 to version 2 converter from your audio vendor to join the v1 island to your v2 infrastructure. This links to best practice point 6: all GMs must have the same time. Mike makes the point that all GMs should be locked to GPS and that if you have multiple sites, each should have its own active, GPS-locked GM; sending PTP between sites over a WAN is likely to deliver less accurate timing, though it is useful as a backup.

Even if you are using physically separate networks for your ST 2110 main and backup paths, ST 2022-7 operation requires the two GMs to agree on time, so a link between the two networks carrying just PTP traffic should be established.

The next three points of advice are about the ongoing stability of the network. Firstly, ST 2059-2 specifies the use of TLV messages as part of a mechanism for media nodes to generate drop-frame timecode. Whilst this may not be needed on day one, if you have it running and can show your PTP system works well with it on, there shouldn’t be any surprises in a couple of years when you need to introduce an endpoint that uses it. Similarly, the advice is to give your PTP domain a number which isn’t a SMPTE or AES default, for the sole reason that if a device which hasn’t been fully configured joins your network while still on defaults, it will join your PTP domain and could disrupt it. If part of the configuration of a new endpoint is changing the domain number, the chances of this are notably reduced. One example of a configuration item which can protect the network is ‘ptp role master’, which stops a boundary clock from taking part in the BMCA and prevents unauthorised endpoints from taking over.
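To illustrate the domain-number advice, here's a minimal sketch of the kind of sanity check you could build into a provisioning script. The defaults listed (0 for IEEE 1588 and AES67, 127 for SMPTE ST 2059-2) are the usual out-of-the-box values a half-configured device is likely to wake up on:

```python
# Default PTP domains that factory-fresh or half-configured kit tends to use.
KNOWN_DEFAULT_DOMAINS = {
    0: "IEEE 1588 / AES67 default",
    127: "SMPTE ST 2059-2 default",
}

def check_domain(domain: int) -> int:
    """Reject domain numbers that unconfigured devices would join by default."""
    if domain in KNOWN_DEFAULT_DOMAINS:
        raise ValueError(
            f"domain {domain} is the {KNOWN_DEFAULT_DOMAINS[domain]}; "
            "pick a non-default value so unconfigured gear can't join your domain"
        )
    return domain
```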

Gerard lays out the ways to do ‘proper commissioning’, which is how you verify, at the beginning, that your PTP network is working well, meaning you have designed and built your system correctly. Unfortunately, PTP can appear to be working properly when in reality it is not, whether because of design, device behaviour, configuration or simply bugs. To account for this, Gerard advocates separate checklists for GMs, switches and media nodes, each with a list of items to check… and this will be a long list. Commissioning should include monitoring the PTP traffic, and taking a packet capture, for a couple of days for analysis with test and measurement gear or simply Wireshark.
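As a starting point for that capture, here's a sketch using the Python scapy library. The interface name and duration are placeholders to adjust for your system; PTP's well-known UDP ports 319 (event) and 320 (general) apply to the common UDP/IPv4 transport:

```python
from scapy.all import sniff, wrpcap  # pip install scapy

# PTP over UDP/IPv4 uses port 319 for event and 320 for general messages.
packets = sniff(filter="udp port 319 or udp port 320",
                iface="eth0",      # capture interface: adjust for your system
                timeout=3600)      # one hour; repeat/rotate for multi-day runs
wrpcap("ptp-commissioning.pcap", packets)
print(f"captured {len(packets)} PTP packets for offline analysis in Wireshark")
```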

Leigh finishes up the video talking about verifying functionality during redundancy switches and on power-up. Commissioning is your chance to characterise the behaviour of the system in these transitory states and to observe how attached equipment is affected. His last point before summarising is to implement a PTP monitoring solution to capture the critical parameters and to detect changes in the system. SMPTE RP 2059-15 will define parameters to monitor, with the aim that monitoring will provide consistent metrics across vendors. Also, a new version of IEEE 1588, version 2.1, will add monitoring features that should aid in actively monitoring the timing in your ST 2110 system.

This Arista white paper contains further detail on many of these best practices.

Watch now!
Speakers

Gerard Phillips
Solutions Engineer,
Arista
Leigh Whitcomb
Principal Engineer,
Imagine Communications
Mike Waidson
Application Engineer,
Telestream

Video: Live Production Forecast: Cloudy for the Foreseeable Future

Our ability to work remotely during the pandemic is thanks to the hard work of many people who have developed the technologies which have made it possible. Even before the pandemic struck, this work was already under way and gaining momentum to overcome more of the challenges of working in IP, both within the broadcast facility and in the cloud.

SMPTE’s Paul Briscoe moderates the discussion surrounding these ongoing efforts to make the cloud a better place for broadcasters in this series of presentations from the SMPTE Toronto section. First up is Peter Wharton from TAG V.S. talking about ways to innovate workflows to better suit the cloud.

Peter first outlines the challenges of live cloud production, namely keeping latency low and signal quality high while managing the high bandwidths needed, all while keeping a handle on the costs. There is an increasing number of cloud-native solutions, but how many are truly innovating? Don’t just move workflows into the cloud, advocates Peter; rather, take this opportunity to truly embrace the cloud.

Working in the cloud will be built on new transport interfaces like RIST and SRT, using a modular and open architecture. Scalability is the name of the game for ‘the cloud’, but the real trick is in building your workflows and technology so that you can scale during a live event.

Source: TAG V.S.

There are still obstacles to be overcome. Bandwidth for uncompressed video is one, with typical signals up to 3Gbps uncompressed, which drives very high data transfer costs. The lack of PTP in the cloud makes ST 2110 workflows difficult, as does the lack of multicast.
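To put a number on those transfer costs, here's a back-of-envelope calculation. The per-gigabyte egress price is a hypothetical placeholder, as real rates vary by provider, region and volume:

```python
# Back-of-envelope transfer cost for one uncompressed 3 Gbps feed.
bitrate_gbps = 3.0
egress_per_gb = 0.09  # hypothetical $/GB: substitute your provider's actual rate

gigabytes_per_hour = bitrate_gbps / 8 * 3600   # bits/s -> GB moved in one hour
cost_per_hour = gigabytes_per_hour * egress_per_gb
print(f"{gigabytes_per_hour:,.0f} GB/hour -> ${cost_per_hour:,.2f}/hour")
# ≈ 1,350 GB/hour per feed: real money before you've produced anything
```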

Tackling bandwidth, Peter looks at the low-latency ways to compress video such as NDI, NDI|HX, JPEG XS and Amazon’s lossless CDI. Peter talks us through some of the considerations in choosing the right codec for the task in hand.

Finishing his talk, Peter asks whether this isn’t the time for a radical change. Why not rethink the entire process and embrace latency? Peter gives an example of a colour grading workflow which has been able to switch from on-prem grading on very high-spec computers to running this same, incredibly intensive process in the cloud. The company is able to spin up thousands of CPUs in the cloud and use spot pricing to create temporary, low-cost, extremely powerful computers. This has significantly reduced waiting times for jobs to be processed and has cut the cost of processing by an order of magnitude.

Lastly, Peter looks further into the future, examining how saturating the stadium with cameras could change the way we operate them. With 360-degree coverage of the stadium, the position of the camera can be changed virtually by AI, allowing camera operators to be remote from the stadium. Canon and Intel are already working to develop this. Whilst this may not be able to replace all camera operators, sports is the home of bleeding-edge technology. How long can it resist the technology to create any camera angle?

Source: intoPIX

Jean-Baptiste Lorent is next, from intoPIX, to explain what JPEG XS is. A new, ultra-low-latency codec, it meets the challenges of the industry’s move to IP, its increasing desire to move data rather than people and the continuing trend for COTS servers and cloud infrastructure to be part of the real-time production chain.

As Peter covered, uncompressed data rates are very high. The Tokyo Olympics will be filmed in 8K which racks up close to 80Gbps for 120fps footage. So with JPEG XS standing for Xtra Small and Xtra Speed, it’s no surprise that this new ISO standard is being leant on to help.
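That 80Gbps figure is easy to reproduce. A quick calculation, assuming 4:2:2 10-bit sampling (20 bits per pixel) for the active picture:

```python
# Where "close to 80 Gbps" comes from: 8K, 4:2:2 10-bit, 120 fps.
width, height, fps = 7680, 4320, 120
bits_per_pixel = 20          # 10-bit 4:2:2: Y plus half-rate Cb and Cr per pixel

gbps = width * height * bits_per_pixel * fps / 1e9
print(f"{gbps:.1f} Gbps of active video")   # ≈ 79.6 Gbps before any overheads
```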

Tested as visually lossless to 7 or more encode generations, and with latency of only a few lines of video, JPEG XS works well in multi-stage live workflows. Jean-Baptiste explains that it’s low in complexity and works well on FPGAs and CPUs alike.

JPEG XS can support up to 16-bit values, any chroma subsampling and any colour space. It’s been standardised for carriage in MPEG transport streams, in SMPTE ST 2110 as ST 2110-22, over RTP (pending), within HEIF file containers and more. Worst-case bitrates are 200Mbps for 1080i, 390Mbps for 1080p60 and 1.4Gbps for 2160p60.

Evolution of Standards-Based IP Workflows Ground-To-Cloud

Last in the presentations is John Mailhot from Imagine Communications, who is also co-chair of an activity group at the VSF working on standardising interfaces for passing media from place to place. Within the data plane, it would be better to avoid vendors repeatedly writing similar drivers. Between ground and cloud, how do we standardise the arrival of video and the data needed around it? Standardising around new technologies like Amazon’s CDI is similarly important.

John outlines the aim of having an interoperability point within the cloud above the low-level data transfer, closer to layer 7 than layer 1 in the OSI model. This work is being done within AIMS, VSF, SMPTE and other organisations, based on existing technologies.

Q&A
The video finishes with a Q&A and includes comments from AWS’s Evan Statton whose talk on CDI that evening is not part of this video. The questions cover comparing NDI with JPEG XS, how CDI uses networking to achieve high bandwidths and high reliability, the balance between minimising network and minimising CPU depending on workflow, the increasingly agile nature of broadcast infrastructure, the need for PTP in the cloud plus the pros and cons of standards versus specifications.

Watch now!
Speakers

Peter Wharton
Director Corporate Strategy, TAG V.S.
President, Happy Robotz
Vice President of Membership, SMPTE
Jean-Baptiste Lorent
Director Marketing & Sales,
intoPIX
John Mailhot
Co-Chair, Ground-Cloud-Cloud-Ground Activity Group, VSF
Director & NMOS Steering Member, AMWA
Systems Architect for IP Convergence, Imagine Communications
Moderator: Paul Briscoe
Canadian Regional Governor, SMPTE
Consultant, Televisionary Consulting
Evan Statton
Principal Architect, Media & Entertainment,
Amazon Web Services

Video: IP-based Networks for UHD and HDR Video

If you were given a video signal, would you know what type it was? Life used to be simple: an SD signal would decode in a waveform monitor and you’d see which type it was. Now, with UHD and HDR, this isn’t all the information you need. Arguably this gets easier with IP, and it’s possibly one of the few things that does. This video from AIMS helps to clear up why IP’s the best choice for UHD and HDR.

John Mailhot from Imagine Communications joins Wes Simpson from LearnIPVideo.com to introduce us to the difficulties wrangling with UHD and HDR video. Reflecting on the continued improvement of in-home displays’ ability to show brighter and better pictures as well as the broadcast cameras’ ability to capture much more dynamic range, John’s work at Imagine is focussed on helping broadcasters ensure their infrastructure can enable these high dynamic range experiences. Streaming services have a slightly easier time delivering HDR to end-users as they are in complete control of the distribution chain whereas often in broadcast, particularly with affiliates, there are many points in the chain which need to be HDR/UHD capable.

John starts by looking at how UHD was implemented in the early stages. UHD, with twice the horizontal and twice the vertical resolution of HD, is usually seen as 4×HD. Importantly, though, John points out that while this is true for resolution, most HD is 1080i, so UHD also represents a move to 1080p, 3Gbps signals. John’s point is that this is a strain on the infrastructure which was not necessarily tested for initially. Since the UHD signal was initially carried over four cables, there was four times the chance of a signal impairment due to cabling.

Square Division Multiplexing (SQD) is the ‘most obvious’ way to carry UHD signals with existing HD infrastructure. The picture is simply cut into four quarters and each quarter is sent down one cable. The benefit here is that it’s easy to see which order the cables need to be connected to the equipment. The downsides include a frame-buffer delay (half a frame) each time the signal is received and difficulty preventing drift between quadrants if they are treated differently by the infrastructure (i.e. there is a non-synced hand-off). One important problem is that there is no way to know whether an HD feed is one quadrant of a UHD set or just a lone 3G signal.

2SI, two-sample interleave, was another method of splitting up the signal, which was standardised by SMPTE. This worked by taking a pair of samples and sending them down cable 1, then the next pair down cable 2; the pair of samples directly below the first pair went down cable 3 and the pair below the second went down cable 4. This had the happy benefit that each cable held a complete picture, albeit crudely downsampled, which for monitoring applications means you can DA one feed and send it to a monitor. Well, that would have been possible except for the problem that each signal had to maintain 400ns timing with the others, which meant DAs often broke the timing budget if they reclocked. It did, however, remove the half-field latency burden which SQD carries. The main confounding factor in this mechanism is that looking at the video from any one cable on a monitor isn’t enough to tell which of the four feeds you are looking at. Mis-cabling equipment leads to subtle visual errors which are hard to spot and hard to correct.
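The difference between the two splits is easy to see in code. Here's a simplified numpy sketch of both mappings (the exact 2SI link assignment is defined by SMPTE, so treat this pairing as illustrative): each SQD link is a quadrant, while each 2SI link is a complete quarter-resolution picture:

```python
import numpy as np

def sqd_links(frame):
    """Square division: each link carries one quadrant of the picture."""
    h, w = frame.shape[:2]
    return [frame[:h//2, :w//2], frame[:h//2, w//2:],
            frame[h//2:, :w//2], frame[h//2:, w//2:]]

def two_si_links(frame):
    """Two-sample interleave (simplified): sample pairs alternate between
    links, with the next line feeding the other two links, so every link
    carries a complete quarter-resolution image."""
    h, w = frame.shape[:2]
    pairs = frame.reshape(h, w // 2, 2)           # group samples into pairs
    return [pairs[r::2, p::2].reshape(h // 2, w // 2)   # r: line, p: pair phase
            for r in (0, 1) for p in (0, 1)]

frame = np.arange(2160 * 3840, dtype=np.uint16).reshape(2160, 3840)
print(sqd_links(frame)[0].shape, two_si_links(frame)[0].shape)  # both (1080, 1920)
```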

Enter the VPID, the Video Payload ID. SD SDI didn’t require this, HD often had it, but for UHD it became essential. SMPTE ST 425-5:2019 is the latest document explaining payload ID for UHD. As it’s version five, you should be aware that older equipment may not parse the information correctly, whether a) because of a bug or b) because it implements an older version of the standard. The VPID carries information such as interlaced/progressive, aspect ratio, transfer characteristics (HLG, SDR etc.), frame rate and more. John talks through some of the common mismatches in interpretation and implementation of VPID.

12G is the obvious baseband solution to the four-wires problem of UHD. Nowadays the cost of a 12G transceiver is only slightly more than that of a 3G one, so 12G is a very reasonable solution for many. It does require careful cabling to ensure the cable is in good condition and not too long. For OB trucks and small projects, 12G can work well. For larger installations, optical connections are needed, one signal per fibre.

The move to IP initially went to SMPTE ST 2022-6, which is a mapping of SDI onto IP. This meant it was still quite restrictive, as we were still living within the SDI-described world. 12G was difficult to do, and getting four IP streams correctly aligned, and all switched on time, was also impractical. For UHD, therefore, SMPTE ST 2110 is the natural home. 2110 can support up to 32K, so UHD fits in well. ST 2110-22 allows the use of JPEG XS, so if the 9-11Gbps bitrate of UHD p50/60 is too much, it can be squeezed down to 1.5Gbps with almost no latency. Being carried as a single video flow removes any switch-timing problems and, as 2110 doesn’t use VPID, there is much more flexibility to fully describe the signal, allowing future growth. We don’t know what’s to come, but whether it’s different shapes of video raster, new colour spaces or extensions needed for IPMX, these are all possible.

John finishes his conversation with Wes by mentioning two big benefits of moving to IT-based infrastructure. One is the ability to use the free Wireshark or EBU LIST tools to analyse video. Whilst there are still good reasons to buy test equipment, the fact that many checks can be done without expensive equipment like waveform monitors is good news. The second big benefit is that whilst these standards were being made, the available network infrastructure has moved from 25 to 100 to 400Gbps links, with 800Gbps coming in the next year or two. None of these changes has required any change to the standards, unlike SDI, where improvements in the signal required improvements in the baseband infrastructure. Rather, the industry is able to take advantage of this new infrastructure with no effort on our part to develop it or modify the standards.

Watch now!
Speakers

John Mailhot
Systems Architect, IP Convergence,
Imagine Communications
Wes Simpson
RIST AG Co-Chair, VSF
President & Founder, LearnIPvideo.com

Video: Keeping Time with PTP

Different from his talk of the same name that we covered last week, Mike Waidson from Telestream explains the fundamentals of PTP, joined by Leigh Whitcomb from Imagine Communications and Robert Welch from Arista. Very few PTP talks include a live BMCA quiz; plus, with more time than the IP Showcase talks, this is a well-paced, deep look into the basics.

Mike starts by reviewing how time has come to be measured ever more accurately, with atomic clocks now the typical reference. In the TV domain, analogue video used black and burst (B&B) signals, which carried frequency information in the subcarrier, allowing equipment to lock frequency and keep in sync with other signals. NTP has allowed computers and routers on IP networks to lock to time sources with sub-millisecond synchronisation over LANs. Now we have IEEE 1588 PTP, which harnesses hardware timestamping to provide sub-microsecond precision.

Traditionally an SPG would create many different synchronising signals, distributed by DAs. With PTP, however, the idea is to put a single time signal onto the network (alongside the older signals if necessary), though the important thing to remember is that PTP both sends data to and receives data from the endpoints. GPS comprises 31 active satellites, of which only 4 are needed for a lock, but other systems such as the Russian GLONASS, the Chinese BeiDou navigation system or the European Galileo can also be used, sometimes in conjunction with each other, to improve locking speed or give resilience.

Mike and his co-hosts give an overview of the standards that make all this possible, starting with the PTP standard itself, IEEE 1588-2019, which is built upon by SMPTE ST 2059. The latter is a pair of standards that together ensure broadcast devices can usefully harness PTP, a general, cross-industry standard, and track all signals back to a single point in time in 1970. Whilst this may seem extreme, the benefit is that if we know all possible types of signal were in phase at this one point in time, we can extrapolate how each signal should be phased now and use that information to synchronise the system. Upcoming for PTP, we hear, are standardised ways to monitor it plus additional security around the standard.
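That extrapolation is simple arithmetic. Here's a sketch of the idea behind ST 2059-1: compute how far into the current frame we are from a PTP timestamp, using exact fractions for the 29.97 fps frame rate:

```python
from fractions import Fraction

def frame_phase(ptp_time_ns: int, frame_rate=Fraction(30000, 1001)):
    """Where in the current frame we are, given a PTP timestamp in nanoseconds.

    ST 2059-1's premise: all signals were in phase at the SMPTE epoch
    (1970-01-01 TAI), so phase now is just elapsed time mod the signal period.
    """
    t = Fraction(ptp_time_ns, 10**9)       # exact seconds since the epoch
    frames = t * frame_rate
    frames_elapsed = int(frames)           # whole frames since the epoch
    phase_s = (frames - frames_elapsed) / frame_rate
    return frames_elapsed, float(phase_s)  # phase in seconds into this frame

print(frame_phase(1_700_000_000_000_000_000))  # example timestamp
```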

The next section looks at the types of grandmaster and the fact that each clock works in its own domain. Typically, all of your system will be in the same domain, but if you have incompatible situations, such as older Dante networks, or if you want a testing environment, you can use domains to separate your equipment. The default domain, as defined by SMPTE ST 2059-2, is 127.

Mike then looks at the different PTP message types: Announce, Sync & Follow Up, Delay Request, Delay Response and Management messages (broadcast information, leap seconds, time zone etc.). He then brings some of these up in Wireshark and talks us through the structure and what can be found within.
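For following along in Wireshark, here's a small sketch that pulls the same key fields out of the IEEE 1588 common header by hand, e.g. from the UDP payload of a captured port-319 packet:

```python
MESSAGE_TYPES = {0x0: "Sync", 0x1: "Delay_Req", 0x8: "Follow_Up",
                 0x9: "Delay_Resp", 0xB: "Announce", 0xD: "Management"}

def parse_ptp_header(payload: bytes):
    """Extract the key fields of the IEEE 1588 common header
    (the same fields Wireshark shows at the top of each PTP packet)."""
    msg_type = payload[0] & 0x0F               # low nibble of octet 0
    version = payload[1] & 0x0F                # PTP version, octet 1
    length = int.from_bytes(payload[2:4], "big")
    domain = payload[4]                        # domainNumber, octet 4
    return (MESSAGE_TYPES.get(msg_type, f"other ({msg_type:#x})"),
            version, length, domain)
```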

The most original part of the talk is the live walkthrough of three different scenarios where Leigh and Robert talk through their thinking on which clock will be the grandmaster and for what reason. This comes down to their understanding of the order of precedence of the metrics such as the manually-allotted priority, then the class of clock, clock accuracy and other values. One value worth remembering is that if your clock is locked to GPS it will have a class of 6, but if it then loses lock, it will become 7.
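The precedence they work through can be captured as a simple ordered comparison. Here's a simplified sketch (it ignores steps-removed and the topology tie-breaks of the full algorithm, and the accuracy, variance and identity values are illustrative), showing the GPS-locked class 6 clock beating the class 7 clock in holdover:

```python
def bmca_key(clock: dict):
    """Simplified BMCA dataset comparison: lower wins, checked in this order."""
    return (clock["priority1"], clock["clock_class"], clock["clock_accuracy"],
            clock["log_variance"], clock["priority2"], clock["identity"])

gps_locked = dict(priority1=128, clock_class=6, clock_accuracy=0x21,
                  log_variance=0x4E5D, priority2=128,
                  identity="ec4670fffe000001")   # example clock identity
in_holdover = dict(priority1=128, clock_class=7, clock_accuracy=0x21,
                   log_variance=0x4E5D, priority2=128,
                   identity="ec4670fffe000002")  # lost GPS: class 6 -> 7

best = min([gps_locked, in_holdover], key=bmca_key)
print(best["identity"])   # the class-6 (GPS-locked) clock wins
```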

PTP talks are not complete without an explanation of the sync message exchanges needed to actually determine the time (and the relative delays needed to compute it), as well as the secondary clock types, boundary and transparent. Boundary clocks take on much of the two-way traffic in PTP, protecting the grandmasters from having to speak directly to the potentially thousands of devices. Switches acting as transparent clocks simply update the time announcements with the delay the message experiences moving through the switch. Whilst this is useful in keeping the timing accurate, it provides no protection for the grandmasters.
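The exchange itself reduces to two subtractions. Here's a sketch of the standard offset and mean-path-delay calculation from the four timestamps, which assumes a symmetric path (asymmetry shows up as a fixed timing error):

```python
def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Classic PTP exchange: t1 = Sync leaves the GM, t2 = Sync arrives at the
    follower, t3 = Delay_Req leaves the follower, t4 = Delay_Req arrives at
    the GM. All values in seconds."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # follower clock error vs the GM
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way mean path delay
    return offset, delay

# A follower running 50 µs fast across a 10 µs path:
print(offset_and_delay(0.0, 60e-6, 100e-6, 60e-6))  # -> (5e-05, 1e-05)
```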

Before the talk finishes with a Q&A, the team explain the difference between operating in unicast and multicast, prioritising PTP traffic using differentiated services (DSCP) and adding redundancy to the PTP system.

Watch now!
Free registration required
Speakers

Robert Welch
Technical Solutions Lead,
Arista
Leigh Whitcomb
Principal Engineer,
Imagine Communications
Mike Waidson
Application Engineer,
Telestream