Video: As Time Goes by…Precision Time Protocol in the Emerging Broadcast Networks

How much timing accuracy do you need? PTP can deliver timing accurate to nanoseconds, but is that needed, how can you transport it, and how does it work? These questions and more are under the microscope in this video from RTS Thames Valley.

SMPTE Standards Vice President Bruce Devlin introduces the two main speakers by reminding us why we need timing and how we dealt with it in the past. Looking back to the genesis of television, Bruce points out, everything was analogue and it was almost impossible to delay a signal at all. An 8cm, tightly wound coil of copper would give you only 450 nanoseconds of delay; alternatively, quartz crystals could be used to create delays. In the analogue world, these delays were used to time signals and, since little could be delayed, only small adjustments were necessary. Bruce’s point is that we’ve now swapped around. Delays are everywhere because IP signals need to be buffered at every interface. It’s easy to find buffers that you didn’t know about, and even small ones really add up. Whereas analogue TV got us from camera to TV within microseconds, it’s now a struggle to get below two seconds.

Hand in hand with this change is the move from metadata and control data being embedded in the video signal – and hence synchronised with it – to all data being sent separately. This is where PTP, the Precision Time Protocol, comes in: an IP-based timing mechanism which can keep time despite the buffers and allow signals to be synchronised.

Next to speak is Richard Hoptroff, whose company works with broadcasters and financial services to provide accurate time derived from four satellite services (GPS, GLONASS, etc.) and the Swedish time authority RISE. They have been working on the problem of delivering time to people who can’t put up antennas, either because they are operating in an AWS datacentre or broadcasting from an underground car park. Delivering time over a wired network, Richard points out, is much more practical as it’s not susceptible to jamming and spoofing, unlike GPS.

Richard outlines SMPTE’s ST 2059-2 standard, which says that a local system should maintain accuracy to within 1 microsecond. The JT-NM TR-1001-1 specification calls for a maximum of 100ms between facilities; however, Richard points out that, in practice, 1ms or even 10 microseconds is highly desired. In tests, he shows that PTP unicast looping around western Europe at layer 2 was able to stay within 1 microsecond, and at layer 3 within 10 microseconds. Over the internet with a VPN, Richard says he’s seen around 40 microseconds, which would then feed into a boundary clock at the receiving site.

Summing up, Richard points out that delivering PTP over a wired network can provide great timing on an OPEX budget, without needing on-site timing hardware. On top of that, you can use it to add resilience to any existing GPS timing.

Gerard Phillips from Arista speaks next to explain some of the basics of how PTP works. If you are interested in digging deeper, check out this talk on PTP from Arista’s Robert Welch.

Already in use by many industries including finance, power and telecoms, PTP is based on IEEE 1588, allowing synchronisation down to tens of nanoseconds. Just sending a timestamp out to the network would be a problem because jitter is inherent in networks; it’s part and parcel of how switches work. Dealing with the timing variations as smaller packets wait for larger packets to get out of the way is part of the job of PTP.

To do this, the main clock – called the grandmaster – sends out the time to everyone 8 times a second. This means that all the devices on the network, known as endpoints, will know what time it was when the message was sent. They still won’t know the actual time because they don’t know how long the message took to get to them. To determine this, each endpoint has to send a message back to the grandmaster. This is called a delay request. All that happens here is that the grandmaster replies with the time it received the message.

PTP Primary-Secondary Message Exchange.
Source: Meinberg [link]

This gives us four points in time. The first (t1) is when the grandmaster sent out the first message. The second (t2) is when the device received it. t3 is when the endpoint sent out its delay request and t4 is when the grandmaster received that request. The difference between t2 and t1 reflects how long the original message took to arrive, plus the offset between the two clocks; similarly, t4-t3 gives that information in the other direction. Assuming the path delay is the same in both directions, these can be combined to derive both the delay and the clock offset. For more info, either check out Arista’s talk on the topic or this talk from RAVENNA and Meinberg, from which the figure above comes.
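The arithmetic behind this exchange is short enough to sketch. Below is a minimal Python illustration of the classic IEEE 1588 calculation; the timestamp values are illustrative, not from the video.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Derive clock offset and path delay from the four PTP timestamps.

    t1: grandmaster sends Sync, t2: endpoint receives it,
    t3: endpoint sends Delay_Req, t4: grandmaster receives it.
    Assumes the path delay is symmetric in both directions.
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset_from_master = ((t2 - t1) - (t4 - t3)) / 2
    return offset_from_master, mean_path_delay

# Example in nanoseconds: the endpoint clock runs 500 ns ahead of the
# grandmaster and the one-way network delay is 1000 ns.
offset, delay = ptp_offset_and_delay(t1=0, t2=1500, t3=2000, t4=2500)
# offset == 500.0, delay == 1000.0
```

The endpoint then slews its clock by the computed offset; if the path is asymmetric, the error ends up as half the asymmetry, which is why media networks try to keep both directions on the same links.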

Gerard briefly gives an overview of boundary clocks, which act as secondary time sources, taking pressure off the main grandmaster(s) so they don’t have to deal with thousands of delay requests. They also solve a problem with the jitter of signals passing through switches, as it’s usually the switch itself which is the boundary clock. Alternatively, transparent clock switches simply pass on the PTP messages, but they update them to take account of how long the message took to travel through the switch. Gerard recommends using only one type in a single system.
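A transparent clock’s job can be sketched in a couple of lines: it measures each PTP message’s residence time in the switch and adds it to the message’s correction field, so downstream endpoints can subtract switch queuing from their delay maths. The function and values below are illustrative.

```python
def forward_through_transparent_clock(correction_ns, ingress_ns, egress_ns):
    """Add this switch's residence time to the accumulated correction
    carried in the PTP message (all values in nanoseconds)."""
    residence_ns = egress_ns - ingress_ns
    return correction_ns + residence_ns

# A Sync message arrives carrying 250 ns of correction accumulated by
# upstream switches, then waits 1200 ns in this switch's queues:
new_correction = forward_through_transparent_clock(250, 10_000, 11_200)
# new_correction == 1450
```

This is why every switch in the path needs to be PTP-aware: one non-participating switch adds unmeasured, jittery queuing delay that no endpoint can correct for.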

Referring back to Bruce’s opening, Gerard highlights the need to monitor the PTP system. Black and burst timing didn’t need monitoring: as long as the main clock was happy, the DAs downstream just did their job and occasionally needed replacing. PTP, by contrast, is a system with bidirectional communication which changes depending on network conditions. Gerard makes a plea to build monitoring into your solution to provide visibility into how it’s working, because as soon as there’s a problem with PTP, there could quickly be major problems elsewhere. Network switches themselves can provide a lot of telemetry here, showing you delay values and allowing you to see grandmaster changes.

Gerard’s ‘Lessons Learnt’ list features locking down PTP so only a few ports are actually allowed to provide time information to the network, dealing carefully with audio protocols like Dante which need PTP version 1 domains, and making sure all switches are PTP-aware.

The video finishes with Q&A after a quick summary of SMPTE RP 2059-15 which is aiming to standardise telemetry reporting on PTP and associated information. Questions from the audience include asking how easy it is to do inter-continental PTP, whether the internet is prone to asymmetrical paths and how to deal with PTP in the cloud.

Watch now!
Speakers

Bruce Devlin
Standards Vice President,
SMPTE
Gerard Phillips
Systems Engineer,
Arista
Richard Hoptroff
Founder and CTO
Hoptroff London Ltd

Video: The Fundamentals of Virtualisation

Virtualisation is continuing to be a driving factor in the modernisation of broadcast workflows both from the technical perspective of freeing functionality from bespoke hardware and from the commercial perspective of maximising ROI by increasing utilisation of infrastructure. Virtualisation itself is not new, but using it in broadcast is still new to many and the technology continues to advance to deal with modern bitrate and computation requirements.

In these two videos, Tyler Kern speaks to Mellanox’s Richard Hastie, NVIDIA’s Jeremy Krinitt and Ross Video’s John Naylor about how virtualisation fits with SMPTE ST 2110 and real-time video workflows.

Richard Hastie explains that agility is the name of the game when software is separated from hardware. Suddenly your workflow can, in principle, be deployed anywhere and has the freedom to move within the same infrastructure. This opens up the move to the cloud or to centralised hosting with people working remotely. One of the benefits is the ability to have a pile of servers and continually repurpose them throughout the day. Rather than discrete boxes which only do a few tasks, often going unused, you can have a quota of compute which is used much more efficiently, so the return on investment is higher, as is the overall value to the company. As an example, this principle is at the heart of Discovery’s transition of Eurosport to ST 2110 and JPEG XS. They have centralised all equipment, allowing the many countries around Europe which have production facilities to produce remotely from one heavily utilised set of equipment.

Part I

John Naylor explains the recent advancements virtualisation has brought to the broadcast market. vMotion from VMware allows live migration of virtual machines without loss of performance; when you’re running real-time graphics, this is really important. GPUs are also vital for graphics and video tasks. In the past, it’s been difficult for VMs to have full access to GPUs, but now not only is that practical, work has also happened to allow a GPU to be broken up and the reserved partitions dedicated to individual VMs using the NVIDIA Ampere architecture.
John continues by saying that VMware has recently focussed on the media space to allow better tuning of the hypervisor. When looking to deploy VM infrastructures, John recommends that end-users work closely with their partners to tune not only the hypervisor but also the OS, NIC firmware and the BIOS itself to deliver the performance needed.

“Timing is the number one challenge to the use of virtualisation in broadcast production at the moment”

Richard Hastie

Mellanox, now part of NVIDIA, has continued improving its ConnectX network cards, according to Richard Hastie, to deal with the high-bandwidth scenarios that uncompressed production throws up. These network cards now have onboard support for ST 2110, traffic shaping and PTP. Without hardware PTP, getting 500-nanosecond-accurate timing into a VM is difficult. Mellanox also uses SR-IOV, a technology which bypasses the software switch in the hypervisor, reducing I/O overhead and bringing performance close to non-virtualised levels. It does this by partitioning the PCIe bus, meaning one NIC can present itself multiple times to the computer; whilst the NIC is shared, the software has direct access to it. For more information on SR-IOV, have a look at this article and this summary from Microsoft.

Part II

Looking to the future, the panel sees virtualisation supporting the deployment of uncompressed ST 2110 and JPEG XS workflows, enabling a growing number of virtual productions. And, for virtualisation itself, a move down from OS-level virtualisation to containerised microservices. Not only can these be more efficient but, if managed by an orchestration layer, they allow processing to move to the ‘edge’. This should allow some logic to happen much closer to the end-user at the same time as allowing the main computation to be centralised.

Watch part I and part II now!
Speakers

Tyler Kern
Moderator
John Naylor
Technology Strategist & Director of Product Security
Ross
Richard Hastie
Senior Sales Director, Business Development
NVIDIA
Jeremy Krinitt
Senior Developer Relations Manager
NVIDIA

Video: ST-2110 – Measuring and Testing the Data, Control and Timing Planes

Today’s video is an informal chat touching on the newest work around the SMPTE ST 2110 standards and related specifications. The industry’s leading projects are now tracking the best practices in IT as much as the latest technology in IP, because simply getting video working over the network isn’t enough. Broadcasters demand solutions which are secure from the ground up, easy to deploy and have nuanced options for deployment.

Andy Rayner from Nevion talks to Prin Boon from Phabrix to understand the latest trends. Between them, Andy and Prin account for a lot of activity within standards and industry bodies such as SMPTE, VSF and JT-NM, to name but a few, so who better to hear from regarding the latest thinking and ongoing work.

Andy starts by outlining the context of SMPTE’s ST 2110 suite, which covers not only the standards within 2110 but also the NMOS specifications from AMWA as well as the timing standards (SMPTE ST 2059 and IEEE 1588). Prin and Andy agree that the initial benefit of moving to IT networking was the massive network switches, which now deliver much higher switching density than SDI ever could or would; now the work of 2110 projects is also tracking IT, rather than simply IP. By benefiting from the best practices of the IT industry as a whole, the broadcast industry is getting a much better product. Andy makes the point that broadcast users have very much pushed fabric manufacturers to implement PTP and other network technologies in a much more mature and scalable way than was imagined before.

Link to video

The focus of conversation now moves to the data, control and timing planes. The data plane contains the media essences and all of the ST 2110 standards. Control is about the AMWA NMOS specs, such as the IS-0X specs, as well as the security-focused BCP-003 and JT-NM TR-1001. Timing is about PTP and its associated guidelines.

Prin explains that in-service test and measurement is there to give a feeling for the health of a system: how close to the edge is it? This is about early alerting of engineering specialists, followed by deep fault-finding with hand-held 2110 analysers. Phabrix, owned by Leader, are one of a number of companies creating monitoring and measurement tools. In doing this, Willem Vermost observed that little of the vendors’ data was aligned, so it couldn’t be compared. This has directly led to work between many vendors and broadcasters to standardise the reported measurement data, in terms of both how it’s measured and how it’s named, under 2110-25. This will cover latency, video timing, margin and RTP offset.

More new work discussed by the duo includes the recommended practice RP 2059-15, which is related to the ST 2059 standards that apply PTP to media streams. As PTP, also known as IEEE 1588, was updated to version 2.1 in the 2019 revision, this RP creates a unified framework to expose PTP data in a structured manner. It relies on RFC 8575 which, itself, uses the YANG data modelling language.

We also hear about work to ensure that NMOS can fully deal with SMPTE ST 2022-7 flows in all the cases where a receiver is expecting a single or dual feed. IS-08 corner cases have been addressed and an all-encompassing reference model to develop against has been created.

Pleasingly, as this video was released in December, we are treated to a live performance of a festive song on piano and trombone. Whilst this doesn’t progress the 2110 narrative, it is welcomed as a great excuse to have a mince pie.

Watch now!
Speakers

Andy Rayner
Chief Technologist,
Nevion
Prinyar Boon
Product Manager,
PHABRIX

Video: Proper Network Designs and Considerations for SMPTE ST-2110

Networks for SMPTE ST 2110 systems can be fairly simple, but the simplicity achieved hides a whole heap of careful considerations. By asking the right questions at the outset, a flexible, scalable network can be built with relative ease.

“No two networks are the same,” cautions Robert Welch from Arista as he introduces the questions he asks at the beginning of designing a network to carry professional media such as uncompressed audio and video. His thinking focusses on the network interfaces (NICs) of the devices: How many are there? Which receive PTP? Which are for management, and how do you want out-of-band/iLO access managed? All of these answers feed into the workflows that are needed, influencing how the rest of the network is created. The philosophy is to work backwards from the end-nodes that receive the network traffic.

Robert then shows how these answers influence the different networks at play. For resilience, it’s common to have two separate networks sending the same media to each end-node. Each node then uses ST 2022-7 to take the packets it needs from both networks. This isn’t always possible, as some devices only have one interface or simply don’t have 2022-7 support. Sometimes equipment has two management interfaces, and that too can feed into the network design.
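The core of the ST 2022-7 receiver behaviour can be sketched simply: the same RTP stream arrives on both networks, and the receiver keeps the first copy of each sequence number it sees, so a packet lost on one network is filled in from the other. The packet representation below is illustrative, not the actual RTP format.

```python
from itertools import chain

def merge_2022_7(red_packets, blue_packets):
    """Merge two copies of a stream; each packet is a
    (sequence_number, payload) tuple and the first copy of each
    sequence number wins, duplicates being discarded."""
    seen = set()
    merged = []
    # A real receiver would process packets in true arrival order from
    # both sockets; chaining the two lists is enough for a sketch.
    for seq, payload in chain(red_packets, blue_packets):
        if seq not in seen:
            seen.add(seq)
            merged.append((seq, payload))
    return sorted(merged)

red  = [(1, "a"), (3, "c")]            # packet 2 lost on the red network
blue = [(1, "a"), (2, "b"), (3, "c")]  # blue copy arrives intact
stream = merge_2022_7(red, blue)       # [(1, "a"), (2, "b"), (3, "c")]
```

Because the merge is per-packet, protection is hitless: no switchover event is needed, which is why both networks must carry the full bandwidth at all times.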

PTP is an essential service for professional media networks, so Robert discusses some aspects of implementation. When you have two networks delivering the same media simultaneously, they will both need PTP. For resilience, a network should operate with at least two grandmasters – and usually two is the best number. Ideally, your two media networks will have no connection between them except for PTP, whereby the amber network can benefit from the PTP from the blue network’s grandmaster. Robert explains how to make this link a pure PTP-only link, stopping it from leaking other information between the networks.
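One way to enforce such a PTP-only link is an access list on the inter-network interface that permits only PTP traffic. The fragment below is a hypothetical sketch in a generic EOS/IOS-like syntax, not taken from the talk; PTP over UDP conventionally uses destination ports 319 (event) and 320 (general) and, in the default profile, multicast group 224.0.1.129. Verify names, syntax and your PTP profile’s addresses against your own platform.

```
! Hypothetical ACL: allow only PTP across the blue-to-amber link.
ip access-list PTP-ONLY
   permit udp any host 224.0.1.129 eq 319    ! PTP event messages
   permit udp any host 224.0.1.129 eq 320    ! PTP general messages
   deny ip any any

interface Ethernet48
   description Blue-to-amber PTP-only link
   ip access-group PTP-ONLY in
```

The final deny is what stops media, management or routing traffic leaking between the otherwise independent networks.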

Multicast is a vital technology for 2110 media production, so Robert looks at its incarnation at both layer 2 and layer 3. At layer 2, multicast is handled using multicast MAC addresses. This works well with snooping and a querier, except when it comes to scaling up to a large network or using a number of switches. Robert explains that this is because all multicast traffic needs to be sent through the rendezvous point. If you would like more detail on this, check out Arista’s Gerard Phillips’ talk on network architecture.

Looking at JT-NM TR-1001, the guidelines outlining best practices for deploying 2110 and associated technologies, Robert explains that multicast routing at layer 3 much improves stability and enables resilience and scalability. He also takes a close look at the difference between the ‘any source’ multicast supported by IGMP version 2 and the ability to filter for only specific sources using IGMP version 3.
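The difference between the two join types shows up directly in the structures an application hands to the kernel. The sketch below packs both membership requests in Python; the addresses are purely illustrative, and the `ip_mreq_source` field order shown is Linux’s, so check your platform before using it with a real `setsockopt` call.

```python
import socket
import struct

MCAST_GROUP = "239.1.1.1"     # hypothetical 2110 media multicast group
SOURCE_IP   = "192.168.10.5"  # hypothetical sender we want packets from
LOCAL_IF    = "0.0.0.0"       # let the kernel choose the interface

# IGMPv2-style any-source join: struct ip_mreq (group, interface),
# passed to setsockopt with IP_ADD_MEMBERSHIP. The receiver gets the
# group's traffic from every sender.
any_source = struct.pack("4s4s",
                         socket.inet_aton(MCAST_GROUP),
                         socket.inet_aton(LOCAL_IF))

# IGMPv3 source-specific join: struct ip_mreq_source, passed with
# IP_ADD_SOURCE_MEMBERSHIP. Only packets from SOURCE_IP are delivered,
# so an impostor sending to the same group is filtered at the network.
source_specific = struct.pack("4s4s4s",
                              socket.inet_aton(MCAST_GROUP),
                              socket.inet_aton(LOCAL_IF),
                              socket.inet_aton(SOURCE_IP))
```

Source filtering is what makes source-specific multicast possible at layer 3, removing the need for a rendezvous point and with it one of the scaling problems Robert describes.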

Finishing off, Robert talks about the difficulties in scaling PTP, since all the requests and replies go into the same multicast group, which means that as the network grows, so does the traffic in that group. This can be a problem for lower-end gear which has to process and reject a lot of traffic.
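A back-of-envelope sketch makes the scaling problem concrete. Assuming each endpoint sends delay requests at the same eight-per-second rate mentioned earlier (the endpoint counts are illustrative), and that each request draws a response into the same multicast group:

```python
def ptp_group_load(endpoints, rate_hz=8):
    """Messages per second landing in the shared multicast group:
    every delay request gets a delay response, and with multicast
    messaging each endpoint sees all of both."""
    requests = endpoints * rate_hz
    responses = requests
    return requests + responses

small = ptp_group_load(10)    # 160 messages/s - trivial to filter
large = ptp_group_load(1000)  # 16000 messages/s for EVERY device to discard
```

Boundary clocks fix this by terminating delay requests at the switch, so each device only ever converses with its local port rather than the whole network.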

Watch now!
Speaker

Robert Welch
Technical Solutions Lead
Arista Networks