Video: As Time Goes by…Precision Time Protocol in the Emerging Broadcast Networks

How much timing do you need? PTP can get you timing in the nanoseconds, but is that needed, how can you transport it and how does it work? These questions and more are under the microscope in this video from RTS Thames Valley.

SMPTE Standards Vice President Bruce Devlin introduces the two main speakers by reminding us why we need timing and how we dealt with it in the past. Looking back to the genesis of television, Bruce points out, everything was analogue and it was almost impossible to delay a signal at all. An 8cm, tightly wound coil of copper would give you only 450 nanoseconds of delay; alternatively, quartz crystals could be used to create delays. In the analogue world, these delays were used to time signals and, since little could be delayed, only small adjustments were necessary. Bruce’s point is that we’ve swapped around now. Delays are everywhere because IP signals need to be buffered at every interface. It’s easy to find buffers that you didn’t know about, and even small ones really add up. Whereas analogue TV got us from camera to TV within microseconds, it’s now a struggle to get below two seconds.

Hand in hand with this change is the change from metadata and control data being embedded in the video signal – and hence synchronised with the video signal – to all data being sent separately. This is where PTP, Precision Time Protocol, comes in. An IP-based timing mechanism that can keep time despite the buffers and allow signals to be synchronised.

Next to speak is Richard Hoptroff, whose company works with broadcasters and financial services to provide accurate time derived from four satellite services (GPS, GLONASS, etc.) and the Swedish time authority RISE. They have been working on the problem of delivering time to people who can’t put up antennas, either because they are operating in an AWS datacentre or because they’re broadcasting from an underground car park. Delivering time over a wired network, Richard points out, is much more practical as it’s not susceptible to jamming and spoofing, unlike GPS.

Richard outlines SMPTE’s ST 2059-2 standard, which says that a local system should maintain accuracy to within 1 microsecond. The JT-NM TR-1001-1 specification calls for a maximum of 100ms between facilities; however, Richard points out that, in practice, 1ms or even 10 microseconds is highly desirable. In tests, he shows that PTP unicast looping around western Europe stayed within 1 microsecond over layer 2 and within 10 microseconds over layer 3. Over the internet with a VPN, Richard says he’s seen around 40 microseconds, which would then feed into a boundary clock at the receiving site.

Summing up, Richard points out that delivering PTP over a wired network can provide great timing on an OPEX budget, without needing dedicated timing hardware. On top of that, you can use it to add resilience to any existing GPS timing.

Gerard Phillips from Arista speaks next to explain some of the basics of how PTP works. If you are interested in digging deeper, please check out this talk on PTP from Arista’s Robert Welch.

Already in use by many industries including finance, power and telecoms, PTP is based on IEEE 1588, allowing synchronisation down to tens of nanoseconds. Just sending out a timestamp to the network would be a problem because jitter is inherent in networks; it’s part and parcel of how switches work. Dealing with the timing variations as smaller packets wait for larger packets to get out of the way is part of the job of PTP.

To do this, the main clock – called the grandmaster – sends out the time to everyone 8 times a second. This means that all the devices on the network, known as endpoints, will know what time it was when the message was sent. They still won’t know the actual time because they don’t know how long the message took to get to them. To determine this, each endpoint has to send a message back to the grandmaster. This is called a delay request. All that happens here is that the grandmaster replies with the time it received the message.

PTP Primary-Secondary Message Exchange.
Source: Meinberg [link]

This gives us 4 points in time. The first (t1) is when the grandmaster sent out the first message. The second (t2) is when the device received it. t3 is when the endpoint sent out its delay request and t4 is the time when the master clock received that request. The difference between t2 and t1 indicates how long the original message took to get there. Similarly, t4-t3 gives that information in the other direction. These can be combined to derive the time. For more info either check out Arista’s talk on the topic or this talk from RAVENNA and Meinberg from which the figure above comes.
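
As a concrete illustration, here’s a minimal Python sketch of how those four timestamps combine into an offset and a path delay. It assumes the path delay is symmetric (which is the assumption PTP itself has to make), and the timestamp values are invented for the example.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Combine the four timestamps from one Sync / Delay_Req exchange.

    t1: grandmaster sends Sync      t2: endpoint receives Sync
    t3: endpoint sends Delay_Req    t4: grandmaster receives Delay_Req
    Assumes the delay is the same in both directions.
    """
    offset_from_master = ((t2 - t1) - (t4 - t3)) / 2  # endpoint clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset_from_master, mean_path_delay

# Invented numbers: the endpoint clock is 5 us ahead and the path takes 50 us each way
offset, delay = ptp_offset_and_delay(t1=0.0, t2=55e-6, t3=100e-6, t4=145e-6)
print(offset, delay)  # 5e-06 and 5e-05 seconds
```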

Gerard briefly gives an overview of Boundary Clocks, which act as secondary time sources, taking pressure off the main grandmaster(s) so they don’t have to deal with thousands of delay requests. They also solve a problem with the jitter of signals passing through switches, as it’s usually the switch itself which acts as the boundary clock. Alternatively, Transparent Clock switches simply pass on the PTP messages but update the correction field to account for how long each message took to travel through the switch. Gerard recommends only using one type in a single system.
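
To make the transparent clock idea concrete, here is a tiny Python sketch of that residence-time correction. Real switches do this in hardware on the 64-bit correctionField of the PTP header (expressed in nanoseconds scaled by 2^16); the dictionary-based message here is invented purely for illustration.

```python
SCALE = 1 << 16  # the PTP correctionField is in nanoseconds scaled by 2**16

def forward_through_transparent_clock(msg, ingress_ns, egress_ns):
    """Add this switch's residence time to a (hypothetical) PTP message dict."""
    residence_ns = egress_ns - ingress_ns
    msg["correction_field"] += residence_ns * SCALE
    return msg

# A Sync message spends 3.2 microseconds queued inside the switch
sync = {"type": "Sync", "correction_field": 0}
forward_through_transparent_clock(sync, ingress_ns=1_000_000, egress_ns=1_003_200)
print(sync["correction_field"] / SCALE)  # 3200.0 ns of residence time recorded
```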

Referring back to Bruce’s opening, Gerard highlights the need to monitor the PTP system. Black and burst timing didn’t need monitoring: as long as the main clock was happy, the DAs downstream just did their job and occasionally needed replacing. PTP, by contrast, is a system with bidirectional communication, and it changes depending on network conditions. Gerard makes a plea to build monitoring into your solution from the start to provide visibility into how it’s working, because as soon as there’s a problem with PTP, major problems can quickly follow. The network switches themselves can provide a lot of telemetry here, showing you delay values and letting you see grandmaster changes.

Gerard’s ‘Lessons Learnt’ list features locking down PTP so only a few ports are actually allowed to provide time information to the network, dealing carefully with audio protocols like Dante which need PTP version 1 domains, and making sure all switches are PTP-aware.

The video finishes with Q&A after a quick summary of SMPTE RP 2059-15 which is aiming to standardise telemetry reporting on PTP and associated information. Questions from the audience include asking how easy it is to do inter-continental PTP, whether the internet is prone to asymmetrical paths and how to deal with PTP in the cloud.

Watch now!
Speakers

Bruce Devlin
Standards Vice President,
SMPTE
Gerard Phillips
Systems Engineer,
Arista
Richard Hoptroff
Founder and CTO,
Hoptroff London Ltd

Video: I know X, what does WebRTC get me?

WebRTC is now a W3C standard providing sub-second peer-to-peer video and audio streaming with NAT traversal. Widely used for video conferencing, its sub-second latency has also been the focus of video streaming companies such as Millicast and Limelight (to name but two) who aim to deliver this otherwise peer-to-peer technology to thousands or millions of people in under a second, enabling interactive video, gamified streams, auctions and ultra-low-latency sports.

Addressing people who use other streaming protocols directly, Pion creator Sean DuBois spoke at SF Video Tech about what WebRTC brings over and above protocols like RTMP, SRT and RIST. At the heart of it, WebRTC, like SRT and RIST, creates a connection over which it can send a variety of data. Whilst we expect media to be sent, file transfer can actually be achieved easily – let’s not forget that the whole of SRT is built upon UDT, which is specifically a file-delivery utility. And where file transfer can be achieved, so can real-time data and metadata transfer.
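
As a rough illustration of that data-carrying ability, the sketch below uses the Python aiortc library (our choice for the example; the talk itself is built around Pion, which is Go) to open a data channel between two in-process peers and push an arbitrary binary payload across it. The channel name, payload and two-second wait are all invented for the demo.

```python
import asyncio
from aiortc import RTCPeerConnection

async def demo():
    # Two in-process peers stand in for a real signalling exchange
    sender, receiver = RTCPeerConnection(), RTCPeerConnection()
    channel = sender.createDataChannel("files")

    @receiver.on("datachannel")
    def on_channel(ch):
        ch.on("message", lambda msg: print("received", len(msg), "bytes"))

    @channel.on("open")
    def on_open():
        channel.send(b"a chunk of a file, or any metadata you like")

    # Hand the SDP offer/answer across directly instead of via a signalling server
    await sender.setLocalDescription(await sender.createOffer())
    await receiver.setRemoteDescription(sender.localDescription)
    await receiver.setLocalDescription(await receiver.createAnswer())
    await sender.setRemoteDescription(receiver.localDescription)

    await asyncio.sleep(2)  # give ICE/DTLS/SCTP time to complete
    await sender.close()
    await receiver.close()

asyncio.run(demo())
```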

Sean quickly summarises WebRTC as a protocol between (typically) browsers: a secure peer-to-peer connection over which multiple audio and video streams can flow. In common with RIST and other recent protocols, it’s based on many pre-existing technologies such as SRTP, DTLS, ICE and SDP to deliver signalling, connection management, encryption and communication.

The list of improvements over RTMP is very long. They’re spelt out concisely in the video, so we will highlight just a few here. Importantly, low latency is key. RTMP was low-latency for its time, but not by today’s standards; Google’s Stadia can boast 125ms from keypress to video, explains Sean. DTLS and SRTP are essential for security but are well-understood, trusted methods of securing your data. DTLS is pretty much exactly the same as the TLS which secures your bank transfers, just moved onto UDP instead of TCP. However, WebRTC can work by exchanging ‘fingerprints’ (DTLS-SRTP) instead of the full trusted-certificate infrastructure that underpins TLS on the web. Removing the requirement for certs is a big boost for flexibility and agility, as long as you are confident you can exchange fingerprints securely ahead of time.
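
To see what those fingerprints look like, the short sketch below (again using aiortc, which is our assumption rather than anything from the talk) prints the a=fingerprint line a peer embeds in its SDP; this is what the far end checks against, instead of a CA-signed certificate.

```python
import asyncio
from aiortc import RTCPeerConnection

async def show_fingerprint():
    pc = RTCPeerConnection()
    pc.createDataChannel("data")   # any media section will carry a fingerprint
    await pc.setLocalDescription(await pc.createOffer())

    # The self-signed certificate's hash travels inside the SDP itself
    for line in pc.localDescription.sdp.splitlines():
        if line.startswith("a=fingerprint"):
            print(line)            # e.g. a=fingerprint:sha-256 AB:CD:...
    await pc.close()

asyncio.run(show_fingerprint())
```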

NAT traversal is also a big boon: even with both endpoints behind a firewall, they can always find a way to communicate, although this does mean that ICE servers are needed to facilitate connectivity. Within broadcasting, however, it’s more likely that you’ll have control of one end, so this is less of an issue. Sean also highlights the ability to send multiple quality levels within the same stream using WebRTC’s ‘simulcast’ capability.

Sean then looks at SRT and RIST. Both are low-latency streaming protocols which can also provide sub-second streaming over good connections with a relatively low RTT. Sean highlights their inability to negotiate the codec in use and the fact that their security is optional. Being focused more on delivering contribution feeds, they tend to have a more static configuration, often created after a programme of testing to ensure the quality will be acceptable to the broadcaster or streaming provider.

To finish, Sean highlights a whole series of interesting, innovative uses of WebRTC from informal group streaming to drones to shared online games to file transfers and more.

Watch now!
Speaker

Sean DuBois
Developer, Apple
Creator of Pion WebRTC

Video: AES67/ST 2110-30 over WAN

Dealing with professional audio, it’s difficult to escape AES67 particularly as it’s embedded within the SMPTE ST 2110-30 standard. Now, with remote workflows prevalent, moving AES67 over the internet/WAN is needed more and more. This talk brings the good news that it’s certainly possible, but not without some challenges.

Speaking at the SMPTE Technical Conference, Nicolas Sturmel from Merging Technologies outlines the work being done within the AES SC-02-12M working group to define the best ways of working to enable easy use of AES67 on the WAN. He starts by outlining the fact that AES67 was written to expect short links on a private network that you can completely control, which causes problems when using the WAN/internet with long-distance links on which your bandwidth or choice of protocols can be limited.

Before anything else, Nicolas urges anyone to check whether they actually need AES67 over the WAN. Only if you need precise timing (for lip-sync, for example) with PCM quality and latencies from 250ms down to as little as 5 milliseconds do you really need AES67 rather than other protocols such as ACIP, he explains. The problem is that any ping on the internet, even to somewhere fairly close, can easily take 16 to 40ms for the round trip. That means you’re guaranteed around 8ms of one-way delay, but any one packet could be as late as 20ms; the spread between the two is the Packet Delay Variation (PDV).
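
The arithmetic is easy to sketch: halving a set of measured round-trip times (assuming the path is roughly symmetric, which it may not be) gives the one-way delay range and the packet delay variation. The RTT samples below are invented.

```python
# Hypothetical round-trip times (ms) from pinging the far site
rtt_samples_ms = [16.4, 19.8, 25.1, 33.0, 40.2]

min_one_way_ms = min(rtt_samples_ms) / 2   # ~8 ms: the delay you are "guaranteed"
max_one_way_ms = max(rtt_samples_ms) / 2   # ~20 ms: the worst packet observed
pdv_ms = max_one_way_ms - min_one_way_ms   # packet delay variation

print(f"one-way delay {min_one_way_ms:.1f}-{max_one_way_ms:.1f} ms, PDV {pdv_ms:.1f} ms")
```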

Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links where the delay in one direction may differ from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over the WAN can be done and is one way to deliver a service, but using a GPS receiver at each location is a much better solution, hampered only by cost and one’s ability to see enough of the sky.

The internet can lose packets. Given a few hours, the internet will nearly always lose packets. To get around this problem, Nicolas looks at using FEC, whereby you constantly send redundant data. FEC can send up to around 25% extra data so that, if any is lost, the extra information can be used to work out the missing values and reconstruct the stream. Whilst this is a solid approach, computing the FEC adds delay and the extra data adds a fixed uplift to your bandwidth requirement. For circuits that have very few issues this can seem wasteful, but a fixed percentage can also be an advantage on circuits where a predictable bitrate is much more important. Nicolas also highlights that RIST, SRT or ST 2022-7 are other methods that can work well. He talks about these at greater length in his talk with Andreas Hildebrand.
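
As a toy illustration of the principle (not the actual FEC scheme a real AES67-over-WAN deployment would use), a single XOR parity packet per group of four media packets gives roughly the 25% overhead mentioned above and lets you rebuild any one lost packet in the group:

```python
def xor_parity(packets):
    """Build one parity packet by XORing a group of equal-length packets together."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover_one_loss(packets, parity):
    """Rebuild a single missing packet (marked None) by XORing the survivors with the parity."""
    missing = [i for i, p in enumerate(packets) if p is None]
    if len(missing) == 1:
        survivors = [p for p in packets if p is not None]
        packets[missing[0]] = xor_parity(survivors + [parity])
    return packets

# One parity packet per four media packets: roughly 25% bandwidth overhead
group = [bytes([i]) * 8 for i in range(4)]
parity = xor_parity(group)
group[2] = None                            # simulate a packet lost in transit
print(recover_one_loss(group, parity)[2])  # the lost packet is reconstructed
```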

Watch now!
Speaker

Nicolas Sturmel
Product Manager, Senior Technologist,
Merging Technologies

From WebRTC to RTMP

Continuing our look at the most popular videos of 2020, in common with the previous post on SRT, today we look at replacing RTMP for ingest. This time, WebRTC is demonstrated as an option. With sub-second latency, WebRTC is a compelling replacement for RTMP.

Read what we said about it the first time in the original article, but you’ll see that Nick Chadwick from Mux takes us through how RTMP works and where the gaps are as it’s phased out. He steps through the alternatives, showing how even the low-latency delivery formats don’t fit the bill for contribution, and demonstrates how WebRTC can be a sub-second solution.

RIST and SRT saw significant and continued growth in use throughout 2020 as delivery formats and appear to be more commonly used than WebRTC, though that’s not to say that WebRTC isn’t continuing to grow within the broadcast community. SRT and RIST are both designed for contribution in that they actively manage packet loss, allow any codec to be used and provide for other data to be sent, too. Overall, this tends to give them the edge, particularly for hardware products, but WebRTC’s wide availability on computers can be a bonus in some circumstances. Have a listen and come to your own conclusion.

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux