Video: Introduction to Precision Time Protocol (PTP)

As we’ve seen in so many videos, PTP is fundamental to large-scale SMPTE ST 2110 and pro-audio installations. On The Broadcast Knowledge we’ve looked at a wide range of talks on PTP covering architecture, scaling, and how it fits into the broadcast industry. Few of these break down PTP into its fundamentals like the video covered in today’s article, from Cisco’s Albert Mitchell.

The key to a PTP network is having one grandmaster clock which can provide time for the rest of the network. In this article, the clocks running in the end devices are called ‘ordinary clocks’. Whilst there are ways to avoid using PTP with uncompressed video such as ST 2110, for live, studio-style productions where you will be bringing sources together in a video mixer or similar, keeping latency effectively at zero is important and frame syncs on every input of the mixer are discouraged. A grandmaster clock, usually fed by GPS time, can provide the timing the whole network needs to make this work.

SMPTE’s ST 2110 suite has built itself on the timing mechanism of PTP in the form of IEEE 1588. The SMPTE ST 2059 standards suite provides a method to accommodate all legacy reference and media signals using IEEE 1588 Precision Time Protocol (PTP), delivered over an IP network.


Albert moves on to how this all works. He keeps it simple, explaining that two measurements are needed to get the timing right: how long it takes to get a message from the grandmaster to the clock, and how long it takes to get a message from the clock to the grandmaster. If the grandmaster sends a message with the time in it, it’s trivial for the ordinary clock to look at the time when the message arrived and work out how long it took. The reverse also works: an ordinary clock can put the time into a message and send it to the grandmaster, which looks at the current time and replies saying how long the transmission delay was. The ordinary clock averages these two measurements and can use the result, together with the time from the grandmaster, to correct its own clock.
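
The exchange Albert describes can be sketched in a few lines. This is a simplified version of the standard PTP two-way calculation (ignoring the correction field); all names and numbers are illustrative, not from the video.

```python
# Simplified PTP two-way time transfer: the grandmaster stamps t1 into a Sync
# message; the ordinary clock notes its arrival (t2), sends a Delay_Req (t3),
# and the grandmaster reports the arrival time (t4).

def ptp_offset(t1, t2, t3, t4):
    """Estimate path delay and the ordinary clock's offset from the grandmaster."""
    # Mean path delay: the average of the two one-way measurements,
    # assuming the path is symmetric.
    delay = ((t2 - t1) + (t4 - t3)) / 2
    # Offset of the local clock relative to the grandmaster.
    offset = ((t2 - t1) - (t4 - t3)) / 2
    return delay, offset

# Example: the local clock runs 5 units fast and the true one-way delay is 3.
delay, offset = ptp_offset(t1=100, t2=108, t3=110, t4=108)
print(delay, offset)  # 3.0 5.0
```

Note that any asymmetry between the two directions ends up as error in the offset, which is why PTP networks care so much about symmetric paths.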

Albert finishes by explaining that if there are other switches between the grandmaster and the ordinary clock, those switches should identify the ‘residence time’ and add this extra delay, the time spent simply passing through the switch, to the timing message. Changes in network delay due to congestion or path changes are the reason this timing calculation happens once a second.
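
The residence-time mechanism amounts to a running correction field: each transparent switch adds the time a message spent inside it, and the ordinary clock subtracts the total before computing delay. A minimal sketch, with illustrative names and numbers:

```python
# Sketch of residence-time handling: switches accumulate the time a Sync
# message spends inside them into a correction value; the receiving clock
# removes it so only genuine link delay remains.

def corrected_sync_delay(t1, t2, correction):
    """One-way Sync delay with accumulated switch residence time removed.

    t1: grandmaster's Sync timestamp
    t2: arrival time at the ordinary clock
    correction: total residence time added by transparent switches en route
    """
    return (t2 - t1) - correction

# The message took 10 units door-to-door, 4 of which were spent in switches.
print(corrected_sync_delay(t1=0, t2=10, correction=4))  # 6
```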

Watch now!
Speakers

Albert Mitchell
Technical Marketing Engineer,
Cisco

Video: Best Practices for End-to-End Workflow and Server-Side Ad Insertion Monitoring

This video from the Streaming Video Alliance, presented at Mile High Video 2020, looks at the results of recent projects documenting best practices for two important activities: server-side ad insertion (SSAI) and end-to-end (E2E) workflow monitoring. First off is E2E monitoring, which defines a multi-faceted approach to making sure you’re delivering good-quality content well.

This part of the talk is given by Christopher Kulbakas, who introduces us to the document published by the Streaming Video Alliance covering monitoring best practices. The advice centres on three principles: creating a framework, deciding on metrics, and correlation. Christopher explains the importance of monitoring video quality after a transcode or encode, since it’s easy to take a sea of green on your transport-layer dashboard as an indication that viewers are happy. If your encode looks bad, viewers won’t be happy just because the DASH segments were delivered impeccably.

The guidance helps you monitor your workflow. ‘End to end’ doesn’t mean the whole delivery chain; rather, it’s about ensuring the part you are responsible for is adequately monitored.

Christopher unveils the principles behind the modular monitoring across the workflow and tech stack:
1) Establish monitoring scope
Clearly delineate your responsibility from that of other parties. Define exactly how and to what standard data will be handled between the parties.

2) Partition workflow with monitoring points
Now your scope is clear, you can select monitoring points before and after key components such as the transcoder.

3) Decompose tech stack
Here, think of each point in the workflow to be monitored as a slice through a stack of technology. There will be a content layer needing perceptual quality monitoring, a Quality of Service (QoS) layer, and auxiliary layers such as player events, logs and APIs which can be monitored.

4) Describe Methodology
This stage calls for documenting the what, where, how and why of your choices, for instance explaining that you would like to check the manifest and chunks on the output of the packager. You’d do this with HTTP GET requests for the manifest and chunks for all rungs of the ladder. Once finished, you will have a whole set of reasoned monitoring points which you can document and share with third parties.

5) Correlate results
The last stage is bringing together this data, typically by using an asset identifier. This way, all alarms for an asset can be grouped together and understood as a whole workflow.
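
The correlation stage is essentially a group-by over an asset identifier. A minimal sketch of the idea, with illustrative alarm fields and monitoring-point names (nothing here comes from the Alliance document itself):

```python
# Sketch of step 5: alarms raised at different monitoring points carry a
# shared asset identifier, so one asset's problems can be read together
# as a whole-workflow picture rather than isolated alerts.
from collections import defaultdict

alarms = [
    {"asset_id": "ep-1042", "point": "packager-output", "alarm": "manifest 404"},
    {"asset_id": "ep-1042", "point": "transcoder-output", "alarm": "low VQ score"},
    {"asset_id": "ep-0007", "point": "origin", "alarm": "slow chunk delivery"},
]

by_asset = defaultdict(list)
for a in alarms:
    by_asset[a["asset_id"]].append((a["point"], a["alarm"]))

for asset, events in by_asset.items():
    print(asset, events)
```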

End-to-End Server-Side Ad Monitoring

The last part of this talk is from Mourad Kioumgi from Sky, who walks us through a common scenario and how to avoid it. An ad buyer complains their ad didn’t make it to air. Asking every point in the chain, everyone checks their own logs and reports that their function was working, from the schedulers to the broadcast team inserting the SCTE markers. The reality is that if you can’t get to the bottom of this, you’ll lose money as you lose business and give refunds.

The Streaming Video Alliance considered how to address this through better monitoring and are creating a blueprint and architecture to monitor SSAI systems.

Mourad outlines these possible issues that can be found in SSAI systems:
1) Duration of content is different to the ad duration
2) Chunks/manifest are not available or poorly hosted
3) The SCTE marker fails to reach downstream systems
4) Ad campaigns are not fulfilled despite being scheduled
5) Ad splicing components fail to create personalised manifests
6) Over-compression of the advert

Problems 2, 3, 5 and 6 can be caught by the proposed monitoring, which revolves around adding the Creative ID and Ad ID into the manifest file. This way problems can be correlated, which particularly improves the telemetry back from the player: it can deliver a problem report and specify which asset was affected. Other monitoring probes are added to watch the manifests and run automatic audio and video quality metrics. Sky successfully implemented this as a proof of concept, with two vendors working together, resulting in a much better overview of their system.
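
To see why carrying identifiers in the manifest helps, here is a sketch of pulling ad identifiers out of an HLS manifest so a player-side error report can name the exact ad affected. The `X-AD-ID`/`X-CREATIVE-ID` attribute names are purely illustrative; real SSAI systems use their own conventions.

```python
# Sketch: extract ad identifiers embedded in an HLS manifest so player
# telemetry can be correlated with a specific creative. Attribute names
# are hypothetical, not from any standard or the talk.
import re

manifest = """#EXTM3U
#EXT-X-DATERANGE:ID="break-1",X-AD-ID="ad-123",X-CREATIVE-ID="cr-456"
#EXTINF:6.0,
ad-segment-1.ts
"""

ids = dict(re.findall(r'X-(AD-ID|CREATIVE-ID)="([^"]+)"', manifest))
print(ids)  # {'AD-ID': 'ad-123', 'CREATIVE-ID': 'cr-456'}
```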

Mourad finishes his talk looking at the future: creating an ad monitoring framework and distributing an agreed framework document for best practices.

Watch now!
Speakers

Christopher Kulbakas
Project Lead, Senior Systems Designer, Media Technology & Infrastructure,
CBC/Radio Canada
Mourad Kioumgi
VOD Solutions Architect,
Sky

Video: Time and timing at VidTrans21

Timing is both everything and nothing. Although much fuss is made of timing, often it’s not important; but when it is important, it can be absolutely critical. Helping us navigate the broadcast chain’s varying dependence on a centrally coordinated time source is Nevion’s Andy Rayner in this talk at the VSF’s VidTrans21. When it comes down to it, you need time for coordination. In the 1840s, the UK introduced ‘railway time’, bringing each station’s clock into line with GMT to coordinate people and trains.

For broadcast, working with multiple signals in a low-latency workflow, such as in a vision or audio mixer, is when we’re most likely to need synchronisation. Andy shows us some of the original television technology where the camera had to be directly synchronised to the display. This is the era our timing architecture comes from, built on by analogue video and RF transmission systems whose components relied on the timing of those earlier in the chain. Andy brings us into the digital world, reminding us of the ever-useful blanking areas of the video raster which we packed with non-video data. Now, as many people move to SMPTE’s ST 2110, there is still a timing legacy: some devices still generate data with gaps where the video blanking would have been, even though 2110 has no blanking. This means we have to have timing modes for gapped and linear delivery of video.
In ST 2110 every packet is marked with a reduced-resolution timestamp derived from PTP, the Precision Time Protocol (see all our PTP articles). This allows highly accurate alignment of essences when bringing them together, as even a slight offset between audio streams can create comb filtering and destroy the sound. The idea of the PTP timestamp is to stamp the time the source was acquired. But Andy laments that in ST 2110 it’s hard to keep this timestamp, since interim functions (e.g. graphics generators) may restamp the PTP, breaking the association.
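
The ‘reduced resolution’ part is the mapping from full PTP time into the 32-bit RTP timestamp field. Following the approach of SMPTE ST 2059-1, the timestamp is the PTP time since epoch multiplied by the media clock rate, truncated to 32 bits; this sketch uses the 90 kHz video clock:

```python
# Sketch: derive a 32-bit RTP timestamp from PTP time (seconds since the
# PTP epoch) at a given media clock rate, per the ST 2059-1 approach.

def rtp_timestamp(ptp_seconds, clock_rate=90_000):
    """Map PTP time to a 32-bit RTP timestamp (wraps roughly every 13 hours at 90 kHz)."""
    return int(ptp_seconds * clock_rate) % 2**32

# Two essences stamped from the same PTP instant get matching timestamps,
# which is what lets a receiver align them.
print(rtp_timestamp(1_600_000_000.0))
```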

Taking a step back, though, there are now delays of up to a minute in delivering content to the home, which underlines that relative timing is what matters most. It’s a lesson learnt many years back when VR/AR was first used in studios, where whole sections of the gallery ran several frames behind the rest of the facility to account for processing delay. Today this is more common, as is remote production, which takes this fixed time offset to the next level. Andy highlights NMOS IS-07, which allows you to timestamp button presses and other tally info, allowing this type of time-offset working to succeed.

The talk finishes with the work of the GCCG Activity Group at the VSF, of which Andy is the co-chair. This group is looking at how to get essences into and out of the cloud. Andy spends some time talking about the tests done to date and the fact that PTP doesn’t exist in the cloud (though it may be available for select customers). In fact, you may have to live with NTP-derived time. Dealing with this is still a lively discussion in progress, and Andy is welcoming participants.

Watch now!
Speakers

Andy Rayner Andy Rayner
Co-Chair, Ground-Cloud-Cloud-Ground Activity Group, VSF
Chief Technologist, Nevion

Video: Fibre Optics in the LAN and Data Centre

Fibres are the lifeblood of the major infrastructure broadcasters have today. But do you remember your SC from your LC connectors? Do you know which cable types are allowed in permanent installations? Did you know you can damage connectors by mating the wrong fibre endings? For some buildings, there’s only one fibre and connector type, making patch cable selection all the easier. However, there are always exceptions, and when it comes to ordering more, do you know what to look out for to get exactly the right ones?

This video from Lowell Vanderpool takes a swift, but comprehensive, look at fibre types, connector types, light budget, ferrule types and SFPs. Delving straight in, Lowell quickly establishes the key differences between single-mode and multi-mode fibre, with the latter using a wider-diameter core. This keeps costs down, but compared to single-mode fibre it can’t transmit as far. Due to their lower cost, multi-mode fibres are common within the data centre, so Lowell takes us through the multimode cable types from the legacy OM1 to the latest OM5.

OM1 cable was rated for 1Gb, but the currently used OM3 and OM4 fibre types can carry 10Gb up to 550m. Multimode fibres are typically colour-coded, with OM3 and OM4 being ‘aqua’. OM5 is the latest cable to be standardised and can support Short Wavelength Division Multiplexing (SWDM), whereby four wavelengths are sent down the same fibre giving an overall bandwidth of 4 × 10Gb = 40GbE. For longer distances, the yellow OS1 and, more recently, OS2 single-mode fibre types will achieve up to 10km.

Lowell explains that whilst 10km is far enough for many inter-building links, the distance quoted is a maximum which excludes the losses incurred as light leaves one fibre and enters another at connection points. Lowell has an excellent graphic showing the overall light ‘budget’: how each connector represents a major drop in signal, and how each interface also reflects small amounts of the signal back up the fibre.
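
The light budget is simple arithmetic: add up the losses, then check what margin remains between transmit power and receiver sensitivity. A sketch using typical published figures (roughly 0.35 dB/km for single-mode at 1310 nm, ~0.3 dB per mated connector pair, ~0.1 dB per splice); these numbers are assumptions for illustration, not values from the video.

```python
# Sketch of an optical link budget: total loss from fibre attenuation,
# connectors and splices, then the remaining margin given transmitter
# power and receiver sensitivity.

def link_loss_db(length_km, connectors, splices,
                 fibre_db_per_km=0.35, connector_db=0.3, splice_db=0.1):
    """Total expected loss in dB along the link."""
    return (length_km * fibre_db_per_km
            + connectors * connector_db
            + splices * splice_db)

# An 8 km run with 2 patch-panel connections and 2 splices:
loss = link_loss_db(8, connectors=2, splices=2)
tx_power_dbm = -3.0        # transmitter launch power
rx_sensitivity_dbm = -14.0  # minimum receivable power
margin = tx_power_dbm - rx_sensitivity_dbm - loss
print(round(loss, 2), round(margin, 2))  # 3.6 7.4
```

A healthy design leaves a few dB of margin for ageing, dirty connectors and repair splices.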

Having dealt with the inside of the cables, Lowell brings up the important topic of the outer jacket. All cables have different options for the outer jacket (for electrical cables, usually called insulation). These outer jackets allow for varying amounts of flexibility, water-tightness and armouring. Sometimes forgotten is that they have also got different properties in the event of fire. Depending on where a cable is, there are different rules on how flame retardant the cable can be. For instance, in the plenum of a room (false ceiling/wall) and a riser there are different requirements than patching between racks. Some areas keeping smoke low is important, in others ensuring fire doesn’t travel between areas is the aim so Lowell cautions us to check the local regulations.

The final part of the video covers connectors, ferrules and SFPs. Connectors come in many types, although, as Lowell points out, LC is most popular in server rooms. LC connectors can come in pairs, locked together and called ‘duplex’, or individually, known as ‘simplex’. Lowell looks at pretty much every type of connector you might encounter, from the legacy metal bayonet and screw connectors (FC, ST) to the low-insertion-loss, capped E-2000 connector for single-mode cables, popular in telco applications. Lowell gives a close look at MTP and MPO connectors, which combine 1×12 or 2×12 fibres into one connector, making for a very high-capacity connection. We also see how the fibres can be broken out individually at the other end into a breakout cassette.

The white, protruding end of a connector is called the ferrule and holds the fibre in its centre. The solid surround is shaped and polished to minimise gaps between the two fibre ends and to fully align the fibres themselves. Any errors will lead to loss of light spilling out of the fibre, or to excessive light bouncing back down the cable. Lowell highlights the existence of angled ferrules, which will cause damage if mated with flat connectors.

The video finishes with a detailed talk through the make-up of an SFP (Small Form-factor Pluggable) transceiver, looking at what is going on inside. We see how the incoming data needs to be serialised, how heat dissipation and optical lanes are handled, and how that affects the cost.

Watch now!
Speaker

Lowell Vanderpool
Technical Trainer,
Lowell Vanderpool YouTube Channel