Video: How to Successfully Commission a SMPTE ST 2059/PTP System

PTP is the beating heart of video- and audio-over-IP installations. It is as critical as black and burst reference, so it pays to get it right. But PTP is a system, not a monolithic signal distributed around the facility. Unlike genlock, it’s a two-way conversation over networked infrastructure and, whilst that brings great benefits, it changes how we deal with it. The system should be monitored at both the ST 2059 layer and the network layer. But before we even get to that point, implementation requires care, particularly as the industry is still in the early phases of developing tools and best practices for project deployments.

Leigh Whitcomb from Imagine Communications has stepped up to bring us his experiences and best practices as part of the Broadcast Engineering and IT Conference at NAB. This talk assumes an existing level of knowledge of PTP. If you would like to start at the beginning, then please look at this talk from Meinberg and this from Tektronix.

Leigh starts by explaining that, typically, the best architecture is to have a red and a blue network. A Grand Master would then sit on both networks and both would be set to lock to GPS. He explains how to deal with prioritisation and how to prevent other devices from becoming Grand Masters. He also explains some of the basic PTP parameter values, such as setting the Announce timeouts. Other good design practices he discusses are where to use Boundary Clocks, avoiding PTP domain numbers 0 and 127, and using QoS and DSCP markings.
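To make the design phase concrete, here is a minimal commissioning-checklist sketch in Python. The parameter names and values are illustrative assumptions loosely based on common SMPTE ST 2059-2 deployments, not Leigh’s recommendations; your own design document and vendor manuals are the source of truth.

```python
# Illustrative commissioning checklist for PTP design parameters.
# The values below are assumptions based on common SMPTE ST 2059-2
# deployments; substitute the values from your own system design.

DESIGN = {
    "domainNumber": 100,          # avoid 0 and 127, as the talk suggests
    "priority1": 128,             # GMs set lower (e.g. 10), everything else higher
    "priority2": 128,
    "logAnnounceInterval": -2,    # 4 Announce messages per second
    "logSyncInterval": -3,        # 8 Sync messages per second
    "logMinDelayReqInterval": -3, # 8 Delay Requests per second
    "announceReceiptTimeout": 3,  # counted in Announce intervals, not seconds
    "dscp": 46,                   # EF marking for PTP traffic
}

def check_device(device_config: dict) -> list[str]:
    """Return a list of parameters that differ from the design values."""
    return [
        f"{key}: expected {expected}, found {device_config.get(key)}"
        for key, expected in DESIGN.items()
        if device_config.get(key) != expected
    ]

if __name__ == "__main__":
    # A hypothetical readout from one device's API or GUI.
    reported = {"domainNumber": 127, "priority1": 128, "announceReceiptTimeout": 3}
    for problem in check_device(reported):
        print("MISMATCH -", problem)
```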

As part of the commissioning piece, Leigh goes through some frequently-seen problems, such as locking up slowly due to an incorrect Delay Request setting or the Grand Master announce rate being the same as the timeout. To understand when your system isn’t working properly, Leigh makes the point that it’s vital to understand in detail how you expect the system to behave. Use checklists to ensure all parameters and configuration have been applied correctly, but also to verify the PTP packets themselves as they leave the GM. Leigh then highlights checklists for other parts of the network, such as the switches and Media Nodes.
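As a small worked example of the announce-rate-versus-timeout trap, the snippet below converts the timeout, which is counted in Announce intervals rather than seconds, into wall-clock time. The figures are illustrative defaults, not values from the talk.

```python
# Sanity check on Announce timing: the receipt timeout is counted in
# Announce intervals, so a timeout of 1 means a single late or lost
# Announce is enough to trigger a changeover.

def announce_timeout_seconds(log_announce_interval: int, receipt_timeout: int) -> float:
    """Convert the timeout (in Announce intervals) to seconds."""
    return receipt_timeout * 2 ** log_announce_interval

log_announce_interval = -2   # 4 Announce messages per second (illustrative)
receipt_timeout = 3          # IEEE 1588 default count

timeout_s = announce_timeout_seconds(log_announce_interval, receipt_timeout)
print(f"Receiver declares the GM lost after {timeout_s:.2f} s without an Announce")

if receipt_timeout <= 1:
    print("Warning: timeout equals the Announce rate - one lost message causes a failover")
```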

There are a number of tools available for fault finding and checking compliance. As part of commissioning, the first port of call is the device’s GUI and API, which will obviously give most of the parameters needed but will often go further and help with fault finding. Wireshark can help verify the fields in the packets, along with the timing and message rates, whilst Meinberg’s Track Hound is a free program which allows you to verify the PTP protocol and Grand Masters. The EBU LIST project also covers PTP/ST 2059. Helpfully, Leigh talks through how to use Wireshark to verify fields and message rates.
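If you would rather script the message-rate check than eyeball it in Wireshark, a rough stand-in is to join the default PTP multicast group and count message types for a second. This is an illustrative sketch only: it assumes the standard IPv4 multicast address and UDP ports, needs the privileges to bind to ports below 1024, and is no substitute for a proper capture.

```python
# Count PTP message types seen on the default IPv4 multicast group for one second.
import socket, struct, time
from collections import Counter

PTP_GROUP = "224.0.1.129"          # default IEEE 1588 IPv4 multicast group
PORTS = (319, 320)                 # event and general message ports
MESSAGE_TYPES = {0: "Sync", 1: "Delay_Req", 8: "Follow_Up",
                 9: "Delay_Resp", 11: "Announce"}

def open_socket(port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))             # binding here needs elevated privileges
    mreq = struct.pack("4sl", socket.inet_aton(PTP_GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    s.setblocking(False)
    return s

sockets = [open_socket(p) for p in PORTS]
counts = Counter()
deadline = time.time() + 1.0       # sample for one second
while time.time() < deadline:
    for s in sockets:
        try:
            data, _ = s.recvfrom(2048)
        except BlockingIOError:
            continue
        msg_type = data[0] & 0x0F  # low nibble of the first PTP header octet
        counts[MESSAGE_TYPES.get(msg_type, f"type {msg_type}")] += 1

for name, n in sorted(counts.items()):
    print(f"{name}: {n} messages per second")
```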

In terms of testing, Leigh suggests running a packet capture (PCAP) for 48 hours after commissioning to catch any issues. He then highlights the need for redundancy testing. This is where understanding how you intend the network to work is important, as redundancy testing should be combined with network testing where you deliberately pull down part of your network and check that the GMs change over as intended. This changeover is managed by the Best Master Clock Algorithm (BMCA). When troubleshooting, you should use your monitoring system to help you visualise what’s happening; a good system should let you see the devices on the network and their status. Many companies will also want to test how the system recovers from a full failure, as this represents the maximum traffic load on the PTP system.
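For a feel for how that changeover is decided, here is a much-simplified sketch of the BMCA’s dataset comparison: candidates are ranked field by field in the order IEEE 1588 defines, lowest value winning at each step. The clock identities and attribute values are invented for illustration, and real implementations also consider steps removed and topology.

```python
# Simplified Best Master Clock Algorithm: dataset comparison only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    priority1: int
    clock_class: int       # e.g. 6 = locked to GPS, 248 = default free-running
    clock_accuracy: int
    variance: int          # offsetScaledLogVariance
    priority2: int
    clock_identity: str    # final tie-break

    def sort_key(self):
        # Fields compared in order; lower values win at every step.
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.variance, self.priority2, self.clock_identity)

candidates = [
    Candidate("GM-A (red)",  10,  6,   0x21, 0x4E5D, 10,  "00-0a-00-ff-fe-00-00-01"),
    Candidate("GM-B (blue)", 10,  6,   0x21, 0x4E5D, 20,  "00-0b-00-ff-fe-00-00-02"),
    Candidate("Edge device", 128, 248, 0xFE, 0xFFFF, 128, "00-0c-00-ff-fe-00-00-03"),
]

best = min(candidates, key=Candidate.sort_key)
print("BMCA would elect:", best.name)   # GM-A wins on priority2
```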

How to watch
1) Click on ‘Add to favourites’
2) Register for free – or log in if you are already part of NAB Express
3) You will then see the video on the left of the screen.

Watch now!
Speakers

Leigh Whitcomb
Architect,
Imagine Communications

Video: AES67 & ST 2110 Deeper Dive – The Audio Files

A deeper dive here, in the continuing series of videos looking at AES67, SMPTE ST 2110 and Ravenna. Andreas Hildebrand from ALC NetworX is back to investigate the next level down of how AES67 and ST 2110 operate and how they can be configured. The talk, however, remains accessible throughout and starts with a reminder of what AES67 is and why it exists. This was also covered in his first talk.

After explaining that AES67 was created as a way for multiple audio-over-IP standards to interoperate, Andreas looks at the stack, stepping through it to explain each element. The first topic is timing. He explains that every device on an AES67 network is not only governed by PTP, it also runs its own clock, called the Local Clock. From the Local Clock, the device then derives a Media Clock, which is based on the Local Clock time but is used to create any frequency needed for the media (48 kHz, for instance). Finally, an RTP clock is kept for transmission over the network.
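To put rough numbers on how those clocks relate, here is a small sketch for 48 kHz audio: the Media Clock counts samples elapsed since the PTP epoch, and with a per-stream RTP offset of zero the RTP timestamp is that count truncated to 32 bits. The example time value is illustrative, not from the talk.

```python
# Relationship between PTP time, the Media Clock and the RTP timestamp
# for 48 kHz audio, assuming an RTP offset of zero.
SAMPLE_RATE = 48_000

def media_clock(ptp_time_s: float, sample_rate: int = SAMPLE_RATE) -> int:
    """Samples elapsed since the PTP epoch at the media sample rate."""
    return int(ptp_time_s * sample_rate)

def rtp_timestamp(ptp_time_s: float, offset: int = 0) -> int:
    """RTP timestamp: media clock plus a per-stream offset, modulo 2**32."""
    return (media_clock(ptp_time_s) + offset) % 2**32

now = 1_700_000_000.0      # an example PTP time in seconds
print("Media clock:", media_clock(now))
print("RTP timestamp:", rtp_timestamp(now))
```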

The next item on the stack is encoding. AES67 is based on linear audio, also known as PCM. AES67 ensures that 48 kHz, 16- and 24-bit audio is supported on all devices and allows up to 8 channels per stream. Importantly, Andreas explains the different packet times which are supported, 1 ms being mandatory, which puts 48 samples of 48 kHz audio into each IP packet.
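As a quick worked example of what that packet time means for payload sizes (the figures below are simple arithmetic, not values from the talk):

```python
# Back-of-envelope audio payload sizes for the mandatory 1 ms packet time
# at 48 kHz. L16 and L24 are the 16-bit and 24-bit linear PCM encodings.
SAMPLE_RATE = 48_000
PACKET_TIME_S = 0.001                                   # 1 ms, mandatory in AES67
SAMPLES_PER_PACKET = round(SAMPLE_RATE * PACKET_TIME_S) # 48 samples

for name, bytes_per_sample in (("L16", 2), ("L24", 3)):
    for channels in (1, 2, 8):
        payload = SAMPLES_PER_PACKET * channels * bytes_per_sample
        print(f"{name}, {channels} ch: {payload} bytes of audio per RTP packet")
```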

Next on the stack is SDP, the Session Description Protocol, a simple text description of what’s in the AES67 stream and how it is configured. Then Andreas looks at what Link Offset is and examines its role in determining latency and the types of latency it’s been designed to compensate for. He then talks you through working out what latency setting you need, taking into account the number of switches in the network and your frame size.
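To show what such a description looks like, here is a hypothetical SDP for a two-channel AES67 stream, modelled on typical announcements. The addresses, session name and PTP clock identity are invented, and the few lines of Python simply pull out the fields Andreas discusses.

```python
# A hypothetical AES67 SDP (addresses and clock identity are placeholders),
# followed by a trivial extraction of the interesting attributes.
SDP = """\
v=0
o=- 1311738121 1311738121 IN IP4 192.168.1.10
s=Stage left mics
c=IN IP4 239.69.1.1/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:39-A7-94-FF-FE-07-CB-D0:0
a=mediaclk:direct=0
"""

for line in SDP.splitlines():
    if line.startswith(("a=rtpmap", "a=ptime", "a=ts-refclk", "a=mediaclk")):
        print(line)   # encoding, packet time, PTP reference and RTP offset
```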

SMPTE ST 2110 is the focus for the last part of the talk. This, Andreas explains, is a way of moving typically uncompressed professional media (also known as essences) around a network for live production with very low latency. It sends audio separately from the video and uses AES67 to do so, as defined in ST 2110-30. However, there are some AES67 configurations which ST 2110-30 mandates in order to be compatible, and Andreas explains these. One example is forcing all devices to be slave only; another is setting the RTP clock offset to zero. Andreas finishes the talk by summarising which parts of ST 2110 and AES67 overlap, including discussing the frame sizes supported.
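As a tiny sketch of the kind of check that last point implies, the function below looks at a stream’s SDP and confirms the RTP clock offset is declared as zero. The SDP fragments are illustrative only.

```python
# Check that an SDP declares an RTP clock offset of zero (a=mediaclk:direct=0).
import re

def rtp_offset_is_zero(sdp: str) -> bool:
    """True if the SDP declares an RTP clock offset of zero."""
    match = re.search(r"a=mediaclk:direct=(\d+)", sdp)
    return match is not None and match.group(1) == "0"

# Illustrative fragments; real SDPs carry many more fields.
print(rtp_offset_is_zero("a=mediaclk:direct=0"))       # True - what ST 2110-30 expects
print(rtp_offset_is_zero("a=mediaclk:direct=963214"))  # False - an offset is in use
```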

Watch now!
Download the presentation
Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX GmbH

Video: Building Television Systems in a Time of Multiple Technology Transitions

Major technology transitions can be hard to keep up with, and when you have a project requiring you to decide which one to go with, it can seem unmanageable. This panel, put together by SMPTE New York, gives the view from System Integrators on how to make this work and covers their experience with a wide range of new technologies.

John Turner kicked off by explaining the reasoning for using SDI over SMPTE ST 2110 in some circumstances. For that project, his client had a fixed space so wouldn’t see the benefits of 2110 in terms of expansion. Their workflow already worked well in SDI and, at the time, the costs of 2110 would have been higher. Overall, the project went with SDI, was successful and they are a happy customer. Karl Paulsen agreed that new technology shouldn’t be adopted ‘for the sake of it’ and added that whilst individual products using a new technology may be stable, that’s not certain to be the case when they interoperate within a whole system. This pushes the implementation time up, meaning the incumbent technologies do tend to get chosen when time is at a premium.

Turning to 5G, Karl answered the question “what are the transformational technologies?”. For some applications, for instance back-of-the-camera RF in a stadium, 5G is a major leap compared to microwave packs, but early in a technology’s life, as we are with 5G, it’s a matter of working out where it does and doesn’t work well. In time, it will probably adapt to some of the use cases it wasn’t suited for initially. John Turner highlighted the elements that ATSC 3.0 transforms in a big way: from an RF perspective, its modulation is so much more robust and flexible that it’s able to drive new business models.

John Mailhot’s view on the transformational challenge is ‘the people’. He puts forward the idea that technical constraints such as router size and maximum cable length, to name two examples, embedded themselves into the routines, assumptions and architectures that people embody in their work. With SMPTE ST 2110, most of these constraints are removed, meaning you are much freer to work out the workflows the business wants. The challenge is to have the imagination and fortitude to forge the right workflow without getting paralysed by choice.

“SMPTE ST 2110 is an entire paradigm shift”, John Humphrey

After responding to the moderator’s question on how much turmoil these transitions are causing, Mark Schubin summarises the situation by saying we need to work out which of the technologies is like a fridge (replacing previous technologies), a microwave (used alongside a conventional oven) or an induction cooker (requires a change in cookware, so little adoption). John Humphrey adds that ST 2110 is a technology which viewers don’t notice since the visual quality is the same; HDR is the opposite, so the two need different approaches.

During the last 45 minutes, the panel took questions from the audience covering how to hire talent, the perspective of younger people on technology, programming specifically made for smartphones, ATSC 3.0 implementation, reliability of home internet, PTP and more.

Watch now!
Speakers

Mark Schubin
Consultant & Explainer
John Humphrey
VP, Business Development,
Hitachi Kokusai Electric America Ltd.
Karl Paulsen
CTO,
Diversified
John Turner
Principal Engineer,
Turner Engineering Inc.
John Mailhot
Systems Architect for IP Convergence,
Imagine Communications

Video: How to Build an SRT Streaming Flow from Encoder to Edge

SRT is an enabler for contribution over the internet – whether point to point, or cloud egress/ingress. In recent weeks here on The Broadcast Knowledge we have seen different takes on how SRT, short for Secure Reliable Transport, and RIST can be used, including from Open Broadcast Systems.

Here, Karel Boek, CEO of Raskenlund, a Norwegian streaming consultancy, explains SRT and builds a workflow as a live demo, showing how you can implement it quickly and easily. He starts by explaining where SRT sits and what it’s for. SRT makes contribution over the internet possible because it has a very light-touch way of recovering the missing packets which are inevitable on internet links. Karel covers Haivision’s creation of SRT and the SRT Alliance that has grown out of it, which now boasts 350 members. The protocol being open source – and now an IETF draft – means that a lot of companies have been happy to adopt it. There are frequent plugfests – one has just concluded – where vendors test compatibility with the increasing set of features offered in SRT.

‘Secure’ is the ‘S’ in SRT’s name because the stream can easily be encrypted as part of the protocol. This is an important aspect in enabling sports and enterprise contribution in the cloud, giving the assurance that no one can watch the feed before it gets to its destination.

‘Reliable’ is the key offer of SRT, as the number one problem with the internet and other unmanaged networks is that not all packets get delivered. TCP/IP is a great protocol on which most webpages are delivered; it’s fantastic for file delivery since every single packet gets acknowledged and there really isn’t any way a file can arrive at the other end without being completely intact. Live streams, however, can’t afford the overhead of counting in and counting out every packet, so SRT’s ability to request only the missing packets is very important. It should be noted that this ARQ-style retransmission is also found in Zixi and RIST.
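To illustrate the difference in approach, here is a toy sketch of receiver-driven retransmission – the general idea behind SRT, RIST and Zixi – rather than SRT’s actual wire protocol: the receiver asks only for the sequence numbers it is missing.

```python
# Toy model of NAK-based selective retransmission over a lossy link.
import random

def transmit(packets: dict[int, bytes], loss: float = 0.1) -> dict[int, bytes]:
    """Simulate a lossy link by dropping a fraction of packets."""
    return {seq: data for seq, data in packets.items() if random.random() > loss}

sent = {seq: f"payload {seq}".encode() for seq in range(20)}
received = transmit(sent)

missing = sorted(set(sent) - set(received))          # receiver builds its NAK list
print("NAK for sequence numbers:", missing)

retransmitted = transmit({seq: sent[seq] for seq in missing}, loss=0.0)
received.update(retransmitted)                       # sender resends only those
print("Complete after retransmission:", len(received) == len(sent))
```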

Karel compares SRT with other protocols including RTMP and MPEG-2 Transport Streams, amongst others. He is careful to set apart HLS, MPEG-DASH and WebRTC as ‘last-mile protocols’, differentiating between the protocols content providers use to move video around as part of production and those used for distribution. RTMP’s use is still notable but diminishing, particularly in Europe and the American markets. Within a building, MPEG-TS over UDP is still the best way to deliver; outside of the building, you would want to protect it, at least with FEC or SMPTE 2022-7 or, better, with a protocol such as RIST or SRT. Karel discusses RIST’s Simple Profile which, by design, omits the features he notes are absent; we’ve heard here on The Broadcast Knowledge that these have since been delivered as planned in the Main Profile.

In the final part of the talk, Karel builds, live, an example workflow which combines Wowza and SRTHub end to end. This is a great way of demonstrating how quickly you can create a workflow with SRT. There are plenty of SRT-enabled encoders and senders, which is one measure of the success of the SRT Alliance. Similarly, whilst Haivision’s SRTHub is a useful product which brings things together in the cloud or on-prem, Techex’s MWEdge and Videoflow’s DVG can do similar or more, each with their own advantages.
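If you want to experiment with an SRT hop yourself before reaching for products like these, a minimal sketch is to run an ffmpeg listener and caller, assuming ffmpeg builds with libsrt on both ends. The hostname, port, file names and passphrase below are placeholders, and latency is given in microseconds as ffmpeg’s srt protocol expects.

```python
# Minimal SRT point-to-point sketch using ffmpeg (requires libsrt support).
import subprocess

PASSPHRASE = "change-this-passphrase"   # enables SRT's built-in encryption

receiver = [
    "ffmpeg", "-y", "-i",
    f"srt://0.0.0.0:9000?mode=listener&latency=120000&passphrase={PASSPHRASE}",
    "-c", "copy", "-f", "mpegts", "received.ts",
]

sender = [
    "ffmpeg", "-re", "-i", "input.ts", "-c", "copy", "-f", "mpegts",
    f"srt://receiver.example.com:9000?mode=caller&latency=120000&passphrase={PASSPHRASE}",
]

# Start the listener first, then the caller dials in.
subprocess.Popen(receiver)
subprocess.run(sender)
```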

Overall, the takeaway from this talk from Raskenlund is that internet contribution is a solved problem; it’s now for you to choose how to do it and with whom. To that end, the talk ends with a Q&A from people wondering exactly that.

Watch now!
Speaker

Karel Boek
CEO,
Raskenlund