Video: How to Successfully Commission a SMPTE ST 2059/PTP System

PTP is the beating heart of video- and audio-over-IP installations. It is as critical as black and burst reference, so it pays to get it right. But PTP is a system, not a monolithic signal distributed around the facility. Unlike genlock, it’s a two-way conversation over networked infrastructure and, whilst that brings great benefits, it changes how we deal with it. The system should be monitored at both the ST 2059 layer and the network layer. But before we even get to that point, implementation requires care, particularly as the industry is still in the early phases of developing tools and best practices for project deployments.

Leigh Whitcomb from Imagine Communications has stepped up to bring us his experiences and best practices as part of the Broadcast Engineering and IT Conference at NAB. This talk assumes an existing level of knowledge of PTP. If you would like to start at the beginning, then please look at this talk from Meinberg and this one from Tektronix.

Leigh starts by explaining that, typically, the best architecture is to have a red and a blue network. A grand master would then be on both networks and both would be set to lock to GPS. He explains how to deal with prioritisation and how to prevent other devices from becoming grand masters. He also explains some of the basic PTP parameter values, such as setting the Announce timeouts. Other good design practices he discusses include where to use Boundary Clocks, avoiding PTP domain numbers 0 and 127, and using QoS and DSCP.
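To make the prioritisation concrete, here’s a minimal Python sketch (not from the talk) of the dataset comparison at the heart of the BMCA: IEEE 1588 compares priority1, then clockClass, clockAccuracy, offsetScaledLogVariance and priority2, with clockIdentity as the final tie-breaker, lower values winning at each step. The field values below are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class ClockDataset:
    priority1: int                  # set low on your intended GMs
    clock_class: int                # 6 = locked to GPS, 7 = holdover
    clock_accuracy: int
    offset_scaled_log_variance: int
    priority2: int                  # orders GMs of otherwise equal quality
    clock_identity: bytes           # EUI-64, the final tie-breaker

    def bmca_key(self):
        # Comparison stops at the first field that differs; lower wins.
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.offset_scaled_log_variance, self.priority2,
                self.clock_identity)

def best_master(candidates):
    """Return the dataset that would win the BMCA election."""
    return min(candidates, key=ClockDataset.bmca_key)

gm_red = ClockDataset(128, 6, 0x21, 0x4E5D, 127,
                      bytes.fromhex("0011223344556677"))
gm_blue = ClockDataset(128, 6, 0x21, 0x4E5D, 128,
                       bytes.fromhex("8899AABBCCDDEEFF"))
print(best_master([gm_red, gm_blue]))   # gm_red wins on priority2
```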

As part of the commissioning piece, Leigh goes through some frequently seen problems, such as devices locking slowly due to an incorrect Delay Request setting, or the Grand Master’s announce rate being the same as the timeout. To understand when your system isn’t working properly, Leigh makes the point that it’s vital to understand in detail how you expect the system to behave. Use checklists to ensure all parameters and configuration have been applied correctly, but also to verify the PTP packets themselves leaving the GM. Leigh then highlights checklists for other parts of the network, such as the switches and Media Nodes.
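The announce-rate pitfall is easy to show with arithmetic: the announce receipt timeout is counted in announce intervals, so if the GM is announcing more slowly than listeners expect, they will time out between Announces. A small sketch, assuming IEEE 1588-style parameter names and the commonly quoted ST 2059-2 defaults of four Announces per second and a timeout of three intervals:

```python
def announce_timeout_seconds(log_announce_interval: int,
                             announce_receipt_timeout: int) -> float:
    """Seconds a listener waits before declaring the GM lost."""
    return announce_receipt_timeout * 2.0 ** log_announce_interval

def check(gm_log_interval: int, listener_log_interval: int,
          receipt_timeout: int) -> None:
    gm_gap = 2.0 ** gm_log_interval   # seconds between the GM's Announces
    timeout = announce_timeout_seconds(listener_log_interval, receipt_timeout)
    if gm_gap >= timeout:
        print(f"BAD: GM announces every {gm_gap}s, "
              f"listeners give up after {timeout}s")
    else:
        print(f"OK: {timeout / gm_gap:.0f} Announces expected per timeout window")

check(gm_log_interval=-2, listener_log_interval=-2, receipt_timeout=3)  # OK
check(gm_log_interval=0, listener_log_interval=-2, receipt_timeout=3)   # BAD
```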

There are a number of tools available for fault finding and checking compliance. As part of commissioning, the first port of call is the device’s GUI and API, which will give most of the parameters needed and will often go further and help with fault finding. Wireshark can help verify the fields in the packets, the timing and the message rates, whilst Meinberg’s Track Hound is a free program which allows you to verify the PTP protocol and Grand Masters. The EBU LIST project also covers PTP/ST 2059. Helpfully, Leigh talks through how to use Wireshark to verify fields and message rates.
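If you’d rather script the message-rate check than eyeball it in Wireshark, the sketch below (not from the talk) counts PTP message types in a capture using scapy, assuming PTP over UDP/IPv4 on ports 319/320 as is typical in ST 2059 systems; the capture file name is hypothetical.

```python
# The PTP messageType is the low nibble of the first byte of the header.
from collections import Counter
from scapy.all import rdpcap, UDP

PTP_TYPES = {0x0: "Sync", 0x1: "Delay_Req", 0x8: "Follow_Up",
             0x9: "Delay_Resp", 0xB: "Announce", 0xC: "Signaling",
             0xD: "Management"}

packets = rdpcap("ptp_commissioning.pcap")   # hypothetical file name
counts, t_first, t_last = Counter(), None, None

for pkt in packets:
    if UDP in pkt and pkt[UDP].dport in (319, 320):
        payload = bytes(pkt[UDP].payload)
        if payload:
            counts[PTP_TYPES.get(payload[0] & 0x0F, "Other")] += 1
            t_first = t_first if t_first is not None else float(pkt.time)
            t_last = float(pkt.time)

duration = (t_last - t_first) if t_first is not None else 0
for msg, n in counts.most_common():
    rate = n / duration if duration else 0
    print(f"{msg:<12} {n:>8} pkts  ~{rate:.1f}/s")
```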

In terms of testing, Leigh suggests running a packet capture (pcap) for 48 hours after commissioning to catch any issues. He then highlights the need for redundancy testing. This is where understanding how you intend the network to work is important: redundancy testing should be combined with network testing where you deliberately pull down part of your network and check that the GMs change over as intended. This changeover is managed by the Best Master Clock Algorithm (BMCA). When troubleshooting, use your monitoring system to help you visualise what’s happening; a good system should let you see the devices on the network and their status. Many companies will also want to test how the system recovers from a full failure, as this represents the maximum traffic load on the PTP system.
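During a failover test, one practical way to see the BMCA at work is to watch the grandmasterIdentity field in Announce messages change. A rough sketch, again with scapy and a hypothetical capture file, using the IEEE 1588-2008 Announce layout (the 8-byte grandmaster identity starts at byte 53 of the PTP message):

```python
from scapy.all import rdpcap, UDP

last_gm = None
for pkt in rdpcap("failover_test.pcap"):      # hypothetical file name
    if UDP not in pkt or pkt[UDP].dport != 320:
        continue                              # Announce is a general message
    msg = bytes(pkt[UDP].payload)
    if len(msg) >= 61 and (msg[0] & 0x0F) == 0x0B:   # 0xB = Announce
        gm = msg[53:61].hex(":")
        if gm != last_gm:
            print(f"{float(pkt.time):.3f}s  grandmaster now {gm}")
            last_gm = gm
```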

How to watch
1) Click on ‘Add to favourites’
2) Register for free – or log in if you are already part of NAB Express
3) You will then see the video on the left of the screen.

Watch now!
Speakers

Leigh Whitcomb
Architect,
Imagine Communications

Video: The Basics of SMPTE ST 2110 in 60 Minutes

SMPTE ST 2110 is a growing suite of standards detailing uncompressed media transport over networks. Now at 8 documents, it’s far more than just ‘video over IP’. This talk looks at the new ways that video can be transported, deals with PTP timing and creating ‘SDPs’, and takes a thorough look at all the documents.

Building on this talk from Ed Calverley which explains how we can use networks to carry uncompressed video, Wes Simpson goes through all the parts of the ST 2110 suite explaining how they work and interoperate as part of the IP Showcase at NAB 2019.

Wes starts by highlighting the new parts of 2110: the overview document, which gives a high-level view of all the standards documents; the addition of constant bit-rate compressed video carriage; and the recommended practice for splitting a single video and sending it over multiple links, the last two of which are detailed later in the talk.

SMPTE ST 2110 is fundamentally different, as highlighted next, in that it splits up all the separate parts of the signal (i.e. video, audio and metadata) so they can be transferred and processed separately. This is a great advantage in terms of reading metadata without having to ingest large amounts of video, meaning the networking and processing requirements are much lighter than they would otherwise be. However, when essences are separated, putting them back together without any synchronisation issues is tricky.

ST 2110-10 deals with timing: knowing which packets of one essence are associated with which packets of another essence at any particular point in time. It does this with PTP, which is detailed in IEEE 1588 and in SMPTE ST 2059-2. Two standards are needed because the IEEE defined how to derive and carry timing over the network, and SMPTE then detailed how to match PTP times to the phases of media. Wes highlights that care is needed when using PTP with AES67, as the audio standard requires specific timing parameters.
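To see how that matching works in practice, here’s a minimal sketch of the relationship ST 2110-10 relies on: media clocks (90 kHz for video, typically 48 kHz for audio) are counted from the PTP epoch, and an RTP timestamp is that count modulo 2^32. The example time below is arbitrary.

```python
from fractions import Fraction

def rtp_timestamp(ptp_seconds: Fraction, media_clock_hz: int) -> int:
    """RTP timestamp for a PTP time given in seconds since the epoch."""
    return int(ptp_seconds * media_clock_hz) % 2**32

t = Fraction(1_600_000_000) + Fraction(1, 1000)   # an arbitrary PTP time
print(rtp_timestamp(t, 90_000))   # video: 90 kHz media clock
print(rtp_timestamp(t, 48_000))   # audio: 48 kHz media clock
```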

The next section moves into the video portion of 2110, dealing with video encapsulation on the network, pixel groupings and the headers needed for the packets. Wes then spends some time walking us through calculating the bitrate of a stream. Whilst for most people a look-up table of standard formats would suffice, understanding how to calculate the throughput develops a very good understanding of how 2110 is carried on the wire, as you have to take note not only of the video itself (4:2:2 10-bit, for instance) but also the pixel groupings and the UDP, RTP and IP headers.
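As a rough illustration of the kind of calculation Wes walks through, the sketch below estimates the wire bitrate of a 4:2:2 10-bit stream. The 5-byte/2-pixel pgroup and the IP/UDP/RTP header sizes are standard; the ~1400-byte payload per packet is an assumption, and Ethernet framing overhead is ignored for simplicity.

```python
def st2110_20_bitrate(width: int, height: int, fps: float,
                      payload_bytes: int = 1400) -> float:
    PGROUP_BYTES, PGROUP_PIXELS = 5, 2        # 4:2:2 10-bit sampling
    IP, UDP, RTP = 20, 8, 12                  # header bytes per packet

    video_bytes_per_frame = width * height // PGROUP_PIXELS * PGROUP_BYTES
    packets_per_frame = -(-video_bytes_per_frame // payload_bytes)  # ceil
    wire_bytes_per_frame = (video_bytes_per_frame
                            + packets_per_frame * (IP + UDP + RTP))
    return wire_bytes_per_frame * 8 * fps     # bits per second

rate = st2110_20_bitrate(1920, 1080, 60000 / 1001)   # 1080p59.94
print(f"{rate / 1e9:.3f} Gbit/s")                    # roughly 2.5 Gbit/s
```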

Timing of packets on the wire isn’t anything new, as it also matters for compressed applications, but it is just as important here to ensure that packets are sent properly paced on the wire. That is to say, if you need to send 10 packets, you send them one at a time with equal time between them, not all at once right next to each other. Such ‘micro bursting’ can cause problems not only for the receiver, which then needs more buffering, but also, when mixed with other streams on the network, it can affect the efficiency of the routers and switches, leading to jitter and possibly dropped packets. 2110-21 sets out the timing and pacing requirements for the whole 2110 suite.
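Here’s a toy example of what ‘properly paced’ means in numbers, using the roughly 3,700 packets per frame from the bitrate sketch above; the ‘linear’ and ‘gapped’ sender models referred to in the comments are defined in ST 2110-21.

```python
def packet_spacing_us(fps: float, packets_per_frame: int) -> float:
    """Even inter-packet gap if packets are spread over the whole frame
    (ST 2110-21's 'linear' sender model; the 'gapped' model squeezes
    them into the active video time instead)."""
    frame_period_us = 1e6 / fps
    return frame_period_us / packets_per_frame

gap = packet_spacing_us(60000 / 1001, 3703)
print(f"one packet every {gap:.2f} us")   # ~4.5 us between packets
```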

Referring back to his earlier warning about timing and AES67, Wes now goes into detail on the 2110-30 standard, which describes the carriage of audio in these uncompressed workflows. He explains how the sample rates and packet times relate to the number of audio channels a stream can carry, with some configurations allowing 64 channels in one stream rather than the typical 8.
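The arithmetic behind that channel count is simple enough to sketch: a packet carries sample_rate × packet_time samples per channel, and the whole payload has to fit in a standard ~1500-byte MTU. The ~1400-byte payload budget and 24-bit samples below are assumptions, and ST 2110-30’s conformance levels cap the real figure at 64 channels.

```python
def max_channels(sample_rate: int, packet_time_s: float,
                 bytes_per_sample: int = 3,
                 payload_budget: int = 1400) -> int:
    """Upper bound on channels per stream for a given packet time."""
    samples_per_channel = round(sample_rate * packet_time_s)
    return payload_budget // (samples_per_channel * bytes_per_sample)

print(max_channels(48_000, 0.001))      # 1 ms packets: 48 samples -> 9 ch
print(max_channels(48_000, 0.000125))   # 125 us packets: 6 samples -> 77 ch
                                        # (ST 2110-30 caps this at 64)
```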

‘Essences’, rather than ‘media’, is a word often heard when talking about 2110. This is an acknowledgement that metadata is just as important as the media described in 2110. It’s sent separately, as described by 2110-40. Wes explains the way captions/subtitles, ad triggers, timecode and more can be encapsulated in the stream as ancillary (‘ANC’) packets.

2110-22 is an exciting new addition as it enables the use of compressed video such as VC-2 and JPEG XS, ultra-low-latency codecs which can reduce the video stream by half, a quarter or more. As described in this talk, the ability to create workflows on a single IP infrastructure that move seamlessly into and out of compressed video is enabling remote production across countries, with equipment centralised in one place and people and control surfaces elsewhere.

Noted as ‘forthcoming’ by Wes, but having since been published, is RP 2110-23, which adds back a feature that was lost in migrating from 2022-6 to 2110 – the ability to send a UHD feed as 4x HD feeds. This can be useful where UHD is the production format but multiviewers only need to work in HD for monitoring. Wes explains the different modes available. The talk finishes by looking at RTP timestamps and SDPs.

Watch now!
The slides for this talk are available here
Speakers

Wes Simpson
President,
Telecom Product Consulting

Video: ATSC 3.0 – What You Need to Know

ATSC 3.0 is the next sea change in North American broadcasting, shared with South Korea, Mexico and other locations. Depending on your viewpoint, this could be as fundamental as the move to digital in lockstep with the move to HD programming all those years ago. ATSC 3.0 takes terrestrial broadcasting into the IP world, enabling traditional broadcast to be mixed with internet-based video, entertainment and services as part of one seamless experience.

ATSC 3.0 is gaining traction in the US and some other countries as a way to deliver digital video within a single traditional broadcast channel – and with the 3.0 version, this actually moves to broadcasting IP packets over the air.

Now ready for deployment, ATSC 3.0 is at a turning point in the US. With a number of successful trials under its belt, it’s time for the real deployments to start. This panel discussion from TV Technology looks at the groups of stations working together to deploy the standard.

The ‘Transition Guide‘ document is one of the first topics this video tackles. With a minimum of technical detail, this document explains how ATSC 3.0 is intended to work in terms of spectrum, regulatory matters and its technical features and makeup. We then get a chance to see the ‘NextGenTV’ logo, released in September for equipment which is confirmed compliant with ATSC 3.0.

ATSC 3.0 is a suite of standards and work is still ongoing. There are 27 standards completed or in progress, ranging from the basic system itself to captions and signalling. A lot of work is going into replicating features of the current broadcast system, such as full implementation of the Emergency Alert System (EAS) and similar elements.

It’s well known that Phoenix, Arizona is a test bed for ATSC 3.0, and next we hear an update from the group of 12 stations participating in the adoption of the standard, sharing experiences and results with the industry. We see that they are carrying out trial broadcasts at the moment and will be moving into further testing, including with SFNs (Single Frequency Networks), come 2020. We then see an example timeline showing an estimated 8-12 months needed to launch a market.

The video approaches its end by looking at case studies from WKAR and ARK Multicasting, answering questions such as when next-gen audio will be available, the benefit of SFNs and how the standard would work with 5G, plus a look at deploying immersive audio.

Watch now!
Speakers

Pete Sockett
Director of Engineering & Operations,
WRAL-TV, Raleigh
Mark Aitken
Senior VP of Advanced Technology, Sinclair Broadcast Group
President of ONE Media 3.0
Dave Folsom
Consultant,
Pearl TV
Lynn Claudy
Chairman of the ATSC board
Senior VP, Technology at NAB
Tom Butts
Content Director,
TV Technology

Video: ATSC 3.0

“OTT over the air” – ATSC 3.0 deployment has started in the US and the standard is already on air in Korea. It promises to bring interactivity and ‘internet-style’ services to broadcast TV, as well as allowing ‘TV’ to transition to mobile devices. To help understand what ATSC 3.0 enables, NABShow Live brings together Sinclair’s Mark Aitken, Bill Hayes from Iowa Public Television and SMPTE’s Thomas Bause Mason, all of whom are deeply involved in the development of ATSC 3.0.

The panelists dive into what ATSC 1.0 was and how we got to 3.0, outlining the big things that have changed. One key change is that broadcasters can now choose how robust the stream is, balanced against bandwidth. Not only that, but multiple streams with different robustness levels are possible within the same channel. This allows ATSC 3.0 to be tailored to your market and to support different business models.

ATSC 3.0, as Bill Hayes says, was ‘built to evolve’ and to deal with new standards as they come along, and he was at pains to point out that all these advancements came without any extra spectrum allocations. Thomas outlined that not only is SMPTE on the board of ATSC, but the broadcast standards upstream of distribution now need to work and communicate with those downstream. HDR, for instance, needs metadata, and the carriage of that metadata is covered by one of the standards SMPTE has formed. As Mark Aitken says, ‘the lines are blurring’, with devices at the beginning and the end of the chain both being responsible for correct results on the TV.

The session ends by asking what the response has been from broadcasters. Are they embracing the standard? After all, they are not obliged to use ATSC 3.0.
Thomas says that interest has picked up and that large and small networks are now showing more interest, with 50 broadcasters already committed to it.

Watch now!
Speakers

Thomas Bause Mason
Director Standards Development,
SMPTE
Bill Hayes
Director of Engineering & Technology,
Iowa Public Television
Mark Aitken
SVP of Advanced Technology,
Sinclair Broadcast Group
Linda Rosner
Managing Director,
Artisans PR