When we look at the parts of our workflows that work well, we usually find standards underneath. SDI is pretty much a solved problem and has been delivering video since before the 90s, albeit with improving reliability as time has gone on. MPEG Transport Streams are another great example of a standard that has achieved widespread interoperability. These are just two examples given by John Mailhot from Imagine Communications as he outlines the standards which have built the broadcast industry into what it is today, or perhaps into what it was in 2005. By looking at past successes, John seeks to describe the work that the industry should be doing now and into the future as technology and workflows evolve apace.
John’s point is that, in the past, we had some wildly successful standards in video and video transport. For logging, we relied on IT-based standards like SNMP and Syslog, and for control protocols the wild west was still in force, with some de facto standards such as Pro-Bel’s SW-P-08 router protocol and the TSL UMD protocol dominating their niches.
The industry is now undergoing a number of transformations simultaneously. We are adopting IP-based transport, both compressed and uncompressed (though John quickly points out that SDI is still perfectly viable for many). We are moving many workloads to the cloud, and we are slowly starting to increase our supported resolutions along with moving some production to HDR. All of this work, to be successful, should be based on standards, John says. And there are successes in there, such as AMWA’s NMOS specifications, which are the first multi-vendor, industry-wide control protocol. Technically NMOS is not a standard but, in this case, the effect is much the same. John feels that the growth of our industry depends on us standardising more control protocols in the future.
John spends some time looking at how the moves to IP, UHD, HDR and cloud have played into the live production and linear playout parts of the broadcast chain. Live production, as we’ve heard previously, is starting to embrace IP now, lagging behind playout deployments. Playout, conversely, usually lags production in UHD and HDR support, since it’s more important to acquire video in UHD and HDR now, even if you can’t yet transmit it, in order to maximise its long-term value.
John finishes by pointing out that Moore’s law’s continuation may not be so clear in CPUs, but it’s certainly in effect within optics and network switches and routers. Over the last decade, switches have gone from 10 gig to 50, to 100 and now to 400 gig. This long-term cost reduction should be baked into the long-term planning of companies embarking on an IP transformation project.
An industry-wide move to any new technology takes time and there is a steady flow of people new to the technology. This video is a launchpad for anyone just coming into IP infrastructures, whether because their company is starting or completing an IP project or because people are starting to ask the question, “Should we go IP too?”
Keycode Media’s Steve Dupaix starts with an overview of how SMPTE’s suite of standards called ST 2110 differs from other IP-based video and audio technologies such as NDI, SRT, RIST and Dante. The key takeaways are that NDI provides compressed video with a low delay of around 100ms, along with a suite of free tools to help you get started. SRT and RIST are similar technologies that are usually used to get AVC or HEVC video from A to B while recovering from packet loss, something that neither NDI nor ST 2110 protects against without FEC. This is because SRT and RIST are aimed at moving data over lossy networks like the internet. Find out more about SRT in this SMPTE video. For more on NDI, this video from SMPTE and VizRT gives the detail.
ST 2110’s purpose is to get high-quality, usually lossless, video and audio around a local area network. It was originally envisaged as a way of displacing baseband SDI and was specced to work flawlessly in live production environments such as a studio. It brings with it some advantages, chief among them the separation of essences, i.e. video, audio, timing and ancillary data travel as separate streams. It also brings the promise of higher density for routing operations, lower-cost infrastructure, since the routers and switches are standard IT products, and increased flexibility due to the much-reduced need to move or add cables.
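To get a feel for why lossless ST 2110 video demands serious network capacity, a back-of-envelope calculation helps. The sketch below is illustrative arithmetic only (active-picture payload; RTP/UDP/IP overhead adds a few percent on top), assuming the common 10-bit 4:2:2 sampling, which averages 20 bits per pixel.

```python
# Back-of-envelope payload bitrate for uncompressed video as carried by
# ST 2110-20. Active picture only; packet overhead is not included.

def uncompressed_bitrate_gbps(width: int, height: int, fps: float,
                              bits_per_pixel: float = 20.0) -> float:
    """bits_per_pixel = 20 for 10-bit 4:2:2 (10 bits luma + 10 bits
    chroma on average per pixel)."""
    return width * height * bits_per_pixel * fps / 1e9

hd = uncompressed_bitrate_gbps(1920, 1080, 60)   # ~2.49 Gb/s
uhd = uncompressed_bitrate_gbps(3840, 2160, 60)  # ~9.95 Gb/s
print(f"1080p60: {hd:.2f} Gb/s, 2160p60: {uhd:.2f} Gb/s")
```

A single 2160p60 stream nearly fills a 10 gig port on its own, which is why the higher-capacity switching discussed later matters so much.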
Robert Erickson from Grass Valley explains that they have worked hard to move all of their product lines to ‘native IP’ as they believe all workflows will move to IP, whether on-premise or in the cloud. The next step, he sees, is enabling more workflows that move video in and out of the cloud, and for that they need to move to JPEG XS, which can be carried in ST 2110-22. Thomas Edwards from AWS adds their perspective, agreeing that customers are increasingly using JPEG XS for this purpose, but within the cloud they expect the new CDI, a specification for moving high-bandwidth traffic, such as ST 2110-20 streams of uncompressed video, from point to point within the cloud.
John Mailhot from Imagine Communications is also the chair of the VSF activity group for ground-cloud-cloud-ground. This aims to harmonise the ways in which vendors provide movement of media, at whatever bandwidth, into and out of the cloud, as well as from point to point within it. From the Imagine side, he says that ST 2110 is now embedded in all products, but the key is to choose the most appropriate transport. Within AWS, CDI is often the most appropriate transport, and he agrees that JPEG XS is the most appropriate for cloud<->ground operations.
The panel takes a moment to look at the way that the pandemic has impacted the use of video over IP. As we heard earlier this year, the New York Times had been holding off their move to IP, and the pandemic forced them to look at the market earlier than planned. When they looked, they found the products they needed and moved to a full IP workflow. This has been the theme, and it has driven, and will continue to drive, innovation. The immediate need provided the motivation to consider new workflows, and now that the workflow is IP, it’s quicker, cheaper and easier to test new variations. Thomas Edwards points out that many of the current workflows are heavily reliant on AVC or HEVC despite the desire to use JPEG XS for the broadcast content. For people at home, JPEG XS bandwidths aren’t practical, but RIST with AVC works fine for most applications.
Interoperability between vendors has long been the focus of the industry for ST 2110 and, in John’s opinion, is now pretty reliable for inter-vendor essence exchanges. Recently the focus has been on doing the same with NMOS, which both he and Robert report is working well in recent multi-vendor projects they have been involved in. John’s interest is working out ways that the cloud and ground can find out about each other, a use case not yet covered by AMWA’s NMOS IS-04.
The video ends with a Q&A covering the following:
Where to start in your transition to IP
What to look for in an ST 2110-capable switch
Multi-Level routing support
Using multicast in AWS
Whether IT equipment lifecycles conflict with Broadcast refresh cycles
What are the options for moving your playout, processing and distribution into the cloud? What will the workflows look like and what are the options for implementing them? This video covers the basics, describes many of the functions available such as subtitle generation and QC, then leads you through to harnessing machine learning.
SMPTE’s New York section has brought together Evan Statton and Liam Morrison from AWS, Alex Emmermann from Telestream, Chris Ziemer & Joe Ashba from Imagine Communications and Rick Phelps from Brklyn Media to share their knowledge on the topic. Rick kicks off proceedings with a look at the principles of moving to the cloud. He speaks about the need to prepare your media before the move by de-duplicating files, getting the structure and naming correct and checking your metadata is accurate. Whilst de-duplicating data reduces your storage costs, another great way to do this is to store in IMF. IMF, the Interoperable Master Format, is related to MXF and stores essences separately. By using pointers to the right media, new versions of files can re-use media from other files, further reducing storage costs.
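The de-duplication pass Rick describes can be as simple as grouping files by a hash of their contents and keeping one copy per group. Here is a minimal sketch; the function name is illustrative, and for real multi-gigabyte media you would hash in chunks rather than read whole files into memory.

```python
import hashlib
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under `root` by the SHA-256 of their contents; any
    group with more than one path is a set of byte-identical duplicates
    you can collapse before paying to store them in the cloud."""
    by_hash: dict[str, list[Path]] = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            # Note: read_bytes() loads the whole file; chunk for big media.
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash.setdefault(digest, []).append(path)
    return {h: ps for h, ps in by_hash.items() if len(ps) > 1}
```

IMF takes the same idea further: instead of deleting duplicates, a new version’s packing list simply points at essence files that already exist.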
Rick finishes by running through workflow examples covering Ingest, Remote Editing using PCoIP, Playout and VoD before running through the pros and cons of Public, Private and Hybrid cloud.
Next on the roster are Chris & Joe, outlining their approach to playout in the cloud. They summarise the context and zoom in to look at linear channels and their Versio product. An important aspect of bringing products to the cloud, explains Joe, is to ensure you optimise the product to take advantage of the cloud. Where a software solution on-prem will use servers running the storage, databases, proxy generation and the like, in the cloud you don’t want to simply set up EC2 instances to run these same services. Rather, you should move your database into AWS’s database service, use AWS storage and use a cloud-provided proxy service. This is when the value is maximised.
Alex leads with his learnings about calculating the benefits of cloud deployment, focusing on the costs surrounding your server. You have to calculate the costs of the router port it’s connected to and the rest of the network infrastructure. Power and aircon are easy to calculate but don’t forget, Alex says, about the costs of renting the space in a data centre and the problems you hit when you have to lease another cage because you have become full. Suddenly an extra server has led to a two-year lease on data centre space. He concludes by outlining Telestream’s approach to delivering transcode, QC, playback and stream monitoring in their Telestream Cloud offering.
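Alex’s point about the cage boundary can be made concrete with a small model. All of the figures below are made-up assumptions for illustration, not quoted prices; the shape of the function is the point: cost climbs smoothly per server until a cage fills, then jumps by a whole new lease.

```python
def datacentre_cost(num_servers: int,
                    servers_per_cage: int = 40,       # assumed capacity
                    cage_lease_per_year: float = 60000.0,  # assumed $/yr
                    per_server_cost: float = 6700.0   # hardware share,
                    ) -> float:                       # router port, power/aircon
    """Yearly on-prem cost: per-server running costs plus whole-cage
    leases. One server past a cage boundary commits you to a new cage."""
    cages = -(-num_servers // servers_per_cage)  # ceiling division
    return num_servers * per_server_cost + cages * cage_lease_per_year

# Server 41 costs far more than server 40 did:
step = datacentre_cost(41) - datacentre_cost(40)  # per-server cost + a full cage
```

That step change is exactly the hidden cost that pay-as-you-go cloud pricing avoids.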
Evan Statton talks about the reasons that AWS created CDI and merged the encoding stages for DTH transmission and OTT into one step. These steps came from customers’ wishes to simplify cloud workflows or match their on-prem experiences. JPEG XS, for instance, is there to ensure that ultra-low-latency video can flow in and out of AWS, with CDI allowing almost-zero-delay, uncompressed video to flow within the cloud. Evan then looks through a number of workflows: playout, stadium connectivity, station entitlement and ATSC 3.0.
Liam’s presentation on machine learning in the cloud is the last of this section meeting. Liam focuses his comments and demos on machine learning for video processing. He explains how ML fits under the Artificial Intelligence banner and looks at where the research sector is heading. Machine learning is well suited to the cloud because of the need for big GPU-heavy servers to manage large datasets and high levels of compute. The G4 series of EC2 instances is singled out as the machine learning choice.
Liam shows demos of super resolution and frame interpolation, the latter being used to generate slow-motion clips, increase the framerate of videos, improve the smoothness of stop-motion animations and more. Bringing this together, he finishes by showing some 4K 60fps videos created from ancient black and white film clips.
The extensive Q&A looks at a wide range of topics:
The need for operational change management since, however closely your cloud workflows match what your staff are used to, there will be issues adjusting to the differences.
How to budget due to the ‘transactional’ nature of AWS cloud microservices
Problems migrating TV infrastructure to the cloud
How big the variables are between different workflow designs
When designing cloud workflows, what are the main causes of latency? When fighting latency what are the trade-offs?
How long ML models for upconverting or transcoding take to finish training
PTP is foundational for SMPTE ST 2110 systems. It provides the accurate timing needed to make the most out of almost-zero-latency professional video systems. In the strictest sense, some ST 2110 workflows can work without PTP where they’re not combining signals, but for live production this is almost never the case. This is why a lot of time and effort goes into getting PTP right from the outset: making it work perfectly from the beginning gives you the bedrock on which to build your most valuable infrastructure.
In this video, Gerard Phillips from Arista, Leigh Whitcomb from Imagine Communications and Telestream’s Mike Waidson join forces to run down their top 15 best practices for building a PTP infrastructure you can rely on.
Gerard kicks off underlining the importance of PTP but with the reassuring message that if you ‘bake it in’ to your underlying network, with PTP-aware equipment that can support the scale you need, you’ll have the timing system you need. Thinking of scale is important as PTP is a bi-directional protocol. That is, it’s not like the black and burst and tri-level sync that it replaces, which are simply waterfall signals. Each endpoint needs to speak to a clock, so understanding how many devices you’ll have, and where, is important to consider. For a look at PTP itself, rather than best practices, have a look at this talk (free registration required) or this video with Meinberg.
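The bi-directional nature Gerard mentions comes from PTP’s basic exchange: four timestamps taken as a Sync message travels master-to-slave and a Delay_Req travels back. The standard arithmetic, sketched below, assumes the path is symmetric; any asymmetry shows up directly as offset error, which is one reason network design matters so much.

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Classic PTP two-step arithmetic.
    t1: Sync sent by master, t2: Sync received by slave,
    t3: Delay_Req sent by slave, t4: Delay_Req received by master.
    Returns (offset of slave clock vs master, mean path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# A slave 1.5s fast over a 3.5s path reproduces the captured timestamps:
offset, delay = ptp_offset_and_delay(0.0, 5.0, 10.0, 12.0)
```

Every endpoint performing this exchange against a clock is what generates the return traffic that the grandmaster, or the boundary clocks below, must absorb.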
Gerard’s best practices advice continues as he recommends using a routed network, meaning having multiple layer 2 networks with layer 3 routing between them. This reduces the broadcast domain size which, in turn, increases stability and resilience. JT-NM TR-1001 can help in deployments using this network architecture. Gerard next cautions about layer 2 IGMP snoopers and queriers, which should exist on every VLAN. As the multicast traffic is flooded to the snooping querier in layer 2, it’s important to consider traffic flows.
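The IGMP membership that snoopers track is what a receiver signals when it joins a stream’s multicast group. As a minimal sketch of what happens at an endpoint, here is the standard-library way to join a group in Python; the group address and port are illustrative, and the kernel emits the actual IGMP report when the `IP_ADD_MEMBERSHIP` option is set.

```python
import socket

def build_membership_request(group: str, interface: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq structure: 4-byte multicast group address followed
    by the 4-byte local interface address, as IP_ADD_MEMBERSHIP expects."""
    return socket.inet_aton(group) + socket.inet_aton(interface)

def join_group(sock: socket.socket, group: str, port: int) -> None:
    """Bind and join; the OS sends the IGMP report the snooper sees."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    build_membership_request(group))
```

Each join like this is state the snooping querier must hold, which is why flow planning per VLAN matters at scale.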
When Gerard says PTP should be ‘baked in’, it’s partly boundary clocks he’s referring to. Use them ‘everywhere you can’ is the advice, as they bring simplicity to your design and allow for easier debugging. Part of the simplicity they bring is in helping scalability: they shed load from your GM, taking the brunt of the bi-directional traffic, and they can reduce load on the endpoints too.
It’s long been known that audio devices, for instance older versions of Dante before v4.2, use version one of PTP, which isn’t compatible with SMPTE ST 2059’s requirement to use PTP v2. Gerard says that, if necessary, you should buy a version 1 to version 2 converter from your audio vendor to join the v1 island to your v2 infrastructure. This is linked to best practice point 6: all GMs must have the same time. Mike makes the point that all GMs should be locked to GPS and that, if you have multiple sites, they should each have an active, GPS-locked GM, even if they also send PTP to each other over a WAN, since the WAN is likely to deliver less accurate timing even if it is useful as a backup.
Even if you are using physically separate networks for your ST 2022-7 main and backup ST 2110 traffic, it’s important that the two GMs share the same time, so a link between the two networks just for PTP traffic should be established.
The next 3 points of advice are about the ongoing stability of the network. Firstly, ST 2059-2 specifies the use of TLV messages as part of a mechanism for media nodes to generate drop-frame timecode. Whilst this may not be needed on day one, if you have it running and show your PTP system works well with it on, there shouldn’t be any surprises in a couple of years when you need to introduce an endpoint that will use it. Similarly, the advice is to give your PTP domain a number which isn’t a SMPTE or AES default, for the sole reason that if a device which hasn’t been fully configured ever joins your network and is still on defaults, it will join your PTP domain and could disrupt it. If part of the configuration of a new endpoint is changing the domain number, the chances of this are notably reduced. One example of a configuration item which could affect the network is ‘ptp role master’, which stops a boundary clock from taking part in the BMCA and prevents unauthorised endpoints taking over.
Gerard lays out the ways in which to do ‘proper commissioning’, which is how you verify, at the beginning, that your PTP network is working well, meaning you have designed and built your system correctly. Unfortunately, PTP can appear to be working properly when in reality it is not, whether due to design, the way your devices are acting, configuration or simply bugs. To account for this, Gerard advocates separate checklists for GMs, switches and media nodes, with a list of items to check…and this will be a long list. Commissioning should include monitoring the PTP traffic, and taking a packet capture, for a couple of days for analysis with test and measurement gear or simply Wireshark.
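One simple check you might run over such a capture is verifying that Sync messages actually arrive at the expected rate with low jitter (the SMPTE ST 2059-2 profile defaults to eight Sync messages per second, i.e. 0.125 s apart). This sketch is illustrative; the function names and the 30% tolerance are assumptions, and in practice you would extract the arrival times from a pcap with your analysis tool of choice.

```python
from statistics import mean, pstdev

def sync_interval_stats(arrival_times: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of inter-arrival gaps (seconds)
    for PTP Sync messages pulled from a packet capture."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return mean(gaps), pstdev(gaps)

def rate_ok(arrival_times: list[float],
            expected_gap: float = 0.125,   # 8 Sync/s, SMPTE profile default
            tolerance: float = 0.3) -> bool:
    """Flag a clock whose average Sync spacing drifts from the profile."""
    avg, _jitter = sync_interval_stats(arrival_times)
    return abs(avg - expected_gap) / expected_gap < tolerance
```

Scripted checks like this make the long commissioning checklist repeatable, which pays off again later when Leigh’s redundancy and power-up tests are re-run.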
Leigh finishes up the video talking about verifying functionality during redundancy switches and on power-up. Commissioning is your chance to characterise the behaviour of the system in these transitory states and to observe how attached equipment is affected. His last point before summarising is to implement a PTP monitoring solution to capture the critical parameters and to detect changes in the system. SMPTE RP 2059-15 will define parameters to monitor, with the aim that monitoring will provide consistent metrics across vendors. Also, a new version of IEEE 1588, version 2.1, will add monitoring features that should aid in actively monitoring the timing in your ST 2110 system.
Views and opinions expressed on this website are those of the author(s) and do not necessarily reflect those of SMPTE or SMPTE Members.
This website is presented for informational purposes only. Any reference to specific companies, products or services does not represent promotion, recommendation, or endorsement by SMPTE.