GPI was not without its complexities, but the simplicity of its function – putting a short or a voltage on a wire – is unmatched by any other system we use in broadcasting. So the question is: how do we do ‘GPI’ over IP, given all the complexity, and perceived delay, of networked communication? CTO of Pebble Beach, Miroslav Jeras, is here to explain.
The key to understanding the power of IS-07, the new NMOS specification for GPI, is to realise that it’s not trying to emulate DC electronics. Rather, by adding the timing information available from the PTP clock, a GPI trigger can now be extremely accurate – down to the audio sample – meaning you can use GPI to signal much more detailed situations. On top of that, GPI messages can carry a number of different data types, which expands what they can convey and also helps interoperability between systems.
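To make this concrete, here is a sketch of the kind of timed, typed event message IS-07 describes – a boolean state change stamped with a TAI timestamp. The field names follow my reading of the IS-07 event grammar and the UUIDs are hypothetical, so treat this as illustrative rather than normative:

```python
import json
import time

def is07_boolean_state(source_id: str, flow_id: str, value: bool) -> dict:
    """Sketch of an IS-07-style boolean event message (illustrative only)."""
    # IS-07 timestamps are TAI, written as "<seconds>:<nanoseconds>".
    # Here we approximate TAI from system time with an assumed 37s offset.
    now = time.time() + 37
    secs = int(now)
    nsecs = int((now - secs) * 1e9)
    return {
        "identity": {"source_id": source_id, "flow_id": flow_id},
        "event_type": "boolean",
        "timing": {"creation_timestamp": f"{secs}:{nsecs}"},
        "payload": {"value": value},
        "message_type": "state",
    }

# Hypothetical UUIDs for a 'tally on' event
msg = is07_boolean_state(
    "9f463872-9621-4939-aa3a-dc3c82d8578b",
    "68cc2c13-6e61-44d5-9c14-66dd3bf0e34b",
    True,
)
print(json.dumps(msg, indent=2))
```

Because the message carries its own timestamp, the receiver can act on the event at the right moment rather than at the moment of arrival – which is what frees IS-07 from worrying about network delay.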
Miroslav explains the ways in which these messages are passed over the network and how IS-07 interacts with other specifications such as IS-05 and BCP-002-01. He explains how IS-07 was used in the tpc project in Zurich, then takes us through a range of examples of how IS-07 can be used, including synchronisation of GUIs and monitoring, as well as routing based on GPI.
Like all good ideas, remote production is certainly not new. Known in the US as REMIs (REmote INtegrations) and in Europe as Remote Productions, producing live events without sending people there has long been seen as something to which most broadcasters have aspired. We’re now at a tipping point of available techniques, codecs and bandwidth which is making large-scale remote production practical and, indeed, common.
Carl Petch took to the podium at the IBC 2019 IP Showcase to explain, through three case studies including the Pyeongchang 2018 Winter Olympics, how telco Telstra have been deploying remote production solutions and the technology behind them. Highlighting TICO, uncompressed SMPTE ST 2022-6 and VC-2 compression, previously known as the BBC’s Dirac, we see how codecs are vital in underpinning successful, low-latency remote production.
Encoding and decoding delay aren’t the only delays to consider: the simple propagation time for a signal to travel from one place on Earth to another also has to be accounted for, including the lengths of your different paths. Carl takes us through a table of real-world measurements between a range of locations showing up to 280ms of one-way delay.
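As a rough illustration (my own figures, not Carl’s): light in optical fibre travels at about two-thirds of the speed of light in a vacuum, roughly 200,000 km/s, so a lower bound on one-way delay falls straight out of the path length:

```python
# Propagation-only delay over fibre. Light travels at roughly 2/3 of c
# in glass, i.e. about 200,000 km/s. Route lengths are illustrative.
FIBRE_KM_PER_S = 200_000

def one_way_delay_ms(path_km: float) -> float:
    """Delay for the signal alone; ignores encode/decode and switching."""
    return path_km / FIBRE_KM_PER_S * 1000

# Real routes detour well beyond the great-circle distance.
for route_km in (1_000, 10_000, 40_000):
    print(f"{route_km:>6} km -> {one_way_delay_ms(route_km):6.1f} ms one-way")
```

A 40,000 km round-about route already costs 200ms one-way before any codec or switching delay is added, which puts the up-to-280ms real-world measurements in context.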
Much of the success Telstra has had in delivering these solutions has been anchored in their dedicated remote production network, built on Open Transport Network principles, which allows them to carve up their bandwidth for different protocols and to scale in 100Gb increments. Carl covers this in some detail.
By far the most visited video of 2019 was Merrick Ackermans’ review of RIST’s first release. RIST, the Reliable Internet Stream Transport protocol, aims to be an interoperable protocol allowing even lossy networks to be used for mission-critical broadcast contribution. RIST can turn a bad internet link into a reliable circuit for live programme material, so it’s quite a game changer in terms of the cost of links.
An increasing amount of broadcast video is travelling over the public internet which is currently enabled by SRT, Zixi and other protocols. Here, Merrick Ackermans explains the new RIST specification which aims to allow interoperable internet-based video contribution. RIST, which stands for Reliable Internet Stream Transport, ensures reliable transmission of video and other data over lossy networks. This enables broadcast-grade contribution at a much lower cost as well as a number of other benefits.
Many of the protocols which do a similar job are based on ARQ (Automatic Repeat-reQuest) which, as you can read on Wikipedia, allows for recovery of lost data. This is the core functionality needed to bring unreliable or lossy connections into the realm of the usable for broadcast contribution. Indeed, RIST is an interesting merging of technologies from around the industry. Many people use Zixi, SRT and VideoFlow, all of which can allow safe contribution of media – safe meaning it gets to the other end intact and uncorrupted. However, if your encoder only supports Zixi and you use it to deliver to a decoder which only supports SRT, it’s not going to work out. The industry has accepted that these formats should be reconciled into a shared standard. This is RIST.
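The ARQ idea itself is simple enough to sketch: the receiver tracks sequence numbers, spots the gaps, and asks the sender to retransmit only the missing packets. This is a minimal illustration of that principle, using a made-up lossy link rather than any real RIST packet format:

```python
import random

def transmit(packets, loss_rate=0.3, seed=1):
    """Lossy link: each packet is dropped independently at loss_rate."""
    rng = random.Random(seed)
    return [p for p in packets if rng.random() > loss_rate]

def receive_with_arq(source, loss_rate=0.3):
    """NACK-style ARQ sketch: the receiver spots gaps in the sequence
    numbers and asks the sender to retransmit only those packets."""
    received = {}
    wanted = set(range(len(source)))  # seqs we still need (our NACK list)
    attempt = 0
    while wanted:
        burst = [(seq, source[seq]) for seq in sorted(wanted)]
        for seq, payload in transmit(burst, loss_rate, seed=attempt):
            received[seq] = payload
            wanted.discard(seq)  # gap filled, no longer NACKed
        attempt += 1  # next round resends only what was NACKed
    return [received[s] for s in range(len(source))]

frames = [f"frame-{i}" for i in range(10)]
assert receive_with_arq(frames) == frames  # every frame recovered
```

Real ARQ implementations add timers, bounded retry windows and bandwidth caps, because for live media a packet that arrives after its play-out deadline is no better than a lost one.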
File-based workflows are mainly based on TCP (Transmission Control Protocol) although, notably, some file transfer services such as Aspera are based on UDP, where packet recovery, not unlike RIST, is managed as part of the protocol. Websites, by contrast, transfer all data using TCP, which sends an acknowledgement for each packet that arrives. Whilst this is great for ensuring files are uncorrupted, it can delay arrival times, and for live media a late packet can be as damaging as a lost one.
RIST is being created by the VSF – the Video Services Forum – who were key in introducing TR-03 and TR-04 into the AIMS group, work on which SMPTE ST 2110 was then based. So their move into a specification for reliable transmission of media over the internet has many anticipating great things. At the point this talk was given, only the simple profile had been formed. Whilst Merrick gives the details, it’s worth pointing out that this doesn’t include intrinsic encryption. It can, of course, be delivered over a separately encrypted tunnel, but an intrinsic part of SRT is the security provided from within the protocol.
Despite Zixi, a proprietary solution, and Haivision’s open source SRT being in competition, they are both part of the VSF working group creating RIST along with VideoFlow. This is because they see the benefit of having a widely accepted, interoperable method of exchanging media data. This can’t be achieved by any single company alone but can benefit all players in the market.
This talk remains true for the simple profile, which just aims to recover packets. The main profile, as opposed to ‘simple’, has since been released and you can hear about it in a separate video here. It adds FEC, encryption and other features. Those who are familiar with the basics may wish to start there.
As video infrastructures have converged with enterprise IT, they have started incorporating technologies and methods typical of data centres. First came virtualisation, allowing COTS (Commercial Off The Shelf) components to be used. Then came the move towards cloud computing, taking advantage of economies of scale.
However, these innovations did little to address the dependence on monolithic projects that impeded change and innovation. Early strategies for video over IP were based on virtualised hardware and IP gateway cards. As the digital revolution took place with the emergence of OTT players, container-based microservices were developed, with the aim of shortening the cycle of software updates and enhancements.
Containers insulate application software from the underlying operating system, removing the dependence on specific hardware, and can be enhanced without changing the underlying operational fabric. This provides the foundation for more loosely coupled and distributed microservices, where applications are broken into smaller, independent pieces that can be deployed and managed dynamically.
Modern containerized server software methods such as Docker are very popular in OTT and cloud solutions, but not in SMPTE ST 2110 systems. In the video above, Greg Shay explains why.
Docker can package an application and its dependencies in a virtual container that can run on any Linux server. It uses the resource isolation features of the Linux kernel and a union-capable file system to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. Docker can get more applications running on the same hardware than VMs can, makes it easy for developers to quickly create ready-to-run containerized applications, and makes managing and deploying applications much easier.
However, there is currently a major issue with using Docker for ST 2110 systems: Docker containers do not work with multicast traffic. The root of the problem is the specific way the Linux kernel handles multicast routing. It is possible to wrap a VM around each Docker container just to achieve independent multicast network routing by emulating a full network interface, but this defeats the point of capturing and delivering the behaviour of the containerized product as a self-contained software deliverable.
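To see what’s at stake, this is a sketch of the kind of multicast group join an ST 2110 receiver performs (the group address and port are illustrative). On a host with a working multicast network this would start receiving RTP; inside a default bridged container the join typically goes nowhere:

```python
import socket
import struct

# Sketch of the multicast group join an ST 2110 receiver performs.
# The group address and port are illustrative, not from any real system.
GROUP = "239.1.1.1"
PORT = 5004  # a common RTP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq structure: multicast group address + local interface (INADDR_ANY)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # sock.recvfrom(2048) would now yield packets on a host with working
    # multicast; in a bridged container it simply blocks, because the
    # IGMP join never reaches the physical interface.
except OSError:
    # No multicast-capable interface available in this environment.
    pass
sock.close()
```

The application code is identical either way – which is exactly the trap: the container runs, the socket opens, the join appears to succeed, and no media ever arrives.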
There is a quick-and-dirty partial shortcut – host networking – which enables a container to use all the networking resources of the Docker host machine, but it does not give containers their own IP addresses, nor isolate them so each can use its own ports. You don’t really get a nice structure of ‘multiple products in multiple containers’, which defeats the purpose of containerized software.