Video: The Good and the Ugly – IP Studio Production Case Study

What’s implementing SMPTE ST-2110 like in real life? How would you design your network and what problems would you face? In this case study, Ammar Latif from Cisco Systems presents the architecture, best practices and lessons learned from a live IP broadcast production facility project designed for a major US broadcaster. Based on the SMPTE ST-2110 standard, it spanned five studios and two control rooms. The central part of this project was a dual Spine-Leaf IP fabric with bandwidth equivalent to a 10,000 x 10,000 HD SDI router and a fully non-blocking multicast architecture. The routing system was based on a Grass Valley Convergent broadcast controller and a Cisco DCNM media controller.

As the project was commissioned in 2018, the AMWA IS-04 and IS-05 specifications, which provide an interoperable mechanism for routing media around a SMPTE 2110 network, were not yet available. Multicast flow subscription was based on a combination of the IGMP (Internet Group Management Protocol) and PIM (Protocol Independent Multicast) protocols. While PIM is very efficient and mature, it lacks the ability to use bandwidth as a parameter when setting up a flow path. Ammar explains how Non-Blocking Multicast (NBM), developed by Cisco, brings bandwidth awareness to PIM by signalling the type of data (video, audio or metadata).
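
As a rough illustration of what bandwidth awareness adds over plain PIM, the sketch below is a conceptual model only, not Cisco’s NBM implementation; the Link class and the per-flow bandwidth figures are assumptions for illustration. It admits a flow onto a fabric link only if that link still has headroom for the flow type’s bandwidth.

```python
# Conceptual sketch (not Cisco's implementation) of bandwidth-aware multicast
# admission: each flow type carries an assumed bandwidth profile, and a flow
# is only placed on a link if the link would not be oversubscribed.
FLOW_PROFILES_MBPS = {"video": 1500, "audio": 2, "metadata": 1}  # assumed figures

class Link:
    def __init__(self, capacity_mbps: int):
        self.capacity_mbps = capacity_mbps
        self.reserved_mbps = 0

    def admit(self, flow_type: str) -> bool:
        """Reserve bandwidth for a flow; refuse it if the link would oversubscribe."""
        needed = FLOW_PROFILES_MBPS[flow_type]
        if self.reserved_mbps + needed > self.capacity_mbps:
            return False   # plain PIM, lacking bandwidth awareness, would place it anyway
        self.reserved_mbps += needed
        return True

uplink = Link(capacity_mbps=100_000)                 # e.g. one 100 GbE spine uplink
print(uplink.admit("video"), uplink.reserved_mbps)   # True 1500
```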

The talk continues by discussing PTP distribution and monitoring, SMPTE 2022-7 seamless protection switching and remote site production. Ammar also shows how the user interfaces of the Cisco DCNM media controller were designed, including visualisations of multicast flows, the network topology and the link saturation of ports.

You can find the slides here.

Watch now!

Speaker

Ammar Latif
Principal Architect,
Cisco Systems

Video: Where can SMPTE 2110 and NDI co-exist?

When are two video formats better than one? Broadcasters have long sought ‘best of breed’ systems, matching equipment as closely as possible to their ideal workflow. In this talk we look at getting the best of both compressed, low-latency video and uncompressed video. NDI, a lightly compressed, ultra-low-latency codec, allows full productions in visually lossless video with around a field of latency. SMPTE’s ST-2110 allows full productions with uncompressed video and almost zero latency.

The panel brings together the EBU’s Willem Vermost, who paints a picture from the perspective of public broadcasters planning their moves into the IP realm; Marc Risby from UK distributor and integrator Boxer, who brings a more general view of the market’s interest; and Will Waters, who spent many years at Newtek, the company that invented NDI. From them we hear how the compressed and uncompressed approaches complement each other.

This panel took place just after the announcement that Newtek had been bought by VizRT, the graphics vendor. VizRT sees a lot of benefit in being able to work in both types of workflow, for clients large and small, and has made Newtek its own entity under the VizRT umbrella to ensure continued focus.

A key differentiator of NDI is its focus on 1-gigabit networking. Its aim has always been to enable ‘normal’ companies to deploy IP video easily so they can rapidly gain the benefits that IP workflows bring over SDI and other baseband video technologies. A keystone of this strategy is to enable everything to happen on the standard 1Gbit switches that are prevalent in most companies today. Other key elements of the codec are: a free software development kit, bi-directionality, resolution independence, audio sample-rate agnosticism, tally support, auto discovery and more.
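
The 1Gbit constraint is what makes compression unavoidable. A quick back-of-envelope calculation, a sketch only, assuming 10-bit 4:2:2 1080p60 and ignoring RTP/UDP/IP overhead, shows that uncompressed HD simply does not fit on a gigabit link.

```python
# Back-of-envelope check: can uncompressed HD video fit a 1 GbE link?
# Assumes 1080p60 with 10-bit 4:2:2 sampling, i.e. 20 bits per pixel on
# average (10-bit luma plus half-rate 10-bit chroma), payload only.
WIDTH, HEIGHT, FPS = 1920, 1080, 60
BITS_PER_PIXEL = 20

payload_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
print(f"Uncompressed 1080p60 payload: {payload_bps / 1e9:.2f} Gb/s")  # ~2.49 Gb/s
print(f"Fits on a 1 Gb/s link: {payload_bps < 1e9}")                  # False
```

NDI’s light compression brings an HD stream well under a gigabit so several can share a standard 1Gbit link, whereas ST 2110-20’s uncompressed flows typically need 10GbE or faster.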

In the talk, we discuss the pros and cons of this approach, where interoperability is assured because everyone uses the same receive and transmit code, against having a standard such as SMPTE ST-2110. SMPTE ST-2110 has the benefit of being uncompressed, assuring the broadcaster that they have captured the best possible quality of video, and promises better management at scale, tighter integration into complex workflows, lower latency and the ability to treat the many different essences separately. Whilst we discuss many of the benefits of SMPTE ST-2110, you can get a more detailed overview from this presentation from the IP Showcase.

Watch now!

This panel was produced by IET Media, a technical network within the IET which runs events, talks and webinars for networking and education within the broadcast industry. More information

Speakers

Willem Vermost
Senior IP Media Technology Architect,
EBU
Marc Risby
CTO,
Boxer Group
Will Waters
Vice President Of Worldwide Customer Success,
VizRT
Moderator: Russell Trafford-Jones
Exec Member, IET Media
Manager, Support & Services, Techex
Editor, The Broadcast Knowledge

Video: Wide Area Facilities Interconnect with SMPTE ST 2110

Adoption of SMPTE’s ST 2110 suite of standards for the transport of professional media is increasing, with broadcasters increasingly choosing it for use within their broadcast facilities. Andy Rayner takes the stage at SMPTE 2019 to discuss the work being undertaken to manage using ST 2110 between facilities. To do this, he looks at how to manage the data out of the facility, the potential use of JPEG XS, timing and control.

Long-established practices of path protection and FEC are already catered for, with ST 2022-7 providing seamless path protection and ST 2022-5 providing FEC. New to 2110 is the ability to send the separate essences bundled together in a virtual trunk. This has the benefit of avoiding the streams being split up during transport and hence potentially suffering different delays. It also helps with FEC efficiency and allows transport of other types of traffic.
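
For readers unfamiliar with ST 2022-7 seamless protection, the sketch below shows the core idea in deliberately simplified form (the function and packet tuples are illustrative, not taken from the standard): the same RTP stream is sent over two independent paths, and the receiver keeps the first copy of each sequence number to arrive, so a packet lost on one path is covered by the other.

```python
# Simplified ST 2022-7 style hitless merge: keep the first copy of each RTP
# sequence number seen across both paths. A real receiver also bounds the
# reorder window and handles 16-bit sequence-number wrap-around.
def seamless_merge(packets):
    """packets: (path, rtp_seq, payload) tuples in overall arrival order."""
    seen = set()
    for path, rtp_seq, payload in packets:
        if rtp_seq not in seen:
            seen.add(rtp_seq)
            yield rtp_seq, payload

# Path A loses packet 2; path B's copy fills the gap transparently.
arrivals = [("A", 1, b"x"), ("B", 1, b"x"), ("B", 2, b"y"),
            ("A", 3, b"z"), ("B", 3, b"z")]
print([seq for seq, _ in seamless_merge(arrivals)])   # [1, 2, 3]
```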

Timing is key for ST 2110, which is why it natively uses the Precision Time Protocol (PTP), formalised for use in broadcast under ST 2059. Andy highlights the problem of reconciling timing at the far end, but also the ‘missed opportunity’ that the timing will usually be regenerated, meaning the time of media ingest is lost. This may change over the next year.
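
To see why regeneration loses the capture time, recall how an ST 2110 sender stamps its packets: broadly speaking, under ST 2059-1 the RTP timestamp is the PTP time since the SMPTE epoch counted in media-clock ticks (90 kHz for video) and truncated to 32 bits. The minimal sketch below illustrates that mapping (the function name is ours); once a downstream device restamps with its own PTP-derived count, the original ingest instant cannot be recovered from the stream.

```python
# Minimal illustration of ST 2059-1 / ST 2110-10 RTP timestamping: the media
# clock counts from the SMPTE epoch (aligned to the PTP epoch) and the RTP
# timestamp field carries that count modulo 2^32.
VIDEO_MEDIA_CLOCK_HZ = 90_000

def rtp_timestamp(ptp_seconds_since_epoch: float,
                  clock_hz: int = VIDEO_MEDIA_CLOCK_HZ) -> int:
    """Map PTP time (seconds since the epoch) to a 32-bit RTP timestamp."""
    return int(ptp_seconds_since_epoch * clock_hz) % 2**32

# The 32-bit counter wraps roughly every 13.25 hours at 90 kHz, so the
# timestamp alone cannot recover the absolute capture time once regenerated.
print(rtp_timestamp(1_700_000_000.0))
```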

The creation of ST 2110-22 brings compressed media into ST 2110 for the first time. Andy mentions that JPEG XS can be used – and is already being deployed. Control is the next topic, with Andy focussing on the secure sharing of NMOS IS-04 & IS-05 between facilities, covering registration, control and the security needed.

The talk ends with questions on FEC latency, RIST and the potential downsides of GRE trunking.

Watch now!

Speaker

Andy Rayner
Chief Technologist,
Nevion

Video: Investigating Media Over IP Multicast Hurdles in Containerized Platforms

As video infrastructures have converged with enterprise IT, they have started incorporating technologies and methods typical of data centres. First came virtualisation, allowing COTS (Commercial Off The Shelf) components to be used. Then came the move towards cloud computing, taking advantage of economies of scale.

However, these innovations did little to address the dependence on monolithic projects that impeded change and innovation. Early strategies for video over IP were based on virtualised hardware and IP gateway cards. As the digital revolution took place with the emergence of OTT players, microservices based on containers were developed. The aim was to shorten the cycle of software updates and enhancements.

Containers insulate application software from the underlying operating system, removing the dependence on hardware, and can be enhanced without changing the underlying operational fabric. This provides the foundation for more loosely coupled and distributed microservices, where applications are broken into smaller, independent pieces that can be deployed and managed dynamically.

Modern containerized server software methods such as Docker are very popular in OTT and cloud solutions, but not in SMPTE ST 2110 systems. In the video above, Greg Shay explains why.

Docker can package an application and its dependencies in a virtual container that can run on any Linux server. It uses the resource isolation features of the Linux kernel and a union-capable file system to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. Docker can get more applications running on the same hardware than VMs can, makes it easy for developers to quickly create ready-to-run containerized applications, and makes managing and deploying applications much easier.

However, there is currently a huge issue with using Docker for ST 2110 systems, because Docker containers do not work with multicast traffic. The root of the multicast problem is the way the Linux kernel handles multicast routing. It is possible to wrap a VM around each Docker container just to achieve independent multicast network routing by emulating the full network interface, but this defeats the goal of capturing and delivering the behaviour of the containerized product as a self-contained software deliverable.

There is a quick and dirty partial shortcut which enables a container to connect to all the networking resources of the Docker host machine, but it does not give containers their own IP addresses and does not let them use their own ports in isolation. You don’t really get a nice structure of ‘multiple products in multiple containers’, which defeats the purpose of containerized software.
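
The shortcut described here is, in all likelihood, Docker’s host network mode, where the container shares the host’s network stack. As a hedged illustration, the small receiver below (the multicast group and port are made-up examples) joins a group with a standard IGMP membership request; run inside Docker’s default bridge network the join typically stays behind the docker0 bridge and nothing arrives, whereas run with ‘--network host’ it behaves as it would natively, at the cost of the per-container IP and port isolation mentioned above.

```python
# Minimal multicast receiver (group/port are illustrative only). Under
# Docker's default bridge network the IGMP join below typically never
# reaches the physical network, so no flow arrives; with host networking
# the container shares the host NIC and the join works as it would natively.
import socket
import struct

MCAST_GRP = "239.1.1.1"   # hypothetical ST 2110 essence group
MCAST_PORT = 5004         # hypothetical RTP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Ask the kernel to join the group on the default interface (sends the IGMP report).
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(2048)
    print(f"received {len(data)} bytes from {addr}")
```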

You can see the slides here.

Watch now!

Speaker

Greg Shay
CTO
The Telos Alliance