Video: The Good and the Ugly – IP Studio Production Case Study

What’s implementing SMPTE ST 2110 like in real life? How would you design the network, and what problems should you expect? In this case study, Ammar Latif from Cisco Systems presents the architecture, best practices and lessons learned from a live IP broadcast production facility project designed for a major US broadcaster. Based on the SMPTE ST 2110 suite of standards, it spanned five studios and two control rooms. The centrepiece of the project was a dual spine-leaf IP fabric with bandwidth equivalent to a 10,000 x 10,000 HD-SDI router and a fully non-blocking multicast architecture. The routing system was based on a Grass Valley Convergent broadcast controller and a Cisco DCNM media controller.
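To put that router equivalence in perspective, here is a back-of-the-envelope calculation of the aggregate capacity such a fabric must switch non-blocking. It assumes the nominal 1.485 Gbps rate of one HD-SDI signal; the figures are illustrative, not from the talk:

```python
HD_SDI_GBPS = 1.485   # nominal bit rate of one HD-SDI signal
PORTS = 10_000        # the HD-SDI router equivalence quoted in the talk

aggregate_tbps = HD_SDI_GBPS * PORTS / 1000
print(f"~{aggregate_tbps:.1f} Tbps of non-blocking capacity")  # ~14.9 Tbps
```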

As the project was commissioned in 2018, the AMWA IS-04 and IS-05 specifications, which provide an interoperable mechanism for routing media around an SMPTE ST 2110 network, were not yet available. Multicast flow subscription was therefore based on a combination of the IGMP (Internet Group Management Protocol) and PIM (Protocol Independent Multicast) protocols. While PIM is very efficient and mature, it lacks the ability to use bandwidth as a parameter when setting up a flow path. Ammar explains how Non-Blocking Multicast (NBM), developed by Cisco, brings bandwidth awareness to PIM by signalling the type of data being carried (video, audio or metadata).
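To illustrate the receiver side of that signalling, joining a multicast group is a single socket option; the kernel then sends the IGMP membership report that the network turns into a PIM join towards the source. Below is a minimal Python sketch with a hypothetical group address and port (a real ST 2110 system would take these from SDP files via the broadcast controller):

```python
import socket
import struct

GROUP = "239.1.1.1"   # hypothetical multicast address of a media flow
PORT = 5004           # hypothetical RTP port

# Bind a UDP socket to the flow's port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group triggers the IGMP membership report; PIM then
# builds the multicast tree from the source to this receiver
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, source = sock.recvfrom(2048)  # first RTP packet of the flow
```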

The talk continues with PTP distribution and monitoring, SMPTE ST 2022-7 seamless protection switching and remote site production. Ammar also shows how the user interfaces of the Cisco DCNM media controller were designed, including visualisations of multicast flows, the network topology and the link saturation of ports.

You can find the slides here.

Watch now!

Speaker

Ammar Latif
Principal Architect,
Cisco Systems

Video: A paradigm shift in codec standards – MPEG-5 Part 2 LCEVC

LCEVC (Low Complexity Enhancement Video Coding) is a low-complexity codec in the process of standardisation as MPEG-5 Part 2. Instead of being an entirely new codec, LCEVC improves the detail and sharpness of any base video codec (e.g. AVC, HEVC, AV1, EVC or VVC) while lowering the overall computational complexity, expanding the range of devices that can access high-quality and/or low-bitrate video.

The idea is to run the base codec at a lower resolution and add an additional layer of encoded residuals to correct the artifacts. Detail is encoded with a directional decomposition transform using a very small matrix (2×2 or 4×4), which is efficient at preserving high frequencies. As LCEVC uses parallelized techniques to reconstruct the target resolution, it encodes video faster than a full-resolution base encoder.
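The mechanism can be sketched in a few lines: encode a downscaled picture with the base codec, upscale its reconstruction, and carry the transformed residuals as the enhancement layer. The sketch below is purely conceptual; the actual LCEVC transforms, quantisation and entropy coding are defined in the standard, and `encode_base`/`decode_base` stand in for whatever base codec is used:

```python
import numpy as np

def transform_2x2(block):
    # 2x2 directional decomposition into average, horizontal,
    # vertical and diagonal components (Hadamard-style)
    h = np.array([[1, 1], [1, -1]])
    return h @ block @ h.T

def lcevc_style_encode(frame, encode_base, decode_base):
    # 1. Run the base codec (AVC, HEVC, AV1, ...) at half resolution
    #    (naive decimation here, purely for brevity)
    base_bitstream = encode_base(frame[::2, ::2])
    # 2. Upscale the base reconstruction back to full resolution
    recon = np.kron(decode_base(base_bitstream), np.ones((2, 2)))
    # 3. The enhancement layer carries the transformed residuals;
    #    each 2x2 block can be processed independently, in parallel
    residual = frame.astype(np.int32) - recon.astype(np.int32)
    coeffs = [transform_2x2(residual[y:y + 2, x:x + 2])
              for y in range(0, residual.shape[0], 2)
              for x in range(0, residual.shape[1], 2)]
    return base_bitstream, coeffs  # quantised and entropy-coded in practice
```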

LCEVC allows enhancement layers to be added on top of existing bitstreams, so, for example, UHD resolution can be achieved where only HD was possible before, thanks to decoding being shared between the ASIC and the CPU. LCEVC can be decoded via light software processing, and even via HTML5.

In this presentation, Guido Meardi from V-Nova introduces LCEVC and answers a few important questions, including: is it suitable for very high quality/high-bitrate compression, and will it work with future codecs? He also shows performance data and benchmarks for live and VoD streaming, illustrating the compression quality and encoding complexity benefits achievable with LCEVC as an enhancement to H.264, HEVC and AV1.

Watch now!

Speaker

Guido Meardi
CEO and Co-Founder
V-Nova Ltd.

Video: Investigating Media Over IP Multicast Hurdles in Containerized Platforms

As video infrastructures have converged with enterprise IT, they have started incorporating technologies and methods typical of data centres. First came virtualisation, allowing COTS (commercial off-the-shelf) components to be used. Then came the move towards cloud computing, taking advantage of economies of scale.

However, these innovations did little to address the dependence on monolithic products that impeded change and innovation. Early strategies for video over IP were based on virtualised hardware and IP gateway cards. As the digital revolution took place with the emergence of OTT players, microservices based on containers were developed, with the aim of shortening the cycle of software updates and enhancements.

Containers insulate application software from the underlying operating system, removing the dependence on hardware, and can be enhanced without changing the underlying operational fabric. This provides the foundation for more loosely coupled and distributed microservices, where applications are broken into smaller, independent pieces that can be deployed and managed dynamically.

Modern containerized server software methods such as Docker are very popular in OTT and cloud solutions, but not in SMPTE ST 2110 systems. In the video above, Greg Shay explains why.

Docker can package an application and its dependencies in a virtual container that can run on any Linux server. It uses the resource isolation features of the Linux kernel and a union-capable file system to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. Docker can get more applications running on the same hardware than VMs can, makes it easy for developers to quickly create ready-to-run containerized applications, and makes managing and deploying applications much easier.

However, there is currently a big issue with using Docker for ST 2110 systems: Docker containers do not work with multicast traffic. The root of the problem is the specific way the Linux kernel handles multicast routing. It is possible to wrap a VM around each Docker container just to achieve independent multicast network routing by emulating a full network interface, but this defeats the purpose of capturing and delivering the behaviour of the containerized product in a self-contained software deliverable.

There is a quick and dirty partial shortcut which enables a container to connect to all the networking resources of the Docker host machine, but it does not isolate containers with their own IP addresses, nor let them use their own ports. You don’t really get a nice structure of ‘multiple products in multiple containers’, which defeats the purpose of containerized software.
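The shortcut Greg describes is Docker’s host networking mode, which hands the container the host’s entire network stack so IGMP joins and multicast routing behave as they would natively. Here is a sketch using the docker-py SDK; the image name is a made-up placeholder:

```python
import docker

client = docker.from_env()

# network_mode="host" bypasses Docker's bridge and NAT entirely: the
# container shares the host's interfaces, so multicast works, but every
# container now competes for the same IP address and port space
container = client.containers.run(
    "example/st2110-receiver:latest",  # hypothetical media application image
    network_mode="host",
    detach=True,
)
print(container.status)
```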

You can see the slides here.

Watch now!

Speaker

Greg Shay
CTO
The Telos Alliance

Video: JPEG XS in Action for IP Production

JPEG XS is a new intra-frame compression standard delivering JPEG 2000 quality with around 1000x lower latency: microseconds instead of milliseconds. The codec provides relatively low bandwidth (visually lossless compression at ratios around 10:1) with very low, fixed latency, which makes it ideal for the remote production of live events.
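As a rough feel for the numbers, assuming a 1080p50/60 signal at the 3G-SDI rate of about 2.97 Gbps (the figures are illustrative, not from the talk):

```python
UNCOMPRESSED_GBPS = 2.97   # 3G-SDI rate for a 1080p50/60 signal
RATIO = 10                 # visually lossless JPEG XS compression ratio

compressed_mbps = UNCOMPRESSED_GBPS * 1000 / RATIO
print(f"~{compressed_mbps:.0f} Mbps per camera feed")  # ~297 Mbps
```

At around 300 Mbps per feed, a handful of camera signals fit comfortably on a 10 GbE WAN circuit, which is what makes the remote production model practical.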

In this video, Andy Rayner from Nevion shows how JPEG XS fits into all-IP broadcast technology with the SMPTE ST 2110-22 standard. He then presents the world’s first full JPEG XS deployment for live IP production, created for a large sports broadcaster. It was designed for pan-European WAN operation and based on the ST 2110 standard with ST 2022-7 protection.

Andy discusses the challenges of IP-to-IP processing (ST 2110-20 to ST 2110-22 conversion) and shows how to keep video and audio in sync through the whole processing chain.

This presentation demonstrates that JPEG XS works, that low-latency distributed production is possible, and the value of the ST 2110-22 addition to the ST 2110 suite.

You can see the slides here.

Watch now!

Speaker

Andy Rayner
Chief Technologist
Nevion Ltd.