Video: A paradigm shift in codec standards – MPEG-5 Part 2 LCEVC

LCEVC (Low Complexity Enhancement Video Coding) is a low-complexity encoder/decoder in the process of standardisation as MPEG-5 Part 2. Rather than being an entirely new codec, LCEVC improves the detail and sharpness of any base video codec (e.g., AVC, HEVC, AV1, EVC or VVC) while lowering the overall computational complexity, expanding the range of devices that can access high-quality and/or low-bitrate video.

The idea is to use a base codec at a lower resolution and add an additional layer of encoded residuals to correct artifacts. Details are encoded with a directional decomposition transform using a very small matrix (2×2 or 4×4), which is efficient at preserving high frequencies. As LCEVC uses parallelized techniques to reconstruct the target resolution, it encodes video faster than a full-resolution base encoder.
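To make the residual idea concrete, here is a minimal numpy sketch (an illustration only, not the normative LCEVC transform or bitstream): the frame is downscaled to stand in for the base layer, the base reconstruction is upsampled back to full resolution, and a simple 2×2 Hadamard-style transform is applied to the residual blocks. The frame contents, the scaling filter and the transform matrix are all assumptions chosen for brevity.

```python
import numpy as np

# Hypothetical luma frame (values and size are placeholders for illustration).
frame = np.random.randint(0, 256, (1080, 1920)).astype(np.float64)

# "Base" layer: 2x downscale by block averaging, standing in for a base encode.
base = frame.reshape(540, 2, 960, 2).mean(axis=(1, 3))

# Upsample the base reconstruction back to full resolution (nearest neighbour).
upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)

# Residuals: the detail the enhancement layer has to carry.
residual = frame - upsampled

# 2x2 Hadamard-style transform applied per block -- a small directional
# decomposition in the spirit of (but not identical to) the LCEVC transform.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]])

blocks = residual.reshape(540, 2, 960, 2).transpose(0, 2, 1, 3)  # (540, 960, 2, 2)
coeffs = H @ blocks @ H.T  # average, horizontal, vertical and diagonal components

print("residual energy:", float(np.sum(residual ** 2)))
print("coefficient block shape:", coeffs.shape)
```

In the real codec these residual coefficients are quantised and carried in the enhancement layer; the tiny block size is what keeps both encode and decode cheap and easy to parallelize.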

LCEVC allows enhancement layers to be added on top of existing bitstreams, so, for example, UHD resolution can be delivered where only HD was possible before, thanks to sharing decoding between the ASIC and the CPU. LCEVC can be decoded via lightweight software processing, and even via HTML5.

In this presentation Guido Meardi from V-Nova introduces LCEVC and answers a few important questions, including: is it suitable for very high quality / high bitrate compression, and will it work with future codecs? He also shows performance data and benchmarks for live and VoD streaming, illustrating the compression quality and encoding complexity benefits achievable with LCEVC as an enhancement to H.264, HEVC and AV1.

Watch now!

Speaker

Guido Meardi
CEO and Co-Founder
V-Nova Ltd.

Video: Investigating Media Over IP Multicast Hurdles in Containerized Platforms

As video infrastructures have converged with enterprise IT, they have started incorporating technologies and methods typical of data centres. First came virtualisation, allowing COTS (Commercial Off-The-Shelf) components to be used. Then came the move towards cloud computing, taking advantage of economies of scale.

However, these innovations did little to address the dependence on monolithic projects that impeded change and innovation. Early strategies for Video over IP were based on virtualised hardware and IP gateway cards. As the digital revolution took place with the emergence of OTT players, microservices based on containers were developed. The aim was to shorten the cycle of software updates and enhancements.

Containers insulate application software from the underlying operating system, removing the dependence on hardware, and can be enhanced without changing the underlying operational fabric. This provides the foundation for more loosely coupled and distributed microservices, where applications are broken into smaller, independent pieces that can be deployed and managed dynamically.

Modern containerized server software methods such as Docker are very popular in OTT and cloud solutions, but not in SMPTE ST 2110 systems. In the video above, Greg Shay explains why.

Docker can package an application and its dependencies in a virtual container that can run on any Linux server. It uses the resource isolation features of the Linux kernel and a union-capable file system to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. Docker can get more applications running on the same hardware than VMs can, makes it easy for developers to quickly create ready-to-run containerized applications, and makes managing and deploying applications much easier.

However, there is currently a major issue with using Docker for ST 2110 systems: Docker containers do not work with multicast traffic. The root of the multicast problem lies in the way the Linux kernel handles multicast routing. It is possible to wrap a VM around each Docker container just to achieve independent multicast network routing by emulating the full network interface, but this defeats the point of capturing and delivering the behaviour of the containerized product in a self-contained software deliverable.
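To see the symptom concretely, a minimal multicast receiver like the sketch below (the group address and port are arbitrary placeholders) happily receives packets when run directly on the host, or in a container started with host networking, but typically receives nothing in a container on Docker's default bridge network, since multicast arriving on the physical interface is not forwarded onto the bridge.

```python
import socket
import struct

# Arbitrary group/port for illustration; real ST 2110 flows use addresses
# assigned by the facility's IP plan.
GROUP = "239.1.1.1"
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on all interfaces (struct ip_mreq: group + interface).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print(f"Listening for multicast on {GROUP}:{PORT} ...")
while True:
    data, addr = sock.recvfrom(2048)
    print(f"Received {len(data)} bytes from {addr}")
```

Running the same script in a container started with `docker run --network host` restores reception; that is the "quick and dirty" shortcut discussed next.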

There is a quick and dirty partial shortcut which allows a container to connect to all the networking resources of the Docker host machine, but it does not isolate containers into their own IP addresses, nor does it give them their own ports. You don’t really get a nice structure of ‘multiple products in multiple containers’, which defeats the purpose of containerized software.

You can see the slides here.

Watch now!

Speaker

Greg Shay
CTO
The Telos Alliance

Video: JPEG XS in Action for IP Production

JPEG XS is a new intra-frame compression standard delivering JPEG 2000 quality with 1000x lower latency – microseconds instead of milliseconds. The codec provides relatively low bandwidth (visually lossless compression at a ratio of around 10:1) with very low and fixed latency, which makes it ideal for remote production of live events.
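As a back-of-the-envelope illustration of what a roughly 10:1 ratio means for contribution links (the formats and frame rates below are assumptions chosen for the arithmetic, not figures from the talk):

```python
# Rough uncompressed bit rates for 10-bit 4:2:2 video (2 samples per pixel on
# average: one luma plus half of each chroma component), ignoring blanking.
def uncompressed_gbps(width, height, fps, bit_depth=10, samples_per_pixel=2):
    return width * height * samples_per_pixel * bit_depth * fps / 1e9

for name, (w, h, fps) in {"1080p60": (1920, 1080, 60),
                          "2160p60": (3840, 2160, 60)}.items():
    raw = uncompressed_gbps(w, h, fps)
    jpeg_xs = raw / 10  # visually lossless at roughly 10:1
    print(f"{name}: ~{raw:.1f} Gbps uncompressed -> ~{jpeg_xs * 1000:.0f} Mbps with JPEG XS")
```

At around 1 Gbps for a UHD feed, several compressed signals fit on a single 10 GbE WAN circuit, which is what makes JPEG XS practical for distributed live production.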

In this video Andy Rayner from Nevion shows how JPEG XS fits into all-IP broadcast technology with the SMPTE ST 2110-22 standard. He then presents the world’s first full JPEG XS deployment for live IP production, created for a large sports broadcaster. It was designed for pan-European WAN operation and based on the ST 2110 suite with ST 2022-7 protection.

Andy discusses the challenges of IP-to-IP processing (ST 2110-20 to ST 2110-22 conversion) and shows how to keep video and audio in sync through the whole processing chain.
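One way to reason about that sync is via RTP timestamps: in ST 2110 every essence derives its timestamps from the same PTP reference (a 90 kHz clock for video, typically 48 kHz for ST 2110-30 audio), so stamps taken at any point in the chain can be converted back to a common timebase and compared. The sketch below, with an assumed capture instant, is a simplified illustration of the principle, not anything shown in the talk.

```python
VIDEO_CLOCK = 90_000   # Hz, ST 2110 video RTP clock
AUDIO_CLOCK = 48_000   # Hz, typical ST 2110-30 audio RTP clock

def rtp_timestamp(ptp_seconds: float, clock_rate: int) -> int:
    """32-bit RTP timestamp for a time measured from the PTP epoch."""
    return int(round(ptp_seconds * clock_rate)) % 2**32

def as_seconds(ts: int, clock_rate: int) -> float:
    """Time represented by a timestamp, modulo its wrap period."""
    return ts / clock_rate

# Assumed capture instant for illustration (seconds since the PTP epoch).
capture = 1234.56789

video_ts = rtp_timestamp(capture, VIDEO_CLOCK)
audio_ts = rtp_timestamp(capture, AUDIO_CLOCK)

# Both stamps describe the same instant; any divergence measured after a
# processing step is the audio/video offset that has to be corrected.
offset_ms = abs(as_seconds(video_ts, VIDEO_CLOCK)
                - as_seconds(audio_ts, AUDIO_CLOCK)) * 1000
print(f"video ts {video_ts}, audio ts {audio_ts}, apparent offset {offset_ms:.3f} ms")
```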

This presentation demonstrates that JPEG XS works, that low-latency distributed production is possible, and the value that ST 2110-22 adds to the ST 2110 suite.

You can see the slides here.

Watch now!

Speaker

Andy Rayner
Chief Technologist
Nevion Ltd.

Video: M6 France – Master Control and Playout IP Migration

French broadcast company M6 Group has recently moved to an all-IP workflow, employing the SMPTE ST 2110 suite of standards for professional media delivery over IP networks. The two main playout channels and the MCR have already been upgraded, and the next few channels will be transitioned to the new core soon.

The M6 system comprises equipment from five different vendors (Evertz, Tektronix, Harmonic, Ross and TSL), all managed and controlled using the AMWA NMOS IS-04 and IS-05 specifications. Such interoperability is an inherent feature of the SMPTE ST 2110 suite of standards, allowing customers to focus on the operational workflows and flexibility that IP brings them. Centralised management and configuration of the system is provided through web interfaces, which also allows new equipment to be added easily and automatically.
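As a flavour of what that control layer looks like, the sketch below lists the senders registered with an IS-04 Query API; the registry URL is a placeholder, and the resource path follows the published AMWA IS-04 specification rather than anything specific to the M6 deployment.

```python
import json
import urllib.request

# Placeholder registry address -- in a real facility this is the registration
# and discovery service advertised on the media network.
QUERY_API = "http://registry.example.net/x-nmos/query/v1.3"

def list_senders(query_api: str):
    """Return the senders currently registered with the IS-04 registry."""
    with urllib.request.urlopen(f"{query_api}/senders") as resp:
        return json.load(resp)

if __name__ == "__main__":
    for sender in list_senders(QUERY_API):
        print(sender["id"], sender.get("label", ""))
```

Connections between discovered senders and receivers are then made with IS-05, by staging and activating transport parameters on the receivers through their Connection API endpoints.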

Thanks to Software Defined Orchestration and intuitive touch-screen interfaces, information such as source paths, link bandwidth / status, and device details can be quickly accessed via a web GUI. As the system is based on an IP network, it is possible to come in and out of the fabric numerous times without the cost implications you would have in the SDI world. Every point of the signal chain can be easily visualised, which enables broadcast engineers to maintain and configure the system with ease.

You can see the slides here.

Watch now!

Speakers

Slavisa Gruborovic
Solution Architect
Evertz Microsystems Inc.

Fernando Solanes
Director Solutions Engineering
Evertz Microsystems Inc.