Video: JPEG XS by intoPIX

Many of today's bottlenecks in processing video are related to bandwidth, but most codecs that address this demand a lot of compute power and/or add a lot of latency. For those who work with high-quality video, such as within cameras and in TV studios, what's really needed is a 'zero' latency codec that keeps the video visually lossless while dropping the data rate from gigabits to megabits. This is what JPEG XS does, and Jean-Baptiste Lorent joined the NVIDIA GTC21 conference to explain why it's so powerful.

Created by intoPIX, a company active not only in compression intellectual property but also within standards bodies such as JPEG, MPEG, ISO, SMPTE and others, JPEG XS is one of its latest technologies to come to market. Lorent explains that it's designed both to live inside equipment, compressing video as it moves between parts of a device such as a phone, where it enables higher resolutions while minimising energy use, and to drive down bandwidths between equipment in media workflows. We've featured case studies of JPEG XS in broadcast workflows previously.

JPEG XS prioritisation of quality & latency over compression. Source: intoPIX

The XS in JPEG XS stands for 'Xtra Small, Xtra Speed', which underlines how the technology looks at compression differently from MPEG, AV1 and similar codecs. As discussed in this interview, the codec market is maturing and exploiting benefits other than pure bitrate. Nowadays we need codecs that make it easy for AI/ML algorithms to access video quickly, and we need low-complexity codecs for embedded devices, from old set-top boxes to new hardware like body cams. We also need ultra-low-delay codecs, with an encode delay in the microseconds, not milliseconds, so that even multiple encodes seem instantaneous. JPEG XS is unique in delivering the latter.

With visually lossless results at compression ratios down to 20:1, JPEG XS is expected to be used by most at around 10:1, at which point it can carry uncompressed-quality HD 1080i at around 200Mbps, down from 1.5Gbps, or can bring a 76Gbps flow down to 5Gbps or less. Lorent explains that the maths in the algorithm is low in complexity and highly parallelisable, a key benefit on modern many-core CPUs. Moreover, and importantly for implementation in GPUs and FPGAs, it doesn't need external memory and is light on logic.
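
As a quick sanity check on those figures, a couple of lines of Python show how the compression ratios translate to bitrates; the uncompressed values here are nominal link rates, so the results are approximate:

```python
def compressed_rate(uncompressed_bps: float, ratio: float) -> float:
    """Bitrate after applying a compression ratio such as 10:1."""
    return uncompressed_bps / ratio

HD_1080I = 1.485e9  # nominal HD-SDI link rate, ~1.5 Gbps
BIG_FLOW = 76e9     # the 76 Gbps flow mentioned above

print(f"1080i at 10:1   -> {compressed_rate(HD_1080I, 10) / 1e6:.0f} Mbps")
print(f"76 Gbps at 15:1 -> {compressed_rate(BIG_FLOW, 15) / 1e9:.1f} Gbps")
```

The straight arithmetic gives roughly 150Mbps for 1080i at 10:1; the ~200Mbps quoted above presumably allows for a slightly gentler ratio or transport overhead.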

The talk finishes with Lorent highlighting that JPEG XS has been created flexibly, agnostic to colour space, chroma subsampling, bit depth, resolution and more. Standardised under ISO/IEC 21122, its carriage is defined in SMPTE ST 2110-22, over RTP, in an MPEG transport stream, and in the file domain as MXF, HEIF, JXS and MP4 (ISO BMFF).

Free registration required. The easiest way to watch is to click above, register, then come back here and click again.
If you have trouble, use the chat at the bottom right of this website and we can send you a link.

Speakers

Jean-Baptiste Lorent
Director Marketing & Sales
intoPIX

Video: FOX – Uncompressed live sports in the cloud

Is uncompressed video in the cloud, with just 6 frames of latency to get there and back, ready for production? WebRTC manages sub-second streaming in one direction and can even deliver AV1 in real-time, but the key to getting down to a 100ms round trip is to move down to millisecond encoding and to use uncompressed video in the cloud. This video shows how it can be done.

Fox has a clear direction to move into the cloud and last year joined AWS to explain how they've put their distribution into the cloud, remuxing feeds for ATSC transmitters, satellite uplinks and cable headends, and encoding for internet delivery. In this video, Fox's Joel Williams and AWS's Evan Statton explain their work together making this a reality. Joel explains that latency is not a particularly hot topic for distribution, as there are many delays in that chain; the focus has been on getting the contribution feeds into playout and MCR monitoring quickly. After all, when people are counting down to an ad break, it needs to roll exactly on zero.

Evan explains the approach AWS has taken to solving this latency problem, starting with the idea of using SMPTE's ST 2110 in the cloud. ST 2110 video flows typically run at 1Gbps or more and, when implemented on-premise, are usually built on a dedicated network with very strict timing. Cloud datacentres aren't like that, and Evan demonstrates the point by showing that, across 8 video streams, there are video drops of several seconds, which is clearly not acceptable. Amazon, however, has a technology called Scalable Reliable Datagram (SRD) aimed at moving high-bitrate data through its cloud. Using a very small retransmission buffer, it's able to use multiple paths across the network to deliver uncompressed video in real time. Keeping the retransmission buffer very small provides just enough healing to redeliver missing packets within the 16.7ms it takes to deliver one frame of 60fps video.
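
The talk doesn't go into SRD's internals, but the core trick — keep the retransmission window so small that a resend only happens if it would still land within the current frame period — can be sketched hypothetically in Python (all names here are illustrative, not AWS's API):

```python
import time

FRAME_PERIOD = 1 / 60  # 16.7 ms per frame at 60 fps

class RetransmissionBuffer:
    """Hypothetical sketch of a deadline-bounded retransmission buffer."""

    def __init__(self, rtt_estimate: float):
        self.rtt = rtt_estimate  # estimated round trip across the fabric
        self.sent = {}           # seq -> (payload, send_time)

    def record(self, seq: int, payload: bytes) -> None:
        now = time.monotonic()
        self.sent[seq] = (payload, now)
        # Prune packets too old to help: a resend now would arrive after
        # the frame deadline, so holding them any longer is wasted memory.
        horizon = now - (FRAME_PERIOD - self.rtt)
        self.sent = {s: v for s, v in self.sent.items() if v[1] >= horizon}

    def resend(self, seq: int):
        """Return the payload for a lost packet only if a resend can still
        arrive within the frame period; otherwise the receiver conceals."""
        entry = self.sent.get(seq)
        if entry and time.monotonic() - entry[1] + self.rtt < FRAME_PERIOD:
            return entry[0]
        return None
```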

On top of SRD, AWS has introduced CDI, the Cloud Digital Interface, which describes uncompressed video flows in a way already familiar to software developers. Its 'Audio Video Metadata' (AVM) layer handles flows in the same way as 2110, for instance keeping essences separate. Evan says this has helped vendors react favourably to the new technology: instead of using plain UDP, they can use SRD with CDI, giving them not only familiar video data structures but also, since SRD is implemented in the Nitro network card, packet processing that is hidden from the application itself.
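
The CDI SDK itself is a C library, but the flavour of its AVM layer — each essence described separately in familiar video terms — can be suggested with a hypothetical sketch (the type and field names here are illustrative, not the real API):

```python
from dataclasses import dataclass

# Hypothetical rendering of the kind of per-essence description the AVM
# layer carries; names are illustrative, not the CDI SDK's actual types.

@dataclass
class VideoEssence:
    width: int
    height: int
    frame_rate: str   # e.g. "60000/1001"
    sampling: str     # e.g. "YCbCr-4:2:2"
    bit_depth: int

@dataclass
class AudioEssence:
    channels: int
    sample_rate: int  # e.g. 48000

# As in ST 2110, each essence is a separate flow rather than one
# interleaved signal, so video and audio are described independently.
video = VideoEssence(1920, 1080, "60000/1001", "YCbCr-4:2:2", 10)
audio = AudioEssence(channels=16, sample_rate=48000)
```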

The final piece of the puzzle is keeping the journey into and out of the cloud low-latency. This is done using JPEG XS, which has an encoding time of a few milliseconds. Rather than using RIST, for instance, to protect the feed on the way into the cloud, Fox is testing ST 2022-7. 2022-7 takes in two identical streams, typically on two network interfaces, so the receiver ends up with two copies of each packet; where one gets lost, the other is still available. This gives path redundancy that a single stream can never offer. Overall, the test with Fox's Arizona-based Technology Center is shown in the video to have only 6 frames of latency for the return trip. Assuming they used a California-based AWS data centre, the ping time may have been as low as two frames, leaving four frames for 2022-7 buffers, XS encoding and uncompressed processing in the cloud.
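
The receive side of 2022-7 is conceptually simple. A minimal sketch of the merge logic — the first copy of each sequence number wins, regardless of which path delivered it — might look like this:

```python
class SeamlessMerger:
    """Minimal ST 2022-7-style receiver sketch: accept the first copy of
    each sequence number from either path, discard the duplicate."""

    def __init__(self):
        # A real receiver bounds this window and handles sequence-number
        # wrap-around; an ever-growing set is just for illustration.
        self.seen = set()

    def on_packet(self, seq: int, payload: bytes, path: str):
        if seq in self.seen:
            return None   # duplicate already delivered by the other path
        self.seen.add(seq)
        return payload    # first arrival wins, whichever path it took

merger = SeamlessMerger()
merger.on_packet(1001, b"...", path="A")  # delivered
merger.on_packet(1001, b"...", path="B")  # duplicate, dropped
merger.on_packet(1002, b"...", path="B")  # path A lost it; B covers
```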

Watch now!
Speakers

Joel Williams
VP of Architecture & Engineering,
Fox Corporation
Evan Statton
Principal Architect, Media & Entertainment,
AWS

Video: IPMX – Debunking the Myths

2110 for AV? IPMX is an IP specification for interoperable Pro AV equipment. SMPTE's ST 2110 suite is very powerful, but not easy enough to deploy to rig for a live event, and at the moment there is no open standard in Pro AV that can deliver IP. Whilst there are a number of proprietary alliances, which enable widespread use of a single chip or software core, this interoperability comes at a cost and is ultimately underpinned by one company or a group of companies.

Dave Chiappini from Matrox discusses the work of the AIMS Pro AV working group, which is developing IPMX. Dave underlines that this is a pull to unify the Pro AV industry, helping people avoid investing over and over again in reinventing protocols or reworking their products to interoperate. He feels that 'open standards help propel markets forward', adding energy and avoiding vendor lock-in. This is one reason for the inclusion of NMOS: allowing any vendor to make a control system by working to the same open specification opens up the market to both small and large companies.

The Pro AV market needs more than just swift deployment. HDMI is pervasive and can carry more frame rates and resolutions than SDI, so HDMI support is top of the list of features that IPMX will add on top of 2110, NMOS and PTP. HDMI also uses HDCP, so AIMS is now working with DCP on a method of carrying HDCP over 2110. With TVs already replacing SDI monitors, such interoperability with HDMI should bring down the cost of monitoring for non-picture-critical environments.

Timing can be pricey and complex if PTP and GPS are required. A lot of time and effort goes into making the PTP infrastructure work properly within an SMPTE ST 2110 facility, and having to do this at an event whilst setting up in a short timespan helps no one; as Dave elaborates, a point-to-point video link simply doesn't need high-precision timing. Not only does IPMX relax the timing requirements, it will also support asynchronous video streams.
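
To see why PTP setups take effort, it helps to look at the basic IEEE 1588 exchange: the maths only recovers the clock offset if the path delay is symmetric, an assumption a hastily rigged event network rarely satisfies. A minimal sketch:

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Basic IEEE 1588 delay request-response maths.
    t1: Sync sent by the master      t2: Sync received by the slave
    t3: Delay_Req sent by the slave  t4: Delay_Req received by the master
    Both results assume a symmetric path; any asymmetry turns directly
    into clock error, which is why 2110 timing networks are engineered so
    carefully and why IPMX relaxing this requirement is such a saving."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock error vs master
    delay = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
    return offset, delay
```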

David explains that whilst there are times when zero compression is needed in both AV and broadcast, a lot of the time we need video that will easily fit into 1Gbps. For this, JPEG XS is being used: a lightweight codec that can run in software, FPGAs and more, and which supports 4:4:4 video for maximum fidelity. For more about JPEG XS, have a listen to this talk. Some good news for bandwidth fans is that all new Intel chips support 2.5GbE networking over existing cabling, which IPMX will be supporting.
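
As a rough guide to what 'easily fit into 1Gbps' means in practice, the compression ratios required for a couple of common formats can be estimated like so (the uncompressed rates are approximate, and the 5% headroom figure is an assumption):

```python
# Rough check of what it takes to fit common formats into 1 GbE and the
# 2.5 GbE mentioned above; both rates and headroom are approximations.

FORMATS = {
    "1080p60 10-bit 4:2:2": 3.0e9,   # ~3G-SDI territory
    "2160p60 10-bit 4:2:2": 12.0e9,  # ~12G-SDI territory
}

for link_name, link_bps in [("1 GbE", 1e9), ("2.5 GbE", 2.5e9)]:
    budget = link_bps * 0.95  # leave ~5% for protocol overhead
    for fmt, rate in FORMATS.items():
        print(f"{fmt} over {link_name}: needs ~{rate / budget:.1f}:1")
```

Even UHD over 1GbE comes out around 13:1, comfortably inside JPEG XS's visually lossless range.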

Pro AV also needs the ability to throw preview video out to an iPad or similar. This isn't going to work with JPEG XS, the preferred 'minimal compression' codec for IPMX, so a system for including H.264 or H.265 is being investigated, which could have knock-on benefits for broadcast.

David finishes by underlining that IPMX will be an open standard that can be implemented in software on a server, on a desktop or on a mobile phone. It's scalable and ready to support the Pro AV and events industry.

Watch now!
Speakers

David Chiappini
Chair, Pro AV Working Group, AIMS
Executive Vice President, Research & Development,
Matrox Graphics Inc.

Video: The Fundamentals of Virtualisation

Virtualisation is continuing to be a driving factor in the modernisation of broadcast workflows both from the technical perspective of freeing functionality from bespoke hardware and from the commercial perspective of maximising ROI by increasing utilisation of infrastructure. Virtualisation itself is not new, but using it in broadcast is still new to many and the technology continues to advance to deal with modern bitrate and computation requirements.

In these two videos, Tyler Kern speaks to Mellanox's Richard Hastie, NVIDIA's Jeremy Krinitt and Ross Video's John Naylor about how virtualisation fits with SMPTE ST 2110 and real-time video workflows.

Richard Hastie explains that agility is the name of the game: by separating the software from the hardware, your workflow can, in principle, be deployed anywhere and is free to move within the same infrastructure. This opens up the move to the cloud, or to centralised hosting with people working remotely. One of the benefits is the ability to take a pool of servers and continually repurpose them throughout the day. Rather than discrete boxes which only do a few tasks and often go unused, you have a quota of compute which is used much more efficiently, so the return on investment is higher, as is the overall value to the company. This principle is at the heart of Discovery's transition of Eurosport to ST 2110 and JPEG XS: equipment has been centralised, allowing production facilities in many countries around Europe to produce remotely from one heavily utilised set of equipment.

Part I

John Naylor explains the recent advancements virtualisation has brought to the broadcast market. vMotion from VMware allows live migration of virtual machines without loss of performance, which is really important when you're running real-time graphics. GPUs are also vital for graphics and video tasks. In the past, it's been difficult for VMs to have full access to GPUs, but not only is that now practical, work has also been done to allow a GPU to be partitioned, with reserved partitions dedicated to individual VMs, using NVIDIA's Ampere architecture.
John continues by saying that VMware has recently focussed on the media space to allow better tuning of the hypervisor. When looking to deploy VM infrastructures, John recommends that end-users work closely with their partners to tune not only the hypervisor but also the OS, the NIC firmware and the BIOS itself to deliver the performance needed.
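
The GPU partitioning John refers to is NVIDIA's Multi-Instance GPU (MIG) feature on Ampere. On bare metal it can be driven from nvidia-smi, which hypervisor vGPU stacks then build on; a rough sketch, with the GPU index and profile name as examples:

```python
import subprocess

def nvsmi(args: str) -> None:
    """Run an nvidia-smi command and show its output."""
    result = subprocess.run(["nvidia-smi"] + args.split(),
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)

nvsmi("-i 0 -mig 1")              # enable MIG mode on GPU 0 (needs a reset)
nvsmi("mig -i 0 -lgip")           # list the GPU instance profiles on offer
nvsmi("mig -i 0 -cgi 1g.5gb -C")  # carve out a 1g.5gb instance for a VM
```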

“Timing is the number one challenge to the use of virtualisation in broadcast production at the moment”

Richard Hastie

Mellanox, now part of NVIDIA, has continued improving its ConnectX network cards, according to Richard Hastie, to deal with the high-bandwidth scenarios that uncompressed production throws up. These network cards now have onboard support for ST 2110, traffic shaping and PTP; without hardware PTP, getting 500-nanosecond-accurate timing into a VM is difficult. Mellanox also uses SR-IOV, a technology which bypasses the software switch in the hypervisor, reducing I/O overhead and bringing performance close to that of a non-virtualised machine. It does this by partitioning the PCI bus so that one NIC can present itself multiple times to the computer: whilst the NIC is shared, the software has direct access to it. For more information on SR-IOV, have a look at this article and this summary from Microsoft.
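
On Linux, the partitioning SR-IOV provides is exposed through sysfs. Here is a minimal sketch of creating virtual functions; the interface name is an example, and root privileges plus an SR-IOV-capable NIC are assumed:

```python
from pathlib import Path

iface = "enp3s0f0"  # example name; 'device' links to the NIC's PCI device
dev = Path(f"/sys/class/net/{iface}/device")

total = int((dev / "sriov_totalvfs").read_text())  # VFs the NIC supports
print(f"{iface} supports up to {total} virtual functions")

(dev / "sriov_numvfs").write_text("4")  # create 4 virtual functions

# Each VF now appears as its own PCI device that can be passed straight
# through to a VM, sidestepping the hypervisor's software switch.
```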

Part II

Looking to the future, the panel sees virtualisation supporting the deployment of uncompressed ST 2110 and JPEG XS workflows, enabling a growing number of virtual productions. For virtualisation itself, they see a move from OS-level virtualisation down to containerised microservices. Not only can these be more efficient but, if managed by an orchestration layer, they allow processing to move to the 'edge', letting some logic happen much closer to the end-user while the main computation stays centralised.

Watch part I and part II now!
Speakers

Tyler Kern
Moderator
John Naylor
Technology Strategist & Director of Product Security
Ross
Richard Hastie
Senior Sales Director, Business Development
NVIDIA
Jeremy Krinitt
Senior Developer Relations Manager
NVIDIA