Video: The Fundamentals of Virtualisation

Virtualisation continues to be a driving factor in the modernisation of broadcast workflows, both from the technical perspective of freeing functionality from bespoke hardware and from the commercial perspective of maximising ROI by increasing utilisation of infrastructure. Virtualisation itself is not new, but using it in broadcast is still new to many, and the technology continues to advance to meet modern bitrate and computation requirements.

In these two videos, Tyler Kern speaks to Mellanox’s Richard Hastie, NVIDIA’s Jeremy Krinitt and Ross Video’s John Naylor, who explain how virtualisation fits with SMPTE ST 2110 and real-time video workflows.

Richard Hastie explains that agility is the name of the game: by separating the software from the hardware, your workflow can, in principle, be deployed anywhere and has the freedom to move within the same infrastructure. This opens up the move to the cloud, or to centralised hosting with people working remotely. One of the benefits is the ability to have a pool of servers and continually repurpose them throughout the day. Rather than discrete boxes which each do only a few tasks and often sit unused, you have a quota of compute which is used much more efficiently, so the return on investment is higher, as is the overall value to the company. This principle is at the heart of Discovery’s transition of Eurosport to ST 2110 and JPEG XS: all equipment has been centralised, allowing production facilities in many countries around Europe to produce remotely from one heavily utilised set of equipment.

Part I

John Naylor explains the recent advancements virtualisation has brought to the broadcast market. vMotion from VMware allows live migration of virtual machines without loss of performance, which is really important when you’re running real-time graphics. GPUs are also vital for graphics and video tasks. In the past it’s been difficult for VMs to have full access to GPUs, but not only is that now practical, work has also been done to allow a GPU to be broken up and these reserved partitions dedicated to individual VMs using the NVIDIA Ampere architecture.
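To give a flavour of what that partitioning looks like from software, here is a minimal sketch, not NVIDIA’s own tooling, which uses the pynvml Python bindings to report whether MIG (Multi-Instance GPU), the Ampere feature John describes, is enabled on each GPU. It assumes an NVIDIA driver and the nvidia-ml-py package are installed.

```python
# A sketch: list each GPU and whether MIG (Multi-Instance GPU),
# the Ampere partitioning feature, is enabled.
# Requires an NVIDIA driver and 'pip install nvidia-ml-py'.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            state = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            state = "not supported"  # pre-Ampere GPUs raise here
        print(f"GPU {i} ({name}): MIG {state}")
finally:
    pynvml.nvmlShutdown()
```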
John continues by saying that VMware has recently focussed on the media space to allow better tuning of the hypervisor. When looking to deploy VM infrastructures, John recommends that end-users work closely with their partners to tune not only the hypervisor but also the OS, NIC firmware and the BIOS itself to deliver the performance needed.

“Timing is the number one challenge to the use of virtualisation in broadcast production at the moment”

Richard Hastie

Mellanox, now part of NVIDIA, has continued improving its ConnectX network cards, according to Richard Hastie, to deal with the high-bandwidth scenarios that uncompressed production throws up. These network cards now have onboard support for ST 2110, traffic shaping and PTP; without hardware PTP, getting 500-nanosecond-accurate timing into a VM is difficult. Mellanox also uses SR-IOV, a technology which bypasses the software switch in the hypervisor, reducing I/O overhead and bringing performance close to that of a non-virtualised machine. It does this by partitioning the PCI bus, meaning one NIC can present itself multiple times to the computer: whilst the NIC is shared, the software has direct access to it. For more information on SR-IOV, have a look at this article and this summary from Microsoft.
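As a rough illustration of how SR-IOV is exposed on a Linux host, the sketch below uses the kernel’s standard sysfs interface to ask a NIC driver to spawn Virtual Functions. The interface name is a placeholder, and a real deployment would go on to attach each VF to a VM.

```python
# A sketch of the standard Linux sysfs interface for SR-IOV: ask the NIC
# driver to spawn Virtual Functions, each of which appears as its own PCI
# function that can be handed to a VM. Run as root; 'ens1f0' is a
# hypothetical interface name, so substitute your own ConnectX port.
from pathlib import Path

NIC = "ens1f0"  # placeholder interface name
device = Path(f"/sys/class/net/{NIC}/device")

max_vfs = int((device / "sriov_totalvfs").read_text())
print(f"{NIC} supports up to {max_vfs} VFs")

# Writing a count to sriov_numvfs creates that many VFs. If VFs already
# exist, write 0 first to tear them down before requesting a new count.
(device / "sriov_numvfs").write_text("4")
print(f"Requested 4 VFs on {NIC}")
```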

Part II

Looking to the future, the panel sees virtualisation supporting the deployment of uncompressed ST 2110 and JPEG XS workflows, enabling a growing number of virtual productions. For virtualisation itself, they see a move from full OS virtualisation down to containerised microservices. Not only can these be more efficient but, if managed by an orchestration layer, they allow processing to move to the ‘edge’. This should allow some logic to happen much closer to the end-user while the main computation remains centralised.

Watch part I and part II now!
Speakers

Tyler Kern
Moderator
John Naylor
Technology Strategist & Director of Product Security
Ross Video
Richard Hastie
Senior Sales Director, Business Development
NVIDIA
Jeremy Krinitt
Senior Developer Relations Manager
NVIDIA

Video: NMOS – Ready, Steady, Go!

We have NMOS IS-04, -05, -06, -07… all the way to IS-10. Is it possibly too complex? Each NMOS specification brings an important feature to an IP/SMPTE ST 2110 workflow, but not every system needs each one, so life can become confusing. To help, NVIDIA (owner of Mellanox) has been developing an open-source project which allows for quick and easy deployment of an NMOS test system.

Kicking off the presentation, Félix Poulin explains how the EBU Pyramid for Media Nodes shows that SMPTE ST 2110 depends on a host of surrounding technologies to create a large system: discovery and registration, channel mapping, event and tally, network control, security and more. Félix shows how AMWA’s BCP-003-01 gives guidelines on securing NMOS communications, and how IS-09 allows a node joining the system to collect system parameters and then register itself in the IS-04 database. IS-05 and IS-06 allow end-points to be connected, either through IGMP with IS-05 or by an SDN controller using IS-06. IS-08 allows for audio mapping/shuffling, with BCP-002-01 marking which streams belong together and can be taken as a bundle. IS-07 gives a way for event and tally information to be passed from place to place.
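To make the registry side of this concrete, here is a minimal sketch of reading an IS-04 registry’s Query API with Python’s requests library. The registry URL is an assumption, and the error handling real code would need is omitted.

```python
# A sketch, with an assumed registry URL, of reading an IS-04 registry's
# Query API: each resource type (nodes, senders, receivers...) is a
# simple HTTP collection of JSON objects.
import requests

QUERY_API = "http://registry.example.com/x-nmos/query/v1.3"  # placeholder

for resource in ("nodes", "senders", "receivers"):
    items = requests.get(f"{QUERY_API}/{resource}", timeout=5).json()
    print(f"{resource}: {len(items)} registered")
    for item in items:
        # every IS-04 resource carries at least an 'id' and a 'label'
        print(f"  {item['id']}  {item.get('label', '')}")
```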

There’s a lot going on and already published, so getting started can seem quite daunting. For that reason, there is now an ‘NMOS at a glance’ document on the NMOS website. Gareth Sylvester-Bradley from Sony looks at the ongoing work within NMOS, such as finalising IS-10 and BCP-003-02, both of which will enable secure authorisation of clients in the system, and explains how AMWA works to ensure the NMOS activity groups head in the right direction with sufficient business cases and participation. He also outlines the importance of the NMOS testing tool and the criteria used for quality and adoption. Gareth finishes by discussing other in-progress NMOS work, including EDID connection management as part of the pro-AV IPMX project.
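IS-10 builds on standard OAuth 2.0, so a hedged sketch of the underlying client-credentials flow looks something like the following. Every URL and credential here is a placeholder, and real IS-10 adds client registration and JWT validation on top.

```python
# A sketch of the plain OAuth 2.0 client-credentials exchange that
# IS-10-style authorisation is built on: swap client credentials for a
# bearer token, then present it on NMOS API calls. All values are
# placeholders.
import requests

resp = requests.post("https://auth.example.com/token", data={
    "grant_type": "client_credentials",
    "client_id": "my-node",      # hypothetical client
    "client_secret": "s3cret",   # hypothetical secret
}, timeout=5)
token = resp.json()["access_token"]

nodes = requests.get(
    "https://registry.example.com/x-nmos/query/v1.3/nodes",
    headers={"Authorization": f"Bearer {token}"}, timeout=5,
).json()
print(f"{len(nodes)} nodes visible with this token")
```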

Finally, Richard Hastie introduces ‘Easy-NMOS’, which provides very easy deployment of IS-04, IS-05 and IS-09 along with BCP-003-01 and BCP-002-01. Introduced in 2019 by Mellanox, now part of NVIDIA, this is an easy-to-deploy, containerised set of 3 ‘servers’ which quickly stand up these technologies, including a test suite. It doesn’t move media, but it creates valid NMOS nodes and includes an MQTT broker. One container holds the NMOS registry, controller and MQTT broker; one is a virtual node; and the last is an NMOS testing service. Richard walks us through the 4-line install and brief configuration before demonstrating how to use it.
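Since Easy-NMOS includes an MQTT broker, one thing you can do once it’s running is watch IS-07 event and tally messages go by. The sketch below is a guess at the plumbing rather than taken from Richard’s demo: the broker address and topic are assumptions, since in a real system the topic comes from the sender’s IS-05 transport parameters.

```python
# A sketch: subscribe to IS-07 event & tally messages on an MQTT broker
# such as the one Easy-NMOS deploys. Uses the paho-mqtt 1.x API
# ('pip install "paho-mqtt<2"'); broker address and topic are assumed.
import json
import paho.mqtt.client as mqtt

BROKER = "registry.example.com"  # placeholder Easy-NMOS host
TOPIC = "x-nmos/events/#"        # placeholder topic filter

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # IS-07 messages carry identity, timing and the event payload itself
    print(msg.topic, json.loads(msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```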

Watch now!
Speakers

Félix Poulin
Director, Media Transport Architecture & Lab
CBC/Radio-Canada
Gareth Sylvester-Bradley
Principal Engineer
Sony EPE
Richard Hastie
Senior Sales Director, Mellanox Business Development
NVIDIA

Video: CDN Trends in FPGAs & GPUs

As technology continues to improve, immersive experiences become ever more feasible. This video looks at how CDNs can play their part in enabling technologies which seem to rely on fast, local compute where, as with many internet services, low latency is very important.

Greg Jones from NVIDIA and Nehal Mehta from Intel give us the lowdown in this video on what’s happening today to enable low-latency CDNs and what the future might look like. Intel, owner of FPGA maker Altera, and NVIDIA are both interested in how their products can be of as much service at the edge as in the core datacentres.

Greg is involved in XR development at NVIDIA. ‘XR’ is a term which refers to an outcome rather than any specific technology: ostensibly ‘eXtended’ reality, it includes VR, augmented reality and anything else which helps improve the immersive experience. Greg explains the importance of getting the ‘motion to photon’ delay to within 20ms. CDNs can play a role in this by moving compute to the edge, which tracks with the current trend of reducing backhaul; edge computation is already on the rise.
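To see why edge placement matters, a back-of-the-envelope budget helps: every millisecond of network round trip comes straight out of the 20ms motion-to-photon allowance. The figures in this sketch are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope arithmetic, with made-up figures, for the ~20 ms
# motion-to-photon budget.
BUDGET_MS = 20.0

def rendering_budget(rtt_ms, encode_ms=4.0, decode_ms=2.0, display_ms=5.0):
    """Milliseconds left for rendering after network and codec costs."""
    return BUDGET_MS - (rtt_ms + encode_ms + decode_ms + display_ms)

for site, rtt in [("regional datacentre", 25.0),
                  ("metro edge", 8.0),
                  ("on-premises edge", 2.0)]:
    left = rendering_budget(rtt)
    verdict = "fits" if left > 0 else "blows the budget"
    print(f"{site}: {left:+.1f} ms for rendering ({verdict})")
```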

Greg also touches on recent power improvements in newer GPUs. This is similar to what we heard the other day from Gerard Phillips of Arista, who said that switch manufacturers are still using technology CPUs were on several years ago, meaning there’s plenty in the bank for speed increases over the coming years. According to Greg, the same is true for GPUs. Moreover, it’s important to compare compute per watt rather than in absolute terms.
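That comparison reduces to a one-liner: divide throughput by power draw before ranking devices. The numbers below are made-up placeholders purely to illustrate the idea.

```python
# Made-up placeholder figures illustrating compute-per-watt comparison.
gpus = {"older generation": (10.0, 250.0),   # (TFLOPS, watts) - fictional
        "newer generation": (30.0, 350.0)}

for name, (tflops, watts) in gpus.items():
    print(f"{name}: {tflops / watts * 1000:.0f} GFLOPS per watt")
```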

Nehal Mehta explains that, in the same way that GPUs can offload certain tasks from the CPU, so can FPGAs. At scale, this can be critical for tasks like deep packet inspection, encryption or even dynamic ad insertion at the edge.

The second half of the video looks at what’s happening during the pandemic. Nehal explains that the need for encryption has increased, and Greg sees that large engineering functions are now, or will soon be, done in the cloud. Greg sees XR going a long way to helping people collaborate around a large digital model, which may help to reduce travel.

The last point made regards all-day video conferencing leaving people wanting “more meaningful interactions”. We are seeing attempts at ever richer meeting experiences, both with and without XR.
Watch now!
Speakers

Greg Jones
Global Business Development, XR
NVIDIA
Nehal Mehta
Director, Visual Cloud, CDN Segment
Intel
Tim Siglin
Moderator
Founding Executive Director, Help Me Stream

Video: Hardware Transcoding Solutions For The Cloud

Hardware encoding is ever more pervasive, with Intel’s Quick Sync embedding dedicated encoding hardware inside its CPUs and NVIDIA GPUs offering NVENC encoding support. So how does it compare with software encoding? And for HEVC, can Xilinx’s FPGA solution offer a boost in quality or cost compared to software encoding?

Jan Ozer has stepped up to the plate to put this all to the test, analysing how many real-time encodes are possible on various cloud computing instances, the cost implications and the quality of the output. Jan’s analytical and systematic approach brings us data rather than anecdotes, giving confidence in the outcomes and the ability to test them for yourself.

Over and above these elements, Jan also looks at the bitrate stability of the encodes, which can be important for systems sensitive to variation, such as services running at scale. We see that the hardware AVC solutions perform better than x264 here.
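One way to gauge that kind of bitrate stability, sketched here as an illustration rather than Jan’s own methodology, is to bucket a file’s video packet sizes per second with ffprobe and look at the spread.

```python
# An illustration, not Jan's methodology: bucket video packet sizes per
# second with ffprobe and measure the spread. Assumes ffprobe is on
# PATH; 'input.mp4' is a placeholder.
import json
import statistics
import subprocess
from collections import defaultdict

out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_packets", "-of", "json", "input.mp4"],
    capture_output=True, text=True, check=True,
).stdout

bytes_per_second = defaultdict(int)
for pkt in json.loads(out)["packets"]:
    ts = pkt.get("pts_time") or pkt.get("dts_time") or "0"
    bytes_per_second[int(float(ts))] += int(pkt["size"])

kbps = [8 * b / 1000 for b in bytes_per_second.values()]
print(f"mean {statistics.mean(kbps):.0f} kbps, "
      f"stdev {statistics.stdev(kbps):.0f} kbps")
```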

Jan takes us through the way he set up these tests whilst sharing the relevant ffmpeg commands. Finally he shares BD plots and example images which exemplify the differences between the codecs.
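For readers wanting to recreate the comparison, the sketch below shows the general shape of such ffmpeg invocations, software x264 versus NVENC. The filenames and bitrate are placeholders; the exact flags Jan used are in his slides rather than reproduced here.

```python
# Placeholder filenames and bitrate: encode the same source with
# software x264 and with NVENC, then compare the outputs.
import subprocess

SRC, BITRATE = "source.mp4", "5M"  # placeholders

# software encode with x264
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264",
                "-b:v", BITRATE, "-an", "x264_out.mp4"], check=True)

# hardware encode with NVENC (needs an ffmpeg build with nvenc enabled)
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "h264_nvenc",
                "-b:v", BITRATE, "-an", "nvenc_out.mp4"], check=True)
```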

Watch now!
Download the slides
Speaker

Jan Ozer
Principal, Streaming Learning Center
Contributing Editor, Streaming Media