Video: The Fundamentals of Virtualisation

Virtualisation continues to be a driving factor in the modernisation of broadcast workflows, both from the technical perspective of freeing functionality from bespoke hardware and from the commercial perspective of maximising ROI by increasing utilisation of infrastructure. Virtualisation itself is not new, but using it in broadcast is still new to many, and the technology continues to advance to meet modern bitrate and computation requirements.

In these two videos, Tyler Kern speaks to Mellanox’s Richard Hastie, NVIDIA’s Jeremy Krinitt and Ross Video’s John Naylor, who explain how virtualisation fits with SMPTE ST 2110 and real-time video workflows.

Richard Hastie explains that agility is the name of the game: by separating software from hardware, your workflow can, in principle, be deployed anywhere and move freely within the same infrastructure. This opens up the move to the cloud or to centralised hosting with people working remotely. One of the benefits is the ability to have a pool of servers and continually repurpose them throughout the day. Rather than discrete boxes which only do a few tasks and often go unused, you have a quota of compute which is used much more efficiently, so the return on investment is higher, as is the overall value to the company. As an example, this principle is at the heart of Discovery’s transition of Eurosport to ST 2110 and JPEG XS. They have centralised all equipment, allowing production facilities in many countries around Europe to produce remotely from one heavily utilised set of equipment.

Part I

John Naylor explains the recent advances virtualisation has brought to the broadcast market. vMotion from VMware allows live migration of virtual machines without loss of performance, which is really important when you’re running real-time graphics. GPUs are also vital for graphics and video tasks. In the past it has been difficult for VMs to have full access to GPUs, but now not only is that practical, work has also been done to allow a GPU to be broken up and those reserved partitions dedicated to individual VMs using NVIDIA’s Ampere architecture.
John continues by saying that VMware has recently focussed on the media space to allow better tuning of the hypervisor. When looking to deploy VM infrastructures, John recommends that end-users work closely with their partners to tune not only the hypervisor but also the OS, NIC firmware and the BIOS itself to deliver the performance needed.

“Timing is the number one challenge to the use of virtualisation in broadcast production at the moment”

Richard Hastie

Mellanox, now part of NVIDIA, has continued improving its ConnectX network cards, according to Richard Hastie, to deal with the high-bandwidth scenarios that uncompressed production throws up. These network cards now have onboard support for ST 2110, traffic shaping and PTP. Without hardware PTP, getting 500-nanosecond-accurate timing into a VM is difficult. Mellanox also use SR-IOV, a technology which bypasses the software switch in the hypervisor, reducing I/O overhead and bringing performance close to that of a non-virtualised machine. It does this by partitioning the PCIe bus so that one NIC can present itself multiple times to the computer; whilst the NIC is shared, the software has direct access to it. For more information on SR-IOV, have a look at this article and this summary from Microsoft.

Part II

Looking to the future, the panel sees virtualisation supporting the deployment of uncompressed ST 2110 and JPEG XS workflows, enabling a growing number of virtual productions. For virtualisation itself, they see a move down from OS-level virtualisation to containerised microservices. Not only can these be more efficient but, if managed by an orchestration layer, they allow processing to move to the ‘edge’. This should allow some logic to happen much closer to the end-user while the main computation remains centralised.

Watch part I and part II now!
Speakers

Tyler Kern
Moderator
John Naylor
Technology Strategist & Director of Product Security
Ross
Richard Hastie
Senior Sales Director, Business Development
NVIDIA
Jeremy Krinitt
Senior Developer Relations Manager
NVIDIA

Video: PTP in Virtualized Media Environment

How do we reconcile the tension between the continual move towards virtualisation, microservices and Docker-like deployments and the requirement of SMPTE ST 2110 for highly precise timing so it can synchronise the video, audio and other essence streams? Virtualisation adds fluidity into computing so that a single set of resources can be shared amongst many virtual computers, yet PTP, the Precision Time Protocol, a successor to NTP, requires close to nanosecond precision in its timestamps.

Alex Vainman from Mellanox explains how to make PTP work in these cases and brings along a case study to boot. Starting with a little overview and a glossary, Alex explains the parts of the virtual machine and the environment in which it sits: the physical layer, the hypervisor and the virtual machines themselves, each virtual machine being its own self-contained computer sitting on shared hardware. Hardware must often be shared between many different computers, but some devices aren’t intended to be shared. Take, for instance, a dongle that contains a licence for software. This should clearly be owned by only one computer, so there is a ‘direct’ mode which means it is seen by only one machine. Alex goes on to explain the different virtualisation I/O modes which allow devices to be shared: with a shareable device such as a printer, storage or a CPU, each computer may need to wait until it has access to the device. Waiting, of course, is not good for a precision time protocol.

In order to understand the impact that virtualisation might have, Alex details the accuracy and other requirements necessary for PTP to work well enough to support SMPTE ST 2110 workflows. Although PTP is an IEEE standard, it only defines how to establish accurate time. It doesn’t help us understand how to phase and bring together media signals without SMPTE ST 2059-1 and -2, which define how media signals are aligned to the common time and the SMPTE profile for PTP itself (more info here.) Just as important is understanding how PTP can actually determine the accurate time given that every message has an unknown propagation delay. By exchanging messages, Alex shows, it is quite practical to measure the delays involved and bring them into the time calculation.
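The two-way message exchange Alex describes can be sketched in a few lines of Python (the nanosecond timestamps below are invented for illustration). With t1 (Sync sent by the master), t2 (Sync received by the slave), t3 (Delay_Req sent by the slave) and t4 (Delay_Req received by the master), and assuming a symmetric path, the slave can solve for both its clock offset and the path delay:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard two-way PTP exchange (IEEE 1588).

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)

    Assumes the forward and reverse path delays are equal.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way propagation delay
    return offset, delay

# Invented example: a slave running 1500 ns ahead of the master over a
# 500 ns path. The Sync leaves the master at 1000 ns and, stamped by the
# fast-running slave clock, arrives at 1000 + 500 + 1500 = 3000 ns.
offset, delay = ptp_offset_and_delay(1000, 3000, 4000, 3000)
print(offset, delay)  # 1500.0 500.0
```

The symmetry assumption is exactly why VM-induced jitter hurts: any asymmetric queueing between the two directions appears directly as an error in the computed offset.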

We now have enough information to see why the increased jitter of VM-based systems causes a problem: there are non-deterministic factors such as contention and traffic load to consider, as well as software overhead. Alex takes us through the options for getting PTP well synchronised in a variety of VM architectures. The first synchronises the host clock to PTP; a dedicated PTP library within each VM then speaks to the host clock and synchronises the VM OS clock, providing very accurate timing. Another option, where that route isn’t available, is to use NICs with dedicated PTP support which can serve the VMs directly, each VM OS maintaining its own PTP clock. The major downside here is that each OS makes its own PTP calls, creating more load on the PTP system. In the previous architecture the host clock was the only clock synchronising to the system PTP, so there was only ever one set of PTP messages no matter how many VMs were being supported.
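The first of those architectures, a service inside the VM repeatedly measuring its offset to the PTP-disciplined host clock and nudging the guest clock, is essentially a small servo loop. A toy Python simulation (the drift and gain values are invented, not taken from the talk) shows how the guest converges instead of drifting away:

```python
def discipline(initial_offset_ns, drift_ns_per_step, steps, gain=0.5):
    """Toy clock servo: at every sync interval the guest measures its
    offset to the host clock and removes a fraction (gain) of it, while
    its free-running oscillator keeps drifting. Returns the offset
    history in nanoseconds."""
    history = []
    offset = initial_offset_ns
    for _ in range(steps):
        offset += drift_ns_per_step  # oscillator drift this interval
        offset -= gain * offset      # proportional correction to host
        history.append(offset)
    return history

# Starting 10 µs out with 200 ns of drift per interval, the guest clock
# settles at a small residual offset rather than drifting away.
trace = discipline(10_000, 200, 50)
```

Real implementations also steer frequency, not just phase, but even this sketch shows why a single well-disciplined host clock can serve many guests with only one stream of PTP messages on the wire.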

To finish off, Alex explains how Windows VMs can be supported – for now through third-party software – and summarises the ways in which we can, in fact, create PTP ecosystems that incorporate virtual machines.

Watch now!
Download the slides
Speakers

Alex Vainman
Senior Staff Engineer,
Mellanox Technologies

Video: Investigating Media Over IP Multicast Hurdles in Containerized Platforms

As video infrastructures have converged with enterprise IT, they have started incorporating technologies and methods typical of data centres. First came virtualisation, allowing COTS (Commercial Off-The-Shelf) components to be used. Then came the move towards cloud computing, taking advantage of economies of scale.

However, these innovations did little to address the dependence on monolithic projects that impeded change and innovation. Early strategies for video over IP were based on virtualised hardware and IP gateway cards. As the digital revolution took place with the emergence of OTT players, microservices based on containers were developed, with the aim of shortening the cycle of software updates and enhancements.

Containers insulate application software from the underlying operating system, removing the dependence on specific hardware, and can be enhanced without changing the underlying operational fabric. This provides the foundation for more loosely coupled and distributed microservices, where applications are broken into smaller, independent pieces that can be deployed and managed dynamically.

Modern containerized server software methods such as Docker are very popular in OTT and cloud solutions, but not in SMPTE ST 2110 systems. In the video above, Greg Shay explains why.

Docker can package an application and its dependencies in a virtual container that can run on any Linux server. It uses the resource isolation features of the Linux kernel and a union-capable file system to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. Compared with VMs, Docker can get more applications running on the same hardware; it also makes it easy for developers to quickly create ready-to-run containerized applications and makes managing and deploying applications much easier.

However, there is currently a huge issue with using Docker for ST 2110 systems: Docker containers do not work with multicast traffic. The root of the problem is the way the Linux kernel handles multicast routing. It is possible to wrap a VM around each Docker container just to achieve independent multicast network routing by emulating the full network interface, but this defeats the point of capturing and delivering the behaviour of the containerized product in a self-contained software deliverable.
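To see what trips the kernel up, it helps to look at what an ST 2110 receiver actually asks of it. This minimal Python sketch (the group address and port are invented for illustration) performs a standard multicast group join; the IP_ADD_MEMBERSHIP step is the per-network-namespace machinery that behaves differently inside a default Docker network:

```python
import socket
import struct

# Hypothetical ST 2110 media stream group and port, for illustration only.
MCAST_GROUP = "239.1.1.1"
MCAST_PORT = 5004

def make_membership_request(group, interface="0.0.0.0"):
    """Pack the ip_mreq struct the kernel expects: the multicast group
    to join and the local interface address on which to join it."""
    return struct.pack(
        "4s4s", socket.inet_aton(group), socket.inet_aton(interface)
    )

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))
try:
    # IP_ADD_MEMBERSHIP asks the kernel to issue an IGMP join and route
    # the group's traffic to this socket. It is this per-namespace step
    # that does not behave as expected inside a default Docker bridge
    # network, where the IGMP join never reaches the physical fabric.
    sock.setsockopt(
        socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
        make_membership_request(MCAST_GROUP),
    )
    joined = True
except OSError:
    # Hosts with no multicast-capable route refuse the join outright.
    joined = False
sock.close()
```

On a bare-metal host this join propagates to the network; inside a bridge-networked container, the same call gives each receiver no usable path to the ST 2110 flows.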

There is a quick-and-dirty partial shortcut which enables a container to connect to all the networking resources of the Docker host machine, but it does not isolate containers onto their own IP addresses or let them use their own ports. You don’t really get the nice structure of ‘multiple products in multiple containers’, which defeats the purpose of containerized software.

You can see the slides here.

Watch now!

Speaker

Greg Shay
CTO
The Telos Alliance

Webinar: Unlocking global success for channel operators and broadcasters

Date: Tuesday 4th December, 2018. 16:00 GMT

Join IBC365 and Tata Communications to explore how organisations are using cloud playout to deploy a unified solution for both playout and distribution on a global basis, and why cloud is fast becoming the preferred option for many linear channel operators and broadcasters.

Cloud playout offers the potential for rapid channel launches, more efficient and resilient operations and a clear commercial model enabling linear channels to be more successful and profitable.

In this Webinar:

  • Explore cloud success stories, like Woohoo TV, Latin America’s first dedicated channel for sports, music and youth culture. It expanded rapidly into new markets in the U.S. without needing capital investment by deploying Tata Communications’ Cloud Master Control and Video Connect solutions.
  • Learn how Cloud Master Control delivers a future-ready virtualised IP environment powered by industry-leading technology vendors, enabling channel operators to scale rapidly from a single channel to a large-scale multi-channel operation, with complete flexibility and reliability.
  • Understand the business, technology and operational benefits and the crucial questions to ask before moving your channel playout operation to a cloud playout provider.

Register now!

Speakers

Jeremy Dujardin
CTO Media Services,
Tata Communications
Dhaval Ponda
Global Sales Head Media Services,
Tata Communications