Video: The Fundamentals of Virtualisation

Virtualisation continues to be a driving factor in the modernisation of broadcast workflows, both from the technical perspective of freeing functionality from bespoke hardware and from the commercial perspective of maximising ROI by increasing infrastructure utilisation. Virtualisation itself is not new, but using it in broadcast is still new to many, and the technology continues to advance to meet modern bitrate and computation requirements.

In these two videos, Tyler Kern speaks to Mellanox’s Richard Hastie, NVIDIA’s Jeremy Krinitt and Ross Video’s John Naylor, who explain how virtualisation fits with SMPTE ST 2110 and real-time video workflows.

Richard Hastie explains that agility is the name of the game: by separating software from hardware, your workflow can, in principle, be deployed anywhere and is free to move within the same infrastructure. This opens up the move to the cloud, or to centralised hosting with people working remotely. One benefit is the ability to keep a pool of servers and continually repurpose them throughout the day. Rather than discrete boxes which each do only a few tasks and often sit unused, you have a quota of compute which is used far more efficiently, so the return on investment is higher, as is the overall value to the company. This principle is at the heart of Discovery’s transition of Eurosport to ST 2110 and JPEG XS: all equipment has been centralised, allowing production facilities in many countries around Europe to produce remotely from one heavily utilised set of equipment.

Part I

John Naylor explains the recent advancements virtualisation has brought to the broadcast market. vMotion from VMware allows live migration of virtual machines without loss of performance, which is really important when you’re running real-time graphics. GPUs are also vital for graphics and video tasks. In the past it has been difficult for VMs to have full access to GPUs, but not only is that now practical, work has also been done to allow a GPU to be partitioned, with each reserved partition dedicated to a VM, using the NVIDIA Ampere architecture.
John continues by saying that VMware has recently focused on the media space to allow better tuning of the hypervisor. When looking to deploy VM infrastructures, John recommends that end users work closely with their partners to tune not only the hypervisor but also the OS, the NIC firmware and the BIOS itself to deliver the performance needed.

“Timing is the number one challenge to the use of virtualisation in broadcast production at the moment”

Richard Hastie

Mellanox, now part of NVIDIA, has continued improving its ConnectX network cards, according to Richard Hastie, to deal with the high-bandwidth scenarios that uncompressed production throws up. These network cards now have onboard support for ST 2110, traffic shaping and PTP. Without hardware PTP, getting 500-nanosecond-accurate timing into a VM is difficult. Mellanox also use SR-IOV, a technology which bypasses the software switch in the hypervisor, reducing I/O overhead and bringing performance close to non-virtualised levels. It does this by partitioning the PCI bus so that one NIC can present itself multiple times to the computer; whilst the NIC is shared, the software has direct access to it. For more information on SR-IOV, have a look at this article and this summary from Microsoft.
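To illustrate the mechanism, on a Linux host the hypervisor typically requests SR-IOV virtual functions (VFs) from the physical NIC through sysfs; each VF then appears as its own PCIe function that can be passed through to a VM. This is a minimal sketch under that assumption; the interface name and the `sysfs_root` parameter are illustrative, not from the video:

```python
from pathlib import Path

def enable_sriov_vfs(nic: str, num_vfs: int, sysfs_root: str = "/sys/class/net") -> Path:
    """Ask the NIC's physical-function driver to create `num_vfs` virtual
    functions via the Linux sysfs interface. Each VF can then be handed to a
    VM, giving the guest near-native access to the shared NIC."""
    vf_file = Path(sysfs_root) / nic / "device" / "sriov_numvfs"
    if vf_file.read_text().strip() != "0":
        vf_file.write_text("0")  # kernel requires a reset to 0 before changing the count
    vf_file.write_text(str(num_vfs))
    return vf_file

# Usage on a real host (requires root and an SR-IOV-capable NIC):
# enable_sriov_vfs("ens1f0", 4)
```

The VM then receives one VF via PCI passthrough, which is why the traffic no longer crosses the hypervisor’s software switch.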

Part II

Looking to the future, the panel sees virtualisation supporting the deployment of uncompressed ST 2110 and JPEG XS workflows, enabling a growing number of virtual productions. For virtualisation itself, they see a move down from OS-level virtualisation to containerised microservices. Not only can these be more efficient but, if managed by an orchestration layer, they allow processing to move to the ‘edge’. This should allow some logic to happen much closer to the end user while the main computation remains centralised.

Watch part I and part II now!

Tyler Kern
John Naylor
Technology Strategist & Director of Product Security
Richard Hastie
Senior Sales Director, Business Development
Jeremy Krinitt
Senior Developer Relations Manager

Video: ST-2110 – Measuring and Testing the Data, Control and Timing Planes

Today’s video is an informal chat touching on the newest work around the SMPTE ST 2110 standards and related specifications. The industry’s leading projects now track best practice in IT as much as the latest technology in IP, because simply getting video working over the network isn’t enough: broadcasters demand solutions which are secure from the ground up, easy to deploy and have nuanced options for deployment.

Andy Rayner from Nevion talks to Prin Boon from Phabrix to understand the latest trends. Between them, Andy and Prin account for a lot of activity within standards and industry bodies such as SMPTE, VSF and JT-NM, to name but a few, so who better to hear from regarding the latest thinking and ongoing work.

Andy starts by outlining the context of SMPTE’s ST 2110 suite of standards, which covers not only the standards within 2110 but also the NMOS specifications from AMWA as well as the timing standards (SMPTE ST 2059 and IEEE 1588). Prin and Andy agree that the initial benefit of moving to IT networking was the massive network switches, which now deliver much higher switching density than SDI ever could or would; now, the work of 2110 projects also tracks IT, rather than simply IP. By adopting the best practices of the IT industry as a whole, the broadcast industry gets a much better product. Andy makes the point that broadcast users have pushed fabric manufacturers to implement PTP and other network technologies in a much more mature and scalable way than was previously imagined.

Link to video

The focus of conversation now moves to the data, control and timing planes. The data plane contains the media essences and all of the ST 2110 standards. Control covers the AMWA NMOS specs, such as the IS-0X specifications, as well as the security-focused BCP-003 and JT-NM TR-1001. Timing covers PTP and its associated guidelines.

Prin explains that in-service test and measurement is there to give a feeling for the health of a system: how close to the edge is it running? The aim is early alerting of engineering specialists, who can then do deep fault-finding with hand-held 2110 analysers. Phabrix, owned by Leader, are one of a number of companies creating monitoring and measurement tools. In doing this, Willem Vermost observed that little of the vendor data was aligned, so measurements couldn’t be compared. This has directly led to work between many vendors and broadcasters to standardise the reported measurement data, both in how it’s measured and how it’s named, under 2110-25. This will cover latency, video timing, margin and RTP offset.

More new work discussed by the duo includes the recommended practice RP 2059-15, which relates to the ST 2059 standards that apply PTP to media streams. As PTP, also known as IEEE 1588, was updated to version 2.1 in its 2019 revision, this RP creates a unified framework to expose PTP data in a structured manner. It relies on RFC 8575 which, in turn, uses the YANG data modelling language.

We also hear about work to ensure that NMOS can fully deal with SMPTE 2022-7 flows in all the cases where a receiver is expecting a single or dual feed. IS-08 corner cases have been addressed and an all-encompassing model to develop against has been created as a reference.

Pleasingly, as this video was released in December, we are treated to a live performance of a festive song on piano and trombone. Whilst this doesn’t progress the 2110 narrative, it is welcomed as a great excuse to have a mince pie.

Watch now!

Andy Rayner
Chief Technologist, Nevion
Prinyar Boon
Product Manager, Phabrix

Video: Proper Network Designs and Considerations for SMPTE ST-2110

Networks for SMPTE ST 2110 systems can be fairly simple, but that simplicity hides a whole heap of careful considerations. By asking the right questions at the outset, a flexible, scalable network can be built with relative ease.

“No two networks are the same,” cautions Robert Welch from Arista as he introduces the questions he asks at the start of designing a network to carry professional media such as uncompressed audio and video. His thinking focusses on the network interfaces (NICs) of the devices: how many are there? Which receive PTP? Which are for management, and how should out-of-band/iLO access be managed? The answers feed into the workflows that are needed, influencing how the rest of the network is created. The philosophy is to work backwards from the end nodes that receive the network traffic.

Robert then shows how these answers influence the different networks at play. For resilience, it’s common to have two separate networks sending the same media to each end node; each node then uses ST 2022-7 to take the packets it needs from both networks. This isn’t always possible, as some devices have only one interface or simply lack -7 support. Some equipment has two management interfaces, and that too can feed into the network design.
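Conceptually, a -7 receiver keeps whichever copy of each RTP packet arrives first, identified by sequence number, so a loss on one network is hidden by the other. The toy sketch below shows that de-duplication idea only; real implementations work on time-bounded buffers and handle 16-bit sequence-number wrap:

```python
def merge_2022_7(stream_a, stream_b):
    """Merge two redundant RTP packet streams into one de-duplicated stream.

    Each stream is an iterable of (sequence_number, payload) tuples, as might
    arrive on the blue and amber networks. The first copy of each sequence
    number wins; the duplicate from the other network is dropped."""
    seen = set()
    for seq, payload in list(stream_a) + list(stream_b):
        if seq not in seen:
            seen.add(seq)
            yield seq, payload

# Packet 2 was lost on network A but still arrives via network B:
a = [(1, "p1"), (3, "p3")]
b = [(1, "p1"), (2, "p2"), (3, "p3")]
merged = sorted(merge_2022_7(a, b))
# merged == [(1, "p1"), (2, "p2"), (3, "p3")]
```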

PTP is an essential service for professional media networks, so Robert discusses some aspects of implementation. When you have two networks delivering the same media simultaneously, they will both need PTP. For resilience, a network should operate with at least two Grand Masters – and usually, two is the best number. Ideally, your two media networks will have no connection between them except for PTP whereby the amber network can benefit from the PTP from the blue network’s grandmaster. Robert explains how to make this link a pure PTP-only link, stopping it from leaking other information between networks.

Multicast is a vital technology for 2110 media production, so Robert looks at its incarnation at both layer 2 and layer 3. At layer 2, multicast is handled using multicast MAC addresses, which works well with snooping and a querier except when scaling up to a large network or using a number of switches. Robert explains that this is because all multicast traffic needs to be sent through the rendezvous point. If you would like more detail, check out Arista’s Gerard Phillips’ talk on network architecture.

Looking at JT-NM TR-1001, the guidelines outlining best practice for deploying 2110 and associated technologies, Robert explains that multicast routing at layer 3 greatly improves stability and enables resilience and scalability. He also takes a close look at the difference between the ‘all sources’ multicast joins supported by IGMP version 2 and the ability to filter for only specific sources with IGMP version 3.
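The difference shows up at the socket API level: an IGMPv2-style join accepts a group from any source, while an IGMPv3 source-specific join names the sender too. A hedged, Linux-only sketch follows; the `struct ip_mreq_source` field order and the fallback option value 39 for `IP_ADD_SOURCE_MEMBERSHIP` are Linux specifics, and the addresses are purely illustrative:

```python
import socket
import struct

def join_ssm(sock, group: str, source: str, iface: str = "0.0.0.0") -> bytes:
    """Issue an IGMPv3 source-specific join: receive `group` only from `source`.

    Linux lays out struct ip_mreq_source as: multicast group, local interface,
    source address. Option value 39 is IP_ADD_SOURCE_MEMBERSHIP on Linux."""
    ip_add_source_membership = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)
    mreq = struct.pack(
        "4s4s4s",
        socket.inet_aton(group),
        socket.inet_aton(iface),
        socket.inet_aton(source),
    )
    sock.setsockopt(socket.IPPROTO_IP, ip_add_source_membership, mreq)
    return mreq  # the 12-byte structure handed to the kernel

# Usage on a real receiver (group/source addresses are examples only):
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.bind(("", 5004))
# join_ssm(s, "232.1.1.1", "10.0.0.5")
```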

Finishing off, Robert talks about the difficulties in scaling PTP, since all the replies/requests go into the same multicast group, which means that as the network scales, so does the traffic in that group. This can be a problem for lower-end gear, which must process and reject a lot of traffic.
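To see why this bites, consider some illustrative numbers (not from the talk). Every end node’s delay requests land in the shared multicast group, so each node also receives everyone else’s requests and must discard them:

```python
def ptp_group_load(nodes: int, delay_req_per_sec: float) -> float:
    """Messages per second on the shared PTP delay-request multicast group.

    Every node both contributes to and receives this total, even though only
    its own requests (and the responses to them) are relevant to it."""
    return nodes * delay_req_per_sec

small = ptp_group_load(50, 8)   # 400 msg/s: easily filtered
large = ptp_group_load(500, 8)  # 4,000 msg/s: every device must now reject
                                # all but its own 8 requests per second
```

The load each device must filter grows linearly with node count, which is why low-end endpoints struggle first.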

Watch now!

Robert Welch
Technical Solutions Lead
Arista Networks

Video: Remote Production in Real Life

Remote production is in heightened demand at the moment, but the trend has been ongoing for several years. With each small advance in technology, it becomes practical for another event to go remote. Remote production solutions have to be extremely flexible, as a workflow that suits one company won’t work for the next. This is why the move to remote has been gradual over the last decade.

In this video, Dirk Sykora from Lawo gives three examples of remote production projects, stretching from 2016 to the present day, in this RAVENNA webinar with evangelist Andreas Hildebrand.

The first case study is remote production for Belgian second-division football. Working with Belgian telco Proximus along with Videohouse and NEP, Lawo set up remote production for stadia kitted out with 6 cameras and 2 commentary positions. With only 1-gigabit connectivity to the stadiums, they opted for JPEG 2000 encoding at 100 Mbps, both for the camera feeds out of the stadia and for the two return feeds back in for the commentators.
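The link budget works out comfortably: eight feeds at 100 Mbps total 800 Mbps, leaving headroom on the 1 Gbps circuit. A quick sanity check (the figures are from the case study; the utilisation calculation is just arithmetic):

```python
def link_budget_mbps(feeds_out: int, feeds_return: int, rate_mbps: float) -> float:
    """Total media bandwidth for one stadium's contribution circuit."""
    return (feeds_out + feeds_return) * rate_mbps

total = link_budget_mbps(6, 2, 100)  # 6 cameras out + 2 returns in, J2K at 100 Mbps
utilisation = total / 1000           # share of the 1 Gbps link: 800/1000 = 80%
```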

The project called for two simultaneous matches feeding into an existing gallery/PCR. Deployment was swift, with flightcases deployed remotely and a double set of equipment installed in the fixed PCR. Overall latency was around 2.5 frames one-way, so the camera viewfinders were about 5 frames adrift once transport and en/decoding delays were accounted for.

The main challenges were with the MPLS network into the stadia, which would spontaneously reroute and become loaded with unrelated traffic at around 21:00. Although there was packet loss, none of it was noticeable on the 100 Mbps J2K feeds. Latency was a problem for the commentators, so some local mixing was needed, and PTP wasn’t possible over the network. Timing was therefore derived from the return video feed into the stadium, which had come from the PTP-locked gallery; this incoming timing was used to lock a locally generated PTP signal.

The next case study is the inter-country links for the European Council, connecting its Luxembourg and Brussels buildings. The project was to move all production to a single technical control room in Brussels, and relied on two 10GbE links between the buildings going through an Arista 7280, carrying 18 videos in one direction and two in return. Although initially reluctant to compress, the Council realised after testing that VC-2, which offers around 4:1 compression, would work well and introduce no noticeable latency (approximately 20 ms end to end). Thanks to VC-2, the 10GbE links saw low usage from the project, and the Council was able to migrate other business activities onto them. PTP was generated in Brussels, and Luxembourg regenerated its PTP from the Brussels signal for local distribution. Overall latency was 1 frame.
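The arithmetic behind the compression decision is easy to verify. Assuming roughly 1.5 Gbps per uncompressed HD feed (an illustrative ST 2110-20 figure, not stated in the talk), 18 feeds would need about 27 Gbps and overwhelm a 10GbE link, whereas 4:1 VC-2 brings the total under 7 Gbps:

```python
def fits_link(feeds: int, gbps_per_feed: float, compression: float, link_gbps: float) -> bool:
    """Does the (optionally compressed) aggregate bitrate fit on the link?"""
    return feeds * gbps_per_feed / compression <= link_gbps

uncompressed_total = 18 * 1.5       # 27.0 Gbps: far too much for 10 GbE
fits_raw = fits_link(18, 1.5, 1, 10)   # False
fits_vc2 = fits_link(18, 1.5, 4, 10)   # True: 6.75 Gbps, with room to spare
```

That spare capacity is exactly what let the Council move other business traffic onto the same links.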

Lastly, Dirk outlines the work done for the Belgium Daily News, which had been bought out by DPG Media. This buy-out prompted a move from Brussels to Antwerp, where a new building opened; however, all of the technical equipment was still in Brussels. This led to the decision to remote-control everything in Brussels from Antwerp. The production staff moved to Antwerp, causing some issues with the disconnect between production and technical, but also due to personnel relocating and getting used to new facilities.

The two locations were connected with a redundant 400GbE infrastructure using IP-to-SDI gateways. Latency was 1 frame and, again, PTP at one site was derived from the incoming PTP from the other.

The video finishes with a detailed Q&A.

Watch now!

Dirk Sykora
Technical Sales Manager, Lawo
Andreas Hildebrand
RAVENNA Evangelist,
ALC NetworX