Video: IP for Broadcast, Virtual Immersive Studios, Esports

A wide range of topics today covering live virtual production, lenses, the reasons to move to IP, esports careers and more. This is a recording of the SMPTE Toronto Section's February meeting with guest speakers from Arista, ARRI, TFO and Ross Video.

The first talk of the evening was from Ryan Morris of Arista on the importance of the move to IP. Those with an IP infrastructure have found it easier to keep their systems running during lockdown, when access to the equipment itself is limited. While there will always be a need to move a 100GbE fibre at some point or other, a running ST 2110 system easily allows new connections without needing to plug in SDI cables. This comes down to IP's ability to carry multiple signals, in both directions, down a single cable. A 100-gigabit fibre can carry 65 1080i59.94 signals, for instance, which is in stark contrast to SDI cabling. Similarly, an IP router can route thousands of flows in a few U of rack space, whereas a 1152×1152 SDI router takes up a whole rack.
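
As a sanity check on those figures, here's a rough back-of-the-envelope calculation in Python. It assumes a 2110-style payload of active video only at 4:2:2 10-bit and ignores RTP/UDP/IP header overhead, which is why it lands a little above the 65 flows quoted.

```python
# Approximate payload bitrate of one uncompressed 1080i59.94 flow
# (active video only, 4:2:2 10-bit, packet headers ignored).
width, height = 1920, 1080
bits_per_pixel = 20          # 10 bits luma + 10 bits chroma per pixel on average
frame_rate = 30000 / 1001    # 1080i59.94 is ~29.97 full frames per second
flow_gbps = width * height * bits_per_pixel * frame_rate / 1e9

print(f"one flow: {flow_gbps:.2f} Gbps")                 # ~1.24 Gbps
print(f"flows per 100GbE link: {int(100 / flow_gbps)}")  # ~80; header overhead and
                                                         # headroom bring the practical
                                                         # figure nearer the 65 quoted
```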

Ryan moves on to an overview of the protocols that make broadcast on IP networks possible, starting with unicast, multicast and broadcast. The latter he likens to a baby screaming; multicast is like talking to a group of friends. Multicast is the method used for audio, video and other essences sent over IP, whether as part of SMPTE ST 2110 or ST 2022-6. And whilst it works well, the protocol managing it, IGMP, isn't really as smart as we need it to be. IGMP knows nothing about the bandwidth of the flow being sent and has no knowledge of the capacity or loading of any link. As such, links can become saturated, and even routine maintenance can overload the backup path, resulting in an outage. Ryan explains that IGMP is analogous to knowing which address you need to drive to and simply setting off in the right direction, reacting to any traffic jams and roadblocks you find. SDN, in contrast, is like having GPS: everything is taken into account from the beginning and you know the whole path before you set off. Both will get you there, but SDN will be more efficient, predictable and accountable, which is why Ryan concludes that SDN resolves the problem.
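
To make the IGMP point concrete, here's a minimal Python sketch of a receiver joining a multicast group. The group address and port below are hypothetical examples, not values from the talk. Notice that the join carries no bandwidth information whatsoever: the network learns only that this host wants this group.

```python
import socket
import struct

GROUP = "239.10.20.30"  # hypothetical multicast group for a video flow
PORT = 5004             # common RTP port, used here as an example

# Open a UDP socket and bind to the flow's destination port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group triggers an IGMP membership report. Nothing in this
# request describes the flow's bitrate -- exactly the blind spot Ryan
# describes, and what SDN-based control adds on top.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, addr = sock.recvfrom(2048)  # first datagram of the flow
print(f"received {len(packet)} bytes from {addr}")
```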

To understand more about IP, watch these talks:
“Is IP really better than SDI?” by Ed Calverly, detailing how video over IP works, and
“Network design for live production” by Ryan’s colleague Gerard Philips

Next in the line-up is François Gauthier, who takes us through the history of cinema-related technologies, showing how, at each stage, standards helped the increasingly global industry work together. SMPTE's earliest well-known standardisation efforts, around the time of World War I, aided the interchange of films between projectors and cameras. Similarly, ARRI started in 1917 and has both benefited from and worked to create SMPTE standards in cameras, lighting, workflows, colour grading and now mixed reality. François eloquently takes us on this journey, showing at each stage the motivation for standardisation and how ARRI has developed in step.

A different type of innovation is on show in the next talk, given by Cliff Lavallée, who updates us on the latest improvements to his immersive studio. It was featured in a previous SMPTE Toronto section talk, when he explained the benefits of having a gaming-based 3D engine in his green-screen studio with camera tracking. In fact, it was the first studio of its kind when it came online in 2016. Since then, game engines have made great inroads into studio production.

Having a completely virtual studio, with camera tracking and 3D objects live-rendered in response to the scene, has a number of benefits, Cliff explains. He can track the talent and make objects appear in front of or behind them as appropriate, responding to their movements. Real-time rendering and the blank green canvas give design freedom, as well as the ability to see what scenes will look like during the shoot rather than after. It's no surprise that there are also cost savings. In one of a number of videos he shows, we see a children's programme which takes place in a small village. By using the green screen, the live-action puppets can quickly change sets, moving from place to place and integrating real props with virtual backgrounds which move with the camera.

The last talk is from Cameron Reed who’s a former esports director and now works for Ross Video. Cameron gives a brief overview of how esports is split up into developers who make the game, tournament organisers, teams, live production companies and distribution platforms. The Broadcast Knowledge has followed esports for a while. Check out the back catalogue for more detailed videos on the subject.

It’s no surprise that the developers own the game. What’s interesting is that a computer game is much more complex and directly malleable than traditional sports. Whilst FIFA might control football/soccer worldwide, there is little it can do to change the game itself. Formula 1 is perhaps closest to the esports model, where rules about engines, tyres, refuelling strategies and the like come and go. With esports, aspects of the game can change week to week in response to fans. Cameron frames esports as ‘free’ advertising for the developers. Although they won’t always make money, even if they only make 90% of their money back directly from the tournaments and events that year, it means they’ve had a 90% discount on their advertising budget. All the while, they’ve managed to inject life into their game and extend the interest it garners. Cameron briefly acknowledges that for distribution “Twitch is king”, but underlines that, as of the date of the meeting, the platform doesn’t support UHD, which doesn’t sit well with the gaming industry’s efforts to increase resolution and detail in games.

Cameron’s presentation finishes with a look at career progression in esports, along both non/semi-technical and technical paths. The market holds a lot of interesting opportunities.

The session ends with a Q&A for all the panelists.

Watch now!
Speakers

Ryan Morris
Systems Engineer,
Arista Networks
François Gauthier
TSR,
ARRI
Cliff Lavallée
Director of LUV Studio Services,
Groupe Média TFO
Cameron Reed
Esports Business Development Manager,
Ross Video

Video: FOX – Uncompressed live sports in the cloud

Is using uncompressed video in the cloud with just 6 frames of latency to get there and back ready for production? WebRTC manages sub-second streaming in one direction and can even deliver AV1 in real-time. The key to getting down to a 100ms round trip is to move down to millisecond encoding and to use uncompressed video in the cloud. This video shows how it can be done.
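
The frame arithmetic behind those figures is simple enough to check with a few lines of Python:

```python
# What "6 frames" of round-trip latency means in milliseconds at 60 fps.
frame_ms = 1000 / 60             # ~16.7 ms per frame
round_trip_ms = 6 * frame_ms     # the figure quoted in the talk

print(f"one frame: {frame_ms:.1f} ms")                # 16.7 ms
print(f"6-frame round trip: {round_trip_ms:.0f} ms")  # ~100 ms
```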

Fox has a clear direction to move into the cloud and last year joined AWS to explain how they've put their distribution into the cloud, remuxing feeds for ATSC transmitters, satellite uplinks and cable headends, and encoding for internet delivery. In this video, Fox's Joel Williams and AWS's Evan Statton explain their work together making this a reality. Joel explains that latency is not a hot topic for distribution, as there are many delays downstream; the focus has been on getting the contribution feeds into playout and MCR monitoring quickly. After all, when people are counting down to an ad break, it needs to roll exactly on zero.

Evan explains the approach AWS has taken to solving this latency problem, starting with the idea of using SMPTE's ST 2110 in the cloud. ST 2110 typically has video flows of at least 1 Gbps and, when implemented on premise, is usually built on a dedicated network with very strict timing. Cloud datacentres aren't like that, and Evan demonstrates the point by showing how, across 8 video streams, there are video drops of several seconds, which is clearly not acceptable. Amazon, however, has a technology called Scalable Reliable Datagram (SRD) aimed at moving high-bitrate data through its cloud. Using a very small retransmission buffer, it's able to use multiple paths across the network to deliver uncompressed video in real time. Keeping the retransmission buffer very small allows just enough healing to redeliver missing packets within the 16.7ms it takes to deliver a frame of 60fps video.

On top of SRD, AWS has introduced CDI, the Cloud Digital Interface, which describes uncompressed video flows in a way already familiar to software developers. Its 'Audio Video Metadata' layer handles flows in the same way as 2110, for instance keeping essences separate. Evan says this has helped vendors react favourably to the new technology: instead of using plain UDP, they can use SRD with CDI, giving them not only familiar video data structures but, since SRD is implemented in the Nitro network card, packet processing that is hidden from the application itself.

The final piece of the puzzle is keeping the journey into and out of the cloud low-latency. This is done using JPEG XS, which has an encoding time of a few milliseconds. Rather than using RIST, for instance, to protect the feed on the way into the cloud, Fox is testing ST 2022-7. 2022-7 takes in two identical streams, typically on two network interfaces, so the receiver ends up with two copies of each packet; where one gets lost, the other is still available. This gives path redundancy which a single stream can never offer. Overall, the test with Fox's Arizona-based Technology Center is shown in the video to have only 6 frames of latency for the round trip. Assuming they used a California-based AWS data centre, the ping time may have been as low as two frames, leaving four frames for 2022-7 buffers, XS encoding and uncompressed processing in the cloud.
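
For a feel of how 2022-7 heals losses, here's a toy Python sketch of the receive-side merge. The packets are stand-in (sequence number, payload) tuples rather than real RTP, and a real receiver would also handle the 16-bit sequence-number wrap-around.

```python
def merge_2022_7(path_a, path_b):
    """Keep the first copy of each sequence number, discard the duplicate."""
    seen = set()
    merged = []
    # Interleave the two paths roughly as the packets would arrive.
    for seq, payload in sorted(path_a + path_b, key=lambda pkt: pkt[0]):
        if seq not in seen:
            seen.add(seq)
            merged.append((seq, payload))
    return merged

# Path A loses packet 3, path B loses packet 5 -- the merged output has no gaps.
path_a = [(1, "a"), (2, "b"), (4, "d"), (5, "e")]
path_b = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
print(merge_2022_7(path_a, path_b))  # [(1,'a'), (2,'b'), (3,'c'), (4,'d'), (5,'e')]
```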

Watch now!
Speakers

Joel Williams
VP of Architecture & Engineering,
Fox Corporation
Evan Statton
Principal Architect, Media & Entertainment,
AWS

Video: Native Processing of Transport Streams to/from Uncompressed IP

As much as the move to IP hasn’t been trivial for end-users, it’s been all the harder for vendors who have had to learn all the same lessons as end-users, but also press the technology into action. Whilst broadcast is building on the expertise, success and scale of the IT industry, we are also pushing said technology to its limits and, in some cases, in ways not yet seen by the IT industry at large.

Kieran Kunhya, from encoder and decoder vendor Open Broadcast Systems, explains the problems faced in making this work for software-based systems. As we heard earlier this week on The Broadcast Knowledge, the benefit of moving functions away from bespoke hardware is the ability to move your workflows more easily into data centres or even the cloud. Indeed, flexibility is one important factor for OBS, which is why they are a software-first company. Broadcast workflows have traditionally been static and, still today, tend to only do one thing, so a move to software removes the dependence on specific, custom chips.

The move to IP has many benefits, as Kieran outlines next. In today's pandemic, a big benefit is simply not needing a person to go and move an SDI cable. But freeing ourselves from SDI, we hear, is more than just that. Kieran acknowledges that SDI achieves ultra-low delay in the realm of microseconds to move gigabits of video, but this comes at a high price. Each cable only carries one signal and only in one direction, but more critically, routers top out at 1152×1152 in size. Whilst this does seem like a large number, larger operators are finding it is simply not enough as they continue both to expand their offerings and to merge (compare Comcast's NBC and Sky businesses).

By looking towards higher-bandwidth, more scalable technologies for video, the industry has solved many of these problems. The bandwidth routing capability of IT switches can be in the terabits, with each port being 100 or 400Gbps. Each cable works bidirectionally and typically carries multiple signals. This not only leaves the infrastructure future-proof for moves to, say, 8K video but enables much denser routing of signals, well above 1152×1152. The result of Kieran's work is 64-channel encoding/decoding in 2U, which can replace up to a full rack of traditional equipment.

This success hasn't come without a lot of work. The timings are very tight, and getting standard servers to deliver 100% of packets onto the network within 20 microseconds takes hard-won knowledge. Kieran explains that two of the keys to success are kernel bypass, where he writes directly into the memory space the NIC uses rather than sending the data via the Linux kernel, and using SIMD CPU instructions directly. The latter can speed up code by up to twenty times compared to plain C and only needs to be done once per CPU generation.
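
Kieran's SIMD work is hand-written intrinsics inside the codec, which can't be reproduced here, but numpy offers a loose Python analogy: its vectorised operations run compiled loops (using SIMD where available), illustrating the same many-samples-per-instruction principle against a per-sample interpreter loop.

```python
import time
import numpy as np

# Two million 10-bit samples, roughly one 1080p luma plane.
samples = np.random.randint(0, 1024, size=1920 * 1080, dtype=np.uint16)

t0 = time.perf_counter()
shifted_loop = [s << 6 for s in samples]   # per-sample Python loop
t1 = time.perf_counter()
shifted_vec = samples << 6                 # one vectorised operation
t2 = time.perf_counter()

# Expect several orders of magnitude between the two on most machines.
print(f"loop: {t1 - t0:.3f}s  vectorised: {t2 - t1:.5f}s")
```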

Once these techniques are harnessed, OBS still has to deal with a variety of unusual pixel formats, the difficulty of reference counting with many small buffers, and uncompressed audio with its low bitrate and short 125-microsecond packets. Coupled with other equipment which doesn't verify checksums, doesn't use timestamps and doesn't necessarily handle 16-channel flows, making this work is tough, but Kieran is very clear that the benefits of uncompressed IP video are worth it.

Watch now!
Speakers

Kieran Kunhya
Founder & CEO
Open Broadcast Systems

Video: The Fundamentals of Virtualisation

Virtualisation is continuing to be a driving factor in the modernisation of broadcast workflows both from the technical perspective of freeing functionality from bespoke hardware and from the commercial perspective of maximising ROI by increasing utilisation of infrastructure. Virtualisation itself is not new, but using it in broadcast is still new to many and the technology continues to advance to deal with modern bitrate and computation requirements.

In these two videos, Tyler Kern speaks with Mellanox's Richard Hastie, NVIDIA's Jeremy Krinitt and Ross Video's John Naylor about how virtualisation fits with SMPTE ST 2110 and real-time video workflows.

Richard Hastie explains that agility is the name of the game: by separating the software from the hardware, your workflow can, in principle, be deployed anywhere and has the freedom to move within the same infrastructure. This opens up the move to the cloud, or to centralised hosting with people working remotely. One benefit is the ability to have a pile of servers and continually repurpose them throughout the day. Rather than discrete boxes which only do a few tasks, often going unused, you have a quota of compute which is used much more efficiently, so the return on investment is higher, as is the overall value to the company. This principle is at the heart of Discovery's transition of Eurosport to ST 2110 and JPEG XS: they have centralised all equipment, allowing production facilities in many countries around Europe to produce remotely from one heavily utilised set of equipment.

Part I

John Naylor explains the recent advancements virtualisation has brought to the broadcast market. vMotion from VMware allows live migration of virtual machines without loss of performance, which is really important when you're running real-time graphics. GPUs are also vital for graphics and video tasks. In the past, it's been difficult for VMs to have full access to GPUs, but now not only is that practical, but work has been done to allow a GPU to be broken up and reserved partitions dedicated to a VM using the NVIDIA Ampere architecture.
John continues by saying that VMware has recently focussed on the media space to allow better tuning of the hypervisor. When looking to deploy VM infrastructures, John recommends that end-users work closely with their partners to tune not only the hypervisor but the OS, NIC firmware and the BIOS itself to deliver the performance needed.

“Timing is the number one challenge to the use of virtualisation in broadcast production at the moment”

Richard Hastie

Mellanox, now part of NVIDIA, has continued improving its ConnectX network cards, according to Richard Hastie, to deal with the high-bandwidth scenarios that uncompressed production throws up. These network cards now have onboard support for ST 2110, traffic shaping and PTP. Without hardware PTP, getting 500-nanosecond-accurate timing into a VM is difficult. Mellanox also uses SR-IOV, a technology which bypasses the software switch in the hypervisor, reducing I/O overhead and bringing performance close to that of a non-virtualised machine. It does this by partitioning the PCI bus, meaning one NIC can present itself multiple times to the computer; whilst the NIC is shared, the software has direct access to it. For more information on SR-IOV, have a look at this article and this summary from Microsoft.

Part II

Looking to the future, the panel sees virtualisation supporting the deployment of uncompressed ST 2110 and JPEG XS workflows, enabling a growing number of virtual productions. For virtualisation itself, they see a move down from OS-level virtualisation to containerised microservices. Not only can these be more efficient but, managed by an orchestration layer, they allow processing to move to the 'edge'. This lets some logic happen much closer to the end-user while the main computation remains centralised.

Watch part I and part II now!
Speakers

Tyler Kern
Moderator
John Naylor
Technology Strategist & Director of Product Security
Ross
Richard Hastie
Senior Sales Director, Business Development
NVIDIA
Jeremy Krinitt
Senior Developer Relations Manager
NVIDIA