Video: The Future Impact of Moore’s Law on Networking

Many feel that Moore’s law has lost its way when it comes to CPUs since we’re no longer seeing a doubling of chip density every two years. This change is tied to the difficulty of shrinking transistors further when their size is already close to some of the limits imposed by physics. In the networking world, transistors are larger, which is allowing significant growth in bandwidth to continue. In recent years we have tracked the rise of 1GbE, which made way for 10GbE, 40GbE and 100 Gigabit networking. We’re now seeing general availability of 400Gb with 800Gb firmly on the near-term roadmaps as computation within SFPs and switches increases.

In this presentation, Arista’s Robert Welch and Andy Bechtolsheim explain how 400GbE interfaces are made up, give insight into 800GbE and talk about deployment possibilities for 400GbE both now and in the near future harnessing built-in multiplexing. It’s important to realise that the high-capacity links we’re used to today, of 100GbE or above, are delivered by combining multiple lower-bandwidth links, known as lanes. Four 25Gb lanes give a 100GbE interface and eight 50Gb lanes provide 400GbE. The route to 800GbE, then, is either to increase the number of lanes or to bump the speed of the lanes. The latter is the chosen route, with eight 100Gb lanes in the works for 2022/2023.
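
As a quick illustration of that lane arithmetic, here is a minimal sketch (the lane counts and rates are those quoted in the talk; the helper function itself is purely illustrative):

```python
# Aggregate interface speed is simply the number of lanes times the per-lane rate.
def interface_speed_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Return the aggregate Ethernet interface speed in Gb/s."""
    return lanes * lane_rate_gbps

# 4 x 25G -> 100GbE, 8 x 50G -> 400GbE, 8 x 100G -> 800GbE
for lanes, rate in [(4, 25), (8, 50), (8, 100)]:
    print(f"{lanes} x {rate}G lanes -> {interface_speed_gbps(lanes, rate)}GbE")
```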

One downside of using lanes is that you will often need to break them out into individual fibres, which is inconvenient and erodes the cost savings. Robert outlines the work being done to bring dense wavelength division multiplexing (DWDM) into the transceivers so that multiple wavelengths are sent down one fibre rather than using multiple fibres. This allows a single fibre pair to be used, greatly simplifying cabling and maintaining compatibility with the existing infrastructure. DWDM is very powerful as it can deliver 800Gb over distances of more than 5000km, or 1.6Tb over 1000km. It also allows you to have full-bandwidth interconnects between switches. Long-haul transceivers with DWDM built in are called OSFP-LS transceivers.
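
The capacity gain from DWDM is just multiplication: total fibre capacity is the number of wavelengths times the per-wavelength rate, with longer reaches generally forcing lower per-wavelength rates. The sketch below uses illustrative wavelength counts and rates that are assumptions, not figures from the talk:

```python
# Total capacity of a DWDM fibre pair = wavelengths x per-wavelength rate.
def fibre_capacity_gbps(wavelengths: int, per_wavelength_gbps: int) -> int:
    return wavelengths * per_wavelength_gbps

# Illustrative assumptions only: real deployments trade rate against reach.
print(fibre_capacity_gbps(4, 400))  # 1600 Gb/s in a shorter-reach configuration
print(fibre_capacity_gbps(2, 400))  # 800 Gb/s in a longer-reach configuration
```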

Cost per bit is the religion at play here, with the hyperscalers keenly buying into 400Gb technology because it is only twice, not four times, the price of the 100Gb technology it’s replacing. The same is true of 800Gb. The new interfaces will run the ASICs faster and so will need to dissipate more heat. This has led to two longer form factors, the OSFP and the QSFP-DD. The OSFP is a little larger than the QSFP, but an adaptor can be used to maintain QSFP form-factor compatibility.

Andy explains that 800Gb Ethernet has been finished by the Ethernet Technology Consortium and is going into 51.2T silicon, which will allow channels of native 800Gb capacity. This is somewhat in the future, though, and Andy says that just as 25G has worked well for us over the last 5 years, 100G is where the focus is for the next 5. Andy goes on to look at what a future 800G chassis might look like, saying that in 2U you would expect 64 800G OSFP interfaces, which could provide 128 400G outputs or 512 100G outputs with no co-packaged optics required.
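
A quick sanity check of that chassis arithmetic (a minimal sketch; only the 51.2T figure and the port counts above come from the talk):

```python
# A 51.2Tb/s ASIC can be presented as different port counts, as long as
# ports x port speed does not exceed the ASIC's total capacity.
ASIC_CAPACITY_GBPS = 51_200  # 51.2T

for ports, speed_gbps in [(64, 800), (128, 400), (512, 100)]:
    total_gbps = ports * speed_gbps
    assert total_gbps <= ASIC_CAPACITY_GBPS
    print(f"{ports} x {speed_gbps}G = {total_gbps / 1000:.1f}Tb/s")
```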

Watch now!
Speakers

Robert Welch
Technical Solutions Lead,
Arista
Andy Bechtolsheim
Chairman, Chief Development Officer and Co-Founder,
Arista Networks

Video: ST 2110 The Future of Live Remote Production

Trying to apply the SMPTE ST 2110 hype to the reality of your equipment? This video is here to help. There are many ‘benefits’ of IP which are bandied about, yet it’s almost impossible to realise them all in one company. For the early adopters, there’s usually one benefit that has been the deciding factor, with other benefits helping boost confidence. Smaller broadcast companies, however, can struggle to get the scale needed for cost savings, don’t require as much flexibility and can’t justify the scalability. But as switches get cheaper and ST 2110 support continues to mature, it’s clear that we’re beyond the early adopter phase.

This panel gives context to ST 2110 and advises on ways to ‘get started’ and skill up. Moderated by Ken Kerschbaumer from the Sports Video Group, it features Leader’s Steve Holmes and PHABRIX’s Prinyar Boon alongside Arista colleagues Gerard Phillips and Robert Welch and Bridge Technologies’ Chairman Simen Frostad.

The panel quickly starts giving advice. Under the mantra ‘no packet left behind’, Gerard explains that, to him, COTS (Commercial Off The Shelf) means a move to enterprise-grade switches ‘if you want to sleep at night’. Compared to SDI, the move to IT can bring cost savings, but don’t skimp on your switch infrastructure if you want a good quality product. Simen was pleased to welcome 2110 as he appreciated the almost instant transmission that analogue gave. The move to digital added a lot of latency, even in the SDI portions of the chain, thanks to frame syncs. ST 2110, he says, allows us to get back, most of the way, to no-latency production. He’s also pleased to bid goodbye to embedded data.

It is possible to start small, is the reassuring message next from the panel. The trick is to start with an island of 2110 and do your learning there. Prinyar lifts up a tote bag, saying he has a 2110 system that fits inside it and takes just 10 minutes to get up and running. With two switches, a couple of PTP grandmasters and some 2110 sources, you have what you need to start a small system. There is free software that can help you learn: Easy NMOS is a quick-to-deploy NMOS registry that will give you the basics to get your system up and running, you can test NMOS APIs for free with AMWA’s testing tool, the EBU’s LIST project is a suite of software tools that help to inspect, measure and visualise the state of IP-based networks and the high-bitrate media traffic they carry, and there’s also SDPoker which lets you test ST 2110 SDP files. So whilst there are some upfront costs, getting the learning, experience and understanding you need to make decisions on your ST 2110 trajectory is cost-effective, and the kit can form part of your staging/test system should you decide to proceed with a project.
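
If you do build a small lab like this, a few lines of Python are enough to poke at the registry and see what has registered. This is a hedged sketch: the address below is a placeholder for wherever your Easy NMOS Query API is exposed, and it assumes the `requests` package is installed:

```python
import requests

# Placeholder: substitute the host/port of your own IS-04 Query API instance.
QUERY_API = "http://registry.example.local/x-nmos/query/v1.3"

# IS-04 Query API resources include nodes, devices, senders and receivers.
for resource in ("nodes", "devices", "senders", "receivers"):
    items = requests.get(f"{QUERY_API}/{resource}", timeout=5).json()
    print(f"{resource}: {len(items)} registered")
    for item in items:
        print("  -", item.get("label") or item.get("id"))
```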

The key here is to find your island project. For larger broadcasters or OB companies, a great island is to build an IP OB truck. IP has some big benefits for OB Trucks as we heard in this webinar, such as weight reduction, integration with remote production workflows and scalability to ‘any size’ of event. Few other ‘islands’ are able to benefit in so many ways, but a new self-op studio or small control room may be just the project for learning how to design, install, troubleshoot and maintain a 2110 system. Prinyar cautions that 2110 shouldn’t be just about moving an SDI workflow into IP. The justification should be about improving workflows.

Remote control is a big motivator for the move to ST 2110. Well before the pandemic, Discovery chose 2110 for their Eurosport production infrastructure, allowing them to centralise equipment into two European locations while it is controlled from production centres in countries around Europe. During the pandemic, we’ve seen that the ability to create new connections without having to physically install new SDI cabling is incredibly useful. Off the back of remote control of resources, some companies are finding they are able to use operators from locations where the hourly rate is low.

Before a Q&A, the panel addresses training. From one quarter we hear that ensuring your home networking knowledge is sound (DHCP, basic IP addressing) is a great start and that you can get across the knowledge needed in very little time. Prinyar says that he took advantage of a SMPTE Virtual Classroom course teaching the CCNA, whilst Robert from Arista says that there’s a lot in the CCNA that’s not very relevant. The Q&A covers 2110 over the WAN, security, hardware life cycles and reducing the carbon footprint of production.

Watch now!
Speakers

Steve Holmes
Applications Engineer,
Leader
Prinyar Boon
Product Manager,
PHABRIX
Gerard Phillips
Systems Engineer,
Arista
Simen Frostad
Chairman,
Bridge Technologies
Robert Welch
Technical Solutions Lead,
Arista
Moderator: Ken Kerschbaumer
Chair & Editorial Director,
Sports Video Group

Video: IP for Broadcast, Virtual Immersive Studios, Esports

A wide range of topics today covering live virtual production, lenses, the reasons to move to IP, esports careers and more. This is a recording of the SMPTE Toronto Section’s February meeting with guest speakers from Arista, ARRI, TFO and Ross Video.

The first talk of the evening was from Ryan Morris of Arista, talking about the importance of the move to IP. Those with an IP infrastructure have noticed that it’s easier to continue using their system during lockdown when access to the equipment itself is limited. While there will always be a need to move a 100GbE fibre at some point or other, a running 2110 system easily allows new connections to be made without new SDI cables being plugged in. This is down to IP’s ability to carry multiple signals, in both directions, down a single cable. A 100 gigabit fibre can carry 65 1080i59.94 signals, for instance, which is in stark contrast to SDI cabling. Similarly, when using an IP router you can route thousands of flows in a few U of rack space, whereas a 1152×1152 SDI router takes up a whole rack.
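
To see where a figure like 65 comes from, a rough capacity check is enough. A minimal sketch, where the per-signal rate is an assumption based on the ~1.485Gb/s HD-SDI rate plus IP encapsulation overhead rather than a number from the talk:

```python
# Roughly how many uncompressed 1080i59.94 flows fit on a 100Gb/s link?
LINK_CAPACITY_GBPS = 100.0
PER_FLOW_GBPS = 1.5   # assumed: ~1.485Gb/s HD-SDI payload plus IP/RTP overhead
HEADROOM = 0.98       # assumed: keep a little of the link in reserve

flows = int(LINK_CAPACITY_GBPS * HEADROOM / PER_FLOW_GBPS)
print(f"~{flows} x 1080i59.94 flows per 100Gb/s fibre")  # prints ~65
```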

Ryan moves on to an overview of the protocols that make broadcast on IP networks possible, starting with unicast, multicast and broadcast. The last of these he likens to a baby screaming; multicast is like you talking to a group of friends. Multicast is what is used for audio, video and other essences when they are sent over IP, whether as part of SMPTE ST 2110 or ST 2022-6. And whilst it works well, the protocol managing it, IGMP, isn’t really as smart as we need it to be. IGMP knows nothing about the bandwidth of the flow being sent and has no knowledge of the capacity or loading of any link. As such, links can get saturated, and routine maintenance can even overload the backup path, resulting in an outage. Ryan concludes by saying that SDN resolves this problem. He explains IGMP as analogous to knowing which address you need to drive to and simply setting off in the right direction, reacting to any traffic jams and roadblocks you find. In contrast, he says SDN is like having GPS where everything is taken into account from the beginning and you know the whole path before you set off. Both will get you there, but SDN will be more efficient, predictable and accountable.
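
The difference Ryan describes can be sketched in a few lines: IGMP-style forwarding picks a path with no idea of how loaded each link is, whereas an SDN controller can check every candidate link against known capacity before placing a flow. This is a toy illustration with made-up links and bandwidths, not Arista’s implementation:

```python
# Toy model of two uplinks with known capacity and current load.
links = {
    ("leaf1", "spine1"): {"capacity_gbps": 100, "load_gbps": 95},
    ("leaf1", "spine2"): {"capacity_gbps": 100, "load_gbps": 20},
}

def sdn_place_flow(candidates, flow_gbps):
    """SDN-style choice: only admit the flow onto a link where it actually fits."""
    for link in candidates:
        state = links[link]
        if state["load_gbps"] + flow_gbps <= state["capacity_gbps"]:
            state["load_gbps"] += flow_gbps
            return link
    return None  # reject the flow rather than saturate a link

# IGMP has no bandwidth awareness, so it would happily oversubscribe leaf1->spine1;
# the SDN-style check places this 7Gb/s flow on the lightly loaded leaf1->spine2.
print(sdn_place_flow([("leaf1", "spine1"), ("leaf1", "spine2")], flow_gbps=7))
```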

To understand more about IP, watch these talks:
“Is IP really better than SDI?” by Ed Calverly, detailing how video over IP works, and
“Network design for live production” by Ryan’s colleague, Gerard Phillips

Next in the line-up is François Gauthier, who takes us through the history of cinema-related technologies, showing how, at each stage, standards helped the increasingly global industry work together. SMPTE’s earliest well-known standardisation efforts were to aid the interchange of films between projectors and cameras around the time of World War 1. Similarly, ARRI started in 1917 and has benefited from, and worked to create, SMPTE standards in cameras, lighting, workflows, colour grading and now mixed reality. François eloquently takes us on this journey, showing at each stage the motivation for standardisation and how ARRI has developed in step.

A different type of innovation is on show in the next talk, given by Cliff Lavallée, who updates us on the latest improvements to his immersive studio. It was featured in a previous SMPTE Toronto Section talk when he explained the benefits of having a gaming-based 3D engine in this green-screen studio with camera tracking. In fact, it was the first studio of its kind when it came online in 2016. Since then, game engines have made great inroads into studio production.

Having a completely virtual studio with camera tracking and 3D objects available to be live-rendered in response to the scene has a number of benefits, Cliff explains. He can track the talent and make objects appear in front of or behind them as appropriate in response to their movements. Real-time rendering and the green blank canvas give design freedom as well as the ability to see what scenes will look like during the shoot rather than after. It’s no surprise that there are also cost savings. In one of a number of videos he shows, we see a children’s programme which takes place in a small village. By using the green screen, the live-action puppets can quickly change sets from place to place, integrating real props with virtual backgrounds which move with the camera.

The last talk is from Cameron Reed who’s a former esports director and now works for Ross Video. Cameron gives a brief overview of how esports is split up into developers who make the game, tournament organisers, teams, live production companies and distribution platforms. The Broadcast Knowledge has followed esports for a while. Check out the back catalogue for more detailed videos on the subject.

It’s no surprise that the developers own the game. What’s interesting is that a computer game is much more complex and directly malleable than traditional sports. Whilst FIFA might control football/soccer worldwide, there is little it can do to change the game. Formula 1 is, perhaps, closest to the esports model, where rules about engines, tyres, refuelling strategies and the like come and go. With esports, aspects of the game can change week to week in response to fans. Cameron explains esports as ‘free’ advertising for the developers. Although they won’t always make money, even if they only make 90% of their money back directly from the tournaments and events for that year, it means they’ve had a 90% discount on their advertising budget. All the while, they’ve managed to inject life into their game and extend the amount of interest it’s garnered. Cameron gives a brief acknowledgement that for distribution “Twitch is king” but underlines that this platform didn’t support UHD as of the date of the meeting, which doesn’t sit well with the efforts of the gaming industry to increase resolution and detail in games.

Cameron’s presentation finishes with a look at career progressions in esports, following both non/semi-technical and technical paths. The market holds a lot of interesting opportunities.

The session ends with a Q&A for all the panelists.

Watch now!
Speakers

Ryan Morris
Systems Engineer,
Arista Networks
François Gauthier
TSR,
ARRI
Cliff Lavallée
Director of LUV Studio Services,
Groupe Média TFO
Cameron Reed
Esports Business Development Manager,
Ross Video

Video: As Time Goes by…Precision Time Protocol in the Emerging Broadcast Networks

How much timing accuracy do you need? PTP can get you timing to within nanoseconds, but is that needed, how can you transport it and how does it work? These questions and more are under the microscope in this video from RTS Thames Valley.

SMPTE Standards Vice President Bruce Devlin introduces the two main speakers by reminding us why we need timing and how we dealt with it in the past. Looking back to the genesis of television, Bruce points out, everything was analogue and it was almost impossible to delay a signal at all. An 8cm, tightly wound coil of copper would give you only 450 nanoseconds of delay; alternatively, quartz crystals could be used to create delays. In the analogue world, these delays were used to time signals, and since little could be delayed, only small adjustments were necessary. Bruce’s point is that we’ve swapped around now. Delays are everywhere because IP signals need to be buffered at every interface. It’s easy to find buffers that you didn’t know about, and even small ones really add up. Whereas analogue TV got us from camera to TV within microseconds, it’s now a struggle to get below two seconds.

Hand in hand with this change is the move from metadata and control data being embedded in the video signal – and hence synchronised with it – to all data being sent separately. This is where PTP, the Precision Time Protocol, comes in: an IP-based timing mechanism that can keep time despite the buffers and allow signals to be synchronised.

Next to speak is Richard Hoptroff whose company works with broadcasters and financial services to provide accurate time derived from 4 satellite services (GPS, GLONASS etc) and the Swedish time authority RiSE. They have been working on the problem of delivering time to people who can’t put up antennas either because they are operating in an AWS datacentre or broadcasting from an underground car park. Delivering time by a wired network, Richard points out, is much more practical as it’s not susceptible to jamming and spoofing, unlike GPS.

Richard outlines SMPTE’s ST 2059-2 standard, which says that a local system should maintain accuracy to within 1 microsecond. The JT-NM TR1001-1 specification calls for a maximum of 100ms between facilities; however, Richard points out that, in practice, 1ms or even 10 microseconds is highly desired. In tests, he shows that at layer 2, PTP unicast looping around western Europe was able to hold to within 1 microsecond, and at layer 3 to within 10 microseconds. Over the internet with a VPN, Richard says he’s seen around 40 microseconds, which would then feed into a boundary clock at the receiving site.

Summing up, Richard points out that delivering PTP over a wired network can provide great timing on an OPEX budget without needing timing hardware. On top of that, you can use it to add resilience to any existing GPS timing.

Gerard Phillips from Arista speaks next to explain some of the basics of how PTP works. If you are interested in digging deeper, please check out this talk on PTP from Arista’s Robert Welch.

Already in use by many industries including finance, power and telecoms, PTP is based on IEEE 1588 and allows synchronisation down to tens of nanoseconds. Just sending a timestamp out to the network would be a problem because jitter is inherent in networks; it’s part and parcel of how switches work. Dealing with the timing variations as smaller packets wait for larger packets to get out of the way is part of the job of PTP.

To do this, the main clock – called the grandmaster – sends out the time to everyone 8 times a second. This means that all the devices on the network, known as endpoints, will know what time it was when the message was sent. They still won’t know the actual time because they don’t know how long the message took to get to them. To determine this, each endpoint has to send a message back to the grandmaster. This is called a delay request. All that happens here is that the grandmaster replies with the time it received the message.

PTP Primary-Secondary Message Exchange.
Source: Meinberg [link]

This gives us 4 points in time. The first (t1) is when the grandmaster sent out the first message. The second (t2) is when the device received it. t3 is when the endpoint sent out its delay request and t4 is the time when the master clock received that request. The difference between t2 and t1 indicates how long the original message took to get there. Similarly, t4-t3 gives that information in the other direction. These can be combined to derive the time. For more info either check out Arista’s talk on the topic or this talk from RAVENNA and Meinberg from which the figure above comes.
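
In code, the arithmetic on those four timestamps is straightforward. A minimal sketch of the standard IEEE 1588 calculation, assuming the network delay is the same in both directions:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute clock offset and one-way path delay from one PTP exchange.

    t1: grandmaster sends Sync        t2: endpoint receives it
    t3: endpoint sends Delay_Req      t4: grandmaster receives it
    Assumes a symmetric path (equal delay in both directions).
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # how far the endpoint's clock is ahead
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way network delay
    return offset, delay

# Example in nanoseconds: the endpoint's clock is 500ns fast, path delay is 1000ns.
print(ptp_offset_and_delay(t1=0, t2=1_500, t3=10_000, t4=10_500))  # (500.0, 1000.0)
```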

Gerard briefly gives an overview of boundary clocks, which act as secondary time sources taking pressure off the main grandmaster(s) so they don’t have to deal with thousands of delay requests; they also solve a problem with the jitter of signals being passed through switches, as it’s usually the switch itself which is the boundary clock. Alternatively, transparent clock switches simply pass on the PTP messages but update them to take account of how long each message took to travel through the switch. Gerard recommends only using one type in a single system.
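
The transparent clock behaviour Gerard mentions amounts to measuring how long each PTP message sat inside the switch and adding that residence time to the message’s correction field, so endpoints can discount it. A simplified sketch, not switch firmware:

```python
def transparent_clock_update(correction_ns: float,
                             ingress_ns: float,
                             egress_ns: float) -> float:
    """Add this switch's residence time to a PTP message's correction field."""
    residence_time_ns = egress_ns - ingress_ns
    return correction_ns + residence_time_ns

# A Sync message queued in the switch for 3.2us has its correction grow by that much.
print(transparent_clock_update(correction_ns=0.0,
                               ingress_ns=100_000.0,
                               egress_ns=103_200.0))  # 3200.0
```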

Referring back to Bruce’s opening, Gerard highlights the need to monitor the PTP system. Black and burst timing didn’t need monitoring: as long as the main clock was happy, the DAs downstream just did their job and on occasion needed replacing. PTP is a system with bidirectional communication and it changes depending on network conditions. Gerard makes a plea to build a monitoring system as part of your solution to provide visibility into how it’s working, because as soon as there’s a problem with PTP, there could quickly be major problems. Network switches themselves can provide a lot of telemetry on this, showing you delay values and allowing you to see grandmaster changes.

Gerard’s ‘Lessons Learnt’ list features locking down PTP so only a few ports are actually allowed to provide time information to the network, dealing carefully with audio protocols like Dante which need PTP version 1 domains, and making sure all switches are PTP-aware.

The video finishes with Q&A after a quick summary of SMPTE RP 2059-15 which is aiming to standardise telemetry reporting on PTP and associated information. Questions from the audience include asking how easy it is to do inter-continental PTP, whether the internet is prone to asymmetrical paths and how to deal with PTP in the cloud.

Watch now!
Speakers

Bruce Devlin
Standards Vice President,
SMPTE
Gerard Phillips
Systems Engineer,
Arista
Richard Hoptroff
Founder and CTO,
Hoptroff London Ltd