Video: Proper Network Designs and Considerations for SMPTE ST-2110

Networks for SMPTE ST 2110 systems can be fairly simple, but that simplicity hides a whole heap of careful considerations. By asking the right questions at the outset, a flexible, scalable network can be built with relative ease.

“No two networks are the same,” cautions Robert Welch from Arista as he introduces the questions he asks at the beginning of designing a network to carry professional media such as uncompressed audio and video. His thinking focuses on the network interfaces (NICs) of the devices: How many are there? Which receive PTP? Which are for management, and how do you want out-of-band/iLO access managed? All of these answers then feed into the workflows that are needed, influencing how the rest of the network is created. The philosophy is to work backwards from the end-nodes that receive the network traffic.

Robert then shows how these answers influence the different networks at play. For resilience, it’s common to have two separate networks at work sending the same media to each end node. Each node then uses ST 2022-7 to take the packets it needs from both networks. This isn’t always possible, as some devices have only one interface or simply lack -7 support. Some equipment has two management interfaces, and that, too, can feed into the network design.
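
The idea behind -7 seamless protection can be sketched in a few lines: keep the first copy of each RTP sequence number to arrive, whichever network delivered it. The toy Python below (class and names are mine; a real receiver uses a time-bounded reorder buffer and handles 16-bit sequence wraparound) shows the principle:

```python
class SeamlessMerger:
    """Toy illustration of ST 2022-7 hitless merging: two identical RTP
    streams arrive over separate networks; the receiver delivers the first
    copy of each sequence number and discards the duplicate."""

    def __init__(self):
        self.seen = set()  # sequence numbers already delivered

    def receive(self, network, seq, payload):
        # Accept the packet from whichever network delivers it first.
        if seq in self.seen:
            return None  # duplicate from the other path: drop it
        self.seen.add(seq)
        return payload

merger = SeamlessMerger()
out = []
# Packet 2 is lost on the blue network but survives on amber.
for network, seq in [("blue", 1), ("amber", 1), ("amber", 2),
                     ("blue", 3), ("amber", 3)]:
    pkt = merger.receive(network, seq, f"frame-{seq}")
    if pkt is not None:
        out.append(pkt)
print(out)  # ['frame-1', 'frame-2', 'frame-3']
```

Even with packet 2 missing on one path, the merged output is complete, which is exactly why -7 receivers can ride out loss on either network.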

PTP is an essential service for professional media networks, so Robert discusses some aspects of implementation. When you have two networks delivering the same media simultaneously, they will both need PTP. For resilience, a network should operate with at least two grandmasters – and usually, two is the best number. Ideally, your two media networks will have no connection between them except for PTP, whereby the amber network can benefit from the PTP from the blue network’s grandmaster. Robert explains how to make this link a pure PTP-only link, stopping it from leaking other information between networks.
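
As an illustration of the kind of filtering involved, here is a hedged sketch of an Arista EOS-style ACL that permits only PTP, which runs over UDP port 319 (event messages) and 320 (general messages), across the inter-network link. The ACL name, interface and exact syntax are assumptions, and the video may use a different mechanism:

```
! Illustrative only: permit PTP event/general messages, block everything
! else crossing the blue<->amber link.
ip access-list PTP-ONLY
   permit udp any any eq 319
   permit udp any any eq 320
   deny ip any any
!
interface Ethernet48
   description Blue-to-amber PTP-only link
   ip access-group PTP-ONLY in
```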

Multicast is a vital technology for 2110 media production, so Robert looks at its incarnation at both layer 2 and layer 3. At layer 2, multicast is handled using multicast MAC addresses. It works well with snooping and a querier, except when it comes to scaling up to a large network or one with a number of switches. Robert explains that this is because all multicast traffic needs to be sent through the rendezvous point. If you would like more detail on this, check out Arista’s Gerard Phillips’ talk on network architecture.

Looking at JT-NM TR-1001, the guidelines outlining best practices for deploying 2110 and associated technologies, Robert explains that multicast routing at layer 3 much improves stability and enables resilience and scalability. He also takes a close look at the difference between the ‘all source’ multicasting supported by IGMP version 2 and the ability to filter for only specific sources using IGMP version 3.
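
The difference is visible right down at the socket level. This Python sketch builds the membership structures for an any-source join (`ip_mreq`, the IGMPv2 model) and a source-specific join (`ip_mreq_source`, which triggers IGMPv3); the addresses are illustrative and the struct field order follows Linux:

```python
import socket
import struct

GROUP = "239.1.1.1"     # example media multicast group
SOURCE = "192.0.2.10"   # example sender to filter for (documentation address)
ANY = "0.0.0.0"         # join on the default interface

def asm_membership(group, iface=ANY):
    """Any-source join (IGMPv2 model): struct ip_mreq = group + interface.
    The receiver gets traffic for the group from *every* sender."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

def ssm_membership(group, source, iface=ANY):
    """Source-specific join (IGMPv3): struct ip_mreq_source adds the
    sender's address, so only that source's packets are delivered.
    Field order (multiaddr, interface, sourceaddr) is the Linux layout."""
    return struct.pack("4s4s4s", socket.inet_aton(group),
                       socket.inet_aton(iface), socket.inet_aton(source))

# With a real UDP socket these would be applied as, for example:
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
#                   asm_membership(GROUP))
#   sock.setsockopt(socket.IPPROTO_IP, 39,  # IP_ADD_SOURCE_MEMBERSHIP on Linux
#                   ssm_membership(GROUP, SOURCE))
print(len(asm_membership(GROUP)), len(ssm_membership(GROUP, SOURCE)))  # 8 12
```

For 2110, the SSM form is what lets a receiver subscribe to exactly one sender’s essence flow rather than everything published to the group.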

Finishing off, Robert talks about the difficulties in scaling PTP since all the replies/requests go into the same multicast group which means that as the network scales, so does the traffic on that multicast group. This can be a problem for lower-end gear which needs to process and reject a lot of traffic.
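
A back-of-envelope calculation shows why this scales poorly. Assuming each node multicasts Delay_Req messages at 8 per second (a common rate in broadcast PTP profiles; the exact figure is an assumption) and the grandmaster multicasts a matching Delay_Resp for each, every device on the group receives, and mostly discards, traffic proportional to the node count:

```python
def ptp_group_msgs_per_sec(nodes, delay_req_rate=8):
    """Rough message load on the PTP multicast group from the delay
    mechanism alone: each node sends delay_req_rate requests per second
    and the grandmaster answers each one, and in multicast mode every
    device on the group sees all of this traffic."""
    return nodes * delay_req_rate * 2  # request + response per exchange

for n in (10, 100, 1000):
    print(n, "nodes ->", ptp_group_msgs_per_sec(n), "msgs/s on the group")
```

At 1,000 nodes that is 16,000 messages per second arriving at every device, nearly all of which each device must inspect and throw away, which is exactly the burden on lower-end gear that Robert describes.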

Watch now!
Speaker

Robert Welch
Technical Solutions Lead
Arista Networks

Video: Broadcast Playout Cloud Transformation

Playout has been gradually moving to the cloud for a number of years now. Famously, Discovery moved all of their thematic playout to the cloud in 2018, and many have done the same since. As we saw the other day, Sky Italia are now using ‘infrastructure as code’, whereby automated API calls launch the in-cloud infrastructure they need as part of their linear playout.

In this video, we hear from Matt Westrup of A+E EMEA on how they’ve moved their playout to the cloud with their partner Amagi. Running 30 channels in Europe, Matt explains that, due to some business uncertainty with a partner company, the need for a DR facility was identified. Talking to Srinivasan KA from Amagi, they were able to create this using Amagi’s product portfolio based in AWS. Matt explains that after the DR facility was set up, they moved quickly to full mirroring; ultimately, they flipped the switch and announced they were now broadcasting from the cloud.

Srinivasan KA explains that many companies take a similar route when moving to the cloud. Sometimes a cost-effective DR facility is all they need; however, it’s also possible to replicate all your workflows in the cloud and keep them on standby. This can be done by keeping the content in the cloud evergreen and running automation, but keeping the playout functions switched off to save money so they can be quickly brought online as needed. Srinivasan KA looks at a high-level diagram of the A+E operation showing how S3 holds the content, which goes through a workflow to the CPU-powered playout and is then handed off using Direct Connect to affiliates and telcos via Amagi’s POPs.

Matt comments that this was relatively easy to do from a business perspective: “No-one was investing massively in fixed infrastructure”. They’ve also found they have been faster to market, with a speed they’ve “never experienced before.” Needless to say, the move to the cloud came into its own during the pandemic, providing a seamless move to home working. And, looking longer term, A+E will continue to benefit from not having to manage physical datacentre/server room infrastructure.

The video finishes with an overview of broadcast in AWS from Andy Kane. He covers the main drivers for broadcasters moving to the cloud, such as business agility, a preference among some companies for Opex spending, increased ease in experimenting with new technologies and ways of engaging with customers, and support for a remote workforce, among others. Andy covers an example broadcast flow using MediaConnect for contribution, MediaLive Statmux for distribution, redundancy strategies and other building blocks such as TAG multiviewers.

Watch now!
From the AWS Media Insights Webcast Series
Speakers

Andy Kane
Principal AI/ML Specialist Solutions Architect (Languages),
Amazon Web Services (AWS)
Matt Westrup
VP Technology and Operations,
A+E EMEA
Srinivasan KA
Co-founder,
Amagi Corporation
Ian McPherson
Partner Development Lead – Media & Entertainment,
Amazon Web Services (AWS)

Video: Remote Production in Real Life

Remote production is in heightened demand at the moment, but the trend has been ongoing for several years. With each small advance in technology, it becomes practical for another event to go remote. Remote production solutions have to be extremely flexible, as a remote production workflow for one company won’t work for the next. This is why the move to remote has been gradual over the last decade.

In this video, Dirk Sykora from Lawo gives three examples of remote production projects stretching from 2016 to the present day in this RAVENNA webinar with evangelist Andreas Hildebrand.

The first case study is remote production for Belgian second division football. Working with Belgian telco Proximus along with Videohouse & NEP, Lawo set up remote production for stadia kitted out with 6 cameras and 2 commentary positions. With only 1 gigabit connectivity to the stadiums, they opted to use JPEG 2000 encoding at 100 Mbps, both for the camera feeds out of the stadia and for the two return feeds back in for the commentators.

The project called for two simultaneous matches feeding into an existing gallery/PCR. Deployment was swift with flightcases deployed remotely and a double set of equipment being installed into the fixed PCR. Overall latency was around 2.5 frames one-way, so the camera viewfinders were about 5 frames adrift once transport and en/decoding delay were accounted for.
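
The arithmetic behind those viewfinder figures is straightforward: at 25 fps (assumed here for European football coverage), a frame lasts 40 ms, so 2.5 frames one-way is about 100 ms, and the round trip to the viewfinder doubles it:

```python
def viewfinder_delay(one_way_frames=2.5, fps=25):
    """Round-trip monitoring delay: the camera feed travels to the
    gallery and the return feed travels back, doubling the one-way
    transport + codec latency seen in the camera viewfinder."""
    frame_ms = 1000 / fps
    return 2 * one_way_frames, 2 * one_way_frames * frame_ms

frames, ms = viewfinder_delay()
print(frames, "frames,", ms, "ms")  # 5.0 frames, 200.0 ms
```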

The main challenges were with the MPLS network into the stadia, which would spontaneously reroute and become loaded with unrelated traffic at around 21:00. Although there was packet loss, none of it was noticeable on the 100 Mbps J2K feeds. Latency for the commentators was a problem, so some local mixing was needed; lastly, PTP wasn’t possible over the network. Timing was, therefore, derived from the return video feed into the stadium, which had come from the PTP-locked gallery. This incoming timing was used to lock a locally generated PTP signal.

The next case study is inter-country links for the European Council, connecting its Luxembourg and Brussels buildings. The project was to move all production to a single tech control room in Brussels and relied on two 10GbE links between the buildings going through an Arista 7280, carrying 18 videos in one direction and two in return. Although initially reluctant to compress, the Council realised after testing that VC-2, which offers around 4x compression, would work well and deliver no noticeable latency (approx. 20ms end to end). Thanks to VC-2, the 10GbE links saw low usage from the project and the Council was able to migrate other business activities onto them. PTP was generated in Brussels, and Luxembourg re-generated its PTP from the Brussels signal for local distribution. Overall latency was 1 frame.

Lastly, Dirk outlines the work done for the Belgium Daily News, which had been bought out by DPG Media. This buy-out prompted a move from Brussels to Antwerp, where a new building opened; however, all of the technical equipment was still in Brussels. This led to the decision to remote control everything in Brussels from Antwerp. The production staff moved to Antwerp, causing some friction from the disconnect between the production and technical teams, but also from personnel relocating and getting used to new facilities.

The two locations were connected with a redundant 400GbE infrastructure using IP-to-SDI gateways. Latency was 1 frame and, again, PTP at one site was derived from the incoming PTP from the other.

The video finishes with a detailed Q&A.

Watch now!
Speakers

Dirk Sykora
Technical Sales Manager,
Lawo
Andreas Hildebrand
RAVENNA Evangelist,
ALC NetworX

Video: NMOS Technology: A User’s Perspective

Bringing you discovery, registration, control, audio remapping, security and more, the open NMOS specifications from AMWA make using SMPTE’s ST 2110 practical. Most importantly, they keep 2110 open, meaning that different equipment can co-exist in the same ecosystem without many different drivers having to be written to translate between each vendor.

Led by Wes Simpson, this video discusses implementing NMOS from the perspective of a user, not a vendor, with Willem Vermost from Belgium’s public broadcaster, VRT. One drawback of IP-based solutions, they say early on, is that there are so many options on how to deploy. This potential choice paralysis goes hand in hand with trying to adapt to the new possibilities that come with the technologies. For instance, Willem notes, engineers need to adapt their thinking and design differently, knowing that multiple signals can now flow in both directions down a cable. It’s not like SDI’s point-to-point, unidirectional nature.

Any large plant can get busy with thousands of signals. The question is how to control this massive number of streams, not forgetting that in 2110 an SDI video stream is split up into at least 4 streams. To help put this into perspective, Willem looks back to the original telephone exchange and considers the different workflows there. They work, certainly, but having people present to plug in each individual call doesn’t scale well. In our IP world, we want to get beyond the need to ‘type in an address’, as we want to capture the ease with which cameras are connected.

The telephone exchanges worked well, but in the early days there were many exchange manufacturers, all of which had to interoperate when, say, calling from Berlin to New York. Willem suggests this is why telecoms long ago acted upon what the broadcast industry is now learning. The last point in this analogy is the need to stop the links between your exchanges from becoming over-subscribed. This is a task NMOS can also help with, using IS-05.
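
To make IS-05 concrete, here is a hedged sketch of the PATCH body a controller sends to a receiver’s staged endpoint to connect it to a sender. The field names follow AMWA IS-05; the sender ID and SDP are placeholders:

```python
import json

def make_connection_patch(sender_id, sdp_text):
    """Build an IS-05 staged-parameters PATCH body that points a receiver
    at a sender and activates the change immediately."""
    return {
        "sender_id": sender_id,
        "master_enable": True,
        "activation": {"mode": "activate_immediate"},
        "transport_file": {"type": "application/sdp", "data": sdp_text},
    }

# Hypothetical IDs and SDP, for illustration only.
body = make_connection_patch("00000000-0000-0000-0000-000000000000",
                             "v=0\r\no=- 0 0 IN IP4 192.0.2.1\r\n")
# A controller would PATCH this JSON to the receiver's staged endpoint,
# e.g. /x-nmos/connection/v1.0/single/receivers/{receiverId}/staged
print(json.dumps(body, indent=2))
```

Because every vendor exposes the same endpoint and body shape, one controller can make or break connections across the whole plant, which is what replaces the human operator at the exchange.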

NMOS is fully available on GitHub, and whilst you can take that software and modify it to your needs, Willem says it’s important to maintain interoperability between vendor implementations. This is why the JT-NM Tested programme exists: to make it easy to buy solutions on the market which claim NMOS support and to verify that, when they do, it works. Getting an NMOS test system is easy with open projects from Sony and NVIDIA which are ready for deployment.

Willem ends his talk by saying that ST 2110 is easier now than it used to be, citing a recent experience where the en/decoder worked ‘out of the box’. He then answers the question “How do I start out?” by saying you should try something small first, perhaps even an island project. Once you have done that and gained the experience and the concepts, you can take it from there.

Watch now!
Speakers

Willem Vermost
Design & Engineering Manager,
VRT
Wes Simpson
Owner, LearnIPVideo.com