Video: Monolithic and Spine-Leaf Architectures

It’s hard to talk about SMPTE 2110 system design without hearing the term ‘spine and leaf’. It’s a fundamental decision that needs to be made early on in the project: how many switches will you use and how will they be interconnected? No choice comes without compromises, so what needs to be considered?

Chris Lapp from Diversified shares his experience in designing such systems. Monolithic design has a single switch at the centre of the network with everything connected directly to it. For redundancy, this is normally complemented by a separate, identical switch providing a second network. For networks which are likely to need to scale, monolithic designs can add a hurdle to expansion once they get full. Also, if there are many ‘low bandwidth’ devices, it may not be cost-effective to attach them. For instance, if your central switch has many 40Gbps ports, it’s a waste to use many to connect to 1Gbps devices such as audio endpoints.

The answer to these problems is spine and leaf. Chris explains that this is more resilient to failure and allows easy scaling whilst retaining a non-blocking network. These improvements come at a price, naturally: firstly, it costs more and, secondly, there is added complexity. In a large facility with endpoints spread out, spine and leaf may be the only sensible option. However, Chris also explores a cheaper version of spine and leaf often called ‘hub and spoke’ or ‘hybrid’.
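A rough way to see the scaling trade-off is to compare a leaf switch’s endpoint-facing (downlink) bandwidth with its uplink bandwidth to the spine: a non-blocking leaf needs at least as much uplink as downlink. The sketch below illustrates that check; the port counts and speeds are illustrative, not figures from the talk.

```python
# Rough check of whether a leaf switch is non-blocking; illustrative figures only.

def oversubscription(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Return the downlink:uplink bandwidth ratio for a leaf switch.
    A ratio <= 1.0 means the leaf can be non-blocking."""
    downlink = downlink_ports * downlink_gbps
    uplink = uplink_ports * uplink_gbps
    return downlink / uplink

# Example: 48 x 10G endpoint ports fed by 6 x 100G uplinks to the spine.
ratio = oversubscription(48, 10, 6, 100)
print(f"Oversubscription ratio: {ratio:.2f}:1")  # 0.80:1 -> non-blocking
```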

If you are interested in this topic, listen to last week’s video from Arista’s Gerard Phillips, which talked in more detail about network design, covering the pros and cons of spine and leaf, control using IGMP and SDN, and PTP design, amongst other topics. Read more here.

Watch now!
Speakers

Chris Lapp
Project Engineer, SME Routing
Diversified
Wes Simpson
President, Telcom Product Consulting
Owner, LearnIPVideo.com

Video: Case Study – ST 2110 4K OB Van for AMV

Systems based on SMPTE ST 2110 continue to come online throughout the year and, as they do, it’s worth seeing the choices made to bring them into being. We recently featured a project building two OB trucks and how the team worked around COVID-19 to deliver them. Today we’re looking at an OB truck based on Grass Valley and Cisco equipment.

Anup Mehta and Rahul Parameswaran from Cisco join the VSF’s Wes Simpson to explain their approach to getting ST 2110 working to deliver a scalable truck for All Mobile Video. The brief was to deliver a truck based on NMOS control, maximal COTS equipment, and flexible networking with scalable PTP and security.

Thinking back to yesterday’s talk on Network Architecture, we recognise the ‘hub and spoke’ architecture in use, which makes a lot of sense in OB trucks. Using monolithic routers is initially tempting for OB trucks, but the need for a lot of 1G and 10G ports tends to use up high-bandwidth ports on core routers quickly. Therefore moving to a hub-and-spoke architecture, with multiple access switches connected directly to the core, makes the most sense. As Gerard Phillips commented, this is a specialised form of the more general ‘spine-leaf’ architecture which is typically deployed in larger systems.

One argument against using IGMP/PIM routing in larger installations is that those protocols have no understanding of the wider picture. They don’t take a system-wide view like an SDN controller would. If IGMP is a paper roadmap, SDN is a satnav with up-to-date road metrics, full knowledge of width and weight restrictions and live traffic alerts. To address this, Cisco created their own technology, Non-Blocking Multicast (NBM), which takes into account the bandwidth of the streams and works closely with Cisco’s DCNM (Data Centre Network Manager). These Cisco technologies give more insight into the system as a whole and can therefore make better decisions.
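The difference is easiest to picture as an admission decision: IGMP/PIM will add a receiver to a tree regardless of link load, whereas a bandwidth-aware controller first checks the flow’s declared bitrate against the spare capacity on each candidate link. The sketch below is a simplified illustration of that idea with made-up numbers; it is not Cisco’s NBM implementation.

```python
# Simplified illustration of bandwidth-aware flow admission, in the spirit of
# NBM/SDN control. Link capacities and flow bitrates are hypothetical.

links = {
    # link name: (capacity in Gbps, currently reserved Gbps)
    "leaf1-spine1": (100, 88),
    "leaf1-spine2": (100, 42),
}

def admit(flow_gbps, candidates):
    """Pick the first candidate link with enough spare capacity, or None."""
    for name in candidates:
        capacity, reserved = links[name]
        if capacity - reserved >= flow_gbps:
            links[name] = (capacity, reserved + flow_gbps)  # reserve bandwidth
            return name
    return None  # no room anywhere: reject rather than silently oversubscribe

# A 12 Gbps UHD flow won't fit on leaf1-spine1 but will on leaf1-spine2.
print(admit(12, ["leaf1-spine1", "leaf1-spine2"]))
```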

Anup and Rahul continue by explaining how the implementation of PTP was scaled by offloading the processing to line cards rather than relying on the unit’s main CPU, before explaining how DCNM not only supports the NBM feature but also integrates with GV Orbit, the configuration and system management platform from Grass Valley. From a security perspective, the network denies any new connections into a port by default, and it can enforce bandwidth limits to stop accidental flooding or similar.

Watch now!
Speakers

Anup Mehta
Product Manager,
Cisco
Rahul Parameswaran
Senior Technical Product Manager,
Cisco

Video: Network Design for Live Production

The benefits of IP sound great, but many are held back by real-life concerns: Can we afford it? Can we plug the training gap? And how do we even do it? This video looks at the latter: how do you deploy a network good enough for uncompressed video, audio and metadata? The network needs to deal with a large number of flows, many of which are high bandwidth. If you’re putting it to air, you need reliability and redundancy. You also need to distribute PTP timing, and to control and maintain it all.

Gerard Phillips from Arista talks to IET Media about the choices you need to make when designing your network. Gerard starts by reminding us of the benefits of moving to IP, the most tangible of which is the switching density possible. SDI routers can use a whole rack to switch over one thousand sources, but with IP, Gerard says you can achieve a 4000-square router within just 7U. With increasingly complicated workflows and the increasing scale of some broadcasters, this density is a major motivating factor in the move. Doubling down on the density message, Gerard then looks at the difference in connectivity available, comparing SDI cables, which carry one signal per cable, to 400Gb links which can carry 65 UHD signals per link.
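The arithmetic behind those density figures is straightforward: estimate the bitrate of an uncompressed ST 2110-20 stream from its resolution, frame rate, sampling and bit depth, then divide the link rate by it. The sketch below shows that calculation; the packet overhead is approximated and the numbers are illustrative rather than taken from the talk.

```python
# Back-of-envelope bitrate for an uncompressed ST 2110-20 video stream.
# Only active video is carried (no SDI blanking); RTP/UDP/IP overhead is
# approximated here with a flat 2% and is illustrative, not exact.

def stream_gbps(width, height, fps, bits_per_pixel, overhead=1.02):
    return width * height * fps * bits_per_pixel * overhead / 1e9

def streams_per_link(link_gbps, **video):
    return int(link_gbps // stream_gbps(**video))

# 1080p50, 4:2:2 10-bit (20 bits/pixel) on a 100G link.
hd = dict(width=1920, height=1080, fps=50, bits_per_pixel=20)
print(f"{stream_gbps(**hd):.2f} Gbps per stream,"
      f" {streams_per_link(100, **hd)} streams per 100G link")
```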

Audio is always ahead of video when it comes to IP transitions, so there are many established audio-over-IP protocols, many of which work at Layer 2 of the network stack. Staying at Layer 2 has great benefits because there is no routing, which means that discovering everything on the network is as simple as broadcasting a question and waiting for answers. This simplicity of discovery is one reason for the ‘plug and play’ ease of NDI: operating within a single Layer 2 domain, it can use mDNS or similar to query the network and display the sources and destinations available within seconds. Layer 3-based protocols don’t have this luxury, as some resources can be on a separate network which won’t receive a discovery request that’s simply broadcast on the local network.
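To give a feel for how little is involved in that local discovery, the sketch below browses for mDNS-advertised services using the python-zeroconf library. The ‘_ndi._tcp.local.’ service type is an assumption about how NDI sources advertise themselves and is used purely as an illustration.

```python
# Browse the local network for mDNS-advertised sources.
# Requires the 'zeroconf' package; the service type is an assumption about
# NDI's advertisement and may differ in practice.
import time
from zeroconf import Zeroconf, ServiceBrowser

class SourceListener:
    def add_service(self, zc, type_, name):
        print(f"Found source: {name}")

    def remove_service(self, zc, type_, name):
        print(f"Source went away: {name}")

    def update_service(self, zc, type_, name):
        pass  # required by the listener interface, nothing to do here

zc = Zeroconf()
browser = ServiceBrowser(zc, "_ndi._tcp.local.", SourceListener())
time.sleep(5)   # sources typically appear within a few seconds
zc.close()
```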

Gerard examines the benefits of Layer 2 and explains how IGMP multicast works, detailing the need for an IGMP querier to sit in one location and receive all the traffic. This is a limiting factor in scaling a network, particularly with high-bandwidth flows. Layer 3, we hear, is the solution to this scaling problem, bringing with it more control of the size of ‘failure domains’ – how much of your network breaks if there’s a problem.
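In practice a receiver doesn’t speak IGMP directly: joining a multicast group through the operating system’s socket API causes the host to send the IGMP membership report, which the querier and snooping switches then track. A minimal receiver sketch, with an illustrative group address and port, looks like this:

```python
# Minimal multicast receiver: the IP_ADD_MEMBERSHIP option makes the host
# emit an IGMP join, which IGMP snooping on the switch uses to forward traffic.
# The group address and port are illustrative.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group on the default interface (0.0.0.0 = let the OS choose).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, src = sock.recvfrom(2048)   # blocks until the first packet arrives
print(f"Received {len(data)} bytes from {src}")
```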

The next section of the video gets down to the meat of network design and explains the three main types of architecture: monolithic, hub and spoke, and spine and leaf. Gerard takes time to discuss the validity of all these architectures before discussing coloured networks. Two identical networks, dubbed ‘red’ and ‘blue’, are often used to provide redundancy in SMPTE ST 2110 and similar uncompressed networks, with the idea that the source generates two identical streams and feeds them over these two identical networks. The receiver receives both streams and uses SMPTE ST 2022-7 to seamlessly deal with packet loss. Gerard then introduces ‘purple’ networks, where all switch infrastructure is in the same network and the network orchestrator ensures that the two essence flows from the source take separate routes through the infrastructure. This means that for each flow there is a ‘red’ and a ‘blue’ route, but overall each switch carries a mixture of ‘red’ and ‘blue’ traffic.
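Conceptually, an ST 2022-7 receiver keeps whichever copy of each packet arrives first, using the RTP sequence number to spot duplicates and to fill gaps from the other leg. A heavily simplified sketch of that merge logic is shown below; it ignores sequence-number wrap-around and the bounded reorder buffers a real receiver would use.

```python
# Heavily simplified ST 2022-7 style merge: take each RTP sequence number
# once, from whichever network ('red' or 'blue') delivers it first.
# Real receivers also handle 16-bit wrap-around and reordering windows.

def seamless_merge(red_packets, blue_packets):
    """Each input is an iterable of (sequence_number, payload) tuples."""
    seen = set()
    for seq, payload in interleave(red_packets, blue_packets):
        if seq in seen:
            continue            # duplicate already delivered by the other leg
        seen.add(seq)
        yield seq, payload

def interleave(a, b):
    """Crude stand-in for packets arriving from two NICs over time."""
    from itertools import zip_longest
    for pair in zip_longest(a, b):
        yield from (p for p in pair if p is not None)

red = [(1, "A"), (2, "B"), (4, "D")]           # packet 3 lost on red
blue = [(1, "A"), (2, "B"), (3, "C"), (4, "D")]
print(list(seamless_merge(red, blue)))          # all four packets recovered
```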

The beauty of using IGMP/PIM for managing traffic over your networks is that the network itself decides how the flows move over the infrastructure, which makes for a low-footprint, simple installation. However, without the ability to take into account individual link capacity, the capacity of the network in general, the bitrate of individual flows and the overall topology, there is very little control over where your traffic goes, which makes maintenance and fault-finding hard. More generally, what’s the right decision for one small part of the network is not necessarily the right decision for the flow or for the network as a whole. Gerard explains how Software-Defined Networking (SDN) addresses this and gives absolute control over the path your flows take.
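One way to picture the extra knowledge an SDN controller brings is as a path search over the whole topology, discarding any hop that lacks headroom for the flow before choosing a route. A toy version of that idea is sketched below over a tiny hypothetical spine-leaf topology; it is not any vendor’s controller API.

```python
# Toy SDN-style path choice: breadth-first search over a small hypothetical
# spine-leaf topology, only crossing links with spare capacity for the flow.
from collections import deque

# link -> (capacity Gbps, reserved Gbps); illustrative numbers only
capacity = {
    ("leaf1", "spine1"): (100, 95), ("spine1", "leaf2"): (100, 10),
    ("leaf1", "spine2"): (100, 20), ("spine2", "leaf2"): (100, 30),
}
# make links bidirectional for the search
links = {**capacity, **{(b, a): c for (a, b), c in capacity.items()}}

def find_path(src, dst, flow_gbps):
    queue, visited = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for (a, b), (cap, used) in links.items():
            if a == path[-1] and b not in visited and cap - used >= flow_gbps:
                visited.add(b)
                queue.append(path + [b])
    return None

# A 12 Gbps UHD flow avoids the nearly full leaf1-spine1 link.
print(find_path("leaf1", "leaf2", 12))   # ['leaf1', 'spine2', 'leaf2']
```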

Lastly, Gerard looks at PTP, the Precision Time Protocol. ST 2110 relies on PTP-derived timestamps within each essence flow, allowing separately-carried audio and video to keep good lip-sync and avoiding phase errors when audio is mixed together (a domain where PTP has been used for some time). We see different architectures which include two grandmaster clocks (GMs), discuss whether boundary clocks (BCs) or transparent clocks (TCs) are the way to go, and examine the little security that is available to stop rogue endpoints taking charge and becoming grandmaster themselves.
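At its core, PTP is an exchange of four timestamps between the grandmaster (or a boundary clock) and the slave, from which the slave computes path delay and its clock offset. The standard arithmetic is shown below with illustrative values; it assumes a symmetric path, which is exactly what boundary and transparent clocks help preserve.

```python
# The four PTP timestamps: t1 = Sync sent by master, t2 = Sync received,
# t3 = Delay_Req sent by slave, t4 = Delay_Req received by master.
# Values below are illustrative, in seconds.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Assumes the network path delay is symmetric in each direction."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

offset, delay = ptp_offset_and_delay(t1=100.000000, t2=100.000150,
                                     t3=100.001000, t4=100.001050)
print(f"offset={offset*1e6:.1f}us, delay={delay*1e6:.1f}us")  # 50.0us, 100.0us
```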

Watch now!
Speaker

Gerard Phillips
Systems Engineer,
Arista

Video: Migrating to IP – Top Questions from Broadcasters


Moving to IP can be difficult. For some, it’s about knowing where to even start. For others, it’s a matter of understanding some of the details, which is the purpose of this talk from Leader US, looking at the top questions Leader has heard from its customer base:

  • How do we look at it?
  • How do we test it?
  • How is the data sent?
  • What is PTP?
  • How do we control it?
  • What is NMOS?
  • What are the standards involved?

These questions, and more, are covered in this webinar.

Steve Holmes from Leader US details the relevant IP basics, starting with the motivations: weight, cost, scale, density and independent essences. He then moves on to the next questions, covering RTP itself and how ST 2022-6 was built upon it. SMPTE ST 2022-6 splits up a regular SDI signal into sections and encapsulates them, uncompressed. This is one big difference from SMPTE ST 2110, where all essences are sent separately. For some, this separation is not a benefit: in general broadcast workflows it can be tricky getting the essences back into alignment, and some workflows are aimed at passing on an incoming bundle of PIDs, so being able to separate them is a backward step.
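Both ST 2022-6 and ST 2110 ride on RTP, so the first thing a receiver does with either is parse the fixed 12-byte RTP header to recover the sequence number and timestamp; it is only the payload interpretation that diverges between the two. A minimal parser for that common header is sketched below with a hand-crafted example packet.

```python
# Parse the fixed 12-byte RTP header shared by ST 2022-6 and ST 2110 streams.
import struct

def parse_rtp_header(packet: bytes):
    if len(packet) < 12:
        raise ValueError("too short to be an RTP packet")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # always 2 for RTP
        "marker": (b1 >> 7) & 0x1,     # e.g. marks end of frame in ST 2110-20
        "payload_type": b1 & 0x7F,
        "sequence_number": seq,        # used for loss detection / 2022-7 merge
        "timestamp": timestamp,        # media clock, aligned to PTP in ST 2110
        "ssrc": ssrc,
    }

# Hand-crafted example header, for illustration only.
example = bytes([0x80, 0x60, 0x12, 0x34, 0x00, 0x00, 0x03, 0xE8,
                 0xDE, 0xAD, 0xBE, 0xEF]) + b"payload"
print(parse_rtp_header(example))
```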

With this groundwork laid, Steve explains how seamless redundancy works with SMPTE ST 2022-7, going on to describe the difficulty of keeping jitter low and the importance of sender profiles in ST 2110. Steve finishes this section with a discussion of NMOS specifications such as IS-05 and IS-06. The session ends with a Q&A.
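To give a flavour of what IS-05 connection control looks like on the wire, the sketch below PATCHes a receiver’s staged endpoint to activate a connection. The device address, receiver and sender IDs are hypothetical, and the request body is trimmed to the essentials (a real workflow would normally also supply the sender’s SDP as a transport file).

```python
# Sketch of an NMOS IS-05 connection activation using the 'requests' library.
# The device address and UUIDs are hypothetical placeholders.
import requests

RECEIVER_URL = ("http://192.168.10.50/x-nmos/connection/v1.0/"
                "single/receivers/7f9b4a1c-0000-0000-0000-000000000000/staged")

payload = {
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
    "sender_id": "3d1c2b00-0000-0000-0000-000000000000",
}

resp = requests.patch(RECEIVER_URL, json=payload, timeout=5)
resp.raise_for_status()
print(resp.json())  # staged parameters echoed back by the device
```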

Watch now!
Speaker

Steve Holmes
Freelance consultant