Leaf & spine networks have been taking over data centres in recent years. It’s no secret that people prefer scale-out over scale-up solutions, and you can see a similar approach in ST 2110 networks, where large monolithic video switches are replaced with smaller leaf and spine switches.
Leaf and spine refers to networks where a number of main, high-throughput switches (the spines) link to a number of smaller switches (the leaves). These smaller switches tend to act as aggregators and offer the promise of cheaper ports delivered closer to your equipment. The alternative to leaf & spine is the monolithic switch, which does have its merits but is certainly not always the right choice.
To provide non-blocking switching in leaf & spine networks you need an SDN controller that orchestrates media flows. Advances in SDN capabilities have led to the emergence of “Purple” network architectures. In this video Gerard Phillips from Arista shows how it differs from a “Red/Blue” architecture, how path diversity is maintained and how ST 2110 IP live production or playout applications could benefit from it.
It’s important to be aware of the different uses of Layer 2 vs Layer 3:
• Layer 2 devices are typically used for audio networks like Dante and RAVENNA. A layer 2 network is a simple, scalable and affordable choice for audio flows, where bandwidth poses no challenge. However, this type of network doesn’t really work for high-bitrate live production video multicast, since every multicast stream has to be delivered to the IGMP querier, which doesn’t scale.
• Layer 3 networks distribute multicast management: PIM runs on each router to route multicast traffic, so the network is no longer flooded with unnecessary traffic. This type of network works well with high-bitrate video multicasts, but as IGMP is not bandwidth aware, it’s best to use an SDN system for flow orchestration.
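The point that “IGMP is not bandwidth aware” is the core argument for SDN orchestration. A minimal sketch of the idea (not any real SDN controller’s API; the class, link names and flow rates here are all illustrative assumptions) is a controller that tracks per-link usage and only admits a new multicast flow if every hop on its path has headroom:

```python
# Illustrative sketch: bandwidth-aware flow admission, which IGMP/PIM
# alone cannot do. All names and figures are assumptions for the example.

LINK_CAPACITY_BPS = 100_000_000_000  # assume 100 GbE links throughout

class FlowOrchestrator:
    def __init__(self):
        self.link_usage = {}  # (node_a, node_b) -> bits/sec in use

    def admit_flow(self, path, flow_bps):
        """Admit a flow only if every link on the path has headroom."""
        links = list(zip(path, path[1:]))
        for link in links:
            if self.link_usage.get(link, 0) + flow_bps > LINK_CAPACITY_BPS:
                return False  # would oversubscribe this hop: reject the flow
        for link in links:
            self.link_usage[link] = self.link_usage.get(link, 0) + flow_bps
        return True

orch = FlowOrchestrator()
uhd_flow = 12_000_000_000  # ~12 Gb/s for an uncompressed UHD essence (illustrative)
path = ["camera-leaf", "spine-1", "monitor-leaf"]
print(orch.admit_flow(path, uhd_flow))   # True: first flow fits easily
for _ in range(7):
    orch.admit_flow(path, uhd_flow)      # seven more flows -> 96 Gb/s in use
print(orch.admit_flow(path, uhd_flow))   # False: a ninth flow would exceed 100 Gb/s
```

A plain IGMP join succeeds regardless of link load; the difference here is that the controller refuses the join that would oversubscribe a hop, which is what makes non-blocking behaviour possible on a leaf & spine fabric.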
Gerard then looks at resilience:
- Using 2022-7 seamless switching (plus a robust monitoring system that can provide quick, accurate information to resolve the issue)
- Choosing quality components (switches, NOS, fibres etc.)
- Providing redundancy (redundant PSU, fans, fabric modules etc., redundant links between switches, ensuring that routing protocol or SDN can use these “spares”)
- Dividing up failure domains
- Using leaf and spine architecture (routing around failed components with SDN)
- Using resilient IP protocols (BGP, ECMP)
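On the last point, ECMP’s resilience comes from hashing each flow onto one of several equal-cost uplinks: if a spine fails, it drops out of the candidate set and flows re-hash onto the survivors. A toy sketch of that selection logic (the hash scheme and names are illustrative, not any switch vendor’s actual implementation):

```python
# Illustrative sketch of ECMP next-hop selection: a hash of the flow's
# 5-tuple deterministically picks one of several equal-cost uplinks.
import hashlib

def ecmp_next_hop(flow_tuple, uplinks):
    """Pick an uplink deterministically from the flow's 5-tuple."""
    key = "|".join(map(str, flow_tuple)).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return uplinks[digest % len(uplinks)]

uplinks = ["spine-1", "spine-2", "spine-3", "spine-4"]
flow = ("10.0.1.5", "239.1.1.1", 20000, 20000, "UDP")
print(ecmp_next_hop(flow, uplinks))  # same flow always maps to the same spine
# If spine-2 fails, it is removed from the set and flows re-hash:
print(ecmp_next_hop(flow, [u for u in uplinks if u != "spine-2"]))
```

Note that for large ST 2110 video flows, hash-based ECMP can still land multiple heavy flows on one link, which is another reason the talk pairs these protocols with SDN-pinned paths.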
The talk finishes up discussing the pros and cons of the different architectures available:
- Monolithic systems which are non-blocking, but have a wide failure domain
- Monolithic – expansion toward spine and leaf with SDN for non-blocking switching
- Leaf & spine with air-gapped Red and Blue networks
- Leaf & spine hybrid with Purple switches connected to both Red and Blue spines to support single homed devices
- Leaf & spine Purple. Here, red and blue flows are still connected to physically separate switches, but the switches are no longer identified as red and blue. This is a converged network, and an SDN controller is required to steer the two copies of each flow along diverse paths through two different spines.
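In the Purple case above, diversity is guaranteed by the controller rather than by switch colour. A hypothetical sketch of that responsibility (a simple round-robin assignment, purely for illustration; a real controller would also check link capacity and topology) pins the two ST 2022-7 copies of each flow to different spines:

```python
# Hypothetical sketch: in a Purple network the SDN controller, not the
# colour of the switch, must keep the two 2022-7 copies on disjoint spines.
def assign_diverse_paths(spines, flows):
    """Pin each flow's two copies to different spines (round-robin sketch)."""
    assignments = {}
    for i, flow in enumerate(flows):
        primary = spines[i % len(spines)]
        secondary = spines[(i + 1) % len(spines)]
        assert primary != secondary, "copies must never share a spine"
        assignments[flow] = (primary, secondary)
    return assignments

paths = assign_diverse_paths(["spine-1", "spine-2", "spine-3"],
                             ["cam1-video", "cam1-audio"])
print(paths)
# {'cam1-video': ('spine-1', 'spine-2'), 'cam1-audio': ('spine-2', 'spine-3')}
```

The invariant being enforced is the same one the air-gapped Red/Blue design provides physically: no single spine failure can take out both copies of a flow.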
You can download the slides from here.