Is SMPTE ST 2110 suitable for inter-site connectivity over the WAN? ST 2110 is moving past the early adopter phase, with more and more installations and OB vans bringing 2110 into daily use, but today each site works independently. What if we could maintain a 2110 environment between sites? A number of challenges still need to be overcome, and moving a large number of essence flows long distances, and between PTP time domains, is one of them.
Nevion’s Andy Rayner is chair of the VSF Activity Group looking into transporting SMPTE ST 2110 over the WAN and is here to give an update on the work in progress, which started 18 months ago. The presentation looks at how to move media between locations, which has been the primary focus to date, then introduces how control over which media are shared will be handled, which is new to the discussions. Andy starts by outlining the protection offered in the scheme, which supports both ST 2022-7 and FEC. Andy explains that though FEC is valuable for single links where 2022-7 isn’t viable, only some of the possible ST 2022-5 FEC configurations are supported, in part to keep latency low.
The headline to carrying 2110 over the WAN is that it will be done over a trunk. GRE, originally developed by Cisco and now an IETF standard, is a widely used trunking technology. Trunking, also known as tunnelling, is a technique for carrying ‘private’ traffic over a network such that a device sending into the trunk doesn’t see any of the infrastructure between the entrance and the exit. It allows, for instance, IPv6 traffic to be carried over IPv4 equipment where the v4 equipment has no idea about the v6 data, since it’s been wrapped in a v4 envelope. Similarly, the IPv6 equipment has no idea that the IPv6 data is being wrapped and carried by routers which don’t understand IPv6, since the wrapping and unwrapping of the data is done transparently at the handoff.
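The wrap-and-unwrap idea behind trunking can be sketched in a few lines. This is a minimal illustration of a basic GRE header (RFC 2784, simplest form with no checksum, key, or sequence flags); the constant and function names are our own, not from the talk:

```python
import struct

# EtherType identifying IPv6 as the protocol carried inside the tunnel.
GRE_PROTO_IPV6 = 0x86DD

def gre_encapsulate(inner_packet: bytes, proto: int = GRE_PROTO_IPV6) -> bytes:
    # Basic GRE header: a zero flags/version word, then the 16-bit
    # protocol type of the inner packet. The inner packet is untouched.
    header = struct.pack("!HH", 0x0000, proto)
    return header + inner_packet

def gre_decapsulate(outer_payload: bytes) -> tuple[int, bytes]:
    # The tunnel exit strips the 4-byte header and hands on the inner
    # packet; the endpoints never see the wrapper, which is why IPv6
    # can cross IPv4-only infrastructure transparently.
    _, proto = struct.unpack("!HH", outer_payload[:4])
    return proto, outer_payload[4:]
```

The transit routers only ever inspect the outer envelope, which is the property the 2110-over-WAN trunk relies on.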
In the context of SMPTE ST 2110, a trunk allows one port to be used to create a single connection to the destination, yet carry many individual media streams within. This has the big benefit of simplifying the inter-site connectivity at the IT level, but importantly it also means that the single connection is quite high bandwidth. When FEC is applied to a connection, the latency introduced increases as the bit rate reduces. Since ST 2110 carries audio and metadata separately, an FEC-protected stream would have variable latency depending on the type of traffic. Bundling them into one large data stream allows FEC to be applied once, and all traffic then suffers the same latency increase. The third reason is to ensure all essences take the same network path. If each connection were separate, it would be possible for some to be routed on a physically different route and therefore be subject to a different latency.
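The relationship between bit rate and FEC latency is easy to see with a little arithmetic. A row/column FEC scheme, such as those in SMPTE ST 2022-5, must buffer a full matrix of packets before repair data is complete, so the added latency is the time taken to transmit that matrix. The figures below are illustrative, not from the talk:

```python
def fec_buffer_latency_ms(rows: int, cols: int,
                          packet_size_bytes: int,
                          stream_mbps: float) -> float:
    # A rows x cols FEC matrix must be buffered in full before the
    # column/row parity is usable, so the added latency is the time
    # taken to send rows*cols packets at the stream's bit rate.
    matrix_bits = rows * cols * packet_size_bytes * 8
    return matrix_bits / (stream_mbps * 1e6) * 1e3
```

With a 10x10 matrix of 1,400-byte packets, a 2 Mbps audio stream waits 560 ms for the matrix to fill, while a 10 Gbps bundled trunk fills the same matrix in roughly 0.1 ms, which is why applying FEC once to the whole trunk keeps latency both low and uniform.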
Entering the last part of the talk, Andy switches gears to talk about how site A can control streams in site B. The answer is that it doesn’t ‘control’; rather, there is the concept of requesting streams. Site A will declare what is available and site B can state what it would like to connect to and when. In response, site A can accept and promise to have those sources available to the WAN interface at the right time. When the time is right, they are released over the WAN. This protects the WAN connectivity from being filled with media which isn’t actually being used. These exchanges are mediated and carried out with NMOS IS-04 and IS-05.
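To make the request-ahead idea concrete, here is a hedged sketch of the shape of an IS-05 staged-connection PATCH that the receiving site might issue, using a scheduled activation so the stream is only released at the agreed time. The field values and helper name are illustrative; consult the AMWA IS-05 specification for the authoritative schema:

```python
import json

def build_scheduled_connect(sdp_data: str, tai_time: str) -> str:
    # Sketch of an IS-05 /staged PATCH body. The activation mode
    # "activate_scheduled_absolute" asks the receiver to take the
    # connection live at a specific TAI timestamp rather than now,
    # matching the "promise sources for later" model in the talk.
    patch = {
        "master_enable": True,
        "activation": {
            "mode": "activate_scheduled_absolute",
            "requested_time": tai_time,   # e.g. "1620000000:0"
        },
        "transport_file": {"type": "application/sdp", "data": sdp_data},
    }
    return json.dumps(patch)
```

Until the requested time arrives, nothing flows over the WAN, which is the mechanism that stops the inter-site links filling up with unused media.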
NMOS is the open standard for multiple vendors co-operating on a broadcaster’s network, particularly ST 2110, to announce new devices and configure them. It acts both as a database and as a way of easily describing settings to be shared between systems. New ST 2110 systems are often specified to be NMOS IS-04 and IS-05 capable.
NMOS IS-04 is the name of the specification which defines discovery and registration of devices, while IS-05 describes the control of those devices. It’s very hard to run an SMPTE ST 2110 system without these, or a proprietary protocol which exchanges the same information; it’s simply not practical to manage these tasks manually at anything more than the smallest scale.
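As a rough sketch of what IS-04 registration involves: a node POSTs its resources to the registry, then keeps the entry alive with periodic heartbeats so stale devices disappear automatically. The registry URL and data fields here are assumptions for illustration; the AMWA IS-04 specification defines the real API:

```python
import json

# Hypothetical registry base URL for illustration only.
REGISTRY = "http://registry.example/x-nmos/registration/v1.3"

def registration_request(node_id: str) -> tuple[str, str]:
    # A node announces itself by POSTing a resource description to
    # the registry's /resource endpoint.
    body = json.dumps({"type": "node", "data": {"id": node_id, "version": "0:0"}})
    return f"{REGISTRY}/resource", body

def heartbeat_url(node_id: str) -> str:
    # Nodes POST here periodically; if heartbeats stop, the registry
    # garbage-collects the node and everything it registered, which is
    # what makes discovery self-maintaining at scale.
    return f"{REGISTRY}/health/nodes/{node_id}"
```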
John Mailhot from Imagine Communications delivers a concise summary of these technologies, which may be new to you. He explains that an SDP file will be generated for each stream and reviews how you would read one. John also explains that the stack is open source, with the aim of promoting interoperability.
John takes the time needed to look at IS-04 and IS-05 in terms of practically implementing it at the end of this short talk.
It’s hard to talk about SMPTE 2110 system design without hearing the term ‘spine and leaf’. It’s a fundamental decision that needs to be made early on in the project: how many switches will you use and how will they be interconnected? Deciding means accepting compromises, so what needs to be considered?
Chris Lapp from Diversified shares his experience in designing such systems. Monolithic design has a single switch at the centre of the network with everything connected directly to it. For redundancy, this is normally complemented by a separate, identical switch providing a second network. For networks which are likely to need to scale, monolithic designs can add a hurdle to expansion once they get full. Also, if there are many ‘low bandwidth’ devices, it may not be cost-effective to attach them. For instance, if your central switch has many 40Gbps ports, it’s a waste to use many to connect to 1Gbps devices such as audio endpoints.
The answer to these problems is spine and leaf. Chris explains that this is more resilient to failure and allows easy scaling whilst retaining a non-blocking network. These improvements come at a price, naturally. Firstly, it does cost more and, secondly, there is added complexity. In a large facility with endpoints spread out, spine and leaf may be the only sensible option. However, Chris explores a cheaper version of spine and leaf often called ‘hub and spoke’ or ‘hybrid’.
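Whether a leaf layer stays non-blocking comes down to simple arithmetic: the capacity of a leaf’s uplinks to the spine must match the total capacity of its endpoint-facing ports. A small sketch, with port counts chosen purely for illustration:

```python
def oversubscription_ratio(downlink_ports: int, downlink_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    # Ratio of endpoint-facing capacity to spine-facing capacity on one
    # leaf switch. A ratio <= 1.0 means the leaf is non-blocking: every
    # endpoint could burst at line rate towards the spine simultaneously.
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)
```

For example, a leaf with 48 x 10 Gbps endpoint ports and 6 x 40 Gbps uplinks is 2:1 oversubscribed, whereas doubling the uplinks to 12 x 40 Gbps makes it non-blocking. This is also why hanging 1 Gbps audio endpoints off a leaf, rather than burning 40 Gbps ports on a central switch, is the cost-effective choice.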
If you are interested in this topic, listen to last week’s video from Arista’s Gerard Philips, which talked in more detail about network design, covering the pros and cons of spine and leaf, control using IGMP and SDN, and PTP design, amongst other topics.
How does NDI fit into the recent refocussing of interest in working remotely, operating broadcast workflows remotely and moving workflows into the cloud? Whilst SRT and RIST have ignited imaginations over how to reliably ingest content into the cloud, an MPEG AVC/HEVC workflow doesn’t make sense due to the latencies. NDI is a technology with light compression with latencies low enough to make cloud workflows feel almost immediate.
Vizrt’s Ted Spruill and Jorge Dighero join moderator Russell Trafford-Jones to explore the challenges the pandemic has thrown up and the practical ways in which NDI can meet many of the needs of cloud workflows. We saw in the talk Where can SMPTE ST 2110 and NDI co-exist? how NDI is a tool to get things done, just like ST 2110, and that both have their place in a broadcast facility. This video takes that as read and looks at the practical abilities of NDI both in and out of the cloud.
Taking the form of a demo and then extensive Q&A, this talk covers latency, running NDI in the cloud, networking considerations such as layer 2 and layer 3 networks, ease of discovery and routing, contribution into the cloud, use of SRT and RIST, comparison with JPEG XS, speed of deployment and much more!