Video: RTMP: A Quick Deep-Dive

RTMP hasn’t left us yet, even though, between HLS, DASH, SRT and RIST, the industry is doing its best to get rid of it. When it was introduced, RTMP’s latency was seen as low and it became a de facto standard. Since it hasn’t gone away, it pays to take a little time to understand how it works.

Nick Chadwick from Mux is our guide in this ‘quick deep-dive’ into the protocol itself. To start off, he explains the history of the Adobe-created protocol to help put into context why it was useful and how the specification that Adobe published wasn’t quite as helpful as it could have been.

Nick then gives us an overview of the protocol, explaining that it’s TCP-based and allows for multiple, bi-directional streams. He explains that RTMP multiplexes large messages, say video, alongside very short data requests, such as RPC calls, by breaking the messages down into chunks which can be interleaved over just the one TCP connection. Multiplexing at the chunk level allows RTMP to ask the other end a question at the same time as it delivers a long message.
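
To make the chunking idea concrete, here is a minimal Python sketch of splitting a long message into chunks and interleaving them with a short RPC message. The 128-byte chunk size matches RTMP’s default, but the scheduling and message framing here are simplified illustrations, not the real wire format.

    # Split each message into chunks so neither blocks the other on the
    # single TCP connection. 128 bytes is RTMP's default chunk size.
    from itertools import zip_longest

    CHUNK_SIZE = 128

    def chunkify(stream_id, payload, chunk_size=CHUNK_SIZE):
        """Yield (stream_id, chunk) pieces of one message."""
        for i in range(0, len(payload), chunk_size):
            yield (stream_id, payload[i:i + chunk_size])

    video = chunkify(1, b"\x00" * 1000)   # a long video message
    rpc = chunkify(2, b"connect")         # a short RPC message

    # Naive round-robin scheduling: the RPC goes out after the first
    # video chunk instead of waiting behind all 1000 video bytes.
    wire = [c for pair in zip_longest(video, rpc) for c in pair if c]
    print([(sid, len(data)) for sid, data in wire[:4]])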

Nick has a great ability to make describing the protocol, and showing ASCII tables, accessible and interesting. We quickly start looking at the chunk header, what the different chunk types are and how the headers can be compressed to save bitrate. He also describes how the RTMP timestamp works and the control message and command message mechanisms. Before taking questions, Nick outlines the difficulty in extending RTMP to new codecs, due to its hard-coded list of permitted codecs, and recommends improvements to the protocol. It’s worth noting that this talk is from 2017. Whilst everything about RTMP itself will still be correct, it’s worth remembering that SRT, RIST and Zixi have since taken the place of many RTMP workflows.
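
For a flavour of that header compression, here is a small Python sketch of decoding the first byte of an RTMP chunk header. The two ‘fmt’ bits select how much message header follows (11, 7, 3 or 0 bytes), which is how later chunks in a stream avoid repeating information; the extended chunk stream IDs (csid values 0 and 1) are ignored here for brevity.

    # The chunk basic header packs fmt (2 bits) and the chunk stream id
    # (6 bits) into one byte. fmt chooses the message header length:
    # fmt 0 -> 11 bytes (full), 1 -> 7, 2 -> 3, 3 -> 0 (reuse previous).
    MESSAGE_HEADER_SIZES = {0: 11, 1: 7, 2: 3, 3: 0}

    def parse_basic_header(first_byte: int):
        fmt = first_byte >> 6        # top two bits
        csid = first_byte & 0x3F     # bottom six bits
        return fmt, csid, MESSAGE_HEADER_SIZES[fmt]

    # 0xC3: fmt 3, csid 3 - a continuation chunk with no header bytes,
    # common once a stream's full header has already been sent.
    print(parse_basic_header(0xC3))  # (3, 3, 0)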

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux

Video: Monolithic and Spine-Leaf Architectures

It’s hard to talk about SMPTE ST 2110 system design without hearing the term ‘spine and leaf’. It’s a fundamental decision that needs to be made early on in the project: how many switches will you use and how will they be interconnected? Deciding means accepting compromises, so what needs to be considered?

Chris Lapp from Diversified shares his experience in designing such systems. A monolithic design has a single switch at the centre of the network with everything connected directly to it. For redundancy, this is normally complemented by a separate, identical switch providing a second network. For networks which are likely to need to scale, monolithic designs can add a hurdle to expansion once the switch is full. Also, if there are many low-bandwidth devices, it may not be cost-effective to attach them directly: if your central switch has many 40Gbps ports, it’s a waste to use them to connect 1Gbps devices such as audio endpoints.

The answer to these problems is spine and leaf. Chris explains that this is more resilient to failure and allows easy scaling whilst retaining a non-blocking network. These improvements come at a price, naturally: it costs more and there is added complexity. In a large facility with endpoints spread out, spine and leaf may be the only sensible option. However, Chris explores a cheaper version of spine and leaf often called ‘hub and spoke’ or ‘hybrid’.
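
As a back-of-the-envelope illustration of the non-blocking claim, here is a quick oversubscription check in Python; the port counts and speeds are hypothetical, not figures from the talk.

    # Oversubscription ratio of a leaf switch: edge-facing capacity
    # versus uplink capacity to the spine. All figures illustrative.
    downlink_gbps = 48 * 10    # 48 x 10G ports facing endpoints
    uplink_gbps = 6 * 40       # 6 x 40G uplinks to the spine

    ratio = downlink_gbps / uplink_gbps
    print(f"{ratio:.1f}:1 oversubscribed")   # 2.0:1 -> can block
    # For uncompressed video, designs usually aim for 1:1 (non-blocking).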

If you are interested in this topic, listen to last week’s video from Arista’s Gerard Phillips, which talked in more detail about network design, covering the pros and cons of spine and leaf, control using IGMP and SDN, and PTP design, amongst other topics. Read more here.

Watch now!
Speakers

Chris Lapp
Project Engineer, SME Routing
Diversified
Wes Simpson
President, Telcom Product Consulting
Owner, LearnIPVideo.com

Video: CDN Trends in FPGAs & GPUs

As technology continues to improve, immersive experiences are ever more feasible. This video looks at how CDNs can play their part in enabling technologies which seem to rely on fast, local compute, where, as with many internet services, low latency is very important.

Greg Jones from Nvidia and Nehal Mehta from Intel give us the lowdown on what’s happening today to enable low-latency CDNs and what the future might look like. Intel, owner of FPGA maker Altera, and Nvidia are both interested in how their products can be of as much service at the edge as in the core data centres.

Greg is involved in XR development at Nvidia. ‘XR’ is a term which refers to an outcome rather than any specific technology. Ostensibly ‘eXtended’ reality, it includes some VR, some augmented reality and anything else which helps improve the immersive experience. Greg explains the importance of getting the ‘motion to photon’ delay to within 20ms. CDNs can play a role in this by moving compute to the edge, which tracks with the current trend of reducing backhaul; edge computation is already on the rise.
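
As a rough illustration of why the compute must sit near the user, here is a hypothetical motion-to-photon budget; the 20ms target comes from the talk, but every stage figure below is an assumption for the sake of the arithmetic.

    # A hypothetical latency budget for edge-rendered XR. Only the
    # 20ms total comes from the talk; stage timings are illustrative.
    budget_ms = 20
    stages_ms = {
        "sensor sampling": 2,
        "network to edge and back": 8,   # why the edge, not a far DC
        "render on GPU": 6,
        "decode and display": 3,
    }
    used = sum(stages_ms.values())
    print(f"{used}ms used, {budget_ms - used}ms spare")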

Greg also touches on the power improvements in newer GPUs. This is similar to what we heard the other day from Gerard Phillips from Arista, who said that switch manufacturers were still using technology that CPUs were on several years ago, meaning there’s plenty in the bank for speed increases over the coming years. According to Greg, the same is true for GPUs. Moreover, it’s important to compare compute per watt rather than in absolute terms.

Nehal Mehta explains that, in the same way that GPUs can offload certain tasks from the CPU, so can FPGAs. At scale, this can be critical for tasks like deep packet inspection, encryption or even dynamic ad insertion at the edge.

The second half of the video looks at what’s happening during the pandemic. Nehal explains that the need for encryption has increased, and Greg sees that large engineering functions are now, or are soon likely to be, done in the cloud. Greg sees XR as going a long way towards helping people collaborate around a large digital model, and it may help to reduce travel.

The last point made regards all-day video conferencing leaving people wanting “more meaningful interactions”. We are seeing attempts at richer and richer meeting experiences, both with and without XR.

Watch now!
Speakers

Greg Jones
Global Business Development, XR
NVIDIA
Nehal Mehta
Director, Visual Cloud, CDN Segment,
Intel
Moderator: Tim Siglin
Founding Executive Director,
Help Me Stream

Video: Network Design for Live Production

The benefits of IP sound great, but many are held back by real-life concerns: can we afford it? Can we plug the training gap? And how do we even do it? This video looks at the last of these: how do you deploy a network good enough for uncompressed video, audio and metadata? The network needs to deal with a large number of flows, many of which are high bandwidth. If you’re putting it to air, you need reliability and redundancy. You also need to distribute PTP timing, and to control and maintain the network.

Gerard Phillips from Arista talks to IET Media about the choices you need to make when designing your network. Gerard starts by reminding us of the benefits of moving to IP, the most tangible of which is the switching density possible. SDI routers can use a whole rack to switch over one thousand sources, but with IP, Gerard says, you can achieve a 4000×4000 router within just 7U. With increasingly complicated workflows and the increasing scale of some broadcasters, this density is a major motivating factor in the move. Doubling down on the density message, Gerard then looks at the difference in connectivity available, comparing SDI cables, which carry one signal per cable, to 400Gb links which can carry 65 UHD signals per link.
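
The density arithmetic is easy to sanity-check. In this sketch the ~6.1Gbps per-flow figure is an assumption chosen to be consistent with the ‘65 UHD signals’ quoted; real ST 2110-20 rates vary with frame rate, bit depth and sampling.

    # Rough flows-per-link arithmetic for an uncompressed UHD flow.
    link_gbps = 400
    uhd_flow_gbps = 6.1     # assumed per-flow rate; varies in practice
    print(int(link_gbps // uhd_flow_gbps), "UHD flows per 400G link")  # 65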

Audio is always ahead of video when it comes to IP transitions, so there are many established audio-over-IP protocols, many of which work at Layer 2 of the network stack. Using Layer 2 has great benefits because there is no routing, which means that discovering everything on the network is as simple as broadcasting a question and waiting for answers. This simple discovery is one reason for the ‘plug and play’ ease of NDI, whose discovery works at this level: it can use mDNS or similar to query the network and display the sources and destinations available within seconds. Layer 3-based protocols don’t have this luxury, as some resources can be on a separate network which won’t receive a discovery request that’s simply broadcast on the local network.
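
As a sketch of how quick link-local discovery can be, here is a browse using the python-zeroconf package; ‘_ndi._tcp.local.’ is the service type NDI sources are commonly advertised under, but treat both it and the timings as assumptions rather than a definitive recipe.

    # mDNS service discovery: one multicast question, answers from
    # anything on the local network. Requires `pip install zeroconf`.
    import time
    from zeroconf import ServiceBrowser, Zeroconf

    class Listener:
        def add_service(self, zc, type_, name):
            print("found:", name)
        def remove_service(self, zc, type_, name):
            print("gone:", name)
        def update_service(self, zc, type_, name):
            pass

    zc = Zeroconf()
    browser = ServiceBrowser(zc, "_ndi._tcp.local.", Listener())
    time.sleep(5)     # sources typically appear within seconds
    zc.close()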

Gerard examines the benefits of Layer 2 and explains how IGMP multicast works, detailing the need for an IGMP querier to sit in one location and receive all the traffic. This is a limiting factor in scaling a network, particularly with high-bandwidth flows. Layer 3, we hear, is the solution to this scaling problem, bringing with it more control over the size of ‘failure domains’ – how much of your network breaks if there’s a problem.
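
From a receiver’s point of view, joining a multicast flow is a single socket option; it is that join which generates the IGMP membership report the querier and switches act upon. A minimal Python sketch follows, with a hypothetical group address from the range commonly used for ST 2110 flows.

    # Ask the OS to join a multicast group; the kernel sends the IGMP
    # membership report and the network starts delivering the flow.
    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5004   # hypothetical ST 2110 flow

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Group address plus local interface (0.0.0.0 = let the OS choose)
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, addr = sock.recvfrom(2048)   # first packet of the flow
    print(len(data), "bytes from", addr)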

The next section of the video gets down to the meat of network design and explains the three main types of architecture: monolithic, hub and spoke, and spine and leaf. Gerard takes time to discuss the validity of all these architectures before discussing coloured networks. Two identical networks dubbed ‘red’ and ‘blue’ are often used to provide redundancy in SMPTE ST 2110 and similar uncompressed networks, the idea being that the source generates two identical streams and feeds them over these two identical networks. The receiver receives both streams and uses SMPTE ST 2022-7 to deal seamlessly with packet loss. Gerard then introduces ‘purple’ networks, where all switch infrastructure is in the same network and the network orchestrator ensures that the two essence flows from the source take separate routes through the infrastructure. This means that for each flow there is a ‘red’ and a ‘blue’ route, but overall each switch carries a mixture of ‘red’ and ‘blue’ traffic.
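
The principle behind ST 2022-7 protection is simple enough to sketch: keep the first copy of each packet to arrive, whichever network delivered it. Here packets are modelled as (sequence number, payload) tuples; a real receiver works on a timed buffer with wrapping RTP sequence numbers, so this is an illustration of the idea only.

    # Seamless protection: merge the red and blue copies of one flow,
    # dropping duplicates, so a loss on either network is hidden.
    def seamless_merge(red, blue):
        seen = set()
        for seq, payload in sorted(red + blue):
            if seq not in seen:
                seen.add(seq)
                yield seq, payload

    red = [(1, b"a"), (3, b"c")]     # packet 2 lost on the red network
    blue = [(1, b"a"), (2, b"b")]    # packet 3 lost on the blue network
    print(list(seamless_merge(red, blue)))   # all three packets recovered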

The beauty of using IGMP/PIM for managing traffic over your networks is that the network itself decides how the flows move over the infrastructure. This makes for a low-footprint, simple installation. However, without the ability to take into account individual link capacity, the capacity of the network in general, the bitrate of individual flows and the overall topology, there is very little control over where your traffic goes, which makes maintenance and fault-finding hard. More generally, what’s the right decision for one small part of the network is not necessarily the right decision for the flow or for the network as a whole. Gerard explains how Software-Defined Networking (SDN) addresses this and gives absolute control over the path your flows take.
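
To see what that extra knowledge buys, here is a toy path search of the kind an SDN controller might run; the topology, link headroom figures and flow rate are all hypothetical, and a real controller would then program the chosen path into the switches rather than just print it.

    # Capacity-aware path finding: only use links with enough headroom
    # for the new flow - something IGMP/PIM cannot do on its own.
    from collections import deque

    links = {   # (from, to): free capacity in Gbps (illustrative)
        ("cam1", "leaf1"): 25, ("leaf1", "spine1"): 3,
        ("leaf1", "spine2"): 100, ("spine1", "leaf2"): 100,
        ("spine2", "leaf2"): 100, ("leaf2", "mixer"): 25,
    }

    def find_path(src, dst, flow_gbps):
        """Breadth-first search over links that can carry the flow."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for (a, b), free in links.items():
                if a == path[-1] and b not in seen and free >= flow_gbps:
                    seen.add(b)
                    queue.append(path + [b])

    # A 12G flow is steered away from the nearly-full leaf1->spine1 link.
    print(find_path("cam1", "mixer", 12))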

Lastly, Gerard looks at PTP, the Precision Time Protocol. ST 2110 relies on PTP timestamps carried with each essence, allowing separate audio and video flows to keep good lip-sync and avoiding phase errors when audio is mixed together (an area where PTP has been used for some time). We see different architectures which include two grandmaster clocks (GMs), discuss whether boundary clocks (BCs) or transparent clocks (TCs) are the way to go, and examine the little security that is available to stop rogue endpoints taking charge and becoming grandmaster themselves.
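
The election Gerard warns about is easy to model. PTP’s Best Master Clock Algorithm compares Announce message fields in a fixed order, lower values winning, so without authentication an endpoint announcing priority1=0 beats a legitimate grandmaster. This is a simplified sketch with illustrative values, not a full BMCA implementation.

    # A simplified Best Master Clock comparison: fields are compared
    # in order and the lower tuple wins the election.
    FIELDS = ["priority1", "clock_class", "accuracy", "variance",
              "priority2", "clock_id"]

    def better(a, b):
        """Return whichever clock BMCA prefers."""
        key = lambda c: tuple(c[f] for f in FIELDS)
        return a if key(a) < key(b) else b

    gm = dict(priority1=128, clock_class=6, accuracy=0x21,
              variance=0x4E5D, priority2=128, clock_id=1)
    rogue = dict(priority1=0, clock_class=248, accuracy=0xFE,
                 variance=0xFFFF, priority2=128, clock_id=99)
    print("elected:", better(gm, rogue)["clock_id"])   # the rogue wins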

Watch now!
Speaker

Gerard Phillips
Systems Engineer,
Arista