Video: Uncompressed Video in the Cloud

Moving high-bitrate flows such as uncompressed media through cloud infrastructure, which is designed for scale rather than real-time throughput, requires more thought than simply using UDP and multicast. That traditional approach can certainly work, but it is liable to drop the occasional packet, compromising the media.

In this video, Thomas Edwards and Evan Statton outline the work underway at Amazon Web Services (AWS) for reliable real-time delivery. On-prem 2110 network architectures usually have two separate networks. Media essences are sent as single, high-bandwidth flows over both networks, allowing the endpoint to use SMPTE ST 2022-7 seamless switching to deal with any lost packets. Network architectures in the cloud differ from on-prem networks: they are usually much wider and taller, providing thousands of possible paths to any one destination.


AWS have been working to find ways of harnessing the cloud network architectures and have come up with two protocols. The first to discuss is Scalable Reliable Delivery, SRD, a protocol created by Amazon which guarantees delivery of packets. Delivery is likely to be out of order, so packet order needs to be restored by a layer above SRD. Amazon have custom network cards called ‘Nitro’ and it’s these cards which run the SRD protocol to keep the functionality as close to the physical layer as possible.

SRD capitalises on hyperscale networks by splitting each media flow up into many smaller flows. A high-bandwidth uncompressed video flow could be over 1 Gbps; SRD delivers this over a hundred or more ‘flowlets’, each leaving on a different path. Paths are partially controlled using ECMP (Equal Cost Multipath) routing, whereby the egress port used on a switch is chosen by hashing together a number of parameters such as the source IP and destination port. The sender controls the ECMP path selection by manipulating the packet encapsulation. SRD employs a specialised congestion control algorithm that further decreases the chance of packet drops and minimises retransmit times by keeping queuing to a minimum. SRD keeps an eye on the RTT (round trip time) of each of the flowlets and adjusts their bandwidth appropriately. This is particularly useful for dealing with ‘incast congestion’, where upstream many flowlets may end up going through the same interface which is close to being overloaded. In this way, SRD actively works to reduce latency and congestion. SRD also keeps a very small retransmit buffer so that any packets which do get lost can be resent. Similar to SRT and RIST, SRD expects to receive acknowledgement packets, and by looking at when these arrive and the timing between packets, RTT and bandwidth estimations can be made.
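
To make the flowlet idea concrete, below is a minimal sketch in Python, assuming a simple hash-based ECMP model rather than anything AWS has published; the path count, flowlet count, addresses and ports are all illustrative. It shows how varying the encapsulation per flowlet (here just the UDP source port) steers each flowlet onto a different equal-cost path through the fabric.

    import hashlib

    NUM_EQUAL_COST_PATHS = 64      # illustrative fabric width, not an AWS figure
    NUM_FLOWLETS = 100             # e.g. a >1 Gbps flow split a hundred ways

    def ecmp_path(src_ip, dst_ip, src_port, dst_port):
        # Switches typically hash header fields to pick an egress port;
        # this models that choice as a stable hash over the flow identifiers.
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
        return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_EQUAL_COST_PATHS

    # One media flow re-encapsulated as many flowlets: same endpoints, but a
    # different source port per flowlet, so each hashes to a (mostly) different path.
    paths = {ecmp_path("10.0.0.1", "10.0.0.2", 40000 + i, 5004) for i in range(NUM_FLOWLETS)}
    print(f"{NUM_FLOWLETS} flowlets spread over {len(paths)} distinct paths")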

CDI, the Cloud Digital Interface, is a layer on top of SRD which acts as an interface for programmers. Available on GitHub under a BSD licence, it gives access to the incoming essence streams in a way similar to SMPTE ST 2110, making it easy to deal with pixel data, access RGB graphics including an alpha layer, and handle metadata such as subtitles or SCTE 104 signalling.
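
As a rough illustration of the programming model this offers (complete essences delivered to a callback rather than raw packets to be reassembled), here is a hedged sketch. The class and function names are hypothetical stand-ins, not the real CDI SDK API, which is written in C and published at github.com/aws/aws-cdi-sdk.

    from dataclasses import dataclass

    @dataclass
    class VideoFrame:
        width: int
        height: int
        pixel_format: str          # e.g. "RGBA" when an alpha layer is carried
        payload: bytes

    def on_payload(frame, ancillary):
        # Application callback: whole frames arrive reassembled and in order,
        # with ancillary data (e.g. subtitles, SCTE 104) alongside.
        print(f"{frame.width}x{frame.height} {frame.pixel_format} frame,",
              "ancillary:", list(ancillary))

    class HypotheticalCdiReceiver:     # stand-in, not the real CDI API
        def __init__(self, port, callback):
            self.port, self.callback = port, callback

        def deliver(self, frame, ancillary):
            # The real SDK receives payloads over SRD via the Nitro card;
            # here we simply invoke the callback to show the shape of the interface.
            self.callback(frame, ancillary)

    rx = HypotheticalCdiReceiver(port=2000, callback=on_payload)
    rx.deliver(VideoFrame(1920, 1080, "RGBA", b"\x00" * 16), {"SCTE104": b""})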

Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Evan Statton
Principal Architect,
Amazon Web Services (AWS)

Video: The Future Impact of Moore’s Law on Networking

Many feel that Moore’s law has lost its way when it comes to CPUs since we’re no longer seeing a doubling of chip density every two years. This change is tied to the difficulty in shrinking transistors further when their size is already close to some of the limits imposed by physics. In the networking world, transistors are bigger, which is allowing significant growth in bandwidth to continue. In recent years we have tracked the rise of 1GbE, which made way for 10GbE, 40GbE and 100GbE networking. We’re now seeing general availability of 400Gb with 800Gb firmly on the near-term roadmaps as computation within SFPs and switches increases.

In this presentation, Arista’s Robert Welch and Andy Bechtolsheim explain how 400GbE interfaces are made up, give insight into 800GbE and talk about deployment possibilities for 400GbE both now and in the near future harnessing in-built multiplexing. It’s important to realise that the high-capacity links we’re used to today, of 100GbE or above, are delivered by combining multiple lower-bandwidth Ethernet links, known as lanes. 4x25Gb lanes give a 100GbE interface and 8x50Gb lanes provide 400GbE. The route to 800GbE, then, is either to increase the number of lanes or to bump the speed of each lane. The latter is the chosen route, with 8x100Gb lanes in the works for 2022/2023.
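
The lane arithmetic is simple enough to show directly. The Python lines below use the commonly quoted lane speeds rather than figures from the talk itself.

    def interface_speed_gbps(lanes, lane_speed_gbps):
        # An Ethernet interface's headline rate is the sum of its electrical lanes.
        return lanes * lane_speed_gbps

    print(interface_speed_gbps(4, 25))    # 100GbE from 4 x 25G lanes
    print(interface_speed_gbps(8, 50))    # 400GbE from 8 x 50G lanes
    print(interface_speed_gbps(8, 100))   # 800GbE from 8 x 100G lanes, expected 2022/2023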


One downside of using lanes is that you will often need to break these out into individual fibres, which is inconvenient and erodes the cost savings. Robert outlines the work being done to bring wavelength multiplexing (DWDM) into SFPs so that multiple wavelengths are sent down one fibre rather than using multiple fibres. This allows a single fibre pair to be used, much simplifying cabling and maintaining compatibility with the existing infrastructure. DWDM is very powerful as it can deliver 800Gb over distances of over 5,000km, or 1.6Tb over 1,000km. It also allows you to have full-bandwidth interconnects between switches. Long-haul SFPs with DWDM built in are called OSFP-LS transceivers.

Cost per bit is the religion at play here, with the hyperscalers keenly buying into 400Gb technology because it is only twice, not four times, the price of the 100Gb technology it’s replacing. The same is true of 800Gb. The new interfaces will run the ASICs faster and so will need to dissipate more heat. This has led to two longer form factors, the OSFP and QSFP-DD. The OSFP is a little larger than the QSFP but an adaptor can be used to maintain QSFP form-factor compatibility.

Andy explains that 800Gb Ethernet has been finished by the Ethernet Technology Consortium and is going into 51.2T silicon which will allow channels of native 800Gb capacity. This is somewhat in the future, though, and Andy says that in the same way 25G has worked well for us over the last 5 years, 100G is where the focus is for the next 5. Andy goes on to look at what a future 800G chassis might look like, saying that in 2U you would expect 64 800G OSFP interfaces which could provide 128 400G outputs or 512 100G outputs with no co-packaged optics required.
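
As a quick sanity check of the chassis arithmetic, 64 ports of 800G give 51.2Tbps of front-panel capacity, which divides into exactly the 400G and 100G breakout counts quoted; a few illustrative Python lines below.

    PORTS, PORT_GBPS = 64, 800
    total_gbps = PORTS * PORT_GBPS            # 51,200 Gbps = 51.2 Tbps
    print(total_gbps / 1000)                  # 51.2
    print(total_gbps // 400)                  # 128 x 400G breakouts
    print(total_gbps // 100)                  # 512 x 100G breakouts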

Watch now!
Speakers

Robert Welch
Technical Solutions Lead,
Arista
Andy Bechtolsheim
Chairman, Chief Development Officer and Co-Founder,
Arista Networks

Video: Live Media Production – The Ultimate End Game

A lot of our time on this website is devoted to understanding the changes we are going through now, but we don’t adopt technology for the sake of it. Where’s this leading and what work is going on now to forge our path? Whilst SMPTE ST 2110 and the associated specifications aren’t yet a mature technology in the sense that SDI is, we’re past the early adopter phase and we can see which of the industry’s needs aren’t yet met.

Andy Rayner from Nevion is here to help us navigate the current technology space and understand the future he and Nevion envision. The beginning of the video shows the big change in process from the workflows of the 90s, where the TV station effectively moved to the sports event, to now, where we bring the event to the broadcaster: a light connectivity truck turns up and deploys cameras at the event, leaving most people either at home or back at base doing the production there. Andy has been involved in a number of implementations enabling this, such as at Discovery’s Eurosport, where the media processing is done in two locations separate from the production rooms around Europe.


Generalising from the Discovery case study, Andy shows a vision of how many companies will evolve their workflows, which includes using 5G, public and private clouds as appropriate, and control surfaces at home. To get there, Andy lays out the work within AMWA and SMPTE creating the specifications and standards that we need. He then shows how, with the increasing use of IT in live production, the already IT-based NLE workflows are able to integrate much better.

Looking to the future, Andy explains the ongoing work to specify a standard way of getting video into and out of the cloud, including specifying a way of carrying 2110 on the WAN, helping RIST and formalising the use of JPEG XS. Andy anticipates a more standardised future where a best-of-breed system is possible down to individual logical components, where functions like ‘video keyer’ and ‘logo insertion’ could be provided by separate software yet seamlessly integrate. Lastly, Andy promises us that work is underway to improve timing within 2110 and 2110-associated workflows.

Watch now!
Speaker

Andy Rayner
Chief Technologist,
Nevion

Video: A Review of the IP Live Core Implementation in BBC Cymru Wales

Whenever there’s a step change in technology, we need early adopters, and moving to SMPTE’s ST 2110 is no exception. Not only do early adopters help show that the path ahead is good, but they often do a lot to beat down the bushes and make the path easier to pass for all that follow. For larger companies whose tech refresh or building move comes just as the industry is facing a major technology change, there comes a point when, whilst the ground ahead may not yet be firm, the company can’t justify investing in technology that would soon be out of date or which won’t support its needs in several years’ time. This is just the situation that BBC Cymru Wales found themselves in when it was time to move out of their old property into a purpose-built national HQ in the heart of Cardiff.

In this video from the IP Showcase, Mark Patrick and Dan Ashcroft guide us through the ‘whys’ and the ‘hows’ of the relocation project. It’s important to remember that this project was long in the making, with the decision on location taking place in 2014 and the technology decisions in 2016 and 2017. The project took an open approach to the IP/SDI question and asked for RFP responses to include a fully-SDI and a fully-IP option. It was clear during the selection process that IP was the way to go, not because the solution was cheaper in the short term, but because it was much more future-proof and the costs would come down over time, giving a much better total cost of ownership. Don’t forget that the initial costs of HD video equipment were much higher than they are now. For more on the pros and cons of SDI, watch ‘Is IP really better than SDI?‘ by Ed Calverley.


Mark and Dan talk through the thinking behind the IP choice and their decision to pick a vendor who would be their partner in the project, the theory being that, given the standards were still very young, it would be important to work closely together to ensure success. In addition to Grass Valley equipment, they chose a Cisco network with Cisco SDN control and operational control by BNCS. The talk references architectures we’ve featured on The Broadcast Knowledge before, with Arista’s Gerard Phillips discussing the dual-network spine-leaf architecture chosen, and notes the difficulty they had incorporating the Dante network into the 2110 infrastructure and their choice of a third network purely for control traffic.

We often hear about the importance of PTP in an SMPTE ST 2110 network for live production because it is vital to keep all the essences in sync. For more information about the basics of ST 2110, check out this talk by Wes Simpson. PTP is both simple and complex, so Mark explains how they’ve approached distributing PTP, ensuring that the separate networks, amber and blue, can share PTP grandmasters for resilience.

Other topics covered in the talk include

  • Control Methodology
  • AES67 and Dante
  • Testing equipment
  • JT-NM interoperability testing
  • Successes and difficulties

Watch now!
Speakers

Mark Patrick
Lead Architect,
BBC
Dan Ashcroft
Senior Project Manager,
BBC
Moderator: Wes Simpson
LearnIPVideo.com