Video: Uncompressed Video in the Cloud

Moving high-bitrate flows such as uncompressed media through cloud infrastructure, which is designed for scale rather than real-time throughput, requires more thought than simply using UDP and multicast. That traditional approach can certainly work, but it is liable to drop the occasional packet, compromising the media.

In this video, Thomas Edwards and Evan Statton outline the work underway at Amazon Web Services (AWS) for reliable real-time delivery. On-prem ST 2110 architectures usually have two separate networks. Media essences are sent as single, high-bandwidth flows over both networks, allowing the endpoint to use SMPTE ST 2022-7 seamless switching to deal with any lost packets. Cloud network architectures differ from on-prem networks: they are usually much wider and taller, providing thousands of possible paths to any one destination.

AWS have been working to find ways of harnessing these cloud network architectures and have come up with two protocols. The first is Scalable Reliable Delivery (SRD), a protocol created by Amazon which guarantees delivery of packets. Delivery is likely to be out of order, so packet order needs to be restored by a layer above SRD. Amazon have custom network cards called ‘Nitro’ and it’s these cards which run the SRD protocol, keeping the functionality as close to the physical layer as possible.
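
To make that reordering idea concrete, here is a minimal sketch in Python of the kind of sequence-number-based reorder buffer a layer above SRD would need. It is illustrative only, not AWS’s implementation.

```python
# Minimal sketch of a reordering layer above an out-of-order transport
# such as SRD (illustrative only, not AWS's implementation).

class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0    # next sequence number to release in order
        self.pending = {}    # out-of-order packets awaiting release

    def push(self, seq, payload):
        """Accept a packet in any order; return in-order payloads, if any."""
        self.pending[seq] = payload
        released = []
        while self.next_seq in self.pending:
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released

buf = ReorderBuffer()
for seq, data in [(1, b"B"), (0, b"A"), (2, b"C")]:
    for payload in buf.push(seq, data):
        print(payload)   # prints b"A", b"B", b"C" in order
```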

SRD capitalises on hyperscale networks by splitting each media flow into many smaller flows. A high-bandwidth uncompressed video flow could be over 1 Gbps; SRD delivers this over a hundred or more ‘flowlets’, each leaving on a different path. Paths are partially controlled using ECMP (Equal Cost Multipath) routing, whereby the egress port used on a switch is chosen by hashing together a number of parameters such as the source IP and destination port. The sender controls the ECMP path selection by manipulating the packet encapsulation.

SRD employs a specialised congestion control algorithm which further decreases the chance of packet drops and minimises retransmit times by keeping queuing to a minimum. SRD monitors the RTT (round trip time) of each flowlet and adjusts its bandwidth appropriately. This is particularly useful for dealing with ‘incast congestion’, where many flowlets converge upstream on the same, nearly overloaded interface. In this way, SRD actively works to reduce latency and congestion. SRD also keeps a very small retransmit buffer so that any packets which do get lost can be resent. Like SRT and RIST, SRD expects to receive acknowledgement packets; from when these arrive and the timing between them, it can estimate RTT and available bandwidth.
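
The flowlet trick relies on the ECMP hash: change one field of the header and the switch picks a different egress port. The sketch below is a toy illustration of that behaviour, using CRC32 in place of the vendor-specific hash functions real switches use.

```python
# Illustrative only: how varying one field of the 5-tuple (here the UDP
# source port) changes an ECMP-style hash and therefore the egress link.
# Real switches use vendor-specific hashes; zlib.crc32 stands in here.
import zlib

def ecmp_egress_link(src_ip, dst_ip, src_port, dst_port, num_links):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_links

# One >1 Gbps flow split into 100 'flowlets', each with a distinct
# source port, so the hash spreads them across the available links.
links_used = set()
for flowlet in range(100):
    links_used.add(ecmp_egress_link("10.0.0.1", "10.0.0.2",
                                    40000 + flowlet, 5004, num_links=8))
print(sorted(links_used))  # typically covers most or all of the 8 links
```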

CDI, the Cloud Digital Interface, is a layer on top of SRD which acts as an interface for programmers. Available on GitHub under a BSD licence, it gives access to the incoming essence streams in a way similar to SMPTE ST 2110, making it easy to deal with pixel data, to access RGB graphics including an alpha layer, and to receive metadata such as subtitles or SCTE 104 signalling.

Watch now!
Speakers

Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Evan Statton
Principal Architect,
Amazon Web Services (AWS)

Video: TV moving to all IP – Dream or Reality?

As IP continues to infiltrate all aspects of the broadcast industry, this panel asks how close we are to all-IP TV delivery, what that would actually mean, and what technologies exist to get us there. As we’ve seen in contribution and production, IP brings benefits to those that embrace it, but not all of those benefits apply to every business, so this panel considers where the real value actually lies.

Pedro Bandeira from Deutsche Telekom, Rob Suero from RDK and Xavier Leclercq from Broadpeak join Wyplay’s Dominique Feral in this discussion moderated by Andreas Waltenspiel. The discussion starts with the motivations to move to IP, with Pedro explaining that the services he delivers are viewed alongside the big internet-delivered services like Netflix. As such, he needs access to the same technologies and sees a lot of innovation in that sphere. This is why he advocates a move away from multicast delivery of video to unicast: delivering with exactly the same technologies the giants are using.

For Pedro, streaming technology is an enabler, not a differentiator. As the foundation of his service, he wants it to be rock solid, so he feels the choice of partners to provide the technology is very important: he intends to benefit from incremental improvements as the base technologies improve. Part of the flexibility that unicast technologies provide, says Pedro, is removing the baggage of older technologies, which he sees as a burden when he wants the same service and quality of experience on devices as well as STBs.

Xavier from Broadpeak feels that multicast, or specifically Multicast-ABR, is a really interesting technology because of the very scalability and network efficiency that Pedro is willing to sacrifice to access other streaming technologies. Multicast-ABR delivers to the home as multicast, so the impact on the telco network is minimised; only in the home is the service translated into a standard stream like HLS or DASH. In principle this allows companies like Deutsche Telekom to use the technologies Pedro is interested in whilst also delivering with network efficiency.
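
As a rough sketch of that in-home translation step, the Python below joins a multicast group, stores incoming datagrams as ‘segments’ and re-serves them over plain HTTP for ordinary players. The group address, port and one-datagram-per-segment framing are hypothetical simplifications; real Multicast-ABR systems are considerably more sophisticated.

```python
# Hedged sketch of the Multicast-ABR idea: a home gateway receives
# segments over multicast and re-serves them to players as plain HTTP.
# Group address, port and framing are hypothetical simplifications.
import socket
import struct
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

GROUP, PORT = "239.1.1.1", 5004   # illustrative multicast group
segments = {}                     # segment name -> bytes

def multicast_receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    seq = 0
    while True:                   # pretend each datagram is one segment
        data, _ = sock.recvfrom(65535)
        segments[f"seg{seq}.ts"] = data
        seq += 1

class SegmentHandler(BaseHTTPRequestHandler):
    def do_GET(self):             # players fetch /segN.ts as ordinary HTTP
        body = segments.get(self.path.lstrip("/"), b"")
        self.send_response(200 if body else 404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

threading.Thread(target=multicast_receiver, daemon=True).start()
HTTPServer(("127.0.0.1", 8080), SegmentHandler).serve_forever()
```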

“A great technology for transitioning” is Pedro’s view of Multicast-ABR. If we had the bandwidth, he feels, no one would bother using it. However, he does agree that it’s useful in those markets where the infrastructure can’t support a pure unicast offering, and he does see Multicast-ABR being part of his delivery strategy. He would prefer to avoid it, as it requires home gateways and vendor support as well as being another point of failure. With 50 million homes in Europe on IPTV, there are plenty of services to transition.

The conversation then turns to RDK, the generically titled Reference Development Kit. This is an open-source project, Rob explains, which abstracts the creation of new OTT apps and services from the underlying vendor equipment, meaning you don’t have to develop software for each and every device. Removing the ties to OEMs keeps costs down for operators and allows them to be more agile. Dominique explains that while writing with RDK may be free, that doesn’t mean it’s easy, and points to an experience where Wyplay shaved 6 seconds of latency off a customer’s service by optimising the way the app was written. At the end of the day, Dominique sees the route to a good, low-latency service as a fight with all aspects of the system: the encoder, the packaging protocol, the CDN, DRM latency and much more. This means optimising RDK is just one part of a wide package of services that companies like Wyplay can offer.

The panel concludes by talking about learning RDK, upskilling colleagues, bringing them along on the journey to all-IP and offering advice to those embarking on projects.

Watch now!
Speakers

Pedro Bandeira
VP Product & New Business, Europe,
Deutsche Telekom
Rob Suero
Head of Technology,
RDK
Dominique Feral
Chief Sales & Marketing Officer,
Wyplay
Xavier Leclercq
Head of Business Development,
Broadpeak
Moderator: Andreas Waltenspiel
Founder & GM,
Waltenspiel Management Consulting

Video: Proper Network Designs and Considerations for SMPTE ST-2110

Networks for SMPTE ST 2110 systems can be fairly simple, but the simplicity achieved hides a whole heap of careful considerations. By asking the right questions at the outset, a flexible, scalable network can be built with relative ease.

“No two networks are the same,” cautions Robert Welch from Arista as he introduces the questions he asks at the start of designing a network to carry professional media such as uncompressed audio and video. His thinking focusses on the network interfaces (NICs) of the devices: How many are there? Which receive PTP? Which are for management, and how should out-of-band/ILO access be managed? All of these answers then feed into the workflows that are needed, influencing how the rest of the network is created. The philosophy is to work backwards from the end-nodes that receive the network traffic.

Robert then shows how these answers influence the different networks at play. For resilience, it’s common to have two separate networks sending the same media to each end-node. Each node then uses ST 2022-7 to take the packets it needs from both networks, as sketched below. This isn’t always possible, as some devices only have one interface or simply lack 2022-7 support. Sometimes equipment has two management interfaces, and that too can feed into the network design.
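
Here is a minimal sketch of that receive-side idea: keep the first-arriving copy of each RTP sequence number from either network and discard the duplicate. Real 2022-7 receivers also bound this with a hitless-merge buffer and window, which this toy omits.

```python
# Minimal sketch of ST 2022-7 seamless protection at the receiver:
# identical RTP streams arrive over two networks, and the first copy
# of each sequence number wins, so a loss on one network is covered
# by the other. (Real receivers bound the 'seen' state with a window.)

class SeamlessMerger:
    def __init__(self):
        self.seen = set()        # RTP sequence numbers already delivered

    def receive(self, seq, payload):
        if seq in self.seen:
            return None          # duplicate copy, already delivered
        self.seen.add(seq)
        return payload           # first-arriving copy wins

m = SeamlessMerger()
# Blue network drops packet 2; amber's copy fills the gap.
arrivals = [(1, "blue"), (1, "amber"), (2, "amber"), (3, "blue"), (3, "amber")]
for seq, network in arrivals:
    if m.receive(seq, f"pkt{seq}".encode()) is not None:
        print(seq, "delivered from", network)
```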

PTP is an essential service for professional media networks, so Robert discusses some aspects of its implementation. When you have two networks delivering the same media simultaneously, they will both need PTP. For resilience, a network should operate with at least two grandmasters, and usually two is the best number. Ideally, your two media networks will have no connection between them except for PTP, whereby the amber network can benefit from the PTP from the blue network’s grandmaster. Robert explains how to make this link a pure PTP-only link, stopping it from leaking other information between networks.
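
Underpinning all of this is the standard IEEE 1588 timestamp exchange. As a worked example, the clock offset and mean path delay fall out of four timestamps, assuming a symmetric path; the values below are illustrative nanoseconds.

```python
# The standard IEEE 1588 delay-request/response arithmetic every PTP
# slave performs: four timestamps yield clock offset and mean path
# delay, assuming a symmetric path. Values are illustrative nanoseconds.
t1 = 1_000   # master sends Sync
t2 = 1_650   # slave receives Sync (slave clock)
t3 = 2_000   # slave sends Delay_Req (slave clock)
t4 = 2_450   # master receives Delay_Req

offset = ((t2 - t1) - (t4 - t3)) / 2   # slave ahead of master: 100.0 ns
delay  = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay: 550.0 ns
print(offset, delay)
```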

Multicast is a vital technology for 2110 media production, so Robert looks at its incarnation at both layer 2 and layer 3. At layer 2, multicast is handled using multicast MAC addresses. It works well with snooping and a querier, except when it comes to scaling up to a large network or using a number of switches; Robert explains that this is because all multicast traffic needs to be sent through the rendezvous point. If you would like more detail on this, check out Arista’s Gerard Phillips’ talk on network architecture.
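
Part of the layer-2 scaling problem comes from the IP-to-MAC mapping itself: only the low 23 bits of the group address survive, so 32 different IPv4 groups share each multicast MAC. A quick sketch of the mapping:

```python
# How an IPv4 multicast group maps to a layer-2 multicast MAC: the
# fixed prefix 01:00:5e plus the low 23 bits of the group address.
# Since 5 bits are lost, 32 different groups share each MAC, one
# reason layer-2 snooping scales worse than layer-3 multicast routing.
import ipaddress

def multicast_mac(group):
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("239.1.1.1"))    # 01:00:5e:01:01:01
print(multicast_mac("238.129.1.1"))  # same MAC: the top 5 bits are lost
```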

Looking at JT-NM TR-1001, the guidelines outlining best practices for deploying 2110 and associated technologies, Robert explains that multicast routing at layer 3 much improves stability and enables resiliency and scalability. He also takes a close look at the difference between the ‘any source’ multicast supported by IGMP version 2 and the ability to filter for only specific sources using IGMP version 3, illustrated below.
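
The difference shows up directly at the socket API. This hedged sketch contrasts an ‘any source’ join with a source-specific one; the fallback constant 39 and the struct layout match Linux, other platforms differ, and the addresses are illustrative.

```python
# IGMPv2-style 'any source' join versus an IGMPv3 source-specific join.
# The fallback constant 39 (IP_ADD_SOURCE_MEMBERSHIP) and struct layout
# match Linux; other platforms differ. Addresses are illustrative.
import socket
import struct

GROUP, SOURCE, PORT = "239.1.1.1", "10.0.0.5", 5004

def open_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    return s

# 'Any source': traffic for the group is accepted from every sender.
asm = open_socket()
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
asm.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Source-specific: only SOURCE's traffic for the group is accepted.
# Linux ip_mreq_source layout is (group, interface, source).
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)
ssm = open_socket()
mreq_src = struct.pack("4s4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"),
                       socket.inet_aton(SOURCE))
ssm.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq_src)
```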

Finishing off, Robert talks about the difficulties in scaling PTP, since all the replies and requests go into the same multicast group, which means that as the network scales, so does the traffic in that group. This can be a problem for lower-end gear, which has to process and reject a lot of traffic.

Watch now!
Speaker

Robert Welch
Technical Solutions Lead,
Arista Networks

Video: Case Study – ST 2110 4K OB Van for AMV

Systems based on SMPTE ST 2110 continue to come online throughout the year and, as they do, it’s worth seeing the choices made to bring them about. We recently featured a project building two OB trucks and how the team worked around COVID-19 to deliver them. Today we’re looking at an OB truck based on Grass Valley and Cisco equipment.

Anup Mehta and Rahul Parameswaran from Cisco join the VSF’s Wes Simpson to explain their approach to getting ST 2110 working in a scalable truck for All Mobile Video. The brief was to deliver a truck based on NMOS control, maximal COTS equipment, and flexible networking with scalable PTP and security.

Thinking back to yesterday’s talk on network architecture, we recognise the ‘hub and spoke’ architecture in use, which makes a lot of sense in OB trucks. Using a monolithic router is initially tempting for an OB truck, but the need for a lot of 1G and 10G ports tends to use up high-bandwidth ports on core routers quickly. Therefore moving from a monolithic architecture to multiple, directly connected access switches makes the most sense. As Gerard Phillips commented, this is a specialised form of the more general ‘spine-leaf’ architecture typically deployed in larger systems.

One argument against using IGMP/PIM routing in larger installations is that those protocols have no understanding of the wider picture; they don’t take the system-wide view an SDN controller would. If IGMP is a paper roadmap, SDN is a satnav with up-to-date road metrics, full knowledge of width and weight restrictions, and live traffic alerts. To address this, Cisco created their own technology, Non-Blocking Multicast (NBM), which takes into account the bandwidth of the streams and works closely with Cisco’s DCNM (Data Centre Network Manager). These Cisco technologies provide more insight into the system as a whole and thus make better decisions.
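
NBM itself is proprietary, but the principle can be sketched: a controller that knows each link’s capacity and current reservations can place a new flow only where there is headroom, or reject it cleanly rather than oversubscribing. The toy below (not Cisco’s NBM) shows the idea.

```python
# Toy illustration (not Cisco's NBM) of bandwidth-aware flow placement:
# the controller tracks per-link capacity and reservations, so a new
# flow only lands on a link with enough headroom, else it is rejected.

LINK_CAPACITY_GBPS = 10
links = {"leaf1-spine1": 0.0, "leaf1-spine2": 0.0}  # current reservations

def admit_flow(bandwidth_gbps):
    """Place a flow on the least-loaded link with headroom, else reject."""
    link = min(links, key=links.get)
    if links[link] + bandwidth_gbps > LINK_CAPACITY_GBPS:
        return None          # no capacity anywhere: reject cleanly
    links[link] += bandwidth_gbps
    return link

for i in range(14):
    print(f"flow {i}: 1.5 Gbps ->", admit_flow(1.5))
# The final flows are rejected rather than silently oversubscribing a
# link, a guarantee hash-based ECMP alone cannot provide.
```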

Anup and Rahul continue by explaining how the implementation of PTP was scaled by offloading the processing to line cards rather than relying on the unit’s main CPU, before explaining that DCNM not only supports the NBM feature but also works with GV Orbit, the configuration and system management product from GV. From a security perspective, the network denies access to any connections into a port by default, and it can enforce bandwidth limits to stop accidental flooding or similar.

Watch now!
Speakers

Anup Mehta
Product Manager,
Cisco
Rahul Parameswaran
Senior Technical Product Manager,
Cisco