Video: Uncompressed Video in the Cloud

Moving high bitrate flows such as uncompressed media through cloud infrastructure, which is designed for scale rather than real-time throughput, requires more thought than simply using UDP and multicast. That traditional approach can certainly work, but it is liable to drop the occasional packet and compromise the media.

In this video, Thomas Edwards and Evan Statton outline the work underway at Amazon Web Services (AWS) for reliable real-time delivery. On-prem ST 2110 network architectures usually have two separate networks. Media essences are sent as single, high bandwidth flows over both networks, allowing the endpoint to use SMPTE ST 2022-7 seamless switching to deal with any lost packets. Cloud network architectures differ from on-prem networks: they are usually much wider and taller, providing thousands of possible paths to any one destination.
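To picture the on-prem starting point, here is a rough Python sketch of the idea behind ST 2022-7 seamless protection: the same packets are sent on both networks and the receiver keeps whichever copy of each sequence number arrives. The packet format and the batch-style merge are simplifications; a real receiver does this packet by packet with a small reordering buffer.

```python
def seamless_merge(red_packets, blue_packets):
    """Merge two identical streams, de-duplicating by sequence number.
    red_packets / blue_packets: iterables of (seq, payload) tuples as
    received from the two independent networks (illustrative format)."""
    seen = set()
    output = []
    for seq, payload in sorted(list(red_packets) + list(blue_packets),
                               key=lambda p: p[0]):
        if seq not in seen:           # first copy wins; the duplicate is dropped
            seen.add(seq)
            output.append((seq, payload))
    return output

# A packet lost on one network (seq 2 on 'red') is still recovered
# because its twin arrived on the other network.
red = [(1, b"a"), (3, b"c")]
blue = [(1, b"a"), (2, b"b"), (3, b"c")]
assert [seq for seq, _ in seamless_merge(red, blue)] == [1, 2, 3]
```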


AWS have been working to find ways of harnessing these cloud network architectures and have come up with two technologies. The first to discuss is Scalable Reliable Delivery (SRD), a protocol created by Amazon which guarantees delivery of packets. Delivery is likely to be out of order, so packet order needs to be restored by a layer above SRD. Amazon have custom network cards called ‘Nitro’ and it’s these cards which run the SRD protocol, keeping the functionality as close to the physical layer as possible.

SRD capitalises on hyperscale networks by splitting each media flow into many smaller flows. A high bandwidth uncompressed video flow could be over 1 Gbps; SRD delivers this over a hundred or more ‘flowlets’, each leaving on a different path. Paths are partially controlled using Equal Cost Multipath (ECMP) routing, whereby the egress port used on a switch is chosen by hashing together a number of parameters such as the source IP and destination port. The sender controls the ECMP path selection by manipulating the packet encapsulation. SRD employs a specialised congestion control algorithm that further decreases the chance of packet drops and minimises retransmit times by keeping queuing to a minimum. SRD keeps an eye on the round trip time (RTT) of each of the flowlets and adjusts the bandwidth appropriately. This is particularly useful for dealing with ‘incast congestion’, where upstream many flowlets may end up going through the same, nearly overloaded, interface. In this way, SRD actively works to reduce latency and congestion. SRD also has a very small retransmit buffer so that any packets which do get lost can be resent quickly. Similar to SRT and RIST, SRD expects to receive acknowledgement packets, and by looking at when these arrive and the timing between packets, RTT and bandwidth estimates can be made.
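As an illustration of how a sender can steer flowlets onto different paths, the sketch below varies one field of the encapsulation (the source port, a hypothetical choice) so that the ECMP hash on each switch picks different uplinks. The hash function, addresses and path count are all assumptions; real switches use vendor-specific hardware hashes, but the principle is the same.

```python
import hashlib

NUM_ECMP_PATHS = 8  # illustrative number of equal-cost uplinks on a switch

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto="UDP"):
    """Pick an uplink by hashing the 5-tuple, as ECMP routing does."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_ECMP_PATHS

def spread_into_flowlets(packets, n_flowlets=100, base_port=40000):
    """Assign packets of one media flow to n_flowlets encapsulations.
    Varying the source port changes the ECMP hash, so flowlets end up
    spread across different paths through the network fabric."""
    by_path = {}
    for i, pkt in enumerate(packets):
        src_port = base_port + (i % n_flowlets)   # one encapsulation per flowlet
        path = ecmp_path("10.0.0.1", "10.0.0.2", src_port, 2000)
        by_path.setdefault(path, []).append(pkt)
    return by_path

# 1000 packets of a single flow end up spread across all the uplinks.
print({path: len(pkts) for path, pkts in spread_into_flowlets(range(1000)).items()})
```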

CDI, the Cloud Digital Interface, is a layer on top of SRD which acts as an interface for programmers. Available on GitHub under a BSD licence, it gives access to the incoming essence streams in a way similar to SMPTE’s ST 2110, making it easy to deal with pixel data, get access to RGB graphics including an alpha layer, and receive metadata such as subtitles or SCTE 104 signalling.
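The real SDK is a C library; purely for illustration, and with every name invented rather than taken from the aws-cdi-sdk API, a CDI-style receiver might present itself to an application roughly like this: complete frames and ancillary payloads are handed to callbacks, with no RTP or ST 2110 packet parsing left for the programmer to do.

```python
from dataclasses import dataclass

@dataclass
class VideoFrame:                 # invented container, for illustration only
    width: int
    height: int
    pixel_format: str             # e.g. "RGBA" when an alpha layer is carried
    payload: bytes

def on_video_frame(frame: VideoFrame):
    # Pixel data arrives already reassembled into a whole frame.
    print(f"got {frame.width}x{frame.height} {frame.pixel_format} frame")

def on_ancillary_data(packet: bytes):
    # Could carry subtitles or SCTE 104 messages, depending on the workflow.
    print(f"got {len(packet)} bytes of ancillary data")
```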

Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Evan Statton
Principal Architect,
Amazon Web Services (AWS)

Video: Workflow Evolution Within the CDN

The pandemic has shone a light on CDNs as they are the backbone of much of what we do with video for streaming and broadcast. CDNs aim to scale up in a fast, sophisticated way so you don’t have to put in the research to achieve this yourself. This panel from the Content Delivery Summit sees Dom Robinson bringing together Jim Hall from Fastly with Akamai’s Peter Chave, Ted Middleton from Amazon and Paul Tweedy from BBC Design + Engineering.

The panel discusses the fact that although much video conferencing traffic is WebRTC, which CDNs don’t support, there are a lot of API calls that are handled by the CDN. In fact, over 300 trillion API calls were made to Amazon last year. Zoom and other solutions do have an HLS streaming option that has been used and can benefit from CDN scaling. Dom asks whether people’s expectations have changed during the pandemic, and then we hear from Paul as he talks a little about the BBC’s response to Covid.

The CTA’s Common Media Client Data (CMCD) standard, also known as CTA-5004, is a way for a video player to pass information back to the CDN. It’s powerful enough to provide highly granular real-time reports for customers, but it also enables hints to be handed back from the players so the CDNs can pre-fetch content that is likely to be needed. Furthermore, having a standard for logging will be great for customers who are multi-CDN and need a way to match logs and analyse their system in its entirety. This work is also being extended, under a separate specification, to look upstream in a CDN workflow and understand the status of other systems like edge servers.
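To give a feel for what this looks like on the wire, the sketch below builds a CMCD payload and appends it to a segment request as a query argument. The key names (sid, bl, br, mtp, nor) come from the CTA-5004 specification; the URL, session id and values are illustrative.

```python
from urllib.parse import quote

def cmcd_query(session_id, buffer_ms, bitrate_kbps, throughput_kbps, next_object=None):
    """Build a CMCD payload to send as a 'CMCD' query argument."""
    keys = {
        "sid": f'"{session_id}"',   # session id, ties a player's requests together in CDN logs
        "bl": buffer_ms,            # current buffer length in milliseconds
        "br": bitrate_kbps,         # encoded bitrate of the requested object, kbps
        "mtp": throughput_kbps,     # throughput measured by the player, kbps
    }
    if next_object:
        # Hint for the next object the player will ask for, so the CDN can pre-fetch it.
        keys["nor"] = f'"{quote(next_object)}"'
    payload = ",".join(f"{k}={v}" for k, v in sorted(keys.items()))
    return "CMCD=" + quote(payload)

url = ("https://cdn.example.com/video/seg_0042.m4s?"
       + cmcd_query("7f2b", buffer_ms=9800, bitrate_kbps=4500,
                    throughput_kbps=25000, next_object="seg_0043.m4s"))
```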

The panel touches on custom-made analytics, low-latency streaming such as Apple’s LL-HLS and why it’s not yet been adopted, current attempts in the wild to bring HLS latency down, edge computing and piracy.

Watch now!
Speakers

Peter Chave
Principal Architect,
Akamai Technologies
Paul Tweedy
Lead Architect, Online Technology Group,
BBC Design + Engineering
Ted Middleton
Global Leader – Specialized Solution Architects, Edge Services
Amazon
Jim Hall
Principal Sales Engineer,
Fastly
Moderator: Dom Robinson
Director and Creative Firestarter, id3as
Contributing Editor, StreamingMedia.com, UK

Video: Cloud Services for Media and Entertainment – Processing, Playout and Distribution

What are the options for moving your playout, processing and distribution into the cloud? What will the workflows look like and what are the options for implementing them? This video covers the basics, describes many of the functions available such as subtitle generation and QC, then leads you through to harnessing machine learning.

SMPTE’s New York section has brought together Evan Statton and Liam Morrison from AWS, Alex Emmermann from Telestream, Chris Ziemer & Joe Ashba from Imagine Communications and Rick Phelps from Brklyn Media to share their knowledge on the topic. Rick kicks off proceedings with a look at the principles of moving to the cloud. He speaks about the need to prepare your media before the move by de-duplicating files, getting the structure and naming correct and checking your metadata is accurate. Whilst de-duplicating data reduces your storage costs, another great way to do this is to store in IMF. IMF, the Interoperable Master Format, is related to MXF and stores essences separately. By using pointers to the right media, new versions of files can re-use media from other files, further reducing storage costs.
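A rough illustration of why this saves space, with hypothetical file names and sizes: each IMF version is effectively a playlist of pointers into shared track files, so only the differences between versions need to be stored.

```python
track_files = {                     # essence stored once; sizes in GB (hypothetical)
    "feature_video.mxf": 800,
    "feature_audio_en.mxf": 20,
    "feature_audio_de.mxf": 20,
    "alt_scene_airline_cut.mxf": 5,
}

versions = {                        # each version just references existing track files
    "theatrical_en": ["feature_video.mxf", "feature_audio_en.mxf"],
    "theatrical_de": ["feature_video.mxf", "feature_audio_de.mxf"],
    "airline_en":    ["feature_video.mxf", "feature_audio_en.mxf",
                      "alt_scene_airline_cut.mxf"],
}

flattened = sum(sum(track_files[t] for t in refs) for refs in versions.values())
deduplicated = sum(track_files.values())
print(f"Three flattened masters: {flattened} GB; one IMF package: {deduplicated} GB")
```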


Rick finishes by running through workflow examples covering Ingest, Remote Editing using PCoIP, Playout and VoD before running through the pros and cons of Public, Private and Hybrid cloud.

Next on the roster are Chris & Joe, outlining their approach to playout in the cloud. They summarise the context and zoom in to look at linear channels and their Versio product. An important aspect of bringing products to the cloud, explains Joe, is to ensure you optimise the product to take advantage of the cloud. Where a software solution on-prem will use servers running the storage, databases, proxy generation and the like, in the cloud you don’t want to simply set up EC2 instances to run these same services. Rather, you should move your database into AWS’s database service, use AWS storage and use a cloud-provided proxy service. This is when the value is maximised.
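As a small sketch of that principle, with hypothetical bucket, table and function names, the fragment below registers a generated proxy by writing the file to S3 and its record to DynamoDB, rather than running storage and database processes on EC2 instances.

```python
import boto3

s3 = boto3.client("s3")
assets = boto3.resource("dynamodb").Table("playout-assets")   # hypothetical table

def register_proxy(asset_id: str, proxy_path: str):
    """Upload a proxy file to S3 and record it in DynamoDB."""
    key = f"proxies/{asset_id}.mp4"
    with open(proxy_path, "rb") as f:
        s3.put_object(Bucket="playout-media-bucket", Key=key, Body=f)  # hypothetical bucket
    assets.put_item(Item={"asset_id": asset_id, "proxy_key": key})
```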

Alex leads with his learnings about calculating the benefits of cloud deployment, focussing on the costs surrounding your server. You have to calculate the cost of the router port it’s connected to and the rest of the network infrastructure. Power and aircon are easy to calculate, but don’t forget, Alex says, the cost of renting the space in a data centre and the problems you hit when you have to lease another cage because you have become full. Suddenly an extra server has led to a two-year lease on datacentre space. He concludes by outlining Telestream’s approach to delivering transcode, QC, playback and stream monitoring in their Telestream Cloud offering.
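A back-of-the-envelope version of that calculation, with every figure hypothetical, makes the point that the server itself is only part of the bill:

```python
annual_costs = {                  # per server, per year, in dollars (hypothetical)
    "server_amortised": 3000,     # purchase price spread over its useful life
    "router_port_share": 800,     # share of the router port it is connected to
    "network_infrastructure": 600,
    "power_and_aircon": 1200,
    "datacentre_space": 2500,     # rack/cage rental apportioned per server
}
print(f"True annual cost per server: ${sum(annual_costs.values()):,}")

# And the step change Alex warns about: the server that no longer fits in the
# current cage can trigger a new two-year lease on datacentre space.
new_cage_lease_2yr = 60000        # hypothetical
```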

Evan Statton talks about the reasons that AWS created CDI and how they merged the encoding stages for DTH transmission and OTT into one step. These steps came from customers’ wishes to simplify cloud workflows or match their on-prem experiences. JPEG-XS, for instance, is there to ensure that ultra low-latency video can flow in and out of AWS, with CDI allowing almost zero delay, uncompressed video to flow within the cloud. Evan then looks through a number of workflows: Playout, stadium connectivity, station entitlement and ATSC 3.0.

Liam’s presentation on machine learning in the cloud is the last of this section meeting. Liam focuses his comments and demos on machine learning for video processing. He explains how ML fits under the Artificial Intelligence banner and looks at where the research sector is heading. Machine learning is well suited to the cloud because of the need for big, GPU-heavy servers to manage large datasets and high levels of compute. The G4 series of EC2 instances is singled out as the machine learning instance of choice.

Liam shows demos of super resolution and frame interpolation, the latter being used to generate slow motion clips, increase the framerate of videos, improve the smoothness of stop-motion animations and more. Bringing this together, he finishes by showing some 4K 60fps videos created from ancient black and white film clips.
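The models in the demos estimate motion to synthesise in-between frames; for contrast, the naive baseline below simply blends neighbouring frames with NumPy to double the framerate, which is exactly the ghosting-prone approach that ML interpolation improves upon.

```python
import numpy as np

def naive_double_framerate(frames):
    """Insert a linear blend between each pair of frames.
    frames: list of HxWx3 uint8 arrays. A trained model would estimate
    motion and synthesise the in-between frame; this cross-fade makes
    fast-moving objects look ghosted, which is why ML does so much better."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        mid = ((a.astype(np.float32) + b.astype(np.float32)) / 2).astype(np.uint8)
        out.append(mid)
    out.append(frames[-1])
    return out
```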

The extensive Q&A looks at a wide range of topics:
The need for operational change management since, however closely you get the cloud workflows to match what your staff are used to, there will be issues adjusting to the differences.
How to budget due to the ‘transactional’ nature of AWS cloud microservices
Problems migrating TV infrastructure to the cloud
How big the variables are between different workflow designs
When designing cloud workflows, what are the main causes of latency? When fighting latency what are the trade-offs?
How long do ML models for upconverting or transcoding take to finish training?

Watch now!
Speakers

Liam Morrison
Principal Architect, Machine Learning,
Amazon Web Services (AWS)
Alex Emmermann
Cloud Business Development,
Telestream
Joe Ashba
Senior Solutions Architect,
Imagine Communications
Chris Ziemer
VP Strategic Accounts & Partnerships,
Imagine Communications
Rick Phelps
Founder,
Brklyn Media
Evan Statton
Principal Architect,
Amazon Web Services (AWS)
Moderator: Ed DeLauter

Video: 2019 What did I miss? HDR Formats and Trends

The second most popular video of 2019 looked at HDR. A long-promised format which routinely wows spectators at conferences and in shops alike is increasingly seen, albeit tentatively, in the wild. For instance, this Christmas UK viewers were able to watch Premiership football in HDR with Amazon Prime, but only a third of the matches benefitted from the format. There are many reasons for this, many of them commercial and practical rather than technical, and this is an important part of the story.

Brian Alvarez from Amazon Prime Video goes into detail on the background and practicalities of HDR in this talk given at the Video Tech Seattle meet up in August, part of the worldwide movement of streaming video engineers who meet to openly swap ideas and experiences in making streaming work. We are left not only understanding HDR better, but with a great insight into the state of the consumer market, who can watch HDR and in what format, as well as who’s transmitting HDR.

Read more about the video or just hit play below!

If you want to start from the beginning on HDR, check out the other videos on the topic. HDR relies on an understanding of how people see, the way we describe colour and light, how we implement it and how the workflows are modified to suit. Fortunately, you’re already at the one place that brings all this together! Explore, learn and enjoy.

Speaker

Brian Alvarez
Principal Product Manager,
Amazon Prime Video