Video: A Cloudy Future For Post Production

Even before the pandemic, post-production was moving into the cloud. With the mantra of bringing your apps to the media, remote working was already coming to offices and now it's also coming to homes. As with any major shift in an industry, it will suit some people earlier than others, so while we're in this transition it's worth taking some time to ask why people are making the move, why some are not, and what problems are still left unsolved. For wider context on the move to remote editing, watch this video from IET Media.

This video sees Key Code Media CTO Jeff Sengpiehl talking to BeBop Technology's Michael Kammes, Ian McPherson from AWS and Ian Main from Teradici. After setting the context for the discussion, he asks the panel why consumer cloud solutions aren't suitable for professionals. Michael picks this up first, singling out consumer screen-sharing solutions which are optimised for getting a task done but don't offer the fidelity and consistency you need for media workflows. When it comes to storage at the consumer level, the cost usually prevents investment in the hardware which would give the low-latency, high-capacity storage needed for many professional video formats. Ian then adds that security plays a much bigger role in professional workflows: the moment you bring assets down to your PC, you're extending the security boundary into your consumer software and into your house.

The security topic features heavily in this conversation and Michael talks about the Trusted Partner Network, which is working on a security specification that, it is hoped, will be a 'standard' everyone can work to in order to show a product or solution is secure. The aim is to stop every company maintaining its own thick document detailing its security needs, because that means each vendor has to certify itself time and time again against similar demands which are all articulated differently and therefore defended differently. Ian explains that cloud providers like AWS provide better physical security than most companies could manage and offer security tools for customers to secure their solutions. Many are hoping to form their workflows around the MovieLabs 2030 vision, which recommends ways to move content through the supply chain with security and auditing in mind.

“What’s stopping people from adopting the cloud for post-production?”, poses Jeff. Cost is one reason people are adopting the cloud and one reason others aren’t. Not dissimilar to the ‘IP’ question in other parts of the supply chain, at this point in the technology’s maturity the cost savings are most tangible to bigger companies or those with particularly high demands for flexibility and scalability. For a smaller operation, there may simply not be enough benefit to justify the move, particularly as it would mean adopting tools that take time to learn and so, even if only temporarily, slow down an editor’s ability to deliver a project in the time they’re used to. On top of that, there’s the issue of cost uncertainty. It’s easy to say how much storage will cost in the cloud, but when you’re using dynamic amounts of computation and moving data in and out of the cloud, estimating your costs becomes difficult, and in a conservative industry this uncertainty can become a blocker to adoption.
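To see why the uncertainty bites, here is a minimal sketch of a monthly bill. Every rate below is an assumed, illustrative placeholder, not real cloud pricing: storage is flat and easy to forecast, while egress and compute swing wildly between a quiet month and a busy one.

```python
# Illustrative cloud cost model. All rates are hypothetical placeholders,
# NOT actual AWS (or any provider's) pricing.
STORAGE_PER_TB_MONTH = 23.0   # $/TB-month, predictable (assumed)
EGRESS_PER_TB = 90.0          # $/TB moved out of the cloud (assumed)
COMPUTE_PER_HOUR = 1.50       # $/hour for an editing/render instance (assumed)

def monthly_cost(stored_tb, egress_tb, compute_hours):
    """Total monthly bill: flat storage plus the variable egress/compute parts."""
    return (stored_tb * STORAGE_PER_TB_MONTH
            + egress_tb * EGRESS_PER_TB
            + compute_hours * COMPUTE_PER_HOUR)

# Same library size, very different bills depending on project activity.
quiet = monthly_cost(stored_tb=50, egress_tb=2, compute_hours=200)    # 1630.0
busy = monthly_cost(stored_tb=50, egress_tb=20, compute_hours=1200)   # 4750.0
print(f"quiet month: ${quiet:,.2f}  busy month: ${busy:,.2f}")
```

The storage line is identical in both months; it's the usage-driven terms that make forecasting hard.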

Starting to take questions from the audience, Ian outlines some of the ways to get terabytes of media quickly into the cloud whilst Michael explains his approach to editing with proxies, either just to get you started or for the whole process. Conforming against local, hi-res media may still make sense outside the cloud, or you may have time to upload the full-resolution files whilst the project is underway. There’s a brief discussion on the rising availability of Macs for cloud workflows and a discussion about the difficulty, but possibility, of still having a high-quality monitoring feed on a good monitor even if your workstation is totally remote.
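The arithmetic behind proxy-first workflows is simple and worth running for your own material. A quick sketch, with assumed figures (2 TB of originals, a 100 Mbps uplink, proxies at roughly 5% of original size):

```python
# Upload-time arithmetic behind proxy-first editing. All figures assumed.
def upload_hours(size_gb, uplink_mbps, efficiency=0.8):
    """Hours to push size_gb up a link, allowing for protocol overhead."""
    megabits = size_gb * 8 * 1000          # GB -> megabits (decimal units)
    return megabits / (uplink_mbps * efficiency) / 3600

hires_gb = 2000        # camera originals (assumed)
proxy_ratio = 0.05     # proxies ~5% of original size (assumed)

print(f"hi-res upload: {upload_hours(hires_gb, 100):.1f} h")            # ~55.6 h
print(f"proxy upload:  {upload_hours(hires_gb * proxy_ratio, 100):.1f} h")  # ~2.8 h
```

Days versus a few hours: proxies get the edit started while the hi-res media trickles up in the background for the conform.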

Watch now!
Speakers

Ian Main
Technical Marketing Principal,
Teradici Corporation
Ian McPherson
Head of Global Business Development, Media Supply Chain,
Amazon Web Services (AWS)
Michael Kammes
VP Marketing & Business Development,
BeBop Technology
Jeff Sengpiehl
CTO,
Key Code Media

Video: Uncompressed Video in the Cloud

Moving high-bitrate flows such as uncompressed media through cloud infrastructure, which is designed for scale rather than real-time throughput, requires more thought than simply using UDP and multicast. That traditional approach can certainly work, but is liable to drop the occasional packet, compromising the media.

In this video, Thomas Edwards and Evan Statton outline the work underway at Amazon Web Services (AWS) for reliable real-time delivery. On-prem ST 2110 network architectures usually have two separate networks. Media essences are sent as single, high-bandwidth flows over both networks, allowing the endpoint to use SMPTE ST 2022-7 seamless switching to deal with any lost packets. Cloud network architectures differ from on-prem networks: they are usually much wider and taller, providing thousands of possible paths to any one destination.
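The receiver-side idea of ST 2022-7 can be sketched in a few lines: take the first copy of each sequence number from either network leg and discard duplicates, so a loss on one path goes unnoticed as long as the other path delivered that packet. A toy model (real receivers work on a bounded time window, not complete lists):

```python
# Toy model of ST 2022-7 style seamless protection: first copy of each
# sequence number wins, duplicates are dropped, order is restored at the end.
def seamless_merge(*legs):
    """Each leg is a list of (sequence_number, payload) in arrival order."""
    seen = {}
    for leg in legs:
        for seq, payload in leg:
            seen.setdefault(seq, payload)   # keep only the first copy seen
    return [seen[s] for s in sorted(seen)]  # re-emit in sequence order

# Leg A lost packet 2, leg B lost packet 4 - the merged output is complete.
leg_a = [(1, "f1"), (3, "f3"), (4, "f4")]
leg_b = [(1, "f1"), (2, "f2"), (3, "f3")]
print(seamless_merge(leg_a, leg_b))  # ['f1', 'f2', 'f3', 'f4']
```

Only a simultaneous loss of the same packet on both legs is unrecoverable, which is why the two networks are kept physically separate.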

AWS has been working to find ways of harnessing these cloud network architectures and has come up with two protocols. The first to discuss is Scalable Reliable Delivery (SRD), a protocol created by Amazon which guarantees delivery of packets. Delivery is likely to be out of order, so packet order needs to be restored by a layer above SRD. Amazon's custom 'Nitro' network cards run the SRD protocol, keeping the functionality as close to the physical layer as possible.

SRD capitalises on hyperscale networks by splitting each media flow into many smaller flows. A high-bandwidth uncompressed video flow could be over 1 Gbps; SRD would deliver this over a hundred or more ‘flowlets’, each leaving on a different path. Paths are partially controlled using ECMP (Equal Cost Multipath) routing, whereby the egress port used on a switch is chosen by hashing together a number of parameters such as the source IP and destination port. The sender controls the ECMP path selection by manipulating packet encapsulation. SRD employs a specialised congestion control algorithm that further decreases the chance of packet drops and minimises retransmit times by keeping queuing to a minimum. SRD keeps an eye on the RTT (round trip time) of each of the flowlets and adjusts their bandwidth appropriately. This is particularly useful for dealing with ‘incast congestion’, where upstream many flowlets may end up converging on the same interface which is close to being overloaded. In this way, SRD actively works to reduce latency and congestion. SRD also keeps a very small retransmit buffer so that any packets which do get lost can be resent. Similar to SRT and RIST, SRD expects to receive acknowledgement packets, and by looking at when these arrive and the timing between packets, RTT and bandwidth estimations can be made.
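The flowlet/ECMP interaction can be illustrated with a toy model. Switches commonly hash the packet 5-tuple to pick an egress link, so a sender that varies something hashable per flowlet (modelled here as the UDP source port; the hash, port scheme and link count are all illustrative assumptions, not SRD's actual mechanism) steers each flowlet onto a different path:

```python
# Toy model of spreading one high-bandwidth flow across ECMP paths.
# The CRC32 hash, port numbering and 8-way link group are illustrative
# assumptions - real switches use their own hash functions.
import zlib

def flowlet_port(base_port, flowlet_id):
    """Give each flowlet its own source port so ECMP hashes it differently."""
    return base_port + flowlet_id

def ecmp_egress(src_ip, dst_ip, src_port, dst_port, n_links):
    """Model of a switch's ECMP choice: hash the tuple, modulo link count."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links

# 100 flowlets from one sender fan out across an 8-way ECMP group.
links = [ecmp_egress("10.0.0.1", "10.0.0.2", flowlet_port(40000, i), 5004, 8)
         for i in range(100)]
print(sorted(set(links)))  # the flowlets land on multiple egress links
```

With the flow spread this way, no single path carries the whole 1 Gbps, and a congested link only slows the flowlets hashed onto it.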

CDI, the Cloud Digital Interface, is a layer on top of SRD which acts as an interface for programmers. Available on GitHub under a BSD licence, it gives access to the incoming essence streams in a way similar to SMPTE's ST 2110, making it easy to deal with pixel data, get access to RGB graphics including an alpha layer, and receive metadata for subtitles or SCTE 104 signalling.

Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Evan Statton
Principal Architect,
Amazon Web Services (AWS)

Video: How to Deploy an IP-Based Infrastructure

An industry-wide move to any new technology takes time and there is a steady flow of people new to the technology. This video is a launchpad for anyone just coming into IP infrastructures whether because their company is starting or completing an IP project or because people are starting to ask the question “Should we go IP too?”.

Key Code Media’s Steve Dupaix starts with an overview of how SMPTE’s suite of standards called ST 2110 differs from other IP-based video and audio technologies such as NDI, SRT, RIST and Dante. The key takeaways are that NDI provides compressed video with a low delay of around 100ms, along with a suite of free tools to help you get started. SRT and RIST are similar technologies usually used to get AVC or HEVC video from A to B while getting around packet loss, something that NDI and ST 2110 don’t protect against without FEC. This is because SRT and RIST are aimed at moving data over lossy networks like the internet. Find out more about SRT in this SMPTE video. For more on NDI, this video from SMPTE and VizRT gives the detail.

ST 2110’s purpose is to get high-quality, usually lossless, video and audio around a local area network. Originally envisaged as a way of displacing baseband SDI, it was specced to work flawlessly in live production environments such as studios. It brings with it some advantages, such as separating the essences: video, audio, timing and ancillary data are carried as separate streams. It also brings the promise of higher density for routing operations, lower-cost infrastructure since the routers and switches are standard IT products, and increased flexibility due to the much-reduced need to move and add cables.

Robert Erickson from Grass Valley explains that they have worked hard to move all of their product lines to ‘native IP’ as they believe all workflows will move to IP, whether on-premise or in the cloud. The next step, he sees, is enabling more workflows that move video in and out of the cloud, and for that they need JPEG XS, which can be carried in ST 2110-22. Thomas Edwards from AWS adds their perspective, agreeing that customers are increasingly using JPEG XS for this purpose, but within the cloud they expect to use the new CDI, a specification for moving high-bandwidth traffic, like ST 2110-20 streams of uncompressed video, from point to point within the cloud.

John Mailhot from Imagine Communications is also the chair of the VSF activity group for ground-cloud-cloud-ground. This aims to harmonise the ways in which vendors move media, at whatever bandwidth, into and out of the cloud as well as from point to point within it. From the Imagine side, he says that ST 2110 is now embedded in all products but the key is to choose the most appropriate transport. Within AWS, CDI is often the most appropriate transport, and he agrees that JPEG XS is the most appropriate for cloud<->ground operations.

The panel takes a moment to look at the way the pandemic has impacted the use of video over IP. As we heard earlier this year, the New York Times had been holding off on their move to IP and the pandemic forced them to look at the market earlier than planned. When they looked, they found the products they needed and moved to a full IP workflow. This has been the theme, and if anything the pandemic has driven, and will continue to drive, innovation. The immediate need provided the motivation to consider new workflows, and now that the workflow is IP, it’s quicker, cheaper and easier to test new variations. Thomas Edwards points out that many of the current workflows are heavily reliant on AVC or HEVC despite the desire to use JPEG XS for the broadcast content. For people at home, JPEG XS bandwidths aren’t practical, but RIST with AVC works fine for most applications.

Interoperability between vendors has long been the focus of the industry for ST 2110 and, in John’s opinion, is now pretty reliable for inter-vendor essence exchanges. Recently the focus has been on doing the same with NMOS, which both he and Robert report is working well in recent multi-vendor projects they have been involved in. John’s interest is in working out ways that the cloud and ground can find out about each other, a use case not yet covered by AMWA’s NMOS IS-04.

The video ends with a Q&A covering the following:

  • Where to start in your transition to IP
  • What to look for in an ST 2110-capable switch
  • Multi-Level routing support
  • Using multicast in AWS
  • Whether IT equipment lifecycles conflict with Broadcast refresh cycles

Watch now!
Speakers

John Mailhot
CTO & Director of Product Management, Infrastructure & Networking,
Imagine Communications
Ciro Noronha
Executive Vice-President of Engineering,
Cobalt Digital
Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Robert Erickson
Strategic Account Manager Sports and Venues,
Grass Valley
Steve Dupaix
Senior Account Executive,
Key Code Media

Video: Cloud Services for Media and Entertainment – Processing, Playout and Distribution

What are the options for moving your playout, processing and distribution into the cloud? What will the workflows look like and what are the options for implementing them? This video covers the basics, describes many of the functions available like subtitle generation and QC, then leads you through to harnessing machine learning.

SMPTE’s New York section has brought together Evan Statton and Liam Morrison from AWS, Alex Emmermann from Telestream, Chris Ziemer & Joe Ashba from Imagine Communications and Rick Phelps from Brklyn Media to share their knowledge on the topic. Rick kicks off proceedings with a look at the principles of moving to the cloud. He speaks about the need to prepare your media before the move by de-duplicating files, getting the structure and naming correct and checking your metadata is accurate. Whilst deduplicating data reduces your storage costs, another great way to do this is to store in IMF. IMF, the Interoperable Master Format, is related to MXF and stores essences separately. By using pointers to the right media, new versions of files can re-use media from other files. This further helps reduce storage costs.
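The storage saving from IMF-style referencing is easy to see in miniature. A sketch with simplified stand-in structures (dictionaries, not the real IMF XML of SMPTE ST 2067) and assumed file sizes:

```python
# Sketch of the IMF idea: each version is a playlist of pointers into
# shared track files, so new cuts re-use essence instead of copying it.
# Structures and sizes are illustrative stand-ins, not real IMF XML.
video = {"id": "video_track_1", "size_gb": 800}
audio_en = {"id": "audio_en", "size_gb": 4}
audio_fr = {"id": "audio_fr", "size_gb": 4}

# Two language versions referencing the same video track file.
versions = {
    "theatrical_en": [video, audio_en],
    "theatrical_fr": [video, audio_fr],
}

# Flat files duplicate the video; IMF-style refs store each track once.
flat_gb = sum(t["size_gb"] for cuts in versions.values() for t in cuts)
unique = {t["id"]: t["size_gb"] for cuts in versions.values() for t in cuts}
ref_gb = sum(unique.values())
print(f"as flat files: {flat_gb} GB, as shared track files: {ref_gb} GB")
```

Two deliverable versions cost barely more storage than one, because the heavy video essence is stored once and merely referenced twice.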

Rick finishes by running through workflow examples covering Ingest, Remote Editing using PCoIP, Playout and VoD, before running through the pros and cons of Public, Private and Hybrid cloud.

Next on the roster are Chris & Joe, outlining their approach to playout in the cloud. They summarise the context then zoom in to look at linear channels and their Versio product. An important aspect of bringing products to the cloud, explains Joe, is to ensure you optimise the product to take advantage of the cloud. Where a software solution on-prem will use servers running the storage, databases, proxy generation and the like, in the cloud you don’t want to simply set up EC2 instances to run these same services. Rather, you should move your database into AWS’s database service, use AWS storage and use a cloud-provided proxy service. This is where the value is maximised.

Alex leads with his learnings about calculating the benefits of cloud deployment, focusing on the costs surrounding your server. You have to calculate the cost of the router port it’s connected to and the rest of the network infrastructure. Power and air conditioning are easy to calculate but don’t forget, Alex says, the cost of renting space in a data centre and the problems you hit when you have to lease another cage because yours is full. Suddenly an extra server has led to a two-year lease on data centre space. He concludes by outlining Telestream’s approach to delivering transcode, QC, playback and stream monitoring in their Telestream Cloud offering.
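Alex's point can be put as back-of-envelope arithmetic: the box's sticker price is only one term in its true annual cost. Every figure below is an assumed illustration, not vendor pricing:

```python
# Back-of-envelope on-prem server cost. All figures are assumed
# illustrations, not real vendor or colocation pricing.
def annual_server_cost(server=4000, years=4, port=500, power_cooling=900,
                       rack_space=1200):
    """Annualised cost of one server plus the infrastructure around it:
    hardware amortised over its lifetime, its network port, power/cooling
    and its share of rented data-centre space."""
    return server / years + port + power_cooling + rack_space

print(f"${annual_server_cost():,.0f}/year")  # well above the amortised $1,000 box
```

And the costs are not smooth: the one extra server that overflows a full cage can trigger a multi-year lease on a whole new block of data-centre space.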

Evan Statton talks about the reasons AWS created CDI and why they merged the encoding stages for DTH transmission and OTT into one step. These steps came from customers’ wishes to simplify cloud workflows or match their on-prem experiences. JPEG XS, for instance, is there to ensure that ultra-low-latency video can flow in and out of AWS, with CDI allowing almost zero-delay, uncompressed video to flow within the cloud. Evan then looks through a number of workflows: playout, stadium connectivity, station entitlement and ATSC 3.0.

Liam’s presentation on machine learning in the cloud is the last of this section meeting. He focuses his comments and demos on machine learning for video processing, explaining how ML fits under the Artificial Intelligence banner and looking at where the research sector is heading. Machine learning is well suited to the cloud because of the need for big, GPU-heavy servers to manage large datasets and high levels of compute. The G4 series of EC2 instances is singled out as the machine learning instance of choice.

Liam shows demos of super-resolution and frame interpolation, the latter being used to generate slow-motion clips, increase the framerate of videos, improve the smoothness of stop-motion animations and more. Bringing this together, he finishes by showing some 4K 60fps videos created from ancient black and white film clips.
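To see what frame interpolation is doing at its simplest, here is the naive baseline the ML models improve on: blending each pair of neighbouring frames to synthesise an in-between frame. This is a toy sketch with frames as flat lists of pixel values; real interpolators (optical-flow or learned) handle motion far better than a plain blend.

```python
# Naive frame interpolation baseline: blend neighbouring frames.
# Toy sketch - learned interpolators model motion instead of just blending.
def interpolate(frame_a, frame_b, t=0.5):
    """Blend two frames (flat lists of pixel values) at position t in [0,1]."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

def double_framerate(frames):
    """Insert a blended frame between each pair: e.g. 30fps -> ~60fps."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out += [a, interpolate(a, b)]
    return out + [frames[-1]]

clip = [[0, 0], [10, 20], [20, 40]]   # three tiny 2-pixel "frames"
print(double_framerate(clip))          # five frames out from three in
```

Run on the same footage at t values other than 0.5, the same trick yields smooth slow motion: many synthesised frames between each original pair.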

The extensive Q&A looks at a wide range of topics:

  • The need for operational change management since, however closely the cloud workflows match what your staff are used to, there will be issues adjusting to the differences
  • How to budget given the ‘transactional’ nature of AWS cloud microservices
  • Problems migrating TV infrastructure to the cloud
  • How big the variables are between different workflow designs
  • The main causes of latency when designing cloud workflows, and the trade-offs when fighting it
  • How long ML models for upconverting or transcoding take to finish training

Watch now!
Speakers

Liam Morrison
Principal Architect, Machine Learning,
Amazon Web Services (AWS)
Alex Emmermann
Cloud Business Development,
Telestream
Joe Ashba
Senior Solutions Architect,
Imagine Communications
Chris Ziemer
VP Strategic Accounts & Partnerships,
Imagine Communications
Rick Phelps
Founder,
Brklyn Media
Evan Statton
Principal Architect,
Amazon Web Services (AWS)
Moderator: Ed DeLauter