Video: Cloud Services for Media and Entertainment: Production and Post-Production

Many content producers and broadcasters have been forced into the cloud. Some have chosen to remotely control their on-prem kit, but many have found that the cloud has brought them benefits beyond simply keeping their existing workflows running during the pandemic.

This video from SMPTE’s New York section looks at how people moved production to the cloud and how they intend to keep it there. The first talk, from WarnerMedia’s Greg Anderson, discusses the engineering skills needed to be up to the task, concluding that there are more areas of knowledge in play than one engineer can bring to the table: from foundational elements such as security, virtualisation and networking, to DevOps skills like continuous integration and deployment (CI/CD), Active Directory and databases.

The good news is that at whichever of the three levels of engineer Greg introduces, from beginner to expert, the entry points are easy to access to start your journey and upskill. Within the company, Greg says that leaders can help accelerate the transition to the cloud by giving teams a development/PoC account with a ‘modest’ monthly allowance for experimentation, learning and proving ideas. Not only does that give engineers good exposure to cloud skills, it also gives managers experience in modelling, monitoring and analysing costs.

Greg finishes by talking through his team’s work implementing a cloud workflow for HBO Max, which currently runs on a private cloud and is on its way to the public cloud. The current system provides for 300 concurrent users across edit, design, engineering and QC workflows, with asset management and ingest. They are looking to the public cloud to consolidate real estate and standardise the tech stack, among many other drivers outlined by Greg.

Scott Bounds, Media Cloud Architect at Microsoft, talks about content creation in the cloud. The objectives for Azure are to allow worldwide collaboration, speed up time to market, allow content creation to scale, and bring improvements in the security, reliability and accessibility of data.

This starts for many with hybrid workflows rather than a full switch to the cloud. After all, Scott says that rough-cut editing, motion graphics and VFX are all fairly easy to implement in the cloud, whereas colour grading, online and finishing are still, for most companies, best kept on-prem. Scott talks about implementing workstations in the cloud, allowing GPU-powered workstations to be accessed using the remote desktop protocol PCoIP. This type of workflow can be automated using Azure scripting and Terraform.
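
As a rough illustration of what that automation can look like, here is a minimal sketch, assuming the Azure CLI is installed and authenticated, that shells out to az to stand up a GPU-backed edit workstation. The resource group, VM name, image alias and size below are placeholders for illustration rather than anything recommended in the talk.

```typescript
// Minimal sketch: provision a GPU workstation in Azure by shelling out to the az CLI.
// Assumes the Azure CLI is installed and logged in; all names and sizes are illustrative.
import { execFileSync } from "child_process";

function createEditWorkstation(resourceGroup: string, vmName: string): void {
  execFileSync(
    "az",
    [
      "vm", "create",
      "--resource-group", resourceGroup,
      "--name", vmName,
      "--image", "Win2019Datacenter",          // marketplace image alias (placeholder)
      "--size", "Standard_NV12s_v3",           // NV-series size with an NVIDIA GPU
      "--admin-username", "editor",
      "--admin-password", process.env.WS_ADMIN_PASSWORD ?? "",
    ],
    { stdio: "inherit" }
  );
  // In practice the PCoIP agent and the NLE would be installed next,
  // or baked into a custom image referenced by --image.
}

createEditWorkstation("post-production-rg", "edit-ws-01");
```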

John Whitehead is part of the New York Times’ Multimedia Infrastructure Engineering team, which has recently moved its live production to the cloud. Much of the output of the NYT is live events programming, such as coverage of press conferences. John introduces their internet-centric microservices architecture, which was already being worked on before the pandemic started.

The standard workflow was to have a stream coming into the MCR, which would then be routed to an Elemental encoder for contribution into the cloud and distribution with Fastly. To be production-friendly, they created some simple-to-use web frontends for routing. For full-time remote production, John explains, they wanted to improve their production quality by adding a vision mixer, graphics and closed captions. John details the solution they chose, which comprised cloud-first services rather than running Windows in the cloud.
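
The talk doesn’t show the NYT’s actual tooling, but a ‘simple-to-use web frontend for routing’ can be pictured as a small HTTP service that maps an incoming source to an encoder destination. The sketch below is entirely hypothetical: the endpoints, names and in-memory route table are invented for illustration.

```typescript
// Hypothetical sketch of a "simple-to-use web frontend" for stream routing.
// Source names, encoder IDs and the route store are invented for illustration.
import express from "express";

const app = express();
app.use(express.json());

// Current routes: which MCR source feeds which encoder input.
const routes = new Map<string, string>();

app.post("/routes", (req, res) => {
  const { source, encoder } = req.body as { source?: string; encoder?: string };
  if (!source || !encoder) {
    res.status(400).json({ error: "source and encoder are required" });
    return;
  }
  routes.set(encoder, source);
  // A real system would call the video router or encoder control API here.
  res.status(201).json({ encoder, source });
});

app.get("/routes", (_req, res) => {
  res.json(Object.fromEntries(routes));
});

app.listen(3000, () => console.log("routing frontend listening on :3000"));
```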

The NYT was pushed into the cloud by Covid, but the move was felt to be low risk and something they were considering anyway. The pandemic forced them to consider that perhaps the technologies they were waiting for had already arrived; in the end they saved on capex and saw an immediate return on their investment.

Finishing up the presentations is Anshul Kapoor from Google Cloud, who presents market analysis on the current state of cloud adoption and market conditions. He says that one manifestation of the current crisis is that new live-events content is reduced, if not postponed, which is making people look to their archives. Some organisations have yet to digitise their archives, whilst others already have a digital archive. Google and other cloud providers can offer vast scale for processing and managing archives, but also machine learning to process, make sense of and make searchable all that content.
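
To make the machine-learning point a little more concrete, here is a minimal sketch, not taken from Anshul’s presentation, that uses Google’s Video Intelligence API to run label detection over an archived clip so the results could be indexed for search. The bucket path is a placeholder and the exact client typings may vary by library version.

```typescript
// Sketch: generate searchable labels for an archived clip with the
// Google Cloud Video Intelligence API. The gs:// URI is a placeholder.
import { VideoIntelligenceServiceClient } from "@google-cloud/video-intelligence";

async function labelArchiveClip(gcsUri: string): Promise<void> {
  const client = new VideoIntelligenceServiceClient();

  // Start the long-running annotation job and wait for it to finish.
  const [operation] = await client.annotateVideo({
    inputUri: gcsUri,
    features: ["LABEL_DETECTION"] as any, // enum value passed by name
  });
  const [result] = await operation.promise();

  const labels = result.annotationResults?.[0]?.segmentLabelAnnotations ?? [];
  for (const label of labels) {
    console.log(label.entity?.description); // e.g. "press conference"
  }
}

labelArchiveClip("gs://my-archive-bucket/press-conference.mp4").catch(console.error);
```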

The video ends with an extensive Q&A with the presenters.

Watch now!
Speakers

Greg Anderson
Senior Systems Engineer,
WarnerMedia
Scott Bounds
Media Cloud Architect,
Microsoft
John Whitehead
Senior Engineer, Multimedia Infrastructure Engineering,
New York Times
Anshul Kapoor
Business Development,
Google Cloud

Video: Versatile Video Coding (VVC)

MPEG’s VVC is the next iteration along from HEVC (H.265). Whilst other codecs such as EVC and LCEVC are also being finalised, this talk looks at how VVC builds on HEVC but also turns its hand to screen content and VR, becoming a more versatile codec than HEVC and meeting the world’s changing needs. For an overview of these emerging codecs, this interview covers them all.

VVC is a joint project between the ITU-T and MPEG (AKA ISO/IEC). Its aim is a 50% reduction in bitrate for the same picture quality, with the emphasis on higher resolutions, HDR and 10-bit video. At the same time, it acknowledges that optimising codecs for natural video is no longer the core requirement for a lot of people. Its versatility comes from being able to encode screen content, independent sub-picture encoding and scalable encoding, among others.

Gary Sullivan from Microsoft Technology & Research talks us through what all this means. He starts by outlining the case for a new codec, particularly the reach for another 50% bitrate saving which may come at further computational cost. Gary points out that, as video use continues to increase, anything that can be done to significantly reduce bitrates will either drive down costs or allow people to use video in better ways.

Any codec is a set of tools all working together to create the final product. Some tools are not always needed, say if you are running on a lower-power system, allowing the codec to be tuned for the situation. Gary puts up a list of some of the tools in VVC, many of which are an evolution of the same tool in HEVC, and highlights a few to give an insight into the improvements under the hood.

Gary’s picks of the big hitters in the tool-set are the Adaptive Loop Filter, which reduces artefacts and prediction errors; affine motion compensation, which provides better motion compensation for movement such as rotation and zooming; triangle partitioning mode, a high-computation improvement in inter prediction; bi-directional optical flow (BIO) for motion prediction; and intra block copy, which is useful for screen content where an identical block is found elsewhere in the same frame.

Gary highlights SCC, Screen Content Coding, which was in HEVC but not in the base profile; this has changed for VVC, so all VVC implementations will have SCC, whereas very few HEVC implementations do. Reference Picture Resampling (RPR) allows the resolution to change from picture to picture, with reference pictures stored at a different resolution from the current picture. And independent sub-pictures allow parts of the video frame to be re-arranged or for only one region to be decoded. This works well for VR and video conferencing, and allows the creation of composite videos without intermediate decoding.

As usual, doing more thinking about how to compress a picture brings further computational demands. MPEG’s LCEVC is the standards body’s way of fighting against this, as notable bitrate improvements are possible even on low-power devices. With VVC, however, versatility is the aim. Decoders see a 60% increase in decode complexity, and whilst MPEG specifications are all about the decoder – hence allowing a lot of ongoing innovation in encoding techniques – current encoder implementations are about 8 or 9 times slower. Performance is better for screen content and at higher resolutions. Whilst the coding part of VVC is mature, the versatility features are still being worked on, but the aim is to publish within about two months.

The video finishes with a Q&A that covers implementing DASH in a low-latency video workflow, how CMAF will be specified to use VVC, and live workflows, which Gary explains always come after the initial file-based work and are best understood after the first attempts at encoder implementations, noting that hardware lags by around two years. He goes on to explain that chipmakers need to see the demand. At the moment, there is a lot of focus from implementors on AV1, not to mention EVC, so the question is how much demand can be generated.

This talk is based on a talk from Benjamin Bross originally given to an ITU workshop (PDF), then presented at Mile High Video by Benjamin, and updated by Gary for this conversation with the Seattle Video Tech community.

Bitmovin has an article highlighting many of the improvements in VVC written by Christian Feldmann who has given many talks on both AV1 and VVC.

Watch now!

Speakers

Gary Sullivan
Microsoft Technology & Research

Webinar: Enabling intelligent media and entertainment

This webinar brings together Support Partners and Microsoft to explain the term ‘intelligent cloud’ and how it can help creative teams produce higher-quality, more innovative content by augmenting human ingenuity, manage content better, and grow audiences while increasing advertising and subscription revenue.

The panel will cover:
– Haivision’s SRT Hub, intelligent media routing and cloud-based workflows
– Highlights from partners such as Avid, Telestream and Wowza.
– New production workflows for remote live production, sports and breaking news.
– Connected production: A process that helps with production collaboration and management, removing traditional information and creative silos which exist today, while driving savings and efficiencies from script to screen.

Register now!
Speakers

Jennifer Cooper
Global Head, Media Industry Strategy,
Microsoft
Trent Collie
Senior Partner Development Manager,
Microsoft
Harry Grinling
Chief Executive Officer,
Support Partners
Lutful Khandker
Principal SDE Lead,
Microsoft

Video: WAVE (Web Application Video Ecosystem) Update

With a wide membership including Apple, Comcast, Google, Disney, Bitmovin, Akamai and many others, the WAVE interoperability effort is tackling web media encoding, playback and platform issues using global standards.

John Simmons from Microsoft takes us through the history of WAVE, looking at the changes in the industry since 2008 and WAVE’s involvement. CMAF represents an important recent technology milestone, one entwined with WAVE’s activity and backed by over 60 major companies.

The WAVE Content Specification is derived from the ISO/IEC standard, “Common media application format (CMAF) for segmented media”. CMAF is the container for the audio, video and other content. It’s not a protocol like DASH, HLS or RTMP; rather, it’s more like an MPEG-2 transport stream. There is now a lot of interest in CMAF thanks to its ability to deliver very low-latency streaming of less than 4 seconds, but it’s also important because it represents a standardisation of fMP4 (fragmented MP4) practices.
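
For a feel of what standardised fMP4 packaging looks like in practice, here is a minimal sketch, assuming ffmpeg is on the PATH, that packages a mezzanine file into fragmented-MP4 DASH segments. The filenames, codecs and two-second segment length are illustrative rather than anything mandated by CMAF or WAVE.

```typescript
// Sketch: package a file into fragmented-MP4 DASH segments with FFmpeg.
// Assumes ffmpeg is installed; input/output names and segment length are illustrative.
import { execFileSync } from "child_process";

function packageToFmp4Dash(input: string, manifest: string): void {
  execFileSync(
    "ffmpeg",
    [
      "-i", input,
      "-c:v", "libx264", "-c:a", "aac",   // AVC/AAC for broad device support
      "-f", "dash",                        // the DASH muxer writes fMP4 segments
      "-seg_duration", "2",                // short segments help push latency down
      "-use_template", "1",
      "-use_timeline", "1",
      manifest,
    ],
    { stdio: "inherit" }
  );
}

packageToFmp4Dash("mezzanine.mp4", "stream.mpd");
```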

The idea of standardising on CMAF allows media profiles to be defined which specify how to encapsulate certain codecs (AV1, HEVC etc.) in the stream. Given it’s a published specification, other vendors will be able to interoperate. Proof of the value of the WAVE project lies in the three amendments to the CMAF standard that John mentions, issued by MPEG, which came directly from WAVE’s work in validating user requirements.

Whilst defining streaming is important in helping in-cloud vendors work together and in allowing broadcasters to build systems more easily, it’s vital that decoder devices are on board too, so much work goes into the decoder-device side of things.

On top of dealing with encoding and distribution, WAVE also specifies HTML5 API interoperability, with the aim of defining baseline web APIs to support media web apps and creating guidelines for media web app developers.
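
As a small, generic illustration of what checking a ‘baseline web API’ can look like in a media web app (this is not code from the WAVE specs), the snippet below probes Media Source Extensions and the Media Capabilities API before committing to a given stream. The codec string and stream parameters are illustrative.

```typescript
// Sketch: feature-detect the media APIs a web player typically relies on.
// Runs in a browser; the codec string and stream parameters are illustrative.
const contentType = 'video/mp4; codecs="avc1.640028"';

const mseAvailable = "MediaSource" in window;
const typeSupported = mseAvailable && MediaSource.isTypeSupported(contentType);

async function canPlaySmoothly(): Promise<boolean> {
  if (!("mediaCapabilities" in navigator)) return typeSupported;
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: "media-source",
    video: { contentType, width: 1920, height: 1080, bitrate: 5_000_000, framerate: 30 },
  });
  return info.supported && info.smooth;
}

canPlaySmoothly().then((ok) => console.log("baseline playback supported:", ok));
```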

This talk was given at the Seattle Video Tech meetup.

Watch now!
Slides from the presentation
Check out the free CTA specs

Speaker

John Simmons
Media Platform Architect,
Microsoft