Video: A Cloudy Future For Post Production

Even before the pandemic, post-production was moving into the cloud. With the mantra of bringing your apps to the media, remote working was coming to offices and now it’s also coming to homes. As with any major shift in an industry, it will suit some people earlier than others, so while we’re in this transition, it’s worth taking some time to ask why people are doing this, why some are not, and what problems are still left unsolved. For wider context on the move to remote editing, watch this video from IET Media.

This video sees Key Code Media CTO, Jeff Sengpiehl, talking to BeBop Technology’s Michael Kammes, Ian McPherson from AWS and Ian Main from Teradici. After laying out the context for the discussion, he asks the panel why consumer cloud solutions aren’t suitable for professionals. Michael takes this first, pointing to consumer screen-sharing solutions which are optimised for getting a task done and don’t offer the fidelity and consistency you need for media workflows. When it comes to storage at the consumer level, cost usually prevents investment in the hardware which would give the low-latency, high-capacity storage needed for many professional video formats. Ian then adds that security plays a much bigger role in professional workflows and that the moment you bring assets down to your PC, you’re extending the security boundary into your consumer software and into your house.


The security topic features heavily in this conversation and Michael talks about the Trusted Partner Network, which is working on a security specification which, it is hoped, will be a ‘standard’ everyone can work to in order to show a product or solution is secure. The aim is to stop every company having its own thick document detailing its security needs, because that means each vendor has to certify themselves time and time again against similar demands which are all articulated differently and therefore defended differently. Ian explains that cloud providers like AWS provide better physical security than most companies could manage and offer security tools for customers to secure their solutions. Many are hoping to form their workflows around the MovieLabs 2030 vision, which recommends ways to move content through the supply chain with security and auditing in mind.

“What’s stopping people from adopting the cloud for post-production?”, poses Jeff. Cost is one reason people are adopting the cloud and one reason others aren’t. Not dissimilar to the ‘IP’ question in other parts of the supply chain, at this point in the technology’s maturity, the cost savings are most tangible to bigger companies or those with particularly high demands for flexibility and scalability. For a smaller operation, there may simply not be enough benefit to justify the move, particularly as it would mean adopting tools that take time to learn and that, even if only temporarily, slow down an editor’s ability to deliver a project in the time they’re used to. On top of that, there’s the issue of cost uncertainty. It’s easy to say how much storage will cost in the cloud, but when you’re using dynamic amounts of computation and moving data in and out of the cloud, estimating your costs becomes difficult, and in a conservative industry this uncertainty can be a blocker to adoption.
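To see why the bill is hard to predict, a back-of-the-envelope sketch helps: storage is a flat, predictable line, while egress and compute scale with how busy the month was. All rates below are hypothetical placeholders, not real provider pricing.

```python
# Rough monthly cloud cost sketch for a post house.
# All rates are assumed placeholders, not real provider pricing.

STORAGE_PER_TB = 23.0       # $/TB-month (assumed)
EGRESS_PER_TB = 90.0        # $/TB transferred out (assumed)
WORKSTATION_PER_HOUR = 1.5  # $/hour for a GPU editing VM (assumed)

def monthly_cost(stored_tb, egress_tb, editor_hours):
    """Storage is predictable; egress and compute vary with activity."""
    storage = stored_tb * STORAGE_PER_TB
    egress = egress_tb * EGRESS_PER_TB
    compute = editor_hours * WORKSTATION_PER_HOUR
    return storage, egress, compute, storage + egress + compute

# A quiet month vs a heavy delivery month, same storage footprint:
quiet = monthly_cost(stored_tb=50, egress_tb=2, editor_hours=160)
busy = monthly_cost(stored_tb=50, egress_tb=40, editor_hours=480)
print(f"quiet: ${quiet[3]:.0f}, busy: ${busy[3]:.0f}")
```

Even with identical storage, the variable part of the bill more than triples in the busy month, which is exactly the uncertainty a fixed-cost facility never had to model.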

Starting to take questions from the audience, Ian outlines some of the ways to get terabytes of media quickly into the cloud whilst Michael explains his approach to editing with proxies, whether just to get you started or for the whole process. Conforming to local, hi-res media outside the cloud may still make sense, or you may have time to upload the files whilst the project is underway. There’s a brief discussion on the increasing availability of Macs for cloud workflows and a discussion about the difficulty, but possibility, of still having a high-quality monitoring feed on a good monitor even if your workstation is totally remote.

Watch now!
Speakers

Ian Main
Technical Marketing Principal,
Teradici Corporation
Ian McPherson
Head of Global Business Development, Media Supply Chain,
AWS
Michael Kammes
VP Marketing & Business Development,
BeBop Technology
Jeff Sengpiehl
CTO,
Key Code Media

Video: The ROI of Deploying Multiple Codecs

Adding a new codec to your streaming service is a big decision. It seems inevitable that H.264 will be around for a long time and that new codecs won’t replace it, but will just take their share of the market. In the short term, this means your streaming service may need to deliver H.264 alongside your new codec, which will add complexity and increase CDN storage requirements. What are the steps to justifying a move to a new codec and what’s the state of play today?

In this Streaming Media panel, Jan Ozer is joined by Facebook’s Colleen Henry, Amnon Cohen-Tidhar from Cloudinary and Anush Moorthy from Netflix to talk about their experiences with, and approach to, new codecs. Anush starts by outlining the need to consider decoder support as a major step in rolling out a new codec. The topic of decoder support came up several times during this panel in discussing the merits of hardware versus software decoding. Colleen points out that running VP9 and VVC decoders in software is possible, but some members of the panel see a benefit in deploying hardware – and on some devices, like smart TVs, hardware decoding is a must. When it comes to supporting third-party devices, we hear that logging is vitally important since, when you can’t get your hands on a device to test with, this is all you have to help improve the experience. It’s best, in Facebook’s view, to work closely with vendors to get the most out of their implementations. Amnon adds that his company is working hard to push for improved reporting from browsers so they can better indicate their capabilities for decoding.


Colleen talks about the importance of codec switching to enhance performance, using codecs like AV1 at the bottom end of the ABR ladder with H.264 at the higher end. This is a good compromise between the computation needed for AV1 and giving the best quality at very low bitrates. But Anush points out that storage will increase when you start using two codecs, particularly in the CDN, so this needs to be weighed when onboarding new codecs. Dropping AV1 support at higher bitrates is an acknowledgement that we also need to consider the cost of encoding in terms of computation.
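The per-rung trade-off described above can be sketched as a simple selection rule: spend AV1’s encode cost only where its low-bitrate efficiency pays off, fall back to H.264 everywhere else, and accept that the switched rungs must be stored twice. The threshold and function names here are illustrative, not from the panel.

```python
# Sketch of per-rung codec selection in an ABR ladder: AV1 at the bottom,
# H.264 higher up. The cut-off value is an illustrative assumption.

AV1_MAX_KBPS = 1200  # assumed rung bitrate below which AV1 is worth the compute

def ladder_codecs(rungs_kbps, device_supports_av1):
    """Return (bitrate, codec) pairs for one client's manifest."""
    ladder = []
    for kbps in rungs_kbps:
        if device_supports_av1 and kbps <= AV1_MAX_KBPS:
            ladder.append((kbps, "av1"))
        else:
            ladder.append((kbps, "h264"))
    return ladder

rungs = [400, 800, 1600, 3200, 6000]
print(ladder_codecs(rungs, device_supports_av1=True))
```

Note the storage cost Anush raises: every rung at or below the threshold now exists in two encodes on the CDN, one per codec, so the origin footprint grows with each switched rung.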

The panel briefly discusses the newer codecs such as MPEG’s VVC and LCEVC. Colleen sees promise in VVC inasmuch as it can be decoded in software today. She also says good things about LCEVC, suggesting we call it an enhancement codec due to the way it works. To find out more about these, check out this SMPTE talk. Both can be deployed as software decoders, which offers a way to get started while hardware support establishes itself in the ecosystem.

Colleen discusses the importance of understanding your assets. If you have live video, your approach is very different to on-demand. If you are lucky enough to have an asset that is getting millions upon millions of views, you’ll want to compress every bit out of that, but for live, there’s a limit to what you can achieve. Also, you need to understand how your long-tail archive is going to be accessed to decide how much effort your business wants to put into compressing the assets further.

The video comes to a close by discussing the Alliance for Open Media’s approach to AV1 encoders and decoders, covering the hard work optimising the libaom reference encoder and the other implementations which are ready for production. Colleen points out the benefit of WebAssembly, which allows a full decoder to be pushed into the browser, and the discussion ends talking about codec support for HDR delivery technologies such as HDR10+.

Watch now!
Speakers

Colleen Henry
Cobra Commander of Facebook Video Special Forces
Anush Moorthy
Manager, Video & Image Encoding,
Netflix
Amnon Cohen-Tidhar
Senior Director of Video Architecture,
Cloudinary
Moderator: Jan Ozer
Principal, Streaming Learning Center
Contributing Editor, Streaming Media

Video: Deterministic Video Switching in IP Networks

The broadcast industry spent a lot of time getting synchronous cuts working in analogue and SDI. Now that IP is being used more and more, there’s a question to be asked about whether video switching should be done in the network itself or at the video level within the receiver. Carl Ostrom from the VSF, along with Brad Gilmer, talks us through the pros and cons of video switching within the network itself.

First off, switching video at a precise point within the stream is known as ‘deterministic switching’. The industry has become used to solid-state crosspoint switching which can be precisely timed so that the switch happens within the vertical blanking interval of the video, providing a clean switch. This isn’t a hitless switch in the meaning of SMPTE ST 2022-7, which allows kit to switch from one identical stream to another to deal with packet loss; this is switching between two different streams with, typically, different content. With the move to ST 2110, we have the option of changing the destination of packets on the fly, which can achieve this same switching with the benefit of saving bandwidth. For a receiving device to do a perfect switch itself, it would need to receive both the current video and the next video simultaneously, doubling the incoming bandwidth. Not only does this increase the bandwidth, it can also lead to uneven bandwidth utilisation.
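To put numbers on that doubling, a quick calculation of the active-picture payload of an uncompressed ST 2110-20 stream (RTP/UDP/IP overhead ignored, so this is a floor, not an exact figure):

```python
# Approximate uncompressed video bandwidth for a SMPTE ST 2110-20 stream,
# counting active pixels only; packet overhead is ignored.

def video_gbps(width, height, fps, bits_per_pixel):
    return width * height * bits_per_pixel * fps / 1e9

# 1080p59.94 with 10-bit 4:2:2 sampling averages 20 bits per pixel
single = video_gbps(1920, 1080, 60000 / 1001, 20)

# A receiver doing its own clean switch must take both streams at once:
both = 2 * single
print(f"one stream ≈ {single:.2f} Gb/s, during a switch ≈ {both:.2f} Gb/s")
```

Roughly 2.5 Gb/s per stream becomes about 5 Gb/s into one receiver for the duration of every receiver-side switch, which is the uneven, bursty demand that makes in-network switching attractive.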


Carl’s open question to the webinar attendees is whether network switching is needed, and he invites Thomas Edwards from the audience to speak. Thomas has previously done a lot of work proposing switching techniques and has also demonstrated that the P4 programming language for switches can successfully manipulate SMPTE ST 2110 traffic in real-time, as seen in this demo. Thomas comments that bandwidth within networks built for 2110 doesn’t seem to be a problem, so subscribing to two streams is working well. We hear further comments regarding the complexity of network-based switching, which possibly also drives up the cost of the switches themselves. Make-before-break can also be a simpler technology to fault-find when a problem occurs.

Watch now!
Speakers

Carl Ostrom Carl Ostrom
Vice President,
VSF
Brad Gilmer Brad Gilmer
Executive Director, Video Services Forum
Executive Director, Advanced Media Workflow Association (AMWA)

Video: Solving the 8K Distribution Challenge

With the Tokyo Olympics less than 2 weeks away, 8K is back in focus. NHK have famously been key innovators and promoters of 8K for many years, having launched an 8K channel on satellite, and will be broadcasting the games in 8K. That’s all very well, but is 8K a viable broadcast format for other public and commercial broadcasters? One problem for 8K is how to get it to people. Whilst there are plenty of bandwidth problems to contend with during production, all of that will be for nought if we can’t get it to the customer.

This panel, run by the 8K Association in conjunction with SMPTE, looks to new codecs to help reduce the burden on connectivity whether RF or networks. The feeling is that HEVC just can’t deliver practical bandwidths, so what are the options? The video starts with Bill Mandel from Samsung introducing the topics of HDR display using HDR10+, streaming with CMAF and bandwidth. Bill discusses future connectivity improvements which should come into play and then looks at codec options.


Bill and Stephan Wenger give their views on the codecs which were explained in detail in this SMPTE deep dive video, so do take a look at that article for more context. AV1 is the first candidate for 8K distribution that many think of since it is known to have better compression than HEVC, is even seeing some hardware support in TVs and is being trialled by YouTube. However, at 50Mbps, the trialled streams are not suitable for many connections. Looking for better performance, MPEG’s EVC is a potential candidate which offers continued improvement over AV1 and a better licensing model than HEVC. Stephan’s view on codecs is that users really don’t care what the codec is, they just need the service to work. He points towards VVC, the direct successor to HEVC, as a way forward for 8K since it delivers a 40 to 50% bandwidth reduction, opening up the possibility of a 25Mbps video channel. Now a published MPEG standard, the market awaits patent information and vendor implementations.
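The quoted 40–50% saving maps directly onto the bitrates in the talk; the 50Mbps figure is the panel’s, the rest is simple arithmetic:

```python
# Channel bitrates implied by a 40-50% reduction against a 50 Mb/s 8K stream.
reference_mbps = 50.0
for saving in (0.40, 0.50):
    target = reference_mbps * (1 - saving)
    print(f"{saving:.0%} saving -> {target:.0f} Mb/s")
```

A 50% saving lands exactly on the 25Mbps channel Stephan mentions; even the conservative 40% end brings 8K under 30Mbps, within reach of many broadband connections.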

Stephan talks about MPEG’s LCEVC standard, which has similarities to Samsung’s Scalenet which Bill introduced. The idea is to encode at a lower resolution and use upscaling to get to the desired resolution, using AI/machine learning to make the scaling look good and, certainly in the case of LCEVC, a low-bandwidth stream of enhancement data which adds in key parts of the video, such as edges, which would otherwise be lost. Stephan says that he awaits implementations in the market to see how well this works. Certainly, taking into account LCEVC’s ability to produce compression using less computation, it may be an important way to bring 8K to some devices and STBs.
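The base-plus-enhancement idea can be shown with a deliberately toy example: downscale a signal, carry the small version in the “base codec”, upscale it at the decoder and transmit only the residual detail as the enhancement layer. Real LCEVC uses far more sophisticated transforms and scaling; this sketch only illustrates the layering principle.

```python
# Toy illustration of the LCEVC layering idea on a 1-D "signal".
# Nearest-neighbour scaling stands in for the real upsampler.

def downscale(signal):
    return signal[::2]  # keep every other sample (half-rate base)

def upscale(signal):
    return [s for s in signal for _ in range(2)]  # nearest-neighbour

source = [10, 12, 20, 22, 30, 31, 40, 44]
base = downscale(source)            # what the base codec would carry
reconstructed = upscale(base)       # decoder-side upscale of the base
residual = [s - r for s, r in zip(source, reconstructed)]  # enhancement data

# Decoder output: upscaled base + enhancement recovers the original
assert [r + e for r, e in zip(reconstructed, residual)] == source
```

The residual is mostly small values clustered where detail was lost in scaling, which is why the enhancement layer compresses to a low-bandwidth stream on top of the cheaper, lower-resolution base encode.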

The discussion is rounded off by Mickaël Raulet, CTO of ATEME, who talks us through an end-to-end test broadcast done using VVC. This was delivered by satellite to set-top boxes and over streaming, with a UHD channel at 15Mbps. His take-away from the experience is that VVC is a viable option for broadcasters and that 8K may also be possible using EVC’s Main profile. The video finishes with a Q&A covering:

  • Codecs for live video
  • The pros and cons of scaling in codecs
  • Codec licensing
  • Multiple generational encoding degeneration


Watch now!
Speakers

Bill Mandel
VP, Industry Relations,
Samsung Research America
Mickaël Raulet
CTO,
ATEME
Chris Chinnock
Executive Director,
8K Association
Stephan Wenger
Senior Director, IP & Standards,
Tencent