Video: The ROI of Deploying Multiple Codecs

Adding a new codec to your streaming service is a big decision. It seems inevitable that H.264 will be around for a long time and that new codecs won’t replace it but simply take a share of the market. In the short term, this means your streaming service may need to deliver both H.264 and your new codec, which adds complexity and increases CDN storage requirements. So what are the steps to justifying a move to a new codec, and what’s the state of play today?

In this Streaming Media panel, Jan Ozer is joined by Facebook’s Colleen Henry, Amnon Cohen-Tidhar from Cloudinary and Anush Moorthy from Netflix to talk about their experiences with, and approach to, new codecs. Anush starts by outlining decoder support as a major consideration when rolling out a new codec, and the topic came up several times during the panel when discussing the merits of hardware versus software decoding. Colleen points out that decoding codecs such as VP9 and VVC in software is possible, but the panel also sees a benefit in hardware decoding – on some devices, such as smart TVs, it’s a must. When it comes to supporting third-party devices, we hear that logging is vitally important: when you can’t get your hands on a device to test with, logs are all you have to help improve the experience. It’s best, in Facebook’s view, to work closely with vendors to get the most out of their implementations. Amnon adds that his company is working hard to push forward improved reporting from browsers so they can better indicate their decoding capabilities.



Colleen talks about the importance of codec switching within the ABR ladder: using a codec like AV1 to enhance performance at the bottom end, with H.264 at the higher rungs. This is a good compromise between the computation AV1 demands and the superior quality it delivers at very low bitrates. Anush points out, though, that storage will increase when you start using two codecs, particularly in the CDN, so this needs to be weighed when onboarding a new codec. Dropping AV1 at the higher bitrates is also an acknowledgement that the computational cost of encoding has to be considered.
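As a rough sketch of how this bottom-of-ladder switch might look, the snippet below assigns AV1 renditions only to rungs below an assumed bitrate cutoff and estimates the extra CDN storage the duplicate renditions bring. The ladder, the 1,500kbps cutoff and the 60% AV1 bitrate factor are all illustrative assumptions, not figures from the panel.

```python
# Hypothetical ABR ladder: illustrative rungs and bitrates, not a real service's.
LADDER = [
    {"height": 1080, "kbps": 5000},
    {"height": 720,  "kbps": 3000},
    {"height": 480,  "kbps": 1200},
    {"height": 360,  "kbps": 600},
    {"height": 240,  "kbps": 300},
]

AV1_CUTOFF_KBPS = 1500  # assumed threshold: AV1 only below this rung bitrate


def assign_codecs(ladder, cutoff=AV1_CUTOFF_KBPS):
    """Give low-bitrate rungs an AV1 rendition alongside H.264;
    higher rungs stay H.264-only to cap encode compute."""
    out = []
    for rung in ladder:
        codecs = ["h264"]
        if rung["kbps"] < cutoff:
            codecs.append("av1")
        out.append({**rung, "codecs": codecs})
    return out


def extra_storage_ratio(ladder_with_codecs, av1_bitrate_factor=0.6):
    """Rough extra CDN storage from the duplicate AV1 renditions,
    assuming AV1 rungs are encoded at ~60% of the H.264 bitrate."""
    base = sum(r["kbps"] for r in ladder_with_codecs)
    extra = sum(r["kbps"] * av1_bitrate_factor
                for r in ladder_with_codecs if "av1" in r["codecs"])
    return extra / base


ladder = assign_codecs(LADDER)
print(extra_storage_ratio(ladder))  # fraction of additional storage needed
```

With these made-up numbers the AV1 duplicates add only around 12% storage, since the low rungs are cheap; the trade-off shifts as the cutoff moves up the ladder.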

The panel briefly discusses the newer codecs such as MPEG’s VVC and LCEVC. Colleen sees promise in VVC inasmuch as it can be decoded in software today. She also speaks well of LCEVC, suggesting we call it an ‘enhancement codec’ because of the way it works. To find out more about these, check out this SMPTE talk. Both can be deployed as software decoders, which offers a way to get started while hardware support establishes itself in the ecosystem.

Colleen discusses the importance of understanding your assets. If you have live video, your approach is very different to on-demand. If you are lucky enough to have an asset that is getting millions upon millions of views, you’ll want to compress every bit out of that, but for live, there’s a limit to what you can achieve. Also, you need to understand how your long-tail archive is going to be accessed to decide how much effort your business wants to put into compressing the assets further.

The video comes to a close with the Alliance for Open Media’s approach to AV1 encoders and decoders, covering the hard work of optimising the libaom reference encoder and the other implementations that are now ready for production. Colleen points out the benefit of WebAssembly, which allows a full decoder to be pushed into the browser, and the discussion ends with codec support for HDR delivery technologies such as HDR10+.

Watch now!

Colleen Henry
Cobra Commander of Facebook Video Special Forces.
Anush Moorthy
Manager, Video & Image Encoding,
Netflix
Amnon Cohen-Tidhar
Senior Director of Video Architecture,
Cloudinary
Moderator: Jan Ozer
Principal, Streaming Learning Center
Contributing Editor, Streaming Media

Video: Solving the 8K Distribution Challenge

With the Tokyo Olympics less than two weeks away, 8K is back in focus. NHK have famously been key innovators and promoters of 8K for many years, having launched an 8K channel on satellite, and will be broadcasting the games in 8K. That’s all very well, but is 8K a viable broadcast format for other public and commercial broadcasters? One problem for 8K is how to get it to people. Whilst there are plenty of bandwidth problems to contend with during production, all of that will be for nought if we can’t get it to the customer.

This panel, run by the 8K Association in conjunction with SMPTE, looks to new codecs to help reduce the burden on connectivity, whether RF or IP networks. The feeling is that HEVC just can’t deliver practical bandwidths, so what are the options? The video starts with Bill Mandel from Samsung introducing the topics of HDR display using HDR10+, streaming with CMAF and bandwidth. Bill discusses future connectivity improvements which should come into play and then looks at codec options.



Bill and Stephan Wenger give their views on the codecs, which were explained in detail in this SMPTE deep-dive video, so do take a look at that article for more context. AV1 is the first candidate for 8K distribution that many think of since it’s known to have better compression than HEVC, is seeing some hardware support in TVs and is being trialled by YouTube. However, the trial runs at 50Mbps and is therefore not suitable for many connections. Looking for better performance, MPEG’s EVC is a potential candidate, offering continued improvement over AV1 and a better licensing model than HEVC. Stephan’s view is that users really don’t care what the codec is; they just need the service to work. He points towards VVC, the direct successor to HEVC, as a way forward for 8K since it delivers a 40 to 50% bandwidth reduction, opening up the possibility of a 25Mbps video channel. VVC is now a published MPEG standard, and the market awaits patent information and vendor implementations.
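As a quick sanity check on those figures, taking a nominal 50Mbps 8K channel and applying the claimed 40–50% saving lands squarely in the 25–30Mbps range:

```python
# Back-of-envelope check of the panel's numbers (illustrative, not measured).
hevc_8k_mbps = 50.0            # roughly where today's 8K trials sit
vvc_savings = (0.40, 0.50)     # claimed 40-50% bitrate reduction vs HEVC

estimates = [hevc_8k_mbps * (1 - s) for s in vvc_savings]
print(f"VVC 8K estimate: {min(estimates):.0f}-{max(estimates):.0f} Mbps")
```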

Stephan talks about MPEG’s LCEVC standard, which has similarities to Samsung’s Scalenet, introduced earlier by Bill. The idea is to encode at a lower resolution and upscale to the desired resolution, using AI/machine learning to make the scaling look good and, certainly in the case of LCEVC, a low-bandwidth stream of enhancement data that adds back key parts of the video, such as edges, which would otherwise be lost. Stephan says he awaits implementations in the market to see how well this works. Certainly, given LCEVC’s ability to compress using less computation, it may be an important way to bring 8K to some devices and STBs.
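To illustrate the principle, here is a toy one-dimensional sketch of the base-plus-enhancement idea, not the actual LCEVC bitstream or coding tools:

```python
# Toy sketch of the enhancement-layer idea (not the actual LCEVC bitstream):
# encode a downscaled base, upscale it, and carry the residual separately.

def downsample(signal):
    """Base layer: keep every other sample (crude 2x downscale)."""
    return signal[::2]

def upsample(base):
    """Decoder-side upscale: sample repetition (a real system would
    use a smarter, possibly ML-based, scaler)."""
    out = []
    for s in base:
        out.extend([s, s])
    return out

def enhancement(original, upscaled):
    """Residual the enhancement stream must carry - largest at edges,
    where simple upscaling loses the most detail."""
    return [o - u for o, u in zip(original, upscaled)]

frame = [10, 10, 10, 80, 80, 80, 10, 10]   # a hard edge in a 1-D "frame"
base = downsample(frame)
up = upsample(base)
resid = enhancement(frame, up)
recon = [u + r for u, r in zip(up, resid)]
print(resid)          # non-zero only where the upscaler got it wrong
print(recon == frame)
```

The residual is zero everywhere except at the edge, which is exactly why such an enhancement stream can stay low-bandwidth.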

The discussion is rounded off by Mickaël Raulet, CTO of ATEME, who talks us through an end-to-end test broadcast done using VVC. This was delivered by satellite to set-top boxes and over streaming, with a UHD channel at 15Mbps. His take-away from the experience is that VVC is a viable option for broadcasters, and that 8K may also be possible using EVC’s main profile. The video finishes with a Q&A covering:

  • Codecs for live video
  • The pros and cons of scaling in codecs
  • Codec licensing
  • Multiple generational encoding degeneration


    Watch now!

    Bill Mandel
    VP, Industry Relations,
    Samsung Research America
    Mickaël Raulet
    CTO,
    ATEME
    Chris Chinnock
    Executive Director,
    8K Association
    Stephan Wenger
    Senior Director, IP & Standards,
    Video: The Status of 8K and Light Field / Holographic Development

    8K is the next step in the evolution of resolution but as we saw with 4K, it’s about HDR, wide colour gamut and higher frame rates, too. This video looks at the real-world motivations to use 8K and glimpses the work happening now to take imaging even further into light field and holography.

    Broadcast has always been about capturing the best-quality video, using that quality to process the video and then delivering it to the viewer. Initially this meant improving green-screen/chromakey effects, and sharp, high-quality video is still important in any special effects or video processing. But with 8K you can deliver a single camera feed which can be cut up into two, three or more HD feeds that look like different cameras. Pan-and-scan isn’t new, but it has more flexibility taken from an 8K raster. Perhaps the main ‘day 1’ benefit of 8K, though, is future-proofing – acquiring the highest-fidelity content for as-yet-unknown uses later down the line.
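As a sketch of the arithmetic behind those cut-outs: an 8K raster holds sixteen non-overlapping full-HD windows, and each ‘virtual camera’ is just a crop rectangle chosen from it. The function names and centre coordinates below are illustrative:

```python
# How many full-HD windows a UHD-2 (8K) raster can hold, plus a hypothetical
# crop rectangle for a pan-and-scan style "virtual camera" cut-out.

UHD2 = (7680, 4320)   # 8K raster
HD = (1920, 1080)     # per-feed output

def hd_grid(src=UHD2, out=HD):
    """Number of non-overlapping HD tiles that fit in the 8K frame."""
    cols, rows = src[0] // out[0], src[1] // out[1]
    return cols * rows

def crop_rect(cx, cy, out=HD, src=UHD2):
    """Clamp an HD crop centred on (cx, cy) to the source raster -
    one 'virtual camera' pulled from the full picture."""
    x = min(max(cx - out[0] // 2, 0), src[0] - out[0])
    y = min(max(cy - out[1] // 2, 0), src[1] - out[1])
    return (x, y, out[0], out[1])

print(hd_grid())             # 16 non-overlapping HD feeds
print(crop_rect(4000, 2000)) # one clamped HD window from the 8K frame
```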

    Chris Chinnock from the 8K Association explains that 8K is in active use in Japan, both at the upcoming Olympics and on a permanent channel, BS8K, which transmits on satellite at 80Mb/s. Dealing with such massive bitrates, Chris explains, has 8K hitting the same pain points 4K did seven years ago. For file-based workflows, he continues, these have largely been solved, though on the broadcast side challenges remain. The world of codecs has moved on a lot since then with the addition of LCEVC, VVC, EVC, AVS3 and others, which promise to help bring 8K distribution to the home down to a more manageable 25Mb/s or below.



    Originating 8K material is not hard inasmuch as the cameras exist and the workflows are possible. Many high-budget films are being acquired at this resolution, but the fact is that getting enough 8K material for a whole channel is not practical, so upscaling content to 8K is a must. Recent advances in machine learning-based upscaling have revolutionised the quality you can expect over traditional techniques.

    Finishing off on 8K, Chris points out that a typical 8K video format takes 30Gbps uncompressed, which is catered for easily by HDMI 2.1, DisplayPort 1.4a and Thunderbolt. 8K TVs are already available, and current investment in Chinese G10.5 fabs suggests more 65″ and 75″ panels will come onto the market.
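The 30Gbps figure is easy to reproduce. Assuming 60fps, 10-bit 4:2:0 video, which averages 15 bits per pixel, an assumption on our part since the source doesn’t specify the format:

```python
# Where the ~30 Gbps uncompressed figure comes from (assumed 60 fps,
# 10-bit 4:2:0, i.e. 10 * 1.5 = 15 bits per pixel on average).

width, height, fps = 7680, 4320, 60
bits_per_pixel = 10 * 1.5   # 10-bit samples, 4:2:0 chroma subsampling

gbps = width * height * fps * bits_per_pixel / 1e9
print(f"{gbps:.1f} Gbps")   # ~29.9 Gbps, comfortably within HDMI 2.1's 48 Gbps
```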

    Changing topic, Chris looks at generating immersive content for either light field or holographic displays. There are a number of ways to capture a real-life scene, but all of them involve many cameras and a lot of data. You can avoid the real world by using a games engine such as Unity or Unreal, but these have the same limitations as they do in computer games: they can look simultaneously amazing and unrealistic. Whatever you do, getting the data from A to B is a difficult task and a simple video encoder won’t cut it. There’s a lot of metadata involved in immersive experiences and, in the case of point clouds, there is no conventional video involved. This is why Chris is part of an MPEG group working on the future capabilities of MPEG-I, aiming to identify requirements for MPEG and other standards bodies, recommend distribution architectures and establish a standard representation for immersive media.

    The ITMF, Immersive Technology Media Format, is a suggested container that can hold computer graphics, volumetric information, light field arrays and AI/computational photography. This feeds into a decoder that only takes what it needs out of the file/stream depending on whether it’s a full holographic display or something simpler.

    Chris finishes his presentation explaining the current state of immersive displays, the different types and who’s making them today.

    Watch now!

    Chris Chinnock
    Executive Director,
    8K Association

    The New Video Codec Landscape – VVC, EVC, HEVC, LCEVC, AV1 and more

    In the penultimate look back at the top articles of 2020, we recognise the continued focus on new codecs. Let’s not shy away from saying 2020 was generous, giving us VVC, LCEVC and EVC from MPEG. AV1 was actually delivered in 2018, with an update (Errata 1) in 2019, but the industry has avidly tracked the improving speeds of its encoder and decoder implementations. And no codec discussion has much relevance without comparisons to AV1, HEVC and VP9.

    So with all these codecs spinning around, it’s no surprise that one of the most-viewed articles of 2020 was a video entitled “VVC, EVC, LCEVC, WTF? – An update on the next hot codecs from MPEG”. That video was from 2019 and, since these codecs have all now been published, this extensive roundup from SMPTE is a much better resource for understanding them in detail and in context with their predecessors.

    Click here to read the article and watch the video.

    The article explains many of the features of the new codecs: both how they work and why there are three. After all, if VVC is so good, why release EVC? We learn that they optimise for different priorities, such as computation, bitrate and patent licensing, among other aspects.


    Sean McCarthy
    Director, Video Strategy and Standards,
    Dolby Laboratories
    Walt Husak
    Director, Image Technologies,
    Dolby Laboratories