Video: The ROI of Deploying Multiple Codecs

Adding a new codec to your streaming service is a big decision. It seems inevitable that H.264 will be around for a long time and that new codecs won't replace it, but simply take their share of the market. In the short term, this means your streaming service may need to deliver both H.264 and your new codec, which adds complexity and increases CDN storage requirements. What are the steps to justifying a move to a new codec, and what's the state of play today?

In this Streaming Media panel, Jan Ozer is joined by Facebook's Colleen Henry, Cloudinary's Amnon Cohen-Tidhar and Netflix's Anush Moorthy to talk about their experiences with new codecs and how they approach adopting them. Anush starts by outlining decoder support as a major consideration when rolling out a new codec, and the topic came up several times during the panel when discussing the merits of hardware versus software decoding. Colleen points out that decoding codecs such as VP9 and VVC in software is possible, but some members of the panel see a benefit in hardware decoding; on some devices, such as smart TVs, it's a must. When it comes to supporting third-party devices, we hear that logging is vitally important: when you can't get your hands on a device to test with, logs are all you have to help improve the experience. It's best, in Facebook's view, to work closely with vendors to get the most out of their implementations. Amnon adds that his company is working hard to push for improved reporting from browsers so they can better indicate their decoding capabilities.
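
Browsers already expose some of this capability reporting through the Media Capabilities API. Below is a minimal sketch, not from the talk, of how a web player might check whether a device can decode, and hardware-decode, an AV1 stream; the resolution, bitrate and codec string are illustrative.

```typescript
// Minimal sketch: query the browser's Media Capabilities API for AV1 decode support.
// The stream parameters below are illustrative, not figures from the panel.
async function canDecodeAv1Efficiently(): Promise<boolean> {
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: "media-source",
    video: {
      contentType: 'video/mp4; codecs="av01.0.08M.08"', // AV1 Main profile, level 4.0, 8-bit
      width: 1920,
      height: 1080,
      bitrate: 2_000_000, // bits per second
      framerate: 30,
    },
  });
  // powerEfficient is a strong hint that hardware decoding is available.
  return info.supported && info.smooth && info.powerEfficient;
}
```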

Colleen talks about the importance of codec switching within the ABR ladder: using a codec like AV1 at the bottom end, where it delivers the best quality at very low bitrates, with H.264 at the higher end. This is a good compromise between the computation AV1 needs and the quality it brings where it matters most. But Anush points out that storage will increase when you start using two codecs, particularly in the CDN, so this has to be factored into the decision to onboard a new codec. Dropping AV1 at the higher bitrates is also an acknowledgement that the computational cost of encoding needs to be considered.
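
As a rough illustration of this dual-codec ladder, here is a small sketch that assigns AV1 to the low rungs and H.264 above a cut-off bitrate. The rungs, codec strings and the cut-off are assumptions for illustration, not figures from the panel.

```typescript
// Sketch of a dual-codec ABR ladder: AV1 on the low rungs, H.264 above a cut-off.
// All numbers here are illustrative assumptions.
interface Rung { height: number; bitrateKbps: number; codec: string }

const AV1_CUTOFF_KBPS = 1000; // assumed point where AV1's encode cost outweighs its gain

function buildLadder(rungs: { height: number; bitrateKbps: number }[]): Rung[] {
  return rungs.map(r => ({
    ...r,
    codec: r.bitrateKbps <= AV1_CUTOFF_KBPS
      ? "av01.0.05M.08"   // AV1 for the low-bitrate rungs
      : "avc1.640028",    // H.264 High profile for the rest
  }));
}

console.log(buildLadder([
  { height: 270, bitrateKbps: 300 },
  { height: 432, bitrateKbps: 700 },
  { height: 720, bitrateKbps: 2400 },
  { height: 1080, bitrateKbps: 4500 },
]));
```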

The panel briefly discusses newer codecs such as MPEG VVC and MPEG LCEVC. Colleen sees promise in VVC inasmuch as it can be decoded in software today. She also speaks well of LCEVC, suggesting we call it an 'enhancement codec' due to the way it works. To find out more about these, check out this SMPTE talk. Both can be deployed as software decoders, which offers a way to get started while hardware support establishes itself in the ecosystem.

Colleen discusses the importance of understanding your assets. If you have live video, your approach is very different to on-demand. If you are lucky enough to have an asset that is getting millions upon millions of views, you’ll want to compress every bit out of that, but for live, there’s a limit to what you can achieve. Also, you need to understand how your long-tail archive is going to be accessed to decide how much effort your business wants to put into compressing the assets further.

The video comes to a close by discussing the Alliance for Open Media's approach to AV1 encoders and decoders, covering the hard work optimising the libaom reference encoder and the other implementations which are ready for production. Colleen points out the benefit of WebAssembly, which allows a full decoder to be pushed into the browser, and the discussion ends with codec support for HDR delivery technologies such as HDR10+.
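
For a sense of what pushing a decoder into the browser looks like in practice, here is a minimal sketch of loading a decoder compiled to WebAssembly. The module URL and its export surface are hypothetical; real builds (dav1d compiled to WASM, for example) each define their own interface.

```typescript
// Minimal sketch: load a hypothetical WebAssembly decoder module in the browser.
// The URL and any exported function names are assumptions for illustration.
async function loadWasmDecoder(url: string): Promise<WebAssembly.Exports> {
  const { instance } = await WebAssembly.instantiateStreaming(fetch(url));
  return instance.exports; // the real exports depend on how the decoder was built
}

loadWasmDecoder("/decoders/av1_decoder.wasm")
  .then(exports => console.log("decoder ready, exports:", Object.keys(exports)));
```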

Watch now!
Speakers

Colleen Henry
Cobra Commander of Facebook Video Special Forces.
Anush Moorthy
Manager, Video & Image Encoding,
Netflix
Amnon Cohen-Tidhar
Senior Director of Video Architecture,
Cloudinary
Moderator: Jan Ozer
Principal, Streaming Learning Center
Contributing Editor, Streaming Media

Video: CMAF with ByteRange – A Unified & Efficient Solution for Low Latency Streaming

Apple's LL-HLS protocol is the most recent technology offering to deliver low-latency streams of just 2 or 3 seconds to the viewer. Before that, CMAF, used with MPEG DASH, also enabled low-latency streaming. This panel with ATEME, Akamai and THEOplayer asks how they both work and how they differ, and maps out a way to deliver both at once, covering the topic from the perspective of the encoder manufacturer, the CDN and the player client.

We start with ATEME's Mickaël Raulet who outlines CMAF, starting with its inception in 2016 with Microsoft and Apple. CMAF was published in 2018 and most recently received detailed guidelines for low-latency best practice from the DASH Industry Forum in 2020. He outlines that the idea of CMAF is to build on DASH to find a single way of delivering both DASH and HLS using one set of media, minimising hits on the cache as well as storage. Harnessing ISO BMFF, CMAF adds the ability to break segments into smaller fragments, opening up the promise of low-latency delivery.
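
Since a CMAF segment is just a sequence of ISO BMFF boxes, those fragment boundaries can be found by walking the top-level boxes. The sketch below is an assumption-based illustration rather than anything from the talk: it records the offset and size of each box so that moof/mdat pairs can later be exposed as byteranges into a single stored segment.

```typescript
// Sketch: walk the top-level ISO BMFF boxes of a CMAF segment to find fragment
// boundaries (each fragment is a moof box followed by its mdat). Illustrative only;
// 64-bit 'largesize' boxes are not handled here.
function topLevelBoxes(buf: ArrayBuffer): { type: string; offset: number; size: number }[] {
  const view = new DataView(buf);
  const boxes: { type: string; offset: number; size: number }[] = [];
  let offset = 0;
  while (offset + 8 <= buf.byteLength) {
    const size = view.getUint32(offset); // 32-bit box size, includes the 8-byte header
    const type = String.fromCharCode(
      view.getUint8(offset + 4), view.getUint8(offset + 5),
      view.getUint8(offset + 6), view.getUint8(offset + 7),
    );
    if (size < 8) break; // size 0 or 1 needs special handling; stop for this sketch
    boxes.push({ type, offset, size });
    offset += size;
  }
  return boxes;
}
```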

Mickaël discusses the methods of getting hold of these short fragments. If you store the fragments separately as well as the full segment, you double your storage, since the four fragments together make up the whole segment, so it's better to have all the fragments written as a single segment. Byterange requests are the way forward: the client asks the server to start delivering a file from a certain number of bytes into it. We can even request this ahead of time, using a preload hint, so that the server can push the data as soon as it's ready.
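
Below is a minimal sketch of the client side of such a byterange request: an HTTP GET with a Range header for one fragment of a stored segment. The URL and byte offsets are hypothetical; in practice they would come from the playlist (for example from EXT-X-PART or EXT-X-PRELOAD-HINT tags carrying BYTERANGE information).

```typescript
// Sketch: fetch one fragment of a segment with an HTTP Range request.
// The URL and offsets are hypothetical placeholders.
async function fetchFragment(url: string, start: number, length: number): Promise<ArrayBuffer> {
  const response = await fetch(url, {
    headers: { Range: `bytes=${start}-${start + length - 1}` },
  });
  if (response.status !== 206) {
    throw new Error(`expected a 206 Partial Content response, got ${response.status}`);
  }
  return response.arrayBuffer();
}

// e.g. the second one-second fragment of a four-second segment
fetchFragment("https://cdn.example.com/video/seg_0001.m4s", 250_000, 250_000)
  .then(data => console.log(`fetched ${data.byteLength} bytes`));
```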

Next we hear from Akamai's Will Law who examines how Apple's LL-HLS protocol can work within the CDN to provide either CMAF or LL-HLS from the same media files. He uses the example of a 4-second segment made up of four one-second parts. A standard-latency player would want to download the whole 4-second segment, whereas an LL-HLS player would want the parts. DASH has similar requirements, so Will focuses on how to bring all of these down to the minimum set of files needed, which he calls a 'common cache footprint', using CMAF.

He shows how byterange requests work and how to structure them, and explains that, to help with bandwidth estimation, the server will wait until the whole of the byterange is available before it sends any data, allowing the client to download it at wire speed. Moreover, a single request can deliver the rest of the segment, meaning 7 requests get collapsed into 1 or 2, which is an important saving for CDNs working at scale. It is possible to use longer GOPs for the 4-second segment than for the one-second parts, but for this technique to work, it's important to maintain the same structure within the full 4-second segment as in the one-second parts.

THEOplayer's Pieter-Jan Speelmans takes the floor next, explaining his view from the player end of the chain. He discusses support for LL-HLS across different platforms such as Android, Android TV and Roku, and concludes that there is, perhaps surprisingly, fairly wide support for Apple's LL-HLS protocol. Pieter-Jan spends some time building on Will's discussion about reducing the number of requests made by browsers: CORS checks can cause extra requests to be needed when using byterange requests. For implementing ABR, it's important to understand how close you are to the available bandwidth. Pieter-Jan says that you shouldn't only use the download time to determine throughput, but also metadata from the player, to get as exact an estimate as possible. We also hear about dealing with subtitles, which may need to be on screen longer than the duration of any of the parts, or even longer than the segment. These need to be adapted so that they are shown repeatedly and each chunk contains the correct information. This can lead to flashing on re-display so, as with many things in modern players, it needs to be carefully and intentionally dealt with to ensure the correct user experience.
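
To make the throughput point concrete, here is a small sketch of ABR rung selection that combines measured download throughput with player metadata such as the buffer level. The averaging, safety margins and buffer threshold are illustrative assumptions, not THEOplayer's algorithm.

```typescript
// Sketch: combine measured throughput samples with player metadata (buffer level)
// when picking an ABR rung. All weights and thresholds are illustrative.
interface ThroughputSample { bytes: number; durationMs: number }

function estimateKbps(samples: ThroughputSample[]): number {
  const totalBits = samples.reduce((sum, s) => sum + s.bytes * 8, 0);
  const totalSeconds = samples.reduce((sum, s) => sum + s.durationMs / 1000, 0);
  return totalSeconds > 0 ? totalBits / totalSeconds / 1000 : 0;
}

// ladderKbps is assumed to be sorted in ascending order.
function pickRungKbps(ladderKbps: number[], samples: ThroughputSample[], bufferSeconds: number): number {
  const estimate = estimateKbps(samples);
  // Player metadata: be more conservative when the buffer is nearly empty.
  const safetyMargin = bufferSeconds < 2 ? 0.6 : 0.8;
  const usable = estimate * safetyMargin;
  const candidates = ladderKbps.filter(b => b <= usable);
  return candidates.length > 0 ? candidates[candidates.length - 1] : ladderKbps[0];
}
```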

The last part of the video is a Q&A which covers:

  • Use of HTTP2 and QUIC/HTTP3
  • Dynamic Ad Insertion for low latency
  • The importance of playlist blocking
  • Player synchronisation with playback rate adjustment
  • Player analytics
  • DRM insertion problems at low latency

Watch now!
Speakers

Will Law
Chief Architect, Edge Technology Group,
Akamai
Mickaël Raulet
CTO,
ATEME
Pieter-Jan Speelmans
CTO & Founder,
THEOplayer
Video: Distribution in a Covid-19 World

A look at the impacts of Covid-19 from the perspective of Disney+ and ESPN+. In this talk, Eric Klein from Disney Streaming Services gives his view on the changes and learnings he saw as Covid hit and as it continues. He first comments on the increase in 'initial streams' as the lockdowns hit, with Austria topping the list with a 44% increase in time spent streaming within just a 48-hour period; in the US, Comcast reported an uptick of 38% in general streaming and web video consumption. Overall, fixed broadband networks tended to cope better with the peaks than mobile broadband; mobile internet, which is quite common in Italy, was observed to be struggling.

Distribution in a Covid-19 World from Streaming Video Alliance on Vimeo.

Content providers played their part to help with the congestion, adjusting to the situation by altering video profiles and changing starting bitrates as part of an industry-wide response. It's this element of everybody playing their part which seems to be the secret sauce behind Eric's statement that "the internet is more resilient than everybody thought". Eric goes on to point out that such networks are designed to deal with these situations, as the first question is always "what's your peak traffic going to be?". Whilst someone's estimates may be off, the point is that networks are provisioned for peaks, so when many peak forecasts come to pass, the average is usually within the network's capabilities. The exceptions come on last-mile links, which are much harder to change than the provisioning of uplink ports and router backplane bandwidth within datacentres.

Eric points out the benefits of open caching, a specification in development within the Streaming Video Alliance. Open caching allows for an interoperable way of delivering files into ISPs' networks, modelled around popular content, so that services can cache data much closer to customers. By doing this, Eric points to data which has shown up to a 15% increase in bandwidth as well as a 30% decrease in 'customer-impacting events'.

This session ends with a short Q&A.

Watch now!
Speakers

Eric Klein
Co-Chair, Open Caching Workgroup, Streaming Video Alliance,
Director, Content Distribution, Disney Streaming Services
Moderator: Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Examining the OTT Technology Stack

This video looks at the whole streaming stack, asking what's in use now, what trends are coming to the fore and how things will be done better in the future. Whatever part of the stack you're optimising, it's vital to have a way to measure the viewer's QoE (Quality of Experience). In most workflows, a lot of work goes into implementing redundancy so that the viewer sees no impact despite problems happening upstream.

The Streaming Video Alliance's Jason Thibeault digs deeper with Harmonic's Thierry Fautier, Brenton Ough from Touchstream, SSIMWAVE's Hojatollah Yeganeh and Damien Lucas from Ateme.

Talking about codecs, Thierry makes the point that only 7% of devices can currently support AV1; with 10 billion devices in the world supporting AVC, he sees a lot of benefit in continuing to optimise AVC rather than waiting for VVC support to become commonplace. When asked to identify trends in the marketplace, the move to the cloud was identified as a big influence, driving not only the ability to scale but also the functions themselves. Gone are the days, Brenton says, when vendors simply 'lift and shift' into the cloud. Rather, products are becoming cloud-native, a vital step towards functions and products which take full advantage of the cloud, such as being able to swap the order of functions in a workflow. Just-in-time packaging is cited as one example.

Examining the OTT Technology Stack from Streaming Video Alliance on Vimeo.

Other changes are that server-side ad insertion (SSAI) is a lot better in the cloud, and sub-partitioning of viewers, where you deliver different ads to different people, is more practical. Real-time access to CDN data, giving you near-immediate feedback into your streaming process, is also a game-changer that is increasingly available.

Open Caching is discussed by the panel as a vital step forward and one of many areas where standardisation is desperately needed. ISPs are fed up, we hear, with each service bringing its own caching box; it's time that ISPs took a cloud-based approach to their infrastructure and enabled multi-use servers, potentially containerised, to ease this 'bring your own box' mentality and to take back control of their internal infrastructure.

HDR gets a brief mention in light of the Euro soccer championships currently on air and the Japan Olympics soon to come. Thierry says 38% of Euro viewership is over OTT and HDR is increasingly common, though SDR is still in the majority. HDR is more complex than just upping the resolution and requires much more care over the screen on which it's watched. This makes adopting HDR more difficult, which may be one reason that adoption is not yet higher.

The discussion ends with a Q&A after talking about uses for 'edge' processing, which the panel agrees is a really important part of cloud delivery. Processing API requests at the edge, doing SSAI and applying content blackouts are examples of where the lower-latency response of edge compute works really well in the workflow.

Watch now!
Speakers

Thierry Fautier
VP, Video Strategy,
Harmonic Inc.
Damien Lucas
CTO,
Ateme
Hojatollah Yeganeh
Research Team Lead,
SSIMWAVE
Brenton Ough
CEO & Co-Founder,
Touchstream
Moderator: Jason Thibeault
Executive Director,
Streaming Video Alliance