Video: Outlook on the future codec landscape

VVC, MPEG's successor to HEVC, has now been released. But what is it? And whilst it brings 50% bitrate savings over HEVC, how does it compare to other codecs like AV1 and the other new MPEG standards? This primer answers these questions and more.

Christian Feldmann from Bitmovin starts by looking at four of the current codecs: AVC, HEVC, VP9 and AV1. VP9 isn't often heard about in traditional broadcast circles, but it's relatively well used online as it's supported on Android phones and brings bitrate savings over AVC. Google uses VP9 on YouTube for compatible players and sees a higher retention rate; Netflix and Twitch also use it. AV1 is also in use by the tech giants, though its use outside of the companies that built it (Netflix, Facebook etc.) is not yet apparent. Christian looks at the compatibility of the codecs, hardware decoding, efficiency and cost.

Looking now at the other upcoming MPEG codecs, Christian examines MPEG-5 Essential Video Coding (EVC), which has two profiles: Baseline and Main. The Baseline profile only uses technologies which are old enough to be outside of patent claims. This allows you to use the codec without the concern that a patent holder may come out of the woodwork and ask for a fee. The Main profile, however, does use patented technology and performs better. Businesses which wish to use this codec can pay the licences, but if an unexpected patent holder appears, each individual tool in the codec can be disabled, allowing you to continue using the codec, albeit without that technology. Whilst it is a shame that patents are so difficult to account for, this shows MPEG has taken seriously the situation with HEVC, which famously has hundreds of licensable patents with over a third of eligible companies not part of a patent pool. EVC performs 32% better than AVC using the Baseline profile and 25% better than HEVC with the Main profile.
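To make that per-tool kill switch idea concrete, here is a minimal sketch of how it could look in an encoder configuration. The tool names and functions are hypothetical illustrations invented for this example, not actual EVC tool identifiers or any real encoder's API.

```python
# Hypothetical illustration of EVC-style per-tool switches. The tool
# names below are made up for the example; they are not EVC identifiers.
main_profile_tools = {
    "advanced_motion_prediction": True,
    "improved_intra_prediction": True,
    "extended_transforms": True,
}

def disable_tool(tools: dict, contested: str) -> dict:
    """If a patent dispute emerges around one tool, switch it off and keep
    encoding with the remainder, trading a little efficiency for safety."""
    return {**tools, contested: False}
```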

Next under the magnifying glass is Low Complexity Enhancement Video Coding (LCEVC). We've already heard about this on The Broadcast Knowledge from Guido, CEO of V-Nova, who gave a deeper look at Demuxed 2019 and more recently at Streaming Media West. Whilst those are detailed talks, this is a great overview of the technology, which is actually a hybrid approach to encoding. It allows you to take any existing codec such as AVC, AV1 etc. and put LCEVC on top of it. Using both together allows you to run your base encoder at a lower resolution (say HD instead of UHD) and then deliver to the decoder this low-resolution encode plus a small stream of enhancement information which the decoder uses to bring the video back up to size and add back in the missing detail, as sketched below. The big win here, as the name indicates, is that this method is very flexible and can take advantage of all sorts of available computing power, both in embedded technology and in servers. In set-top boxes, parts of the SoC which aren't otherwise used can be put to work. In phones, both the onboard HEVC decoding chip and the CPU can be used. It's also useful for automated workflows, as the base codec stream is smaller and hence easier to decode, plus the enhancement information concentrates on the edges of objects so it can be used on its own by AI/machine learning algorithms to more readily analyse video footage. Encoding time drops by over a third for AVC and EVC.
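As a rough illustration of that hybrid idea, the sketch below derives a toy "enhancement layer" as the detail lost by downscaling. It assumes simple block-average scaling and skips the transforms and entropy coding that real LCEVC applies to the residual.

```python
import numpy as np

def split_lcevc_style(frame: np.ndarray, scale: int = 2):
    """Toy split of a frame into a low-resolution base plus an enhancement
    layer. Real LCEVC transforms and entropy-codes the residual; this only
    shows where the two streams come from. Assumes dimensions divide by
    `scale`."""
    h, w = frame.shape
    # Downscale by block-averaging (a stand-in for a real scaler)
    base = frame.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    # Upscale the base back (nearest-neighbour for simplicity)
    upscaled = np.repeat(np.repeat(base, scale, axis=0), scale, axis=1)
    # The enhancement layer is the detail the base layer cannot carry
    enhancement = frame - upscaled
    return base, enhancement  # base -> AVC/HEVC/etc.; enhancement -> LCEVC
```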

Now, Christian looks at the codec-du-jour, Versatile Video Coding (VVC), explaining that its enhancements over HEVC come not just from bitrate improvements but also from techniques which better encode screen content (e.g. computer games), allow for better 360-degree video and reduce delay. Subjective results show up to 50% improvement. For more detail on VVC, check out this talk from Microsoft's Gary Sullivan.

The talk finishes with answers to audience questions: which codec will be the winner, what future device & hardware support will look like, and which is best for real-time streaming.

Watch now!
Speakers

Christian Feldmann
Team lead, Encoding,
Bitmovin

Video: Futuristic Codecs and a Healthy Obsession with Video Startup Time

The next 12 months are going to see three new MPEG standards released. What does this mean for the industry? How useful will they be, and when can we start using them? MPEG is coming to the market with a range of commercial models to show it's learning from the mistakes of the past, so it will be interesting to see adoption levels in the year after their release. This talk is part of the second session of the Vienna Video Tech Meetup, which also delves into startup time for streaming services.

In the first talk, Dr. Christian Feldmann explains the current codec landscape, highlighting the ubiquitous AVC (H.264), UHD's friend HEVC (H.265), and the newer VP9 & AV1. The latter two differentiate themselves by being free to use and open, particularly AV1. Whilst adoption has been slow, both are seeing increasing use in streaming, but no one's suggesting that AVC isn't still the go-to codec for most online streaming.

Christian then introduces the three new codecs, EVC (Essential Video Coding), LCEVC (Low-Complexity Enhancement Video Coding) and VVC (Versatile Video Coding), all of which have different aims. We start by looking at EVC, whose aim is to replicate the encoding efficiency of HEVC, but importantly to produce a royalty-free Baseline profile as well as a Main profile which improves efficiency further but with royalties. This will be the first time you've been able to use an MPEG codec in this way to eliminate your liability for royalty payments. There is further protection in that if any of the tools is found to have patent problems, it can be individually turned off, the idea being that companies can have more confidence in deploying the new technology.

The next codec in the spotlight is LCEVC, which uses an enhancement technique to encode video. The aim of this codec is to enable lower-end hardware to access high resolutions and/or lower bitrates. This can be useful in set-top boxes and for online streaming, but also for non-broadcast applications like small embedded recorders. It can achieve a slight improvement in compression over HEVC, and it's well known that HEVC is very computationally heavy.

LCEVC reduces computational needs by encoding only a lower-resolution version (say, SD) of the video in a codec of your choice, whether that be AVC, HEVC or another. The decoder then decodes this and upscales the video back to the original resolution, HD in this example. This would normally look soft, but LCEVC also sends enhancement data to add back in the edges and detail that would otherwise have been lost. This can be done on the CPU whilst the base decoding is handled by the dedicated AVC/HEVC hardware, and naturally encoding/decoding a quarter-resolution image is much easier than the full resolution.
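To make that reconstruction step concrete, here is a minimal decoder-side sketch: upscale the base and add the enhancement back. This is a toy illustration of the principle (nearest-neighbour upscaling, uncompressed residual), not the actual LCEVC reconstruction filters.

```python
import numpy as np

def reconstruct_lcevc_style(base: np.ndarray, enhancement: np.ndarray,
                            scale: int = 2) -> np.ndarray:
    """Toy reconstruction: upscale the low-resolution base (which the
    dedicated AVC/HEVC hardware decoded) and add the enhancement residual
    back - a step cheap enough to run on the CPU."""
    upscaled = np.repeat(np.repeat(base, scale, axis=0), scale, axis=1)
    return upscaled + enhancement
```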

Lastly, VVC goes under the spotlight. This is the direct successor to HEVC and is also known as H.266. VVC naturally has the aim of improving compression over HEVC by the traditional 50% target, but it also has important optimisations for more types of content, such as 360-degree video and screen content like video games.

To finish this first Vienna Video Tech Meetup, Christoph Prager lays out the reasons he thinks everyone involved in online streaming should obsess about video startup time, which he defines as the time between pressing play and seeing the first frame of video. The assumption is that the longer the wait, the more users won't bother watching. To understand what video streaming should be like, he examines the example of Spotify, who have always had the goal of bringing audio start time down to 200ms. Christoph points to this podcast for more details on what Spotify has done to optimise this metric, which includes activating GUI elements before, strictly speaking, they can do anything because the audio still hasn't loaded. This gives an impression of immediacy, and perception is half the battle.

“for every additional second of startup delay, an additional 5.8% of your viewership leaves”

Christoph also draws on Akamai's 2012 white paper which, among other things, investigated how startup time puts viewers off. He also cites research from Snap, who found that within 2 seconds the entire audience for a video would have gone. Snap, of course, specialises in very short videos, but taken with the right caveats, this could indicate that if the research were repeated today, Akamai's numbers for 2020 might be higher still. Christoph finishes up by looking at the individual components which add latency to the user experience: player startup time, DRM load time, ad load time and ad tag load time.
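Taking the quoted 5.8% figure at face value, a quick back-of-the-envelope shows how fast startup delay eats an audience. This assumes a simple linear abandonment model, which is only a rough reading of the statistic.

```python
def remaining_viewers(audience: int, delay_s: float,
                      loss_per_second: float = 0.058) -> int:
    """Linear abandonment model built on the quoted figure: every extra
    second of startup delay loses ~5.8% of the audience."""
    lost = min(1.0, loss_per_second * delay_s)
    return round(audience * (1.0 - lost))

# A 4-second startup delay on a 10,000-viewer stream:
# remaining_viewers(10_000, 4) -> 7680, i.e. nearly a quarter gone.
```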

Watch now!
Speakers

Dr. Christian Feldmann
Team Lead Encoding,
Bitmovin
Christoph Prager
Product Manager, Analytics
Bitmovin
Markus Hafellner
Product Manager, Encoding
Bitmovin

Video: Reducing peak bandwidth for OTT

‘Flattening the curve’ isn’t just about dealing with viruses, we learn from Will Law. Rather, it’s one way to deal with network congestion brought on by the rise in broadband use during the global lockdown. This, along with other key techniques such as per-title encoding and removing the top ABR tier, is explored in this video from Akamai and Bitmovin.

Will Law starts the talk by explaining why congestion happens in a world where ABR (adaptive bitrate streaming) is supposed to deal with it. With Akamai’s traffic up by around 300%, it’s perhaps no surprise there’s a contest for bandwidth. As not all traffic is a video stream, congestion will still happen when fighting with other, static, data transfers. Deeper than that, even with two ABR streams, the congestion control protocol in use has a big impact, as Will shows with a graph comparing Akamai’s FastTCP and BBR, where BBR steals all the bandwidth rather than ‘playing fair’.

Using a webpage constructed for the video, Will shows us a baseline video playback and the metrics associated with it, such as data transferred and bitrate, which he uses to demonstrate the benefits of the different bitrate reduction techniques. The first is covered by Bitmovin’s Sean McCarthy, who explains Bitmovin’s per-title encoding technology. This approach ensures that each asset has encoder settings tuned to get the best out of the content whilst reducing bandwidth, as opposed to simply setting your encoder to a fairly high, safe, static bitrate for all content no matter how complex it is. Will shows on the demo that the bitrate reduces by over 50%.
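As a hedged sketch of the general per-title idea (not Bitmovin's actual algorithm), you can think of it as scaling a reference ladder by a measured complexity score. The ladder, score range and scaling factors below are illustrative assumptions.

```python
def per_title_ladder(complexity: float) -> list:
    """Scale a reference bitrate ladder by a content-complexity score
    (0 = near-static content, 1 = high-motion sport). Real systems derive
    the score from trial encodes and quality metrics such as VMAF."""
    reference = [
        {"height": 1080, "kbps": 6000},
        {"height": 720,  "kbps": 3500},
        {"height": 480,  "kbps": 1500},
    ]
    # Interpolate between 50% and 110% of the reference bitrates
    factor = 0.5 + 0.6 * max(0.0, min(1.0, complexity))
    return [{**rung, "kbps": round(rung["kbps"] * factor)} for rung in reference]

# per_title_ladder(0.2) tops out around 3.7 Mbps for simple content,
# against 6 Mbps for a fixed 'safe' ladder.
```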

Swapping codecs is an obvious way to reduce bandwidth. Unlike per-title encoding which is transparent to the end-user, using AV1, VP9 or HEVC requires support by the final device. Whilst you could offer multiple versions of your assets to make sure you still cover all your players despite fragmentation, this has the downside of extra encoding costs and time.

Will then looks at three ways to reduce bandwidth by stopping the highest-bitrate rendition from being used. Method one is to manually modify the manifest file. Method two demonstrates how to do so using the Bitmovin player API, and method three uses the CDN itself to manipulate the manifests. The advantage of doing this in the CDN is that it allows much more flexibility, as you can use geolocation rules, for example, to deliver different manifests to different locations.
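As an illustration of method one, here is a minimal sketch that strips the top rendition from an HLS multivariant playlist. Real manifests have more edge cases (I-frame playlists, audio groups) than this handles.

```python
import re

def drop_top_rendition(master_playlist: str) -> str:
    """Strip the highest-bandwidth variant from an HLS multivariant
    playlist: remove its #EXT-X-STREAM-INF line and the URI line after it."""
    lines = master_playlist.splitlines()
    variants = []  # (bandwidth, line index of the #EXT-X-STREAM-INF tag)
    for i, line in enumerate(lines):
        m = re.search(r"#EXT-X-STREAM-INF.*BANDWIDTH=(\d+)", line)
        if m:
            variants.append((int(m.group(1)), i))
    if len(variants) < 2:
        return master_playlist  # nothing it is safe to remove
    _, top = max(variants)  # tuple comparison picks the highest bandwidth
    del lines[top:top + 2]
    return "\n".join(lines) + "\n"
```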

The final method to reduce peak bandwidth is to use the CDN to throttle the download speed of the stream chunks. This means that while you may – if you are lucky – have the ability to download at 100Mbps, the CDN only delivers 3 or 5 times the real-time bitrate. This goes a long way to smoothing out the peaks, which is better for the end user’s equipment and for the CDN. Seen in isolation, this does very little, as the video bitrate and the data transferred remain the same. However, delivering the video in this much more co-operative way is much less likely to cause knock-on problems for other traffic. It can, of course, be used in conjunction with the other techniques. The video concludes with a Q&A.
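The arithmetic behind the throttle is straightforward; the sketch below assumes the CDN exposes a per-response rate cap, which is configuration-specific in practice.

```python
def throttle_bps(video_bitrate_bps: int, multiple: float = 4.0) -> int:
    """Cap chunk delivery at a small multiple of real time rather than
    letting a fast link burst-fetch segments."""
    return int(video_bitrate_bps * multiple)

# A 5 Mbps rendition capped at 4x real time is served at 20 Mbps, so a
# 6-second segment still arrives in ~1.5 seconds - plenty of headroom.
```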

Watch now!
Speakers

Will Law
Chief Architect,
Akamai
Sean McCarthy
Technical Product Marketing Manager,
Bitmovin

Video: Advanced Video Coding Standards AVC

Whilst the encoding landscape is shifting, AVC (AKA H.264) still dominates many areas of video distribution, so, for many, understanding what’s under the hood opens up a whole realm of diagnostics and fault-finding that wouldn’t otherwise be possible. Whilst many understand that MPEG video is built around I, B and P frames, this short talk offers deeper detail which helps explain how the codec behaves, both when it’s working well and otherwise.

Christian Timmerer, co-founder of Bitmovin, starts his lesson on AVC with a summary of the improvements in AVC over the basic MPEG-2 model people tend to learn as a foundation: variable block-size motion compensation, multiple reference frames and improved adaptive entropy coding. We see that, as we would expect, the input can use 4:2:0 or 4:2:2 chroma subsampling as well as full 4:4:4 representation, with 16×16 macroblocks for luminance (8×8 for chroma in 4:2:0). AVC can handle pictures split into several slices, which are self-contained sequences of macroblocks, and slices themselves can then be grouped.
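To see why a 16×16 luma macroblock pairs with 8×8 chroma blocks in 4:2:0, here is a small sketch of the subsampling; the block-average filter is a simplification of real chroma siting.

```python
import numpy as np

def to_420(y: np.ndarray, cb: np.ndarray, cr: np.ndarray):
    """Keep luma at full resolution; average each chroma plane down 2x in
    both dimensions. A 16x16 luma macroblock therefore carries only 8x8
    samples per chroma plane."""
    def half(plane: np.ndarray) -> np.ndarray:
        h, w = plane.shape
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, half(cb), half(cr)
```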

Intra-prediction is the next topic, whereby an algorithm uses the information within the slice to predict a macroblock. This prediction is then subtracted from the actual block and the difference coded, thereby reducing the amount of data that needs to be transferred. The decoder can make the same prediction and reconstruct the full block from the data provided.
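A minimal sketch of one intra mode (DC prediction) shows the mechanics; AVC has several directional modes beyond this, and the neighbour handling here is simplified.

```python
import numpy as np

def dc_intra_predict(left_col: np.ndarray, above_row: np.ndarray,
                     size: int = 16) -> np.ndarray:
    """Predict a block as the mean of its already-decoded left and above
    neighbours. The encoder sends only residual = block - prediction; the
    decoder forms the identical prediction and adds the residual back."""
    dc = np.concatenate([left_col, above_row]).mean()
    return np.full((size, size), dc)
```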

The next sections talk about motion prediction and the different sizes of macroblocks. A macroblock is a fixed area of the picture which can be described by a mixture of basic patterns, but the more complex the texture in the block, the more patterns need to be combined to recreate it. By splitting up the 16×16 block, we can often find a simpler way to describe the resulting 8×8 or 8×16 shapes than if they had to be encompassed in a whole 16×16 block.
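A rough sketch of that partitioning trade-off: compare the prediction error of coding one 16×16 block against four independent 8×8 blocks. The `predict` argument is a placeholder for whatever intra/inter prediction the encoder tried; real encoders also count the signalling bits each split costs.

```python
import numpy as np

def better_split(block16: np.ndarray, predict) -> str:
    """Compare the prediction error (sum of absolute differences) of coding
    one 16x16 block against its four 8x8 sub-blocks independently."""
    sad16 = np.abs(block16 - predict(block16)).sum()
    quads = (block16[:8, :8], block16[:8, 8:], block16[8:, :8], block16[8:, 8:])
    sad8 = sum(np.abs(q - predict(q)).sum() for q in quads)
    return "16x16" if sad16 <= sad8 else "four 8x8"
```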


B-frames are fairly well understood by many, but even if they are unfamiliar to you, Christian explains the concept whereby B-frames provide solely motion information for macroblocks, drawing on frames both before and after. This allows macroblocks which barely change to be ‘moved around the screen’, so to speak, with minimal changes other than location. Whilst P and I frames provide new macroblocks, B-frames are intended just to provide this directional information. Christian explains some of the nuances of B-frame encoding, including weighted prediction.
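The core of bidirectional and weighted prediction can be sketched in a line: blend motion-compensated blocks from a past and a future reference. The inputs here are assumed to be already motion-compensated, and real AVC weights are signalled integers rather than floats.

```python
import numpy as np

def bi_predict(ref_before: np.ndarray, ref_after: np.ndarray,
               w0: float = 0.5, w1: float = 0.5) -> np.ndarray:
    """Blend motion-compensated blocks from a past and a future reference.
    Equal weights give plain bi-prediction; unequal weights (AVC's weighted
    prediction) help with fades, where one reference is darker or brighter."""
    return w0 * ref_before + w1 * ref_after
```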

Quantisation is one of the most important parts of the MPEG process, since quantisation is where information is removed and the codec becomes lossy. The way this happens, and the optimisations possible, are therefore key, so Christian covers the process before explaining the deblocking filter. After splitting the picture up into so many independently processed macroblocks, edges between the blocks can become apparent, so this filter helps smooth any artefacts to make them more pleasing to the eye. Christian finishes the talk on AVC by exploring entropy encoding and considering how AVC encoding can and can’t be improved by adding more memory and computation to the encoder.
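The lossy step itself is simple to sketch: divide transform coefficients by a step size and round. This omits AVC's actual scaling matrices and integer arithmetic, but it shows where the information is lost.

```python
import numpy as np

def quantise(coeffs: np.ndarray, qstep: float) -> np.ndarray:
    """Divide transform coefficients by the step size and round: the one
    irreversible, lossy step. A larger qstep discards more detail."""
    return np.round(coeffs / qstep)

def dequantise(levels: np.ndarray, qstep: float) -> np.ndarray:
    """Decoder-side rescale; only an approximation of the originals returns."""
    return levels * qstep
```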

Watch now!
Speaker

Christian Timmerer
CIO & Cofounder, Bitmovin
Associate Professor, Universität Klagenfurt