Video: Benjamin Bross and Adam Wieckowski on Fraunhofer HHI, VVC, and Compression

VVC was finalised in mid-2020 after five years of work. AVC is still going strong and is on its 26th version, so it's clear there's still plenty of work ahead for those involved in VVC. Heavily involved in AVC, HEVC and now VVC is the Fraunhofer Heinrich Hertz Institute (HHI), which holds patents in all three and, for the first time with VVC, is developing a free, open-source encoder and decoder for the standard.

In this video from OTTVerse.com, Editor Krishna Rao speaks to Benjamin Bross and Adam Więckowski, both from Fraunhofer HHI. Benjamin has previously been featured on The Broadcast Knowledge talking about VVC at Mile High Video, a talk given before the codec's release which is a great introduction if you're not familiar with it.

They start by discussing how the institute is supported by the German government, income from its patents and similar work, and the companies it carries out research for. One benefit of government involvement is that all the papers they produce are free to access. Their funding model also allows them to research problems very deeply, which has a number of benefits. Benjamin points to their research into CABAC, a very efficient but complex entropy coding technique. At the time they supported introducing it into AVC, which remember is 19 years old, it was very hard to find equipment that could use it, and certainly no computers could. Fast forward to today and phones, computers and pretty much all encoders are able to take advantage of this technique to keep bitrates down, so that ability to look ahead is paying off now. Secondly, giving an example from VVC, Benjamin explains they looked at using machine learning to help optimise one of the tools. This proved too difficult to implement directly, but the learned model could be replaced by a matrix multiplication, and it was implemented this way. This matrix multiplication, he emphasises, couldn't have been developed without first going into the depths of the complex machine learning.
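CABAC's efficiency comes from adapting its probability model symbol by symbol, so skewed data costs well under 1 bit per symbol. The sketch below is purely illustrative: the function name, the simple exponential update rule and the `rate` parameter are my own assumptions, not the real CABAC state machine, but they show why adaptive modelling beats a fixed 1-bit-per-symbol code.

```python
import math

def cabac_style_cost(bits, p_one=0.5, rate=0.05):
    """Estimate the cost, in bits, of coding a binary sequence with an
    adaptive probability model, in the spirit of CABAC-style entropy
    coding (illustrative sketch, not the actual CABAC engine).

    bits  : sequence of 0/1 symbols
    p_one : model's current estimate of P(symbol == 1)
    rate  : how quickly the model adapts toward observed symbols
    """
    cost = 0.0
    for b in bits:
        # ideal arithmetic-coding cost of this symbol under the model
        p = p_one if b == 1 else 1.0 - p_one
        cost += -math.log2(p)
        # adapt the model toward the symbol just seen
        p_one = p_one + rate * (b - p_one)
    return cost
```

On a heavily skewed sequence the model quickly learns the bias and the total cost falls far below one bit per symbol, while on balanced data it stays near it.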

Krishna suggests there must be a lot of ‘push back’ from chip manufacturers, which Benjamin acknowledges, though he says people are just doing their jobs. It’s vitally important, he continues, for chip manufacturers to keep chip costs down or nothing would actually end up in real products. Whilst he says discussions can get quite heated, the point of the international standardisation process is to get input at the beginning from all the industries so that the outcome is an efficient, implementable standard. Only by achieving that does everyone benefit for years to come.

The conversation then moves on to the open source initiative developing VVenC and VVdeC. These are separate from the reference implementation, VTM, although the reference software has been used as the base for development. Adam and Benjamin explain that the idea of creating these free implementations is to provide standard software which any company can take and use in their own product. Reference implementations are not optimised for speed, unlike VVenC and VVdeC. Fraunhofer expects people to take this software and adapt it for, say, 360-degree video, to suit their product. This is similar to x264 and x265, which are open source implementations of AVC and HEVC. Public participation is welcomed and has already been seen within the GitHub project.

Adam talks through a slide showing how newer versions of VVenC have improved speed and compression efficiency, with more versions on their way. They talk about how some VVC features can’t really be seen in normal RD plots, giving the example of open vs closed GOP encoding. Open GOP encoding traditionally couldn’t be used for ABR streaming, but with VVC that’s now a possibility, and whilst it’s early days and few have yet put the new type of keyframes which enable this through their paces, they expect to start seeing good results.

The conversation then moves on to encoding complexity and the potential of video pre-processing to help the encoder. Benjamin points out that whilst encoding time does increase to reach the latest low bitrates, matching the best that HEVC can achieve is actually quicker. Looking to the future, he says that some encoding tools scale linearly and some exponentially. He hopes to use machine learning to understand the video and help narrow down the ‘search space’ for certain tools, as it’s the search space that is growing exponentially. If you can narrow that search significantly, using these techniques becomes practical. Lastly, they say the hope is to get VVenC and VVdeC into FFmpeg, at which point a whole suite of powerful pre- and post-filters becomes available to everyone.

Watch now!
Full transcript of the video
Speakers

Benjamin Bross
Head of Video Coding Systems Group,
Fraunhofer Heinrich Hertz Institute (HHI)
Adam Więckowski
Research Assistant
Fraunhofer HHI
Moderator: Krishna Rao Vijayanagar
Editor,
OTTVerse.com

Video: MPEG-5 Essential Video Coding (EVC) Standard

Learning from the patent missteps of HEVC, MPEG have released MPEG-5 EVC, which brings bitrate savings, faster encoding and clearer licensing terms, including a royalty-free implementation. The hope is that with more control over exposure to patent risk, companies large and small will adopt EVC as they improve and launch streaming services now and in the future.

At Mile High Video 2020, Kiho Choi introduced MPEG-5 Essential Video Coding. Naturally, the motivation to produce a new codec was partly the continued need to reduce video bitrates. With estimates of video’s share of internet traffic, both now and in the future, hovering between 75% and 90%, any reduction in bitrate has a wide benefit, best exemplified by Netflix and Facebook’s decision during the pandemic to reduce the bitrate at the top of their ABR ladders, which reduced the quality available to viewers. The unspoken point of this talk is that if the top rung used EVC, viewers wouldn’t notice a drop in quality.

The most important point about EVC, in contrast to the MPEG/ISO co-defined standard from last year, VVC, is that it gives businesses a lot of control over their exposure to patent royalties. It’s no secret that much HEVC adoption has been hampered by the risk that large users could be approached for licensing fees. Whilst it has made its way into Apple devices, which is no small success, big players like ESPN won’t have anything to do with it. EVC tackles this problem in two ways. One is the baseline profile, which provides bitrate savings over its predecessors but uses a combination of technologies which are either old enough to no longer be eligible for royalty payments or have been validated as free to use. Companies should, therefore, be able to use this profile without any reasonable concern over legal exposure. The other is the main profile, which does use patented technologies but allows each individual tool to be switched off, meaning anyone encoding EVC has control, assuming the vendor makes this possible, over which technologies they are using and hence their exposure to risk. Kiho points out that this business-requirements-first approach is new and in contrast to many codecs.

Kiho highlights a number of the individual tools within both the baseline and main profiles which provide the bitrate savings, before showing us the results of the objective and subjective testing. Within the EVC documents, the testing methodology is spelt out to allow EVC to be compared against its predecessors AVC and HEVC. The baseline profile shows an improvement of 38% for 1080p60 material and 35% for UHD material compared to AVC doing the same tasks, yet it encodes more quickly (less compute needed) and decodes in approximately the same time. The main profile, being more efficient, is compared against HEVC, which is itself around 50% more efficient than AVC. Against HEVC, Kiho says, EVC main profile produces around a 30% coding gain for UHD footage and 25% for 1080p60 footage. Encoding takes close to 5x longer and decoding around 1.5x longer than HEVC.
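Coding gains like these come from comparing rate-distortion curves: at a matched quality level, how much less bitrate does the test codec need than the anchor? As a rough sketch of that idea (using linear interpolation between measured points rather than the Bjøntegaard-delta metric used in formal codec testing, and with hypothetical function names and numbers):

```python
def bitrate_at_quality(points, q):
    """Linearly interpolate the bitrate (kbps) needed to reach quality q
    from a list of (bitrate, quality) rate-distortion points."""
    pts = sorted(points, key=lambda p: p[1])  # order by quality
    for (r0, q0), (r1, q1) in zip(pts, pts[1:]):
        if q0 <= q <= q1:
            t = (q - q0) / (q1 - q0)
            return r0 + t * (r1 - r0)
    raise ValueError("quality outside the measured range")

def coding_gain(anchor_pts, test_pts, q):
    """Percent bitrate saving of the test codec vs the anchor codec
    at the same quality level q."""
    r_anchor = bitrate_at_quality(anchor_pts, q)
    r_test = bitrate_at_quality(test_pts, q)
    return 100.0 * (r_anchor - r_test) / r_anchor
```

For example, if an anchor needs 1500 kbps for a given quality and the test codec only 900 kbps, the coding gain at that quality is 40%.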

Kiho finishes by summarising subjective testing of SDR and HDR videos, which shows that, in contrast to the objective savings calculated by computers, perceived quality in practice is higher and enables a greater bitrate reduction, a phenomenon seen in other codec comparisons such as LCEVC. SDR results show a 50% coding gain for 4K and 30% for 1080p60 against AVC. Against HEVC, the main profile delivers 50% coding gains for 4K content and 40% for 1080p60. For HDR, the main profile provides approximately a 35% coding gain for both 1080p60 and 4K.

Watch now!
Speakers

Kiho Choi
Senior Engineer & Technical Lead for Multimedia Standards at Samsung Electronics
Lead Editor of MPEG5 Part 1 Essential Video Coding

Video: Cloud Encoding – Overview & Best Practices

There are so many ways to work in the cloud. You can use a monolithic solution which does everything for you which is almost guaranteed by its nature to under-deliver on features in one way or another for any non-trivial workflow. Or you could pick best-of-breed functional elements and plumb them together yourself. With the former, you have a fast time to market and in-built simplicity along with some known limitations. With the latter, you may have exactly what you need, to the standard you wanted but there’s a lot of work to implement and test the system.

Tom Kuppinen from Bitmovin joins Christopher Olekas from SSIMWAVE, host of this Kitchener-Waterloo Video Tech talk on cloud encoding. After the initial introduction to ‘middle-aged’ startup Bitmovin, Tom talks about what ‘agility in the cloud’ means, including being cloud-agnostic. This is the as-yet-unmentioned elephant in the room for broadcasters, who are so used to having extreme redundancy. Whether it’s the BBC’s “no closer than 70m” requirement for separation of circuits or the standard deployment methodology for SMPTE ST 2110 systems, which have two totally independent networks, putting everything into one cloud provider really isn’t in the same ballpark. AWS has availability zones, of course, which are one of a number of great ways of reducing the blast radius of problems. But surely there’s no better way of reducing the impact of an AWS problem than having part of your infrastructure in another cloud provider.

Bitmovin have implementations in Azure, Google Cloud and AWS, along with other cloud providers. In this author’s opinion, it’s a sign of the maturity of the market that this is being thought about, but few companies are truly using multiple cloud providers in an agnostic way; this will surely change over the next five years. For reliable and repeatable deployments, API control is your best bet. For detailed monitoring, you will need to use APIs. For connecting together solutions from different vendors, you’ll need APIs. It’s no surprise that Bitmovin say they program ‘API first’; it’s a really important element of any medium-to-large deployment.

When it comes to the encoding itself, per-title encoding helps reduce bitrates and storage. Tom explains how it analyses each video and chooses the best combination of parameters for the title. In the Q&A, Tom confirms they are working on per-scene encoding, which promises still more savings.

To add to the complexity of a best-of-breed encoding solution, using best-of-breed codecs is part and parcel of the value. Bitmovin were early with AV1, and they support VP9 and HEVC. They can also distribute the encoding so that it runs in parallel on as many cores as needed; indeed, their initial AV1 offering spread encoding over more than 200 cores.

Tom talks about how the cloud-based codecs can integrate into workflows and reveals that HDR conversion, instance pre-warming, advanced subtitling support and AV1 improvements are on the roadmap, before leading on to the Q&A. Questions include whether it’s difficult to deploy on multiple clouds, which HDR standards are likely to become the favourites, what the pain points are around live streaming and how to handle metadata.

Watch now!
Speakers

Tom Kuppinen
Senior Sales Engineer,
Bitmovin
Moderator: Christopher Olekas
Senior Software Engineer,
SSIMWAVE Inc.

Video: Scaling Video with AV1!

A nuanced look at AV1. If we’ve learnt one thing about codecs over the last year or more, it’s that in the modern world pure bitrate efficiency isn’t the only game in town. JPEG 2000 and, now, JPEG XS have always been excused their high bitrates compared to MPEG codecs because they deliver low latency and high fidelity. Now it’s clear that we also need to consider the computational demand of a codec when evaluating which to use in any one situation.

John Porterfield welcomes Facebook’s David Ronca to understand how AV1’s arriving on the market. David’s the director of Facebook’s video processing team, so is in pole position to understand how useful AV1 is in delivering video to viewers and how well it achieves its goals. The conversation looks at how to encode, the unexpected ways in which AV1 performs better than other codecs and the state of the hardware and software decoder ecosystem.

David starts by looking at the convex hull, explaining that it’s a way of encoding content multiple times at different resolutions and bitrates and graphing the results. This graph allows you to find the best combination of bitrate and resolution for a target quality. This works well, but the multiple encodes burden the decision with a lot of extra computation to get the best set of encoding parameters. As proof of its effectiveness, David cites a time when a 200kbps maximum target was given for an encode of video plus audio. The convex hull method gave a good experience for small screens despite the compromises made in encoding fidelity. The important part is being flexible on which resolution you choose to encode, because by allowing the resolution to drift up or down as well as the bitrate, higher-fidelity combinations can be found than by keeping the resolution fixed. This is called per-title encoding and, as discussed in the linked talk, was pioneered by Netflix, where David previously worked and authored a blog post on the topic.
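The convex hull selection itself is a small geometric computation: plot every trial encode as a (bitrate, quality) point and keep only those on the upper hull, i.e. the encodes no other combination beats on quality per bit. A minimal sketch, assuming made-up bitrate/quality numbers and a generic quality score (real pipelines typically use VMAF):

```python
def convex_hull_ladder(encodes):
    """Select the rate-quality convex hull from a set of trial encodes.

    encodes: list of (bitrate_kbps, quality, resolution) tuples, e.g.
    the same title encoded at several resolutions and bitrates.
    Returns the encodes forming the upper convex hull of the
    (bitrate, quality) points: the best quality-per-bit choices.
    """
    # keep only the best quality seen at each bitrate
    best = {}
    for rate, quality, res in encodes:
        if rate not in best or quality > best[rate][0]:
            best[rate] = (quality, res)
    pts = [(r, q, res) for r, (q, res) in sorted(best.items())]

    def cross(o, a, b):
        # z-component of (a - o) x (b - o): sign gives the turn direction
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []
    for p in pts:  # monotone-chain scan for the upper hull
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()  # previous point lies under the hull: discard it
        hull.append(p)
    return hull
```

Encodes whose points fall below the hull, such as a 540p rendition barely better than the 360p one at double the bitrate, are discarded, which is exactly the "let resolution drift" effect described above.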

It’s an accepted fact that encoder complexity increases with every generation. This holds particularly in the standard MPEG line, where MPEG-2 gave way to AVC, which gave way to HEVC, which is now being superseded by VVC, each achieving an approximately 50% compression improvement at the cost of a ten-fold computation increase. But David contends that this buries the lede. Whilst it’s true that the best (read: slowest) setting compresses 50% better for around ten times the computation, it’s often missed that at the other end of the curve, one of the fastest settings of the newer codec can now match the best of the old codec with a 90% reduction in computation. For companies encoding in software, this is big news. David demonstrates this by graphing the SVT-AV1 encoder against the x265 HEVC encoder, and that against x264.

David touches on an important point, that there is so much video encoding going on in the tech giants and distributed around the world, that it’s important for us to keep reducing the complexity year on year. As it is now, with the complexity increasing with each generation of encoder, something has to give in the future otherwise complexity will go off the scale. The Alliance for Open Media’s AV1 has something to say on the topic as it’s improved on HEVC with only a 5% increase in complexity. Other codecs such as MPEG’s LCEVC also deliver improved bitrate but at lower complexity. There is a clear environmental impact from video encoding and David is focused on reducing this.

AOM is also fighting the commercial problem that codecs have. Companies don’t mind paying for codecs, but they do mind uncertainty. After all, what’s the point in paying for a codec if you still might be approached for more money? Whilst MPEG’s implementation of VVC and EVC aims to give companies more control over their risk, AOM’s royalty-free codec, with a defence fund against legal attacks, arguably gives the most predictable risk of all. AOM’s aim, David explains, is to allow the web to expand without having to worry about royalty fees.

Next is some disappointing news for AV1 fans. Hardware decoder deployments have been delayed until 2023/24, which probably means no meaningful mobile penetration until 2026/27. In the meantime, the very good dav1d decoder and also gav1 are expected to fill the gap. Already quite fast, they aim to deliver 720p60 decoding on average Android devices by 2024.

Watch now!
Speakers

David Ronca
Director, Video Encoding,
Facebook
John Porterfield
Freelance Video Webcast Producer and Tech Evangelist
JP’sChalkTalks YouTube Channel