Video: Bit-Rate Evaluation of Compressed HDR using SL-HDR1

HDR video can look vastly better than standard dynamic range (SDR), but much of our broadcast infrastructure is built for SDR delivery. SL-HDR1 allows you to deliver HDR over SDR transmission chains by breaking the HDR signal down into an SDR video plus enhancement metadata which describes how to reconstruct the original HDR signal. Now that SL-HDR1 is part of the ATSC 3.0 suite of standards, people are asking whether you get better compression using SL-HDR1 or compressing the HDR directly.

HDR works by changing the interpretation of the video samples. As human sight has a non-linear response to luminance, we can take the same 256 or 1024 possible code values and map them to brightness so that only a few values are spent where the eye isn’t very sensitive, while plenty of detail is kept where we see well. Humans perceive more detail at lower luminosity, so HDR devotes many more of its code values to describing that area and relatively few to high brightness, where specular highlights tend to be. HDR, therefore, not only increases the dynamic range but actually provides more detail in the darker areas than SDR.
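To illustrate that non-linear mapping, here is a minimal sketch of the SMPTE ST 2084 (PQ) electro-optical transfer function, one of the transfer functions commonly used for HDR. The article doesn’t say which transfer function Ciro’s content used, so treat this purely as an example of how code values are spread unevenly across luminance.

```python
import numpy as np

# SMPTE ST 2084 (PQ) EOTF constants
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(code_value, bit_depth=10):
    """Map a full-range PQ code value to absolute luminance in cd/m^2."""
    e = code_value / (2 ** bit_depth - 1)   # normalise to 0..1
    e_pow = np.power(e, 1 / M2)
    num = np.maximum(e_pow - C1, 0.0)
    den = C2 - C3 * e_pow
    return 10000.0 * np.power(num / den, 1 / M1)

# Half of the 10-bit code values sit below roughly 100 cd/m^2,
# leaving the other half for the highlights up to 10,000 cd/m^2.
print(pq_eotf(512))    # ~92 cd/m^2 at the mid code value
print(pq_eotf(1023))   # 10000 cd/m^2 at peak
```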

Ciro Noronha from Cobalt has been examining this question of encoding. Video encoders are agnostic to dynamic range: since HDR and SDR only define the meaning of the luminance values, the video encoder sees no difference. Yet a number of papers have claimed that sending SL-HDR1 can result in bitrate savings over sending HDR directly. SL-HDR1 is defined in ETSI TS 103 433-1 and included in ATSC A/341, with the metadata carried using SMPTE ST 2108-1 or within the video stream as SEI messages. Ciro set out to run some tests to see whether this was the case, with technology consultant Matt Goldman giving his perspective on HDR and the findings.

Ciro tested three types of 1080p BT.2020 10-bit content with the AVC and HEVC encoders set to 4:2:0, 10-bit and a 100-frame GOP. Quality was rated using PSNR as well as two variants of PSNR which look at distortion/deviation in the CIE colour space. The findings show that AVC encode chains benefit more from SL-HDR1 than HEVC, and it’s clear that the benefit is content-dependent. Work remains to be done to connect these results with verified subjective tests: with LCEVC and VVC, MPEG has seen that subjective assessments can show up to 10% better results than objective metrics, and PSNR is known not to correlate well with perceived visual quality.
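For reference, this is a minimal sketch of how plain PSNR is computed between a source frame and its decoded reconstruction; the CIE-based variants used in the talk measure the error in a CIE colour space instead, which isn’t reproduced here.

```python
import numpy as np

def psnr(reference, distorted, max_value=1023):
    """PSNR in dB between two frames of 10-bit samples (use max_value=255 for 8-bit)."""
    ref = reference.astype(np.float64)
    dis = distorted.astype(np.float64)
    mse = np.mean((ref - dis) ** 2)
    if mse == 0:
        return float("inf")            # identical frames
    return 10 * np.log10((max_value ** 2) / mse)

# Example with a random 10-bit luma plane and a lightly perturbed copy
ref = np.random.randint(0, 1024, (1080, 1920))
dis = np.clip(ref + np.random.randint(-4, 5, ref.shape), 0, 1023)
print(f"{psnr(ref, dis):.2f} dB")
```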

Watch now!
Speakers

Ciro Noronha
Executive Vice President of Engineering, Cobalt Digital
President, RIST Forum
Matthew Goldman
Technology Consultant

Video: 2020 Video Redefined and the Pandemic

Life certainly changed during 2020, and 2021, so far, is only cementing those changes. Who are the winners and who are the losers? Jon Giegengack joins John Porterfield to discuss the work Hub Entertainment Research is doing to understand the changing market.

I think we’re all aware that the pandemic is ravaging some market sectors and even some whole economies. At the same time, staying at home is allowing some families who are still in work to save money. Jon explains that their polling shows around a quarter of US consumers have dropped a service, whereas around a third have added one, which mirrors the mixed stories we hear of lost jobs juxtaposed against ‘super savers’ who are investing their new-found wealth.

Jon’s view is that one key change that will last long beyond the pandemic is the adoption of streaming platforms. Premium video on demand is what people are interested in, and it is only buoyed by people’s investment during the pandemic in TVs, laptops, mobile devices and the like. Furthermore, the pandemic has forced the hand of companies to move forward with their home distribution plans. Warner Brothers, for example, will be releasing their new films both at the cinema and on HBO Max at the same time at no extra cost to subscribers. Whilst they may change their approach in 2022, this will have brought forward their plans and may also encourage others to do similar. It’s also another motivation for people to invest in their own home-viewing environment which will, in turn, encourage them to double down on their interest in viewing theatrical releases at home.

People do care about quality. They are forgiving when the quality isn’t there, but research shows that the majority of video watched on Netflix is viewed on a TV, which is a big shift from its early days of streaming. Jon’s research shows that second screens tend to be used for YouTube-style videos and that time spent watching there doesn’t reduce hour-for-hour time in front of the TV.

This sounds like it’s great news all round, but the research shows that in the US it’s Netflix which is the main beneficiary of this change, racking up a 49% increase in subscribers with Disney+, Hulu and Prime coming after. For TV providers, the news isn’t so good: vMVPDs such as YouTube TV and Hulu Live saw a 50% decrease, and conventional cable/satellite TV providers saw a 32% drop.

Lastly, John discussed the impact on the content itself, where presenters have had to find ways of delivering TV from home, taking a leaf or two out of YouTubers’ books to make sure they and their surroundings look good. This homely feel has been appreciated in some programmes, leaving viewers with a closer connection to the presenters, which may leave the door open to continuing some parts of programming like this in the future.

Watch now!
Speakers

Jon Giegengack
Principal,
Hub Entertainment Research
John Porterfield
Streaming Technology Evangelist,
JP’sChalkTalks

Video: Comparison of EVC and VVC against HEVC and AV1

AV1’s royalty-free status continues to be very appealing, but in raw compression is it now losing ground to newer codecs such as VVC? EVC has also introduced a royalty-free mode, which could further detract from AV1’s appeal and is certainly an improvement over HEVC’s patent debacle. Within broadcast, we have very much moved into an ecosystem of many codecs, each with its own patent position, rather than the MPEG-2/AVC ‘monoculture’ of the 90s. What better way to get a feel for the codecs than to put them to the test?

Dan Grois from Comcast has been looking at the new codecs VVC and EVC against AV1 and HEVC. VVC and EVC were both released last year and join LCEVC as the three most recent video codecs from MPEG (VVC was a collaboration between MPEG and the ITU). In the same way that HEVC is known as H.265, VVC can be called H.266, and it draws much of its heritage from HEVC too. EVC, on the other hand, is a new beast whose roots are certainly shared with many of MPEG’s previous DCT-based codecs, but uniquely it has a mode that is totally royalty-free. Moreover, its higher-performance mode, which does include patented technology, can be configured to exclude any individual patented tools that you don’t wish to use, adding some confidence that businesses remain in control of their liabilities.

Dan starts by outlining the main features of the four codecs, discussing their partitioning methods and prediction capabilities, which range from inter-picture and intra-picture prediction to predicting chroma from the luma picture. Some of these techniques have been tackled in previous talks, such as this one, also from Mile High Video, this EVC overview and, finally, this excellent deep dive from SMPTE into all of the codecs discussed today plus LCEVC.

Dan explains that his testing was based on the reference encoder models. These are encoders that implement all of the features of a codec but are not necessarily optimised for speed like a real-world implementation would be. Part of the work of delivering real-world implementations is using sophisticated optimisations to get the maths done quickly, and some is choosing which parts of the standard to implement. A reference encoder doesn’t skimp on implementation complexity, and there is seldom much time to optimise for speed. However, they are well known and can be used to benchmark codecs against each other.

AV1 was tested in two configurations since it needs special treatment in this comparison. Dan explains that AV1 doesn’t have the same approach to GOPs as the MPEG codecs, so it’s well known that fixing its QP makes it inefficient; however, that is what’s necessary for a fair comparison. In addition, it was also run in VBR mode, which allows it to use its GOP structure to the full, including AV1’s invisible frames, which carry data that can be referenced by other frames but are never actually displayed.

The videos tested range from 4K 10-bit down to low-resolution 8-bit. As expected, VVC outperforms all the other codecs. Against HEVC, it’s around 40% better, though it carries with it a factor-of-ten increase in encoding complexity. Note that these objective metrics tend to underrepresent the subjective improvement by 5-10%. EVC consistently achieved 25 to 30% improvements over HEVC with only 4.5x the encoder complexity. As expected, AV1’s fixed-QP mode underperformed and increased the data rate on anything which wasn’t UHD material, but when run in VBR mode it managed 20% over HEVC with only a 3x increase in complexity.
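Percentage bitrate savings like these are conventionally reported as Bjøntegaard Delta rate (BD-rate) between two rate-distortion curves. The talk doesn’t spell out its exact calculation, so the following is just a sketch of the standard approach: fit a cubic to log-bitrate against PSNR for each codec and compare the average rates over the overlapping quality range.

```python
import numpy as np

def bd_rate(ref_kbps, ref_psnr, test_kbps, test_psnr):
    """Average % bitrate difference of 'test' vs 'ref' at equal PSNR (negative = saving)."""
    log_ref, log_test = np.log(ref_kbps), np.log(test_kbps)

    # Fit cubic polynomials: log(bitrate) as a function of PSNR
    p_ref = np.polyfit(ref_psnr, log_ref, 3)
    p_test = np.polyfit(test_psnr, log_test, 3)

    # Integrate both fits over the overlapping PSNR range
    lo = max(min(ref_psnr), min(test_psnr))
    hi = min(max(ref_psnr), max(test_psnr))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)

    # Average difference in log rate, converted back to a percentage
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100

# Hypothetical RD points (kbps, PSNR dB) for an HEVC anchor and a newer codec
hevc = ([2000, 4000, 8000, 16000], [34.0, 37.0, 40.0, 43.0])
test = ([1400, 2800, 5600, 11500], [34.1, 37.2, 40.1, 43.0])
print(f"BD-rate: {bd_rate(*hevc, *test):.1f}%")   # around -30%, i.e. ~30% bitrate saving
```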

Watch now!
Speaker

Dan Grois Dan Grois
Principal Researcher,
Comcast

Video: Netflix – Delivering better video encodes for legacy devices

With over 139 million paying customers, Netflix is very much in the bandwidth optimisation game. It keeps their costs down, it keeps customers’ costs down for those on metered tariffs and a lower bitrate keeps the service more responsive.

As we’ve seen on The Broadcast Knowledge over the years, Netflix has tried hard to find new ways to encode video with Per-Title encoding, VMAF and, more recently, per-shot encoding as well as moving to more efficient codecs such as AV1.


Mariana Afonso from Netflix discusses what you do with devices that can’t decode the latest codecs, either because they are too old or can’t get certification. Techniques such as per-title encoding work well because they are wholly managed in the encoder, whereas with codecs such as AV1 the decoder has to support it too, meaning it’s not as widely applicable an optimisation.

As per-title encoding was developed within Netflix before their VMAF metric was finished, it still uses PSNR, explains Mariana. This means there is still an opportunity to bring down bitrates by using VMAF. Because VMAF more accurately captures how the video looks, it’s able to guide optimisation algorithms better and shows gains in tests.
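As a rough illustration of how a perceptual metric can drive such an optimisation, here is a sketch that picks the cheapest encode meeting a quality target from a set of trial encodes. The rendition list, scores and target are made up, and this is not Netflix’s actual algorithm.

```python
# Hypothetical trial encodes of one title: (bitrate_kbps, vmaf_score)
trial_encodes = [
    (1200, 88.0),
    (1800, 93.5),
    (2500, 95.2),
    (3500, 96.1),
]

def cheapest_meeting_target(encodes, vmaf_target=93.0):
    """Return the lowest-bitrate encode whose quality meets the target."""
    good_enough = [e for e in encodes if e[1] >= vmaf_target]
    if not good_enough:
        return max(encodes, key=lambda e: e[1])   # fall back to the best we have
    return min(good_enough, key=lambda e: e[0])

print(cheapest_meeting_target(trial_encodes))     # (1800, 93.5)
```

Swapping PSNR for VMAF in a selection step like this is what allows the same perceived quality to be hit with fewer bits on content where PSNR misjudges how the picture actually looks.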

Better than per-title is per-chunk. The per-chunk work modulates the average target bitrate from chunk to chunk, which avoids over-allocating bits to low-complexity scenes and results in a more consistent quality, with gains of 6 to 16%.
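A minimal sketch of that idea, modulating each chunk’s target around the stream’s average bitrate according to its relative complexity; the complexity figures and clamping range are invented for illustration and don’t reflect Netflix’s implementation.

```python
# Hypothetical relative complexity per chunk (1.0 = average scene)
chunk_complexity = [0.6, 0.9, 1.5, 1.2, 0.8]
average_target_kbps = 3000

def per_chunk_targets(complexities, avg_kbps, lo=0.5, hi=1.6):
    """Scale each chunk's target by its complexity, then renormalise so the stream average is preserved."""
    raw = [avg_kbps * min(max(c, lo), hi) for c in complexities]
    scale = avg_kbps * len(raw) / sum(raw)
    return [round(r * scale) for r in raw]

print(per_chunk_targets(chunk_complexity, average_target_kbps))
# Low-complexity chunks get fewer bits, complex chunks more; the average stays ~3000 kbps
```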

Watch now!
Speaker

Mariana Afonso
Research Scientist, Video Algorithms,
Netflix