If there’s any talk that cuts through the AV1 hype, it must be this one. The talk from the @Scale conference starts by re-introducing AV1 and AoM but then moves quickly on to encoding techniques and the toolsets now available in AV1.
Starting with the evolution from VP9 to AV1, Google engineer Yue Chen looks at:
AV1 has strong backing from tech giants but is still seldom seen in the wild. Find out what the plans are for the future with Google’s Debargha Mukherjee.
Debargha’s intent in this talk is simple: to frame a description of what AV1 can do and is doing today in terms of the history of the codec and looking forward to the future and a potential AV2.
The talk starts by demonstrating the need for better video codecs, not least of which is the statistic that, by 2021, 81% of the internet’s traffic is expected to be video. On top of that, there is frustration with the slow, decade-long refresh process which is traditional for video codecs. To match the new internet landscape with its fast-evolving services, it seemed appropriate to have a codec which not only delivered better compression but also followed a quicker five-year refresh cycle.
As a comparison to the royalty-free AV1, Debargha then looks at VP9 and how it is deployed, and at VP10, whose development was stopped and diverted into the AV1 effort. That effort is the topic for the next part of the talk: the Alliance for Open Media, the standardisation process, and then a look at some of the encoding tools available to achieve the stated aims.
To round off the description of what’s presently happening with AV1, trials comparing VP9, HEVC and AV1 are shown, demonstrating AV1’s ability to improve compression at a given quality. Bitmovin’s and Facebook’s tests are also highlighted, along with speed tests.
Looking, now, to the future, the talk finishes by explaining the roadmap for hardware decoding and other expected milestones in the coming years, plus software work such as SVT-AV1 and dav1d for optimised encoding and decoding. With the promised five-year cycle, we need to look forward now to AV2, and Debargha discusses what it might be and what it would need to achieve.
With the advent of digital video, the people in the middle of the broadcast chain have had little to do with colour for the most part. Yet those in post production, acquisition and decoding/display are finding life more and more difficult as we continue to expand the colour gamut and deliver on new displays.
Google’s Steven Robertson takes us comprehensively through the challenges of colour, from the fundamentals of sight to the intricacies of dealing with Rec. 601, Rec. 709, BT.2020, HDR, YUV transforms and all the mistakes people make in between.
An approachable talk which gives a great overview, raises good points and goes into detail where necessary.
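To make the YUV transform concrete, here is a minimal sketch of the BT.709 R'G'B' → Y'CbCr matrix the talk touches on. The function name and the full-range scaling are illustrative assumptions, not taken from the talk:

```python
# Sketch of a BT.709 R'G'B' -> Y'CbCr transform (full range; the
# scaling choice is an assumption for illustration).
KR, KB = 0.2126, 0.0722  # BT.709 luma coefficients
KG = 1.0 - KR - KB

def rgb_to_ycbcr(r, g, b):
    """Gamma-encoded R'G'B' in [0,1] -> (Y' in [0,1], Cb/Cr in [-0.5,0.5])."""
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))  # scale so Cb spans [-0.5, 0.5]
    cr = (r - y) / (2 * (1 - KR))  # same for Cr
    return y, cb, cr

# White carries all its energy in luma; pure blue sits at the Cb extreme.
print(rgb_to_ycbcr(1.0, 1.0, 1.0))
print(rgb_to_ycbcr(0.0, 0.0, 1.0))
```

Using the wrong coefficient pair (e.g. Rec. 601's 0.299/0.114 on BT.709 content) is exactly the kind of mix-up the talk warns about.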
An interesting point of view is that colour subsampling should die. After all, we’re now at a point where we could feed an encoder with 4:4:4 video and have it compress the colour channels more heavily than the luminance channel. Steven says this would give more accurate colour than stripping out a fixed amount of data, as 4:2:2 subsampling does.
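As a rough illustration of what fixed subsampling throws away (the arithmetic is mine, not from the talk), here is the raw sample count of an 8-bit 1080p frame under the common schemes:

```python
# Illustrative arithmetic: raw bytes per 8-bit Y'CbCr frame under
# common chroma subsampling schemes, for a 1920x1080 picture.

def frame_bytes(width, height, scheme):
    """Bytes per frame; each scheme halves chroma horizontally/vertically."""
    divisors = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}
    h_div, v_div = divisors[scheme]
    luma = width * height                               # one Y' sample per pixel
    chroma = 2 * (width // h_div) * (height // v_div)   # Cb plane + Cr plane
    return luma + chroma

for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(s, frame_bytes(1920, 1080, s))
# 4:2:2 discards a third of the 4:4:4 data, 4:2:0 discards half,
# regardless of how much colour detail the picture actually has.
```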
Given at Brightcove HQ as part of the San Francisco Video Tech meet-ups.
SNMP has long been widely used in the broadcast industry and is a great example of the industry adopting a technology which is readily available but has never quite satisfied all its needs, not least security. Here, Rob Shakir and Carl Lebsack from Google explain their dissatisfaction with SNMP and describe the gRPC-based system Google has written and deployed in response, which streams telemetry at high frequency. As larger facilities move to uncompressed essences over IP, this should solve a number of issues for the broadcast industry.
This talk given at NANOG 73 covers:
The requirement for time-accurate data collection
The need for finer granularity
The inability of SNMP to carry large amounts of data
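The granularity point above can be sketched with a toy simulation. All names and numbers here are made up for illustration; this is not the actual gRPC or SNMP API, just the shape of the problem:

```python
# Hypothetical sketch: a one-second traffic spike survives in a
# per-second streamed series but is averaged away by a 30-second poll.

def counter_series(seconds):
    """Per-second interface byte counts, with a brief spike at t=10."""
    return [10_000_000 if t == 10 else 100_000 for t in range(seconds)]

def polled_rates(series, interval):
    """Average the per-second series over each poll window, which is
    what an SNMP-style counter-delta poll effectively reports."""
    rates = []
    for start in range(0, len(series), interval):
        window = series[start:start + interval]
        rates.append(sum(window) / len(window))
    return rates

stream = counter_series(30)       # streamed telemetry keeps every sample
polls = polled_rates(stream, 30)  # coarse poll yields one averaged value

print(max(stream))  # the spike is visible in the stream
print(max(polls))   # but diluted beyond recognition in the poll
```

The same reasoning motivates streaming telemetry's push model: the device publishes samples as they happen instead of waiting for a collector to ask.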