Video: Pervasive video deep-links

Google have launched a new initiative allowing publishers to highlight key moments in a video so that search results can jump straight to that moment. Whether you have a video that covers three topics, one which poses questions and provides answers, or one with a big reveal and reaction shots, this could help increase engagement.

The plan is that content creators tell Google about these moments, so Paul Smith from theMoment.tv takes to the stage at San Francisco Video Tech to explain how. After looking at a live demo, Paul dives into the webpage code that makes it happen. Hidden in the page’s HTML, he shows a script tag whose type is set to application/ld+json. This holds the metadata for the video as a whole, such as the thumbnail URL and the content URL, but it also defines the highlighted ‘parts’ of the video, each with a URL of its own.
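
As a rough sketch of what such markup can look like (the names, URLs and timings below are hypothetical, and the exact properties Google expects may differ, so treat this as illustrative schema.org-style markup rather than the talk’s own example):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example video",
  "thumbnailUrl": "https://example.com/thumbnail.jpg",
  "contentUrl": "https://example.com/video.mp4",
  "hasPart": [
    {
      "@type": "Clip",
      "name": "First topic",
      "startOffset": 30,
      "endOffset": 120,
      "url": "https://example.com/video?t=30"
    },
    {
      "@type": "Clip",
      "name": "The big reveal",
      "startOffset": 300,
      "endOffset": 360,
      "url": "https://example.com/video?t=300"
    }
  ]
}
</script>
```

Each Clip carries its own URL with a start time, which is what lets a search result jump straight to that moment.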

While the programme is currently limited to a small set of content publishers, everyone can benefit from these insights into Google video search. Google will also look at YouTube descriptions in which people link to specific times, such as the different tracks in a music mix, and bring those into the search results.

Paul looks at what this means for website and player writers. One suggestion is the need to scroll the page to the correct video and to clearly signpost the different videos on a page; a sketch of that idea follows below. Paul also looks towards the future at what could be done to better integrate with this feature, for example updating the player UI to view and create moments, or improving the ability to seek with sub-second accuracy. Intriguingly, he suggests that it may be advantageous to synchronise segment timings with the beginning of moments for popular videos. Certainly food for thought.
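
To make those suggestions concrete, here is a hypothetical sketch of a deep-link handler; the `v` and `t` parameter names and the `data-video-id` attribute are our assumptions for illustration, not anything Paul specifies:

```ts
// Hypothetical deep-link handler: find the right video on the page,
// scroll it into view, then seek to the requested moment.
// Assumes moment URLs like https://example.com/page?v=intro&t=92.5
function handleMomentDeepLink(): void {
  const params = new URLSearchParams(window.location.search);
  const videoId = params.get("v");   // which of the page's videos (assumed parameter)
  const t = Number(params.get("t")); // start time in seconds, possibly fractional

  if (!videoId || Number.isNaN(t)) return;

  const video = document.querySelector<HTMLVideoElement>(
    `video[data-video-id="${videoId}"]` // assumed signposting attribute
  );
  if (!video) return;

  // Signpost the target video by bringing it into view...
  video.scrollIntoView({ behavior: "smooth", block: "center" });

  // ...then seek. currentTime accepts fractional seconds, although how
  // precisely playback lands depends on the media's keyframe spacing.
  video.currentTime = t;
  video.play().catch(() => { /* autoplay may be blocked; the user can press play */ });
}

window.addEventListener("DOMContentLoaded", handleMomentDeepLink);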

Watch now!
Speaker

Paul Smith
Founder,
theMoment.tv

Video: A Technical Overview of AV1

If there’s any talk that cuts through the AV1 hype, it must be this one. The talk from the @Scale conference starts by re-introducing AV1 and the Alliance for Open Media (AoM) but then moves quickly on to encoding techniques and the toolsets now available in AV1.

Tracing the evolution from VP9 to AV1, Google engineer Yue Chen covers:

  • Extended Reference Frames
  • Motion Vector Prediction
  • Dynamic Motion Vector Referencing
  • Overlapped Block Motion Compensation
  • Masked Compound Prediction
  • Warped Motion Compensation
  • Transform (TX) Coding, Kernels & Block Partitioning
  • Entropy Coding
  • AV1 Symbol Coding
  • Level-map TX Coefficient Coding
  • Restoration and Post-Processing
  • Constrained Directional Enhancement Filtering
  • In-loop Restoration & Super Resolution
  • Film Grain Synthesis

The talk finishes by looking at the compression efficiency of AV1 against both HEVC (x265) and VP9 (libvpx), then at coding complexity in terms of encoding speed, plus what’s next on the roadmap!

Watch now!

Speaker

Yue Chen
Senior AV1 Engineer,
Google

Video: The Past, Present and Future of AV1

AV1 has strong backing from tech giants but is still seldom seen in the wild. Find out what the plans are for the future with Google’s Debargha Mukherjee.

Debargha’s intent in this talk is simple: to frame what AV1 can do, and is doing, today in terms of the codec’s history, before looking forward to the future and a potential AV2.

The talk starts by demonstrating the need for better video codecs, not least of which is the statistic that, by 2021, 81% of the internet’s traffic is expected to be video. On top of that, there is frustration with the slow, decade-long refresh cycle that is traditional for video codecs. To match the new internet landscape of fast-evolving services, it seemed appropriate to have a codec which not only delivered better encoding but also saw a quicker, five-year refresh cycle.

As a comparison to the royalty-free AV1, Debargha then looks at VP9 and how it is deployed, as well as VP10, whose development was stopped and diverted into the AV1 effort. AV1 is then the topic of the next part of the talk: the Alliance for Open Media, the standardisation process, and a look at some of the encoding tools available to achieve the stated aims.

To round off the description of what’s presently happening with AV1, trials of VP9, HEVC and AV1 are shown, demonstrating AV1’s ability to improve compression at a given quality. Bitmovin’s and Facebook’s tests are also highlighted, along with speed tests.

Looking now to the future, the talk finishes by explaining the roadmap for hardware decoding and other expected milestones in the coming years, plus software work such as SVT-AV1 and dav1d for optimised encoding and decoding. With the promised five-year cycle, we need to look forward now to AV2, and Debargha discusses what it might be and what it would need to achieve.

Watch now!
Speaker

Debargha Mukherjee
Principal Software Engineer,
Google

Video: Colour

With the advent of digital video, the people in the middle of the broadcast chain have had little to do with colour for the most part. Yet those in post production, acquisition and decoding/display are finding life more and more difficult as we continue to expand colour gamuts and deliver to new displays.

Google’s Steven Robertson takes us comprehensively through the challenges of colour, from the fundamentals of sight to the intricacies of dealing with Rec. 601, Rec. 709, BT.2020, HDR and YUV transforms, and all the mistakes people make in between.

An approachable talk which gives a great overview, raises good points and goes into detail where necessary.
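
To give a flavour of one of those intricacies, below is a minimal sketch of a YUV (strictly, Y'CbCr) transform using the BT.709 luma coefficients. This is our illustration rather than code from the talk, and a real pipeline would also need to agree on transfer function, signal range and chroma siting:

```ts
// A sketch of an R'G'B' → Y'CbCr conversion using BT.709 coefficients,
// with all values normalised to [0, 1] (full range).
function rgbToYCbCr709(r: number, g: number, b: number): [number, number, number] {
  // BT.709 luma coefficients
  const y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
  // Colour-difference signals scaled so Cb and Cr span [-0.5, 0.5]
  const cb = (b - y) / 1.8556; // 1.8556 = 2 * (1 - 0.0722)
  const cr = (r - y) / 1.5748; // 1.5748 = 2 * (1 - 0.2126)
  return [y, cb, cr];
}

// Example: a saturated red
console.log(rgbToYCbCr709(1, 0, 0)); // [0.2126, -0.1146, 0.5]
```

Quietly swapping in the BT.601 coefficients (0.299, 0.587, 0.114) at one end of the chain is exactly the sort of mismatch that produces the subtle colour errors the talk covers.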

An interesting point of view is that colour subsampling should die. After all, we’re now at a point where we could feed an encoder with 4:4:4 video and have it compress the colour channels more than the luminance channel. Steven says that this would give more accurate colour than stripping out a fixed amount of data as 4:2:2 subsampling does.
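
To put rough numbers on that fixed loss (a quick illustrative calculation of ours, not figures from the talk):

```ts
// Samples per frame for common chroma subsampling schemes at a given
// resolution. 4:4:4 keeps full-resolution chroma; 4:2:2 halves it
// horizontally; 4:2:0 halves it horizontally and vertically.
type Scheme = "4:4:4" | "4:2:2" | "4:2:0";

function samplesPerFrame(width: number, height: number, scheme: Scheme): number {
  const luma = width * height;
  const chromaFactor = { "4:4:4": 1, "4:2:2": 0.5, "4:2:0": 0.25 }[scheme];
  return luma + 2 * luma * chromaFactor; // one luma plane plus two chroma planes
}

console.log(samplesPerFrame(1920, 1080, "4:4:4")); // 6,220,800
console.log(samplesPerFrame(1920, 1080, "4:2:2")); // 4,147,200 — a third of the samples discarded
console.log(samplesPerFrame(1920, 1080, "4:2:0")); // 3,110,400 — half discarded, before the encoder sees the picture
```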

Given at Brightcove HQ as part of the San Francisco Video Tech meet-ups.

Watch now!

Speaker

Steven Robertson
Software Engineer,
Google