Video: Colour

With the advent of digital video, those in the middle of the broadcast chain have, for the most part, little to do with colour. Yet those in post production, acquisition and decoding/display are finding life more and more difficult as colour gamuts continue to expand and content is delivered to new displays.

Google’s Steven Robertson takes us comprehensively through the challenges of colour, from the fundamentals of sight to the intricacies of dealing with Rec. 601, Rec. 709, BT.2020, HDR, YUV transforms and all the mistakes people make in between.

An approachable talk which gives a great overview, raises good points and goes into detail where necessary.

An interesting point of view is that colour subsampling should die. After all, we’re now at a point where we could feed an encoder 4:4:4 video and get it to compress the colour channels more than the luminance channel. Steven says this would give more accurate colour than stripping out a fixed amount of data, as 4:2:2 subsampling does.
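To put numbers on the fixed cost Steven is arguing against, here is a small arithmetic sketch (my own illustration, not from the talk) of raw 8-bit frame sizes under the common subsampling schemes. An encoder given 4:4:4 input can spend bits on chroma adaptively, whereas subsampling discards the same fraction of chroma data on every frame regardless of content.

```python
# Illustrative arithmetic only: raw bytes per 8-bit Y'CbCr frame under
# common chroma subsampling schemes, showing the fixed data each discards.

def frame_bytes(width: int, height: int, scheme: str) -> int:
    """Raw bytes for one 8-bit Y'CbCr frame under a subsampling scheme."""
    luma = width * height            # Y' is always full resolution
    chroma_fraction = {
        "4:4:4": 1.0,                # Cb and Cr at full resolution
        "4:2:2": 0.5,                # half horizontal chroma resolution
        "4:2:0": 0.25,               # half horizontal and vertical
    }[scheme]
    return int(luma + 2 * luma * chroma_fraction)

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, frame_bytes(1920, 1080, scheme))
```

For a 1080p frame, 4:2:0 throws away half the raw data up front, whether or not the picture content could have tolerated it.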

Given at Brightcove HQ as part of the San Francisco Video Tech meet-ups.

Watch now!

Speaker

Steven Robertson
Software Engineer,
Google

Video: Per-title Encoding at Scale

Mux is a very proactive company pushing streaming technology forward. At NAB 2019 they announced Audience Adaptive Encoding, which offers encodes tailored both to your content and to the typical bitrate of your viewing demographic. Underpinning this technology is machine learning and their per-title encoding technology, which was released last year.

This talk with Nick Chadwick looks at what per-title encoding is, how you can work out which resolutions and bitrates to encode at, and how to deliver this as a useful product.

Nick takes some time to explain Mux’s ‘convex hulls’, which give a shape to the content’s performance at different bitrates and help visualise the optimum encoding parameters for the content. Moreover, using this technique, we see some surprising circumstances where it makes sense to start at high resolutions, even for low bitrates.
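The idea behind the convex hull can be sketched in a few lines. The following is a hypothetical illustration (the function name, VMAF scores and bitrates are made up, not Mux’s data): given trial encodes of one title at several resolutions, keep only the points on the bitrate/quality frontier, so each surviving point offers the best quality seen at or below its bitrate.

```python
# Hypothetical sketch of the per-title "convex hull" idea: given trial
# encodes of one title as (bitrate_kbps, quality, resolution) points,
# keep only those on the upper-left frontier.

def quality_frontier(encodes):
    """Return encodes on the bitrate/quality frontier, lowest bitrate first."""
    best, frontier = -1.0, []
    for enc in sorted(encodes, key=lambda e: e[0]):   # ascending bitrate
        bitrate, quality, resolution = enc
        if quality > best:            # keep only strict quality improvements
            frontier.append(enc)
            best = quality
    return frontier

trial_encodes = [                     # illustrative numbers only
    (500, 62.0, "1080p"),             # high res can win even at low bitrate
    (500, 58.0, "720p"),
    (1500, 78.0, "1080p"),
    (1500, 74.0, "720p"),
    (3000, 90.0, "1080p"),
]
print(quality_frontier(trial_encodes))
```

In this toy data the 1080p encode beats 720p even at 500 kbps, echoing the talk’s observation that high resolutions are sometimes the right choice at low bitrates.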

Looking then at how to actually work this out on a title-by-title basis, Nick explains the pros and cons of the different approaches, going on to explain how Mux used machine learning to build the model that makes this work.

Finishing off with an extensive Q&A, this talk is a great overview of how to pick good encoding parameters, manually or otherwise.

Watch now!

Speaker

Nick Chadwick
Software Engineer,
Mux Inc.

Video: Running live video with FFmpeg

San Francisco Video Tech welcomes Haluk Ucar talking about live video streaming. How do you encode multiple resolutions/bitrates efficiently on CPUs and maximise the number of channels? Is there value in managing multiple encodes centrally? How can we manage the balance between CPU use and VQ?

Haluk discusses a toolset for adaptive decisions and looks at adaptive segment decisions, where he examines the relationship between IDR frames and frequent scene changes.
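The IDR/scene-change tension comes up in practice whenever segments must stay aligned across renditions. As an illustration (my own sketch, not Haluk’s configuration), an FFmpeg invocation for segmented live delivery typically pins IDR frames to a fixed cadence and disables scene-change keyframe insertion, so every rendition cuts segments at the same points:

```shell
# Hypothetical live-encode sketch: a fixed 60-frame GOP with scene-change
# keyframes disabled (-sc_threshold 0), so 2-second HLS segments align.
ffmpeg -i input.ts \
  -c:v libx264 -preset veryfast -b:v 3000k \
  -g 60 -keyint_min 60 -sc_threshold 0 \
  -c:a aac -b:a 128k \
  -f hls -hls_time 2 out.m3u8
```

The trade-off Haluk touches on is visible here: a hard scene cut mid-GOP costs quality until the next scheduled IDR, but letting the encoder insert keyframes at scene changes would break segment alignment.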

Haluk covers a lot and finishes with a Q&A. So if you have an interest in live streaming, then watch now!

Speaker

Haluk Ucar
Director of Engineering,
IDT

Video: VP9 Transcoding for Live eSports Broadcast

VP9 is a well-known codec, but it hasn’t seen many high-profile live deployments, which makes Twitch’s move to deliver their platform using VP9 in preference to AVC all the more interesting.

Here, Yueshi Shen from Twitch explains the rationale for VP9 by laying out the scale of Twitch and looking at their AVC bitrate demands. He explains the patent issues with HEVC and VP9, then looks at decoder support across devices and platforms. Importantly, encoder implementation is examined, leading to Twitch’s choice of FPGAs to provide live encoding.

Yueshi then looks at the potential of AV1’s Switch_Frame to provide low-latency broadcast at scale.

Watch now!

Speaker

Yueshi Shen
Principal (Level 7) Research Engineer & Engineering Manager,
Twitch