Video: Per-title Encoding at Scale

Mux is a very proactive company pushing streaming technology forward. At NAB 2019 they announced Audience Adaptive Encoding, which offers encodes tailored not only to your content but also to the typical bitrate of your viewing demographic. Underpinning this are machine learning and the per-title encoding technology Mux released last year.

This talk with Nick Chadwick looks at what per-title encoding is, how you can work out which resolutions and bitrates to encode at, and how to deliver this as a useful product.

Nick takes some time to explain Mux’s ‘convex hulls’, which give a shape to the content’s performance at different bitrates and help visualise the optimum encoding parameters for the content. Moreover, using this technique, we see some surprising circumstances in which it makes sense to start at high resolutions, even for low bitrates.
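
To make the idea concrete, here is a minimal sketch of building such a hull in Python. All of the numbers are illustrative placeholders, not Mux’s data, and the quality scores stand in for whatever metric (VMAF, SSIM, etc.) you measure with:

```python
# Sketch: pick the winning resolution at each bitrate by taking the
# upper convex hull of (bitrate, quality) points across all test encodes.

ENCODES = [
    # (bitrate_kbps, quality_score, resolution) -- made-up sample values
    (300, 58.0, "640x360"), (600, 68.0, "640x360"), (1200, 74.0, "640x360"),
    (700, 70.0, "1280x720"), (1500, 84.0, "1280x720"), (3000, 90.0, "1280x720"),
    (800, 62.0, "1920x1080"), (2400, 88.0, "1920x1080"), (4500, 95.0, "1920x1080"),
]

def upper_hull(points):
    """Upper convex hull (monotone chain) over (bitrate, quality, ...) tuples."""
    hull = []
    for p in sorted(points):
        # Drop the previous point while it lies on or below the straight
        # line from hull[-2] to the new point, i.e. it is not on the hull.
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

for kbps, score, res in upper_hull(ENCODES):
    print(f"{kbps:>5} kbps -> encode at {res} (quality {score})")
```

Each point on the hull tells you which resolution wins at that bitrate; any encode sitting below the hull is wasting bits.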

Looking then at how to actually work this out on a title-by-title basis, Nick explains the pros and cons of the different approaches, going on to describe how Mux used machine learning to build the model that makes this work.

Finishing off with an extensive Q&A, this talk is a great overview of how to pick good encoding parameters, manually or otherwise.

Watch now!

Speaker

Nick Chadwick
Software Engineer,
Mux Inc.

Video: Best Practices for Advanced Software Encoder Evaluations

Streaming Media East brings together Beamr, Netflix, BAMTECH Media and SSIMWAVE to discuss the best ways to evaluate software encoders, and we see there is much overlap with hardware encoder evaluation, too.

The panel gets into detail covering:

  • Test Design
  • Choosing source sequences
  • Rate Control Modes
  • Bit Rate or Quality Target Levels
  • Offline (VOD) vs Live (Linear)
  • Discrete vs. Multi-resolution/Bitrate
  • Subjective vs. objective measurements
  • Encoding Efficiency vs Performance
  • Video vs Still frames
  • PSNR Tuning
  • Evaluation at Encode Resolution vs Display Resolution (see the sketch after this list)
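
To make the objective-measurement and resolution points concrete, here is a toy PSNR function in Python (assuming numpy; the frames and resolutions are placeholders). The key idea behind evaluating at display resolution is that a low-resolution rung should be upscaled to the size it will actually be viewed at before it is compared against the full-resolution reference:

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """PSNR in dB between two frames of identical shape."""
    ref = reference.astype(np.float64)
    dis = distorted.astype(np.float64)
    mse = np.mean((ref - dis) ** 2)
    if mse == 0:
        return float("inf")  # frames are identical
    return 10.0 * np.log10((max_val ** 2) / mse)

# Evaluating at display resolution: upscale the decoded 720p frame to
# 1080p (with your scaler of choice) *before* calling psnr() against the
# 1080p reference, rather than downscaling the reference to 720p.
```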

Watch now for this comprehensive ‘How To’.

Speakers

Dr. Anne Aaron
Director of Video Algorithms,
Netflix
Scott Labrozzi
VP Video Processing, Core Media Video Processing,
BAMTECH Media
Dr. Zhou Wang
Chief Science Officer,
SSIMWAVE
Moderator: Tom Vaughan
VP Strategy,
Beamr

Video: Video-Evaluation Best Practices: Testing & Optimisation Techniques

In this webcast, AWS Elemental dives into a topic other companies don’t discuss: The video encoding techniques some vendors use to sway judgment during proof-of-concept demos.

Improve your discerning eye and your analysis acumen, and obtain essential knowledge:

  • Methods and best practices for evaluating video quality
  • Examples of scenes which look totally different but have the same metric score
  • Important considerations for accurate evaluation of video results
  • Subjective vs. Objective results
  • How encoder settings affect output and how you can fine tune them for the best results
  • What Adaptive Quantization is and how it can improve your video output quality (a brief example follows this list)
  • Encoding techniques that impress in shootouts but might not be practical in production
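
On the adaptive quantization point, here is a small hedged example: this sketch shells out to ffmpeg’s libx264 encoder (assuming ffmpeg is installed; the file names are placeholders) and enables auto-variance AQ, which redistributes bits between flat and detailed regions within each frame:

```python
import subprocess

def encode_with_aq(src, dst, bitrate="3000k", aq_mode=2, aq_strength=1.0):
    """Encode with x264 adaptive quantization.

    aq_mode: 0 disables AQ, 1 is variance AQ, 2 is auto-variance AQ.
    aq_strength: how strongly bits are biased towards flat and dark
    regions (1.0 is the x264 default).
    """
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-b:v", bitrate,
        "-aq-mode", str(aq_mode),
        "-aq-strength", str(aq_strength),
        "-an", dst,  # drop audio so the comparison is video-only
    ], check=True)

# encode_with_aq("input.mp4", "out_aq.mp4")
```

Encoding the same clip with aq_mode=0 and comparing the two outputs side by side is a quick way to see the effect on flat gradients and dark scenes.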

Video encoding comparisons should inform, not fool. Learn what you need to know, before you start the process.

Speakers:

Dan Gehred
Solutions Marketing Manager for Compression,
AWS Elemental
Dan Gehred is responsible for product marketing for all compression software products. He has over 15 years of experience building and marketing digital media applications.
Dan Germain
Sr. Product Manager, Compression – VOD,
AWS Elemental
With over 30 years of industry experience in the international media marketplace, Dan is passionate about technology, developing new standards, promoting positive change and business approaches to enhanced profitability.

Video: How to Fine Tune your Adaptive Encoding Groups with Objective Quality Metrics


In this on-demand video, Streaming Learning Center’s Jan Ozer explains objective metrics and how they can be used to build better ABR ladders.

Choosing the number of streams in an adaptive group and configuring them is usually a subjective, touchy-feely exercise, with no way to really gauge the effectiveness and efficiency of the streams. However, by measuring stream quality via metrics such as PSNR, SSIMplus, and VQM, you can precisely assess the quality delivered by each stream and its relevance to the adaptive group, as sketched below.
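
As a rough illustration (not Jan’s actual tooling), the sketch below scores one ladder rung against the source using ffmpeg’s psnr filter, upscaling the rung to the reference resolution first; it assumes ffmpeg is installed, and the file names and resolutions are placeholders:

```python
import re
import subprocess

def rung_psnr(rung_path, reference_path, ref_w=1920, ref_h=1080):
    """Average PSNR of an ABR rung against the source, at display resolution."""
    graph = (
        f"[0:v]scale={ref_w}:{ref_h}:flags=bicubic[up];"  # upscale the rung
        "[up][1:v]psnr"                                   # compare with source
    )
    result = subprocess.run(
        ["ffmpeg", "-i", rung_path, "-i", reference_path,
         "-lavfi", graph, "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # The psnr filter prints a summary such as "... average:42.31 ..." to stderr.
    match = re.search(r"average:([\d.]+)", result.stderr)
    return float(match.group(1)) if match else None

# for rung in ("ladder_360p.mp4", "ladder_720p.mp4", "ladder_1080p.mp4"):
#     print(rung, rung_psnr(rung, "source_1080p.mp4"))
```

Running this across every rung makes it obvious when two neighbouring streams deliver nearly identical quality and one of them could be dropped.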

This presentation identifies several key objective quality metrics, teaches how to apply them, and provides an objective framework for analyzing which streams are absolutely required in your adaptive group and their optimal configuration.

Watch now!