Video: Hardware Transcoding Solutions For The Cloud

Hardware encoding is increasingly pervasive, with Intel’s Quick Sync embedding encoding hardware within its CPUs’ integrated GPUs and NVIDIA GPUs offering NVENC encoding support, so how does it compare with software encoding? For HEVC, can Xilinx’s FPGA solution be a boost in terms of quality or cost compared to software encoding?

Jan Ozer has stepped up to the plate to put this all to the test, analysing how many real-time encodes are possible on various cloud computing instances, the cost implications and the quality of the output. Jan’s analytical and systematic approach brings us data rather than anecdotes, giving confidence in the outcomes and the ability to test them for yourself.

Over and above these elements, Jan also looks at the bit rate stability of the encodes, which can be important for systems which are sensitive to variations, such as services running at scale. We see that the hardware AVC solutions perform better than x264 in this regard.

Jan takes us through the way he set up these tests whilst sharing the relevant FFmpeg commands. Finally, he shares BD plots and example images which exemplify the differences between the codecs.
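Jan’s exact commands are shown in the video; as an illustrative sketch only, comparing a software x264 encode against a hardware NVENC encode of the same source might look like the commands below. The file names, bitrate and preset values are placeholder assumptions, and the VMAF measurement requires an FFmpeg build with libvmaf enabled:

```shell
# Software encode with x264 (CPU)
ffmpeg -y -i source.mp4 -c:v libx264 -preset medium -b:v 5M x264_out.mp4

# Hardware encode with NVENC (NVIDIA GPU)
ffmpeg -y -i source.mp4 -c:v h264_nvenc -preset p4 -b:v 5M nvenc_out.mp4

# Score an output against the source with VMAF (needs libvmaf support)
ffmpeg -i nvenc_out.mp4 -i source.mp4 -lavfi libvmaf -f null -
```

Running the same measurement over each encoder’s output at several bitrates is what produces the rate-distortion data behind BD plots.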

Watch now!
Download the slides
Speaker

Jan Ozer
Principal, Streaming Learning Center
Contributing Editor, Streaming Media

Video: Adaptive Bit Rate video delivery (MPEG-DASH)

MPEG-DASH has been in increasing use for many years and while the implementations and versions continue to improve and add new features, the core of its function remains the same and is the topic of this talk.

For anyone looking for an introduction to multi-bitrate streaming, this talk from Thomas Kernen is a great start as he charts the way streaming has progressed from the initial ‘HTTP progressive download’ to dynamic streaming which adapts to your bandwidth constraints.
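Thomas’s core point, that the player picks the rendition that fits its measured bandwidth, can be sketched in a few lines. The bitrate ladder and safety margin below are illustrative assumptions, not values from the talk:

```python
def pick_representation(bandwidths_bps, measured_bps, safety=0.8):
    """Choose the highest rendition whose bitrate fits within a
    safety margin of the measured throughput (a simple ABR heuristic)."""
    usable = measured_bps * safety
    candidates = [b for b in sorted(bandwidths_bps) if b <= usable]
    # Fall back to the lowest rendition if nothing fits
    return candidates[-1] if candidates else min(bandwidths_bps)

# Illustrative four-rung ladder, in bits per second
ladder = [500_000, 1_500_000, 3_000_000, 6_000_000]
print(pick_representation(ladder, measured_bps=4_000_000))  # → 3000000
```

Real players refine this with buffer occupancy and throughput smoothing, but the segment-by-segment choice is the essence of adaptive streaming.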

Thomas explains the way that players and servers talk and deliver files and summarises the end-to-end distribution ecosystem. He covers the fact that MPEG-DASH standardises the container, the manifest description information, captioning and other aspects. DRM is available through the Common Encryption scheme.

MPD files, the manifest text files at the core of MPEG-DASH, are next under the spotlight. Thomas talks us through the difference between Media Presentations, Periods, Representations and Segment Info. We then look at the ability to use the ISO BMFF format or MPEG-2 TS, as HLS does.
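To make that hierarchy concrete, a minimal static MPD might look like the sketch below. The URLs, codec strings, durations and bitrates are illustrative assumptions, not values from the talk:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT60S" minBufferTime="PT2S"
     profiles="urn:mpeg:dash:profile:isobmff-on-demand:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
      <Representation id="720p" bandwidth="1500000" width="1280"
                      height="720" codecs="avc1.64001f">
        <BaseURL>video_720p.mp4</BaseURL>
      </Representation>
      <Representation id="1080p" bandwidth="6000000" width="1920"
                      height="1080" codecs="avc1.640028">
        <BaseURL>video_1080p.mp4</BaseURL>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```

Each Representation here is one rung of the bitrate ladder; the player reads the manifest once and then switches between Representations as conditions change.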

The DASH Industry Forum, DASH-IF, is an organisation which promotes the use of DASH within businesses, which means that not only do they work on spreading the word of what DASH is and how it can be helpful, but they also support interoperability. DASH264 is also an output of the DASH-IF, and Thomas describes how this specification of using DASH helps with interoperability.

Buffer bloat is still an issue today: a phenomenon where, for certain types of traffic, the buffers upstream and locally in someone’s network can become perpetually full, resulting in increased latency in a stream and, potentially, instability. Thomas looks briefly at this before moving on to HEVC.

At the time of this talk, HEVC was still new and much has happened to it since. This part of the talk gives a good introduction to the reasons that HEVC was brought into being and serves as an interesting comparison for the reasons that VVC, AV1, EVC and other codecs today are needed.

For the latest on DASH, check out the videos in the list of related posts below.

Watch now!
Speaker

Thomas Kernen
Staff Architect, NVIDIA
Co-Chair SMPTE 32M Technology Committee, SMPTE
Formerly Technical Leader, Cisco

On-Demand Webinar: AI for Media and Entertainment

In this webinar, visual effects and digital production company Digital Domain will share their experience developing AI-based toolsets for applying deep learning to their content creation pipeline. AI is no longer just a research project but also a valuable technology that can accelerate labor-intensive tasks, giving time and control back to artists.

The webinar starts with a brief overview of deep learning and dives into examples of convolutional neural networks (CNNs), generative adversarial networks (GANs), and autoencoders. These examples will include flavors of neural networks useful for everything from face swapping and image denoising to character locomotion, facial animation, and texture creation.
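As a toy illustration of one of those flavours, the sketch below runs a forward pass through a randomly initialised dense autoencoder in NumPy, squeezing a 64-value “image” through a small bottleneck and back. It is purely illustrative of the autoencoder shape; the production tools discussed in the webinar are far larger CNN/GAN models:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """One fully connected layer with tanh activation."""
    return np.tanh(x @ w + b)

# A 64-pixel "image" compressed through an 8-unit bottleneck
x = rng.standard_normal(64)
w_enc, b_enc = rng.standard_normal((64, 8)) * 0.1, np.zeros(8)
w_dec, b_dec = rng.standard_normal((8, 64)) * 0.1, np.zeros(64)

code = dense(x, w_enc, b_enc)               # encoder: 64 -> 8
reconstruction = dense(code, w_dec, b_dec)  # decoder: 8 -> 64

print(code.shape, reconstruction.shape)  # (8,) (64,)
```

Training would adjust the weights to minimise the difference between `x` and `reconstruction`, forcing the bottleneck to learn a compact representation, which is what makes autoencoders useful for tasks like denoising.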

By attending this webinar, you will:

  • Get a basic understanding of how deep learning works
  • Learn about research that can be applied to content creation
  • See examples of deep learning–based tools that improve artist efficiency
  • Hear about Digital Domain’s experience developing AI-based toolsets

Watch Now!

DOUG ROBLE
Senior Director of Software R&D, Digital Domain
RICK CHAMPAGNE
Global Media and Entertainment Strategy and Marketing, NVIDIA
RICK GRANDY
Senior Solutions Architect, Professional Visualization, NVIDIA
GARY BURNETT
Solutions Architect, Professional Visualization, NVIDIA