Video: CDN Trends in FPGAs & GPUs

As technology continues to improve, immersive experiences become ever more feasible. This video looks at how CDNs can play their part in enabling technologies which might seem to demand fast, local compute but which, as with many internet services, really depend on low latency.

Greg Jones from Nvidia and Nehal Mehta from Intel give us the lowdown in this video on what’s happening today to enable low-latency CDNs and what the future might look like. Intel, owner of FPGA maker Altera, and Nvidia are both interested in how their products can be of as much service at the edge as in the core datacentres.

Greg is involved in XR development at Nvidia. ‘XR’ is a term which refers to an outcome rather than any specific technology. Ostensibly ‘eXtended’ reality, it includes some VR, some augmented reality and anything else which helps improve the immersive experience. Greg explains the importance of getting the ‘motion-to-photon’ delay to within 20ms. CDNs can play a role in this by moving compute to the edge, which tracks with the current push to reduce backhaul: edge computation is already on the rise.
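
To make that target concrete, here is a back-of-the-envelope sketch of a motion-to-photon budget for edge-rendered XR. The per-stage figures are assumptions for illustration; only the 20ms target comes from the talk.

```python
# Back-of-the-envelope motion-to-photon budget check. All per-stage figures
# are assumed placeholders; only the 20 ms target comes from the talk.
MOTION_TO_PHOTON_BUDGET_MS = 20.0

stages_ms = {
    "head tracking / sensor read": 2.0,
    "network round trip to edge compute": 5.0,
    "remote render on edge GPU": 8.0,
    "encode, transmit and decode frame": 3.0,
    "display scan-out": 2.0,
}

total_ms = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:38s} {ms:5.1f} ms")
verdict = "within" if total_ms <= MOTION_TO_PHOTON_BUDGET_MS else "over"
print(f"{'total':38s} {total_ms:5.1f} ms ({verdict} the {MOTION_TO_PHOTON_BUDGET_MS:.0f} ms budget)")
```

The point of the arithmetic is that every extra network hop eats into a very small budget, which is why placing the compute at the edge rather than in a distant core datacentre matters.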

Greg also touches on recent power improvements in newer GPUs. This is similar to what we heard the other day from Gerard Phillips of Arista, who said that switch manufacturers are still using technology that CPUs were on several years ago, meaning there’s plenty in the bank for speed increases over the coming years. According to Greg, the same is true for GPUs. Moreover, it’s important to compare compute per watt rather than in absolute terms.
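
As a trivial illustration of the compute-per-watt point, the sketch below compares two hypothetical cards; the throughput and power figures are invented placeholders, not NVIDIA data.

```python
# Compare GPUs by throughput per watt rather than absolute throughput.
# All figures are invented placeholders for illustration only.
gpus = [
    {"name": "large card", "tflops": 30.0, "watts": 450.0},
    {"name": "small card", "tflops": 15.0, "watts": 180.0},
]

for gpu in gpus:
    gpu["tflops_per_watt"] = gpu["tflops"] / gpu["watts"]
    print(f"{gpu['name']}: {gpu['tflops']:.0f} TFLOPS, "
          f"{gpu['tflops_per_watt']:.3f} TFLOPS/W")

# The winner can differ depending on which metric you pick.
print("highest absolute throughput:", max(gpus, key=lambda g: g["tflops"])["name"])
print("highest throughput per watt:", max(gpus, key=lambda g: g["tflops_per_watt"])["name"])
```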

Nehal Mehta explains that, in the same way that GPUs can offload certain tasks from the CPU, so can FPGAs. At scale, this can be critical for tasks like deep packet inspection, encryption or even dynamic ad insertion at the edge.

The second half of the video looks at what’s happening during the pandemic. Nehal explains that the need for encryption has increased, and Greg sees that large engineering functions are now being done in the cloud, or soon will be. Greg sees XR as going a long way towards helping people collaborate around a large digital model, which may also help to reduce travel.

The last point made is that all-day video conferencing leaves people wanting “more meaningful interactions”. We are seeing attempts at ever richer meeting experiences, both with and without XR.
Watch now!
Speakers

Greg Jones
Global Business Development, XR
NVIDIA
Nehal Mehta
Director, Visual Cloud, CDN Segment,
Intel
Moderator: Tim Siglin
Founding Executive Director,
Help Me Stream

Video: Hardware Transcoding Solutions For The Cloud

Hardware encoding is increasingly pervasive, with Intel’s Quick Sync building encoders into the integrated GPUs of its CPUs and NVIDIA’s GPUs offering NVENC encoding support, so how do they compare with software encoding? And for HEVC, can Xilinx’s FPGA solution offer a boost in quality or cost compared to software encoding?

Jan Ozer has stepped up to the plate to put this all to the test, analysing how many real-time encodes are possible on various cloud computing instances, the cost implications and the quality of the output. Jan’s analytical and systematic approach brings us data rather than anecdotes, giving confidence in the outcomes and the ability to test them for yourself.
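
To give a feel for the cost arithmetic involved, here is a small sketch; the hourly prices and real-time stream counts are placeholders, not Jan’s measured results.

```python
# Rough cost-per-stream arithmetic of the kind used when comparing cloud encoders.
# Instance prices and real-time stream counts are assumed placeholders,
# not figures from Jan's tests.
instances = [
    {"name": "CPU instance (x264)",  "usd_per_hour": 1.50, "realtime_streams": 4},
    {"name": "GPU instance (NVENC)", "usd_per_hour": 1.20, "realtime_streams": 10},
    {"name": "FPGA instance (HEVC)", "usd_per_hour": 1.65, "realtime_streams": 16},
]

for inst in instances:
    cost = inst["usd_per_hour"] / inst["realtime_streams"]
    print(f"{inst['name']:22s} ${cost:.3f} per stream-hour")
```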

Over and above these elements, Jan also looks at the bitrate stability of the encodes, which can be important for systems that are sensitive to variations, such as services running at scale. We see that the hardware AVC solutions perform better than x264.
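
One way to quantify that stability is to bucket packet sizes into one-second bins with ffprobe and look at the spread. The sketch below is an illustrative approach, not Jan’s exact methodology, and `encoded.mp4` is a hypothetical file name.

```python
# Measure per-second bitrate variation of an encoded file using ffprobe.
# Illustrative approach only, not the method used in the talk.
import json
import subprocess
from collections import defaultdict
from statistics import mean, pstdev

def per_second_bitrates(path):
    """Return the bitrate (bits per second) for each whole second of video."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-select_streams", "v:0",
         "-show_entries", "packet=pts_time,size", "-of", "json", path],
        check=True, capture_output=True, text=True,
    ).stdout
    buckets = defaultdict(int)
    for pkt in json.loads(out)["packets"]:
        buckets[int(float(pkt["pts_time"]))] += int(pkt["size"]) * 8
    return [buckets[second] for second in sorted(buckets)]

if __name__ == "__main__":
    rates = per_second_bitrates("encoded.mp4")  # hypothetical file name
    avg = mean(rates)
    print(f"mean bitrate: {avg / 1e6:.2f} Mbps, "
          f"std dev: {pstdev(rates) / 1e6:.2f} Mbps, "
          f"peak/mean ratio: {max(rates) / avg:.2f}")
```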

Jan takes us through the way he set up these tests, sharing the relevant ffmpeg commands. Finally, he shares BD-Rate plots and example images which exemplify the differences between the codecs.
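
Jan shares his exact command lines in the video; purely to indicate the shape of such a comparison, the sketch below drives ffmpeg’s software and hardware H.264 encoders from Python. The encoder names are standard ffmpeg encoders, but the preset, bitrate and file names are assumptions, not Jan’s settings.

```python
# Run comparable software and hardware H.264 encodes via ffmpeg.
# libx264, h264_nvenc and h264_qsv are standard ffmpeg encoders; the preset,
# bitrate and file names here are assumptions, not the settings from the talk.
import subprocess

SOURCE = "source.mp4"  # hypothetical test clip
BITRATE = "5M"         # one target bitrate so the outputs are comparable

encoders = {
    "x264":      ["-c:v", "libx264", "-preset", "medium"],
    "nvenc":     ["-c:v", "h264_nvenc"],
    "quicksync": ["-c:v", "h264_qsv"],
}

for name, args in encoders.items():
    cmd = ["ffmpeg", "-y", "-i", SOURCE, *args, "-b:v", BITRATE, f"out_{name}.mp4"]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```

The outputs can then be compared with a quality metric such as VMAF or PSNR at the same bitrate, which is the basis of the BD-Rate plots mentioned above.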

Watch now!
Download the slides
Speaker

Jan Ozer
Principal, Streaming Learning Center
Contributing Editor, Streaming Media

On-Demand Webinar: AI for Media and Entertainment

In this webinar, visual effects and digital production company Digital Domain will share their experience developing AI-based toolsets for applying deep learning to their content creation pipeline. AI is no longer just a research project but also a valuable technology that can accelerate labor-intensive tasks, giving time and control back to artists.

The webinar starts with a brief overview of deep learning and dives into examples of convolutional neural networks (CNNs), generative adversarial networks (GANs), and autoencoders. These examples include flavors of neural networks useful for everything from face swapping and image denoising to character locomotion, facial animation, and texture creation.
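
For a concrete (and heavily simplified) reference point on one of those flavors, below is a minimal convolutional denoising autoencoder in PyTorch; it is an illustrative sketch, not one of Digital Domain’s production tools.

```python
# Minimal convolutional denoising autoencoder (illustrative sketch only,
# not Digital Domain's toolset). Requires PyTorch.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the noisy image into a smaller latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct a clean image from the latent representation.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Toy training step: learn to map noisy frames back to clean ones.
model = DenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 3, 64, 64)                         # stand-in for real frames
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
loss = nn.functional.mse_loss(model(noisy), clean)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"one training step done, MSE loss = {loss.item():.4f}")
```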

By attending this webinar, you will:

  • Get a basic understanding of how deep learning works
  • Learn about research that can be applied to content creation
  • See examples of deep learning–based tools that improve artist efficiency
  • Hear about Digital Domain’s experience developing AI-based toolsets

Watch Now!

DOUG ROBLE
Senior Director of Software R&D, Digital Domain
RICK CHAMPAGNE
Global Media and Entertainment Strategy and Marketing, NVIDIA
RICK GRANDY
Senior Solutions Architect, Professional Visualization, NVIDIA
GARY BURNETT
Solutions Architect, Professional Visualization, NVIDIA