Low-latency streaming is always a compromise, but what can be done to keep QoE high?
This on-demand webinar looks at CMAF and presents some real-world data on this low-latency technique. It starts by explaining that CMAF, like HLS and other segmented streaming protocols, is built on delivering video as a series of small files. Olivier and Alain from Harmonic explain how this is done, look at the trade-offs and compromises needed, and introduce techniques to keep QoE high. They also look at deployment in the cloud vs. on-premises.
Pieter-Jan Speelmans talks about playback trade-offs and optimisations within the player. CMAF allows the player buffer to be reduced: a bad network may force the buffer back up to 'normal' levels, but on good networks it can be brought down significantly. He also talks about how ABR switching is impacted by GOP length, even in CMAF.
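The latency arithmetic behind this can be sketched as follows. This is a hypothetical illustration with made-up numbers, not Harmonic's or THEO's measured figures: with whole-segment delivery the player buffers complete segments before playing, while CMAF chunked delivery lets it start after only a chunk or two.

```python
# Hypothetical latency sketch: compare how much media a player must
# buffer with whole-segment delivery vs CMAF chunked delivery.
# All durations and buffer depths below are illustrative assumptions.

def startup_buffer(segment_s: float, chunks_per_segment: int,
                   buffered_units: int, chunked: bool) -> float:
    """Seconds of media the player waits for before starting playback."""
    unit = segment_s / chunks_per_segment if chunked else segment_s
    return unit * buffered_units

# Classic segmented delivery: 6 s segments, buffer three before playing.
classic = startup_buffer(6.0, chunks_per_segment=1, buffered_units=3,
                         chunked=False)

# CMAF low latency: the same 6 s segments split into 0.5 s chunks;
# on a good network, play after just two chunks.
cmaf = startup_buffer(6.0, chunks_per_segment=12, buffered_units=2,
                      chunked=True)

print(f"classic buffer: {classic:.1f} s, CMAF chunked buffer: {cmaf:.1f} s")
```

The same arithmetic shows the point about bad networks: if the player has to deepen its buffer back to several seconds of chunks, the latency advantage over classic delivery shrinks towards nothing.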
Viaccess-Orca explains how DRM works with CMAF and looks at some of the challenges, including licence acquisition times and the risk of overloading licence servers at the beginning of events. Akamai's Will Law explains some benefits of CMAF, the near-real-time delivery enabled by chunked transfer (HTTP 1.1 chunked transfer encoding), and how downloading chunks at full speed leads to problems when the same broadband link is shared by several clients.
There are lots of good talks on CMAF, but this is one of the few which discusses CMAF not as theory, but as something deployable today.
Thursday, February 7th, 10am PST / 1pm EST / 18:00 GMT. Now available on-demand!
There is so much talk about HDR, wide colour gamut (WCG) and 'Better Pixels', and so many TVs now seem to interpolate motion up to 100Hz or above, that it's good to stop and check we know why all of this matters and, crucially, when it doesn't.
SMPTE's new 'Essential Technology Concepts Webcasts' are here to help. For the first webcast, David Long will look at the fundamentals of colour, contrast and motion in terms of what we actually see.
This promises to be a great talk and, the chances are, even people who ‘know it already’ will be reminded of a thing or two!
Experts from Amazon Web Services and AWS Elemental highlight the requirements for creating a frame-accurate live-to-VOD workflow with image recognition using AWS Elemental Delta and AWS cloud services. CTOs, engineering managers, and product and program managers will gain knowledge they can apply to their own video workflows.
In this webinar, visual effects and digital production company Digital Domain will share their experience developing AI-based toolsets for applying deep learning to their content creation pipeline. AI is no longer just a research project but also a valuable technology that can accelerate labor-intensive tasks, giving time and control back to artists.
The webinar starts with a brief overview of deep learning and dives into examples of convolutional neural networks (CNNs), generative adversarial networks (GANs), and autoencoders. These examples include flavors of neural networks useful for everything from face swapping and image denoising to character locomotion, facial animation, and texture creation.
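To make the autoencoder idea concrete, here is a minimal NumPy sketch, a toy illustration under my own assumptions, not Digital Domain's pipeline: a linear encoder compresses 8-D samples to a 2-D code, a decoder reconstructs them, and gradient descent on the reconstruction error recovers the low-dimensional structure in the data.

```python
# Toy linear autoencoder in NumPy (illustrative only): data that truly
# lives on a 2-D subspace of 8-D space is compressed and reconstructed.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 8)) * 0.5

W_enc = rng.normal(size=(8, 2)) * 0.1   # encoder weights (8-D -> 2-D code)
W_dec = rng.normal(size=(2, 8)) * 0.1   # decoder weights (2-D code -> 8-D)
lr = 0.05

losses = []
for _ in range(500):
    H = X @ W_enc                 # encode
    X_hat = H @ W_dec             # decode / reconstruct
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    grad_out = 2 * err / err.size            # d(MSE)/d(X_hat)
    grad_dec = H.T @ grad_out                # backprop into decoder
    grad_enc = X.T @ (grad_out @ W_dec.T)    # backprop into encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"reconstruction MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Real denoising autoencoders of the kind the webinar covers are deep and convolutional rather than linear, but the principle is the same: the network is forced through a bottleneck and trained to minimise reconstruction error.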
By attending this webinar, you will:
Get a basic understanding of how deep learning works
Learn about research that can be applied to content creation
See examples of deep learning–based tools that improve artist efficiency
Hear about Digital Domain’s experience developing AI-based toolsets