Adaptive bitrate (ABR) is vital for effective delivery of video to the home, where bandwidth varies over time. It requires creating several renditions of your content at various bitrates, resolutions and even frame rates. These multiple encodes put a computational burden on the transcode stage.
Lowell Winger explains ways of optimising ABR encodes to reduce the computation needed to create these different versions, showing how encoding decisions made for one rendition can be reused in the others. One benefit is that decisions made on high-resolution versions, where there is plenty of detail to inform them, can be applied to low-resolution renditions where the same decisions would otherwise be made with far less information.
This talk is the type of deep dive into encoding techniques that you would expect from the Video Engineering Summit which happens at Streaming Media East.
Brightcove, an online video hosting platform with its own video player, has a lot of experience of delivery over CDNs. We saw yesterday the principles the player, and to an extent the server, can use to deal with a changing network (and, to an extent, changing client CPU load) by moving up and down the ABR ladder. However, this talk focuses on how the CDN in the middle complicates matters as it tries its best to get the right chunks to the right place at the right time.
How often are there ‘cache misses’ where the right file isn’t already in place? And how can you predict what’s necessary?
Yuriy even goes into detail about how to work out when HEVC deployment makes sense for you. After all, even if you do deploy HEVC, do you need to do it for all assets? And if you deploy it only for some assets, how do you know which? Also, when does it make sense to deploy CMAF? In this talk, we hear the answers.
Streaming on the net relies on delivering video at a bandwidth you can handle. Called ‘Adaptive Bitrate’ or ABR, it’s hardly possible to think of streaming without it. While the idea might seem simple initially – just send several versions of your video – it quickly gets nuanced.
Streaming experts Streamroot take us through how ABR works in this talk from Streaming Media East 2016. While the talk is a few years old, the fundamentals are unchanged, so it remains a useful talk which not only introduces the topic but goes into detail on how to implement ABR.
The most common streaming format is HLS, which relies on the player downloading the video in sections – small files – each representing around 3 to 10 seconds of video. For HLS and similar technologies, the idea is to allow the player, when it’s time to download the next part of the video, to choose from a selection of files, each with the same video content but at a different bitrate.
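To illustrate, a hypothetical HLS master playlist advertising three renditions of the same content might look like the following; the filenames, bitrates and resolutions are invented for the example:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
video_360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
video_720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
video_1080p.m3u8
```

Each entry points to a media playlist listing that rendition’s segment files, and the player picks a rendition based on the bandwidth it is currently seeing.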
Allowing a player to choose which chunk it downloads means it can adapt to changing network conditions, but it does imply that each file has to contain exactly the same frames of video, else there would be a jump when the next file is played. So we have met our first complication. Furthermore, each encoded stream needs to be segmented in the same way, and in MPEG, where you can only cut files on I-frame boundaries, this means the encoders need to synchronise their GOP structures, giving us our second complication.
These difficulties, many more besides, and Streamroot’s solutions are presented by Erica Beavers and Nikolay Rodionov, including experiments and proofs of concept they have carried out to demonstrate their efficacy.
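The per-chunk decision the player makes can be sketched in a few lines of Python. This is a minimal throughput-based heuristic with invented bitrates, not Streamroot’s algorithm:

```python
# Available renditions, in bits per second (example figures only).
RENDITIONS_BPS = [800_000, 2_500_000, 5_000_000]

def choose_rendition(measured_throughput_bps, safety_factor=0.8):
    """Pick the highest bitrate that fits within a safety margin of the
    throughput measured while downloading recent segments."""
    budget = measured_throughput_bps * safety_factor
    affordable = [b for b in RENDITIONS_BPS if b <= budget]
    # If even the lowest rendition exceeds the budget, take it anyway
    # and accept the risk of rebuffering.
    return max(affordable) if affordable else min(RENDITIONS_BPS)

print(choose_rendition(4_000_000))  # 3.2 Mbps budget -> picks 2.5 Mbps
print(choose_rendition(500_000))    # below every rendition -> lowest
```

Real players add smoothing of the throughput estimate and account for how full the playback buffer is, but the core idea is this simple comparison repeated before each segment download.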
Multicast ABR is a mix of two very beneficial technologies which are seldom seen together. ABR – Adaptive Bitrate – allows a player to change the bitrate of the video and audio that it’s playing to adapt to changing network conditions. Multicast is a network technology which efficiently sends a video stream over the network without duplicating bandwidth.
ABR has traditionally been deployed for chunk-based video like HLS, where each client downloads its own copy of the video in blocks several seconds in length. This means the bandwidth you use to distribute your video increases a thousandfold if 1,000 people play it.
Multicast works with live streams, not chunks, but allows the bandwidth used for 1,000 players to increase – in the best case – by 0%.
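The contrast is easy to make concrete with a little arithmetic, assuming an illustrative 5 Mbps stream:

```python
stream_bps = 5_000_000  # one 5 Mbps live stream (illustrative figure)
viewers = 1000

# Unicast/chunked delivery: every viewer pulls their own copy,
# so total bandwidth scales linearly with the audience.
unicast_total = stream_bps * viewers

# Multicast, best case: routers replicate packets only where network
# paths diverge, so the source still sends a single copy.
multicast_total = stream_bps

print(unicast_total // 1_000_000, "Mbps for unicast")    # 5000 Mbps
print(multicast_total // 1_000_000, "Mbps for multicast")  # 5 Mbps
```

In practice the multicast saving depends on how much of the network supports it, but the scaling argument is what makes the combination with ABR so attractive.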
Here, the panelists look at the benefits of combining multicast distribution of live video with techniques to allow it to change bitrate between different quality streams.
This type of live streaming is actually backwards compatible with old-style STBs. Since the video sent is a live transport stream, it’s possible to deliver it to a legacy STB using a converter in the home, at the same time as delivering a better, more modern stream to other TVs and devices.
It thus allows pure-streaming providers to compete with conventional cable broadcasters, and can result in cost savings both in the equipment provided and in the bandwidth used.
There’s lots to unpack here, which is why the Streaming Video Alliance have put together this panel of experts.
There are two ways to stream video online: either pushing from the server to the device, as with WebRTC, MPEG transport streams and similar technologies, or allowing the receiving device to request chunks of the stream, which is how the majority of internet streaming is done using HLS and similar formats.
Chunk-based streaming is generally seen as the more scalable of the two methods, but it suffers extra latency due to buffering several chunks, each of which can represent between 1 and, typically, 10 seconds of video.
CMAF is one technology aiming to change that by allowing players to buffer less video. How does it achieve this? And, perhaps more importantly, can it really cut costs? Iraj Sodagar from NexTreams is here to explain in this talk from Streaming Media West 2018.
A brief history of CMAF (Common Media Application Format)
The core technologies (ISO BMFF, Codecs, captions etc.)
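To see why buffering less video matters, consider the rough arithmetic of segment buffering. The durations below are illustrative, not figures from the talk:

```python
# Classic chunked streaming: a player that buffers three 6-second
# segments sits roughly 18 s behind live before playback starts.
segment_s = 6
buffered_segments = 3
classic_latency_s = segment_s * buffered_segments  # 18 s

# CMAF chunked encoding splits each segment into small chunks (say
# 0.5 s) that can be forwarded before the whole segment is complete,
# so the player need only buffer a few chunks.
chunk_s = 0.5
buffered_chunks = 3
cmaf_latency_s = chunk_s * buffered_chunks  # 1.5 s

print(classic_latency_s, "s vs", cmaf_latency_s, "s")
```

The segments themselves stay the same length for caching purposes; it is the ability to transfer and play them piecewise that shrinks the buffer.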
Thursday 27th September 2018, 19:00 BST / 11am PT / 2pm ET
Encoding and transcoding are at the heart of every video service and solution, and the codec and format landscape has never been more crowded. Publishers are wringing the most efficiency out of H.264 while making the move to HEVC/H.265 and AV1—and keeping an eye on other proprietary codecs. On top of all that are considerations like video optimization, bitrate ladders, and per-title encoding.
Join this expert panel as they discuss the latest in encoding and transcoding, including the following:
The state of the art in encoding efficiency in 2018
How per-title encoding and machine learning can increase quality and decrease delivery costs
How to build a flexible and cost-effective encoding solution
The latest developments in video encoding platforms and infrastructure
The benefits of contribution-to-distribution encoding and transcoding
The next big advances in encoding and transcoding, including AV1
Comparing AV1, VP9, HEVC and H.264 is quite a task, but Streaming Media’s Jan Ozer is here to take us through it. From MPEG royalties to VP9 browser compatibility, from the AV1 roadmap to HEVC-enabled HLS, this is a comprehensive look at real world usage of the top four codecs.
This is a key topic because many content distributors and aggregators still use H.264 as their primary, if not exclusive, codec, but the bandwidth savings promised by newer, more powerful codecs are alluring. Those considering a switch must evaluate at least three options: HEVC, VP9, and AV1.
In this session, codec specialist Jan Ozer evaluates the quality of these codecs and compares them to H.264. Learn how much bandwidth you can save with each, and how the newer codecs compare from quality and implementation perspectives.
In this on-demand video, Streaming Learning Center’s Jan Ozer explains objective metrics and how they can be used to build better ABR ladders.
Choosing the number of streams in an adaptive group and configuring them is usually a subjective, touchy-feely exercise, with no way to really gauge the effectiveness and efficiency of the streams. However, by measuring stream quality via metrics such as PSNR, SSIMplus, and VQM, you can precisely assess the quality delivered by each stream and its relevancy to the adaptive group.
This presentation identifies several key objective quality metrics, teaches how to apply them, and provides an objective framework for analyzing which streams are absolutely required in your adaptive group and their optimal configuration.
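As a taste of how such metrics work, PSNR is simple enough to compute directly. This sketch compares tiny made-up pixel lists rather than real decoded frames:

```python
import math

def psnr(reference, distorted, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel values; higher means closer to the reference."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_value ** 2 / mse)

# Toy "frames" of four pixels each; real use compares every pixel of
# the source frame against the decoded output of each ABR rendition.
ref = [50, 100, 150, 200]
enc = [52, 98, 149, 201]
print(round(psnr(ref, enc), 1), "dB")
```

Metrics like SSIMplus and VQM are far more involved, since they model human perception rather than raw pixel error, but they are applied to an ABR ladder in the same way: score each rendition against the source and see which rungs actually add quality.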
On demand webinar from AWS Elemental covering some streaming basics.
In this webcast, you will:
• Learn how to create and deliver video over the internet
• Understand video codecs, containers, popular delivery methods and content delivery networks
• Consider methods, including adaptive bitrate streaming, that enable high-quality video to be delivered to a wide range of internet-connected devices
• Learn about the latest trends in video compression and delivery
Increasing smartphone subscriptions and data volumes per subscription are driving rapid growth in mobile data traffic, much of which is video content. These trends are expected to continue: the Cisco Visual Networking Index predicts that by 2020, 75 percent of the world’s mobile data traffic will be video. Technical and business leaders at organizations that aim to expand offerings using video need to understand the complexities of delivering premium viewing experiences to consumers.
What latency can we expect from different online streaming technologies, both current and upcoming? How can we achieve low latency at scale? Jamie Sherry and Mike Talvensarri from Wowza take us through the status quo and answer questions from the audience at Streaming Media East.