Video: Bandwidth Prediction in Low-Latency Chunked Streaming

How can we overcome one of the last big problems in making CMAF generally available: making ABR work properly?

ABR, or Adaptive Bitrate, is a technique which allows a video player to choose which bitrate of video to download from a menu of several options. Typically, the highest bitrate offers the highest quality and/or resolution, while the smallest files are low resolution.

The reason a player needs the flexibility to choose the bitrate of the video is mainly changing network conditions. If someone else on your network starts watching some video, you may no longer be able to download video quickly enough to keep watching in full-quality HD and may need to switch down. If they stop, you want your player to switch up again to make the most of the bandwidth available.

Traditionally this is done fairly simply by measuring how long each chunk of the video takes to download. Simply put, if you download a file, it will come to you as quickly as the network can deliver it, so measuring how long each video chunk takes to arrive gives you an idea of how much bandwidth is available; if it arrives very slowly, you know you are close to running out of bandwidth. But in low-latency streaming, you are receiving video as quickly as it is produced, so it’s very hard to see any difference in download times, and this breaks the ABR estimation.
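To make that concrete, here’s a minimal sketch of the traditional, segment-timing approach; the bitrate ladder, safety margin and function names are illustrative rather than anything taken from the talk:

```python
# Minimal sketch of conventional throughput-based ABR.
# The rendition ladder and safety margin are illustrative values.

RENDITIONS_KBPS = [400, 1200, 2500, 5000]  # hypothetical bitrate menu
SAFETY = 0.8  # use only 80% of the estimate to absorb variance

def estimate_throughput_kbps(segment_bytes: int, download_seconds: float) -> float:
    """Classic estimate: assume the segment arrived as fast as the network allows."""
    return (segment_bytes * 8 / 1000) / download_seconds

def pick_rendition(throughput_kbps: float) -> int:
    """Choose the highest bitrate that fits under the safe estimate."""
    usable = throughput_kbps * SAFETY
    candidates = [r for r in RENDITIONS_KBPS if r <= usable]
    return max(candidates) if candidates else RENDITIONS_KBPS[0]

# In low-latency chunked streaming this assumption fails: a chunk that is
# streamed as it is encoded takes roughly its own duration to arrive, so
# download_seconds reflects the encoder's pace rather than the network's,
# and the estimate collapses towards the encoding bitrate.
```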

Making ABR work for low-latency streaming is the topic covered by Ali C. Begen in this talk at Mile High Video 2019, where he presents some of the findings from his recently published paper, co-authored with, among others, Bitmovin’s Christian Timmerer, which won the DASH-IF Excellence in DASH award.

He starts by explaining how players currently behave with low-latency ABR, showing how they miss out on switching to higher or lower renditions. Then he looks at the differences, both on the server and in the player, between non-low-latency and low-latency streams. This lays the foundation to discuss ACTE – ABR for Chunked Transfer Encoding.

ACTE is a method of analysing bandwidth built on the observation that some chunks will be delivered as fast as the network allows and some won’t be. The trick is detecting which chunks actually show the network speed, and Ali explains how this is done before showing the results of their evaluation.
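Going on the description above alone, the core filtering idea can be sketched like this; the threshold is an assumed illustrative value, and ACTE’s real detection and bandwidth prediction are more sophisticated, as the talk explains:

```python
# Sketch of the core idea: only chunks that arrive noticeably faster than
# real time reveal the network capacity. The 0.8 threshold is an assumed
# value for illustration, not a figure from the paper.

def is_network_limited(download_seconds: float, duration_seconds: float,
                       threshold: float = 0.8) -> bool:
    """A chunk delivered well inside its own duration was limited by the
    network, not by the encoder producing it in real time."""
    return download_seconds < threshold * duration_seconds

def usable_bandwidth_samples_kbps(chunks):
    """Keep only measurements that reflect network speed.
    `chunks` is an iterable of (bytes, download_seconds, duration_seconds)."""
    return [
        (size * 8 / 1000) / dl
        for size, dl, dur in chunks
        if is_network_limited(dl, dur)
    ]
```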

Watch now!

Speaker

Ali C. Begen
Technical Consultant and
Computer Science Professor

Video: What is Happening with IMF?

IMF is an interchange format designed for the versioning requirements of post-production and studios. It reduces the storage required for multi-version projects and also provides a standard way of exchanging metadata between companies.

Annie Chang briefly covers the history of IMF, showing what it set out to achieve. IMF has been standardised through SMPTE as ST 2067 and has gained traction within the industry, hence the continued interest in extending the standard. As with all modern standards, it has been created to be extensible, so Annie gives details on what is being added to it and where these endeavours have got to.

Watch now!

Speaker

Annie Chang
VP, Creative Technologies,
Universal Pictures

Video: An Overview of the ISO Base Media File Format

ISO BMFF is a standardised MPEG media container developed from Apple’s QuickTime and is the basis for cutting-edge low-latency streaming as much as it is for tried and trusted MP4 video files. Here we look into why we have it, what it’s used for and how it works.

ISO BMFF provides a structure to place around timed media streams whilst accommodating the metadata we need for professional workflows. Key to its continued utility is its extensible nature, which allows new capabilities, such as new codecs and metadata types, to be added as they are developed.
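At its heart, that structure is a sequence of ‘boxes’, each starting with a 32-bit big-endian size and a four-character type; a size of 1 means a 64-bit size follows, and a size of 0 means the box runs to the end of the file. Here’s a minimal sketch of walking a file’s top-level boxes (the filename is just an example):

```python
# Minimal sketch of iterating the top-level boxes in an ISO BMFF file.

import struct

def iter_boxes(path: str):
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:  # 64-bit "largesize" follows the type
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            yield box_type.decode("ascii", "replace"), size
            if size == 0:  # box extends to the end of the file
                return
            f.seek(size - header_len, 1)  # skip the payload

# A typical MP4 might print boxes such as ftyp, moov and mdat.
for box_type, size in iter_boxes("example.mp4"):
    print(box_type, size)
```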

ATSC 3.0’s streaming mechanism, MMT, is based on ISO BMFF, as is the low-latency streaming format CMAF, which shows that, despite being over 18 years old, the ISO BMFF container is still highly relevant.

Thomas Stockhammer is the Director of Technical Standards at Qualcomm. He explains the container format’s structure and origins before explaining why it’s ideal for CMAF’s low-latency streaming use case, finishing off with a look at immersive media in ISO BMFF.

Watch now!

Speaker

Thomas Stockhammer
Director Technical Standards,
Qualcomm

Video: Into the Depths: The Technical Details behind AV1

As we wait for the dust to settle on this NAB’s AV1 announcements, hearing who has added support for AV1 and what innovations have come because of it, we know that the feature set is frozen and that some companies will be using it. So here’s a chance to go into some of the detail.

AV1 is being created by the AOM, the Alliance for Open Media, of which Mozilla is a founding member. The IETF is considering it for standardisation under their NetVC working group and implementations have started. On The Broadcast Knowledge, we have seen explanations from Xiph.org, one of the original contributors to AV1. We’ve seen how it fares against HEVC with Ian Trow and how HDR can be incorporated in it from Google and Warwick University. For a complete list of all AV1 content, have a look here.

Now, we join Nathan Egge who talks us through many of the different tools within AV1, including one which often captures the imagination: AV1’s ability to remove film grain ahead of encoding and then add synthesised grain back in on playback. Nathan also looks ahead in the Q&A, talking about integration into RTP and WebRTC and why broadcasters would want to use AV1.
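To give a feel for the idea, here’s a toy sketch of the add-back step. This is not AV1’s actual grain synthesis, which the specification defines far more precisely (an autoregressive model with its parameters signalled in the bitstream); it simply illustrates regenerating deterministic pseudo-random grain and adding it to a decoded plane:

```python
# Toy sketch of film grain add-back: the encoder strips grain and sends a
# compact model; the decoder regenerates grain and adds it back on playback.
# This is a conceptual illustration, not AV1's actual grain model.

import numpy as np

def synthesise_grain(height: int, width: int, strength: float,
                     seed: int = 1234) -> np.ndarray:
    """Generate pseudo-random grain; a fixed seed keeps playback deterministic."""
    rng = np.random.default_rng(seed)
    grain = rng.standard_normal((height, width))
    # A touch of spatial correlation so the grain looks less like pure static.
    grain = (grain + np.roll(grain, 1, axis=0) + np.roll(grain, 1, axis=1)) / 3
    return strength * grain

def apply_grain(decoded_luma: np.ndarray, strength: float) -> np.ndarray:
    """Add synthesised grain to a decoded (denoised) 8-bit luma plane."""
    h, w = decoded_luma.shape
    noisy = decoded_luma.astype(np.float64) + synthesise_grain(h, w, strength)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```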

Watch now!

Speaker

Nathan Egge
Video Codec Engineer,
Mozilla