IMF is an interchange format designed for the versioning requirements of post-production and studios. It reduces the storage required for multi-version projects and also provides a standard way of exchanging metadata between companies.
Annie Chang briefly covers the history of IMF, showing what it set out to achieve. IMF has been standardised through SMPTE as ST 2067 and has gained traction within the industry, hence the continued interest in extending the standard. As with many modern standards, it was created to be extensible, so Annie details what is being added to it and how far these endeavours have progressed.
ISO BMFF is a standardised MPEG media container developed from Apple's QuickTime format, and it is the basis for cutting-edge low-latency streaming as much as it is for tried-and-trusted MP4 video files. Here we look at why we have it, what it's used for and how it works.
ISO BMFF provides a structure to place around timed media streams whilst accommodating the metadata we need for professional workflows. Key to its continued utility is its extensible nature, allowing new capabilities, such as support for additional codecs and metadata types, to be added as they are developed.
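That structure is built from nested "boxes", each starting with a 4-byte big-endian size and a 4-byte ASCII type code. As a rough illustration, here is a minimal sketch of a top-level box walker in Python; the hand-built `ftyp`/`moov` bytes are illustrative, not a complete file, and 64-bit and to-end-of-file sizes are deliberately skipped.

```python
import struct

def parse_boxes(data: bytes):
    """Walk the top-level boxes of an ISO BMFF byte stream.

    Each box header is a 4-byte big-endian size (covering the whole box,
    header included) followed by a 4-byte ASCII type code.
    """
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:
            # size 0 (to end of file) and size 1 (64-bit size follows)
            # are not handled in this sketch
            break
        boxes.append((box_type.decode("ascii"), size))
        offset += size
    return boxes

# A minimal hand-built stream: an 'ftyp' box followed by an empty 'moov' box.
ftyp = struct.pack(">I4s4sI4s", 20, b"ftyp", b"isom", 0, b"isom")
moov = struct.pack(">I4s", 8, b"moov")
print(parse_boxes(ftyp + moov))  # [('ftyp', 20), ('moov', 8)]
```

A real parser would recurse into container boxes such as `moov` and `trak`, which hold their children as further boxes in the same format, and that uniform framing is exactly what makes the container extensible: an unknown box type can simply be skipped by its declared size.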
ATSC 3.0's streaming mechanism, MMT, is based on ISO BMFF, as is the low-latency streaming format CMAF, which shows that despite being over 18 years old, the ISO BMFF container is still highly relevant.
Thomas Stockhammer is the Director of Technical Standards at Qualcomm. He explains the container format's structure and origin, then why it's ideal for CMAF's low-latency streaming use case, finishing with a look at immersive media in ISO BMFF.
As we wait for the dust to settle on this NAB's AV1 announcements, hearing who has added support for AV1 and what innovations have come from it, we know that the feature set is frozen and that some companies will be using it. So here's a chance to go into some of the detail.
Now, we join Nathan Egge, who talks us through many of the different tools within AV1, including one which often captures the imagination: AV1's ability to remove film grain ahead of encoding and then add synthesised grain back in on playback. In the Q&A, Nathan also looks ahead to integration into RTP and WebRTC, and to why broadcasters would want to use AV1.
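The principle behind that tool is worth sketching: grain is expensive to encode because it is noise, so the encoder denoises the frame, signals the grain's statistics in the bitstream, and the decoder re-applies freshly synthesised grain. The toy Python sketch below illustrates only that principle; AV1 itself uses an autoregressive grain model and piecewise scaling functions, not the plain standard-deviation match and mean-filter denoiser assumed here.

```python
import numpy as np

def mean_filter3(frame: np.ndarray) -> np.ndarray:
    """Crude 3x3 mean filter, standing in for a real denoiser."""
    padded = np.pad(frame.astype(np.float64), 1, mode="edge")
    out = np.zeros(frame.shape, dtype=np.float64)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / 9.0

rng = np.random.default_rng(1)
clean = np.full((64, 64), 128.0)
grainy = clean + rng.normal(0.0, 8.0, clean.shape)  # simulated film grain

# Encoder side: denoise the frame, encode the clean version,
# and signal only the grain's statistics (here, its std dev).
denoised = mean_filter3(grainy)
grain_std = float(np.std(grainy - denoised))

# Decoder side: synthesise fresh grain with the signalled statistics.
synthesized = denoised + rng.normal(0.0, grain_std, denoised.shape)
```

The decoder's grain is statistically similar but not pixel-identical to the original, which is fine perceptually and saves the bits that would otherwise be spent coding random noise exactly.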
Zhou Wang explains how to compare HEVC and AVC with AV1 and shares his findings. Using metrics such as VMAF, PSNR and SSIMPlus, he explores the effects of resolution on bitrate savings and then turns his gaze to computational complexity.
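Of those metrics, PSNR is the simplest to compute, so a short sketch helps ground what these comparisons measure. The images below are synthetic stand-ins, not data from the talk: PSNR is just 10·log10(peak²/MSE) between a reference and a distorted frame.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-5, 6, size=ref.shape)
noisy = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"{psnr(ref, noisy):.1f} dB")
```

VMAF and SSIMPlus are far more elaborate, modelling human perception rather than raw pixel error, which is why codec comparisons like this one report several metrics side by side.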
This talk was given at the Mile High Video conference in Denver, CO, in 2018.