8K is the next step in the evolution of resolution, but as we saw with 4K, it’s also about HDR, wide colour gamut and higher frame rates. This video looks at the real-world motivations for using 8K and glimpses the work happening now to take imaging even further, into light fields and holography.
Broadcast has always been about capturing the best quality video, using that quality in processing and then delivering to the viewer. Initially, high fidelity was prized for improving green-screen/chromakey effects, and sharp, clean video is still important for any special effects or video processing. With 8K, though, you can deliver a single camera feed which can be cut up into two, three or more HD feeds that look like separate cameras. Pan-and-scan isn’t new, but it has far more flexibility when taken from an 8K raster. Perhaps the main ‘day 1’ benefit of 8K, however, is future-proofing – acquiring the highest-fidelity content for as-yet-unknown uses later down the line.
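As a rough sketch of the arithmetic (assuming the standard rasters of 7680×4320 for 8K and 1920×1080 for HD; the simple grid tiling here is purely illustrative), an 8K frame contains sixteen non-overlapping HD windows, so several ‘virtual camera’ crops can comfortably be cut from a single feed:

```python
# Illustrative sketch: how many non-overlapping HD crops fit in one 8K raster.
UHD_8K = (7680, 4320)   # 8K UHD resolution (width, height)
HD = (1920, 1080)       # full HD resolution

cols = UHD_8K[0] // HD[0]   # windows across
rows = UHD_8K[1] // HD[1]   # windows down
print(cols, rows, cols * rows)  # 4 4 16

# Top-left corners of each HD 'virtual camera' crop, row by row:
crops = [(x * HD[0], y * HD[1]) for y in range(rows) for x in range(cols)]
print(crops[:3])  # [(0, 0), (1920, 0), (3840, 0)]
```

In practice a pan-and-scan window can of course sit at any offset within the raster, not just on this grid; the point is simply that the 8K frame has sixteen HD frames’ worth of pixels to work with.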
Chris Chinnock of the 8K Association explains that 8K is in active use in Japan, both at the upcoming Olympics and on a permanent channel, BS8K, which transmits by satellite at 80Mb/s. Dealing with such massive bitrates, Chris explains, means 8K is hitting the same pain points 4K did seven years ago. For file-based workflows, he continues, these have largely been solved, though on the broadcast side challenges remain. The world of codecs has moved on a lot since then with the addition of LCEVC, VVC, EVC, AVS3 and others, which promise to help bring 8K distribution to the home down to a more manageable 25Mb/s or below.
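To put those bitrates in context, here is a back-of-the-envelope sketch of the compression ratios involved, assuming an uncompressed 8K feed of roughly 30Gbps (the figure Chris quotes later in the talk):

```python
# Back-of-the-envelope compression ratios for 8K delivery (illustrative only).
uncompressed = 30e9   # ~30 Gbps uncompressed 8K, per the figure quoted in the talk
bs8k = 80e6           # BS8K satellite channel bitrate
target = 25e6         # target home-distribution bitrate for newer codecs

print(f"satellite (80Mb/s): {uncompressed / bs8k:.0f}:1")   # 375:1
print(f"target   (25Mb/s): {uncompressed / target:.0f}:1")  # 1200:1
```

Squeezing 8K into 25Mb/s is roughly a 1200:1 reduction, which gives a sense of why the newer generation of codecs matters so much here.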
Originating 8K material is not hard, inasmuch as the cameras exist and the workflows are possible. Many high-budget films are being acquired at this resolution, but getting enough 8K material to fill a whole channel is not practical, so upscaling content to 8K is a must. Recent advances in machine-learning-based upscaling have dramatically improved the quality you can expect over traditional techniques.
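For a sense of what upscaling involves, here is a minimal sketch of the crudest traditional technique, nearest-neighbour interpolation, which simply repeats pixels (ML upscalers instead synthesise plausible detail). Note that 1080p to 8K is a clean 4× scale on each axis:

```python
def upscale_nearest(img, factor):
    """Nearest-neighbour upscale of a 2D list of pixel values.

    Each source pixel is repeated `factor` times horizontally and vertically.
    """
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]  # repeat horizontally
        out.extend([list(wide) for _ in range(factor)])  # repeat vertically
    return out

# A tiny 2x2 'image' upscaled by 2x in each dimension:
small = [[1, 2],
         [3, 4]]
big = upscale_nearest(small, 2)
# big == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]

# HD -> 8K is the same idea at factor 4 per axis (16x the pixel count):
print(7680 // 1920, 4320 // 1080)  # 4 4
```

Nearest-neighbour produces blocky results; the point of the ML approaches is that they fill in those 16 new pixels per source pixel with detail that looks genuinely acquired rather than duplicated.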
Finishing off on 8K, Chris points out that a typical 8K video format takes 30Gbps uncompressed, which is catered for easily by HDMI 2.1, DisplayPort 1.4a and Thunderbolt. 8K TVs are already available, and current investment in Chinese G10.5 fabs suggests that more 65″ and 75″ panels will be on the market.
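That 30Gbps figure can be sanity-checked with simple arithmetic; here is a sketch assuming 8K at 30fps with 10-bit RGB samples (different frame rates or chroma subsampling would of course change the number):

```python
# Sanity check of the ~30Gbps uncompressed figure for 8K video.
width, height = 7680, 4320   # 8K UHD raster
fps = 30                     # assumed frame rate
bits_per_sample = 10         # assumed 10-bit depth
samples_per_pixel = 3        # RGB, no chroma subsampling

bps = width * height * fps * bits_per_sample * samples_per_pixel
print(f"{bps / 1e9:.1f} Gbps")  # 29.9 Gbps
```

At around 29.9Gbps this lands just under the quoted 30Gbps; doubling the frame rate to 60fps would double it again, which is where the 48Gbps ceiling of HDMI 2.1 starts to matter.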
Changing topic, Chris looks at generating immersive content for light field or holographic displays. There are a number of ways to capture a real-life scene, but all of them involve many cameras and a lot of data. You can avoid the real world by using a games engine such as Unity or Unreal, but these have the same limitations as they do in computer games: they can look simultaneously amazing and unrealistic. Whatever you do, getting the data from A to B is a difficult task and a simple video encoder won’t cut it. There’s a lot of metadata involved in immersive experiences and, in the case of point clouds, there is no conventional video involved at all. This is why Chris is part of an MPEG group working on the future capabilities of MPEG-I, aiming to identify requirements for MPEG and other standards bodies, recommend distribution architectures and arrive at a standard representation for immersive media.
The Immersive Technology Media Format (ITMF) is a suggested container that can hold computer graphics, volumetric information, light field arrays and AI/computational photography. This feeds into a decoder that takes only what it needs out of the file/stream, depending on whether it’s driving a full holographic display or something simpler.
Chris finishes his presentation by explaining the current state of immersive displays, the different types available and who’s making them today.