Video: The Status of 8K and Light Field / Holographic Development

8K is the next step in the evolution of resolution, but as we saw with 4K, it’s about HDR, wide colour gamut and higher frame rates, too. This video looks at the real-world motivations for using 8K and glimpses the work happening now to take imaging even further, into light fields and holography.

Broadcast has always been about capturing the best-quality video, exploiting that quality in processing and then delivering the result to the viewer. Initially, the extra fidelity was used to improve green-screen/chromakey effects, and sharp, clean video is still important for any special effects or video processing. But with 8K you can deliver a single camera feed which could be cut up into two, three or more HD feeds that look like separate cameras. Pan-and-scan isn’t new, but an 8K raster gives it far more room to work in, as the sketch below shows. Perhaps the main ‘day one’ benefit of 8K, though, is future-proofing: acquiring the highest-fidelity content for as-yet-unknown uses further down the line.
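
To make the pan-and-scan idea concrete, here is a minimal sketch of cutting ‘virtual camera’ HD windows out of a single 8K frame. The frame source, crop positions and numpy-based approach are illustrative assumptions rather than anything shown in the talk; a real system would track subjects and smooth the window motion before encoding each feed.

```python
import numpy as np

HD_W, HD_H = 1920, 1080          # each virtual camera is a plain HD raster
EIGHT_K = (4320, 7680, 3)        # rows, cols, channels of the 8K source frame

def cut_virtual_cameras(frame: np.ndarray, windows):
    """Return one HD crop per (x, y) top-left corner in `windows`.

    `frame` is a full 8K image; each crop is a cheap numpy view,
    so no pixels are copied until the feed is encoded.
    """
    crops = []
    for x, y in windows:
        assert 0 <= x <= frame.shape[1] - HD_W, "window falls off the raster"
        assert 0 <= y <= frame.shape[0] - HD_H, "window falls off the raster"
        crops.append(frame[y:y + HD_H, x:x + HD_W])
    return crops

# Hypothetical example: a 'wide' window plus two offset windows from one feed.
frame = np.zeros(EIGHT_K, dtype=np.uint8)      # stand-in for a captured frame
feeds = cut_virtual_cameras(frame, [(0, 0), (2880, 1620), (5760, 3240)])
print([c.shape for c in feeds])                # three (1080, 1920, 3) rasters
```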

Chris Chinnock from the 8K Association explains that 8K is in active use in Japan, both at the upcoming Olympics and on a permanent channel, BS8K, which transmits on satellite at 80Mb/s. Dealing with such massive bitrates, Chris explains, means 8K is hitting the same pain points that 4K did seven years ago. For file-based workflows, he continues, these have largely been solved, though on the broadcast side challenges remain. The world of codecs has moved on a lot since then with the addition of LCEVC, VVC, EVC, AVS3 and others, which promise to help bring 8K distribution to the home down to a more manageable 25Mb/s or below.

Originating 8K material is not hard, inasmuch as the cameras exist and the workflows are possible. Many high-budget films are being acquired at this resolution, but the fact is that getting enough 8K for a whole channel is not practical, so upscaling content to 8K is a must. Recent advances in machine-learning-based upscaling have revolutionised the quality you can expect over traditional techniques.
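
As a flavour of the difference, the sketch below compares a traditional bicubic upscale with a pre-trained super-resolution network, using OpenCV’s dnn_superres module from opencv-contrib-python. The input filename and the ESPCN model file are assumptions; the pre-trained .pb models must be downloaded separately, and broadcast-grade ML upscalers are far more sophisticated than this.

```python
import cv2

img = cv2.imread("source_hd_frame.png")       # hypothetical HD input frame

# Traditional route: plain bicubic interpolation, 4x in each dimension.
bicubic = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

# ML route: a pre-trained super-resolution network via dnn_superres.
# "ESPCN_x4.pb" is a separately downloaded model, not bundled with OpenCV.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")
sr.setModel("espcn", 4)                       # model name and scale factor
ml_upscaled = sr.upsample(img)

cv2.imwrite("bicubic.png", bicubic)
cv2.imwrite("ml_upscaled.png", ml_upscaled)
```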

Finishing off on 8K, Chris points out that a typical 8K video format takes around 30Gbps uncompressed, which is catered for easily by HDMI 2.1, DisplayPort 1.4a and Thunderbolt. 8K TVs are already available, and current investment in Chinese G10.5 fabs shows that more 65″ and 75″ panels will be coming to market.
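
As a quick sanity check on that 30Gbps figure, uncompressed bitrate is simply width × height × frame rate × bits per pixel. A minimal calculation, assuming 8K at 60fps with 10-bit 4:2:0 sampling (one plausible reading of ‘a typical 8K format’):

```python
# Uncompressed bitrate = width * height * fps * bits_per_pixel.
# 4:2:0 chroma subsampling at 10 bits/sample averages 15 bits per pixel
# (10 for luma plus 10/4 for each of the two quarter-resolution chroma planes).
width, height, fps = 7680, 4320, 60
bits_per_pixel = 10 + 10 / 4 + 10 / 4         # = 15.0 for 10-bit 4:2:0

bitrate_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"{bitrate_gbps:.1f} Gbps")             # ~29.9 Gbps, i.e. roughly 30Gbps
```

Set against BS8K’s 80Mb/s satellite feed, that works out to a compression ratio in the region of 370:1.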

Changing topic, Chris looks at generating immersive content for either light field or holographic displays. There are a number of ways to capture a real-life scene, but all of them involve many cameras and a lot of data. You can avoid the real world altogether by using a games engine such as Unity or Unreal, but these have the same limitations as they do in computer games: they can look simultaneously amazing and unrealistic. Whichever route you take, getting the data from A to B is a difficult task, and a simple video encoder won’t cut it. There’s a lot of metadata involved in immersive experiences and, in the case of point clouds, there is no conventional video involved at all. This is why Chris is part of an MPEG group working on the future capabilities of MPEG-I, aiming to identify requirements for MPEG and other standards bodies, recommend distribution architectures and arrive at a standard representation for immersive media.

The ITMF, or Immersive Technology Media Format, is a suggested container that can hold computer graphics, volumetric information, light field arrays and AI/computational photography. This feeds a decoder that takes only what it needs out of the file or stream, depending on whether it’s driving a full holographic display or something simpler.
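
The talk doesn’t detail ITMF’s internal schema, so purely to illustrate the ‘decoder takes only what it needs’ idea, here is a hypothetical sketch in which a decoder filters a container’s tracks by display capability. Every name in it is invented.

```python
from dataclasses import dataclass

# Hypothetical track types a container like ITMF might carry; the real
# ITMF schema is not described in the talk, so these names are invented.
@dataclass
class Track:
    kind: str        # "cg", "volumetric", "light_field", "computational"
    payload: bytes

# What each class of display can actually make use of (illustrative only).
CAPABILITIES = {
    "holographic": {"cg", "volumetric", "light_field", "computational"},
    "light_field": {"cg", "light_field"},
    "flat_2d":     {"cg"},
}

def select_tracks(tracks: list[Track], display: str) -> list[Track]:
    """Pull only the tracks this display can render, ignoring the rest."""
    wanted = CAPABILITIES[display]
    return [t for t in tracks if t.kind in wanted]

container = [Track("cg", b""), Track("light_field", b""), Track("volumetric", b"")]
print([t.kind for t in select_tracks(container, "light_field")])
# -> ['cg', 'light_field']: the volumetric track is simply never fetched.
```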

Chris finishes his presentation explaining the current state of immersive displays, the different types and who’s making them today.

Watch now!
Speakers

Chris Chinnock
Executive Director,
8K Association

Video: Workflow Evolution Within the CDN

The pandemic has shone a light on CDNs, as they are the backbone of much of what we do with video for streaming and broadcast. CDNs aim to scale up in a fast, sophisticated way so you don’t have to do that engineering yourself. This panel from the Content Delivery Summit sees Dom Robinson bringing together Jim Hall from Fastly with Akamai’s Peter Chave, Ted Middleton from Amazon and Paul Tweedy from BBC Design + Engineering.

The panel discusses the fact that although much video-conferencing traffic is WebRTC, which CDNs don’t carry, there are a lot of API calls that are handled by the CDN; in fact, over 300 trillion API calls were made to Amazon last year. Zoom and other solutions do have an HLS streaming option that has been used and can benefit from CDN scaling. Dom asks whether people’s expectations have changed during the pandemic, and then we hear from Paul as he talks a little about the BBC’s response to Covid.

The CTA’s Common Media Client Data (CMCD) standard, also known as CTA-5004, is a way for a video player to pass information back to the CDN. This is powerful: it can provide highly granular real-time reports for customers, and it enables hints to be handed back from players so that CDNs can pre-fetch content that is likely to be needed. Furthermore, having a standard for logging will be a boon for customers who are multi-CDN and need a way to match logs and analyse their system in its entirety. This work is also being extended, under a separate specification, to look upstream in a CDN workflow and understand the status of other systems such as edge servers.
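
As a flavour of what CMCD looks like on the wire, the sketch below serialises a handful of reserved keys into the ‘CMCD’ query parameter a player might attach to a segment request (the spec also allows dedicated HTTP request headers). The keys used (bl, cid, mtp, nor, sid, su) are defined in CTA-5004; the values and URL are made up.

```python
from urllib.parse import quote

# CMCD reserved keys used here (all defined in CTA-5004):
#   sid  session id          cid  content id
#   bl   buffer length (ms)  mtp  measured throughput (kbps)
#   nor  next object request su   startup flag (sent bare when true)
cmcd = {
    "sid": '"6e2fb550-c457-11e9-bb97-0800200c9a66"',   # string values quoted
    "cid": '"bbb-1080p"',
    "bl": 21300,
    "mtp": 25400,
    "nor": '"seg-0043.m4s"',
    "su": True,
}

# Serialise: comma-separated key=value pairs in key order,
# with true booleans sent as a bare key.
payload = ",".join(k if v is True else f"{k}={v}" for k, v in sorted(cmcd.items()))
url = f"https://cdn.example.com/seg-0042.m4s?CMCD={quote(payload)}"
print(url)
```

Because every player attaches the same keys in the same format, a multi-CDN customer can line up logs from different vendors by session id.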

The panel touches on custom-made analytics, low-latency streaming such as Apple’s LL-HLS and why it hasn’t yet been adopted, current attempts in the wild to bring HLS latency down, edge computing and piracy.

Watch now!
Speakers

Peter Chave
Principal Architect,
Akamai Technologies
Paul Tweedy
Lead Architect, Online Technology Group,
BBC Design + Engineering
Ted Middleton
Global Leader – Specialized Solution Architects, Edge Services,
Amazon
Jim Hall
Principal Sales Engineer,
Fastly
Moderator: Dom Robinson
Director and Creative Firestarter, id3as
Contributing Editor, StreamingMedia.com, UK

Video: A Frank Discussion of NMOS

What NMOS isn’t is almost as important as what NMOS actually is when it comes to defining a new project implementing SMPTE ST 2110. Written by AMWA, NMOS is a suite of open specifications which help control media flows, hence the name: Networked Media Open Specifications. Typically, NMOS specifications are used alongside the ST 2110 standards, but in this hype-free panel we hear that 2110 isn’t their only application.

AMWA Executive Director Brad Gilmer introduces this ‘frank’ panel with Imagine’s John Mailhot explaining the two meanings ‘NMOS’ has. The first is the name of the project we have just introduced in this article. The second is as shorthand for the two best-known specifications created by the project, IS-04 and IS-05. Together, these allow new devices to register their availability to the rest of the system and to receive instructions regarding sending media streams. There are plenty of other specifications explained in this talk, two of which come up later in the video: IS-08, which manages audio channel mapping, and IS-09, which allows new devices to fetch a global configuration and automatically discover facts such as their PTP domain.
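
To ground IS-05 in something concrete, here is a rough sketch of the single PATCH a controller can make to stage and immediately activate a sender. The node address and sender UUID are invented and real transport parameters are omitted; the endpoint path and body fields follow the IS-05 Connection API.

```python
import json
import urllib.request

# IS-05 Connection API: stage-and-activate a sender in one PATCH.
# The host, port and sender UUID below are made up for illustration.
NODE = "http://node.example.com:80"
SENDER = "c72cca5b-7a39-4f0e-9d2f-3f0d30ed9b76"

patch = {
    "master_enable": True,                        # start the sender...
    "activation": {"mode": "activate_immediate"}  # ...right now, not scheduled
}

req = urllib.request.Request(
    f"{NODE}/x-nmos/connection/v1.0/single/senders/{SENDER}/staged",
    data=json.dumps(patch).encode(),
    headers={"Content-Type": "application/json"},
    method="PATCH",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["activation"])          # echoes the applied activation
```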

 

 

Security is “important and missing previously,” says Jed Deame from Nextera, but he explains that since NMOS is predominantly a specification for HTTP API calls, there is nothing to stop the control traffic being carried over HTTPS or another protocol, as long as it provides both encryption and authorisation. The panel then explores the limits of NMOS’s scope. For security, that scope is securing the NMOS control traffic, so it doesn’t stretch to securing the media transport or, say, PTP. More broadly, it’s important to remember that NMOS defines control and no more than control. Brad says, though, that even this scope is ambiguous: where does the concept of ‘control’ stop? Is a business management system control? What about scheduling of media? Triggering playback? There have to be limits.
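
Following that thread, the same IS-05 call can be carried over TLS with an OAuth2 bearer token, which is the direction AMWA’s BCP-003-01 (secure communications) and IS-10 (authorization) point in. A minimal sketch, with placeholder host and token:

```python
import json
import urllib.request

# The same IS-05 PATCH as before, but over HTTPS with a bearer token.
# Host, sender UUID and token are placeholders for illustration.
req = urllib.request.Request(
    "https://node.example.com/x-nmos/connection/v1.0/single/"
    "senders/c72cca5b-7a39-4f0e-9d2f-3f0d30ed9b76/staged",
    data=json.dumps({"master_enable": True,
                     "activation": {"mode": "activate_immediate"}}).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <access-token>"},   # token from an IS-10 server
    method="PATCH",
)
urllib.request.urlopen(req)   # urllib verifies the server certificate by default
```

Note the scope: this secures the control call itself, not the 2110 media essence or PTP, exactly as the panel describes.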

Imagine Communications’ John Mailhot explores the idea of control, asking how much automation, and hence NMOS-style control, can help realise one of the promises of IP: reducing costs by speeding up system changes. Previously, Brad and John explain, changing a studio from doing NFL to doing NHL could take up to a month of rewiring and reprogramming. Now that rewiring can be done in software, John contends that the main task is to make sure NMOS is fully fledged enough to allow interoperable enumeration, configuration and programming of links within the system. The current specifications are being reinforced by ‘modelling’ work whereby the internal logical blocks of a piece of equipment, say an RGB gain control, can be advertised to the network individually rather than simply advertising a single ‘black box’ like an encoder. Now it’s possible to expose what pre- and post-processing is available.

Another important topic, explored by NVIDIA’s Richard Hastie and Jeremy Nightingale from Macnica, is the use of NMOS specifications outside ST 2110 installations. Richard says that NVIDIA is using NMOS in over 200 different locations, and he emphasises its use for media whether that be HEVC, AV1 or 2110. As such, he envisages it being used by ‘Twitch streamers’, no doubt with the help of the ongoing 2110-over-WAN work to find ways to expose NMOS information over public networks. Jeremy’s interest is in IPMX for ProAV, where ‘plug and play’ and security are two of the main features being designed into the package.

Lastly, there’s a call-out to the tools available. Since NMOS is an open specification project, the tools are released as open source, with companies encouraged to use the codebase in products or for testing. Not only is there a reference client, but Sony and the BBC have released an NMOS testing tool, and EasyNMOS provides a containerised implementation of IS-04 and IS-05 for extremely quick deployments of the toolset.

Watch now!
Speakers

Brad Gilmer
Executive Director, Video Services Forum
Executive Director, Advanced Media Workflow Association (AMWA)
John Mailhot
CTO, Networking & Infrastructure,
Imagine Communications
Jed Deame
CEO,
Nextera Video
Richard Hastie
Senior Sales Director,
NVIDIA
Jeremy Nightingale
President,
Macnica Americas, Inc.

Video: Live Media Production – The Ultimate End Game

A lot of our time on this website is devoted to understanding the changes we are going through now, but we don’t adopt technology for its own sake. Where is this leading, and what work is going on now to forge our path? Whilst SMPTE ST 2110 and the associated specifications aren’t yet a mature technology in the sense that SDI is, we’re past the early-adopter phase and we can see which of the industry’s needs aren’t yet met.

Andy Rayner from Nevion is here to help us navigate the current technology space and understand the future he and Nevion envision. The beginning of the video shows the big change from the workflows of the 90s, when the TV station moved to the sports event, to now, when we bring the event to the broadcaster: a light connectivity truck turns up and deploys cameras at the venue, leaving most people either at home or back at base doing the production there. Andy has been involved in a number of implementations enabling this, such as at Discovery’s Eurosport, where the media processing is done in two locations separate from the production rooms around Europe.

Generalising from the Discovery case study, Andy shows a vision of how many companies will evolve their workflows, using 5G and public and private clouds as appropriate, with control surfaces even in people’s homes. To get there, Andy lays out the work within AMWA and SMPTE creating the specifications and standards we need. He then shows how, with the increasing use of IT in live production, the already IT-based NLE workflows are able to integrate much better.

Looking to the future, Andy explains the ongoing work to specify a standard way of getting video into and out of the cloud, including specifying a way of carrying 2110 over the WAN, helping RIST and formalising the use of JPEG XS. Andy anticipates a more standardised future where a best-of-breed system is possible down to individual logical components: functions like ‘video keyer’ and ‘logo insertion’ could be provided by separate software yet integrate seamlessly. Lastly, Andy promises us that work is underway to improve timing within 2110 and 2110-associated workflows.

Watch now!
Speaker

Andy Rayner
Chief Technologist,
Nevion