Video: Digital Storage During a Pandemic for Media and Entertainment

The pandemic has had two effects on storage demand. By reducing the amount of new content created, it lessened demand in the short term, but by driving people to remote workflows it has significantly increased the long-term forecast for storage demand. This SMPTE San Francisco Section meeting explores all aspects of that demand, from on-site to cloud storage, and the mix between HDDs, solid-state storage and even persistent memory.

Tom Coughlin’s talk starts 16 minutes into the video with a look at global storage demand, which peaked at 79 exabytes in 2020, 50-100% higher than in 2019. Tom next outlines the features of storage technologies ranging from hard drives through SAS and NVMe up to memory-channel storage, leading to two graphics which show how faster memory costs more per gigabyte and how, unfortunately, storage capacity increases as access speed decreases. As such, Tom concludes, bulk storage is still dominated by hard drives, which are still advancing, with HDD capacities of 50TB forecast for 2026.

Tom talks about NVMe-based storage being the future and discusses chips as small as 16mm x 20mm. He also discusses NVMe over Fabrics, where NVMe is used as a protocol in a networking context to allow low-latency access to storage over network interfaces, whether Ethernet, InfiniBand or others.
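To make the NVMe-oF idea concrete, here is a minimal sketch of attaching remote NVMe storage over TCP with the standard Linux nvme-cli tool. The IP address, port and NQN below are made-up illustrative values, not anything from the talk:

```shell
# Discover NVMe subsystems exported by a remote NVMe/TCP target
# (192.168.0.10, port 4420 and the NQN are hypothetical example values).
nvme discover -t tcp -a 192.168.0.10 -s 4420

# Connect to one of the discovered subsystems; its namespace then
# appears locally as a block device such as /dev/nvme1n1.
nvme connect -t tcp -a 192.168.0.10 -s 4420 \
     -n nqn.2020-01.com.example:storage-array

# Confirm the remote namespace is now visible alongside local drives
nvme list
```

Once connected, the remote namespace behaves like a local block device, which is what makes the low-latency networked access Tom describes possible.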

The next innovation discussed is the merging of computation with storage. To keep computational speeds increasing, and in part to address power concerns, there has recently been a rise in task-specific chips that offload important tasks from CPUs, since CPUs are no longer increasing in raw processing power at the rate they used to. This is part of the reason ‘Computational Storage’ was born, with FPGAs on the storage itself available to do specific processing on data before it’s handed off to the computer. Tom takes us through the meanings of Computational Storage Drives, Computational Storage Processors and Computational Storage Arrays.

The next topic for Tom is the drivers behind increased storage requirements in broadcast for the future. We’re already moving to UHD with a view to onboarding 8K, and Tom points to a 16K proof of concept showing there’s plenty of scope for higher-bitrate feeds. Shooting ratios remain high, partly because of reality TV, but whatever the reason, this drives storage need. However, a bigger factor is the number of cameras. With multi-camera video, 3D video, free-viewpoint video (where a stadium is covered in cameras, allowing you to choose, and interpolate, your own shot) and volumetric video, which can easily reach 17Gb/s, there are many reasons for storage demands to increase.
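To put that 17Gb/s volumetric figure in context, a quick back-of-the-envelope calculation shows what such a feed does to storage requirements (the rate is from the talk; the one-hour duration is just an illustration):

```python
# Rough storage estimate for a 17 Gb/s volumetric video feed.
GBITS_PER_SEC = 17            # rate quoted in the talk
BYTES_PER_GBIT = 1e9 / 8      # 1 gigabit = 0.125 gigabytes
SECONDS_PER_HOUR = 3600

bytes_per_hour = GBITS_PER_SEC * BYTES_PER_GBIT * SECONDS_PER_HOUR
terabytes_per_hour = bytes_per_hour / 1e12
print(f"{terabytes_per_hour:.2f} TB per hour")  # 7.65 TB per hour
```

At well over 7TB per hour for a single feed, before any multi-camera multiplication, it is easy to see why the storage forecast has risen so sharply.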

Tom talks about the motivations for cloud storage and the use cases where moving to the cloud works. For instance, often it’s for data that only ever needs to go one way, to the cloud, i.e. for delivery to the consumer. Cloud rendering is another popular upload-heavy use of the cloud, as is keeping disaster recovery copies of data. Cloud workflows have also become popular for dealing with peaks. Generally known as hybrid operation, this allows most processing to be done on-premise with lower latency and flat costs; when the facility needs more capacity than it can provide, it can ‘burst’ up to the cloud.

The talk concludes with a look at storage share, both for the tape market and the HDD/solid-state market, leading on to an extensive Q&A and discussion including input from MovieLabs’ Jim Helman.

Watch now!

Tom Coughlin
Coughlin Associates

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, which give an introduction to a large number of topics, are far too rare. For those starting out, or those who need to revise a topic, this really hits the mark, particularly as many of the topics are new.

John Mailhot takes the lead on SMPTE 2110 explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite. He talks in more detail about the audio and metadata parts of the standard suite.

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use, such as BT.709, BT.2100 and P3, before turning to file formats. With the advent of HDR video and displays which can show very bright video, Eric takes some time to explain why this could represent a problem for visual health, as we don’t fully understand how these displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity for improving workflows and adding more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measures sound output (SPL) from speakers and looks at the interesting problem of forward speakers in cinemas. They have long been placed behind the screen, which has meant the screens have to be perforated to let the sound through, which interferes with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move. But with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group

Webinar: AI Enable Your Aging LTO-5/6 Archives

Date: 29th Nov 2018, 16:00 or 22:00 GMT

Media production is at an all-time high, with no signs of slowing down. By 2020, storage capacity requirements are predicted to increase by over 300%.

This explosive growth has facilities facing mounting issues that are sure to impact their media assets across every storage tier, especially when a significant percentage of their data currently lives on aging LTO-5/6 tapes.

Join us on Thursday, November 29th to learn how to prepare your organization for the future by leveraging the power of AI combined with the latest LTO and disk-based storage technology.

In this webinar, you’ll learn:

• Smart migration strategies for moving from LTO-5/6 to LTO-7/8
• How to use AI to eliminate countless hours of metadata logging and create a searchable database of your online, nearline, and archived media
• Ways to improve accessibility and organization across your entire storage infrastructure

Register today to reserve your seat!

Presented by StorageDNA and Studio Network Solutions

Video: Exploring Image Corruption in the Workflow, and how to Stop this from Happening

Corrupted data is a fact of life. Yet LTO tape systems, SAN, NAS, object stores, RAM and WAN optimizers all have configurations available to protect the fidelity of image content in the workflow. Most serious is an archive scenario, where content may sit untouched for a long duration and any corruption remains undetected for extended periods of time.

This talk from SMPTE Technical Conference 2017 by Keith Hogan covers the problems with hashes, looks at where errors can get introduced and ways to mitigate problems.

Depending on the path a video frame takes through the workflow, it will be treated to a varying set of protection technologies, such as RAID, erasure coding, ECC memory and parity checking. Errors can be introduced even on the network, and checksums don’t always catch them.
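The basic checksum idea is simple: a digest stored alongside the data reveals when the data has changed. A minimal sketch in Python, using MD5 as the talk does, with a dummy byte buffer standing in for a video frame:

```python
import hashlib

def frame_checksum(frame: bytes) -> str:
    """MD5 digest used as a per-frame integrity check."""
    return hashlib.md5(frame).hexdigest()

frame = bytes(1024)                 # dummy 1 KB 'frame' of zero bytes
stored_digest = frame_checksum(frame)

# Simulate a single bit flip during storage or transfer.
corrupted = bytearray(frame)
corrupted[100] ^= 0x01

assert frame_checksum(frame) == stored_digest             # intact copy verifies
assert frame_checksum(bytes(corrupted)) != stored_digest  # corruption detected
```

The catch, as the talk points out, is that a checksum only detects the damage; by itself it offers no way to repair the frame.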

To overcome the uncertainty associated with how these methods ensure fidelity, the industry employs failure detection at each stage of the workflow (generally MD5 checksums). Keith discusses the protection mechanisms provided or employed by each workflow element and how frame corruption can occur even when all of the protection technologies are working as designed. Finishing with a method for protecting images at the frame level using Forward Error Correction, so that uniform protection is applied to images throughout the workflow, Keith shows that media errors can in most cases be recovered without having to access a backup copy.
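To illustrate why redundant data allows recovery without a backup copy, here is a deliberately simplified sketch using XOR parity, the building block behind RAID-style erasure recovery. This is not Keith’s specific FEC scheme; real frame-level FEC (e.g. Reed-Solomon codes) is more sophisticated and can survive multiple losses:

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three data blocks of a 'frame' plus one parity block written with them.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Block 1 is lost or corrupted; XOR of the surviving blocks and the
# parity rebuilds it, with no need to fetch a backup copy.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

One parity block recovers any single lost block; adding more redundancy, as FEC schemes do, extends the same principle to multiple simultaneous errors.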

Watch now!