The pandemic has had two effects on storage demand: by reducing the amount of new content created, it lessened demand in the short term, but by driving people to remote workflows it has significantly increased the long-term forecast. This SMPTE San Francisco section meeting explores all aspects of demand, from on-site to cloud, and the mix between HDDs, solid state and even persistent memory.
Tom Coughlin’s talk starts 16 minutes into this video with a look at global storage demand, which was 50-100% higher in 2020 than in 2019, peaking at 79 exabytes. Tom then outlines the features of storage technologies ranging from hard drives through SAS and NVMe up to memory-channel storage, leading to two graphics which show that faster memory costs more per gigabyte and that, unfortunately, capacity increases as access speed decreases. As such, Tom concludes, bulk storage is still dominated by hard drives, which continue to advance, with HDD capacities of 50TB forecast for 2026.
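For a sense of scale, here’s a quick back-of-the-envelope calculation, not from the talk itself, combining those two figures:

```python
# Back-of-the-envelope: how many drives would 2020's peak demand fill?
# Both figures come from the talk; the arithmetic is just for scale.
demand_bytes = 79e18      # 79 exabytes, the 2020 demand peak
hdd_2026_bytes = 50e12    # 50TB, the HDD capacity forecast for 2026

print(f"{demand_bytes / hdd_2026_bytes:,.0f} drives")  # ~1,580,000
```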
Tom talks about NVMe-based storage being the future and discusses chips as small as 16mm x 20mm. He also discusses NVMe-over-Fabrics, where NVMe as a protocol is carried over a network to allow low-latency access to remote storage, whether over Ethernet, InfiniBand or other interfaces.
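As a concrete illustration, not from the talk, attaching a remote NVMe/TCP namespace on a Linux host is typically done with the nvme-cli tool; the address, port and NQN below are placeholders:

```python
import subprocess

# Hypothetical NVMe/TCP target details -- placeholders for illustration.
TARGET_ADDR = "192.168.1.50"                            # target IP address
TARGET_PORT = "4420"                                    # default NVMe/TCP port
TARGET_NQN = "nqn.2021-01.org.example:storage-array-1"  # NVMe Qualified Name

# 'nvme connect' attaches the remote namespace, which then appears
# as a local block device (e.g. /dev/nvme1n1) with low latency.
subprocess.run(
    ["nvme", "connect",
     "-t", "tcp",        # transport: tcp, rdma or fc
     "-a", TARGET_ADDR,  # target address
     "-s", TARGET_PORT,  # target service id (port)
     "-n", TARGET_NQN],  # target subsystem NQN
    check=True,
)
```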
The next innovation discussed is the merging of computation with storage. To keep computational speeds increasing, and partly to address power concerns, there has recently been a move toward task-specific chips that offload important work from CPUs, since CPUs are no longer increasing in raw processing power at the rate they used to. This is part of the reason ‘Computational Storage’ was born, with FPGAs on the storage itself able to process data before it’s handed off to the host computer. Tom takes us through the meanings of Computational Storage Drives, Computational Storage Processors and Computational Storage Arrays.
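To make the data-movement saving concrete, here’s a minimal sketch of the idea, an illustration rather than anything from the talk: a filter runs next to the data, so only matching records cross to the host instead of the whole dataset.

```python
# Toy model of computational storage (illustrative only): the "drive"
# filters data in place, so only matches travel over the bus/network.

def on_drive_filter(records, predicate):
    """Stands in for processing done on the drive's FPGA/processor."""
    return [r for r in records if predicate(r)]

# Host side: ask the drive for matches rather than reading everything.
stored = [{"id": i, "codec": "ProRes" if i % 3 else "DNxHR"}
          for i in range(1_000_000)]
matches = on_drive_filter(stored, lambda r: r["codec"] == "DNxHR")
print(f"Host received {len(matches):,} records instead of {len(stored):,}")
```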
The next topic for Tom is the drivers behind increased storage requirements in broadcast for the future. We’re already moving to UHD with a view to onboarding 8K, and Tom points to a 16K proof of concept showing there’s plenty of scope for higher-bitrate feeds. Average shooting ratios remain high, partly because of reality TV, and whatever the reason, this drives storage need. A bigger factor, however, is the number of cameras: with multi-camera video, 3D video, free-viewpoint video (where a stadium is covered in cameras, allowing you to choose, and interpolate, your own shot) and volumetric video, which can easily reach 17Gb/s, there are many reasons for storage demands to increase. A quick calculation below shows what that bitrate implies.
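To put that 17Gb/s figure in context, here’s what one hour of capture implies, the arithmetic being ours rather than from the talk:

```python
# Rough storage cost of a 17Gb/s volumetric capture.
bitrate_gbps = 17                       # gigabits per second, from the talk
bytes_per_sec = bitrate_gbps * 1e9 / 8  # convert bits to bytes
hour = 3600                             # seconds

terabytes_per_hour = bytes_per_sec * hour / 1e12
print(f"{terabytes_per_hour:.1f} TB per hour of capture")  # ~7.7 TB/hour
```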
Tom talks about the motivations for cloud storage and the use cases where moving to the cloud works well. Often it’s for data that only ever needs to travel towards the cloud, for instance for delivery to the consumer. Cloud rendering is another popular upload-heavy use, as is keeping disaster-recovery copies of data. Cloud workflows have also become popular for dealing with peaks: generally known as hybrid operation, this allows most processing to be done on-premise with lower latency and flat costs, and when the facility needs more capacity than it has, work can ‘burst’ up to the cloud.
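As a sketch of the scheduling decision behind bursting, again an illustration rather than anything described in the talk, and with the slot count purely assumed:

```python
# Toy model of hybrid 'burst' scheduling (illustrative only): jobs run
# on-premise until local capacity is exhausted, then overflow to cloud.

ON_PREM_SLOTS = 8  # assumed number of local render/transcode slots

def place_jobs(num_jobs, on_prem_slots=ON_PREM_SLOTS):
    local = min(num_jobs, on_prem_slots)  # flat-cost, low-latency capacity
    cloud = num_jobs - local              # overflow bursts to the cloud
    return local, cloud

for jobs in (5, 8, 20):
    local, cloud = place_jobs(jobs)
    print(f"{jobs} jobs -> {local} on-prem, {cloud} burst to cloud")
```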
The talk concludes with a look at storage market share, both for tape and for HDD/solid state, leading on to an extensive Q&A and discussion including input from MovieLabs’ Jim Helman.
Watch now!
Speaker
Tom Coughlin
President,
Coughlin Associates