Video: Digital Storage During a Pandemic for Media and Entertainment

The pandemic has had two effects on storage demand. By reducing the amount of new content created, it lessened demand in the short term, but by driving people to remote workflows, it has significantly increased the longer-term forecast for storage demand. This SMPTE San Francisco Section meeting explores all aspects of demand: on-site, cloud and the mix between HDDs, solid state and even persistent memory.

Tom Coughlin’s talk starts 16 minutes into the video with a look at global storage demand, which was 50-100% higher in 2020 than in 2019, peaking at 79 exabytes. Tom then outlines the features of storage technologies ranging from hard drives through SAS and NVMe up to the memory channel, leading to two graphics which show how faster memory costs more per gigabyte and how storage capacity increases as access speed decreases. As such, Tom concludes, bulk storage is still dominated by hard drives, which are still advancing, with HDD capacities of 50TB forecast for 2026.

Tom talks about NVMe-based storage being the future and discusses chips as small as 16mm x 20mm. He also discusses NVMe over Fabrics (NVMe-oF), where NVMe is used as a protocol in a networking context to allow low-latency access to storage over network interfaces, whether Ethernet, InfiniBand or others.

The next innovation discussed is the merging of computation with storage. To keep computational speeds increasing, and partly to address power concerns, there has been a recent move towards task-specific chips that offload important tasks from CPUs, since CPUs are no longer increasing in raw processing power at the rate they used to. This is part of the reason ‘Computational Storage’ was born, with FPGAs on the storage itself available to do specific processing on data before it’s handed off to the computer. Tom takes us through the meanings of Computational Storage Drives, Computational Storage Processors and Computational Storage Arrays.
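To illustrate the concept (this is not any vendor’s actual API), here’s a minimal Python sketch contrasting the traditional path, where every block crosses the bus to be filtered on the CPU, with a computational storage drive filtering in situ; the SimulatedDrive and SimulatedCSD classes are invented purely for illustration.

```python
# Conceptual demo of computational storage: filtering on the device means
# only matching data crosses the bus to the host. These classes simulate
# the idea and are not a real vendor API.

class SimulatedDrive:
    def __init__(self, blocks):
        self.blocks = blocks              # pretend these live on NAND

    def read_all_blocks(self):
        return list(self.blocks)          # every block crosses the "bus"

class SimulatedCSD(SimulatedDrive):
    def offload_filter(self, predicate):
        # The drive's FPGA runs the filter in situ; only hits are returned.
        return [b for b in self.blocks if predicate(b)]

blocks = list(range(1_000_000))
match = lambda b: b % 1000 == 0

host_filtered = [b for b in SimulatedDrive(blocks).read_all_blocks() if match(b)]
device_filtered = SimulatedCSD(blocks).offload_filter(match)
assert host_filtered == device_filtered   # same result, ~1000x less data moved
```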

The next topic for Tom is the drivers behind increased storage requirements in broadcast. We’re already moving to UHD with a view to onboarding 8K, and Tom points to a 16K proof of concept showing there’s plenty of scope for higher-bitrate feeds. Average shooting ratios remain high, partly because of reality TV, and whatever the reason, this drives storage need. However, a bigger factor is the number of cameras. With multi-camera video, 3D video, free-viewpoint video (where a stadium is covered in cameras, allowing you to choose, and interpolate, your own shot) as well as volumetric video which can easily reach 17Gb/s, there are many reasons for storage demand to increase.
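To put figures like 17Gb/s in context, here’s a quick back-of-the-envelope Python calculation of uncompressed video data rates; the frame sizes and bit depths are standard values, not figures from the talk.

```python
# Rough uncompressed video bandwidth: width x height x bits-per-pixel x fps.
# 4:2:2 10-bit sampling averages 20 bits per pixel (Y at 10, Cb/Cr at 5 each).

def gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

print(f"HD  1080p50, 4:2:2 10-bit: {gbps(1920, 1080, 20, 50):5.1f} Gb/s")
print(f"UHD 2160p50, 4:2:2 10-bit: {gbps(3840, 2160, 20, 50):5.1f} Gb/s")
print(f"8K  4320p50, 4:2:2 10-bit: {gbps(7680, 4320, 20, 50):5.1f} Gb/s")
# HD ~2.1 Gb/s, UHD ~8.3 Gb/s, 8K ~33 Gb/s: each step quadruples storage need.
```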

Tom talks about the motivations for cloud storage and the use cases for which moving to the cloud works. Often, it’s for data that only ever needs to go to the cloud, i.e. for delivery to the consumer. Cloud rendering is another popular upload-heavy use of the cloud, as is keeping disaster recovery copies of data. Cloud workflows have also become popular for dealing with peaks. Generally known as hybrid operation, this allows most processing to be done on-premise with lower latency and flat costs; when the facility needs more capacity than it can provide, work can ‘burst’ up to the cloud.

The talk concludes with a look at market share, both for tape and for the HDD/solid-state market, leading on to an extensive Q&A and discussion including input from MovieLabs’ Jim Hellman.

Watch now!
Speaker

Tom Coughlin
President,
Coughlin Associates

Video: Low-Earth Orbit Satellites

Low-latency internet connectivity from anywhere would be transformational for many use cases. Recently we’ve heard a lot about 5G because of its promise of low-latency connectivity, but the reach of each cell, particularly the high-bandwidth cells, is severely limited. Low Earth Orbit satellites offer both low ping times and global coverage, which explains why there seems to be a gold rush in the industry, with so many companies working to launch fleets of thousands of satellites.

Starting this SMPTE Toronto Section video, Michael Martin from Metercor sets the scene and looks at SpaceX’s Starlink service. There are currently around 6,000 satellites in orbit, only 40% of which are fully functional; many are impaired and, for instance, in elliptical orbits. As a reminder, Michael looks at geostationary satellites, the best-known satellites in broadcast circles since they stay in the same position in the sky, allowing for static dishes. Remembering that orbital period is defined by your distance from the centre of the Earth, geostationary satellites sit at approximately 36,000km, a distance that takes light around 120ms to travel, meaning the best latency via such a satellite is around 250ms, which is not considered a good internet ping time. GPS satellites are closer in and orbit in only 12 hours. It’s the low Earth orbit (LEO) satellites that are close enough to provide low-latency network connectivity.

Starlink has satellites planned for orbits 550km and 1,100km high, which light can cross in only 1.8-4ms. This means data can take approximately 10ms to get from you to its destination via satellites in the 1,100km orbital shell. These shells are used to keep satellites away from each other to avoid collisions. On top of low-latency internet links, Starlink aims to deliver downlink speeds of 1Gbps when fully deployed and has already been seen delivering over 300Mbps.
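The propagation delays quoted above follow directly from altitude; this short Python check computes the one-way light delay for the orbits mentioned (straight-up distance only, ignoring slant range and processing delays).

```python
# One-way, straight-line light delay from the ground to a satellite overhead.
# Real paths are longer (slant range) and add processing/queuing delay.

C_KM_PER_MS = 299_792.458 / 1000  # speed of light: ~299.8 km per millisecond

for name, altitude_km in [("Starlink lower shell", 550),
                          ("Starlink upper shell", 1_100),
                          ("GPS (MEO)", 20_200),
                          ("Geostationary", 35_786)]:
    delay_ms = altitude_km / C_KM_PER_MS
    print(f"{name:>20}: {altitude_km:>6,} km -> {delay_ms:6.1f} ms one-way")

# The LEO shells come out at roughly 1.8-3.7 ms, matching the talk's figures;
# GEO is ~120 ms each way, which is why its ping times are so poor.
```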

Other upcoming solutions include Kuiper, backed by Amazon, which is aiming for 3,200 satellites delivering around 400Mbps of downlink speed, and OneWeb, which has already deployed many satellites and has secured funding to complete its fleet. China has announced its intention to put 10,000 satellites into orbit, and Samsung has intimated it intends a 2030 launch of a fleet of satellites. Whilst their intention is not fully clear, it may be linked with the plans for 6G, which are already being worked on.

Michael finishes his section with a look at some use cases, such as using LEO satellites to provide a downlink into a 5G system for a remote village or small town. This would free locations from requiring fibre, both for delivery of the internet to the ISP and for delivery around the town, which would be 5G-based. First responders are another example of people who would benefit from always-on, any-location connectivity. Michael’s last point is that although we can be very interested in how LEOs deliver the service we want, most people only care about accessing the content they’re interested in; a good internet connection should be transparent to them.

The second presentation in the video is from Telesat, whose Telesat Lightspeed constellation of LEO satellites is planned for launch next year, with trials starting in 2023 and full service available Q4 2024. The fleet will feature inter-satellite communication via laser links and will initially focus on serving Canada before going global. Its aim is to become ‘virtual fibre’, tying in with Michael’s point about connectivity being invisible to users.

Ted Baker and Thomas Detlor take us through the envisaged use cases such as backhaul, government, backup/disaster recovery and connectivity to ships and oil platforms. They highlight broadcast use cases such as linking stadia and buildings, and creating backup links, with their layer 2 service on which you can build your layer 3 network.

They finish by looking at a checklist comparing the present and upcoming services, installation methods and upcoming equipment for up/downlinks, with some comments on how Telesat differs from Starlink. The session finishes with a Q&A covering many topics, including concerns about in-orbit collisions and rain fade.

Watch now!
Speakers

Michael Martin
Vice President of Technology,
Metercor, Inc.
Thomas Detlor
Product Analyst,
Telesat
Ted Baker
Account Manager, Broadcast,
Telesat
Thomas Portois
Senior Product Manager,
Telesat

Moderator: Troy English
CTO,
Ross Video Ltd.

Video: Public Internet Transport of Live Broadcast Video – SRT, NDI and RIST for Compressed Video

Getting video over the internet and around the cloud has well-established solutions, but not only are they continuing to evolve, they are still new to some. This video looks at workflows made possible by teaming up SRT, RIST and NDI, with a glimpse into projects that went live in 2020. We also get a deeper look at RIST’s features with a Q&A.

This video from SMPTE’s New York Section starts with Bryan Nelson from Alpha Video, who’s been involved in many cloud-based NDI projects, many of which also use SRT to get in and out of the cloud. NDI’s a lightly compressed, low-delay codec suitable for production and works well on 1GbE networks. Not dependent on multicast, it’s a technology that lends itself to cloud-based production, where it’s found many uses. Bryan looks at a number of workflows that are also enabled by the Sienna production system, which can use many video formats including NDI.
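As a rough sanity check on the 1GbE claim, here’s a quick Python calculation assuming a commonly cited ballpark of around 150Mb/s for a full-bandwidth HD NDI stream; the exact rate varies with resolution, frame rate and picture content, so treat the numbers as indicative.

```python
# How many HD NDI streams fit on a gigabit link? Assumes ~150 Mb/s per
# full-bandwidth 1080p stream (a commonly cited ballpark, not a spec value).

LINK_MBPS = 1000          # 1GbE
NDI_HD_MBPS = 150         # assumed per-stream rate
HEADROOM = 0.75           # keep ~25% spare for audio, control and bursts

usable = LINK_MBPS * HEADROOM
print(f"~{int(usable // NDI_HD_MBPS)} HD NDI streams per 1GbE link "
      f"with {int((1 - HEADROOM) * 100)}% headroom")   # -> ~5 streams
```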

For more information on SRT and RIST, have a look at this SMPTE video outlining how they work and how they differ. For a deeper dive into NDI, this SMPTE webinar with VizRT explains how it works and also gives demos of the same software that Bryan uses. To get a feel for how NDI fits in with live production compared to SMPTE’s uncompressed ST 2110, the IBC panel discussion ‘Where can SMPTE ST 2110 and NDI Co-exist?’ explores the topic further.

Bryan’s first example is the 2020 NFL Draft, which used remote contribution from iPhones streaming over SRT. All streams were aggregated in AWS, converted to NDI, fed to NDI multiviewers and routed. These were passed down to on-prem NDI processors, running on HP ProLiant servers, for output as SDI and handoff to other broadcast workflows. The router could be controlled by soft panels as well as hardware panels on-prem. Bryan explores an extension to this idea where multiple cloud domains can be used, with NDI as the handoff between them. In one cloud system, VizRT vision mixing and graphics can be added, with multiviewers and other outputs sent via SRT to remote directors, producers and others. Another cloud system could be controlled by a third party, with other processing, ahead of being sent onward and decoded to SDI on-prem. This can be totally separate to acquisition from SDI and NDI cameras located elsewhere. SRT and NDI become the mediators in this decentralised production environment.

Bryan finishes off by talking about remote NLE monitoring and various types of MCR monitoring. NLE editing is made easy through NDI integration within Adobe Premiere and Avid Media Composer. It’s possible to bring all of these into a processing engine and move them over the public internet for viewing elsewhere via Apple TV or otherwise.

Ciro Noronha from Cobalt Digital takes the last half of the video to talk about RIST. In addition to the talks mentioned above, Ciro recently gave a talk exploring the many RIST use cases. A good written overview of RIST can be found here.

Ciro looks at the two published profiles that form RIST: the Simple Profile and the Main Profile. The Simple Profile defines RTP interoperability with error correction using retransmission of lost packets, with the option of bonding links. Ciro covers its use of RTCP for maintaining the channel and handling the negative acknowledgements (NACKs), which are based on RFC 4585. RIST can bond multiple links or use ST 2022-7 seamless switching.
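As an illustration of the receiver side of this mechanism (a conceptual sketch, not the RIST reference implementation), here’s a short Python routine that watches RTP sequence numbers and builds a list of missing packets to request in a NACK, allowing for the 16-bit sequence number wrapping around.

```python
# Minimal sketch of NACK-style loss detection as a RIST Simple Profile
# receiver might do it: track RTP sequence numbers, note the gaps, and
# queue the missing numbers for a retransmission request.

RTP_SEQ_MOD = 1 << 16   # RTP sequence numbers are 16-bit and wrap around

def detect_losses(expected, received_seq):
    """Return (missing list, next expected seq) for a newly received
    sequence number. Anything skipped over is treated as lost."""
    gap = (received_seq - expected) % RTP_SEQ_MOD
    if gap >= RTP_SEQ_MOD // 2:       # old/reordered packet: ignore here
        return [], expected
    missing = [(expected + i) % RTP_SEQ_MOD for i in range(gap)]
    return missing, (received_seq + 1) % RTP_SEQ_MOD

expected = 65533
nack_queue = []
for seq in [65533, 65534, 1, 2, 4]:   # 65535, 0 and 3 go missing
    missing, expected = detect_losses(expected, seq)
    nack_queue.extend(missing)
print(nack_queue)   # -> [65535, 0, 3], to be requested via an RTCP NACK
```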

The Main Profile builds on the Simple Profile by adding encryption, authentication and tunnelling. Tunnels allow multiple flows down one connection, which simplifies firewall configuration and encryption, and allows either end to initiate the bi-directional link. The tunnel can also carry non-RIST traffic for any other purpose. The tunnels are GRE over UDP (RFC 8086). DTLS is used for encryption, which is almost identical to the TLS used to secure websites. DTLS uses certificates, meaning you get to authenticate the other end, not just encrypt the data. Alternatively, you can use a pre-shared passphrase, which avoids the need for certificates when that’s not needed or for one-to-many distribution. Ciro concludes by showing that RIST can work with up to 50% packet loss and answers many questions in the Q&A.
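To make the tunnelling idea concrete, here’s a toy Python sketch of GRE-in-UDP framing along the lines of RFC 8086: a payload is wrapped in a 4-byte base GRE header and carried in an ordinary UDP datagram. It deliberately ignores RIST’s DTLS layer and all optional GRE fields; the inner protocol type 0x0800 (IPv4) and the destination address are example assumptions.

```python
# Toy GRE-in-UDP framing sketch (RFC 8086 style); illustrative only, with
# no DTLS and no optional GRE checksum/key/sequence fields.

import socket
import struct

GRE_IN_UDP_PORT = 4754      # IANA-assigned destination port for GRE-in-UDP

def gre_in_udp_frame(inner_payload: bytes, proto_type: int = 0x0800) -> bytes:
    flags_and_version = 0x0000                 # no C/K/S flags, GRE version 0
    gre_header = struct.pack("!HH", flags_and_version, proto_type)
    return gre_header + inner_payload

frame = gre_in_udp_frame(b"inner packet bytes")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(frame, ("198.51.100.10", GRE_IN_UDP_PORT))   # example address
```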

Watch now!
Speakers

Bryan Nelson
Sales Account Executive,
Alpha Video
Ciro Noronha
President, RIST Forum
Executive Vice President of Engineering, Cobalt Digital

Video: PTP/ST 2059 Best Practices developed from PTP deployments and experiences

PTP is foundational for SMPTE ST 2110 systems, providing the accurate timing needed to make the most of almost zero-latency professional video systems. Strictly speaking, some ST 2110 workflows can work without PTP where they’re not combining signals, but for live production this is almost never the case. This is why a lot of time and effort goes into getting PTP right from the outset: making it work perfectly from the beginning gives you the bedrock on which to build your most valuable infrastructure.

In this video, Gerard Phillips from Arista, Leigh Whitcomb from Imagine Communications and Telestream’s Mike Waidson join forces to run down their top 15 best practices for building a PTP infrastructure you can rely on.

Gerard kicks off underlining the importance of PTP, but with the reassuring message that if you ‘bake it in’ to your underlying network, with PTP-aware equipment that can support the scale you need, you’ll have the timing system you need. Thinking of scale is important as PTP is a bi-directional protocol; unlike the black and burst and tri-level sync it replaces, which are simply waterfall signals, each endpoint needs to speak to a clock, so it’s important to consider how many devices you’ll have and where. For a look at PTP itself, rather than best practices, have a look at this talk (free registration required) or this video with Meinberg.
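That bi-directionality comes from PTP’s delay-request/response exchange; as a reminder of the arithmetic involved, this short Python sketch computes clock offset and mean path delay from the four standard timestamps (t1-t4), assuming a symmetric network path.

```python
# PTP's two-way exchange yields four timestamps:
#   t1: master sends Sync          t2: slave receives Sync
#   t3: slave sends Delay_Req      t4: master receives Delay_Req
# With a symmetric path, offset and mean path delay follow directly.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way mean path delay
    return offset, delay

# Example in nanoseconds: true offset 500 ns, true one-way delay 1200 ns.
t1 = 0
t2 = t1 + 1200 + 500    # Sync arrives after the delay, read on the slave clock
t3 = t2 + 10_000        # slave replies a little later
t4 = t3 - 500 + 1200    # Delay_Req arrives, read on the master clock
print(ptp_offset_and_delay(t1, t2, t3, t4))   # -> (500.0, 1200.0)
```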

Gerard’s best practice advice continues as he recommends using a routed network, meaning multiple layer 2 networks with layer 3 routing between them. This reduces the broadcast domain size which, in turn, increases stability and resilience. JT-NM TR-1001 can assist in deployments using this network architecture. Gerard next cautions about layer 2 IGMP snoopers and queriers, which should exist on every VLAN; as the multicast traffic is flooded to the snooping querier in layer 2, it’s important to consider traffic flows.

When Gerard says PTP should be ‘baked in’, it’s partly boundary clocks he’s referring to. Use them ‘everywhere you can’ is the advice, as they bring simplicity to your design and allow for easier debugging. Part of the simplicity they bring is in helping scalability: they shed load from your GM, taking the brunt of the bi-directional traffic, and can reduce load on the endpoints.

It’s long been known that some audio devices, for instance versions of Dante before v4.2, use version 1 of PTP, which isn’t compatible with SMPTE ST 2059’s requirement to use PTP v2. Gerard says that, if necessary, you should buy a version 1 to version 2 converter from your audio vendor to join the v1 island to your v2 infrastructure. This is linked to best practice point 6: all GMs must have the same time. Mike makes the point that all GMs should be locked to GPS and that if you have multiple sites, each should have an active, GPS-locked GM, even if the sites do send PTP to each other over a WAN, as that is likely to deliver less accurate timing even if it is useful as a backup.

Even if you are using physically separate networks for your ST 2110 main and backup paths, ST 2022-7 operation needs both networks to share the same time, so a link between the two GMs, carrying just PTP traffic, should be established.

The next three points of advice are about the ongoing stability of the network. Firstly, ST 2059-2 specifies the use of TLV messages as part of a mechanism for media nodes to generate drop-frame timecode. Whilst this may not be needed on day one, if you have it running and show your PTP system works well with it on, there shouldn’t be any surprises in a couple of years when you need to introduce an endpoint that will use it. Similarly, the advice is to give your PTP domain a number which isn’t a SMPTE or AES default, for the sole reason that if a device which hasn’t been fully configured ever joins your network, and it’s still on defaults, it could join your PTP domain and disrupt it. If changing the domain number is part of the configuration of every new endpoint, the chances of this are notably reduced. One example of a configuration item which could affect the network is ‘ptp role master’, which stops a boundary clock port from taking part in the BMCA and prevents unauthorised endpoints taking over.
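By way of illustration, a boundary clock following this advice might be configured along the lines of the Arista EOS-style snippet below; treat the exact commands as indicative rather than definitive and check your switch documentation, and note that domain 100 is simply an example of a non-default value.

```
! Indicative EOS-style boundary clock configuration (check vendor docs).
ptp mode boundary            ! act as a boundary clock
ptp domain 100               ! deliberately not a SMPTE/AES default domain
!
interface Ethernet10
   description To media node
   ptp enable                ! serve PTP to the endpoint on this port
   ptp role master           ! never accept a clock from this port (no BMCA)
```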

Gerard lays out the way to do ‘proper commissioning’, which is how you verify at the outset that your PTP network is working well, meaning you have designed and built your system correctly. Unfortunately, PTP can appear to be working properly when in reality it is not, whether for reasons of design, the way your devices are acting, configuration, or simply bugs. To account for this, Gerard advocates separate checklists for GMs, switches and media nodes with a list of items to check… and this will be a long list. Commissioning should include monitoring the PTP traffic, and taking a packet capture, for a couple of days for analysis with test and measurement gear or simply Wireshark.
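PTP over UDP, as used in ST 2059 systems, travels on the well-known ports 319 (event) and 320 (general), so grabbing a capture for later analysis can be very simple. The sketch below uses Python with scapy as one option alongside tcpdump or Wireshark; the interface name ‘eth0’ is a placeholder assumption.

```python
# Capture PTP-over-UDP traffic (event port 319, general port 320) to a pcap
# for offline analysis in Wireshark or T&M tooling. Requires scapy and
# capture privileges; 'eth0' is a placeholder interface name.

from scapy.all import sniff, wrpcap

packets = sniff(iface="eth0",
                filter="udp port 319 or udp port 320",  # BPF filter for PTP
                timeout=600)                            # ten-minute sample
wrpcap("ptp_commissioning.pcap", packets)
print(f"Captured {len(packets)} PTP packets")
```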

Leigh finishes up the video talking about verifying functionality during redundancy switches and on power-up. Commissioning is your chance to characterise the behaviour of the system in these transitory states and to observe how attached equipment is affected. His last point before summarising is to implement a PTP monitoring solution to capture the critical parameters and to detect changes in the system. SMPTE RP 2059-15 will define parameters to monitor, with the aim that monitoring across vendors will provide consistent metrics. A new version of IEEE 1588, version 2.1, will also add monitoring features that should aid in actively monitoring the timing in your ST 2110 system.

This Arista white paper contains further detail on many of these best practices.

Watch now!
Speakers

Gerard Phillips
Solutions Engineer,
Arista
Leigh Whitcomb
Principal Engineer,
Imagine
Mike Waidson
Application Engineer,
Telestream