Video: 5G – Game-Changer Or Meh?

The 5G rollout has started in earnest in the UK, North America, Asia and many other regions. As with any new tech rollout, it takes time and currently centres on densely populated areas, but tests and trials are already underway in TV productions to find out whether 5G can actually help improve workflows. Burnt by the bandwidth collapse of 4G in densely populated locations, there’s hope amongst broadcasters that the higher throughput and bandwidth slicing will, this time, deliver the high bandwidth, reliable connectivity that the industry needs.

Jason Thibeault from the Streaming Video Alliance joins Zixi’s Eric Bolten for this discussion, moderated by Eric Schumacher-Rasmussen, on how well 5G is standing up to the hype. For a deeper look at 5G, including understanding the mix of low frequencies (as used in 2G, 3G and 4G) and the high, Ultra Wide Band (UWB) frequencies referred to in this talk, check out our article which takes a deep dive into 5G, covering the infrastructure rollout and many of the technologies that make it work.

Eric starts by discussing trials he’s been working on, including one which delivered 8K at 100Mbps over 5G. He sees 5G as being very useful to productions whether on location or on set. He’s been working to test routers and determine the maximum throughput possible, which we already know is in excess of 100Mbps and likely in the gigabits. Whilst rollouts have started and there’s plenty of advertising surrounding 5G, the saturation of 5G-capable phones in the market simply isn’t there yet, but that’s no reason for broadcasters or film crews not to use it. 30 markets in the US are planned to be 5G enabled and all the major telcos in the UK are rolling the technology out, which is already in around 200 cities and towns. It’s clear that 5G is seen as a strategic technology by governments and telcos alike.

Jason talks about 5G’s application in stadia because it solves problems both for on-location viewers and for the production team themselves. One of the biggest benefits of 5G is the ultra-low latency. With 5G cameras using low-latency codecs like JPEG XS, wireless video stays in the milliseconds, and delivery to fans within the stadium can also be within milliseconds, meaning the longest delay in the whole system is the media workflow required for mixing the video and adding audio and graphics. The panel discusses how this can become a strong selling point for the venue itself. Even supporters who don’t go into the stadium can come to an adjacent location for good food, drinks, a whole load of like-minded people, massive screens and a second-screen experience like nothing available at home. On top of all of that, on-site betting will be possible, enabled by the low latency.
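
To see why the production chain, rather than the radio link, dominates the glass-to-glass delay, here is a back-of-the-envelope sketch in Python. All the figures are illustrative assumptions for the sake of the arithmetic, not measurements quoted in the talk.

```python
# Illustrative latency budget for an in-stadium 5G workflow.
# Every number below is an assumed, round figure for demonstration only.
budget_ms = {
    "camera JPEG XS encode": 5,                # low-latency intra-frame codec
    "5G uplink (camera to edge)": 10,
    "production (mix, audio, graphics)": 500,  # the dominant term
    "distribution encode + packaging": 100,
    "5G downlink to fans' phones": 10,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:40s} {ms:6d} ms")
print(f"{'TOTAL (glass to glass)':40s} {total:6d} ms")
```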

Moving away from the stadium, North America has already seen some interest in linking the IP-native ATSC 3.0 broadcast network to the 5G network, providing backhaul capabilities for telcos and benefits for broadcasters. If this proves practical, it shows just how pervasive IP will become in the medium term.

Jason summarises the near-term financial benefits in two ways: the opportunity for revenue generation by delivering better video quality and faster advertising, but most significantly he sees removing the need for satellite backhaul as the biggest immediate cost saver for many broadcast companies. This won’t all be possible on day one, remembering that to get the major bandwidths, UWB 5G is needed, which is subject to a slower roll-out. UWB uses high-frequency RF, 24GHz and above, which has very little penetration and relies on line-of-sight links. This means that even a single wall can block the signal, but those who can pick it up will get gigabits of throughput.

The panel concludes by answering a number of questions from the audience on 5G’s benefit over fibre to the home, the benefits of abstracting the network out of workflows and much more.

Watch now!
Speakers

Jason Thibeault
Executive Director,
Streaming Video Alliance
Eric Bolten
VP of Business Development,
Zixi
Moderator: Eric Schumacher-Rasmussen
Editor-in-Chief,
Streaming Media

Video: Building Media Systems in the Cloud: The Cloud Migration Challenge

Peter Wharton from TAG V.S. starts us on our journey to understanding how we can take real steps towards deploying a project in the cloud. He outlines five steps: evaluation, building a knowledge base, building for scale, optimisation and, finally, ‘realising full cloud potential’. Peter says that the first step, which he dubs ‘Will It Work?’, is about scoping out what you see cloud delivering to you; what is the future that the move to cloud will give you? You can then evaluate the activities in your organisation that are viable options to move to the cloud, with the aim of finding quick, easy wins.

Peter’s next step in embracing the cloud in a company is to begin the transformation in earnest by owning the transformation and starting the move not through technical actions, but through the people. It’s a case of addressing the culture of your organisation, changing the lens through which people think and, for the larger companies, creating a ‘centre of excellence’ around cloud deployments. A big bottleneck for some organisations is siloing, which is sometimes deliberate, sometimes unintentional. When a broadcast workflow needs to go to the cloud, this can bring together many different parts of the company, often more than if it were on-prem, so Peter identifies ‘cross-functional leadership’ as an important step in starting the transformation. He also highlights cost modelling as an important factor at this stage. A clear understanding of the costs, and savings, that will be realised in the move is an important motivational factor, but should also be used to correctly set expectations. Not getting the modelling right at this stage can significantly weaken traction as the process continues. Peter also talks about the importance of creating ‘key tenets’ for your migration.

End-to-end migration is the promise, if you can bring your organisation along with you on this journey, when you start looking at actually bringing full workflows into the cloud and deploying them in production. To do that, Peter suggests validating your solution at scale, finding ways of testing it well above the levels you need on day one. Another aspect is creating workflows that are cloud-first, translating your current workflows into cloud-native ones rather than taking existing workflows and making the cloud follow the same procedures – to do so would be to miss out on much of the value of the cloud transition. This step will mark the start of you seeing the value of setting your key tenets, but you should feel free to ‘break rules and make new ones’ as you adapt to your changing understanding.

The last two stages revolve around optimising and achieving the ‘full potential’ of the cloud. As such, this means taking what you’ve learnt to date and using that to remake your solutions in a better, more sustainable way. Doing this allows you to hone them to your needs but also introduce a more stable approach to implementation such as using an infrastructure-as-code philosophy. This is all topped off by the last stage which is adding cloud-only functionality to the workflows you’ve created such as using machine learning or scaling functions in ways that are seldom practical for on-prem solutions.
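
As a concrete illustration of the infrastructure-as-code philosophy Peter mentions, here is a minimal sketch using the AWS CDK for Python. The choice of CDK, the stack name and the single S3 bucket are assumptions made for the example rather than anything prescribed in the talk; the point is simply that the environment is described in reviewable, versionable code rather than built by hand.

```python
# Minimal infrastructure-as-code sketch (AWS CDK v2, Python).
# The same definition can be code-reviewed, versioned and re-created at will.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class MediaIngestStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Hypothetical bucket for mezzanine files landing from an ingest workflow.
        s3.Bucket(
            self,
            "MezzanineBucket",
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,
        )

app = App()
MediaIngestStack(app, "media-ingest")
app.synth()
```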

These steps are important for any organisation wanting to embrace the cloud, but Peter reminds us that it’s not just end users who are making the transition; vendors are too. Most technology suppliers have products that pre-date today’s cloud technologies and are having to make their own journey, which can start with short-term fixes to ‘make it work’ and move their existing code to the cloud. They will then need to work on their pricing models and cloud security, which Peter calls the ‘Make it Viable’ stage. It’s only then that they can start to properly leverage cloud capabilities such as scaling, and if they are able to progress further they will become a cloud-native, fully cloud-optimised solution. However, these latter two steps can take a long time for some suppliers.

Peter finishes the video talking about the difference in perspective between legacy vendors and cloud-native vendors. For example, legacy vendors may still be thinking about site visits, whereas cloud-native vendors don’t need that. They will be charging using a subscription model, rather than large Capex pricing. Peter summarises his talk by underlining the need to set your vision, agree on your key tenets for migration, invest in the team, keep your teams accountable & small and seek partners that not only understand the cloud but that match your aims for the future.

Watch now!

Speakers

Peter Wharton
Director of Corporate Strategy,
TAG V.S.

Video: Public Internet Transport of Live Broadcast Video – SRT, NDI and RIST for Compressed Video

Getting video over the internet and around the cloud has well-established solutions, but not only are they continuing to evolve, they are still new to some. This video looks at workflows made possible by teaming up SRT, RIST and NDI, giving a glimpse into projects that went live in 2020. We also get a deeper look at RIST’s features with a Q&A.

This video from SMPTE’s New York section starts with Bryan Nelson from Alpha Video, who’s been involved in many cloud-based NDI projects, many of which also use SRT to get in and out of the cloud. NDI is a lightly compressed, low-delay codec suitable for production and works well on 1GbE networks. Not dependent on multicast, it’s a technology that lends itself to cloud-based production, where it’s found many uses. Bryan looks at a number of workflows that are also enabled by the Sienna production system, which can use many video formats including NDI.

For more information on SRT and RIST, have a look at this SMPTE video outlining how they work and the differences between them. For a deeper dive into NDI, this SMPTE webinar with VizRT explains how it works and also gives demos of the same software that Bryan uses. To get a feel for how NDI fits in with live production compared to SMPTE’s uncompressed ST 2110, this IBC panel discussion, ‘Where can SMPTE ST 2110 and NDI Co-exist?’, explores the topic further.

Bryan’s first example is the 2020 NFL Draft, which used remote contribution from iPhones streaming over SRT. All streams were aggregated in AWS, converted to NDI, fed into NDI multiviewers and routed. These were passed down to on-prem NDI processors, running on HP ProLiant servers, which output SDI for handoff to other broadcast workflows. The router could be controlled by soft panels but also by hardware panels on-prem. Bryan explores an extension to this idea where multiple cloud domains can be used, with NDI being the handoff between them. In one cloud system, VizRT vision mixing and graphics can be added, with multiviewers and other outputs being sent via SRT to remote directors, producers and so on. Another cloud system could be controlled by a third party for other processing before being sent back to site and decoded to SDI on-prem. This can be totally separate from acquisition, with SDI & NDI cameras located elsewhere. SRT & NDI become the mediators in this decentralised production environment.
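
For anyone wanting to experiment with the contribution leg of such a workflow, here is a minimal sketch of receiving an SRT stream and handing it off locally, using FFmpeg driven from Python. The listener port, passphrase and multicast address are placeholder assumptions, and FFmpeg must be built with libsrt; this is an illustrative example, not the setup Alpha Video used.

```python
# Minimal sketch: listen for an incoming SRT contribution feed and
# relay it as UDP on the local network (e.g. for a decoder or multiviewer).
# Requires an FFmpeg binary built with libsrt on the PATH.
import subprocess

SRT_LISTEN = "srt://0.0.0.0:9000?mode=listener&passphrase=example-secret"  # assumed values
LOCAL_UDP = "udp://239.10.10.1:5000?pkt_size=1316"                         # assumed hand-off address

subprocess.run([
    "ffmpeg",
    "-i", SRT_LISTEN,   # wait for the remote encoder to call in
    "-c", "copy",       # don't re-encode, just remux the transport stream
    "-f", "mpegts",
    LOCAL_UDP,
], check=True)
```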

Bryan finishes off by talking about remote NLE monitoring and various types of MCR monitoring. NLE editing is made easy through NDI integration within Adobe Premiere and Avid Media Composer. It’s possible to bring all of these into a processing engine and move them over the public internet for viewing elsewhere via Apple TV or otherwise.

Ciro Noronha from Cobalt Digital takes the last half of the video to talk about RIST. In addition to the talks mentioned above, Ciro recently gave a talk exploring the many RIST use cases. A good written overview of RIST can be found here.

Ciro looks at the two published profiles that form RIST: the Simple and Main profiles. The Simple Profile defines RTP interoperability with error correction, using re-requested packets with the option of bonding links. Ciro covers its use of RTCP for maintaining the channel and handling the negative acknowledgements (NACKs), which are based on RFC 4585. RIST can bond multiple links or use 2022-7 seamless switching.
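
To make the retransmission idea concrete, here is a tiny sketch of the receiver-side logic: tracking RTP sequence numbers and deciding which packets to request again. It is purely illustrative of the ARQ concept and does not implement the actual RFC 4585 RTCP feedback packet format that RIST uses.

```python
# Illustrative receiver-side gap detection for an ARQ scheme like RIST's.
# Real implementations build RTCP NACK feedback packets (RFC 4585);
# here we simply collect the missing sequence numbers.
def missing_sequences(received, highest_expected):
    """Return the RTP sequence numbers to re-request (16-bit wrap ignored)."""
    seen = set(received)
    return [seq for seq in range(min(received), highest_expected + 1) if seq not in seen]

# Packets 103 and 106 were lost on the way; ask for them again.
arrived = [100, 101, 102, 104, 105, 107]
print(missing_sequences(arrived, 107))   # -> [103, 106]
```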

The Main Profile builds on the Simple Profile by adding encryption, authentication and tunnelling. Tunnels allow multiple flows down one connection, which simplifies firewall configuration and encryption, and allows either end to initiate the bi-directional link. The tunnel can also carry non-RIST traffic for any other purpose. The tunnels are GRE over UDP (RFC 8086). DTLS is used for encryption, which is almost identical to the TLS used to secure websites. DTLS uses certificates, meaning you get to authenticate the other end, not just encrypt the data. Alternatively, you can use a pre-shared passphrase, which avoids the need for certificates when that level of authentication isn’t required or for one-to-many distribution. Ciro concludes by showing that it can work with up to 50% packet loss and answers many questions in the Q&A.

Watch now!
Speakers

Bryan Nelson
Sales Account Executive,
Alpha Video
Ciro Noronha
President, RIST Forum
Executive Vice President of Engineering, Cobalt Digital

Video: Creating Interoperable Hybrid Workflows with RIST

TV isn’t made in one place anymore. Throughout media and entertainment, workflows increasingly involve many third parties and the cloud. Content may be king, but getting it from place to place is foundational to our ability to do great work. RIST is a protocol that can move video very reliably and flexibly between buildings, and into, out of and through the cloud. Leveraging that flexibility, there are many ways to use it. This video reviews where RIST is up to in its development and the many ways in which it can be used to solve your workflow problems.

Starting the RIST overview is Ciro Noronha, chair of the RIST Forum. Whilst we have delved into the detail here before, in talks like this from SMPTE and this talk also from Ciro, this is a good refresher on the main points: RIST is published in three parts, known as profiles. First was the Simple Profile, which defined the basics: it’s based on RTP and uses ARQ technology to dynamically request any missing packets in a timely way that doesn’t trip the stream up if there are problems. The Main Profile was published second and includes encryption and authentication. Lastly comes the Advanced Profile, which will be released later this year.

Ciro outlines the importance of the Simple Profile: it guarantees compatibility with RTP-only decoders, albeit without error correction. When you can use the error correction, you’ll benefit from recovery even when 50% of the traffic is being lost, unlike similar protocols such as SRT. Another useful feature for many is multi-link support, allowing you to use RIST over bonded LTE modems as well as with SMPTE ST 2022-7 seamless switching.
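
As a rough illustration of how a receiver merges packets arriving over two bonded or 2022-7 style redundant links, here is a small sketch that de-duplicates by sequence number, keeping whichever copy arrives first. It is a conceptual example only, using invented packet lists, and ignores the sequence-number wrap-around and reordering buffers a real implementation needs.

```python
# Conceptual seamless merge of two redundant packet streams (2022-7 style):
# keep the first copy of each sequence number to arrive, drop duplicates.
def merge_links(*links):
    """links: iterables of (sequence_number, payload) tuples in arrival order."""
    seen = set()
    output = []
    for link in links:
        for seq, payload in link:
            if seq not in seen:          # first arrival wins
                seen.add(seq)
                output.append((seq, payload))
    return sorted(output)                # re-order by sequence for playout

# Link A lost packet 3, link B lost packet 5; together the stream is complete.
link_a = [(1, "a"), (2, "b"), (4, "d"), (5, "e")]
link_b = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
print([seq for seq, _ in merge_links(link_a, link_b)])   # -> [1, 2, 3, 4, 5]
```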

The Main Profile brings with it support for tunnelling, meaning you can set up one connection between two locations and put multiple streams of data through it. This is great for simplifying connectivity because only one port needs to be opened to deliver many streams, and it doesn’t matter in which direction you establish the tunnel. Once established, the tunnel is bi-directional. The tunnel can also carry general data, such as control traffic or miscellaneous IT traffic.
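
The sketch below shows the multiplexing idea behind the tunnel in miniature: several logical flows share one UDP socket, each datagram prefixed with a small flow identifier so the far end can split them apart again. The two-byte header is invented for illustration; the real RIST Main Profile uses GRE-over-UDP (RFC 8086) framing, not this format.

```python
# Conceptual multiplexing of several flows over a single UDP port.
# NOTE: the 2-byte flow-id header is invented for illustration; RIST
# actually uses GRE-over-UDP (RFC 8086) framing inside the tunnel.
import socket
import struct

TUNNEL_ADDR = ("203.0.113.10", 8000)   # assumed far end of the tunnel

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_on_flow(flow_id: int, payload: bytes) -> None:
    """Send payload tagged with its flow id through the shared tunnel socket."""
    sock.sendto(struct.pack("!H", flow_id) + payload, TUNNEL_ADDR)

send_on_flow(1, b"video RTP packet ...")   # programme video
send_on_flow(2, b"audio RTP packet ...")   # separate audio flow
send_on_flow(3, b"camera control ...")     # non-media data shares the same port
```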

Encryption made its debut with the publishing of the Main Profile. RIST can use DTLS, which is a version of the familiar TLS security used on websites, running over UDP rather than TCP. The big advantage of using this is that it brings authentication as well as encryption. This ensures that the endpoint is allowed to receive your stream and is based on the strong encryption we are familiar with, which has been tested and hardened over the years. Certificate distribution can be difficult and disproportionate to the needs of the workflow, so RIST also allows encryption using pre-shared keys.
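
To illustrate what pre-shared-key encryption means in practice, here is a minimal sketch using AES-GCM from the Python `cryptography` package: both ends derive the same key from a shared passphrase and no certificates are exchanged. This shows the concept only; it is not the key derivation or packet format defined in the RIST Main Profile.

```python
# Conceptual pre-shared-key encryption of a media packet with AES-GCM.
# This illustrates the PSK idea, not RIST's actual key derivation or framing.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

passphrase = b"example-shared-secret"        # assumed, configured at both ends
key = hashlib.sha256(passphrase).digest()    # simplistic derivation for the demo
aead = AESGCM(key)

packet = b"\x80\x60..."                      # stand-in for an RTP packet
nonce = os.urandom(12)                       # must never repeat for the same key
ciphertext = aead.encrypt(nonce, packet, None)

# The receiver, holding the same passphrase, reverses the process.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == packet
```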

Handing over now to David Griggs and Tim Baldwin, the discussion turns to the use cases enabled by RIST, which is already found in encoders, decoders and gateways on the market. One use case on the rise is satellite replacement. There are many companies that have been using satellite for years and for whom the lack of operational agility hasn’t been a problem. In fact, they’ve been able to make a business model work for occasional use even though, in a pure sense, satellite isn’t perfectly suited to it. However, with the ability to use C-band closing off in many parts of the world, companies have been forced to look elsewhere for their links, and RIST is one solution that works well.

David runs through a number of others including primary and secondary distribution, link aggregation, premium sports syndication with the handoff between the host broadcaster and the multiple rights-holding broadcasters being in the cloud, and also a workflow for OTT where RIST is used for ingest.

RIST is available as an open-source library called libRIST, which can be downloaded from VideoLAN and is documented in the open specifications TR-06-1 and TR-06-2. libRIST can be found in GStreamer, Upipe, VLC, Wireshark and FFmpeg.
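
As a hedged example of how that FFmpeg integration might be used, the sketch below pulls a RIST flow and records it to a local transport-stream file from Python. The sender address and port are placeholders, FFmpeg must be compiled with librist, and the exact URL options depend on your build; consult the FFmpeg protocol documentation for what your version supports.

```python
# Minimal sketch: record a RIST flow to a local .ts file using FFmpeg's
# librist integration. Requires an FFmpeg build with librist enabled.
import subprocess

RIST_SOURCE = "rist://203.0.113.5:9010"   # assumed sender address and port
OUTPUT_FILE = "capture.ts"

subprocess.run([
    "ffmpeg",
    "-i", RIST_SOURCE,   # receive the RIST stream
    "-c", "copy",        # keep the original encoding
    OUTPUT_FILE,
], check=True)
```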

The video finishes with questions about how RIST compares with SRT, RTMP, CMAF and WebRTC.

Watch now!
Speakers

Tim Baldwin
Head of Product,
Zixi
David Griggs
Senior Product Manager, Distribution Platforms
Disney Streaming Services
Ciro Noronha
President, RIST Forum
Executive Vice President of Engineering, Cobalt Digital