Video: A video transport protocol for content that matters

What is RIST and why is it useful? The Reliable Internet Stream Protocol was seeing strong uptake among broadcasters and other users wanting to use the internet to get their video from A to B, even before the pandemic hit.

Kieran Kunhya from Open Broadcast Systems explains what RIST is trying to do. It comes from a history of expensive links between businesses over fixed lines or satellite, and recognises the increased use of cloud. With cloud computing increasingly forming a key part of many companies’ workflows, media needs to be sent over the internet to get into those workflows. Cloud technology, he explains, allows broadcasters to get away from the traditional on-prem model, where systems need to be built to handle peak workload, meaning there can be a lot of underutilised equipment.

Whilst the inclination to use the internet seems only natural given this backdrop, RIST exists to fix the problems the internet brings with it. It’s not controversial to say that it loses packets and adds jitter to signals. On top of that, using common file-transfer technologies like HTTP over TCP leaves you susceptible to drops and variable latency. For broadcasters it’s also important to know what your latency will be, and to know it won’t change; this isn’t something that typical TCP-based technologies offer. On top of solving these problems, RIST also sets out to provide an authenticated, encrypted link.
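As a toy illustration of one half of this trade-off – turning out-of-order, lossy arrival into an in-order stream with detectable gaps – the sketch below shows a receive-side reorder buffer. The class and method names are illustrative only, not the RIST specification’s data structures:

```python
import heapq

class JitterBuffer:
    """Toy receive-side reorder buffer. Packets arrive in any order;
    they are released strictly in sequence, and a missing sequence
    number shows up as a gap the receiver would ask the sender to
    retransmit. (Illustrative only -- not from the RIST spec.)"""

    def __init__(self):
        self._heap = []  # min-heap of (sequence_number, payload)

    def receive(self, seq, payload):
        """Store a packet, whatever order it arrived in."""
        heapq.heappush(self._heap, (seq, payload))

    def release(self, next_seq):
        """Return the payload for `next_seq` if it has arrived, else None
        (a gap that would trigger a retransmission request)."""
        if self._heap and self._heap[0][0] == next_seq:
            return heapq.heappop(self._heap)[1]
        return None
```

A real implementation would also hold each packet for a fixed delay before release, which is what converts the network’s variable delay into the single known latency broadcasters need.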

This has been done before, with Zixi and VideoFlow being two examples that Kieran cites. RIST was created to allow interoperability between equipment in a vendor-neutral way. To underline its open nature, Kieran shows a table of the IETF RFCs used as part of the protocol.

RIST has two groups of features. The ‘Simple Profile’ covers use of RTP, packet loss recovery, bonding and hitless switching, while the ‘Main Profile’ adds tunnelling (including the ability to choose which direction you set up your connection), encryption, authentication and null packet removal. Both are available as published specifications today. A third group of features is planned under the ‘Enhanced Profile’, to be released around the beginning of Q2 2021.

Kieran discusses real-world proof points such as a link which ran for ten months without delivering a single lost packet, though it had needed to correct for millions of lost packets along the way. He discusses deployments and then moves on to SRT. SRT, Secure Reliable Transport, is a very popular technology which achieves much of what RIST does. Although it is an open-source project, it is controlled by one vendor, Haivision. It’s easy to use, has seen very wide deployment, and has done much to educate the market, so people now understand why they need a protocol such as SRT or RIST. Kieran sees benefit in RIST having brought together a whole range of industry experts, including Haivision, to develop the protocol, and notes that it already has multipath support, unlike SRT. Furthermore, at 15% packet loss SRT stops working effectively, whereas RIST can recover fully even at 40% packet loss, as long as you have enough bandwidth for a 200% retransmission overhead.
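A rough back-of-envelope check makes those loss figures plausible. Assuming independent packet losses, each packet needs on average 1/(1−p) transmissions, and the chance a packet is still missing after k independent sends is p^k. At 40% loss the average overhead is only ~67%, so a 200% overhead leaves headroom for bursts, and repeated retries within the latency budget drive residual loss toward zero:

```python
def expected_overhead(loss):
    """Average retransmission overhead at loss probability `loss`:
    1/(1-p) transmissions per packet on average, i.e. p/(1-p) extra."""
    return loss / (1.0 - loss)

def residual_loss(loss, attempts):
    """Chance a packet is still missing after `attempts` independent
    sends (the original plus retransmissions)."""
    return loss ** attempts

# At 40% loss: ~67% average overhead; three sends leave 6.4% residual,
# and ten sends leave roughly one packet in ten thousand.
print(expected_overhead(0.4), residual_loss(0.4, 3), residual_loss(0.4, 10))
```

This is a simplification – real links see bursty rather than independent loss, which is exactly why the headroom beyond the average matters.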

Watch now!
Speakers

Kieran Kunhya
Director, RIST Forum
Founder & CEO, Open Broadcast Systems

Video: Preparing for 5G Video Streaming

Will streaming really be any better with 5G? What problems won’t 5G solve? Just a couple of the questions in this panel from the Streaming Video Alliance. There are so many aspects of 5G which are improvements, it can be very hard to clearly articulate for a given use case which are the main ones that matter. In this webinar, the use case is clear: streaming to the consumer.

Moderating the session, Dom Robinson kicks off the conversation by asking the panellists to dig below the hype and talk about what 5G means for streaming right now. Brian Stevenson is first up, explaining that the low-band 5G option is really useful as it allows operators to roll out 5G offerings with the spectrum they already have and, given its low frequency, get a decent propagation distance. In the low frequencies, 5G can still give a 20% improvement in bandwidth. Whilst this is a good start, he continues, it’s really in the mid-band – where bandwidth is six times higher – that we can start enabling the applications discussed in the rest of the talk.

Humberto La Roche from Cisco says that, in his opinion, the focus needs to be on low latency. Latency at the network level is reduced around 10x when working at millimetre wavelengths. This is important even for video on demand. He points out, though, that delay happens within the IP network fabric as well as in the 5G protocol itself and the wavelength it’s working on. Adding buffers into the network drives down the cost of that infrastructure, so it’s important to look at ways of delivering the overall latency needed at a reasonable cost. We also hear from Sanjay Mishra, who explains that some telcos are already deploying millimetre wavelengths and focussing on advancing edge compute in high-density areas as their differentiator.

The panel discusses the current technical challenges for operators. Thierry Fautier draws from his experience of watching sports in the US on his mobile devices. The US has a zero-rating policy, he explains, where a mobile operator waives all data charges when you use a certain service, but only delivers the video at SD resolution at 1.5 Mbps. Whilst the benefits of this are obvious, it means that as people buy new, often larger phones with better screens, they expect to reap the benefits. At SD, Thierry says, you can’t see the ball in tennis, so 5G will offer the over-the-air bandwidth needed to allow the telcos to offer HD as part of these deals.

Preparing for 5G Video Streaming from Streaming Video Alliance on Vimeo.

The panel discusses the problems seen so far in delivering MBMS – multicast for mobile networks. MBMS has been deployed sporadically around the world in current LTE networks (using eMBMS) but has faced a typical chicken and egg problem. Given that both cell towers and mobile devices need to support the technology, it hasn’t been worth the upgrade cost for the telcos given that eMBMS is not yet supported by many chipsets including Apple’s. Thierry says there is hope for a 5G version of MBMS since Apple is now part of the 3GPP.

CMAF faced a similar chicken-and-egg situation: when it was finalised, there was hesitance in using it because Apple didn’t support it. Now, with iOS 14 supporting HLS in CMAF, there is much more interest in deploying such services. This is just as well, cautions Thierry, as all the talk of reduced latency in 5G or in the network itself won’t solve the main source of streaming latency, which lives at the application layer. If services don’t move from traditional HLS/DASH to LL-HLS and LL-DASH/CMAF, the improvements in latency lower down the stack will deliver only minimal benefits to the viewer.

Sanjay discusses the problem of coverage and penetration. “All cell towers are not created equal”, and the challenge will remain of how far and wide coverage extends.

The panel finishes looking at what’s to come and suggests more ‘federations’ of companies working together, both commercially and technically, to deliver video to users in better ways. Thierry sums up the near future as providing higher quality experiences, making in-stadia experiences great and enabling immersive video.

Watch now!
Speakers

Brian Stevenson
SME,
Streaming Video Alliance
Humberto La Roche
Principal Engineer,
Cisco
Sanjay Mishra
Associate Fellow,
Verizon
Thierry Fautier
President-Chair at Ultra HD Forum
VP Video Strategy at Harmonic
Moderator: Dom Robinson
Co-Founder, Director, and Creative Firestarter
id3as

Video: State of the Streaming Market

Streaming Media commissioned an extra mid-year update to their ‘State of the Streaming Market’ survey in order to understand how the landscape has changed due to COVID-19. With a survey already carried out once this year, this special Autumn edition captures the rapid changes we’ve been seeing.

Tim Siglin talks us through the results of the survey ahead of the full report being published. Since the last round of questioning, the balance of live vs OTT content among the businesses that responded has swung around 5% in favour of live. The survey indicates that 65% of streaming infrastructure will be software-defined within 24 months, with some adopting a hybrid approach initially.

Tim also unveils a very striking graphic showing that 56% of respondents see the internet as their company’s main way of transporting video over IP, dwarfing the other answers. The biggest of the rest is CDN, at 25%, which covers delivery to a CDN by dedicated links or by internet links within the cloud.

Zixi is part of both the RIST Forum and the SRT Alliance, which indicates they understand the importance of multi-protocol workflows. The streaming industry is of the same opinion, with more than two-thirds of respondents expecting to be using multiple protocols over the next twelve months.

Looking at the benefits of moving to the cloud, flexibility is number one, cost savings number three and supporting a virtualised workforce number five. Tim mentions surprise at seeing a remote workforce only at number five, but suggests that without the pandemic it would not have entered the top five at all. This seems quite reasonable: whatever your motivation for starting to use the cloud, flexibility is nearly always going to be one of the key benefits.

Reliability was ranked number two among the benefits of moving to the cloud. The reasons people chose it were fairly evenly split, with the exception of uptime at 39%; Quality of Service, Quality of Experience and cost all came in around 20%.

Tim Siglin and Gordon Brooks discuss how 5G will impact the industry. Gordon gives a business-to-business example of how they are currently helping a broadcaster contribute into the cloud and then deliver to an end-point, all with low latency. He sees these links as some of the first to ‘go 5G’. In terms of the survey, people see ‘in-venue delivery’ as half as likely to be useful for video streaming as distribution to the consumer or general distribution. Tim finishes by saying that although 5G could well be impactful for streaming, we need to see how much of the hype the operators actually live up to before planning too many projects around it.

Watch now!
Speakers

Tim Siglin
Founding Executive Director
HelpMeStream
Gordon Brooks
CEO
Zixi
Moderator: Eric Schumacher-Rasmussen
Editor, Streaming Media

Video: AES67/SMPTE ST 2110 Audio Transport & Routing (NMOS IS-08)

Let’s face it, SMPTE ST 2110 isn’t trivial to get up and running at scale. It carries audio as AES67, though with some restrictions which can cause problems for full interoperability with non-2110 AES67 systems. But once all of this is up and running, you’re still lacking discoverability, control and management. These aspects are covered by AMWA’s NMOS IS-04, IS-05 and IS-08 projects.

Andreas Hildebrand, Evangelist at ALC NetworX, takes the stand at the AES exhibition to explain how this can all work together. He starts by reiterating one of the main benefits of the move to 2110 over 2022-6, namely that audio devices don’t need to receive the whole video signal and de-embed the audio from it. With a dependency on PTP, SMPTE ST 2110-30 and -31 define carriage of AES67 and AES3 respectively.

We take a look at IS-04 and IS-05, which define registration, discovery and connection management. Using an address usually received from DHCP, new devices on the network register an entry in an IS-04 registry, which can then be queried via an API to find out which senders and receivers are available in the system. IS-05 uses this information to create connections between devices: a controller issues a connection request to the endpoints, and it’s up to the endpoints themselves to establish the stream as appropriate.
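To make the flow concrete, here is a minimal sketch of what a controller does with these two APIs: list senders from the IS-04 Query API, then build the body it would PATCH to a receiver’s IS-05 staged endpoint. The registry URL is hypothetical, and the exact fields should be checked against the IS-05 schemas:

```python
# Hypothetical registry location; real deployments discover this via DNS-SD.
QUERY_API = "http://registry.example.com/x-nmos/query/v1.2"

def senders_url():
    """IS-04 Query API endpoint listing every sender known to the registry."""
    return f"{QUERY_API}/senders"

def staged_patch_body(sender_id, sdp_text):
    """Sketch of the body a controller would PATCH to an IS-05 receiver's
    .../single/receivers/{id}/staged endpoint to connect it to a sender.
    Field names follow the IS-05 staged-parameters model."""
    return {
        "sender_id": sender_id,
        "master_enable": True,
        "activation": {"mode": "activate_immediate"},
        "transport_file": {"type": "application/sdp", "data": sdp_text},
    }
```

The receiver then joins the multicast group described in the SDP itself, which is what Andreas means by the endpoints establishing the stream.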

Once a connection has been made, there remains the problem of audio channel mapping. Andreas uses the example of a single stream containing multiple channels: where a device only needs one or two of them, IS-08 can be used to tell the receiver which channels it should be decoding. This is ideal when delivering audio to a speaker. Andreas then walks us through worked examples.
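The speaker example amounts to a small routing table: pick two channels out of a many-channel input and assign them to a stereo output. The sketch below illustrates the kind of mapping IS-08 expresses; the output name is made up and the field names are simplified from the IS-08 channel-mapping model:

```python
def stereo_map(input_id, left_ch, right_ch):
    """Simplified channel map in the spirit of IS-08: route two channels
    of a multi-channel input to a two-channel output. The output name
    and structure are illustrative, not the exact IS-08 schema."""
    return {
        "output-stereo": {
            "0": {"input": input_id, "channel_index": left_ch},   # left
            "1": {"input": input_id, "channel_index": right_ch},  # right
        }
    }

# A speaker taking channels 4 and 5 from a hypothetical 64-channel stream:
mapping = stereo_map("input-64ch", 4, 5)
```

The receiver decodes only the channels named in the map, rather than the whole stream.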

Watch now!
Speaker

Andreas Hildebrand
Ravenna Technology Evangelist,
ALC NetworX