Video: RIST for high-end live media workflows

RIST overcomes the propensity of the internet to lose packets. It makes possible very-high-bandwidth, low-latency contribution over the internet into a studio or directly into the cloud as part of a streaming workflow. Broadcasters have long dreamed of using the increasingly ubiquitous internet to deliver programmes at a lower cost than fixed lines, satellite or microwave. For a long time, FEC was the main line of defence, but its limits meant the internet still wasn’t an appetising option. Now, with RIST, the internet is a safe medium for contribution. As ever, two paths are advised!

In this talk, Love Thyresson explains how NetInsight use RIST to deliver high-bandwidth contribution for their customers. Love focusses on lower-tier sports events which would attract an audience, albeit a small one. Small audiences mean small budgets, so if you can’t use the internet to get the game back to your production centre, the costs – often just the connectivity – are too high to make the programme viable. So whether we are trying to cut costs on a big production or make new programming viable (which might even be the catalyst for a whole new business model or channel), internet contribution is the only way to go.

Love talks about RIST’s extension to the standard RTP sequence number which, on high-bandwidth streams, quickly runs out of numbers. Expanding it from 16 to 32 bits allows far more packets to be delivered before the counter has to wrap back to zero. Indeed, it’s this extra capacity which allows the RIST Main Profile to deliver JPEG 2000 or JPEG XS. JPEG XS, in particular, is key to modern remote-production workflows. Ingest into the cloud may end up being the most common use for RIST despite the high-value use cases for delivering from events to broadcasters or between broadcasters’ buildings.
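To get a feel for why a 16-bit counter runs out so quickly at contribution bitrates, here’s a rough back-of-envelope sketch. The packet size is an assumption (the typical 7 MPEG-TS packets per RTP packet); the bitrate is an invented example, not a figure from the talk:

```python
# Illustration of how quickly a 16-bit packet counter wraps at contribution
# bitrates, and how much headroom a 32-bit counter buys. Assumes a typical
# payload of 7 TS packets (7 * 188 = 1316 bytes) per RTP packet.

def wrap_time_seconds(bitrate_bps: float, seq_bits: int, payload_bytes: int = 1316) -> float:
    """Seconds until the sequence counter wraps around to zero."""
    packets_per_second = bitrate_bps / (payload_bytes * 8)
    return (2 ** seq_bits) / packets_per_second

# A 200 Mbit/s JPEG XS-class stream:
print(f"16-bit wrap: {wrap_time_seconds(200e6, 16):.1f} s")          # ~3.4 s
print(f"32-bit wrap: {wrap_time_seconds(200e6, 32) / 3600:.1f} h")   # ~62.8 h
```

At roughly 19,000 packets per second, 65,536 sequence numbers last only a few seconds – very little window for retransmission bookkeeping – whereas 2^32 lasts over two and a half days.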

After a quick retransmission 101, Love Thyresson closes by looking at the features available now in the simple and main profile of RIST.

For more information, have a look at this article or these videos

Watch now!
Speakers

Love Thyresson
Former Head of Internet Media Transport, NetInsight

Video: Futuristic Codecs and a Healthy Obsession with Video Startup Time

The next 12 months will see 3 new MPEG standards released. What does this mean for the industry? How useful will they be and when can we start using them? MPEG is coming to market with a range of commercial models to show it’s learning from the mistakes of the past, so it will be interesting to see the adoption levels in the year after release. This is part of the second session of the Vienna Video Tech Meetup and also delves into startup time for streaming services.

In the first talk, Dr. Christian Feldmann explains the current codec landscape, highlighting the ubiquitous AVC (H.264), UHD’s friend HEVC (H.265), and the newer VP9 & AV1. The latter two differentiate themselves by being free to use and open, particularly AV1. Whilst slow to encode, both are seeing increasing adoption in streaming, but no one’s suggesting that AVC isn’t still the go-to codec for most online streaming.

Christian then introduces the three new codecs, EVC (Essential Video Coding), LCEVC (Low-Complexity Enhancement Video Coding) and VVC (Versatile Video Coding), all of which have different aims. We start by looking at EVC, whose aim is to replicate the encoding efficiency of HEVC but, importantly, with a royalty-free baseline profile as well as a main profile which improves efficiency further in exchange for royalties. This is the first time an MPEG codec can be used in a way that eliminates liability for royalty payments. There is further protection in that if any of the tools is found to have patent problems, it can be individually turned off, the idea being that companies can have more confidence in deploying the new technology.

The next codec in the spotlight is LCEVC, which uses an enhancement technique to encode video. The aim of this codec is to enable lower-end hardware to access high resolutions and/or lower bitrates. This can be useful in set-top boxes and for online streaming, but also for non-broadcast applications like small embedded recorders. It achieves a slight improvement in compression over HEVC at a fraction of the computational cost; HEVC is well known to be computationally heavy.

LCEVC reduces computational needs by encoding only a lower-resolution version (say, SD) of the video in a codec of your choice, whether that be AVC, HEVC or otherwise. The decoder then decodes this and upscales the video back to the original resolution, HD in this example. This would normally look soft, but LCEVC also sends enhancement data to add back the edges and detail that would otherwise have been lost. This can be done on the CPU whilst the base decoding is handled by the dedicated AVC/HEVC hardware, and naturally encoding/decoding a quarter-resolution image is much easier than the full resolution.
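The base-plus-enhancement idea can be sketched in a few lines. This is a toy model of the concept, not the actual LCEVC bitstream – real LCEVC quantises and compresses the residual rather than sending it raw:

```python
# Conceptual sketch of the LCEVC approach described above: encode a
# low-resolution base, upscale it at the decoder, and carry the lost detail
# separately as an enhancement layer. Toy 1D "scanline" stands in for video.

def downscale(samples):
    # Average each pair of samples -> half-resolution "base" signal
    return [(a + b) / 2 for a, b in zip(samples[::2], samples[1::2])]

def upscale(samples):
    # Nearest-neighbour upscale back to full resolution (looks "soft")
    return [s for s in samples for _ in range(2)]

scanline = [10, 12, 50, 52, 90, 88, 20, 22]   # stand-in for one line of pixels
base = downscale(scanline)                    # what the base codec (e.g. AVC) carries
predicted = upscale(base)                     # decoder-side upscale: detail lost
enhancement = [o - p for o, p in zip(scanline, predicted)]   # residual layer
reconstructed = [p + e for p, e in zip(predicted, enhancement)]

assert reconstructed == scanline   # lossless here; real LCEVC quantises the residual
```

The heavy lifting (the base codec) runs at a quarter of the pixel count, while the cheap add-back of the residual can run on a general-purpose CPU.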

Lastly, VVC goes under the spotlight. This is the direct successor to HEVC and is also known as H.266. VVC naturally has the aim of improving compression over HEVC by the traditional 50% target, but also has important optimisations for more types of content, such as 360-degree video and screen content like video games.

To finish this first Vienna Video Tech Meetup, Christoph Prager lays out the reasons he thinks everyone involved in online streaming should obsess about video startup time, which he defines as the time between pressing play and seeing the first frame of video. The assumption is that the longer the wait, the more users won’t bother watching. To understand what video streaming should aim for, he examines Spotify, who have always had the goal of bringing audio start time down to 200ms. Christoph points to this podcast for more details on what Spotify has done to optimise this metric, which includes activating GUI elements before, strictly speaking, they can do anything because the audio still hasn’t loaded. This creates a feeling of immediacy, and perception is half the battle.

“for every additional second of startup delay, an additional 5.8% of your viewership leaves”

Christoph also draws on Akamai’s 2012 white paper which, among other things, investigated how startup time puts viewers off. He also cites research from Snap who found that within 2 seconds, the entire audience for a video would be gone. Snap, of course, specialises in very short videos, but taken with the right caveats, this could indicate that Akamai’s numbers would be higher if the research were repeated in 2020. Christoph finishes up by looking at the individual components which add latency to the user experience: player startup time, DRM load time, ad load time and ad tag load time.
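As a back-of-envelope illustration of how those components stack up: the millisecond figures below are invented for illustration, and only the 5.8%-per-second rule of thumb comes from the quote above:

```python
# Hypothetical startup-time budget built from the components Christoph lists,
# with the quoted rule of thumb (5.8% of viewers lost per additional second)
# compounded over the total. All millisecond figures are invented.

components_ms = {
    "player_startup": 300,
    "drm_load": 250,
    "ad_tag_load": 400,
    "ad_load": 600,
}

total_s = sum(components_ms.values()) / 1000
viewers_retained = (1 - 0.058) ** total_s   # compound the per-second loss

print(f"Total startup: {total_s:.2f} s")
print(f"Viewers retained: {viewers_retained:.1%}")
```

Even this modest budget already costs nearly a tenth of the audience, which is why shaving each individual component matters.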

Watch now!
Speakers

Dr. Christian Feldmann
Team Lead Encoding,
Bitmovin
Christoph Prager
Product Manager, Analytics
Bitmovin
Markus Hafellner
Product Manager, Encoding
Bitmovin

Video: RIST: Enabling Remote Work with Reliable Live Video Over Unmanaged Networks

Last week’s article on RIST, here on The Broadcast Knowledge, stirred up some interest about whether we view RIST as being against SRT & Zixi, or as an evolution thereof. Whilst that talk covered the use of RIST and the reasons one company chose to use it, this talk explains what RIST achieves in terms of features, showing that it has ‘Simple’ and ‘Main’ profiles which bring different features to the table.

Rick Ackermans is the chair of the RIST Activity Group, the group that develops the specifications. Rick explains some of the reasons motivating people to look at the internet and other unmanaged networks to move their video. The traditional circuit-based contribution and distribution infrastructure on which broadcasting relied has high fixed costs. Whilst this can be fully justifiable for transmitter links, though still expensive, for other ad-hoc circuits you are paying all the time for something which is only occasionally used. Meanwhile, satellite capacity in the C-band is shrinking, squeezing people out. And, of course, remote working is much in the spotlight, so technologies like RIST which don’t have high latency (unlike HLS) are in demand.

RIST manages to solve many of the problems with using the internet such as protecting your content from theft and from packet loss. It’s a joint effort between many companies including Zixi and Haivision. The aim is to create choice in the market by removing vendor bias and control. Vendors are more likely to implement an open specification than one which has ties to another vendor so this should open up the market creating more demand for this type of solution.

In the next section, we see how RIST as a group is organised and how it fits into the Video Services Forum (VSF). We then look at the profiles available in RIST. A full implementation is a 3-layer onion with the ‘Simple Profile’ at its core. This has basic network resilience and interoperability. On top of that, the ‘Main Profile’ is built, which adds encryption, authentication and other features. The future sees an ‘Enhanced Profile’ which may bring with it channel management.

Rick then dives down into each of these profiles to uncover the details of what’s there and explain the publication status. The Simple Profile allows full RTP interoperability for use as a standard sender, but also adds packet recovery plus seamless switching. The Main Profile introduces the use of GRE tunnels, where a single connection is set up between two devices. Like a cable, multiple signals can then be sent down it together. From an IT perspective this makes life much easier: the number of streams is totally transparent to the network, so firewall configuration, for example, is all the simpler. It also means that by encrypting just the tunnel, everything inside is encrypted with no further complexity. Encryption works better on higher-bitrate streams so, again, running it on the aggregate has benefits over running it on each stream individually. Rick talks about the encryption modes available, DTLS and pre-shared key, as well as the all-important, but often neglected, step of authentication – ensuring you are sending to the endpoint you expect to be sending to.
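The multiplexing idea behind the tunnel can be sketched with a toy framing. To be clear, the 4-byte header here (stream id plus length) is invented for illustration; real RIST Main Profile uses standard GRE encapsulation, which has its own header layout:

```python
import struct

# Toy illustration of the Main Profile tunnelling idea: several independent
# streams share one connection, so the network sees a single flow, and
# encrypting that one flow protects everything inside it.

def mux(stream_id: int, packet: bytes) -> bytes:
    # Prefix each packet with a 2-byte stream id and 2-byte length
    return struct.pack("!HH", stream_id, len(packet)) + packet

def demux(frame: bytes):
    # Recover the stream id and original packet at the far end
    stream_id, length = struct.unpack("!HH", frame[:4])
    return stream_id, frame[4:4 + length]

rtp_packet = b"\x80\x21" + b"\x00" * 10   # stand-in for an RTP packet
tunnelled = mux(7, rtp_packet)            # what actually crosses the network
assert demux(tunnelled) == (7, rtp_packet)
```

Because every stream travels inside the same outer flow, the firewall only ever has to permit one port, and one encryption context covers the lot.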

The last part of the talk covers interoperability, including a comparison between RIST and SRT. Whilst there are many similarities, Rick claims RIST can cope with higher percentages of packet loss. He also says that SMPTE 2022-7 doesn’t work with SRT, though The Broadcast Knowledge is aware of interoperable implementations which do allow 2022-7 to work even through SRT. The climax of this section is the setup of the RIST NAB demo, a multi-vendor, international demo which proved the reliability claims. Rick finishes by examining some case studies and with a Q&A.

Watch now!
Speakers

Merrick “Rick” Ackermans
MVA Broadcast Consulting
RIST Activity Group Chair

Video: RIST and Open Broadcast Systems

RIST is a streaming protocol which allows lossy networks such as the internet to be used for critical streaming applications. Called Reliable Internet Stream Transport, it uses ARQ (Automatic Repeat reQuest) retransmission technology to request any data that is lost by the network, creating reliable paths for video contribution.

In this presentation, Kieran Kunhya from Open Broadcast Systems explains why his company has chosen the RIST protocol for their software-based encoders and decoders. Their initial solution for contributing news, sports and linear channels over the public internet was based on FEC (Forward Error Correction), a technique for controlling transmission errors by sending data redundantly using an error-correcting code. However, FEC couldn’t cope with large burst losses, interoperability was limited and the implementation was complex. Protecting the stream by sending the same feed over multiple paths and/or sending a delayed version of the stream on the same path carried a heavy bandwidth penalty. This prompted them, instead, to implement an ARQ technique based on RFC 4585 (Extended RTP Profile for Real-time Transport Control Protocol-Based Feedback), which gave them functionality quite similar to basic RIST functionality.
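The core of the receiver side of such a feedback loop is gap detection. Here is a minimal sketch of the idea (an illustration only, not Open Broadcast Systems’ implementation): track the next expected RTP sequence number and, on seeing a jump, list the missing packets so the sender can retransmit them.

```python
# Minimal sketch of NACK-style gap detection for an ARQ receiver.
SEQ_SPACE = 65536  # RTP sequence numbers are 16-bit

def detect_losses(expected_next: int, received_seq: int):
    """Return (missing_list, new_expected_next), handling wrap-around."""
    missing = []
    seq = expected_next
    while seq != received_seq:
        missing.append(seq)
        seq = (seq + 1) % SEQ_SPACE
    return missing, (received_seq + 1) % SEQ_SPACE

nacks, expected = detect_losses(100, 103)   # packets 100-102 never arrived
print("NACK for:", nacks)                   # NACK for: [100, 101, 102]
```

A real implementation would batch these into RTCP feedback packets and give up on losses older than the receiver’s buffer, but the gap-detection logic is this simple at heart.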

Key to the discussion, Kieran explains why they decided not to adopt the SRT protocol. As SRT is based on a file-transfer protocol, it’s difficult or impossible to add features like bonding, multi-network and multi-point support, which were available in RIST from day one. Moreover, RIST has a large IETF heritage from other industries and is vendor-independent. In Kieran’s opinion, SRT will become a prosumer solution (similar to RTMP now for streaming) and RIST will be the professional solution (analogous to MPEG-2 Transport Streams).

Different applications for the RIST protocol are discussed, including 24/7 linear channels for satellite uplink from playout, interactive (two-way) talking heads for news, high bitrate live events and reverse vision lines for monitoring purposes. Also, there is a big potential for using RIST in cloud solutions for live broadcast production workflows. Kieran hopes that more broadcasters will start using spin-up and spin-down cloud workflows, which will help save space and money on infrastructure.

Interestingly, Open Broadcast Systems are not currently interested in the RIST Main Profile (whose main advantages are support for encryption, authentication and in-band data). Kieran explains that to control devices in remote locations you need some kind of off-the-shelf VPN anyway. These systems provide encryption and NAT traversal, so the problem is solved at a different layer of the OSI model, and this gives customers more control over the type of encryption they want.

Watch now!

Speaker

Kieran Kunhya
Founder and CEO,
Open Broadcast Systems