Video: Performance Measurement Study of RIST


RIST solves a problem by transforming unmanaged networks into reliable paths for video contribution. This comes amidst increasing interest in using the public internet to contribute video and audio: partly because it is cheaper than dedicated data circuits, partly because the internet is accessible from so many locations that it is convenient, and partly because, when feeding cloud-based streaming platforms, the internet is by definition part of the signal path.

Packet loss and packet delay are common on the internet and there are only two ways to compensate for them. One is Forward Error Correction (FEC), which permanently increases your bandwidth – by up to 25% – so that the receiver can work out which packets are missing and reconstruct them. The other is for the receiver to ask for lost packets to be sent again.
RIST joins a number of other protocols in using the re-request method to add resilience to streams, which has the benefit of increasing the bandwidth needed only when re-requests actually occur.
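The re-request approach hinges on the receiver spotting gaps in the RTP sequence numbers. As a minimal sketch – not the RIST wire format, and with illustrative names – gap detection looks something like this:

```python
# Hedged sketch of how a NACK-style receiver spots missing RTP packets.
# RTP sequence numbers are 16-bit and wrap around, so comparisons are
# done modulo 2**16; a "gap" larger than half the space is treated as
# an old, reordered packet rather than a loss.

def find_missing(expected_next: int, received_seq: int) -> list[int]:
    """Return the sequence numbers skipped between the packet we
    expected and the packet that actually arrived."""
    gap = (received_seq - expected_next) % 2**16
    if gap == 0 or gap > 2**15:  # in order, or a late/reordered packet
        return []
    return [(expected_next + i) % 2**16 for i in range(gap)]

# We expected packet 1000 but packet 1003 arrived, so 1000-1002
# are candidates to re-request via RTCP.
print(find_missing(1000, 1003))  # -> [1000, 1001, 1002]
```

In the real protocol, the list of missing packets is carried back to the sender in RTCP messages; the sketch above only shows the detection step.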

In this talk, Ciro Noronha from Cobalt Digital explains that RIST is an attempt to create an interoperable protocol for reliable live streaming, one which works with any RTP stream. Protocols like SRT and Zixi are, to one extent or another, proprietary – although it should be noted that SRT is open source and hence has a base level of interoperability. RIST takes interoperability one stage further by publishing a specification, the first part of which is TR-06-1, also known as the ‘Simple Profile’.

We then see the basics of how the protocol works and how it uses RTCP for signalling. Furthermore, RIST’s support for bonding is explored, along with the impact of packet reordering on stream performance.

The talk finishes with a look at what’s to come, in particular encryption, an important feature that SRT currently offers over and above reliable transport.
Watch now!

To dig into SRT, check out this talk from Chris Michaels
For more on RIST, have a look at Kieran Kunhya’s talk and Rick Ackerman’s introduction to RIST.

Speaker

Ciro Noronha
Director of Technology, Compression Systems,
Cobalt Digital

Video: AV1/VVC Update

AV1 and VVC are both new codecs on the scene. Codecs touch our lives every day, both at work and at home; they are the only way anyone receives audio and video online or on television. Altogether, then, they’re pretty important, and finding better ones generates a lot of opinion.

So what are AV1 and VVC? VVC is one of the newest codecs on the block and is undergoing standardisation in MPEG. VVC builds on the technologies standardised by HEVC but adds many new coding tools. The standard is likely to enter draft phase before the end of 2019, resulting in it being officially standardised around a year later. For more on VVC, check out Bitmovin’s VVC intro from Demuxed.

AV1 is a new but increasingly well-known codec, famous for being royalty free and backed by Netflix, Apple and many other big hyperscale players. There have been reports that, although no royalty is levied on the codec itself, patent holders have still approached big manufacturers to discuss financial reimbursement, so its ‘free’ status is a matter of debate. Whilst there is a patent defence programme, it is not known whether it is sufficient to insulate larger players. Much further on than VVC, AV1 has already had a code freeze, and companies such as Bitmovin have been working hard to reduce encode times – widely known to be very long – and to create live services.

Here, Christian Feldmann from Bitmovin gives us the latest status on AV1 and VVC. Christian discusses AV1’s tools before turning to VVC’s, pointing out the similarities between them. Whilst AV1 is already supported in well-known browsers, VVC support is only at the beginning.

There’s a look at the licensing status of each codec, followed by an introduction to EVC – Essential Video Coding – which has a royalty-free baseline profile and is therefore of interest to many. Christian shares results from a Technicolor experiment.

Speakers

Christian Feldmann
Codec Engineer,
Bitmovin

Video: QUIC in Theory and Practice


Most online video streaming uses HTTP to deliver video to the player in the same way web pages are delivered to the browser. So QUIC – the new transport protocol that replaces TCP underneath HTTP – will affect us both professionally and personally.

This video explains how HTTP works and takes us on the journey to seeing why HTTP over QUIC (which will be called HTTP/3) speeds up the process of requesting and delivering files. Simply put, there are ways to reduce the number of times messages have to pass between the player and the server, which reduces overall overhead. One big win is the move away from TCP to UDP.
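One of those round-trip savings is connection setup. The arithmetic below is a back-of-the-envelope sketch using the commonly cited handshake round-trip counts (TCP plus TLS 1.2 versus QUIC’s combined transport-and-crypto handshake); real handshakes vary with TLS version and session resumption:

```python
# Rough illustration of why fewer handshake round trips matter.
# Round-trip counts are the commonly cited figures, not measurements:
#   TCP handshake (1 RTT) + TLS 1.2 handshake (2 RTTs) = 3 RTTs
#   QUIC fresh connection: 1 RTT (transport and crypto combined)
#   QUIC 0-RTT resumption: request data sent in the first flight

def setup_time_ms(rtt_ms: float, round_trips: int) -> float:
    """Time spent on handshakes before the first request can be sent."""
    return rtt_ms * round_trips

rtt = 50.0  # a plausible long-haul round-trip time in milliseconds
print(setup_time_ms(rtt, 3))  # TCP + TLS 1.2: 150 ms before any data
print(setup_time_ms(rtt, 1))  # QUIC, fresh connection: 50 ms
print(setup_time_ms(rtt, 0))  # QUIC 0-RTT resumption: 0 ms
```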

Robin Marx delivers these explanations with reference to superheroes and very clear diagrams, making this low-level topic pleasantly accessible and interesting.

There are plenty of examples showing easy-to-see gains in website speed using QUIC over both HTTP and HTTP/2, but QUIC’s worth in the realm of live streaming is not yet clear. There are studies showing it makes streaming worse, but also ones showing it helps. Video players have a lot of logic in them and are the result of much analysis, so it wouldn’t surprise me at all to see the state of the art move forward, for players to optimise for QUIC delivery, and then for tests to show an improvement with QUIC streaming.

QUIC is coming, one way or another, so find out more.
Watch now!

Speaker

Robin Marx
Web Performance Researcher,
Hasselt University

Video: Optimizing ABR Encode, Compute & Control for Performance & Quality

Adaptive bitrate (ABR) streaming is vital for effective delivery of video to the home, where bandwidth varies over time. It requires creating several different renditions of your content at various bitrates, resolutions and even frame rates. These multiple encodes put a computational burden on the transcode stage.

Lowell Winger explains ways of optimising ABR encodes to reduce the computation needed to create these different versions. One approach is to take encoding decisions made for one rendition and reuse them in the others. This allows decisions made on high-resolution versions – where the extra detail informs the decision more fully – to guide the low-resolution encodes, where the same decision would otherwise be made with far less information.
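As a hedged sketch of the idea (illustrative only, not Lowell Winger’s specific method): motion vectors found while encoding the highest-resolution rendition can be scaled down and used as search starting points for lower renditions, so each lower encode refines a good guess rather than searching from scratch.

```python
# Illustrative sketch of sharing motion-estimation work across an
# ABR ladder: scale a motion vector found at a high resolution down
# to seed the search at a lower resolution. The function name and
# reuse strategy are assumptions for illustration.

def scale_motion_vector(mv: tuple[int, int],
                        src_res: tuple[int, int],
                        dst_res: tuple[int, int]) -> tuple[int, int]:
    """Scale a (dx, dy) motion vector from one resolution to another."""
    sx = dst_res[0] / src_res[0]
    sy = dst_res[1] / src_res[1]
    return round(mv[0] * sx), round(mv[1] * sy)

# A vector found at 1080p becomes a search seed for the 360p rendition.
print(scale_motion_vector((24, -9), (1920, 1080), (640, 360)))  # -> (8, -3)
```

In practice an encoder would refine the seeded vector with a small local search, which is far cheaper than a full-range search at every rendition.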

This talk is the type of deep dive into encoding techniques that you would expect from the Video Engineering Summit which happens at Streaming Media East.

Watch now!

Speaker

Lowell Winger
Former Senior Director of Engineering,
IDT Inc.