Video: RTMP: A Quick Deep-Dive

RTMP hasn’t left us yet, though between HLS, DASH, SRT and RIST, the industry is doing its best to get rid of it. In its day, RTMP’s latency was seen as low and it became a de facto standard. Since it hasn’t gone away, it pays to take a little time to understand how it works.

Nick Chadwick from Mux is our guide in this ‘quick deep-dive’ into the protocol itself. To start off he explains the history of the Adobe-created protocol to help put into context why it was useful and how the specification that Adobe published wasn’t quite as helpful as it could have been.

Nick then gives us an overview of the protocol, explaining that it’s TCP-based and allows for multiple, bi-directional streams. RTMP multiplexes larger messages, such as video, alongside very short messages, such as RPC requests, by breaking them down into chunks which can be interleaved over a single TCP connection. Multiplexing at this level lets RTMP ask the other end a question while a long message is still being delivered.
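To make the chunking idea concrete, here’s a minimal Python sketch (not RTMP’s actual wire format) of how a large video message might be split into 128-byte chunks, RTMP’s default chunk size, so that a short RPC message can be slotted in between them. The chunk stream ids and function names are ours, purely for illustration.

```python
# Illustrative sketch of RTMP-style chunk interleaving (not the real wire format).
# RTMP's default maximum chunk size is 128 bytes; large messages are split so
# that short messages (e.g. an RPC call) don't have to wait behind them.

DEFAULT_CHUNK_SIZE = 128

def chunk_message(payload: bytes, chunk_size: int = DEFAULT_CHUNK_SIZE):
    """Split one message payload into chunk-sized pieces."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

def interleave(video_payload: bytes, rpc_payload: bytes):
    """Yield (chunk_stream_id, piece) tuples, letting the small RPC message
    jump the queue between chunks of the large video message."""
    video_chunks = chunk_message(video_payload)
    rpc_chunks = chunk_message(rpc_payload)
    sent_rpc = False
    for piece in video_chunks:
        yield (4, piece)                  # hypothetical chunk stream id for video
        if not sent_rpc:
            for rpc_piece in rpc_chunks:
                yield (3, rpc_piece)      # hypothetical chunk stream id for the RPC
            sent_rpc = True

# Example: a 1 KB video message and a 40-byte RPC share one TCP connection.
for csid, piece in interleave(b"\x00" * 1024, b"\x00" * 40):
    print(csid, len(piece))
```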

Nick has a great ability to make describing the protocol, complete with ASCII tables, accessible and interesting. We quickly start looking at the chunk header, with Nick explaining the different chunk types and how the headers can be compressed to save bitrate. He also describes how the RTMP timestamp works and the control message and command message mechanisms. Before taking Q&A, Nick outlines the difficulty of extending RTMP to new codecs, due to the hard-coded list of codecs it supports, and recommends improvements to the protocol. It’s worth noting that this talk is from 2017. Whilst everything about RTMP itself should still be correct, it’s worth remembering that SRT, RIST and Zixi have since taken the place of a lot of RTMP workflows.
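As a taste of the header detail Nick covers, here’s a small sketch of parsing the chunk basic header as documented in Adobe’s published specification: the top two bits select the header type (fmt 0–3, each repeating progressively less of the message header, which is where the bitrate saving comes from) and the remaining bits carry the chunk stream id. Everything beyond the basic header is omitted here.

```python
# A minimal sketch of parsing an RTMP chunk basic header, following the
# publicly documented layout. fmt selects how much of the message header
# is repeated (the header "compression" described in the talk).

MESSAGE_HEADER_SIZES = {0: 11, 1: 7, 2: 3, 3: 0}   # bytes following the basic header

def parse_basic_header(data: bytes):
    """Return (fmt, chunk_stream_id, bytes_consumed) for a chunk's basic header."""
    fmt = data[0] >> 6              # top two bits choose the header type
    csid = data[0] & 0x3F           # bottom six bits hold the chunk stream id...
    if csid == 0:                   # ...unless 0 or 1, which signal longer forms
        return fmt, data[1] + 64, 2
    if csid == 1:
        return fmt, (data[2] << 8) + data[1] + 64, 3
    return fmt, csid, 1

fmt, csid, consumed = parse_basic_header(bytes([0x03]))   # fmt 0, chunk stream 3
print(fmt, csid, MESSAGE_HEADER_SIZES[fmt])               # -> 0 3 11
```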

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux

Video: Doing Better Congestion Control with BBR & Copa

In networking there are many possible bottlenecks, but the most pervasive is congestion, caused by links operating at capacity and saturating their buffers. Full buffers can’t absorb bursts of incoming traffic, increasing the chance of dropped packets, and the extra latency added by full buffer after full buffer quickly adds up, further degrading the quality of the connection for the data that does make it through.

It’s no surprise, then, that a lot of work goes into finding the best congestion control algorithms, which allow data senders to back off when a link stops responding well. This talk, from Facebook engineer Nitin Garg, examines old and new approaches to keeping streams fast and responsive by running a 4-million-data-point test of three contenders: Cubic, BBR and Copa.


Nitin starts by introducing what we mean by ‘congestion’ and how and why it occurs. The simple example is that your computer can typically send data at up to 1Gbps, yet your uplink to the internet is likely slower than that. Congestion control is the feedback mechanism that lets your computer realise that sending at 1Gbps isn’t working and throttle back to a speed that fits within your upload bandwidth. The same is true further down the pipe: if you have a 50Mbps uplink to the internet but are sending to a server which only has 10Mbps spare, your computer needs to throttle not just below 50Mbps but below 10Mbps.
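As a rough illustration of that feedback loop, here is a toy additive-increase/multiplicative-decrease (AIMD) routine. Cubic, BBR and Copa each react to different signals and use different maths, but the shape of the loop, probe upwards and back off when the path pushes back, is the same; the rates and step sizes here are made up.

```python
# A toy AIMD loop to make the congestion-control feedback idea concrete.
# This is an illustration, not how Cubic, BBR or Copa actually compute rates.

def aimd(send_rate_mbps: float, loss_detected: bool,
         increase_mbps: float = 1.0, decrease_factor: float = 0.5) -> float:
    """Return the next send rate given feedback from the last interval."""
    if loss_detected:                             # the path (or a hop with only
        return send_rate_mbps * decrease_factor   # 10Mbps spare) dropped packets
    return send_rate_mbps + increase_mbps         # all data arrived: probe higher

rate = 50.0  # start at the uplink's nominal 50Mbps
for loss in [False, False, True, False, True, False]:
    rate = aimd(rate, loss)
    print(f"{rate:.1f} Mbps")
```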

We then walk through how Cubic, BBR and Copa work, with Nitin explaining the differences. Copa (https://web.mit.edu/copa/) is the newest of the protocols; it comes from MIT and has the unique ability to be tuned to your need: throughput or low latency. As discussed above, keeping latency down means keeping buffers close to empty, which stops you aggressively loading up links, so latency and throughput sit at opposite ends of a see-saw.
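For a flavour of how that tuning knob works, here’s a rough sketch based on the published Copa paper: the target rate is 1/(delta × queueing delay), with queueing delay estimated as the standing RTT minus the minimum RTT seen. A larger delta favours low delay, a smaller one favours throughput. The variable names and the window-adjustment step below are simplifications of ours, not Copa’s exact implementation.

```python
# A rough sketch of Copa's core rule as described in the MIT paper.
# delta is the knob Nitin mentions: larger delta -> lower delay, smaller -> more throughput.

def copa_target_rate(rtt_standing_s: float, rtt_min_s: float, delta: float) -> float:
    """Target sending rate in packets per second (simplified)."""
    queueing_delay = max(rtt_standing_s - rtt_min_s, 1e-6)   # avoid divide-by-zero
    return 1.0 / (delta * queueing_delay)

def adjust_cwnd(cwnd_pkts: float, current_rate_pps: float, target_rate_pps: float,
                delta: float, step: float = 1.0) -> float:
    """Nudge the congestion window towards the target rate (very simplified)."""
    if current_rate_pps < target_rate_pps:
        return cwnd_pkts + step / (delta * cwnd_pkts)          # grow gently
    return max(cwnd_pkts - step / (delta * cwnd_pkts), 1.0)    # shrink gently

# Example: 5 ms of queueing on top of a 40 ms base RTT, latency-leaning delta.
print(copa_target_rate(0.045, 0.040, delta=0.5))   # ~400 packets per second
```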

Nitin’s test ran on mobile phones using Facebook’s Live streaming app on Android and iOS, streaming with ABR so the app adapts to keep quality as high as possible while being willing to reduce the bitrate when needed. Testing across global markets, they measured round-trip times and the amount of data delivered. Nitin walks through the results for both latency and throughput and shows that when Copa is optimised for latency, it leads the other two protocols in latency reduction under the worst conditions.

Watch now!
Speaker

Nitin Garg
Software Engineer, Videos Infra,
Facebook

Video: What is esports? A crash course in modern esports broadcast

With an estimated global revenue of over USD 1.1 billion[1] and a global audience of almost half a billion people[2], esports is a big industry and all accounts report it as growing. Although it sounds different, when you look behind the scenes there’s actually a lot of equipment and production that a broadcaster would recognise, as we showed in the behind-the-scenes footage featured in a previous article.

Press play below as a taster before the main video to be a fly on the wall for five minutes as the tension mounts at this esports event final.

In today’s talk from the Royal Television Society, Thames Valley, we’re introduced to esports from the bottom up: what it is, who does it and which companies are involved. I think esports is special in its ability to capture the interest of the broadcast industry, but exactly what it is and how it’s structured, few actually know. That’s all changing here, with Steven “Claw” Jalicy from ESL.

Steven explains that ESL is the largest company running tournaments and competitions outside of the games publishers. Unlike sports such as tennis, athletics and football, which don’t have ‘owners’, all esports games have publishers who control the way that gaming happens and can run tournaments themselves or, in effect, franchise them out around the world.

Steven takes us through the broadcast chain. Usually held in a stadium, OB kit and temporary set-ups are nothing new to the broadcast sports community. The first change, however, is ‘in-game’. There’s more to covering esports than, say, tennis, where you can turn up with some cameras and ball-tracking kit and televise the match. Whilst doing that well is by no means trivial, esports adds further layers because human players are controlling computer characters; to capture both the real and the in-game drama, you need camera angles in the real world and within the game. These in-game camera operators are called observers and, just like real-life camera operators, their task is to capture all the action of the game. Sometimes this is done by following the players, sometimes with a bird’s-eye view, depending on the game and, as ever, the publisher.

Naturally, when you have a peak viewership of over a million people, streaming and live content distribution are really important. ESPN and, more recently, Eurosport have been airing esports, so it’s important to realise that linear distribution is very much part of the mix; esports is not an online-only thing, though most of the numbers shared are the verified streaming numbers.

Steven talks about some of the challenges ESL faces in delivering the highest quality streams with so many tournaments happening and then moving to remote operation.

ESL prefers to build its own hardware for several reasons that Steven explains, including keeping the result fully customisable and simplifying replacements. ffmpeg and other open-source encoding tools are favoured for similar reasons.

The discussion finishes off with an extensive Q&A session including the ‘sanctity’ of the players’ equipment, the threshold for choosing to use vendor equipment (EVS vs Mediakind), transport over the internet and much more.

Watch now!
[1] Statista revenue report
[2] Statista eSports audience report
Speaker

Steven “Claw” Jalicy
Global Head of Streaming,
ESL Gaming

Video: Getting Your Virtual Hands On RIST

RIST is one of a number of protocols that recover lost packets through retransmission, sometimes called backwards error correction. These are commonly used to transport media streams into content providers but are increasingly finding use in other parts of the broadcast workflow, including making production feeds, such as multiviewers and autocues, available to staff at internet-connected locations such as the home.

The RIST protocol (Reliable Internet Stream Protocol) is being created by a working group in the VSF (Video Services Forum) to provide an open and interoperable specification, available for the whole industry to adopt. This article provides a brief summary, whereas this talk from FOSDEM20 goes into some detail.

We’re led through the topic by Sergio Ammirata, CTO of DVEO, which is a member of the RIST Forum and is collaborating on the protocol. What’s remarkable about RIST is that several companies which have created their own error-correcting streaming protocols, such as DVEO’s Dozer which Sergio created, have joined together to share their experience and best practices.

Press play to watch:

Sergio starts by explaining why RIST is based on UDP – a topic explored further in this article about RIST, SRT and QUIC – and moves on to explaining how it works through ‘NACK’ messages, also known as ‘Negative Acknowledgement’ messages.
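To illustrate the general NACK mechanism (this is an illustration, not RIST’s actual RTCP wire format), here’s a small sketch: the receiver tracks sequence numbers and reports any gaps, and the sender keeps a buffer of recent packets from which it can retransmit the ones requested.

```python
# Illustrative sketch of NACK-driven retransmission, the mechanism RIST builds on.
# The receiver detects gaps in RTP-style sequence numbers and asks for them again.

from collections import OrderedDict

class RetransmitBuffer:
    """Sender side: keep recent packets so NACKed sequence numbers can be resent."""
    def __init__(self, capacity: int = 2048):
        self.capacity = capacity
        self.packets = OrderedDict()              # seq -> payload

    def store(self, seq: int, payload: bytes):
        self.packets[seq] = payload
        while len(self.packets) > self.capacity:
            self.packets.popitem(last=False)      # drop the oldest packet

    def retransmit(self, nacked_seqs):
        return [(s, self.packets[s]) for s in nacked_seqs if s in self.packets]

def missing_seqs(received_seqs):
    """Receiver side: list the gaps to request in a NACK message."""
    received = set(received_seqs)
    return [s for s in range(min(received), max(received) + 1) if s not in received]

buf = RetransmitBuffer()
for seq in range(10):
    buf.store(seq, b"payload")
print(missing_seqs([0, 1, 2, 4, 5, 8]))            # -> [3, 6, 7]
print([s for s, _ in buf.retransmit([3, 6, 7])])   # -> [3, 6, 7]
```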

We hear next about the principles of RIST, the main one being interoperability. There are two profiles: Simple and Main. Sergio outlines the Simple profile, which provides RTP, error correction and channel bonding. The Main profile, which has been published as VSF TR-06-2, adds encryption, NULL packet removal, FEC and GRE tunnelling. RIST uses a tunnel to multiplex many feeds into one stream: using Cisco’s Generic Routing Encapsulation (GRE), RIST can bring together multiple RIST streams and other arbitrary data streams into one tunnel. The idea of a tunnel is to hide complexity from the network infrastructure.
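As a toy illustration of the multiplexing idea, and emphatically not GRE itself, the sketch below simply tags each packet with the inner stream it belongs to so that several streams can share one outer connection; RIST’s Main profile does this properly with GRE encapsulation.

```python
# Toy multiplexing sketch: tag each packet with a stream id so many inner
# streams can share one outer tunnel. Real RIST Main profile uses GRE.

import struct

def encapsulate(stream_id: int, payload: bytes) -> bytes:
    """Prefix a payload with a 2-byte stream id before sending down the tunnel."""
    return struct.pack("!H", stream_id) + payload

def decapsulate(datagram: bytes):
    """Recover (stream_id, payload) at the far end of the tunnel."""
    (stream_id,) = struct.unpack("!H", datagram[:2])
    return stream_id, datagram[2:]

tunnel = [encapsulate(1, b"video rtp packet"), encapsulate(2, b"audio rtp packet")]
print([decapsulate(d)[0] for d in tunnel])   # -> [1, 2]
```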

Tunnelling allows for bidirectional data flow within one connection, meaning you can create your tunnel in one direction and send data in the opposite direction. This gets around many firewall problems, since you can set up the tunnel in whichever direction is easiest to achieve without worrying about the direction of data flow. Setting up GRE tunnels is outside the scope of RIST.

Sergio finishes by introducing librist and its demo applications, and answering questions from the audience.

Watch now!
Speaker

Sergio Ammirata
Chief Technical Officer of DVEO
Managing Partner of SipRadius LLC.