Video: Latency Still Sucks (and What You Can Do About It)

The streaming industry is on an ever-evolving quest to reduce latency, both to bring it in line with, or beat, linear broadcast and to allow business models such as gaming to flourish. When streaming started, latency of a minute or more was not uncommon and, whilst there are some simple ways to improve on that, getting down to the latency of digital TV, approximately 5 seconds, is not without challenges. Whilst a target of 5 seconds works for many use cases, it’s still not enough for auctions, gambling or ‘gamification’, which need sub-second latency.

In this panel, Jason Thibeault explores how to reduce latency with Casey Charvet from Gigcasters, Rob Roskin from CenturyLink and Haivision VP Engineering, Marc Cymontkowski. This wide-ranging discussion covers CDN caching, QUIC and HTTP/3, encoder settings, segmented vs. non-segmented streaming, and ingest and distribution protocols.

Key to the discussion is differentiating the ingest protocol from the distribution protocol. Often, just getting the content into the cloud quickly is enough to bring the latency into the customer’s budget. Marc from Haivision explains how SRT achieves low-latency contribution. SRT allows unreliable networks like the Internet to be used for reliable, encrypted video contribution. Created by Haivision and now an open-source technology with an IETF draft spec, the alliance of SRT users continues to grow as the technology continues to develop and add features. SRT is a ‘re-request’ technology, meaning it achieves its reliability by re-requesting from the sender any data it missed. This is in contrast to TCP/IP, where the receiver acknowledges every single packet and the sender retransmits anything that isn’t acknowledged. Doing it the SRT way makes the protocol much more efficient and better able to cope with real-time media. SRT can also encrypt all traffic which, when sending over the internet, is extremely important even if you’re not sending live sports. In this video, Marc makes the point that SRT also recovers the timing of the stream so that the data comes out of the SRT pipe in the same ‘shape’ as it went in. Particularly with VBR encoding, your decoder needs to receive the same peaks and troughs as the encoder created to save it having to buffer the input as much. All this included, SRT manages to deliver a transport latency of around 2.5 times the round trip time.
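The ‘re-request’ idea can be sketched in a few lines. This is a toy illustration, not the real SRT wire protocol: the receiver watches for gaps in packet sequence numbers and asks the sender to resend only what’s missing, rather than the sender pushing retransmissions for every unacknowledged packet as TCP does.

```python
# Toy sketch of NAK-style recovery (not the actual SRT protocol):
# the receiver detects gaps in sequence numbers and re-requests
# only the missing packets from the sender.

def find_missing(received_seqs, highest_seq):
    """Return the sequence numbers the receiver should re-request."""
    seen = set(received_seqs)
    return [s for s in range(highest_seq + 1) if s not in seen]

# Packets 0..9 were sent; 3 and 7 were lost in transit.
arrived = [0, 1, 2, 4, 5, 6, 8, 9]
naks = find_missing(arrived, 9)
print(naks)  # → [3, 7] — only these two packets cross the link again
```

Because only genuinely lost packets are resent, the overhead on a mostly healthy link stays small, which is part of why a transport latency of a few round trips is achievable.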

Haivision is also a member of RIST, a similar technology to SRT. Marc explains that RIST is approaching the problem from a standards perspective, taking IETF RFCs and applying them to RTP. SRT took a more pragmatic way forward, creating an open-source implementation of the features so that interoperability was guaranteed by the code itself.

The video finishes with a Q&A covering HTTP Header compression, recommended size of HLS chunks, peer-to-peer streaming and latency requirements for VoD.

Watch now!

Rob Roskin
Principal Solutions Architect,
Level3 Communications
Marc Cymontkowski
VP Engineering – Cloud,
Haivision
Casey Charvet
Managing Director,
Gigcasters
Jason Thibeault
Executive Director,
Streaming Media Alliance

Video: AV1 – A Reality Check

Released in 2018, AV1 had been a little over two years in the making at the Alliance for Open Media, founded by industry giants including Google, Amazon, Mozilla and Netflix. Since then, work has continued to optimise the toolset to bring both encoding and decoding down to real-world levels.

This talk brings together AOM members Mozilla, Netflix, Vimeo and Bitmovin to discuss where AV1 is up to and to answer questions from the audience. After some introductions, the conversation turns to 8K. The Olympics are the broadcast industry’s main driver for 8K at the moment, though it’s clear that Japan and other territories aim to follow through with further deployments and uses.

“AV1 is the 8K codec of choice” 

Paul MacDougall, Bitmovin
CES 2020 saw a number of announcements like this from Samsung regarding AV1-enabled 8K TVs. Matt Frost from Google Chrome Media explains how YouTube has found that viewer retention is higher with VP9-delivered videos, which he attributes to VP9’s improved compression over AVC leading to quicker start times, less buffering and, often, a higher resolution being delivered to the viewer. AV1 is seen as providing these same benefits over AVC without the patent problems that come with HEVC.

It’s not all about resolution, however, points out Paul MacDougall from Bitmovin. For animated content, resolution is worth having because it accentuates the lines which add intelligibility to the picture. For other content with many similar textures, grass for instance, quality through bitrate may be more useful than adding resolution. Vittorio Giovara from Vimeo agrees, pointing out that viewer experience is a combination of many factors. Though it’s trivial to say that a high-resolution screen of unintended black makes for a bad experience, it is a great reminder of what actually matters. Less obviously, Vittorio highlights the three pillars of spatial, temporal and spectral quality. Spatial is, indeed, resolution; temporal refers to frame rate; and spectral refers to bit depth and colour depth, known through HDR and Wide Colour Gamut (WCG).

Nathan Egge from Mozilla acknowledges that in their 2018 code release at NAB, the unoptimised encoder, which was claimed by some to be 3000 times slower than HEVC, was ‘embarrassing’, but this is the price of developing in the open. The panel discusses the fact that the idea of developing compression is to try out approaches until you find a combination that works well. While you are doing that, it would be a false economy to be constantly optimising. Moreover, Netflix’s Anush Moorthy points out, optimising the algorithms takes a different set of skills and, therefore, a different set of people.

Questions fielded by the panel cover whether there are any attempts to implement AV1 encoding or decoding on GPUs, power consumption, whether TVs will have hardware or software AV1 decoding, current in-production uses of AV1, and AVC vs. VVC (compression benefit vs. royalty payments).

Watch now!

Vittorio Giovara
Manager, Engineering – Video Technology,
Vimeo
Nathan Egge
Video Codec Engineer,
Mozilla
Paul MacDougall
Principal Sales Engineer,
Bitmovin
Anush Moorthy
Manager, Video and Image Encoding,
Netflix
Tim Siglin
Founding Executive Director,
Help Me Stream, USA

Video: How to Optimize Your Live Streaming Workflow

Running the live streaming for an event can be fraught, so preparation needs to be the number one priority. In this talk, Robert Reinhardt, a highly experienced streaming consultant, takes us through choosing encoders, finding out what the client wants, helping the client understand what needs to be done, choosing software and ensuring the event stays on air.

This is a wide-ranging and very valuable talk for anyone who’s going to be involved with a live streaming event. In this article, I’ll highlight 3 of the big topics nestled in with the continuous stream of tips and nuances that Rob unearths.

System Architecture. Reliability is usually a big deal for live streaming, and this needs to be a consideration not only in the streaming infrastructure in the cloud, but in contribution and the video equipment itself. No one wants a failed stream due to a failed camera, so have two. Can you afford a hardware switcher/vision mixer? Rob prefers hardware units in terms of reliability (no random OS reboots), but he acknowledges this is not always practical or possible. Audio, too, needs to be remembered and catered for. It’s always better to have black vision and hear the programme than to have silent video. Getting your streams from the event into the cloud can also be done resiliently, either by having dual streams into a Wowza server, or similar, or by having some other switching in the cloud. Rob spends some time discussing whether to use AVC or HEVC, plus the encoder manufacturers that can help.

Discovery and Budget Setting. This is the most important part of Rob’s talk. Finding out what your customer wants to achieve, in a structured, well-recorded way, is vital to ensure you meet their expectations and that their expectations are realistic. This discovery process can also be used as a way to take the customer through the options available and the decisions that need to be made. For many clients, discovery then works in both directions: once the client is fully aware of what they need, this can feed directly into setting the budget.

Discovery is about more than getting the budget right and ensuring the client has thought of all aspects of the event; it’s also vital in drawing a boundary around your work, and it lets you document the touchpoints with those who will be providing you with things like video, slides and connectivity. Rob suggests using a survey to gather this information and offers, as an example, the survey he uses with clients. This part of the talk finishes with Rob highlighting costs that you may incur and need to ensure are included. Rob has also written up his advice.

Setup and Testing. Much of the final part of the presentation is well understood by people who have done events before and is summarised as ‘test and test again’. But it’s always helpful to have this reiterated and, in this case, from the streaming angle. Rob goes through a long list of what to determine ahead of the event, what to test on-site ahead of the event and again what to test just before the event.

The talk concludes with a twenty-minute Q&A.

Watch now!

Robert Reinhardt

Video: How CBS Sports Digital streams live events at scale

Delivering high scale in streaming really exposes the weaknesses of every point of your workflow, so even for those of us who are not streaming at maximum scale, there are many lessons to be learnt. CBS Sports Digital delivered the Super Bowl following the principles of ‘practice, practice, practice’, keeping the solution as simple as possible, and prioritising the mitigation of problems over solving them.

Taylor Busch walks us through their solution, explaining how it supported these key principles and highlighting the technology used. Starting with acquisition, he covers the SDI fibre delivery to a backup facility as well as the AWS Direct Connect links for their Elemental Live encoders. The origin servers were in two different regions and each received data from both sets of encoders.

CBS used ‘output locking’, which ensures that the TS segments are aligned even across different encoders; this is done by respecting the timecode in the SDI and helps in encoder-failover situations. QVBR encoding is a method of encoding to a quality level rather than simply saying ‘7000kbps’. QVBR makes more efficient use of bandwidth since, when a scene doesn’t require a lot of bandwidth, less is sent. This variability, even if you run in capped mode to limit the bitrate of particularly complex scenes, can look like a failing encoder to some monitoring systems, so the fact that the stream is now effectively VBR needs to be understood by all the departments and companies who monitor your feed.
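Capped QVBR behaviour is easy to picture with a toy model. The numbers below are illustrative, not from CBS’s actual configuration: the encoder spends only what each scene needs to hit the quality target, but never more than the cap.

```python
# Toy model of capped QVBR: emit only the bitrate a scene 'needs' to
# reach the quality target, clamped to a maximum. Figures are invented
# for illustration, not CBS's real settings.

CAP_KBPS = 7000

def qvbr_bitrate(needed_kbps, cap_kbps=CAP_KBPS):
    """Bitrate actually emitted for a scene under capped QVBR."""
    return min(needed_kbps, cap_kbps)

# Per-scene bitrate 'needed' for the target quality level, in kbps.
scenes = [1200, 4500, 9000, 2500]
print([qvbr_bitrate(s) for s in scenes])  # → [1200, 4500, 7000, 2500]
```

That sawtooth output, dipping well below the cap on easy scenes, is exactly what can trip up monitoring that expects a steady CBR line, hence the need to brief everyone watching the feed.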

Advertising is famously important for the Super Bowl, so Taylor gives an overview of how they used the CableLabs ESAM protocol and SCTE to receive information about, and trigger, the adverts. This combined SCTE-104, ESAM and SCTE-35, as well as allowing clients to use VAST for tracking. Extra caching was provided by Fastly’s Media Shield, which tests for problems with manifests, origin servers and encoders. This fed a multi-CDN setup using four CDNs which could be switched between, with a decision point determining which CDN should answer each request.
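That per-request CDN decision point can be sketched as a weighted pick across the healthy CDNs. This is a hypothetical illustration — the CDN names, weights and health signals are all invented, not CBS’s actual routing logic:

```python
# Hypothetical multi-CDN decision point: distribute requests across
# CDNs by weight, skipping any currently marked unhealthy.
# All names and weights here are invented for illustration.

import random

def pick_cdn(weights, healthy, rng=random):
    """weights: {cdn_name: share}; healthy: set of healthy CDN names."""
    candidates = {c: w for c, w in weights.items() if c in healthy}
    if not candidates:
        raise RuntimeError("no healthy CDN available")
    names, shares = zip(*candidates.items())
    return rng.choices(names, weights=shares, k=1)[0]

# 'cdn-a' is down, so traffic is shared between the remaining two.
cdn = pick_cdn({"cdn-a": 50, "cdn-b": 30, "cdn-c": 20},
               healthy={"cdn-b", "cdn-c"})
print(cdn)  # one of cdn-b / cdn-c
```

A real deployment would feed the health set from synthetic probes and client-side metrics (the kind of data Mux-style dashboards surface), but the switch itself is just this: reweight and route around the failing CDN.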

Taylor then looks at the tools, such as Mux’s dashboard, which they used to spot problems in the system, both NOC-style tools and multiviewers. They set up three war rooms which looked at different aspects of the system: connectivity, APIs and so on. This allowed them to focus on what should be communicated, keeping ‘noise’ down to give people the space they needed to do their work while still providing the information required. Taylor then opens up to questions from the floor.

Watch now!

Taylor Busch
Senior Director Engineering,
CBS Sports Digital