Video: Mitigating Online Video Delivery Latency

Real-world solutions to real-world streaming latency feature in this panel from the Content Delivery Summit at Streaming Media East. With everyone chasing reductions in latency, many with the goal of matching traditional broadcast latencies, there are plenty of tricks and techniques at each stage of the distribution chain to speed things up.

The panel starts by surveying the way these companies are already serving video. Comcast, for example, are reducing latency by extending their network to edge CDNs. Anevia identified encoding as the biggest introducer of latency, with packaging second.

Bitmovin’s Igor Oreper talks about Periscope’s work with low-latency HLS (LHLS), explaining how Bitmovin deployed their player with Twitter and worked closely with them to ensure LHLS worked seamlessly. Periscope’s LHLS approach is documented in this blog post.

The panel shares techniques for avoiding latency, such as keeping ABR ladders small to ensure CDNs cache all the segments. Damien from Anevia points out that low latency can quickly become pointless if a stream arrives on an iPhone before it arrives on an Android; relative latency really matters and can matter more than absolute latency.
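The idea of a small ABR ladder can be made concrete with a sketch that generates an HLS master playlist from a deliberately short list of renditions, so a CDN is more likely to keep every rendition hot in cache. The ladder values and file names below are illustrative only, not a recommendation.

```python
# Sketch: build an HLS master playlist for a small ABR ladder.
# Bitrates, resolutions and URIs here are invented for illustration.

ladder = [
    {"bandwidth": 800_000,   "resolution": "640x360",   "uri": "360p.m3u8"},
    {"bandwidth": 2_500_000, "resolution": "1280x720",  "uri": "720p.m3u8"},
    {"bandwidth": 5_000_000, "resolution": "1920x1080", "uri": "1080p.m3u8"},
]

def master_playlist(renditions):
    """Build an HLS master playlist string from a list of renditions."""
    lines = ["#EXTM3U"]
    for r in renditions:
        lines.append(
            f"#EXT-X-STREAM-INF:BANDWIDTH={r['bandwidth']},"
            f"RESOLUTION={r['resolution']}"
        )
        lines.append(r["uri"])
    return "\n".join(lines) + "\n"

print(master_playlist(ladder))
```

With only three renditions, each segment of each rendition is requested often enough at the edge to stay cached, which is exactly the effect the panel is after.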

The version of HTTP in use is next up for discussion. HTTP/1.1 is still widely used but there’s increasing interest in HTTP/2 and QUIC, both of which handle connections better and reduce overheads, thus reducing latency, though often only slightly.
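One place the newer protocols save time is connection setup, which can be sketched as a round-trip count before the first HTTP request can go out. HTTP/2’s main win is multiplexing over one connection, while QUIC also folds the transport and crypto handshakes together. The figures below are the textbook cases (TCP plus a full TLS 1.2 handshake versus QUIC 1-RTT) and the RTT is an assumed example value.

```python
# Rough sketch: connection-setup latency as round trips before the
# first request. Figures are textbook cases, RTT is an assumption.

def setup_latency_ms(rtt_ms, round_trips):
    """Time spent on handshakes before the first HTTP request."""
    return rtt_ms * round_trips

RTT = 50  # assumed client-to-edge round-trip time, in ms

http1_tls12 = setup_latency_ms(RTT, 3)  # TCP (1 RTT) + TLS 1.2 (2 RTT)
quic_1rtt = setup_latency_ms(RTT, 1)    # QUIC combines transport + crypto
```

The saving per connection is real but small against multi-second segment pipelines, which matches the panel’s "often only slightly" caveat.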

The panel finishes with a Q&A after discussing how to operate in multi-CDN environments.

Watch now!
Speakers

Damien Lucas
CTO & Co-Founder,
Anevia
Ryan Durfey
CDN Senior Product Manager,
Comcast Technology Solutions
Igor Oreper
Vice President, Solutions,
Bitmovin
Eric Klein
Director, Content Distribution,
Disney Streaming Services (was BAMTECH Media)
Dom Robinson
Director,
id3as

Video: Understanding esports production

Esports is here to stay and brings a new dimension to big events, combining the usual challenges of producing and broadcasting events at scale with less usual challenges such as non-standard resolutions and frame rates. This session from the IBC 2019 conference looks at the reality of bringing such events to life.

The talk starts with a brief introduction to some Esports-only terms before heading into the discussion, starting with Simon Eicher, who talks about his switch towards typical broadcast tools for Esports, which has helped drive better production values and storytelling. Maxwell Trauss from Riot Games explains how they incubated a group of great producers and were able to keep production values high by having them work on shows remotely worldwide.

Blizzard uses a clean ‘world feed’ which is shared worldwide for each region to regionalise with graphics and language before broadcasting. To improve storytelling, Blizzard have their own software which interprets the game data and presents it in a more consumable form to the production staff.

Observers are people who control in-game cameras; a producer can call out to any one of them. The panel talks about how separating the players, the observers and the crowd allows them to change the delay between what’s happening in the game and each of these groups seeing it. At the beginning of the event, this creates the opportunity to move the crowd backwards in time so that players don’t get tipped off. Similarly, the players can be isolated from the observers for the same effect. By the end of the game, however, the delays have been changed to bring everyone back into present time for a tense finale.
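The delay juggling described above can be modelled very simply: each audience watches the shared game feed behind live by a configurable offset, and those offsets are ramped down to zero for the finale. The class, group names and numbers below are entirely illustrative, not any broadcaster’s system.

```python
# Sketch: per-audience delays on a shared game feed, ramped back to
# live for the finale. All names and numbers are illustrative.

class DelayedFeed:
    def __init__(self, delays):
        # delays: seconds behind live per audience, e.g. the crowd sees
        # the game 120 s late at the start so players aren't tipped off.
        self.delays = dict(delays)

    def seen_at(self, audience, live_time):
        """Game time this audience is seeing at a given live time."""
        return live_time - self.delays[audience]

    def ramp_to_live(self, step):
        """Shrink every delay by `step` seconds, clamping at zero."""
        for a in self.delays:
            self.delays[a] = max(0, self.delays[a] - step)

feed = DelayedFeed({"players": 0, "observers": 30, "crowd": 120})
# Towards the finale, gradually bring everyone back to present time.
for _ in range(12):
    feed.ramp_to_live(10)
```

Ramping gradually rather than jumping means no audience sees a discontinuity in the action as everyone converges on live.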

Corey Smith from Blizzard explains the cloud setup, in which GFX is added to clean feeds in the cloud, ultimately leading to a single clean feed from the venue. ESL, on the other hand, choose to create their streams locally.

Ryan Chaply from Twitch explains their engagement models, some of which reward viewers for watching. Twitch’s real-time chat also changes the way productions are made because the producers have direct feedback from the viewers. This leads, day by day, to tweaks to the format: a production may stop doing a certain thing by day three if it’s not well received; conversely, when something is a hit, they can capitalise on it.

Ryan also talks about what Twitch are weighing up in terms of when they will start using UHD. Riot’s Maxwell questions whether fans really want 4K at the moment; acknowledging it’s an inevitability, he asks whether the priority is actually having more and better stats.

The panel finishes with a look to the future: the continued adoption of broadcast practices into Esports, timing in the cloud and dealing with end-to-end metadata, plus a video giving a taste of an Esports event.

Watch now!
Speakers

Simon Eicher
Executive Producer, Director of Broadcast, eSports Services,
ESL
Ryan Chaply
Senior Esports Program Manager,
Twitch
Corey Smith
Director, Live Operations Broadcast Technology Group,
Blizzard
Maxwell Trauss
Broadcast Architect,
Riot Games
Jens Fischer
Global Esport Specialist and Account Manager D.A.CH,
EVS

Video: Recent trends in live cloud video transcoding using FPGA acceleration

FPGAs are flexible, reprogrammable chips which can do certain tasks faster than CPUs, for example, video encoding and other data-intensive tasks. Once the domain of expensive hardware broadcast appliances, FPGAs are now available in the cloud allowing for cheaper, more flexible encoding.

In fact, according to NGCodec founder Oliver Gunasekara, video transcoding makes up a large percentage of cloud workloads, and this is increasing year on year. The demand for more video and the demand for more efficiently-compressed video both push up encoding requirements. HEVC and AV1 both need much more encoding power than AVC, but the reduced bitrate can be worth it as long as the transcoding is quick enough and at the right cost.
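That trade-off between extra encoding cost and per-view bitrate savings can be put on the back of an envelope: a newer codec costs more to encode but saves delivery cost on every view. All the numbers below are assumptions for illustration only, not figures from the talk.

```python
# Back-of-envelope sketch of the encode-cost vs bitrate-saving
# trade-off. Every number here is an assumption for illustration.

def breakeven_views(extra_encode_cost, gb_per_view_avc,
                    bitrate_saving, cost_per_gb):
    """Views needed before delivery savings repay the extra encode cost."""
    saving_per_view = gb_per_view_avc * bitrate_saving * cost_per_gb
    return extra_encode_cost / saving_per_view

# e.g. assume $5 extra to transcode a title to HEVC, 1 GB/view at AVC
# rates, a 40% bitrate saving, and $0.01 per GB delivered.
views = breakeven_views(5.0, 1.0, 0.40, 0.01)
```

Under those assumed numbers the extra encode spend pays for itself after a modest number of views, which is why popular titles are the first to move to newer codecs.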

Oliver looks at how the adoption of new codecs is likely to play out, which will directly feed into the quality of experience: start-up time, visual quality and buffering are all helped by reduced bitrate requirements.

It’s worth looking at the differences and benefits of CPUs, FPGAs and ASICs. The talk examines the CPU-time needed to encode HEVC showing the difficulty in getting real-time frame rates and the downsides of software encoding. It may not be a surprise that NGCodec was acquired by FPGA manufacturer Xilinx earlier in 2019. Oliver shows us the roadmap, as of June 2019, of the codecs, VQ iterations and encoding densities planned.

The talk finishes with a variety of questions on topics such as the applicability of machine learning to encoding (scene detection and upscaling algorithms, for example), C++-to-Verilog conversion, and the need for a CPU for supporting tasks.

Watch now!

Speakers

Oliver Gunasekara
Former CEO, founder & president,
NGCodec

Oliver is now an independent consultant.

Video: The Evolution of Video APIs

APIs underpin our modern internet and particularly our online streaming services, which are built on them. An API is a way for two different programs or services to communicate with each other: allowing access, sharing locations of videos, providing recommendations and so on.

Phil Cluff from Mux takes a look at the evolution of these APIs, showing the simple ones and the complex, and how they have changed over time, culminating in advice for the API writers of today and tomorrow.

Security is a big deal and is increasingly in focus for video companies. Whilst the API itself is usually served over secure means, the service still needs to authenticate users, and the use of DRM needs to be considered. Phil talks about this and ultimately the question comes down to what you are trying to protect and what your attack surface is.
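One common pattern for the "what are you trying to protect" question, short of full DRM, is a short-lived signed playback token. The sketch below uses an HMAC over the video ID and an expiry time; it is a generic illustration of the pattern, not any particular vendor’s scheme, and the key and IDs are invented.

```python
# Sketch: short-lived HMAC-signed playback tokens, a common way to
# protect playback URLs. Generic illustration, not a vendor's scheme.
import hashlib
import hmac
import time

SECRET = b"demo-secret"  # in practice, a securely stored signing key

def sign_playback(video_id, expires):
    """Produce a token tying a video ID to an expiry timestamp."""
    msg = f"{video_id}:{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def check_playback(video_id, expires, token, now=None):
    """Reject expired or tampered tokens; constant-time comparison."""
    now = time.time() if now is None else now
    if now > expires:
        return False
    expected = sign_playback(video_id, expires)
    return hmac.compare_digest(expected, token)

exp = int(time.time()) + 300  # token valid for five minutes
tok = sign_playback("vid123", exp)
```

Because the expiry is inside the signed message, a client cannot extend a token’s life, and `compare_digest` avoids timing side-channels on the check.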

APIs tend to come in two types, explains Phil: Video Platform APIs and ‘Encoding’ APIs. Encoding APIs are more than pure encoding APIs: transcoding, packaging, file transfer and other features are built into most ‘encoding’ services. Video Platform APIs typically cover a whole platform, so also include CDN, analytics, cataloguing, playback and much more.

In terms of advice, Phil explains that APIs can enable ‘normal’ coders – meaning people who aren’t interested specifically in video – to use video in their programs. This can be done through well thought out APIs which make good decisions behind the scenes and use sensible defaults.
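The ‘sensible defaults’ idea can be sketched as a hypothetical encode call where a non-video-specialist passes only a source URL and the API picks reasonable settings, while experts can still override anything. Every name and default below is invented for illustration; none of it is from any real API.

```python
# Sketch of 'sensible defaults': a hypothetical encode-job builder.
# All option names and default values here are invented.

DEFAULTS = {
    "codec": "h264",          # broadest device support
    "max_resolution": "1080p",
    "packaging": "hls",       # plays almost everywhere
}

def create_encode_job(source_url, **overrides):
    """Build an encode request, letting experts override any default."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown options: {sorted(unknown)}")
    return {"source": source_url, **DEFAULTS, **overrides}

# A 'normal' coder passes just the source; an expert overrides the codec.
job = create_encode_job("https://example.com/master.mov")
expert_job = create_encode_job("https://example.com/master.mov",
                               codec="hevc")
```

Rejecting unknown options keeps typos from silently falling back to defaults, which is part of what makes defaults trustworthy.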

The API is so important, asserts Phil, that it should be considered part of the product and treated with similar care. It should be planned, resourced properly, created as part of a dialogue with customers and, most importantly, revisited later to be upgraded and improved.

Phil finishes the talk with a number of other pieces of advice and answers questions from the floor.

Watch Now!

Speaker

Phil Cluff
Streaming Specialist,
Mux