Video: Moving Live Video Quality Control from the Broadcast Facility to the Living Room

Moving a 24×7 on-site MCR into people’s homes is not trivial, but Disney Streaming Services (DSS), in common with most broadcasters, knew they had to move their people home, often into their living rooms. Working in an MCR means watching incoming video to check content, which is not easy to do at home, particularly when some of the contribution feeds arrive at 100 Mb/s. The two MCRs, in San Francisco and New York, covering Hulu Live & ESPN+ along with other services, had two weeks to go remote.

Being a major streaming operator, DSS had their own encoding product called xCoder, and they soon realised this would be their ticket to making home working viable. As standard, these encoders reject any video which doesn’t match a small range of templates, so Michael Rappaport takes us through how they wrote scripts that use ffprobe to analyse the incoming video and then configure the xCoder in just the right way. The incoming video goes straight to the xCoder without being ‘groomed’ as it normally would be to add closed captions, ABR and so on.
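
As a rough illustration of that kind of scripting, the sketch below uses ffprobe’s JSON output to pull out the parameters an encoder template would need. It is not DSS’s actual tooling, and the handoff to the xCoder is left as a returned summary object since that API is proprietary.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

interface FeedSummary {
  codec: string;
  width: number;
  height: number;
  frameRate: string;
}

// Probe the contribution feed and extract what an encoder template needs.
async function probeFeed(input: string): Promise<FeedSummary> {
  const { stdout } = await execFileAsync("ffprobe", [
    "-v", "quiet",
    "-print_format", "json",
    "-show_streams",
    "-select_streams", "v:0", // first video stream only
    input,
  ]);
  const video = JSON.parse(stdout).streams[0];
  return {
    codec: video.codec_name,
    width: video.width,
    height: video.height,
    frameRate: video.r_frame_rate, // e.g. "30000/1001"
  };
}

// Example: probeFeed("udp://239.1.1.1:5000").then(console.log);
```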

Aside from bandwidth, it was also important to provide these streams as close to real time as possible, as the operators need to see ‘right now’ to do their job effectively. This is why the ‘grooming’ stage is skipped: it would add latency, and its added functions such as PID normalisation and closed caption insertion aren’t needed here. Michael explains that when a feed is needed, the system calls out to the whole encoder pool, finds an underutilised encoder and programs it automatically using an API.
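
A hedged sketch of that allocation step is below. The endpoint paths, field names and payload are invented purely for illustration; the talk confirms only that an API is used, not what it looks like.

```typescript
interface EncoderStatus {
  id: string;
  activeChannels: number; // assumed utilisation measure
}

// Pick the least-busy encoder in the pool and program it for the new feed.
async function allocateEncoder(poolUrl: string, feedUrl: string): Promise<string> {
  const pool: EncoderStatus[] = await (await fetch(`${poolUrl}/encoders`)).json();
  const target = pool.reduce((a, b) => (a.activeChannels <= b.activeChannels ? a : b));

  // Hypothetical provisioning call; grooming is skipped to keep latency down.
  await fetch(`${poolUrl}/encoders/${target.id}/channels`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ source: feedUrl, groom: false }),
  });
  return target.id;
}
```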

Watching this at home was made possible by some work done by Disney Streaming Services to allow their player to receive feeds directly from an xCoder without any issues over decoder parameters. Michael doesn’t go into detail on the protocol, but the xCoder, as a standalone unit, produces a proprietary video stream carried over TCP. It also exposes an API hook that allows the team to quickly determine things like frame rate and resolution, and even whether or not the xCoder is able to subscribe to the stream.

Watch now!
Speakers

Michael Rappaport
Senior Manager, Encoding Administration,
Disney Streaming Services

Video: The QUIC-ematic universe season 2020-2021 preview

QUIC is an encrypted transport protocol offering better performance than HTTP/1.1 and HTTP/2 over TCP. While young, it’s already seeing some use in the larger internet companies, who are learning how best to harness its optimisations. One of the stark differences is that it’s built on top of UDP rather than TCP, and this is one of the main ways it increases efficiency. Freed from TCP’s constant acknowledgement of packets, QUIC still ensures reliable delivery, but on its own terms, which allows it to prioritise speedy delivery over acknowledgement admin. We’ve covered QUIC before, so if it’s new to you, check out this explainer, as this talk is an update on what happened in 2020 and the plans for 2021 as QUIC aims to be standardised and much more widely available.

Lucas Pardue from Cloudflare co-chairs the IETF working group devoted to QUIC and spoke at Demuxed 2020. “The IETF are standardisers”, he says, with QUIC on its 31st draft and a move during 2021 to standardise what is called ‘IETF QUIC’, to differentiate it from the slightly different version of QUIC from Google. IETF QUIC, Lucas outlines, delivers secure, reliable stream multiplexing.

QUIC actually forms a base layer for other applications, such as HTTP/3 with its HTTP semantics, to work on top of. Like most modern standards, QUIC is actually a name for more than one document: there is a transport layer document, header compression, a TLS handshake description and a document for recovery and loss protection.

QUIC itself lives on UDP datagrams, which is why one of the new options coming is the ability to turn off some of the reliability that has been built on top of UDP to deliver TCP-like behaviour, for data which doesn’t really need it. One possibility here is running a QUIC tunnel, where one QUIC connection carries many QUIC streams within it. In that circumstance, you only want any one piece of data to be protected by one reliable transmission mechanism, so you’d want to turn off reliable transmission for the inner QUIC streams since they are already protected by the outer QUIC layer. A project called MASQUE is working on this.

As with anything arriving on the market, it’s important to establish interoperability; we see this with the JT-NM and SRT plugfests. Lucas shows us the QUIC interop tester which automatically tests the latest implementations against each other, presents the results in a matrix and gives access to logs and packet traces.

Lucas reminds us that QUIC streams are a first-class transport primitive providing reliable delivery. Within a stream, data will be delivered in order, but QUIC doesn’t specify how to schedule multiplexed streams. HTTP/3 initially borrowed HTTP/2’s prioritisation scheme, but a better way to prioritise has been found and is currently being discussed and finalised. Lucas has been working on quiche, Cloudflare’s own implementation of QUIC, and shows a three-step process for getting quiche up and running.

WebTransport is another offering built on QUIC which complements WebSockets and gives web apps more direct access to QUIC itself. The Chrome Origin Trial explains how this is built into Chrome. Lucas talks about a test project he built on top of existing examples, which is hosted at http3.wtf
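
For a flavour of what this looks like from a web app, here is a minimal sketch against the WebTransport API available behind Chrome’s Origin Trial. The URL is a placeholder rather than an endpoint from the talk.

```typescript
// Open a WebTransport session, send a message on one bidirectional QUIC
// stream and read the response. Requires a browser with WebTransport enabled.
async function echoOverWebTransport(url: string): Promise<string> {
  const transport = new WebTransport(url); // e.g. "https://example.com:4433/echo" (placeholder)
  await transport.ready; // QUIC and TLS handshake complete

  const stream = await transport.createBidirectionalStream();
  const writer = stream.writable.getWriter();
  await writer.write(new TextEncoder().encode("hello over QUIC"));
  await writer.close();

  const reader = stream.readable.getReader();
  const { value } = await reader.read();
  transport.close();
  return new TextDecoder().decode(value ?? new Uint8Array());
}
```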

Lucas ends by summarising the coming year: the working group is aiming to deliver its documents to an IETF last call ahead of publication, the community will continue to gain deployment experience as new users are already working on enabling the technology, and there is still work to be done on other adopted work items as well as considering new ones. He closes by encouraging viewers to join in with the community.

Watch now!
Speaker

Lucas Pardue
Senior Software Engineer, Cloudflare
Co-Chair of the QUIC working group, IETF

Video: Providing better video experiences for the next billion users

What’s the best way for a billion people all on mobile networks to have a universally great streaming experience? It’s not trivial, and no service is perfect, but Facebook set out to find out what problems existed and find ways to fix them. This video explains their approach and solutions.

Denise Noyes from Facebook spoke at Demuxed 2020 about their work in India over the year. For Facebook, India is unique for this research as it represents such a large number of people almost universally using Android phones and mobile data. Not only does this allow them to understand the low-bitrate performance of video, but the Android penetration level simplifies comparisons.

The problems that Denise and her colleagues identified were gaps in the bitrate ladders, where the ABR ladder either wasn’t well optimised or didn’t go low enough. There were also some ABR logic decisions seen to be causing problems, along with server delays from the CDN and congestion within the app itself. The research looked at ‘average bad sessions per user’ rather than the overall number of bad sessions, which would be skewed by how many videos people generally watched.

Covid had a bearing on the research, which was being conducted through in-person interviews in India. Those teams had to come home, but the relevance of the research was acutely highlighted as networks in other countries worsened in response to the rising amount of traffic, bringing them closer to the Indian example.

Denise’s team worked with colleagues throughout the company to create improvements across the whole network and delivery stack. On the encoding front, they decreased the lowest encoding level to 100kbps. This doesn’t look amazing, as the metric score shows, but it’s better than buffering and can be watchable depending on the content. The GOP size was also increased from 2 seconds to 5. Longer GOPs are known to deliver bitrate savings, in this case up to 8%, but there is a tradeoff in latency and in how frequently the player can move up or down the ABR ladder. Facebook found the tradeoffs were worth the improvement for viewers.
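
Facebook’s encoders aren’t described in the talk, but the same trade-off can be reproduced with an open-source encoder: a 5-second GOP at 30fps means a keyframe every 150 frames instead of every 60. The sketch below just builds an illustrative ffmpeg argument list.

```typescript
// Build ffmpeg/x264 arguments for a given rung; gopSeconds * fps gives the
// keyframe interval in frames (e.g. 5s * 30fps = 150).
function encodeArgs(input: string, output: string, fps: number, gopSeconds: number, bitrateKbps: number): string[] {
  const gopFrames = fps * gopSeconds;
  return [
    "-i", input,
    "-c:v", "libx264",
    "-b:v", `${bitrateKbps}k`,
    "-g", String(gopFrames),          // keyframe interval
    "-keyint_min", String(gopFrames), // keep GOPs a consistent length
    "-sc_threshold", "0",             // don't insert extra keyframes on scene cuts
    output,
  ];
}

// e.g. the 100kbps bottom rung with a 5-second GOP at 30fps:
// encodeArgs("in.mp4", "out_100k.mp4", 30, 5, 100) -> pass to ffmpeg via child_process
```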

Denise introduces FB-MOS, Facebook’s objective model of the MOS (mean opinion score) metric; the lower the number, the worse the video looks. Facebook have used the fact that keeping the same resolution at, say, 400kbps and 200kbps can look better than encoding that resolution only at 400kbps and dropping to a lower resolution for the 200kbps encode. This has led to an ABR ladder with 360p at two bitrates and 480p at two bitrates.
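
To make the reasoning concrete, here is a small sketch with invented scores, since FB-MOS itself isn’t public. The point is only the comparison: if the same resolution at a lower bitrate outscores the lower-resolution alternative, that resolution appears twice in the ladder.

```typescript
interface RungCandidate {
  resolution: string;
  bitrateKbps: number;
  mos: number; // hypothetical quality score; higher is better
}

// At a given bitrate, keep whichever candidate the quality metric scores highest.
function pickRung(candidates: RungCandidate[], bitrateKbps: number): RungCandidate {
  return candidates
    .filter((r) => r.bitrateKbps === bitrateKbps)
    .reduce((a, b) => (a.mos >= b.mos ? a : b));
}

const ladder = [
  pickRung([{ resolution: "480p", bitrateKbps: 400, mos: 3.4 },
            { resolution: "360p", bitrateKbps: 400, mos: 3.2 }], 400),
  pickRung([{ resolution: "480p", bitrateKbps: 200, mos: 2.7 },
            { resolution: "360p", bitrateKbps: 200, mos: 2.5 }], 200),
]; // here 480p ends up in the ladder at two bitrates
```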

That FB-MOS score comes in handy for avoiding the lowest rungs of the ABR ladder. As their MOS scores are quite low, the player will only choose them if it really has no other choice; otherwise it will settle on a higher-quality rendition even if it isn’t able to go up the ladder. Ironically, they have also implemented logic to limit who gets the highest-bandwidth streams, since most users would prefer to spend less on data than get a disproportionately small improvement in quality.
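
A sketch of that player-side logic, with invented thresholds, might look like the following: low-scoring rungs are used only when nothing else fits, and the top rung can be withheld where the extra data isn’t worth the marginal quality gain.

```typescript
interface Rung {
  bitrateKbps: number;
  mos: number; // hypothetical quality score
}

function chooseRung(ladder: Rung[], estimatedKbps: number, capTopRung: boolean, mosFloor = 2.0): Rung {
  // Sort ascending by bitrate; optionally withhold the highest rung.
  let candidates = [...ladder].sort((a, b) => a.bitrateKbps - b.bitrateKbps);
  if (capTopRung && candidates.length > 1) candidates = candidates.slice(0, -1);

  const affordable = candidates.filter((r) => r.bitrateKbps <= estimatedKbps);
  const acceptable = affordable.filter((r) => r.mos >= mosFloor);

  // Prefer rungs above the quality floor; fall back to low-quality rungs only
  // when nothing else fits, and to the very bottom rung as a last resort.
  const pool = acceptable.length ? acceptable : affordable.length ? affordable : [candidates[0]];
  return pool[pool.length - 1]; // highest bitrate within the chosen pool
}
```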

In playback, Denise explains that they have reduced the impact of occasional anomalies on the bandwidth estimation and adjusted prefetching so that the first chunk of every video in the prefetch list is fetched before any video’s second chunk. This has reduced the chance that someone chooses a video which hasn’t yet been buffered and has to wait for it to start.
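
The prefetch change can be sketched as a breadth-first download order. The URLs are placeholders; the ordering is the point: chunk one of every candidate video is fetched before chunk two of any of them.

```typescript
// videoManifests[v] is the ordered list of chunk URLs for candidate video v.
async function prefetchBreadthFirst(videoManifests: string[][], maxChunksPerVideo: number): Promise<void> {
  for (let chunk = 0; chunk < maxChunksPerVideo; chunk++) {
    for (const chunks of videoManifests) {
      const url = chunks[chunk];
      if (url) await fetch(url); // first chunk of every video before any second chunk
    }
  }
}
```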

Lastly, Denise covers the work done at the network layer with a move from HTTP/2 to QUIC. We see how the removal of head-of-line blocking has helped and that not only has the move to QUIC brought an overall improvement in performance, but as congestion increases, QUIC traffic shows a disproportionate improvement.

Denise concludes by highlighting that this work across the network stack, with wide collaboration, has not only delivered the desired results but is a vital approach for any company looking to make marked improvements in customer experience.

Watch now!
Speaker

Denise Noyes
Software Developer,
Facebook

Video: Building an 8k encoder + live streaming platform

Streamline is a reference system design for premium-quality, end-to-end live streaming, all the way from SDI to a player fed from a CDN, that works on the web, iOS and Android devices. It uses commodity computer hardware, free software and AWS to create an affordable way to learn how to build a high-quality live streaming system.

Already capable of 4K, this project is ideal for people to use as a learning tool to get first-hand experience of how live video works end to end. Now, the project is being extended to handle four 4K 60fps feeds, or a single 8K stream. This update is called Streamline 2.
 
Colleen Henry from Facebook introduces the hardware behind the feat, comprising two NVIDIA Quadro GPUs and one large CPU, an AMD Ryzen Threadripper 3990X. The equipment is perfectly capable of 8K, but the real goal is to have enough power to deal with 10-bit, 4K, HDR, high frame-rate feeds. The kit is also intended to be able to encode AV1, LCEVC and VP9. Colleen suggests considering the Lenovo ThinkStation P620 as a pre-built Threadripper desktop rather than building your own.

Code for the project can be found at https://streamline.wtf. After encoding, the rest of the work is done in AWS. Caitlin O’Callaghan talks us through setting up AWS: standing up an m4.xlarge server with the correct firewall rules, building the code from the Streamline 2 repository and installing the encoder.
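
For those who prefer the SDK to the console, the rough equivalent of that first step might look like the sketch below. The AMI, key pair and security group IDs are placeholders; the security group stands in for the ‘correct firewall’ mentioned in the talk.

```typescript
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

// Launch the m4.xlarge host that will run the Streamline 2 services.
async function launchStreamlineHost(): Promise<string | undefined> {
  const ec2 = new EC2Client({ region: "us-east-1" });
  const result = await ec2.send(
    new RunInstancesCommand({
      ImageId: "ami-xxxxxxxx",           // placeholder AMI
      InstanceType: "m4.xlarge",
      MinCount: 1,
      MaxCount: 1,
      KeyName: "streamline-key",         // placeholder key pair
      SecurityGroupIds: ["sg-xxxxxxxx"], // placeholder security group (the "firewall")
    })
  );
  return result.Instances?.[0]?.InstanceId;
}
```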

Watch now!
Speakers

Colleen Henry Colleen Henry
Cobra Commander of Facebook Video Special Forces.
Caitlin O’Callaghan
Former Software Engineering Co-op,
Facebook