Video: Providing better video experiences for the next billion users

What’s the best way for a billion people, all on mobile networks, to have a universally great streaming experience? It’s not trivial, and no service is perfect, but Facebook set out to identify the problems and find ways to fix them. This video explains their approach and solutions.

Denise Noyes from Facebook spoke at Demuxed 2020 about their work in India over the past year. For Facebook, India is unique for this research as it represents such a large number of people almost universally using Android phones and mobile data. Not only does this allow them to understand the low-bitrate performance of video, but the level of Android penetration simplifies comparisons.

The problems that Denise and her colleagues identified were gaps in the bitrate ladders, where the ABR ladder either wasn’t well optimised or didn’t go low enough. There were also some problematic ABR logic decisions, along with server delays from the CDN and internal congestion within the app. The research looked at ‘average bad sessions per user’ rather than the overall number of bad sessions, which would be skewed by how many videos people generally watched.

Covid had a bearing on the research, which was being conducted through in-person interviews in India. Those teams had to return home, but the relevance of the work was acutely highlighted as networks in other countries worsened under rising traffic, bringing them closer to the Indian example.

Denise’s team worked with colleagues throughout the company to create improvements across the whole network and delivery stack. On the encoding front, they decreased the lowest encoding level to 100kbps. This doesn’t look amazing, as the metric score shows, but it’s better than buffering and can be watchable depending on the content. The GOP size was also increased from 2 seconds to 5. Longer GOPs are known to improve bitrate efficiency, in this case by up to 8%, but there is a tradeoff in latency and in how frequently the player can move up or down the ABR ladder. Facebook found the tradeoffs were worth the improvement for viewers.
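As a rough illustration of those encoding changes (not Facebook’s actual pipeline: the use of ffmpeg with x264, the 30fps source, the 144p scaling and the file names are all assumptions), a 100kbps rung with a 5-second GOP might be produced like this:

```python
import subprocess

FPS = 30               # assumed source frame rate
GOP_SECONDS = 5        # keyframe interval described in the talk (was 2s)
BITRATE = "100k"       # the new lowest rung of the ladder

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264",                       # stand-in encoder for illustration
    "-b:v", BITRATE, "-maxrate", BITRATE, "-bufsize", "200k",
    "-g", str(FPS * GOP_SECONDS),            # GOP length in frames (150)
    "-keyint_min", str(FPS * GOP_SECONDS),   # no early keyframes
    "-sc_threshold", "0",                    # disable scene-cut keyframes
    "-vf", "scale=-2:144",                   # very low resolution to suit 100kbps
    "output_100k.mp4",
], check=True)
```

The saving from the longer GOP comes from spending fewer bits on keyframes; the cost is that quality switches can only happen at those less frequent keyframes.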

Denise introduces FB-MOS, Facebook’s objective model of the MOS quality metric: the lower the number, the worse the video looks. Facebook have used the fact that a single resolution encoded at, say, both 400kbps and 200kbps can look better than encoding that resolution at 400kbps and dropping to a lower resolution for the 200kbps rendition. This has led to an ABR ladder with 360p at two bitrates and 480p at two bitrates.

That FB-MOS score comes in handy for avoiding the lowest rungs of the ABR ladder. As their MOS scores are quite low, the player will only choose them if it really has no other option; otherwise it will settle on a higher-quality rendition even if it can’t climb further up the ladder. Ironically, they have also implemented logic to limit who gets the highest-bandwidth streams, since most users would rather spend less on data than pay for a disproportionately small improvement in quality.
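To make that rung-selection logic concrete, here’s a minimal sketch. The ladder, the FB-MOS-style scores and the thresholds are invented for illustration rather than Facebook’s real values; it simply shows a low-quality rung kept as a last resort and a cap on the top rung for data-conscious viewers.

```python
from dataclasses import dataclass

@dataclass
class Rung:
    resolution: str
    bitrate_kbps: int
    mos: float                # illustrative FB-MOS-style quality score

# Hypothetical ladder; note 360p and 480p each appear at two bitrates.
LADDER = [
    Rung("144p", 100, 1.8),   # emergency rung: better than buffering
    Rung("360p", 200, 2.9),
    Rung("360p", 400, 3.4),
    Rung("480p", 600, 3.8),
    Rung("480p", 900, 4.1),
    Rung("720p", 1500, 4.5),
]

LOW_MOS_FLOOR = 2.5           # rungs below this are last-resort only

def pick_rung(bandwidth_kbps: float, data_saver: bool) -> Rung:
    affordable = [r for r in LADDER if r.bitrate_kbps <= bandwidth_kbps]
    if not affordable:
        return LADDER[0]                 # nothing fits: use the emergency rung anyway
    decent = [r for r in affordable if r.mos >= LOW_MOS_FLOOR]
    chosen = (decent or affordable)[-1]  # highest affordable rung, preferring decent ones
    if data_saver and chosen is LADDER[-1]:
        chosen = LADDER[-2]              # cap the top rung: small quality gain, big data cost
    return chosen

print(pick_rung(bandwidth_kbps=150, data_saver=False))   # 144p, chosen only as a last resort
print(pick_rung(bandwidth_kbps=5000, data_saver=True))   # 480p at 900kbps, capped below the top
```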

In playback, Denise explains that they have reduced the impact of occasional anomalies on the bandwidth estimation and adjusted prefetching so that the first chunk of every video in the prefetch list is fetched before any video’s second chunk. This reduces the chance that someone chooses a video which hasn’t yet been buffered and has to wait for it to start.
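A toy sketch of that prefetch ordering (the video names, chunk lists and simple round-robin below are assumptions, not Facebook’s scheduler): fetch breadth-first across videos rather than depth-first into one.

```python
from itertools import count

# Hypothetical feed of upcoming videos, each already split into chunks.
chunks = {
    "video_a": ["a1", "a2", "a3"],
    "video_b": ["b1", "b2"],
    "video_c": ["c1", "c2", "c3"],
}

def prefetch_order(videos):
    """Fetch chunk 1 of every video before any video's chunk 2, and so on."""
    order = []
    for i in count():
        row = [c[i] for c in videos.values() if i < len(c)]
        if not row:
            break
        order.extend(row)
    return order

print(prefetch_order(chunks))
# ['a1', 'b1', 'c1', 'a2', 'b2', 'c2', 'a3', 'c3']
```

Because every candidate video has at least its first chunk in the buffer before any second chunks are fetched, a viewer who taps through quickly is less likely to hit an unbuffered start.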

Lastly, Denise covers the work done at the network layer, where Facebook has moved from HTTP/2 to QUIC. We see how the removal of head-of-line blocking has helped and that not only has the move to QUIC delivered an overall improvement in performance, but as congestion increases, QUIC traffic shows a disproportionately large improvement.

Denise concludes by highlighting that this work across the network stack, with wide collaboration, has not only delivered the desired results but is a vital approach for any company looking to make marked improvements in customer experience.

Watch now!
Speaker

Denise Noyes
Software Developer,
Facebook

Video: Building an 8k encoder + live streaming platform

Streamline is a reference system design for premium quality, end to end live streaming all the way from SDI to a player fed from a CDN that works on the web, iOS, and Android devices. It uses commodity computer hardware, free software, and AWS to create an affordable way to learn how to build a high-quality live streaming system.

Already capable of 4K, this project is ideal for people to use as a learning tool to get first-hand experience of how live video works end to end. Now, the project is being extended to handle four 4K 60fps feeds, or a single 8K stream. This update is called Streamline 2.

Colleen Henry from Facebook introduces the hardware behind the feat: two NVIDIA Quadro GPUs and one large CPU, a Ryzen Threadripper 3990X. The equipment is perfectly capable of 8K, but the real goal is to have enough power to deal with 10-bit, 4K, HDR, high-frame-rate feeds. The kit is also intended to be able to encode AV1, LCEVC and VP9. Colleen suggests the Lenovo ThinkStation P620 as a pre-built Threadripper desktop for those who would rather not build their own.

Code for the project can be found at https://streamline.wtf. After encoding, the rest of the work is done in AWS. Caitlin O’Callaghan talks us through the AWS side: setting up an m4.xlarge server with the correct firewall rules, building the code from the Streamline 2 repository, and installing the encoder.
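For anyone who would rather script that AWS step than click through the console, below is a minimal boto3 sketch of the instance launch. The AMI ID, key pair and open ports are placeholders (the talk only refers to ‘the correct firewall’), and the actual build and install commands come from the Streamline 2 repository’s own instructions.

```python
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")   # pick your region

# Security group: the real ports to open come from the Streamline docs;
# SSH plus one example ingest port are shown purely as placeholders.
sg = ec2.create_security_group(
    GroupName="streamline2", Description="Streamline 2 encoder host"
)
sg.authorize_ingress(IpPermissions=[
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},     # SSH (restrict in practice)
    {"IpProtocol": "tcp", "FromPort": 1935, "ToPort": 1935,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},     # placeholder streaming port
])

# m4.xlarge as used in the talk; AMI and key pair are placeholders.
instance, = ec2.create_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",             # your chosen Linux AMI
    InstanceType="m4.xlarge",
    MinCount=1, MaxCount=1,
    KeyName="my-keypair",
    SecurityGroupIds=[sg.id],
)
instance.wait_until_running()
print("Instance", instance.id, "running; build Streamline 2 per the repo README")
```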

Watch now!
Speakers

Colleen Henry
Cobra Commander of Facebook Video Special Forces.
Caitlin O’Callaghan
Former Software Engineering Co-op,
Facebook

Video: Scaling Video with AV1!

A nuanced look at AV1. If we’ve learnt one thing about codecs over the last year or more, it’s that in the modern world pure bitrate efficiency isn’t the only game in town. JPEG 2000 and, now, JPEG XS have always been excused their high bitrate compared to MPEG codecs because they deliver low latency and high fidelity. Now, it’s clear that we also need to consider the computational demand of a codec when evaluating which to use in any one situation.

John Porterfield welcomes Facebook’s David Ronca to discuss how AV1 is arriving on the market. David is the director of Facebook’s video processing team, so he’s in pole position to understand how useful AV1 is in delivering video to viewers and how well it achieves its goals. The conversation looks at how to encode AV1, the unexpected ways in which it performs better than other codecs, and the state of the hardware and software decoder ecosystem.

David starts by looking at the convex hull, explaining that it’s a way of encoding content multiple times at different resolutions and bitrates and graphing the results. The graph allows you to find the best combination of bitrate and resolution for a target quality. This works well, but the multiple encodes burden the process with a lot of extra computation to arrive at the best set of encoding parameters. As proof of its effectiveness, David cites a time when a 200kbps maximum target was set for video plus audio; the convex hull method gave a good experience on small screens despite the compromises made in encoding fidelity. The important part is being flexible about which resolution you encode, because by allowing the resolution to drift up or down as well as the bitrate, higher-fidelity combinations can be found than by keeping the resolution fixed. This is called per-title encoding and was pioneered by Netflix, where David previously worked and authored this blog post on the topic, as discussed in the linked talk.
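Here’s a small sketch of the convex-hull idea with invented rate-quality numbers (a real pipeline would measure quality with something like VMAF or FB-MOS): encode each resolution at several bitrates, keep only the points on the upper convex hull of the combined set, then pick the best hull point that fits a given bitrate budget.

```python
# Invented quality scores (0-100, VMAF-like) for each resolution
# encoded at several bitrates (kbps). Purely illustrative numbers.
measurements = {
    "1080p": [(800, 62), (1500, 78), (3000, 90), (6000, 95)],
    "720p":  [(400, 55), (800, 70), (1500, 82), (3000, 88)],
    "480p":  [(200, 45), (400, 62), (800, 74), (1500, 79)],
    "360p":  [(100, 35), (200, 52), (400, 64), (800, 70)],
}

# Flatten to (bitrate, quality, resolution) points, sorted by bitrate.
points = sorted(
    (bitrate, quality, res)
    for res, samples in measurements.items()
    for bitrate, quality in samples
)

def upper_convex_hull(pts):
    """Keep only the points on the upper convex hull of (bitrate, quality)."""
    hull = []
    for x3, y3, res in pts:
        # Pop the previous point while it sits on or below the new chord.
        while len(hull) >= 2:
            (x1, y1, _), (x2, y2, _) = hull[-2], hull[-1]
            if (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((x3, y3, res))
    return hull

hull = upper_convex_hull(points)

def best_rung(target_kbps):
    """Highest-quality hull point that fits within the bitrate budget."""
    fitting = [p for p in hull if p[0] <= target_kbps]
    return fitting[-1] if fitting else hull[0]

print(best_rung(200))    # (200, 52, '360p')
print(best_rung(1500))   # (1500, 82, '720p')
```

In this made-up data, 360p wins at 200kbps while 720p takes over at 1.5Mbps, which is the resolution drift David describes.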

It’s an accepted fact that encoder complexity increases with every generation, and the standard MPEG line bears this out: MPEG-2 gave way to AVC, which gave way to HEVC, which is now being superseded by VVC, each achieving an approximately 50% compression improvement at the cost of a ten-fold increase in computation. But David contends that this buries the lede. Whilst it’s true that the best (read: slowest) setting improves compression by 50% for that ten-fold complexity increase, it’s often missed that, at the other end of the curve, one of the fastest settings of the newer codec can now match the best of the old codec with a 90% reduction in computation. For companies encoding in software, this is big news. David demonstrates this by graphing the SVT-AV1 encoder against the x265 HEVC encoder, and that against x264.

David touches on an important point: there is so much video encoding going on at the tech giants and distributed around the world that it’s important to keep reducing complexity year on year. As it stands, with complexity increasing with each generation of encoder, something has to give in the future, otherwise complexity will go off the scale. The Alliance for Open Media’s AV1 has something to say on the topic, as it has improved on HEVC with only a 5% increase in complexity. Other codecs such as MPEG’s LCEVC also deliver improved bitrate but at lower complexity. There is a clear environmental impact from video encoding and David is focused on reducing it.

AOM is also tackling the commercial problem that codecs have. Companies don’t mind paying for codecs, but they do mind uncertainty; after all, what’s the point in paying for a codec if you might still be approached for more money? Whilst MPEG’s approach with VVC and EVC aims to give companies more control over their licensing risk, AOM’s royalty-free codec, backed by a defence fund against legal attacks, arguably offers the most predictable risk of all. AOM’s aim, David explains, is to allow the web to expand without having to worry about royalty fees.

Next is some disappointing news for AV1 fans: hardware decoder deployments have been delayed until 2023/24, which probably means no meaningful mobile penetration until 2026/27. In the meantime, the very good dav1d decoder, and also gav1, are expected to fill the gap. Both are already quite fast, and the aim is for them to manage 720p60 decoding on average Android devices by 2024.

Watch now!
Speakers

David Ronca
Director, Video Encoding,
Facebook
John Porterfield
Freelance Video Webcast Producer and Tech Evangelist
JP’sChalkTalks YouTube Channel

Video: AV1 Commercial Readiness Panel

With two years of development and deployments under its belt, AV1 is still emerging onto the codec scene. That’s not to say it isn’t in use billions of times a year, but compared to the incumbents there’s still some distance to go. AV1 has a reputation for being very slow to encode and computationally impractical, but today’s panel is here to say that’s old news and AV1 is now a real-time codec.

Brought together by Jill Boyce of Intel, we hear from Amazon, Facebook, Google, Twitch, Netflix and Tencent in this panel. Intel and Netflix have been collaborating on the SVT-AV1 encoder and decoder framework for two years; its goal is to be a high-performance, scalable encoder and decoder, using parallelisation to achieve this aim.

Yueshi Shen from Amazon and Twitch is first to present, explaining that for them AV1 is a key technology in the 5G era. They have put together a 1440p, 120fps gaming demo enabled by AV1, and they feel this resolution and framerate will be a critical feature for Twitch in the next two years as computer games increasingly extend beyond typical broadcast boundaries. Another key goal is an end-to-end latency of 1.5 seconds which, he says, will partly be achieved using AV1. His company has been working with SoC vendors to accelerate the adoption of AV1 decoders, as their proliferation is key to a successful transition to AV1 across the board. Simultaneously, AWS has been adding AV1 capability to MediaConvert and plans to continue AV1 integration in other turnkey content solutions.

David Ronca from Facebook says that AV1 gives them the opportunity to reduce video egress bandwidth whilst also helping to increase quality. For them, SVT-AV1 has brought AV1 into the practical domain: they are able to run AV1 payloads in production as well as launch a large-scale decoder test across a large set of mobile devices.

Matt Frost represents Google Chrome and Android’s point of view on AV1. Google were early adopters, having streamed partly using AV1 since 2018 at resolutions small and large, and they have recently added support in Duo, their Android video-conferencing application. As with all such services, the pandemic has shown how important they can be and how important it is that they can scale. Their move to AV1 streaming has had favourable results, which is the start of the return on their investment in the technology.

Google’s involvement with the Alliance for Open Media (AOM), along with the other founding companies, was born out of a belief that in order to achieve the scales needed for video applications, the only sensible future was with cheap-to-deploy codecs, so it made a lot of sense to invest time in the royalty-free AV1.

Andrey Norkin from Netflix explains that they believe AV1 will bring a better experience to their members. Netflix has been using AV1 in streaming since February 2020 on Android devices using a software decoder, which has allowed them to get better quality at lower bitrates than VP9; they are now testing AV1 on other platforms. Intent on using only 10-bit encodes across all devices, Andrey explains that this mode gives the best efficiency. As well as being a founding member of AoM, Netflix has also developed AVIF, an image format based on AV1 which, according to Andrey, performs better than most other formats out there. As AVIF works better with text on pictures than other formats, Netflix intend to use it in their UI.

Tencent’s Shan Liu explains that they are part of AoM because video compression is key for many businesses across Tencent’s vast empire. Tencent Cloud has already launched an AV1 transcoding service and supports AV1 in VoD.

The panel discusses low-latency use of AV1, with David Ronca explaining that, with the performance improvements of the encoders and decoders alongside the ability to tune AV1’s decode speed by turning certain tools on and off, real-time AV1 is now possible. Amazon is paying attention to low-end, sub-$300 handsets, according to Yueshi, as they believe this is where most 5G growth will occur, so they cite recent tests showing AV1 decoding using only 3.5 cores on a mobile SoC as encouraging, given that 8 or more cores is now standard. They have now moved on to researching battery life.

The panel finishes with a Q&A touching on encoding speed, the VVC and LCEVC codecs, the Sisvel AV1 patent pool, the next ramp-up in deployments and the roadmap for SVT-AV1.

Watch now!
Please note: After free registration, this video is located towards the bottom of the page
Speakers

Yueshi Shen
Principal Engineer,
AWS & Twitch
David Ronca
Video Infrastructure Team,
Facebook
Matt Frost
Product Manager, Chrome Media Technologies,
Google
Andrey Norkin
Emerging Technologies Team
Netflix
Dr Shan Liu
Chief Scientist & General Manager,
Tencent Media Lab
Jill Boyce
Intel