Video: Buffer Sizing and Video QoE Measurements at Netflix

At a time when Netflix is cutting streaming quality to reduce bandwidth, we take a look at the work that’s gone into optimising latency within the switches at ISPs, latency which turned out to be surprisingly high.

Bruce Spang interned at Netflix and studied the phenomenon of unexpected latency variation within the Netflix caches deployed at ISPs to reduce latency and bandwidth usage. He starts by introducing the TCP buffer-sizing models, looking at how they work and what they are trying to achieve, with the aim of identifying how big a buffer is supposed to be. This matters because if the buffer is too big, data can take a long time to leave it once it fills up, adding latency to the packets travelling through. Too small, of course, and packets have to be dropped, which causes more rebuffering and pushes the ABR choice towards lower quality.
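For a rough sense of the sizing rules involved, here is a minimal sketch of the two classic rules of thumb from the buffer-sizing literature: the bandwidth-delay product (BDP) and the smaller BDP/√n rule for links carrying many flows. The port speed, RTT and flow count below are illustrative assumptions, not numbers from the talk.

```python
# Sketch of classic buffer-sizing rules of thumb (not the exact models
# discussed in the talk): the bandwidth-delay product, and the smaller
# BDP/sqrt(n) rule for links carrying many long-lived TCP flows.

def bdp_bytes(link_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: the traditional 'one RTT of data' buffer."""
    return (link_gbps * 1e9 / 8) * (rtt_ms / 1e3)

def buffer_many_flows(link_gbps: float, rtt_ms: float, n_flows: int) -> float:
    """BDP / sqrt(n): enough to keep the link busy when n flows desynchronise."""
    return bdp_bytes(link_gbps, rtt_ms) / n_flows ** 0.5

if __name__ == "__main__":
    # Illustrative numbers only: a 100 Gb/s port, 50 ms RTT, 10,000 flows.
    print(f"BDP:         {bdp_bytes(100, 50) / 1e6:.0f} MB")    # 625 MB
    print(f"BDP/sqrt(n): {buffer_many_flows(100, 50, 10_000) / 1e6:.1f} MB")  # ~6 MB
```

The gap between the two answers is exactly why "how big should the buffer be?" is still a live question for a video CDN.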

Bruce was part of an experiment that studied whether the buffer model in use behaved as expected, and whilst he found that it did most of the time, he also found that video performance varied in undesirable ways. To explain this, he details the testing they did and the finding that, as you would expect, latency increases when the link is congested. Moreover, he showed that a 500MB buffer added more latency than a 50MB one.
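As a back-of-the-envelope check on that finding, a full buffer’s worst-case queueing delay is simply its size divided by the rate at which it drains. The 100 Gb/s line rate below is purely an assumption for illustration; the talk summary doesn’t state the actual port speeds.

```python
# Worst-case queueing delay of a full buffer, assuming a 100 Gb/s drain rate.
LINE_RATE_BYTES_PER_S = 100e9 / 8

for buf_mb in (50, 500):
    delay_ms = buf_mb * 1e6 / LINE_RATE_BYTES_PER_S * 1e3
    print(f"{buf_mb:>3} MB buffer, full at 100 Gb/s -> ~{delay_ms:.0f} ms of queueing delay")
# Prints ~4 ms for 50 MB and ~40 ms for 500 MB.
```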

To explain the behaviour that remained unexplained, such as long-tail content seeing lower latency than popular content, Bruce looks under the hood of the router to show how virtual output queues (VOQs) are used to queue traffic and how they work. Having seen the relatively simple logic behind the system, Bruce talks through the results they’ve achieved working with the vendor to improve the buffering logic.
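For readers unfamiliar with VOQs, here is a heavily simplified, hypothetical sketch of the idea: each input port keeps a separate queue per output port so a congested output can’t hold up traffic bound for other outputs (head-of-line blocking). The scheduling below is a naive fixed-order scan, nothing like the actual vendor logic discussed in the talk.

```python
# Toy illustration of virtual output queueing (VOQ): one queue per
# (input port, output port) pair, served one packet per output per round.
from collections import deque

class VOQSwitch:
    def __init__(self, n_inputs: int, n_outputs: int):
        # voq[i][o] holds packets that arrived on input i and leave via output o
        self.voq = [[deque() for _ in range(n_outputs)] for _ in range(n_inputs)]
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs

    def enqueue(self, input_port: int, output_port: int, packet: str) -> None:
        self.voq[input_port][output_port].append(packet)

    def schedule_one_round(self) -> list:
        """Each output takes at most one packet per round, scanning inputs in
        fixed order (a real switch would arbitrate far more carefully)."""
        sent = []
        for o in range(self.n_outputs):
            for i in range(self.n_inputs):
                if self.voq[i][o]:
                    sent.append(self.voq[i][o].popleft())
                    break  # one packet per output per round
        return sent

if __name__ == "__main__":
    sw = VOQSwitch(n_inputs=2, n_outputs=2)
    sw.enqueue(0, 0, "pkt-A")  # input 0 -> output 0
    sw.enqueue(0, 1, "pkt-B")  # input 0 -> output 1
    sw.enqueue(1, 1, "pkt-C")  # input 1 -> output 1
    # Output 1 still forwards even if output 0 is busy: prints ['pkt-A', 'pkt-B']
    print(sw.schedule_one_round())
```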

Watch now!
Speakers

Bruce Spang
PhD Student, Stanford