Video: QoE Impact from Router Buffer Sizing and Active Queue Management

Netflix take to the stage at Demuxed to tell us about the work they’ve been doing to understand and reduce latency by looking at the queue management of their managed switches. As Tony Orme mentioned yesterday, we need buffers in IP systems to allow asynchronous parts of the system to interact. Here, we’re looking at how the core network fabric’s buffers can get in the way of the main video flows.

Te-Yuan Huang from Netflix explains their work investigating buffers and how best to use them. She talks about the flow behaviour caused by the tail-drop buffer model of standard switches, i.e. waiting until the buffer is full and then dropping everything else that arrives until space frees up again. The alternative approach is Active Queue Management (AQM); one such scheme, FQ-CoDel, drops packets pre-emptively, before the buffer is full, based on how long packets have been sitting in the queue. By carefully choosing when to drop, you can actually improve buffer handling and the impact it has on latency.
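
To make the contrast with tail-drop concrete, here is a minimal, illustrative sketch of a CoDel-style drop decision. It is not Netflix’s or the Linux kernel’s implementation, and the target and interval values are simply the commonly quoted defaults; real CoDel also uses a control law to space out drops rather than dropping every late packet.

```python
import time
from collections import deque

TARGET = 0.005    # target queuing delay in seconds (5 ms), a common default
INTERVAL = 0.100  # how long delay must persist above target before dropping

class CoDelStyleQueue:
    """Simplified sketch: drop based on queuing delay, not buffer occupancy."""

    def __init__(self):
        self.queue = deque()          # holds (enqueue_time, packet) pairs
        self.first_above_time = None  # when delay first exceeded TARGET

    def enqueue(self, packet):
        # Unlike tail-drop, packets are always accepted here; drops happen
        # at dequeue time based on how long packets have been waiting.
        self.queue.append((time.monotonic(), packet))

    def dequeue(self):
        while self.queue:
            enqueued_at, packet = self.queue.popleft()
            sojourn = time.monotonic() - enqueued_at
            if sojourn < TARGET:
                self.first_above_time = None
                return packet                              # delay is fine: deliver
            if self.first_above_time is None:
                self.first_above_time = time.monotonic() + INTERVAL
                return packet                              # start the grace period
            if time.monotonic() >= self.first_above_time:
                continue                                   # persistent delay: drop, try next
            return packet
        return None
```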

Te-Yuan shows us results from tests that her team has done which show that FQ-CoDel does, indeed, reduce latency. After showing us the data, she summarises by saying that FQ-CoDel improves playback and QoE.

Watch now!
Speaker

Te-Yuan Huang
Engineering Manager (Adaptive Streaming),
Netflix

Video: Buffer Sizing and Video QoE Measurements at Netflix

At a time when Netflix is cutting streaming quality to reduce bandwidth, we take a look at the work that’s gone into optimising the latency within the switches at ISPs, which was found to be surprisingly high.

Bruce Spang interned at Netflix and studied the phenomenon of unexpected latency variation within the Netflix caches deployed at ISPs to reduce latency and bandwidth usage. He starts by introducing us to TCP buffering models, looking at how they work, what they are trying to achieve and how big a buffer is supposed to be. This matters because, with a big buffer, data can take a long time to leave the buffer once it fills, adding latency to the packets as they travel through. Too small, of course, and packets have to be dropped. That creates more rebuffering, which impacts the ABR choice, leading to lower quality.
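
For a rough feel for the numbers involved, the classic rule of thumb sizes a buffer at the bandwidth-delay product of the link. The sketch below uses an assumed 10 Gb/s port and 50 ms round-trip time purely for illustration; these figures are not taken from the talk.

```python
def bdp_bytes(link_rate_bps: float, rtt_seconds: float) -> float:
    """Classic rule-of-thumb buffer size: the bandwidth-delay product."""
    return link_rate_bps * rtt_seconds / 8  # convert bits to bytes

# Illustrative figures only: a 10 Gb/s port and a 50 ms round-trip time.
print(bdp_bytes(10e9, 0.050) / 1e6)  # ~62.5 MB of buffer
```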

Bruce was part of an experiment that studied whether the buffer model in use behaved as expected and, whilst he found that it did most of the time, he also found that video performance varied, which was undesirable. To explain this, he details the testing they did and the finding that, as you would expect, latency increases during congested periods. Moreover, he showed that a 500MB buffer produced more latency than a 50MB one.
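
The 500MB versus 50MB result makes intuitive sense once you work out how long a completely full buffer takes to drain onto the link. A quick back-of-the-envelope sketch, again assuming a 10 Gb/s port for illustration:

```python
def worst_case_queuing_delay(buffer_bytes: float, link_rate_bps: float) -> float:
    """Seconds needed to drain a completely full buffer onto the link."""
    return buffer_bytes * 8 / link_rate_bps

link = 10e9  # assumed 10 Gb/s port, illustrative only
print(worst_case_queuing_delay(500e6, link))  # a full 500 MB buffer adds ~0.40 s
print(worst_case_queuing_delay(50e6, link))   # a full 50 MB buffer adds ~0.04 s
```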

To explain the remaining unexpected behaviour, such as long-tail content having lower latency than popular content, Bruce looks under the hood of the router to see how virtual output queues (VOQs) are used to create queues of traffic and how they work. Having seen the relatively simple logic behind the system, Bruce talks about the results they’ve achieved working with the vendor to improve the buffering logic.
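
For context, a virtual output queue architecture keeps, at each input port, a separate queue per output port, so a congested output only delays its own traffic rather than blocking everything behind it. A toy sketch of the idea (the structure and names are illustrative, not the vendor’s implementation):

```python
from collections import defaultdict, deque

class VOQSwitch:
    """Toy model of virtual output queuing: one queue per (input, output) pair."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        # voq[input_port][output_port] -> queue of packets waiting for that output
        self.voq = defaultdict(lambda: defaultdict(deque))

    def enqueue(self, input_port: int, output_port: int, packet) -> None:
        self.voq[input_port][output_port].append(packet)

    def schedule(self, output_port: int):
        """Serve one packet destined for output_port, round-robin over inputs."""
        for input_port in range(self.num_ports):
            queue = self.voq[input_port][output_port]
            if queue:
                return queue.popleft()
        return None  # nothing waiting for this output
```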

Watch now!
Speaker

Bruce Spang
PhD Student, Stanford

Webinar: Multicast ABR opens the door to a new DVB era

Now available on demand

With video delivery constituting the majority of internet traffic, it’s clear there’s a big market for it. On the internet, this is done with unicast streaming, where the stream source has to send a separate stream to each receiver. The way this has been implemented over HTTP allows for a very natural system, called Adaptive Bit Rate (ABR), which means that even when your network capacity is constrained (by the network itself or by bandwidth contention), you can still get a picture, just at a lower bit rate.

But when extrapolating this system to linear television, we find that large audiences place massive demands on the originating infrastructure. This load drives its architects to implement a lot of redundancy, making it expensive to run. Within a broadcaster, such loads would be dealt with using multicast, but on the internet multicast is not enabled. In an enterprise IPTV system, where each employee has access via a program on their PC and/or a set-top box on their desk, the video is sent by multicast, meaning it is the network that duplicates the streams to each endpoint, not the source.

By combining existing media encoding and packaging formats with the efficiency of point-to-multipoint distribution to the edge of IP-based access networks, it is possible to design a system for linear media distribution that is both efficient and scalable to very large audiences, while remaining technically compatible with the largest possible set of already-deployed end user equipment.

This webinar by Guillaume Bichot, given in place of his talk at the cancelled DVB World 2020 event, explains DVB’s approach to doing just that: combining multicast distribution of content with delivery of an ABR feed, called DVB-mABR.

Video broadcast has been digitised more than once since its initial broadcasts in the 1930s. In Europe, we have seen IP carriage (IPTV) services and, most recently, the hybrid approach called HbbTV, where broadband-delivered content is merged with broadcast content with the aim of presenting a unified service to the viewer. Multicast ABR (mABR) defines the carriage of Adaptive Bit Rate video formats and protocols over a broadcast/multicast feed. Guillaume explains the mABR architecture and then looks at the deployment possibilities and what the future might hold.

mABR comprises a multicast server at the video headend. This server, or ‘transcaster’, receives standard ABR feeds and encapsulates them into multicast before sending. The gateway at the receiving end does the opposite, removing the multicast encapsulation to reveal the ABR segments underneath. It’s not uncommon for mABR to be combined with HTTP unicast, allowing unicast to pick up the less popular channels while the main services benefit from multicast.
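
As a purely conceptual sketch of the gateway side of that idea, the snippet below joins a multicast group, collects hypothetical segment datagrams and writes them to a directory that an ordinary HTTP server could serve to the player. The real DVB-mABR specification defines its own transport and packaging; the group address, port and datagram layout here are assumptions for illustration only.

```python
import socket
import struct
from pathlib import Path

MCAST_GROUP = "239.1.1.1"   # illustrative group address, not from the spec
MCAST_PORT = 5000           # illustrative port
CACHE_DIR = Path("/tmp/mabr-cache")  # directory a local HTTP server would expose

def receive_segments() -> None:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # Join the multicast group so the network, not the origin, duplicates the stream.
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, _addr = sock.recvfrom(65535)
        # Assume (for this sketch) each datagram carries "segment-name\n" + payload.
        name, _, payload = data.partition(b"\n")
        (CACHE_DIR / name.decode()).write_bytes(payload)  # player fetches it over local HTTP

if __name__ == "__main__":
    receive_segments()
```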

Guillaume explores these topics plus whether mABR saves bit rate, how it’s deployed and how it can change in the future to keep up with viewers’ requirements.

Watch now on demand!
Speaker

Guillaume Bichot
Principal Engineer, Head of Exploration
Broadpeak

Video: Pervasive video deep-links

Google have launched a new initiative allowing publishers to highlight key moments in a video so that search results can jump straight to that moment. Whether you have a video that looks at 3 topics, one which poses questions and provides answers or one which has a big reveal and reaction shots, this could help increase engagement.

The plan is that content creators tell Google about these moments, so Paul Smith from theMoment.tv takes to the stage at San Francisco Video Tech to explain how. After looking at a live demo, Paul takes a dive into the webpage code that makes it happen. Hidden in the page’s head, he shows a script tag with its type set to application/ld+json. This holds the metadata for the video as a whole, such as the thumbnail URL and the content URL, but it also defines the highlighted ‘parts’ of the video, with URLs for those.
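
As a rough illustration of what such a script tag might contain, the sketch below builds the payload using the schema.org VideoObject and Clip vocabulary. The URLs, names and timings are made up, and the exact properties Google expects may differ from this simplified example.

```python
import json

# Illustrative key-moment metadata; values are invented for this sketch.
video_metadata = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Example talk",
    "thumbnailUrl": "https://example.com/thumb.jpg",
    "contentUrl": "https://example.com/video.mp4",
    "hasPart": [
        {
            "@type": "Clip",
            "name": "Topic one",
            "startOffset": 30,   # seconds into the video
            "endOffset": 120,
            "url": "https://example.com/video?t=30",
        },
        {
            "@type": "Clip",
            "name": "The big reveal",
            "startOffset": 300,
            "endOffset": 360,
            "url": "https://example.com/video?t=300",
        },
    ],
}

# The serialised form is what would sit inside
# <script type="application/ld+json"> ... </script> on the page.
print(json.dumps(video_metadata, indent=2))
```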

While the programme is currently limited to a small set of content publishers, everyone can benefit from these insights on Google video search. Google will also look at YouTube descriptions in which some people give links to specific times, such as different tracks in a music mix, and bring those into the search results.

Paul looks at what this means for website and player writers. One suggestion is the need to scroll the page to the correct video and to make the different videos on a page clearly signposted. Paul also looks towards the future at what could be done to better integrate with this feature, for example updating the player UI to see and create moments, or improving the ability to seek with sub-second accuracy. Intriguingly, he suggests that it may be advantageous to synchronise segment timings with the beginning of moments for popular videos. Certainly food for thought.

Watch now!
Speaker

Paul Smith
Founder,
theMoment.tv