Video: Web Media Standards

The internet has been a continuing story of proprietary technologies being overtaken by open ones, from the proprietary precursors of TCP/IP to Flash/RTMP video delivery giving way to HLS. Understanding why these technologies appear, why they are subsumed by open standards and the boost in popularity that happens at that transition helps us make decisions now and foresee how the technology landscape may look in five or ten years’ time.

This talk, by John Simmons, is a talk of two halves. Looking first at the history of how our standards coalesced into what we have today will fill in many blanks and make the purpose of current technologies like MPEG DASH & CMAF clearer. He then looks at how we can understand what we have today in light of similar situations in the past, answering the question of whether we are at an inflexion point in technology.

John first looks at the importance of making DRM-protected content as portable as non-protected content, which has always been easy to move between computers and systems. This was in response to a WIPO analysis which, as many would agree, concluded that this was essential to enable legal video use on the internet. In 2008, Microsoft analysed all the elements needed, beyond the encryption itself, to allow such media to be portable. It would require HTML extensions for delivery, DRM signalling, authentication, a standard protocol for adaptive delivery (also known as ABR) and an adaptive container format. We then take a walk through the timeline from 2009 through to 2018, seeing the beginnings and published availability of technologies such as Common Encryption, MPEG DASH and CMAF.

Milestones for Web Media Portability

John then walks through these key technologies, starting with the importance of Common Encryption (also known as CENC). Previously, each DRM method had its own container format. Harmonisation of DRM is likely never going to happen, so we’ll always have Apple’s own, Google’s own, Microsoft’s and plenty of others. For streaming providers, delivering all the different formats is a major problem and makes for messy, duplicative workflows. Common Encryption allows for one container format which can carry any DRM information, allowing for a single workflow with different inputs. On the player side, the player can now simply accept a single stream of DRM information, authenticate with the appropriate service and decode the video.
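As a rough illustration of that single player-side workflow, a browser can probe the platform for whichever DRM system it supports through the Encrypted Media Extensions and use it against the same Common Encryption stream. The key-system identifiers and codec string below are common values used purely for illustration, not a complete or recommended configuration.

```typescript
// Sketch: probing for whichever DRM system the device supports via EME.
// Key-system identifiers and the codec string are illustrative, not exhaustive.
const keySystems = ['com.widevine.alpha', 'com.microsoft.playready', 'com.apple.fps.1_0'];

async function selectKeySystem(): Promise<MediaKeySystemAccess | null> {
  const config: MediaKeySystemConfiguration[] = [{
    initDataTypes: ['cenc'],                      // Common Encryption initialisation data
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
  }];
  for (const name of keySystems) {
    try {
      return await navigator.requestMediaKeySystemAccess(name, config);
    } catch {
      // This key system isn't available on this device; try the next one.
    }
  }
  return null;                                    // no supported DRM found
}
```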

CMAF is another key technology called out by John as enabling portability of media. It was co-developed with Apple to provide a common media format for HLS and DASH. We’ve covered this before on The Broadcast Knowledge, starting with the ISO BMFF format on which DASH and CMAF are based, Will Law’s famous ‘Chunky Monkey’ talk and many more. We recently covered FuboTV’s talk on multi-codec encoding and packaging for HLS & DASH delivery.

Also highlighted by John are the JavaScript Media Source Extensions (MSE) and Encrypted Media Extensions (EME), which allow browsers/JavaScript to interact with both ABR/adaptive streaming and DRM. He then talks about CTA WAVE, a project that specifically aims to improve streamed media experiences on consumer devices, CTA being the Consumer Technology Association who are behind the annual CES exhibition in Las Vegas.
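To make the MSE half concrete, here is a minimal sketch of feeding fetched segments into the browser’s playback pipeline; the segment URLs and codec string are placeholders rather than anything from the talk.

```typescript
// Sketch: pushing fetched CMAF/fMP4 segments into the browser via Media Source Extensions.
// Segment URLs and the codec string are placeholders for illustration.
const video = document.querySelector('video') as HTMLVideoElement;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.640028"');
  for (const url of ['/video/init.mp4', '/video/seg1.m4s']) {
    const data = await fetch(url).then(r => r.arrayBuffer());
    await new Promise<void>(resolve => {
      // Wait for each append to finish before queueing the next segment.
      buffer.addEventListener('updateend', () => resolve(), { once: true });
      buffer.appendBuffer(data);
    });
  }
});
```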

What is often less apparent is the work currently happening to develop new standards and specifications. John calls out a number of projects within W3C and MPEG such as low-latency support for CMAF and MSE, and codec switching in MSE. Work on signalling ad period boundaries means SCTE-35 is making its debut in JavaScript, with ongoing work to create the link between ad markers and JS applications. He also calls out VVC and AV1 mappings into CMAF.

In the second part of the presentation, John asks ‘where will we end up?’, drawing on two examples. One is the number of TCP/IP hosts between 1980 and 1992: once TCP/IP was publicly available there was an exponential increase in adoption, moving on from the proprietary network interfaces available in the years before. Similarly with websites between 1990 and 1997: exponential growth happened after 1993, when the standard was set for web clients. This did take a few years to have a marked effect, but the number of websites moved from a flat ‘less than 100’ to 600, then 10,000 in 1994, a quarter of a million by 1995 and over one million in 1996. This shows the difference in power between ‘walled garden’ environments and the open internet.

John sees media technology today as still having a number of ‘traditional’ walled gardens such as DISH and Sky TV. He sees people self-serving from multiple walled gardens to create their own larger pool of media options; these viewers are typically known as ‘cord cutters’. He therefore sees two options for the future. One is ever larger walled gardens where large companies aggregate the content of smaller content owners and providers. The other is cloud services that act as a one-stop shop for your media but dynamically authenticate against whichever service is needed. This is a much more open environment without the need to subscribe separately to each and every outlet in the traditional sense.

Watch now!
Speakers

John Simmons
W3C Evangelist, Media & Entertainment
W3C

Video: Bandwidth Prediction for Multi-Bitrate Streaming at Low Latency

Low-latency protocols like CMAF are wreaking havoc with traditional ABR algorithms. We’re having to come up with new ways of assessing whether we’re running out of bandwidth. Traditionally, this is done by looking at how long a video chunk takes to download and comparing that with its playback duration. If you’re downloading at roughly the same speed it plays, it’s time to consider switching to a lower-bandwidth stream.
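As a minimal sketch of that traditional heuristic (the 0.8 safety margin below is an illustrative assumption, not a value from the talk):

```typescript
// Sketch of the classic segment-timing heuristic described above.
// The 0.8 safety margin is illustrative, not a value from the talk.
function shouldSwitchDown(downloadSeconds: number, segmentDurationSeconds: number): boolean {
  // If fetching a segment takes almost as long as playing it, bandwidth is running out.
  return downloadSeconds > 0.8 * segmentDurationSeconds;
}
```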

As latencies have come down, servers will now start sending data from the beginning of a chunk while it’s still being written, which means it can’t be downloaded any quicker than it’s produced. To learn more about this, look at our article on ISO BMFF and this streaming primer. Since the file can’t be downloaded any quicker, we can’t tell whether we could move up in bitrate to a better-quality stream, so while we can switch down if we start running out of bandwidth, we can’t find a time to go up.

Ali C. Begen and team have been working on a way around this. The problem is that with the newer protocols, you pre-request files which start getting sent when they are ready. As such, you don’t actually know when a chunk starts downloading. Whilst you know when it finished, you don’t have access, via JavaScript, to when the file started being sent to you, robbing you of a way of determining the download time.

Ali’s algorithm uses the time the last chunk finished downloading in place of the missing timestamp, figuring that the new chunk is going to start loading pretty soon after the old one finished. Looking at the data, however, the gap between one chunk finishing and the next one starting does vary. This led Ali’s team to a sliding-window moving average taking the last three download durations into consideration. This is assumed to be enough to smooth out some of those variances and provides the data that allows them to predict future bandwidth and decide whether or not to change bitrate. There have been a number of alternative suggestions over the last year or so, all of which perform worse than this technique, called ACTE.
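The idea can be sketched roughly as below; this is a minimal illustration of the description above, not the published ACTE implementation. Each chunk’s download duration is approximated from the finish times of consecutive chunks, and throughput is averaged over a three-chunk sliding window.

```typescript
// Sketch of the bandwidth estimate described above; not the published ACTE code.
// With chunked transfer the request start time isn't visible to JavaScript, so the
// previous chunk's finish time stands in for this chunk's start time.
class SlidingWindowEstimator {
  private samples: number[] = [];            // per-chunk throughput in bits per second
  private readonly windowSize = 3;

  addChunk(bytes: number, finishedAt: number, previousFinishedAt: number): number {
    const approxSeconds = finishedAt - previousFinishedAt;   // stand-in download duration
    this.samples.push((bytes * 8) / approxSeconds);
    if (this.samples.length > this.windowSize) this.samples.shift();
    // A moving average over the last three chunks smooths the varying inter-chunk gaps.
    return this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
  }
}
```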

In the last section of the talk, Ali explores the entry he was part of in a Twitch-sponsored competition to keep playback latency close to a second in test conditions with varying bitrate. Playback speed is key to much work in low-latency streaming as it’s the best way to trim off a little latency when things are going well and to buy time if you’re waiting for data; the big challenge is doing it without the viewer noticing. The entry used both a heuristics-based and a machine-learning approach, which worked so well that they were runners-up in the contest.
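The playback-speed idea can be sketched like this; the thresholds and rates below are illustrative assumptions, not the values used in the competition entry.

```typescript
// Sketch of latency-driven playback-rate control; thresholds and rates are illustrative.
function adjustPlaybackRate(video: HTMLVideoElement, liveLatencySeconds: number): void {
  const target = 1.0;                          // the competition's roughly one-second goal
  if (liveLatencySeconds > target + 0.25) {
    video.playbackRate = 1.05;                 // speed up slightly to claw back latency
  } else if (liveLatencySeconds < target - 0.25) {
    video.playbackRate = 0.95;                 // slow down a touch to protect the buffer
  } else {
    video.playbackRate = 1.0;                  // small changes stay unnoticeable to viewers
  }
}
```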

Watch now!
Speaker

Ali C. Begen
Technical Consultant, Comcast
Professor, Computer Science, Özyeğin University

Video: Banding Impairment Detection

It’s one of the most common visual artefacts affecting both video and images. The scourge of the beautiful sunset and the enemy of natural skin tones, banding is very noticeable as it’s not seen in nature. Banding happens when there is not enough bit depth to allow for a smooth gradient of colour or brightness which leads to strips of one shade and an abrupt change to a strip of the next, clearly different, shade.

In this Video Tech talk, SSIMWAVE’s Dr. Hojat Yeganeh explains what can be done to reduce or eliminate banding. He starts by explaining how banding is created during compression, where the quantiser reduces otherwise unique pixel values to very similar numbers, leaving them looking the same.
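To see how quantisation produces bands, here is a toy example, not anything from the talk: quantising a smooth 8-bit luma ramp down to a handful of levels turns the gradient into flat strips with abrupt steps between them.

```typescript
// Toy example: quantising a smooth 8-bit ramp to few levels produces flat bands.
function quantise(ramp: Uint8Array, levels: number): Uint8Array {
  const step = 256 / levels;
  // Each value collapses to the centre of its quantisation bin, so neighbouring
  // pixels that were slightly different now share exactly the same shade.
  return ramp.map(v => Math.min(255, Math.round(Math.floor(v / step) * step + step / 2)));
}

const ramp = Uint8Array.from({ length: 256 }, (_, i) => i);  // smooth 0..255 gradient
const banded = quantise(ramp, 8);                            // only 8 distinct shades remain
```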

Dr. Hojat explains why we see these edges so clearly, both by looking at how contrast is defined and by referencing Dolby’s famous graph of contrast steps against luminance, where 10-bit and 12-bit HDR are plotted against the ‘Barten limit’, the threshold below which contrast steps are imperceptible. The 12-bit PQ curve stays below that limit throughout, while 10-bit HDR does not, which shows that a 10-bit HDR image is always susceptible to showing quantised, i.e. banded, steps.

Why do we deliver 10-bit HDR video if it can still show banding? Because in real footage, camera noise and film grain serve to break up the bands. Dr. Hojat explains that this random noise amounts to ‘dithering’, well known in both audio and video: when you add random noise that changes over time, humans stop being able to see the bands. TV manufacturers also apply dithering to the picture before display, which can further break up banding at the cost of more noise in the image.
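A minimal sketch of the dithering idea, continuing the toy example above: adding a little random noise before quantisation scatters the band edges, trading clean banding for low-level noise.

```typescript
// Sketch of dithering: random noise of roughly one quantisation step, added before
// quantising, breaks up the clean band edges at the cost of visible noise.
function ditherAndQuantise(ramp: Uint8Array, levels: number): Uint8Array {
  const step = 256 / levels;
  return ramp.map(v => {
    const noisy = v + (Math.random() - 0.5) * step;          // noise that varies per pixel
    const q = Math.floor(noisy / step) * step + step / 2;    // same quantiser as before
    return Math.max(0, Math.min(255, Math.round(q)));
  });
}
```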

How can you automatically detect banding? We hear that typical metrics like VMAF and SSIM aren’t usefully sensitive to banding. SSIMWAVE’s SSIMPLUS metric, on the other hand, has been designed to also produce a banding detection map, which helps with the automatic identification of banding.
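SSIMPLUS itself isn’t detailed in the talk, but a naive illustration of the kind of evidence a banding map looks for is sketched below: long flat runs of identical pixels along a scanline, separated by small steps to the next shade.

```typescript
// Naive illustration only, not SSIMPLUS: count band edges along one scanline by
// looking for long flat runs that end in a small step to the next shade.
function bandEdgesInRow(row: Uint8Array, minRunLength = 16, maxStep = 2): number {
  let edges = 0;
  let run = 1;
  for (let i = 1; i < row.length; i++) {
    if (row[i] === row[i - 1]) {
      run++;                                                  // still inside a flat strip
    } else {
      if (run >= minRunLength && Math.abs(row[i] - row[i - 1]) <= maxStep) edges++;
      run = 1;                                                // a new strip starts here
    }
  }
  return edges;   // many such edges across many rows suggests visible banding
}
```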

The video finishes with questions covering banding as part of artistic intent, artefacts that typical metrics can’t identify, and consumer display limitations, among others.

Watch now!
Speakers

Dr. Hojat Yeganeh
Senior Member Technical Staff,
SSIMWAVE Inc.

Video: No-Reference QoE Assessment: Knowledge-based vs. Learning-based

Automatic assessment of video quality is essential for creating encoders, selecting vendors, choosing operating points and, for online streaming services, in ongoing service improvement. But getting a computer to understand what looks good and what looks bad to humans is not trivial. When the computer doesn’t have the source video to compare against, it’s even harder.

In this talk, Dr. Ahmed Badr from SSIMWAVE looks at how video quality assessment (VQA) works and goes into detail on No-Reference (NR) techniques. He starts by stating the case for VQA as an extension of, and often a replacement for, subjective scoring by people. Subjective scoring is clearly time-consuming, can be expensive because of the people and time involved, and requires specific viewing conditions; done properly, it needs a whole, carefully controlled room. So when it comes to analysing all the video created by a TV station or automating per-title encoding optimisation, we know we have to remove the human element.

Ahmed moves on to discuss the challenges of No-Reference VQA, such as identifying intended blur or noise. NR VQA is a two-step process: the first step is extracting features from the video; these features are then mapped to a quality score, which can be done with a machine learning/AI model, the technique Ahmed analyses next. The first task is to put together a carefully chosen dataset of videos, then to pick a metric to train against, for instance MS-SSIM or VMAF, so that the learning algorithm gets the feedback it needs to improve. The last two elements are choosing what you are optimising for, technically called a loss function, and then choosing an AI model to use.
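A toy sketch of that two-step shape is below, with hand-picked features and placeholder weights standing in for a trained model; none of this is SSIMWAVE’s actual feature set or mapping.

```typescript
// Toy sketch of the two-step NR pipeline: (1) extract features, (2) map them to a score.
// Features and weights are placeholders; a real model learns the mapping against a
// training metric such as MS-SSIM or VMAF.
interface Frame { width: number; height: number; luma: Uint8Array; }

function extractFeatures(f: Frame): number[] {
  let grad = 0;
  for (let y = 0; y < f.height; y++) {
    for (let x = 1; x < f.width; x++) {
      grad += Math.abs(f.luma[y * f.width + x] - f.luma[y * f.width + x - 1]);
    }
  }
  const meanGradient = grad / (f.height * (f.width - 1));     // crude blur/sharpness proxy
  const meanLuma = f.luma.reduce((a, b) => a + b, 0) / f.luma.length;
  return [meanGradient, meanLuma];
}

function predictQuality(features: number[], weights: number[], bias: number): number {
  // Step two: the learned mapping from features to a quality score (linear here for brevity).
  return features.reduce((sum, x, i) => sum + x * weights[i], bias);
}
```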

The dataset you create needs to be aimed at exploring a certain aspect or range of aspects of video. It could be that you want to optimise for sports, but if you need a broad array of genres, reducing compression or scaling artefacts may be the main theme of the video dataset. Ahmed talks about the millions of video samples they have collated and how they’ve used them to create their metric, SSIMPLUS, which can work both with and without a reference.

Watch now!
Speaker

Dr. Ahmed Badr
SSIMWAVE