Video: Making a case for DVB-MABR

Multicast ABR (mABR) is a way of delivering traditional HTTP-based streams like HLS and DASH over multicast. On a managed telco network, the services are multicast to thousands of homes and only within the home itself does the stream get converted back to unicast HTTP. Devices in the home then access streaming services in exactly the same way as they would Netflix or iPlayer over the internet, but the content is served locally. Streaming is a point-to-point service, so each device takes its own stream: if you have 3 devices in the home watching a service, you'll be sending 3 streams out to them. With mABR, the core network only ever sees one stream to the home and the linear scaling is done internally. Not only does this help remove peaks in traffic, but it significantly reduces the load on the upstream networks and origin servers and smooths out bandwidth use.
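
To make the gateway idea concrete, here is a minimal sketch, in Python, of what a home gateway does conceptually: segments arrive once over multicast and are re-served to any number of in-home players as ordinary unicast HTTP. The multicast group, port and per-datagram framing are invented for illustration; a real DVB-MABR gateway uses the specification's own multicast transport, handles segment reassembly and repair, and serves manifests as well as media.

```python
# Conceptual sketch of an mABR home gateway: one multicast feed in,
# any number of unicast HTTP clients out. Framing is hypothetical.
import socket
import struct
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

MCAST_GRP, MCAST_PORT = "239.1.2.3", 5000   # hypothetical group/port
segment_cache = {}                           # path -> segment bytes
cache_lock = threading.Lock()

def multicast_receiver():
    """Join the multicast group and cache each received segment once."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        # Hypothetical framing: "<segment path>\n<segment bytes>" per datagram.
        data, _ = sock.recvfrom(65535)
        path, _, payload = data.partition(b"\n")
        with cache_lock:
            segment_cache[path.decode()] = payload

class SegmentHandler(BaseHTTPRequestHandler):
    """Serve cached segments to local players exactly like an HTTP origin."""
    def do_GET(self):
        with cache_lock:
            body = segment_cache.get(self.path)
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    threading.Thread(target=multicast_receiver, daemon=True).start()
    # Every player in the home fetches from this one local endpoint, so only
    # a single copy of each segment ever crosses the access network.
    ThreadingHTTPServer(("0.0.0.0", 8080), SegmentHandler).serve_forever()
```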

This video from DVB lays out the business cases which are enabled by mABR. DVB has approved the mABR specification, which is now going for standardisation within ETSI. It's already gained some traction with deployments in the field, so this talk looks at the kinds of projects that may drive the continued growth of mABR.

Williams Tovar starts by making the case for OTT over satellite. With OTT services continuing to take viewing time away from traditional broadcast services, satellite providers are working to ensure they retain relevance and offer value. Delivering these OTT services is, thus, clearly beneficial, but why do it over satellite? On top of the mABR benefits briefly outlined above, this business case recognises that not everyone is served by a good internet connection. Distributing OTT by satellite can bring high-bitrate OTT experiences to areas with poor broadband and could also be an efficient way to deliver to large public places such as hotels and ships.

Julien Lemotheux from Orange presents a business case for next-generation IPTV. The idea here is to bring down the cost of STBs by replacing CA security with DRM and replacing the chipset with a cheaper, less specialised one. As DASH and HLS streaming are CPU-based, well-understood tasks, cheaper, general-purpose, mass-produced chipsets can be used, and removing CA removes some hardware from the box. Also to be considered is that the OTT ecosystem is continually seeing innovation, so delivering services in the same format allows providers to keep their offerings up to date without custom development in the IPTV software stack.

Xavier Leclercq from Broadpeak looks, next, at Scaling ABR Delivery. This business case considers what the ultimate situation will be regarding MPEG-2 transport streams and ABR. Why don't we provide all services as Netflix-style ABR streams? One reason is that the scale is enormous: with one connection per device, CDNs and national networks would not be able to cope. Another is that the QoS for MPEG-2 transport streams is very good and, whilst it is possible to have bad reception, there is little else that causes interruption to the stream.

mABR can address both of these challenges. By delivering one stream to each home and having the local gateway do the scaling, mass delivery of streamed content becomes both predictable and practical. Whilst there is still a lot of bandwidth involved, the load on the CDNs is much more controlled and, with lower peaks, CDN cost is reduced as this is normally based on maximum throughput. mABR can also be delivered with a higher QoS than public internet traffic, giving it better reliability which could move it into the realm of traditional transport-stream-based services. Xavier explains that if you put the gateway within a TV, you can deliver a set-top-box-less service, whilst if you want to address all devices in your home, you can provide a separate gateway.

Before the video finishes with a Q&A session, Williams delivers the business cases for Backhauling over Satellite for CDNs and IP Backhaul for 5G Networks. The two use cases have similarities. The CDN backhauling example looks at using satellite to deliver efficiently and directly to CDN PoPs in hard-to-reach areas which may have limited internet links; the satellite could deliver a high-bandwidth set of streams to many PoPs at once. A similar issue presents itself with 5G: there is so much bandwidth available over the air that getting enough content to the transmitter becomes a concern. Whether by satellite or IP multicast, mABR could be used for CDN backhauling to 5G networks, delivering into a Mobile Edge Computing (MEC) cache. A further advantage of doing this is avoiding issues with CDN and core network scalability where, again, keeping the individual requests and streams away from the CDN and the core network is a big benefit.

Watch now!
Download the slides from this video
Speakers

Williams Tovar
Solution Pre-Sales Manager,
ENENSYS Technologies
Julien Lemotheux
Standardisation Expert,
Orange Labs
Xavier Leclercq
VP Business Development,
Broadpeak
Moderator: Christophe Berdinat
Chairman CM-I MABR, DVB
Innovation and Standardisation Manager, ENENSYS

Video: Optimal Design of Encoding Profiles for Web Streaming

With us since 1998, ABR (Adaptive Bitrate) has been allowing streaming players to select a stream appropriate for their computer and bandwidth. But in this video, we hear that over 20 years on, we’re still developing ways to understand and optimise the performance of ABRs for delivery, finding the best balance of size and quality.

Brightcove's Yuriy Reznik takes us deep into the theory, but starts at the basics of what ABR is and why we use it. He covers how it delivers a whole series of separate streams at different resolutions and bitrates. Whilst that works well, he quickly starts to show the downsides of 'static' ABR profiles. These are where a provider decides that all assets will be encoded with the same set of 6 or 7 bitrates, even though some titles, such as cartoons, will require less bandwidth than sports programmes. This is where per-title and other encoding techniques come in.

Netflix coined the term 'per-title encoding', which has since also been called content-aware encoding. This takes into consideration the content itself when determining the bitrate to encode at. Using automatic processes to measure the objective quality of sample encodes, it is able to find the optimum bitrate for each title.

Content & network-aware encoding takes into account the network delivery as part of the optimisation as well as the quality of the final video itself. It’s able to estimate the likelihood of a stream being selected for playback based upon its bitrate. The trick is combining these two factors simultaneously to find the optimum bitrate vs quality.

The last element to add, in order to make this ABR optimisation as realistic as possible, is to take into account the way people actually view the content. Looking at a real example from the US Open, we see how, on PCs, the viewing window can be many different sizes and you can calculate the probability of the different sizes being used. Furthermore, we know there is some intelligence in the players: they won't select a stream with a resolution much bigger than the browser viewport.

Yuriy starts the final section of his talk by explaining that he brought in another quality metric, from Westerink & Roufs, which allows him to estimate how people perceive video which has been encoded at a certain resolution, scaled to a fixed interim resolution for decoding and then scaled again to the size of the browser window.

The result of adding in this further check is that fewer points on the ladder tend to be better, giving an overall higher quality value. Only a few resolutions are needed to get a good average quality, and going much beyond three is typically not useful for web playback; adding more brings little benefit.

Yuriy finishes by introducing SSIM modeling of the noise of an encoder at different bitrates. Bringing together all of these factors, modelled as equations, allows him to suggest optimal ABR ladders.

Watch now!
Speaker

Yuriy Reznik
Technology Fellow and Head of Research,
Brightcove

Video: Bandwidth Prediction for Multi-Bitrate Streaming at Low Latency

Low-latency protocols like CMAF are wreaking havoc with traditional ABR algorithms. We're having to come up with new ways of assessing whether we're running out of bandwidth. Traditionally, this is done by looking at how long a video chunk takes to download and comparing that with its playback duration. If you're downloading at the same speed it's playing, it's time to consider changing to a lower-bandwidth stream.

As latencies have come down, servers will now start sending data from the beginning of a chunk as it's being written, which means it can't be downloaded any quicker than it's produced. To learn more about this, look at our article on ISO BMFF and this streaming primer. Since the file can't be downloaded any quicker, we can't tell whether we should move up in bitrate to a better-quality stream, so while we can switch down if we start running out of bandwidth, we can't find a time to go up.

Ali C. Begen and team have been working on a way around this. The problem is that with the newer protocols, you pre-request files which start getting sent when they are ready. As such, you don't actually know the time a chunk starts downloading to you. Whilst you know when it finished, you don't have access, via JavaScript, to when the file started being sent to you, robbing you of a way of determining the download time.

Ali's algorithm uses the time the last chunk finished downloading in place of the missing timestamp, figuring that the new chunk is going to start loading pretty soon after the old one finishes. Looking at the data, we see that the gap between one chunk finishing and the next one starting does vary. This led Ali's team to use a sliding-window moving average taking the last 3 download durations into consideration. This is assumed to be enough to smooth out some of those variances and provides the data to allow them to predict future bandwidth and make a decision to change bitrate or not. There have been a number of alternative suggestions over the last year or so, all of which perform worse than this technique, called ACTE.
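
The sketch below illustrates the idea as summarised above, rather than the published ACTE algorithm itself: treat the moment the previous chunk finished as a stand-in for the missing start time, and smooth the last three measurements with a sliding window. The chunk sizes and timestamps are invented for the example.

```python
# Sliding-window bandwidth estimate when download start times are not visible.
from collections import deque

WINDOW = 3  # last three download durations, as in the talk

class BandwidthEstimator:
    def __init__(self):
        self.durations = deque(maxlen=WINDOW)   # seconds
        self.sizes = deque(maxlen=WINDOW)       # bits
        self.last_finish = None

    def on_chunk_downloaded(self, finish_time, size_bits):
        if self.last_finish is not None:
            # Approximate download duration: gap since the previous chunk finished.
            self.durations.append(finish_time - self.last_finish)
            self.sizes.append(size_bits)
        self.last_finish = finish_time

    def predicted_bandwidth_bps(self):
        if not self.durations:
            return None
        # Moving average over the sliding window of recent chunks.
        return sum(self.sizes) / sum(self.durations)

# Example: ~1.5 Mb chunks finishing roughly every two seconds.
est = BandwidthEstimator()
for t, size in [(0.0, 1.5e6), (2.1, 1.5e6), (4.0, 1.6e6), (6.2, 1.4e6)]:
    est.on_chunk_downloaded(t, size)
print(est.predicted_bandwidth_bps())  # estimated bits per second
```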

In the last section of this talk, Ali explores the entry he was part of in a Twitch-sponsored competition to keep playback latency close to a second under test conditions with varying bandwidth. Playback speed is key to much work in low-latency streaming as it's the best way to trim off a little bit of latency when things are going well and allows you to buy time if you're waiting for data; the big challenge is doing it without the viewer noticing. The entry used a combination of heuristics and machine learning which worked so well that they were runners-up in the contest.

Watch now!
Speaker

Ali C. Begen
Technical Consultant, Comcast
Professor, Computer Science, Özyeğin University

Video: DASH: from on-demand to large scale live for premium services

A bumper video here with 7 short talks from VideoLAN, Will Law and Hulu among others, all exploring the state of MPEG DASH today, the latest developments and the hot topics such as low latency, ad insertion, bandwidth prediction and one red letter feature of DASH – multi-DRM.

The first 10 minutes set the scene, introducing the DASH Industry Forum (DASH IF) and explaining who takes part and what it does. Thomas Stockhammer, who is chair of the Interoperability Working Group, explains that DASH IF is made up of member companies, headline members including Google, Ericsson, Comcast and Thomas' employer Qualcomm, and works to promote the adoption of MPEG-DASH by improving the specification, advising on how to put it into practice in real life, promoting interoperability, and acting as a liaison point for other standards bodies. The remaining talks in this video exemplify the work being done by the group to push the technology forward.

Meeting Live Broadcast Requirements – the latest on DASH low latency!
Akamai's Will Law takes to the mic next to look at the continuing push to make low-latency streaming available as a mainstream option for services to use. Will Law has spoken about low latency before, at Demuxed 2019, where he discussed the three main file-based approaches to delivering low latency (DASH, LHLS and LL-HLS), as well as in his famous 'Chunky Monkey' talk where he explains how CMAF, an implementation of MPEG-DASH, works in light-hearted detail.

In today's talk, Will sets out what 'low latency' is and revises how CMAF allows latencies of below 10 seconds to be achieved. A lot of people focus on the duration of the chunks when reducing latency and, while it's true that it's hard to get low latency with 10-second chunks, Will puts much more emphasis on the player buffer than on the chunk size itself in producing a low-latency stream. This is because, even when you have very small chunks, choosing when to start playing (immediately or waiting for the next chunk) can be an important part of keeping the latency down between live and your playback position. A common technique to manage that latency is to slightly increase and decrease playback speed in order to manage the gap, hopefully without the viewer noticing.
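
As a rough illustration of that speed-adjustment technique (not Will's, nor any particular player's, algorithm), a controller only needs to nudge the playback rate a few percent either way; the thresholds and rates below are invented for the example.

```python
# Toy playback-rate controller for low-latency playback: speed up a little
# when too far behind live, slow down a little when the buffer runs low.
# Thresholds and rates are illustrative only; real players tune these carefully.

TARGET_LATENCY = 3.0   # seconds behind live we aim to sit
MIN_BUFFER = 0.5       # below this, prioritise not stalling

def playback_rate(current_latency, buffer_level):
    """Return a playback rate close to 1.0 so the viewer doesn't notice."""
    if buffer_level < MIN_BUFFER:
        return 0.95                      # slow down slightly to buy time for data
    if current_latency > TARGET_LATENCY + 0.5:
        return 1.05                      # speed up slightly to catch back up to live
    if current_latency < TARGET_LATENCY - 0.5:
        return 0.95                      # drifted too close to live; ease off
    return 1.0                           # within tolerance, play at normal speed

# Example: 4.2 s behind live with a healthy 2 s buffer -> speed up a touch.
print(playback_rate(current_latency=4.2, buffer_level=2.0))  # 1.05
```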

Chunk-based streaming protocols like HLS make Adaptive Bitrate (ABR) relatively easy: the player monitors the download of each chunk. If a, say, 5-second chunk arrives within 0.25 seconds, it knows it could safely choose a higher-bitrate chunk next time. If, however, the chunk arrives in 4.8 seconds, it can choose a lower-bitrate chunk next time so as to receive it with more headroom. With CMAF this is not easy to do since the segments all arrive in near real time: the transferred files represent very small sections and are sent as soon as they are created. This problem is addressed in a later talk in this video.
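
To illustrate the segment-timing rule described here (again, a sketch rather than a production ABR algorithm), a player can turn the download time of a whole segment into a throughput estimate and pick the highest rendition that fits; the ladder and safety factor below are made up.

```python
# Toy segment-based ABR decision: compare how long a segment took to download
# with how long it plays for, and choose the next rendition accordingly.

RENDITIONS_KBPS = [500, 1200, 2500, 5000]   # available bitrate ladder
SAFETY = 0.8                                 # only spend 80% of measured throughput

def next_bitrate(segment_duration_s, download_time_s, segment_bits):
    measured_throughput = segment_bits / download_time_s      # bits per second
    budget = measured_throughput * SAFETY
    affordable = [b for b in RENDITIONS_KBPS if b * 1000 <= budget]
    return affordable[-1] if affordable else RENDITIONS_KBPS[0]

# A 5 s, ~1.5 Mb segment arriving in 0.25 s suggests plenty of headroom...
print(next_bitrate(5.0, 0.25, 1.5e6))   # picks a high rung
# ...whereas the same segment arriving in 4.8 s suggests switching down.
print(next_bitrate(5.0, 4.8, 1.5e6))    # picks the lowest rung
```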

To finish off, Will talks about 'Resync Elements', which are a way of signalling mid-chunk IDRs. These help players find all the points at which they can join a stream or switch bitrate, which is important when some are not at the start of chunks. For live streams these are noted in the manifest file, which Will walks through on screen.

Ad Insertion in Live Content: Pre-, Mid- and Post-rolling
Whilst not always a hit with viewers, ads are important to many services in terms of generating the revenue needed to continue delivering content to viewers. In order to provide targeted ads, to ensure they are available and to ensure that there is a record of which ads were played when, the ad-serving infrastructure is complex. Hulu’s Zachary Cava walks us through the parts of the infrastructure that are defined within DASH such as exchanging information on ‘Ad Decision Parameters’ and ad metadata.

In chunked streams, ads are inserted at chunk boundaries. This presents challenges in terms of making sure that certain parameters are maintained during the swap, which is given the general name of 'Content Splice Conditioning'. This conditioning can align the first segment with the period start time, for example. Zachary lays out the three options provided for this splice conditioning before finishing his talk covering prepared content recommendations, ad metadata and tracking.

Bandwidth Prediction for Multi-bitrate Streaming at Low Latency
Next up is Comcast's Ali C. Begen, who follows on from Will Law's talk to cover bandwidth prediction when operating at low latency. As an example of the problem, let's look at HTTP/1.1, which allows us to download a file before it's finished being written. This allows us to receive a 10-second chunk as it's being written, which means we'll receive it at the same rate the live video is being encoded. As a consequence, the time each chunk takes to arrive will be the same as the real-time chunk duration (in this example, 10 seconds). When you are dealing with already-written chunks, your download time depends on your bandwidth, and the time taken can therefore indicate whether your player should increase or decrease the bitrate of the stream it's pulling. Getting this indicator back for low-latency streams is what Ali presents in this talk.

Based on this paper Ali co-authored with Christian Timmerer, he explains a way of looking at the idle time between consecutive chunks and using a sliding window to generate a bandwidth prediction.

Implementing DASH low latency in FFmpeg
Open-source developer Jean-Baptiste Kempf, who is well known for his work on VLC, discusses his work writing a low-latency MPEG-DASH implementation for FFmpeg, called DASH-LL. He explains how it works and how to use it, with examples. You can copy and paste the examples from the PDF of his talk.

Managing multi-DRM with DASH
The final talk, ahead of Q&A is from NAGRA discussing the use of DRM within MPEG-DASH. MPEG-DASH uses Common Encryption (CENC) which allows the DASH protocol to use more than one DRM scheme and is typically seen to allow the use of ‘FairPlay’, ‘Widevine’ and ‘PlayReady’ encryption schemes on a single stream dependent on the OS of the receiver. There is complexity in having a single server which can talk to and negotiate signing licences with multiple DRM services which is the difficulty that Lauren Piron discusses in this final talk before the Q&A led by Ericsson’s VP of international standards, Per Fröjdh.

Watch now!
Speakers

Thomas Stockhammer
Director of Technical Standards,
Qualcomm
Will Law
Chief Architect,
Akamai
Zachary Cava
Software Architect,
Hulu
Ali C. Begen
Technical Consultant, Video Architecture, Strategy and Technology group,
Comcast
Jean-Baptiste Kempf
President & Lead VLC Developer
VideoLAN
Laurent Piron
Principal Solution Architect
NAGRA
Moderator: Per Fröjdh
VP International Standards,
Ericsson