Webinar: Multicast ABR opens the door to a new DVB era

Now available on demand

With video delivery constituting the majority of internet traffic, it’s clear there’s a big market for it. On the internet, video is delivered by unicast streaming, where the stream source has to send a separate stream to each receiver. The way this has been implemented over HTTP allows for a very natural system, called Adaptive Bit Rate (ABR), which means that even when your network capacity is constrained (by the network itself or by bandwidth contention), you can still get a picture, just at a lower bit rate.
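As a rough sketch of the ABR idea, the player's core decision can be as simple as picking the highest rendition its measured throughput can sustain. The bitrate ladder and safety margin below are invented for illustration, not taken from any standard or service:

```python
# Minimal sketch of ABR rendition selection: pick the highest
# ladder rung that fits within a safety margin of the measured
# throughput. Ladder values and margin are illustrative only.

LADDER_KBPS = [400, 800, 1600, 3200, 6000]  # hypothetical bitrate ladder

def pick_rendition(measured_kbps: float, safety: float = 0.8) -> int:
    """Return the highest rung within `safety` of measured throughput,
    falling back to the lowest rung when the link is very constrained."""
    usable = measured_kbps * safety
    candidates = [rung for rung in LADDER_KBPS if rung <= usable]
    return max(candidates) if candidates else LADDER_KBPS[0]

print(pick_rendition(5000))  # constrained link -> 3200 kbps
print(pick_rendition(300))   # very constrained -> lowest rung, 400 kbps
```

Real players add smoothing, buffer-level heuristics and abandonment logic on top, but the throughput-to-rung mapping is the heart of it.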

But when extrapolating this system to linear television, we find that large audiences place massive demands on the originating infrastructure. This load drives its architects to implement a lot of redundancy, making it expensive to run. Within a broadcaster, such loads would be dealt with by multicast traffic, but on the internet multicast is not enabled. In an IPTV system where each employee had access via a program on their PC and/or a set-top box on their desk, the video would be sent by multicast, meaning that it was the network providing the duplication of the streams to each endpoint, not the source.

By combining existing media encoding and packaging formats with the efficiency of point-to-multipoint distribution to the edge of IP-based access networks, it is possible to design a system for linear media distribution that is both efficient and scalable to very large audiences, while remaining technically compatible with the largest possible set of already-deployed end user equipment.

This webinar by Guillaume Bichot, delivered in place of his talk at the cancelled DVB World 2020 event, explains DVB’s approach to doing just that: combining multicast distribution of content with delivery of an ABR feed, known as DVB-mABR.

Video broadcast has been digitised more than once since its initial broadcasts in the 1930s. In Europe, we have seen IP carriage (IPTV) services and, most recently, the hybrid approach called HbbTV, where broadband-delivered content is merged with transmitted content with the aim of delivering a unified service to the viewer. Multicast ABR (mABR) defines the carriage of Adaptive Bit Rate video formats and protocols over a broadcast/multicast feed. Guillaume explains the mABR architecture and then looks at the deployment possibilities and what the future might hold.

mABR comprises a multicast server at the video headend. This server, or transcaster, receives standard ABR feeds and encapsulates them into multicast before sending. The decoder does the opposite, removing the multicast headers to reveal the ABR underneath. It’s not uncommon for mABR to be combined with HTTP unicast, allowing unicast to pick up the less popular channels while the main services benefit from multicast.
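To make the transcaster role concrete, here is a toy Python sketch that takes an ABR segment and pushes it to a multicast group in UDP datagrams. The group address, port and framing are invented for illustration; real DVB-mABR uses standardised object-delivery protocols rather than this ad-hoc chunking:

```python
# Toy illustration of the transcaster idea: split an ABR segment
# (here just a byte string) into UDP datagrams addressed to a
# multicast group. Group, port and payload size are assumptions.
import socket

GROUP, PORT, PAYLOAD = "239.1.2.3", 5004, 1400  # invented values

def send_segment(segment: bytes) -> int:
    """Send a segment as a series of datagrams; return the count."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Route the multicast sends via loopback so the sketch runs
    # without a real network attached.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton("127.0.0.1"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sent = 0
    for offset in range(0, len(segment), PAYLOAD):
        sock.sendto(segment[offset:offset + PAYLOAD], (GROUP, PORT))
        sent += 1
    sock.close()
    return sent

print(send_segment(b"x" * 4096))  # 4096 bytes -> 3 datagrams
```

The key point the sketch captures is that the source sends each segment once to a group address; replication to individual receivers is the network's job.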

Guillaume explores these topics plus whether mABR saves bit rate, how it’s deployed and how it can change in the future to keep up with viewers’ requirements.

Watch now on demand!

Guillaume Bichot
Principal Engineer, Head of Exploration

Video: Investigating Media Over IP Multicast Hurdles in Containerized Platforms

As video infrastructures have converged with enterprise IT, they have started incorporating technologies and methods typical of data centres. First came virtualisation, allowing COTS (Commercial Off The Shelf) components to be used. Then came the move towards cloud computing, taking advantage of economies of scale.

However, these innovations did little to address the dependence on monolithic projects that impeded change and innovation. Early strategies for video over IP were based on virtualised hardware and IP gateway cards. As the digital revolution took place with the emergence of OTT players, microservices based on containers were developed, with the aim of shortening the cycle of software updates and enhancements.

Containers insulate application software from the underlying operating system, removing the dependence on hardware and allowing software to be enhanced without changing the underlying operational fabric. This provides the foundation for more loosely coupled and distributed microservices, where applications are broken into smaller, independent pieces that can be deployed and managed dynamically.

Modern containerized server software methods such as Docker are very popular in OTT and cloud solutions, but not in SMPTE ST 2110 systems. In the video above, Greg Shay explains why.

Docker can package an application and its dependencies in a virtual container that can run on any Linux server. It uses the resource isolation features of the Linux kernel and a union-capable file system to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. Docker can get more applications running on the same hardware than VMs can, makes it easy for developers to quickly create ready-to-run containerized applications, and makes managing and deploying applications much easier.

However, there is currently a huge issue with using Docker for ST 2110 systems: Docker containers do not work with multicast traffic. The root of the multicast problem is the specific way the Linux kernel handles multicast routing. It is possible to wrap a VM around each Docker container just to achieve independent multicast network routing by emulating a full network interface, but this defeats the purpose of capturing and delivering the behaviour of the containerized product in a self-contained software deliverable.
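It helps to see what a multicast receiver actually asks of the kernel. Joining a group is a socket option carrying an `ip_mreq` structure (group address plus interface address), which prompts the kernel to send an IGMP membership report on that interface; in a container, that request is made against the container's virtual network stack rather than the host NIC, which is one common way the routing problem Greg describes surfaces. A minimal sketch of the join, with example addresses:

```python
# Minimal sketch of an IPv4 multicast group join: build the
# ip_mreq structure and hand it to the kernel via
# IP_ADD_MEMBERSHIP. The group address is an example.
import socket

def make_membership(group: str, iface: str = "0.0.0.0") -> bytes:
    # struct ip_mreq is simply two packed 4-byte IPv4 addresses:
    # the multicast group and the local interface to join it on.
    return socket.inet_aton(group) + socket.inet_aton(iface)

def join(sock: socket.socket, group: str, iface: str = "0.0.0.0") -> None:
    """Ask the kernel to join `group` on the interface holding `iface`."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership(group, iface))

mreq = make_membership("239.1.2.3")
print(len(mreq))  # 8 bytes: two IPv4 addresses
```

Whichever interface address is supplied, the membership is owned by the network namespace the socket lives in, so a container's join does not automatically become a join on the host's physical interface.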

There is a quick and dirty partial shortcut which enables a container to connect to all the networking resources of the Docker host machine, but it does not isolate containers onto their own IP addresses, nor does it let each container use its own ports. You don’t really get a nice structure of ‘multiple products in multiple containers’, which defeats the purpose of containerized software.

You can see the slides here.

Watch now!


Greg Shay
The Telos Alliance

Video: The 7th Circle of Hell; Making Facility-Wide Audio-over-IP Work


When it comes to IP, audio has always been ahead of video. Whilst audio often makes up for it in scale, its relatively low bandwidth requirements meant computing was up to the task of audio-over-IP long before uncompressed video-over-IP. Despite the early lead, audio-over-IP isn’t necessarily trivial, and this talk aims to give you a heads-up on the main hurdles so you can address them right from the beginning.

Matt Ward, Head of Audio at UK-based Jigsaw24, starts this talk by revisiting the reasons to go audio over IP (AoIP). The benefits vary for each company: for some, reducing cabling is a benefit; many are hoping it will be cheaper; for others, achievable scale is key. Matt’s quick to point out the drawbacks we should be cautious of, not least of which are complexity and skills gaps.

Matt fast-tracks us to better installations by hitting a list of easy wins, some of which are basic but disproportionately important as the project continues, e.g. naming paths and devices and keeping IP addresses in logical groups. Others are more nuanced, like ensuring cable performance. For Cat 6 cabling, it’s easy to get companies to test each of your cables to ensure the cable and all terminations are still working at peak performance.

Planning your timing system is highlighted as next on the road to success, with smaller facilities more susceptible to problems if they only have one clock. Timing in any facility has to be carefully considered, and Matt points out the role played by the Best Master Clock Algorithm (BMCA).

Network considerations are the final stop on the tour, underlining that audio doesn’t have to run on its own network as long as QoS is used to maintain performance. Matt details his reasons for keeping Spanning Tree Protocol off unless you explicitly know that you need it on. The talk finishes by discussing multicast distribution and IGMP snooping.

Watch now!

Matt Ward
Head of Audio,
Jigsaw24

Video: Multicast ABR

Multicast ABR is a mix of two very beneficial technologies which are seldom seen together. ABR (Adaptive Bitrate) allows a player to change the bitrate of the video and audio it’s playing to adapt to changing network conditions. Multicast is a network technology which efficiently delivers a video stream to many receivers without the source duplicating bandwidth for each one.

ABR has traditionally been deployed for chunk-based delivery like HLS, where each client downloads its own copy of the video in blocks several seconds in length. This means the bandwidth you use to distribute your video increases a thousandfold if 1,000 people play it.

Multicast works with live streams, not chunks, but allows the bandwidth used for 1,000 players to increase, in the best case, by 0%.
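The contrast is easy to put in numbers. Assuming an illustrative 5 Mbps stream (the figure is invented; any rate shows the same scaling):

```python
# Back-of-envelope source-bandwidth comparison for one stream
# delivered by unicast vs multicast. The 5 Mbps rate is invented.
STREAM_MBPS = 5

def unicast_load_mbps(viewers: int) -> int:
    # Unicast: the source sends one copy per viewer.
    return STREAM_MBPS * viewers

def multicast_load_mbps(viewers: int) -> int:
    # Multicast (best case): the network replicates the stream,
    # so the source sends a single copy regardless of audience.
    return STREAM_MBPS if viewers > 0 else 0

print(unicast_load_mbps(1000))    # 5000 Mbps from the source
print(multicast_load_mbps(1000))  # 5 Mbps from the source
```

In practice a multicast deployment still carries some unicast (unpopular channels, unmanaged networks), so real savings land between these two extremes.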

Here, the panelists look at the benefits of combining multicast distribution of live video with techniques to allow it to change bitrate between different quality streams.

This type of live streaming is actually backwards compatible with old-style STBs: since the video sent is a live transport stream, it’s possible to deliver it to a legacy STB using a converter in the home at the same time as delivering a better, more modern stream to other TVs and devices.

It also allows pure-streaming providers to compete with conventional broadcast cable providers, and can result in cost savings both in the equipment deployed and in the bandwidth used.

There’s lots to unpack here, which is why the Streaming Video Alliance have put together this panel of experts.

Watch now and find out more!


Phillipe Carol
Senior Product Manager,
Neil Geary
Technical Strategy Consultant,
Liberty Global
Brian Stevenson
VP of Ecosystem Strategy & Partnerships,
Mark Fisher
VP of Marketing & Business Development,
Jason Thibeault
Executive Director,
Streaming Video Alliance