Video: Encoding and packaging for DVB-I services

There are many ways of achieving a hybrid of OTT-delivered and broadcast-delivered content, but they are not necessarily interoperable. With DVB-I, DVB aims to solve the interoperability issue along with the problem of service discovery. This specification was developed to bring linear TV over the internet up to the standard of traditional broadcast in terms of both video quality and user experience.

DVB-I supports any device with a suitable internet connection and media player, including TV sets, smartphones, tablets and media streaming devices. The medium itself can still be satellite, cable or DTT, but services are encapsulated in IP. Where both broadband and broadcast connections are available, devices can present an integrated list of services and content, combining both streamed and broadcast services.

The DVB-I standard relies on three components developed separately within DVB: low-latency operation, multicast streaming and advanced service discovery. In this webinar, Rufael Mekuria from Unified Streaming focuses on the low-latency distributed workflow for encoding and packaging.


The process starts with an ABR (adaptive bit rate) encoder responsible for producing streams at multiple bit rates with clean segmentation – this allows clients to automatically choose the best video quality depending on available bandwidth. The next step is packaging, where streaming manifests are added and content encryption is applied; the data is then distributed through origin servers and CDNs.
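The client-side half of that arrangement can be sketched in a few lines: given a measured bandwidth, the player picks the highest rung of the bitrate ladder that fits. The ladder values and the 0.8 safety margin below are invented for illustration, not taken from the webinar.

```python
# Illustrative sketch of ABR rendition selection on the client.
LADDER = [400_000, 800_000, 1_600_000, 3_200_000, 6_000_000]  # bits/s

def pick_rendition(measured_bps: int, safety: float = 0.8) -> int:
    """Return the highest ladder bitrate within a safety margin of the
    measured bandwidth, falling back to the lowest rung."""
    budget = measured_bps * safety
    fitting = [r for r in LADDER if r <= budget]
    return max(fitting) if fitting else LADDER[0]
```

For example, a measured 2 Mbit/s leaves a budget of 1.6 Mbit/s, so the 1.6 Mbit/s rendition is chosen; anything under 500 kbit/s falls back to the lowest rung.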

Rufael explains that low latency mode is based on an enhancement to the DVB-DASH streaming specification known as DVB Bluebook A168. This incorporates the chunked transfer encoding of MPEG CMAF (Common Media Application Format), developed to enable co-existence between the two principal flavours of adaptive bit rate streaming: HLS and DASH. Chunked transfer encoding is a compromise between segment size and encoding efficiency (shorter segments make it harder for encoders to work efficiently). The encoder splits each segment into groups of frames, none of which requires a frame from a later group for decoding. The DASH packager then puts each group of frames into a CMAF chunk and pushes it to the CDN. DVB claims this approach can cut end-to-end stream latency from a typical 20-30 seconds down to 3-4 seconds.
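Some back-of-envelope arithmetic shows why the claimed reduction is plausible. If a player buffers a few delivery units before starting, the unit being a whole segment versus a small chunk makes all the difference; the 6-second segments, 0.5-second chunks and buffer depth below are illustrative assumptions, not figures from the spec.

```python
def approx_live_latency(seg_dur_s: float, chunk_dur_s: float,
                        chunked: bool, buffer_units: int = 3) -> float:
    """Rough live-edge latency estimate: the player buffers a few
    delivery units before playing. With whole-segment delivery the
    unit is a segment; with CMAF chunked transfer it is a chunk, so
    the same buffer depth costs far less time."""
    unit = chunk_dur_s if chunked else seg_dur_s
    return unit * (buffer_units + 1)  # +1 unit still in flight from the encoder
```

With 6-second segments this gives roughly 24 seconds unchunked versus 2 seconds with 0.5-second chunks, in the same ballpark as DVB's 20-30 versus 3-4 second figures.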

The other topics covered are: encryption (exchanging key parameters using CPIX), content insertion, metadata, supplemental descriptors, TTML subtitles and MPD proxy.

Watch now!

Download the slides.


Rufael Mekuria
Head of Research & Standardization
Unified Streaming

Video: Doing Server-Side Ad Insertion on Live Sports for 25.3M Concurrent Users

Some services deliver ads by having the client insert them; others insert the ads at the server end. The choice of which to use requires knowing your customers and how they are most likely to receive your streams. With the prevalence of ad blockers, businesses find that many customers never see client-side inserted ads. Inserting ads at the server gets around this, as even the ads look like a continuation of the same video feed.

The downside of server-side ad insertion (SSAI), whilst it renders the ads unblockable, is that it restricts the ads you can place. Theoretically, with client-side ad insertion, each user can have their own advert. With SSAI, to do that you would need to create a new stream per user, which becomes much more computationally hungry. The sweet spot lies between the two, where viewers are grouped into categories so that only a few tens of streams, for example, are needed to match ten demographics identified to advertisers. This is known as 'dynamic SSAI'.
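The grouping step can be sketched very simply: every viewer maps to one of a small number of cohorts, and one stream variant is generated per cohort rather than per viewer. The cohort names and matching rules below are invented for illustration; a real system would derive them from advertiser demographics.

```python
# Minimal sketch of dynamic-SSAI cohort assignment (illustrative rules).
COHORTS = [
    ("sports-mobile", lambda u: u.get("device") == "mobile"
                                and "sports" in u.get("interests", [])),
    ("family", lambda u: u.get("household") == "family"),
]

def cohort_for(user: dict) -> str:
    """Return the first matching cohort; unmatched viewers share a
    default stream, keeping the total number of variants small."""
    for name, matches in COHORTS:
        if matches(user):
            return name
    return "default"
```

However many million viewers connect, the ad server only ever has to personalise as many streams as there are cohorts.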

Ashutosh Agrawal took to the stage at the Demuxed SF 2019 conference to explain how Hotstar used dynamic SSAI to deliver targeted ads to their 25 million viewers. As an example of how understanding your viewers drives your choice of ad-delivery technology, Ashutosh explains that close to 85% of their viewing is on mobile, and much of that with marginal reception. In hostile network conditions, requiring the player to download ads in the background doesn't work well: the network can only just about support the live video, so a background download pushes the ABR quality down and can even cause pausing and rebuffering. It's for this reason that Hotstar decided that server-side was the way to go.

Ashutosh takes us through how Hotstar approached this large event. In India, cricket is a very popular game which lasts for up to 8 hours a day. This gives rise to a large number of breaks, over 100, which add up to over an hour's advertising in total, so it's clear to see why this is a massive opportunity for optimisation. Static ad insertion reacts to inserted SCTE 35 markers. This can work well in the sense that for a 40-second marker, the platform can add an approximately 40-second ad or two 20-second ads. However, it isn't flexible enough to deal with the times when there are far more people watching than the ad agency has paid for, which means that Hotstar would end up delivering more viewers than necessary. It would be better for those extra viewers to see a different ad, triggered by the same SCTE 35 marker.
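The duration-fitting part of that decision can be sketched as a simple packing problem: fill the break signalled by the SCTE 35 splice without overrunning it. A real ad decision server also weighs campaigns and targeting; this illustrative sketch shows only the greedy duration fit.

```python
def fill_break(break_s: int, inventory: list[int]) -> list[int]:
    """Greedily pack ad durations (seconds) into an ad break,
    longest first, never exceeding the break duration."""
    pod, remaining = [], break_s
    for ad in sorted(inventory, reverse=True):
        if ad <= remaining:
            pod.append(ad)
            remaining -= ad
    return pod
```

A 40-second break with a 40-second spot available takes the single spot; with only 20-second spots available it takes two of them. (Greedy packing is not always optimal, which is one reason real ad servers are more sophisticated.)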

As discussed above, doing SSAI for each person is a scalability and cost nightmare, so we quickly see that targeted SSAI is the way forward. This allows different cohorts of users to be identified, with each cohort receiving its own virtual feed with its own adverts. We then see the architecture of the system, showing how the CDN is used. For scaling, we see that they use a cache rather than a database.

Nginx then gets a namecheck as Ashutosh explains how they provide caching, including an nginx memory cache, to deal with up to 50% of the overall load, shared with the CDN if necessary. He finishes with a look at the best practices they have learnt and what Ashutosh sees as the future for this technique.

Watch now!

Ashutosh Agrawal
Evangelist/Architect – CTO’s Office,

Webinar: Multicast ABR opens the door to a new DVB era

Now available on demand

With video delivery constituting the majority of traffic, it's clear there's a big market for it. On the internet, this is done with unicast streaming, where the stream source has to send a separate stream to each receiver. The way this has been implemented over HTTP allows for a very natural system, called Adaptive Bit Rate (ABR), which means that even when your network capacity is constrained (by the network itself or by bandwidth contention), you can still get a picture, just at a lower bit rate.

But when extrapolating this system to linear television, we find that large audiences place massive demands on the originating infrastructure. This load drives its architects to implement a lot of redundancy, making it expensive to run. Within a broadcaster's own network, such loads would be dealt with by multicast traffic, but on the internet multicast is not enabled. In an IPTV system where each employee has access via a program on their PC and/or a set-top box on their desk, the video is sent by multicast, meaning it is the network that provides the duplication of the streams to each endpoint, not the source.

By combining existing media encoding and packaging formats with the efficiency of point-to-multipoint distribution to the edge of IP-based access networks, it is possible to design a system for linear media distribution that is both efficient and scalable to very large audiences, while remaining technically compatible with the largest possible set of already-deployed end user equipment.

This webinar by Guillaume Bichot, which took the place of his talk at the cancelled DVB World 2020 event, explains DVB's approach to doing just that: combining multicast distribution of content with delivery of an ABR feed, known as DVB-mABR.

Video broadcast has been digitised more than once since its initial broadcasts in the 30s. In Europe, we have seen IP carriage (IPTV) services and, most recently, the hybrid approach where broadband-delivered content is merged with transmitted content with the aim of delivering a unified service to the viewer, called HbbTV. Multicast ABR (mABR) defines the carriage of Adaptive Bit Rate video formats and protocols over a broadcast/multicast feed. Guillaume explains the mABR architecture and then looks at the deployment possibilities and what the future might hold.

mABR comprises a multicast server at the video headend. This server, or transcaster, receives standard ABR feeds and encapsulates them into multicast before sending. The decoder does the opposite, removing the multicast headers to reveal the ABR underneath. It's not uncommon for mABR to be combined with HTTP unicast, allowing unicast to pick up the less popular channels while the main services benefit from multicast.
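The encapsulation step can be sketched as splitting each ABR segment into datagram-sized pieces for the multicast send. The 4-byte header (segment id plus sequence number) and the 1400-byte payload size below are invented for illustration; DVB-mABR specifies its own framing.

```python
MTU_PAYLOAD = 1400  # conservative payload size per multicast datagram (assumed)

def packetise(segment: bytes, segment_id: int) -> list[bytes]:
    """Split one ABR segment into numbered datagrams, sketching the
    encapsulation a transcaster performs before the multicast send.
    The header layout here is hypothetical."""
    packets = []
    for seq, off in enumerate(range(0, len(segment), MTU_PAYLOAD)):
        header = segment_id.to_bytes(2, "big") + seq.to_bytes(2, "big")
        packets.append(header + segment[off:off + MTU_PAYLOAD])
    return packets
```

The gateway at the receiving end does the inverse: strip the headers, reorder by sequence number, and hand the reassembled segment to an ordinary ABR player over HTTP.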

Guillaume explores these topics plus whether mABR saves bit rate, how it’s deployed and how it can change in the future to keep up with viewers’ requirements.

Watch now on demand!

Guillaume Bichot
Principal Engineer, Head of Exploration

Video: CMAF and DASH-IF Live ingest protocol

Of course, without live ingest of content into the cloud there is no live streaming, so why would we leave such an important piece of the puzzle to an unsupported protocol like RTMP, which has no official support for newer codecs? Whilst there are plenty of legacy workflows that still successfully use RTMP, there are clear benefits to be had from a modern ingest format.

Rufael Mekuria from Unified Streaming introduces us to DASH-IF's CMAF-based live ingest protocol, which promises to solve many of these issues. It is based on the ISO BMFF container format which underpins MPEG DASH. Whilst CMAF isn't intrinsically low-latency, it's able to go to much lower latencies than standard HLS, for instance.

This work to create a standard live ingest protocol was born, Rufael explains, out of an analysis of which parts of the content delivery chain were most ripe for standardisation. Live ingest was an obvious choice, partly because the decaying RTMP protocol was being replaced piecemeal by individual companies doing their own thing, but also because everyone contributing in the same way is of general benefit to the industry. At the protocol level, this is not typically an area where individual vendors differentiate to the detriment of interoperability, and we had already seen the success of RTMP being used interoperably between vendors' equipment.

MPEG DASH and HLS can be delivered by pull as well as push, but the push method is not specified. There are other aspects of how people have 'rolled their own' which benefit from standardisation too, such as timed metadata like ad triggers. Rufael, explaining that the proposed ingest protocol is essentially CMAF plus HTTP POST with no manifest defined, shows us how push and pull streaming would work. As this is a standardisation project, Rufael takes us through the timeline of development and publication of the standard, which is now available.
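Since each POST body in the push mode is a CMAF fragment, it is worth recalling the ISO BMFF building block underneath: a box with a 32-bit size, a 4-byte type and a payload. The sketch below serialises that structure; the fragment it builds uses dummy payloads, not a decodable movie fragment.

```python
import struct

def mp4_box(box_type: bytes, payload: bytes) -> bytes:
    """Serialise a basic ISO BMFF box: 32-bit big-endian size
    (including the 8-byte header), 4-byte type, then the payload."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# A CMAF media fragment is a moof box followed by an mdat box; the
# payload bytes here are placeholders for illustration only.
fragment = mp4_box(b"moof", b"\x00" * 16) + mp4_box(b"mdat", b"\xff" * 32)
```

In the push mode, the init segment (ftyp and moov boxes) is POSTed first, then fragments like the one above follow on the same track URL as they are encoded.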

As we live in the modern world, ingest security has been considered: the protocol comes with TLS and authentication, with more details covered in the talk. Ad insertion such as SCTE 35 is defined using binary mode, and Rufael shows slides to demonstrate. Similarly, in terms of ABR, we look at how switching sets work: these are sets of tracks containing different representations of the same content that a player can seamlessly switch between.

Watch now!

Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming