Video: Implementing standards-based Targeted Advertising on broadcast television

Last month, we featured DVB’s Targeted Advertising solution, DVB-TA. In that article, we saw the motivations for moving to targeted advertising and how it is in use at Spain’s Atresmedia. Today’s video follows on from that introduction to DVB-TA with a range of speakers talking about implementation methods, interoperability and the benefits of standardisation.

DVB’s Emily Dubs introduces the presenters, starting with Nicolas Guyot from ENENSYS who speaks on the subject of the media value chain and TV’s reach. In Europe, the weekly reach of broadcast TV is still high at around 77%, meaning the medium still has strength. In terms of getting targeted ads on there, however, only a subset of devices can be used. In France, 36% of homes have a smart TV. Whilst this is a minority, it still equates to ten million TV sets. We hear about how France TV trialled targeted advertising for HbbTV, collecting consent and data which they used to segment people into four categories: finance, health, family and weather. These categories were used to place ads in front of the viewer. Looking to scale this out, standardisation was seen as important to ensure ad placement, as well as measurement metrics, was well understood by all equipment.



Angelo Pettazzi from Mediaset makes the case next for standardisation. For Mediaset, moving to Targeted Advertising is a strategic move and mirrors the points made in the first video focusing on the need to keep TV advertising in line with what advertisers are looking for. In short, TA will maintain the relative value of their advertising slots. There are other benefits, however, such as more readily opening up advertising to local businesses and SMEs by making lower-cost slots available.

Standards feature heavily for Mediaset. They have 4 million HbbTVs active monthly on their platform, which simply wouldn’t have been possible without the HbbTV 2.0 standard in the first place. Using these devices they had previously tried a proprietary TA technology based on HTML5, but they found it didn’t always work well and the switching time could vary. They see the TA spec as a move towards more confidence in products, along with the ability to substitute a single ad, a whole contiguous block, or make multiple substitutions in the same break.

Joe Winograd from Verance talks next about the use of watermarking for targeted advertising. Advert timing and other signalling are usually carried separately to the media as SCTE-104, -35 or -224. However, there are times when a distribution chain is not yet compatible with this separate signalling. Linear advert substitution is usually done on the device, though, so by embedding this same signalling information within the audio and/or video feeds themselves, the receiving box is able to decode the embedded data and insert the ads as desired. Modifying video/audio data to carry messages is called watermarking and usually refers to the practice of marking a feed to uniquely identify it for the purposes of crime prevention. This method, however, is designed to carry dynamic data and is defined by the ATSC in their standards A/334, A/335 and A/336.

Pascal Jezequel from Harmonic speaks next about dynamic ad insertion interoperability. His main point is that if we’re to be inserting ads in a world of linear broadcast, OTT and streaming, we should have one standard which covers them all. We need a detailed standard that allows precise, frame-accurate timing with smooth transitions. DVB-TA and HbbTV-TA initially focussed only on broadcast but are now being extended to cover streaming services provided over broadband. This interoperability will be a boost for operators and broadcasters.

Last in the video is Unified Streaming’s Rufael Mekuria who briefly explains the work that DVB is doing within DVB-TA but also within DVB-DASH. Having DVB involved helps with liaisons, which is proving critical in ensuring that SCTE 35 is compatible with DVB-DASH. This work is in progress. Additionally, DVB is working with MPEG on CMAF and is also liaising with DASH-IF.

The panel ends with a Q&A.

Watch now!

Nicolas Guyot
Product Manager,
ENENSYS Technologies
Dror Mangel
Product Manager,
Angelo Pettazzi
Mediaset Group
Joe Winograd
Verance
Pascal Jezequel
DTV Global Solution Architect,
Harmonic
Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming
Emily Dubs
Head of Technology,
DVB Project

Video: CMAF Live Media Ingest Protocol Masterclass

We’ve heard before on The Broadcast Knowledge about CMAF’s success at bringing down the latency for live streaming to around 3 seconds. CMAF is standards-based and works with Apple devices, Android, Windows and much more. And while that’s gaining traction for delivery to the home, many are asking whether it could be a replacement technology for contribution into the cloud.

Rufael Mekuria from Unified Streaming has been working on bringing CMAF to encoders and packagers. All the work in the DASH Industry Forum has centred around two key points in the streaming architecture. The first is between the output of the encoder and the input of the packager, the second between the packager and the origin. This work has been ongoing for over a year and a half, so let’s pause to ask why we need a new protocol for ingest.



RTMP and Smooth Streaming have not been deprecated, but they have not been specified to carry the latest codecs, and while people have been trying to find alternatives, they have started to use fragmented MP4 and CMAF-style technologies for contribution in their own, non-interoperable ways. Push-based DASH and HLS are common but in need of standardisation, and in the same work, support for timed metadata such as splice information for ads could be addressed.

The result of the work is a method of using a separate TCP connection for each essence track; there is a POST command for each subtitle stream, metadata track, video stream and so on. This can be done with fixed-length POSTs, but is better achieved with chunked transfer encoding.
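As a sketch of what this looks like on the wire, the function below builds an HTTP/1.1 chunked body from a stream of fragments; the ingest paths in the comment are illustrative, not taken from the specification.

```python
def chunked_encode(fragments):
    """Encode an iterable of byte fragments as an HTTP/1.1 chunked body.

    Each chunk is its length in hex, CRLF, the data, CRLF; the body
    terminates with a zero-length chunk. Chunked transfer encoding lets
    a packager POST CMAF fragments as they are produced, without
    knowing the total length up front."""
    body = bytearray()
    for frag in fragments:
        body += f"{len(frag):X}\r\n".encode("ascii")
        body += frag + b"\r\n"
    body += b"0\r\n\r\n"  # final zero-length chunk ends the body
    return bytes(body)

# One long-lived POST per essence track (hypothetical paths):
#   POST /ingest/channel1/video_500k.cmfv
#   POST /ingest/channel1/audio_en.cmfa
#   POST /ingest/channel1/subs_en.cmft
```

Because each track travels on its own TCP connection, a slow or failed track does not stall the others.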

Rufael next shows us an example of a CMAF track. Based on the ISO BMFF standard, CMAF specifies which ‘boxes’ can be used, and provides for optional boxes which would be used in the CMAF fragments. Time is important, so it is carried as a base media decode time, a unix-style timestamp that can be inserted into both the fragment and the CMAF header.
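To make the box structure concrete, here is a minimal sketch assuming the timestamp is carried in a version-1 ‘tfdt’-style full box as a 64-bit value, following the usual ISO BMFF layout (size, type, payload); treat the field layout as illustrative rather than normative.

```python
import struct
import time

def make_box(box_type: bytes, payload: bytes) -> bytes:
    """An ISO BMFF box: 32-bit big-endian size, 4-char type, payload."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

def make_decode_time_box(decode_time: int) -> bytes:
    """A version-1 full box carrying a 64-bit baseMediaDecodeTime,
    in the style of the 'tfdt' box found in a CMAF fragment."""
    payload = struct.pack(">B3xQ", 1, decode_time)  # version=1, flags=0
    return make_box(b"tfdt", payload)

# A unix-epoch-derived timestamp, as the talk describes:
box = make_decode_time_box(int(time.time()))
```

Boxes nest to form a full fragment (`moof` containing `traf` containing `tfdt`, then `mdat`), but the size-type-payload pattern above is the same at every level.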

With all media being sent separately, the standard provides a way to define groups of essences both implicitly and explicitly. Redundancy and hot failover have also been provided for: with multiple sources ingesting to multiple origins, identical fragments can be detected using timestamp synchronisation.
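A hedged sketch of how an origin might discard duplicates from redundant sources; keying on track name plus decode time is my illustration of the idea, not wording from the specification.

```python
def filter_duplicates(incoming):
    """Keep the first copy of each fragment arriving from redundant
    encoder/packager sources; drop later identical ones.

    `incoming` yields (source, track, decode_time, data) tuples. Because
    both sources stamp fragments with the same base media decode time,
    (track, decode_time) identifies a fragment regardless of which
    source delivered it."""
    seen = set()
    for source, track, decode_time, data in incoming:
        key = (track, decode_time)
        if key in seen:
            continue  # duplicate already received from the other source
        seen.add(key)
        yield track, decode_time, data
```

If the primary source fails mid-stream, the backup’s fragments simply stop being duplicates and flow through with no gap in decode times.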

The additional timed metadata track is based on the ISO BMFF standard and can be fragmented just like other media. This work has extended the standard to allow carrying the DASH EventMessageBox in the timed metadata track in order to reuse existing specifications like ID3 and SCTE 214 for carrying SCTE 35 messages.
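As an illustration of the idea, a version-1 DASH EventMessageBox (`emsg`) can be serialised as below; the SCTE 35 scheme URI shown is the one SCTE 214 commonly uses, but the field layout here is a sketch, so check the DASH and SCTE 214 texts for the normative details.

```python
import struct

def make_emsg_v1(timescale, presentation_time, duration, event_id,
                 scheme_id_uri, value, message_data):
    """Serialise a version-1 DASH EventMessageBox ('emsg')."""
    payload = struct.pack(">B3x", 1)  # version=1, flags=0
    payload += struct.pack(">IQII", timescale, presentation_time,
                           duration, event_id)
    payload += scheme_id_uri.encode("utf-8") + b"\x00"  # null-terminated
    payload += value.encode("utf-8") + b"\x00"
    payload += message_data  # e.g. the binary SCTE 35 splice message
    return struct.pack(">I", 8 + len(payload)) + b"emsg" + payload

# Hypothetical 3-second splice event at a 90kHz timescale; the payload
# is a truncated placeholder (real SCTE 35 messages start with 0xFC).
emsg = make_emsg_v1(90000, 900000, 270000, 1,
                    "urn:scte:scte35:2013:bin", "", b"\xfc")
```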

Rufael finishes by explaining how SCTE messages are inserted with reference to IDR frames and outlines how the DASH/HLS ingest interface between the packager and origin server works as well as showing a demo.

Watch now!

Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming

Video: Don’t let latency ruin your longtail: an introduction to “dref MP4” caching

So it turns out that simply having an .mp4 file isn’t enough. MP4s work well for low-latency streaming, but for very fast start times there’s optimisation work to be done.

Unified Streaming’s Boy van Dijk refers to how MP4s are put together (AKA ISO BMFF) to explain how just restructuring the data can speed up your time-to-play.

Part of the motivation to optimise is the financial motivation to store media on Amazon’s S3 which is relatively cheap and can deal with a decent amount of throughput. This costs latency, however. The way to work around this, explains Boy, is to bring the metadata out of the media so you can cache it separately and, if possible, elsewhere. Within the spec is the ability to bring the index information out of the original media and into a separate file called the dref, the Data Reference box.

Boy explains that by working statelessly, we can see why latency is reduced. Typically three requests would be needed, but we can save two of those if we just make one; moreover, stateless architectures scale better.

The longtail of your video library is affected most by this technique as it is, by proportion, the largest part but gets the fewest requests. Storing the metadata closer, or in faster storage, can vastly reduce startup times. Dref files point to media data, allowing a system to bring that closer. For a just-in-time packaging system, drefs work as a middle-man. The beauty is that a dref for a film of many gigabytes is only a few tens of megabytes.
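A sketch of the principle, with an illustrative index format rather than the actual dref box layout: a small, cacheable index maps each fragment to a byte range in the large media file, so a just-in-time packager can fetch exactly what it needs from slow storage with an HTTP Range request.

```python
def range_header(index, fragment_number):
    """Given a cached index of (offset, size) entries for a media file,
    build the HTTP Range header that fetches one fragment from remote
    storage such as S3."""
    offset, size = index[fragment_number]
    return {"Range": f"bytes={offset}-{offset + size - 1}"}

# The index is tiny compared with the media it describes, so it can
# live in fast storage close to the packager while the media stays
# in cheap object storage. Offsets and sizes here are made up.
index = [(0, 1_000_000), (1_000_000, 950_000), (1_950_000, 990_000)]
```

Only the first request for a title pays the full round-trip to object storage for the index; every later request goes straight to the byte range it needs.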

Unified Origin, for different tests, saw reductions of 1160ms to 15ms, 185ms to 13ms and 240ms to 160ms, depending on what exactly was being tested, which Boy explains in more detail in the talk. Overall they have shown that there’s a non-trivial improvement in startup delay.

Watch now!
Download a detailed presentation

Boy van Dijk
Streaming Solutions Engineer,
Unified Streaming

Video: CMAF and DASH-IF Live ingest protocol

Of course, without live ingest of content into the cloud, there is no live streaming, so why would we leave such an important piece of the puzzle to an unsupported protocol like RTMP, which has no official support for newer codecs? Whilst there are plenty of legacy workflows that still successfully use RTMP, there are clear benefits to be had from a modern ingest format.

Rufael Mekuria from Unified Streaming introduces us to DASH-IF’s CMAF-based live ingest protocol, which promises to solve many of these issues. It’s based on the ISO BMFF container format which underpins MPEG DASH. Whilst CMAF isn’t intrinsically low-latency, it’s able to get to much lower latencies than standard HLS and LHLS.

This work to create a standard live-ingest protocol was born out of an analysis, Rufael explains, of which parts of the content delivery chain were most ripe for standardisation. It was felt that live ingest was an obvious choice, partly because of the decaying RTMP protocol which was being sloppily replaced by individual companies doing their own thing, but also because everyone contributing in the same way is of general benefit to the industry. At the protocol level, this is not typically an area where individual vendors differentiate to the detriment of interoperability, and we’ve already seen the success RTMP had in being used interoperably between vendors’ equipment.

MPEG DASH and HLS can be delivered in a pull method as well as pushed, but the latter is not specified. There are other aspects of how people have ‘rolled their own’ which benefit from standardisation too, such as timed metadata like ad triggers. Rufael, explaining that the proposed ingest protocol is a version of CMAF plus HTTP POST where no manifest is defined, shows us the way push and pull streaming would work. As this is a standardisation project, Rufael takes us through the timeline of development and publication of the standard, which is now available.

As we live in the modern world, ingest security has been considered: it comes with TLS and authentication, with more details covered in the talk. Ad-insertion signalling such as SCTE 35 is defined using binary mode, and Rufael shows slides to demonstrate. Similarly, in terms of ABR, we look at how switching sets work. Switching sets are sets of tracks that contain different representations of the same content that a player can seamlessly switch between.
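As a simple illustration of the switching-set idea (the data model and bitrates here are hypothetical), a player picks whichever representation of the same content fits the measured bandwidth and can change its choice at any fragment boundary:

```python
def pick_representation(switching_set, bandwidth_bps):
    """Choose the highest-bitrate representation that fits the available
    bandwidth, falling back to the lowest if none fits. Every entry in a
    switching set carries the same content, so the player can swap
    between them seamlessly."""
    eligible = [r for r in switching_set if r["bitrate"] <= bandwidth_bps]
    if eligible:
        return max(eligible, key=lambda r: r["bitrate"])
    return min(switching_set, key=lambda r: r["bitrate"])

# A hypothetical three-rung video switching set:
switching_set = [
    {"name": "video_400k",  "bitrate": 400_000},
    {"name": "video_1200k", "bitrate": 1_200_000},
    {"name": "video_3500k", "bitrate": 3_500_000},
]
```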

Watch now!

Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming