SCTE-35 has long been used in TV to signal ad-break insertions and other events, and in recent years has been joined by SCTE-104 and SCTE-224. But how can SCTE-35 be used in live OTT, and what are the applications?
The talk starts with a look at what SCTE is and what SCTE-35 does – namely digital program insertion. It then moves on to the best-known, and original, use case: local ad insertion. This use case exists because ads are sold both nationally and locally, so whereas the national ads can be played from the playout centre, the local ads need to be inserted closer to the local transmitter.
Alex Zambelli, Principal Product Manager at Hulu, then explains the SCTE-35 message format along with its commands and descriptors, giving us an idea of what type of information can be sent and how it might be structured. Turning to OTT, Alex looks at SCTE-214, which defines how to signal SCTE-35 in DASH.
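As a rough illustration of the kind of signalling involved – the times, IDs and binary payload below are invented for the example – an in-manifest SCTE-35 cue in DASH is typically carried in a Period's EventStream, with the base64-encoded splice_info_section in the event body:

```xml
<!-- Hypothetical MPD fragment: an SCTE-35 cue carried in-manifest -->
<Period id="1" start="PT0S">
  <EventStream schemeIdUri="urn:scte:scte35:2014:xml+bin" timescale="90000">
    <!-- presentationTime marks the splice point; duration the break length -->
    <Event presentationTime="900000" duration="2700000" id="101">
      <scte35:Signal xmlns:scte35="http://www.scte.org/schemas/35/2016">
        <scte35:Binary>/DAlAAAAAAAAAP8AEAAI=</scte35:Binary>
      </scte35:Signal>
    </Event>
  </EventStream>
</Period>
```

Players, or a server-side ad-insertion service, can then act on the event at the signalled presentation time.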
For those who still use HLS rather than DASH, Alex looks at a couple of different ways of carrying the same signalling, with Apple, perhaps unsurprisingly, preferring a method different from the one recommended by SCTE.
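Apple's preferred approach uses the EXT-X-DATERANGE playlist tag, which can carry the raw SCTE-35 payload in its SCTE35-OUT and SCTE35-IN attributes. A hedged sketch – the ID, date and hex payload are invented for illustration:

```
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:6
#EXT-X-DATERANGE:ID="splice-101",START-DATE="2020-08-04T20:00:00Z",PLANNED-DURATION=30.0,SCTE35-OUT=0xFC302000000000000000FFF0
#EXTINF:6.0,
segment_0041.ts
```

The SCTE-recommended alternatives instead translate the cue into dedicated cue-out/cue-in tags, which is one of the differences Alex discusses.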
The talk finishes with a discussion of the challenges of using SCTE in OTT applications. See the slides.
Whether it’s to thwart ad blockers or to compensate for unreliable players, server-side ad insertion (SSAI) has an important role for many ad-based services. Phil Cluff is here to look at today’s difficulties and to peer into the future.
Talking at the August Seattle Video Tech meetup, Phil looks at how we got where we are and why SSAI came about in the first place. He then examines the manifest-manipulation method of doing this before assessing how well OTT devices actually support it, showing inconsistent support for DRM in DASH and HLS. Smart TVs are a big obstacle to delivering a consistent viewing experience since they all differ, and even the new models being delivered into the market now are few compared to the installed base of 5+ year-old TVs.
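To sketch what manifest manipulation means in practice – segment names here are invented – the SSAI service rewrites each viewer's playlist so that ad segments replace content segments for the duration of the break, with discontinuity tags marking the joins where encoding parameters may change:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
content_0041.ts
#EXT-X-DISCONTINUITY
#EXTINF:6.0,
ad_break1_0001.ts
#EXTINF:6.0,
ad_break1_0002.ts
#EXT-X-DISCONTINUITY
#EXTINF:6.0,
content_0042.ts
```

Because only the playlist changes, the media segments can still be served from a standard CDN, which is part of the approach's appeal – and its device-support headaches.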
One solution to levelling the playing field is to distribute Chromecasts, which works fairly well in allowing any device to become a streaming device. Another option is server-side stitching, meaning the video stream itself has the advert in it; one problem with this approach is that it’s impractical to target individual users. HbbTV and ATSC 3.0 are other ways to deliver adverts to the television.
Beacons are a way for players to signal back to the ad networks that adverts were actually shown, so Phil takes a look at how these will change as time moves on before opening up to questions from the floor.
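In essence, a beacon is just a tracking URL the player fetches when a playback milestone is reached. A minimal sketch – the ad-server hostname and parameter names are invented for illustration:

```python
# Hedged sketch of how a player might construct a tracking beacon for a
# playback event (e.g. an ad's first quartile being reached). The hostname
# and query parameters are hypothetical, not any real ad network's API.
from urllib.parse import urlencode

def beacon_url(base: str, ad_id: str, event: str, position_s: float) -> str:
    """Build a tracking-beacon URL for a given ad playback event."""
    query = urlencode({"ad": ad_id, "event": event, "pos": f"{position_s:.1f}"})
    return f"{base}?{query}"

url = beacon_url("https://ads.example.com/track", "ad-42", "firstQuartile", 7.5)
# A real player would issue an HTTP GET to this URL and ignore the response;
# here we only show the constructed URL.
print(url)
```

With SSAI, firing these client-side beacons reliably is one of the difficulties Phil alludes to, since the server, not the player, knows where the ads sit in the stream.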
As HDR continues its slow march into use, its different forms both in broadcast and streaming can be hard to keep track of and even differentiate. This talk from the Seattle Video Tech meetup aims to tease out these details.
Brian Alvarez from Amazon Prime Video starts with a very brief look at how HDR has been designed to sit on top of the existing distribution formats: HLS, DASH, HEVC, VP9, AV1, ATSC 3.0 and DVB. It does this in forms based on either HLG or PQ.
Brian takes some time to discuss the differences between the two approaches to HDR. First, he looks at HLG, an ARIB standard which is freely available, though still subject to licencing. This standard is, technically, backwards compatible with SDR but, most importantly, doesn’t require metadata, which is a big benefit in the live environment and simplifies broadcast. PQ is next: we hear how its approach differs from HLG’s, and Brian suggests it gives better visual performance. In the PQ ecosystem, Brian works through the many standards, explaining how they differ; we see that the main differences are in colour space and bit depth.
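To make the PQ side of the comparison concrete, the PQ transfer function (standardised as SMPTE ST 2084) maps absolute luminance up to 10,000 nits onto signal values. A minimal sketch of the encoding direction, using the constants from the published standard:

```python
# Sketch of the PQ (SMPTE ST 2084) inverse EOTF: absolute linear light
# in, signal value out. Constants are the exact values from the standard.
def pq_encode(luminance_nits: float) -> float:
    """Map absolute luminance (0..10000 cd/m^2) to a PQ signal value (0..1)."""
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    y = luminance_nits / 10000   # normalise to the 10,000-nit PQ peak
    num = c1 + c2 * y ** m1
    den = 1 + c3 * y ** m1
    return (num / den) ** m2

# 10,000 nits encodes to exactly 1.0, while SDR reference white (100 nits)
# lands at roughly 0.51 - a glimpse of how PQ allocates its code values.
print(round(pq_encode(10000), 3), round(pq_encode(100), 3))
```

HLG, by contrast, is a relative (scene-referred) curve, which is part of why it can stay metadata-free.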
The next part of the talk looks at the now-famous Venn diagrams showing which companies and products support each variant of HDR. These let us visualise and understand the adoption of HDR10 vs HLG, for instance, see how much broadcast TV is in PQ and HLG, see that the film industry is producing exclusively in PQ, and much more. Brian comments on and gives context to each of the scenarios as he goes.
Finally, a Q&A session covers displays, end-to-end metadata flow, whether customers can tell the difference, the drive for HDR adoption, and monitors for grading HDR.
Continuing our look at ATSC 3.0, our fifth talk straddles technical detail and basic business cases. We’ve seen talks on implementation experience such as in Chicago and Phoenix and now we look at receiving the data in open source.
We’ve covered before the importance of ATSC 3.0 in the North American markets and the others that are adopting it. Jason Justman from Sinclair Digital states the business cases and reasons to push for it despite it being incompatible with previous generations. He then discusses what Software Defined Radio is and how it fits into the puzzle, covering the early state of this technology.
After a brief overview of the RF side of ATSC 3.0, itself a leap forward, Jason explains how the video layer benefits. Building on ISO BMFF, Jason introduces MMT (MPEG Media Transport), explaining what it is and why it’s used for ATSC 3.0.
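ISO BMFF itself is a simple nested box structure – each box starting with a 32-bit size and a four-character type – which is part of what makes it practical to build on. A hedged sketch (this is not libatsc3's API, just an illustration of the container layout) of walking the top-level boxes:

```python
# Illustrative sketch, not libatsc3: iterate the top-level boxes of an
# ISO BMFF byte stream. Each box begins with a 4-byte big-endian size
# (which includes the 8-byte header) and a 4-character type code.
import struct

def iter_boxes(data: bytes):
    """Yield (box_type, payload) for each top-level ISO BMFF box."""
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:   # size 0 (to end of file) and 1 (64-bit) not handled here
            break
        yield box_type.decode("ascii"), data[offset + 8 : offset + size]
        offset += size

# Minimal hand-built example: an 'ftyp' box followed by an empty 'moov'.
sample = struct.pack(">I4s4sI", 16, b"ftyp", b"mp41", 0) + struct.pack(">I4s", 8, b"moov")
print([box_type for box_type, _ in iter_boxes(sample)])
```

The same size-plus-fourcc pattern recurses inside container boxes such as `moov`, which keeps parsers like this small.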
The next section of the talk showcases libatsc3, an open-source library whose goal is to open up ATSC 3.0 to talented software engineers, which Jason demos. The library allows for live decoding of ATSC 3.0, including MMT material.
He finishes his talk with a Q&A covering, among other things, SCTE-35 and an interesting comparison between DVB-T2 and ATSC 3.0, making this a very useful talk that fills in technical gaps no other ATSC 3.0 talk covers.