Video: WAVE (Web Application Video Ecosystem) Update

With a wide membership including Apple, Comcast, Google, Disney, Bitmovin, Akamai and many others, the WAVE interoperability effort is tackling web media encoding, playback and platform issues using global standards.

John Simmons from Microsoft takes us through the history of WAVE, looking at the changes in the industry since 2008 and WAVE's involvement in them. CMAF, backed by over 60 major companies, represents an important recent milestone and one that is closely entwined with WAVE's activity.

The WAVE Content Specification is derived from the ISO/IEC standard, “Common media application format (CMAF) for segmented media”. CMAF is the container for the audio, video and other content. It’s not a protocol like DASH, HLS or RTMP; rather, it’s more akin to an MPEG-2 transport stream. CMAF is attracting a lot of interest at the moment thanks to its ability to deliver low-latency streaming of less than 4 seconds, but it’s also important because it represents a standardisation of fMP4 (fragmented MP4) practices.
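
To make that concrete, here's a minimal sketch (in TypeScript, not from the talk) of walking the top-level ISO BMFF boxes of a CMAF segment; a media segment typically carries styp, moof and mdat boxes.

```typescript
// Sketch: list the top-level boxes (e.g. styp, moof, mdat) of a CMAF segment.
function listTopLevelBoxes(data: Uint8Array): { type: string; size: number }[] {
  const view = new DataView(data.buffer, data.byteOffset, data.byteLength);
  const boxes: { type: string; size: number }[] = [];
  let offset = 0;
  while (offset + 8 <= data.byteLength) {
    let size: number = view.getUint32(offset); // 32-bit box size, includes the header
    const type = String.fromCharCode(
      data[offset + 4], data[offset + 5], data[offset + 6], data[offset + 7]
    );
    if (size === 1) {
      size = Number(view.getBigUint64(offset + 8)); // 64-bit "largesize" follows the type
    } else if (size === 0) {
      size = data.byteLength - offset; // box runs to the end of the data
    }
    boxes.push({ type, size });
    offset += size;
  }
  return boxes;
}
```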

Standardising on CMAF allows media profiles to be defined which specify how to encapsulate particular codecs (AV1, HEVC etc.) in the stream. Because it’s a published specification, vendors are able to interoperate. Proof of the value of the WAVE project is the three amendments to the CMAF standard, which John mentions, issued by MPEG as a direct result of WAVE’s work in validating user requirements.
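
Extending the box walk above, a player could check which media profile a segment claims by reading the brands in its leading styp (or ftyp) box. This is a hedged sketch: the brand strings to match against are supplied by the caller, since the full enumeration of profile brands lives in ISO/IEC 23000-19 itself.

```typescript
// Sketch: read the major and compatible brands from a leading styp/ftyp box,
// then check them against the CMAF media profile brands a player supports.
function readBrands(data: Uint8Array): string[] {
  const view = new DataView(data.buffer, data.byteOffset, data.byteLength);
  const fourcc = (o: number) =>
    String.fromCharCode(data[o], data[o + 1], data[o + 2], data[o + 3]);
  const size = view.getUint32(0);
  if (fourcc(4) !== "styp" && fourcc(4) !== "ftyp") return [];
  const brands = [fourcc(8)]; // major_brand; bytes 12..15 are minor_version
  for (let o = 16; o + 4 <= size; o += 4) brands.push(fourcc(o)); // compatible_brands
  return brands;
}

function supportsAnyProfile(data: Uint8Array, wanted: string[]): boolean {
  return wanted.some((b) => readBrands(data).includes(b));
}
```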

Whilst defining the streaming side is important in helping cloud vendors work together and allowing broadcasters to build systems more easily, it’s vital that decoder devices are on board too, so much of WAVE’s work goes into the device side of things.

On top of dealing with encoding and distribution, WAVE also specifies HTML5 API interoperability, with the aim of defining baseline web APIs to support media web apps and creating guidelines for media web app developers.
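
As a hedged illustration of the kind of baseline check this work implies, the sketch below probes two core HTML5 media APIs, MSE and EME; the codec string and key system are example values, not WAVE-mandated ones.

```typescript
// Sketch: does this device expose the media APIs a web media app relies on?
async function checkBaselineMediaApis(): Promise<void> {
  // Media Source Extensions: can we feed fMP4/CMAF content to the decoder?
  const mseOk =
    "MediaSource" in window &&
    MediaSource.isTypeSupported('video/mp4; codecs="avc1.640028"');

  // Encrypted Media Extensions: is a common key system available?
  let emeOk = false;
  if ("requestMediaKeySystemAccess" in navigator) {
    try {
      await navigator.requestMediaKeySystemAccess("com.widevine.alpha", [
        {
          initDataTypes: ["cenc"],
          videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
        },
      ]);
      emeOk = true;
    } catch {
      emeOk = false; // key system not available on this device
    }
  }

  console.log({ mseOk, emeOk });
}
```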

This talk was given at the Seattle Video Tech meetup.

Watch now!
Slides from the presentation
Check out the free CTA specs

Speaker

John Simmons
Media Platform Architect,
Microsoft

Video: SCTE-35 In-band Event Signalling in OTT

SCTE-35 has been used for a long time in TV to signal ad break insertions and other events, and in recent years has been joined by SCTE-104 and SCTE-224. But how can SCTE-35 be used in live OTT, and what are the applications?

The talk starts with a look at what SCTE is and what SCTE-35 does – namely, signalling digital program insertion. The talk then moves on to the best-known, and original, use case: local ad insertion. This arises because ads are sold both nationally and locally; whereas national ads can be played out from the playout centre, local ads need to be inserted closer to the local transmitter.

Alex Zambelli, Principal Product Manager at Hulu, then explains the SCTE-35 message format along with its commands and descriptors, giving us an idea of what type of information can be sent and how it is structured. Turning to OTT, Alex then looks at SCTE-214, which defines how to signal SCTE-35 in DASH.
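
As a rough illustration of that structure, the sketch below (not Alex's code) decodes a base64-carried splice_info_section just far enough to identify the splice command; the byte offsets follow the SCTE-35 section layout, and the payload would come from a real manifest cue.

```typescript
// Sketch: identify the splice command in a binary SCTE-35 splice_info_section.
const SPLICE_COMMANDS: Record<number, string> = {
  0x00: "splice_null",
  0x05: "splice_insert",
  0x06: "time_signal",
};

function spliceCommandName(base64Cue: string): string {
  // atob() in the browser; Buffer.from(base64Cue, "base64") in Node.
  const bytes = Uint8Array.from(atob(base64Cue), (c) => c.charCodeAt(0));
  if (bytes[0] !== 0xfc) throw new Error("not a splice_info_section (table_id != 0xFC)");
  // Bytes 1..12 carry section_length, protocol_version, pts_adjustment,
  // cw_index, tier and splice_command_length; the command type is byte 13.
  const commandType = bytes[13];
  return SPLICE_COMMANDS[commandType] ?? `unknown (0x${commandType.toString(16)})`;
}
```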

For those who use HLS rather than DASH, Alex looks at a couple of different ways of carrying this signalling, with Apple, perhaps unsurprisingly, preferring a different method from the one recommended by SCTE.
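
For illustration, here are two shapes such in-manifest signalling commonly takes in HLS playlists (not necessarily the exact pair discussed in the talk): Apple's EXT-X-DATERANGE carriage of the raw SCTE-35 payload, and the widely used CUE-OUT/CUE-IN convention. The attribute values are placeholders.

```typescript
// Apple's approach: an EXT-X-DATERANGE tag carrying the raw SCTE-35 payload.
const dateRangeStyle =
  '#EXT-X-DATERANGE:ID="splice-1",START-DATE="2019-08-01T18:00:00Z",' +
  "PLANNED-DURATION=60.0,SCTE35-OUT=0xFC30...";

// A widely used alternative: explicit cue-out/cue-in markers around the break.
const cueStyle = [
  "#EXT-X-CUE-OUT:DURATION=60",
  "... ad break segments ...",
  "#EXT-X-CUE-IN",
].join("\n");
```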

The talk finishes with a discussion of the challenges of using SCTE in OTT applications.
See the slides

Watch now!
Speaker

Alex Zambelli
Principal Product Manager,
Hulu

Video: The Future of SSAI on OTT Devices

Whether it’s to thwart ad blockers or to compensate for unreliable players, server-side ad insertion (SSAI) plays an important role for many ad-supported services. Phil Cluff is here to look at today’s difficulties and to peer into the future.

Speaking at the August Seattle Video Tech meetup, Phil looks at how we got to where we are and why SSAI came about in the first place. He then examines the manifest-manipulation method of ad insertion before seeing how well OTT devices actually support it, showing inconsistent support for DRM in DASH and HLS. Smart TVs are a big problem for delivering a consistent experience since they all differ, and the new models entering the market are few in number compared with the installed base of older, 5+ year-old TVs.
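
To illustrate the manifest-manipulation idea, here's a toy sketch that rewrites a per-session HLS media playlist, splicing hypothetical ad segment URIs in at a cue point. Real SSAI services also handle timing, discontinuity sequence numbers and per-user ad decisioning, none of which is shown here.

```typescript
// Toy sketch: insert hypothetical ad segments into a media playlist at a cue point.
function stitchAds(mediaPlaylist: string, adSegmentUris: string[]): string {
  const out: string[] = [];
  for (const line of mediaPlaylist.split("\n")) {
    out.push(line);
    if (line.startsWith("#EXT-X-CUE-OUT")) {
      out.push("#EXT-X-DISCONTINUITY"); // ad content has its own timeline
      for (const uri of adSegmentUris) {
        out.push("#EXTINF:6.0,", uri); // assume 6-second ad segments
      }
      out.push("#EXT-X-DISCONTINUITY"); // back to the content timeline
    }
  }
  return out.join("\n");
}
```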

One way of levelling the playing field is to distribute Chromecasts, which works fairly well in allowing any device to become a streaming device. Another option is server-side ad stitching, meaning the video stream itself has the advert stitched into it. One problem with this approach is that targeting individual users is impractical. HbbTV and ATSC 3.0 are other ways to deliver adverts to the television.

Beacons are the way players signal back to ad networks that adverts were actually shown, so Phil takes a look at how these will change over time before opening up to questions from the floor.
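
Here's a small sketch of what such beaconing might look like client-side; the endpoint, event names and payload are hypothetical, and navigator.sendBeacon is used because it is designed to survive page unload better than an ordinary request.

```typescript
// Sketch: report playback milestones to an ad server so impressions are counted.
type AdEvent = "impression" | "firstQuartile" | "midpoint" | "thirdQuartile" | "complete";

function fireBeacon(trackingUrl: string, event: AdEvent, adId: string): void {
  const payload = JSON.stringify({ event, adId, at: Date.now() });
  if (!navigator.sendBeacon(trackingUrl, payload)) {
    // Fall back to fetch with keepalive if the beacon could not be queued.
    void fetch(trackingUrl, { method: "POST", body: payload, keepalive: true });
  }
}
```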

Watch now!
Speaker

Phil Cluff
Streaming Specialist,
Mux

Video: HDR Formats and Trends

As HDR continues its slow march into use, its different forms in both broadcast and streaming can be hard to keep track of, let alone differentiate. This talk from the Seattle Video Tech meetup aims to tease out these details. Whilst HDR has long been held up as a perfect example of ‘better pixels’, and many have said they would prefer to deploy HD video plus HDR rather than moving to UHD at the same time as introducing HDR, few have followed through.

Brian Alvarez from Amazon Prime Video starts with a very brief look at how HDR has been created to sit on top of the existing distribution formats: HLS, DASH, HEVC, VP9, AV1, ATSC 3.0 and DVB. In each case it does so in a form based on either HLG or PQ.

Brian takes some time to discuss the differences between the two approaches to HDR. First, he looks at HLG, an ARIB standard which is freely available, though still subject to licensing. It is, technically, backwards compatible with SDR, but most importantly it doesn’t require metadata, which is a big benefit in the live environment and simplifies broadcast. PQ is next, and we hear how its approach differs from HLG’s, with the suggestion that it gives better visual performance. Within the PQ ecosystem, Brian works through the many standards, explaining how they differ; we see that the main differences are in colour space and bit depth.
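
Since the practical difference between the two systems comes down to their transfer functions, here's a short sketch of both using the published constants: the PQ EOTF (SMPTE ST 2084) maps a 0–1 signal to absolute luminance up to 10,000 cd/m², while the HLG OETF (ARIB STD-B67) maps relative scene light to a 0–1 signal with no metadata required.

```typescript
// PQ EOTF (SMPTE ST 2084): non-linear signal E' in [0,1] -> luminance in cd/m².
function pqEotf(ep: number): number {
  const m1 = 2610 / 16384;
  const m2 = (2523 / 4096) * 128;
  const c1 = 3424 / 4096;
  const c2 = (2413 / 4096) * 32;
  const c3 = (2392 / 4096) * 32;
  const p = Math.pow(ep, 1 / m2);
  const y = Math.pow(Math.max(p - c1, 0) / (c2 - c3 * p), 1 / m1);
  return 10000 * y; // pqEotf(1.0) === 10000 cd/m²
}

// HLG OETF (ARIB STD-B67): relative scene linear light E in [0,1] -> signal in [0,1].
function hlgOetf(e: number): number {
  const a = 0.17883277;
  const b = 1 - 4 * a;
  const c = 0.5 - a * Math.log(4 * a);
  return e <= 1 / 12 ? Math.sqrt(3 * e) : a * Math.log(12 * e - b) + c;
}
```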

The next part of the talk looks at the now-famous Venn diagrams (by Yoeri Geutskens) showing which companies and products support each variant of HDR. These allow us to visualise and understand the adoption of HDR10 versus HLG, for instance, to see how much broadcast TV is in PQ and HLG, to see that the film industry is producing exclusively in PQ, and much more. Brian comments on and gives context to each scenario as he goes.

Finally, a Q&A session covers displays, end-to-end metadata flow, whether customers can tell the difference, the drive for HDR adoption, and monitors for grading HDR.

Watch now! / Download the Slides

Speaker

Brian Alvarez
Principal Product Manager,
Amazon Prime Video