Video: Understanding the World of Ad Tech

Advertising has been the mainstay of TV for many years. Like it or loathe it, ad-supported VoD (AVoD) delivers free-to-watch services that open up content to a much wider audience than would otherwise be possible, just like ad-supported broadcast TV. Even people who can afford subscriptions have a limit to the number of services they will pay for. An AVoD offering draws people in, and if you also have SVoD, there's a path to convincing them to sign up.

To look at where ad tech is today and what problems still exist, Streaming Media contributing editor Nadine Krefetz has brought together Byron Saltysiak from WarnerMedia, Verizon Media's Roy Firestone, CBS Interactive's Jarred Wilichinsky and Newsy's Tony Brown to share their daily experience of working with OTT ad tech.

Nadine is quick to ask the panel what they feel the weakest link in ad tech is. 'Scaling up,' answers Jarred, who has seen from massive events how quickly parts of the ad ecosystem fail when millions of people hit an ad break at the same time. Byron adds that the demise of Flash brought the loss of an abstraction layer: previously, as long as you got Flash right, it would work on all platforms, but now each platform has to be targeted directly, leading to a lot of complexity. Lastly, redundancy came up as a weakness. Linked to Jarred's point about the inability to scale easily, the panel's consensus is that they are far off broadcast's five-nines uptime targets. In some ways this is to be expected, as IT is a more fragmented, faster-moving market than consumer TVs, making it all the harder to keep up with changing patterns.

Much of the conversation centred on ad tech as an ecosystem, which is both a benefit and a drawback. However much a streaming provider invests in bolstering its own service to cope with millions upon millions of requests, it simply can't control what the rest of the ecosystem does, and if 2 million people all hit a break at once, it doesn't take much for an ad provider's servers to collapse under the weight. On the other hand, points out Byron, what is a drawback is also a strength: streaming has an advantage of scale that broadcasters don't. Roy's service delivered one hundred thousand matches last year; Byron asks how many linear channels you'd need to cover that many.

Speed is a problem given that the ad auction needs to happen in the twenty seconds or so before the ad is shown to the viewer. With so many players, things can go wrong, starting simply with slow responses to requests, but also with ad lengths. Ad breaks are built around 15-second segments, so it's difficult when companies want 6- or 11-second spots, and it's particularly bad when five 6-second ads are scheduled for one break: "no-one wants to see that."
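To illustrate the ad-length problem, here's a minimal sketch (hypothetical durations and greedy logic, not any panellist's actual system) of filling a 30-second ad pod from creatives of mixed lengths:

```python
# Hypothetical sketch: filling a 30-second ad pod. Breaks are planned
# around 15-second slots, so odd durations (6s, 11s) leave awkward gaps
# or force many short ads back to back.

def fill_pod(pod_length, ad_durations):
    """Greedily pick ads (longest first) until the pod is full.

    Returns the chosen durations and any unfilled airtime left over.
    """
    chosen = []
    remaining = pod_length
    for d in sorted(ad_durations, reverse=True):
        if d <= remaining:
            chosen.append(d)
            remaining -= d
    return chosen, remaining

# A pod built from standard 15s creatives fills exactly:
print(fill_pod(30, [15, 15, 15]))          # ([15, 15], 0)
# A pool of 6s and 11s ads crams in four spots and still leaves a gap:
print(fill_pod(30, [11, 6, 6, 6, 6, 6]))   # ([11, 6, 6, 6], 1)
```

A real ad decision server weighs far more than duration (targeting, price, competitive separation), but the arithmetic of non-standard lengths is the same.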

Jarred laments that despite the standards and guidelines available, "it's still the wild west" when it comes to ad quality and loudness, with viewers bearing the brunt of these mismatched practices.

Nadine asks about the privacy regulations that are increasingly reducing advertisers' access to viewer data. Byron points out that they still need some way to identify a user, if only to avoid showing them the same ad all the time. It turns out that registered/subscribed users can be tracked under some regulations, so there's a big push to have people sign up.
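The identification need Byron describes is essentially frequency capping. A minimal sketch (illustrative identifiers and cap, not any particular platform's implementation) of tracking impressions per signed-in user:

```python
from collections import defaultdict

# Hypothetical frequency-capping sketch: count impressions per
# (user, ad) pair so a viewer isn't shown the same creative more than
# a configured number of times. Without a stable user identifier,
# this bookkeeping isn't possible.
MAX_IMPRESSIONS = 3
impressions = defaultdict(int)   # (user_id, ad_id) -> count so far

def can_show(user_id, ad_id):
    return impressions[(user_id, ad_id)] < MAX_IMPRESSIONS

def record_impression(user_id, ad_id):
    impressions[(user_id, ad_id)] += 1

# Five ad opportunities, but the creative is only served three times.
for _ in range(5):
    if can_show("user-42", "ad-7"):
        record_impression("user-42", "ad-7")

print(impressions[("user-42", "ad-7")])  # 3
```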

Other questions covered by the panel include QA processes, the need for more automation in QA, how to go about starting your own service, dealing with Roku boxes and how to deal with AVoD downloaded files which, when brought online, need to update the ad servers about which ads were watched.

Watch now!
Speakers

Tony Brown
Chief of Staff,
Newsy
Jarred Wilichinsky
SVP Global Video Monetization and Operations,
CBS Interactive
Byron Saltysiak
VP of Video and Connected Devices,
WarnerMedia
Roy Firestone
Principal Product Manager,
Verizon Media
Nadine Krefetz
Contributing Editor,
Streaming Media

Video: Engineering a Live Streaming Workflow for Super Bowl LIII


Super Bowl LIII has come and gone with another victory for the New England Patriots. CBS Interactive, responsible for streaming the event, built a new system to handle all the online viewers. Previously they used one vendor for acquisition and encoding and another for origin storage, service delivery and security. This time the encoders were located in the CBS Broadcast Center in New York and all other systems moved to the AWS cloud, giving CBS full control over the streams.

Due to the very high volume of traffic (between 30 and 35 terabits per second), four different CDN vendors had to be engaged. A cloud storage service optimized for live streaming video not only provided performance, consistency and low latency, but also allowed CBS to manage multi-CDN delivery effectively.
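A common way to manage multi-CDN delivery is weighted steering of viewer sessions. A minimal sketch, with hypothetical CDN names and weights (not CBS's actual configuration):

```python
import random

# Hypothetical multi-CDN steering: each viewer session is assigned to
# one of four CDNs in proportion to configured weights, which an
# operator can rebalance if one vendor degrades during the event.
CDN_WEIGHTS = {       # illustrative names and weights
    "cdn-a": 40,
    "cdn-b": 30,
    "cdn-c": 20,
    "cdn-d": 10,
}

def pick_cdn(weights, rng=random):
    """Pick a CDN name with probability proportional to its weight."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

session_cdn = pick_cdn(CDN_WEIGHTS)
print(session_cdn)  # e.g. "cdn-a"
```

Real steering systems usually also factor in per-CDN health metrics and geography rather than static weights alone.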

In this video, Krystal presents a step-by-step approach to creating a hybrid cloud/on-premise infrastructure for the Super Bowl, including ad insertion, multi-CDN delivery, monitoring and operational visibility. She emphasizes the importance of scaling infrastructure to meet audience demand, taking ownership of the end-to-end workflow, performing rigorous testing and handling communication across multiple teams and vendors.

You can download the slides from here.

Watch now!

Speaker

Krystal Mejia
Software Engineer,
CBS Interactive

Video: What’s the Deal with LL-HLS?

Low latency streaming was moving forward without Apple’s help – but they’ve published their specification now, so what does that mean for the community efforts that were already underway and, in some places, in use?

Apple is responsible for HLS, the most prevalent protocol for streaming video online today. HLS is a great success story in itself, as it was ideal for its time: it relied on HTTP, a tried and trusted technology of the day, and the fact that it was file-based, rather than a stream pushed from the origin, was a key factor in its wide adoption.

As life has moved on and demands have shifted from "I'd love to see some video – any video – on the internet!" to "Why is my HD stream arriving after my flatmate's TV?", we see that HLS isn't quite up to the task of low-latency delivery. Using pure HLS as originally specified, a latency of less than 20 seconds was an achievement.
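Where that ~20 seconds comes from can be sketched with back-of-envelope numbers (illustrative figures, assuming the player buffers three full segments before starting playback, per Apple's original guidance):

```python
# Back-of-envelope sketch of classic HLS latency, not a measurement.
# The player must wait for whole segments to be published before it
# can fetch them, and it buffers several before starting.
segment_duration = 6     # seconds per segment (10s was common earlier)
buffered_segments = 3    # segments held before playback starts
encode_and_package = 2   # encoder + packager delay, rough guess
cdn_and_network = 1      # propagation through the CDN, rough guess

latency = segment_duration * buffered_segments + encode_and_package + cdn_and_network
print(latency)  # 21 seconds: in the ballpark of the ~20s figure above
```

Shrinking `segment_duration` helps, but shorter segments mean more requests and worse compression efficiency, which is why the community looked for other levers.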

Various methods were therefore employed to improve HLS. These ideas included cutting the duration of each piece of the video, introducing HTTP/1.1's Chunked Transfer Encoding, early announcement of chunks and many others. Using these and other techniques, Low-Latency HLS (LHLS) was able to deliver latencies of 9 down to 4 seconds.

Come WWDC this year, Apple announced its specification for achieving low-latency streaming, which the community is calling ALHLS (Apple Low-Latency HLS). There are notable differences between Apple's approach and that already adopted by the community at large. Given the estimated 1.4 billion active iOS devices, and the fact that Apple will use adherence to this specification to certify apps as 'low latency', this is something the community can't ignore.

Zac Shenker from CBS Interactive explains some of this backstory and helps us unravel what it means for us all. Zac first explains what LHLS is and then goes into detail on Apple's version, which includes interesting mandatory elements like using HTTP/2. Using HTTP/2 and the newer QUIC (which will effectively become HTTP/3) is very tempting for streaming applications, but it requires work on both the server and the player side. Recent tests using QUIC have been, taken as a whole, inconclusive on whether it has a positive or negative impact on streaming performance; experiments have shown both results.

The talk is a detailed look at the large array of requirements in this specification. The conclusion is general surprise at the number of 'moving parts': there is significant work to be done on both the server and the player. The server will have to remember state, and due to the use of HTTP/2, it's not clear that the very small playlist.m3u8 files can be served from a playlist-optimised CDN separately from the video, as is often the case today.
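For reference, a media playlist under Apple's LL-HLS looks roughly like this (a fragment based on Apple's published specification; segment names and values are illustrative):

```
#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:4
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0
#EXT-X-PART-INF:PART-TARGET=0.333
#EXT-X-MEDIA-SEQUENCE:265
#EXTINF:4.0,
seg265.mp4
#EXT-X-PART:DURATION=0.333,URI="seg266.part0.mp4",INDEPENDENT=YES
#EXT-X-PART:DURATION=0.333,URI="seg266.part1.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="seg266.part2.mp4"
```

Clients poll this playlist with blocking requests (e.g. `playlist.m3u8?_HLS_msn=266&_HLS_part=2`), which the server holds open until the requested part exists – the server-side state Zac refers to, and a departure from the stateless file serving CDNs are built around.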

There’s a whole heap of difference between serving a flood of large files and delivering a small, though continually updated, file to thousands of endpoints. As such, CDNs are currently optimised separately for the text playlists and the media files they serve; they may even be delivered by totally separate infrastructures.

Zac explains why this changes with LL-HLS both in terms of separation but also in the frequency of updating the playlist files. He goes on to explore the other open questions like how easy it will be to integrate Server-Side Ad Insertion (SSAI) and even the appetite for adoption of HTTP/2.

Watch now!
Speaker

Zac Shenker
Director of Engineering, Video Experience & Optimization,
CBS Interactive