Video: Broadcast and OTT monitoring: The challenge of multiple platforms


Is it possible to monitor OTT services to the same standard as traditional broadcast services? How can they be visualised, what are the challenges and what makes monitoring streaming services different?

As with traditional broadcast, some broadcasters outsource the distribution of streaming services to third parties. Whilst this can work well in broadcast, a channel would be missing out on a huge opportunity if it didn’t also monitor analytics from viewers using its streaming service. So, to some extent, a broadcaster always wants to look at the whole chain. Even when the distribution is not outsourced and the OTT system has been developed and is run by the broadcaster, at some point a third party will have to be involved, typically the CDN and/or edge network. A broadcaster would do well to monitor the video at all points through the chain, right up to the edge.

The reason for monitoring is to keep viewers happy and, by doing so, reduce churn. When you have analytics from a player telling you something isn’t right, it’s only natural to want to find out what went wrong, and to know that, you will need monitoring in your distribution chain. When you have that monitoring, you can be much more proactive in resolving issues and improve your service overall.

Jeff Herzog from Verizon Digital Media Services explains ways to achieve this and the benefits it can bring. After a primer on HLS streaming, he explains ways to monitor the video itself and also how to monitor everything but the video as a light-touch monitoring solution.

Jeff explains that because HLS is based on playlists and files being available, you can learn a lot about your service just by monitoring these small text files, parsing them and checking that all the files they mention are available with minimal wait times. By doing this and other tricks, you can successfully gauge how well your service is working without the difficulty of dealing with large volumes of video data. The talk finishes with some examples of what this monitoring can look like in action.
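As a rough illustration of this "everything but the video" idea, the sketch below fetches a media playlist, parses out the segment URIs, and HEAD-checks each one within an acceptable wait time. The timeout value and the HEAD-request approach are illustrative assumptions, not a description of Verizon's actual tooling.

```python
# Light-touch HLS monitoring sketch: parse the small playlist text file and
# confirm the segments it lists are reachable quickly. Assumptions for
# illustration only: a 2-second wait budget and HEAD requests for the checks.
import time
import urllib.request
from urllib.parse import urljoin

def segment_uris(playlist_text):
    """Return the media segment URIs listed in an HLS media playlist."""
    return [line.strip() for line in playlist_text.splitlines()
            if line.strip() and not line.startswith("#")]  # skip tags/comments

def check_segments(playlist_url, max_wait=2.0):
    """HEAD-check every listed segment; return (url, status, seconds) tuples."""
    with urllib.request.urlopen(playlist_url, timeout=max_wait) as resp:
        uris = segment_uris(resp.read().decode("utf-8"))
    results = []
    for uri in uris:
        seg_url = urljoin(playlist_url, uri)      # URIs may be relative
        start = time.monotonic()
        req = urllib.request.Request(seg_url, method="HEAD")
        try:
            status = urllib.request.urlopen(req, timeout=max_wait).status
        except Exception:
            status = None                         # missing or too slow
        results.append((seg_url, status, time.monotonic() - start))
    return results
```

Because the playlists are tiny compared with the media they describe, checks like this can run frequently across the whole chain at very low cost.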

This talk was given at the SMPTE Annual Technical Conference 2018.
For more OTT videos, check out The Broadcast Knowledge’s YouTube OTT playlist.
Speakers

Jeff Herzog
Senior Product Manager, Video Monitoring & Compliance,
Verizon Digital Media Services

Video: What’s the Deal with ALHLS?

Low latency streaming was moving forward without Apple’s help – but they’ve published their specification now, so what does that mean for the community efforts that were already under way and, in some places, in use?

Apple is responsible for HLS, the most prevalent protocol for streaming video online today. In itself it’s a great success story as HLS was ideal for its time. It relied on HTTP which was a tried and trusted technology of the day, but the fact it was file-based instead of a stream pushed from the origin was a key factor in its wide adoption.
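To make the file-based nature concrete, here is what a minimal HLS media playlist looks like (segment names and durations are made up for illustration). It is just a short text file that the player fetches over HTTP and re-fetches as new segments are appended:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:100
#EXTINF:6.0,
segment100.ts
#EXTINF:6.0,
segment101.ts
```

Because the player pulls these small files and the segments they reference, any plain HTTP server or CDN can deliver the stream with no special streaming infrastructure.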

As life has moved on and demands have shifted from “I’d love to see some video – any video – on the internet!” to “Why is my HD stream arriving after my flatmate’s TV’s?”, we see that HLS isn’t quite up to the task of low-latency delivery. Using pure HLS as originally specified, a latency of less than 20 seconds was an achievement.

Various methods were, therefore, employed to improve HLS. These ideas included cutting the duration of each piece of the video, introducing HTTP 1.1’s Chunked Transfer Encoding, early announcement of chunks and many others. Using these, and other, techniques, Low Latency HLS (LHLS) was able to deliver latencies of 9 down to 4 seconds.

Come WWDC this year, Apple announced their specification on achieving low latency streaming which the community is calling ALHLS (Apple Low-latency HLS). There are notable differences in Apple’s approach to that already adopted by the community at large. Given the estimated 1.4 billion active iOS devices and the fact that Apple will use adherence to this specification to certify apps as ‘low latency’, this is something that the community can’t ignore.

Zac Shenker from Comcast explains some of this backstory and helps us unravel what this means for us all. Zac first explains what LHLS is and then goes into detail on Apple’s version, which includes interesting, mandatory, elements like using HTTP/2. Using HTTP/2 and the newer QUIC (which will become, effectively, HTTP/3) is very tempting for streaming applications, but it requires work both on the server and the player side. Recent tests using QUIC have been, when taken as a whole, inconclusive in terms of working out whether it has a positive or a negative impact on streaming performance; experiments have shown both results.

The talk is a very good and detailed look at the large array of requirements in this specification. The conclusion is a general surprise at the number of ‘moving parts’: there is significant work to be done on the server as well as the player. The server will have to remember state and, due to the use of HTTP/2, it’s not clear that the very small playlist.m3u8 files can be served from a playlist-optimised CDN separately from the video, as is often the case today.

There’s a whole heap of difference between serving a flood of large files and delivering a small, though continually updated, file to thousands of endpoints. As such, CDNs are currently optimised separately for the text playlists and the media files they serve. They may even be delivered by totally separate infrastructures.

Zac explains why this changes with ALHLS, both in terms of separation and in the frequency of updating the playlist files. He goes on to explore other open questions, like how easy it will be to integrate Server-Side Ad Insertion (SSAI) and even the appetite for adoption of HTTP/2.

Watch now!
Speaker

Zac Shenker
Director of Engineering, Video Experience & Optimization,
CBS Interactive

Video: How to Identify Real-World Playout Options

There are so many ways to stream video, how can you find the one that suits you best? Weighing up the pros and cons in this talk is Robert Reinhardt from videoRx.

Taking each of the main protocols in turn, Robert explains the prevalence of each technology, from HLS and DASH through to WebRTC and even WebSockets. Commenting on each from his personal experience of implementing them with clients, he builds up a picture of the best situations in which to use each of them.

Speakers

Robert Reinhardt
CTO,
videoRX

Video: Monetization with Manifest Manipulation

Manipulating the manifest of streamed video allows localisation of adverts with the option of per-client customisation. This results in better monetisation but also a better way to deal with blackouts and other regulatory or legal restrictions.

Most streamed video is delivered using a playlist: a simple text file listing the locations of the many files that contain the video. This means you could deliver different playlists to clients in different locations, detected by geolocating the IP address. Similarly, different ads can be delivered depending on the type of client requesting: phone, tablet, computer etc.
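A minimal sketch of that idea is below: rewrite the ad segment URIs in a playlist based on the requesting client's region. The AD_POOLS mapping, the "ad" filename convention and the fallback region are all assumptions made for illustration, not part of any real ad-decision system.

```python
# Hypothetical manifest-manipulation sketch: point ad segments at a
# region-specific pool while leaving content segments untouched.
# AD_POOLS and the "ad" filename prefix are illustrative assumptions.
AD_POOLS = {
    "uk": "https://ads.example.com/uk/",
    "us": "https://ads.example.com/us/",
}

def localise_manifest(playlist_text, region):
    """Return the playlist with ad segment URIs rewritten for the region."""
    base = AD_POOLS.get(region, AD_POOLS["us"])  # fall back to a default pool
    out = []
    for line in playlist_text.splitlines():
        if not line.startswith("#") and line.startswith("ad"):
            out.append(base + line)              # redirect ad segments
        else:
            out.append(line)                     # tags and content unchanged
    return "\n".join(out)
```

In practice this rewriting happens server-side per request, so two viewers asking for the same channel can receive differently monetised playlists from the same origin.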

Here, Imagine’s Yuval Fisher starts by reminding us how online streaming typically works, using HLS as an example. He then leads us through the possibilities of manifest manipulation. One interesting idea is using this to remove hardware, delivering cost savings by using the same infrastructure to deliver to both the internet and broadcast. Yuval finishes up with a list of “Dos and Don’ts” to explain the best way to achieve the playlist manipulation.

Sarah Foss rounds off the presentation explaining how manifest manipulation sits at the centre of the rest of the ad-delivery system.

Watch now!

Speakers

Yuval Fisher
CTO, Distribution
Imagine Communications.
Sarah Foss
Former SVP & GM, Ad Tech,
Imagine Communications.