Video: WAVE (Web Application Video Ecosystem) Update

With wide membership including Apple, Comcast, Google, Disney, Bitmovin, Akamai and many others, the WAVE interoperability effort is tackling web media encoding, playback and platform interoperability issues using global standards.

John Simmons from Microsoft takes us through the history of WAVE, looking at the changes in the industry since 2008 and WAVE’s involvement in them. CMAF, backed by over 60 major companies, is an important recent technology milestone which is closely entwined with WAVE’s activity.

The WAVE Content Specification is derived from the ISO/IEC standard, “Common media application format (CMAF) for segmented media”. CMAF is the container for the audio, video and other content. It’s not a protocol like DASH, HLS or RTMP; rather, it’s more like an MPEG-2 transport stream. Much of the current interest in CMAF stems from its ability to deliver very low latency streaming of less than 4 seconds, but it’s also important because it represents a standardisation of fMP4 (fragmented MP4) practices.
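To give a flavour of what “a standardised fMP4” means in practice, here is a minimal Python sketch (the filename segment.cmfv is a hypothetical placeholder) that walks the top-level ISO BMFF boxes of a CMAF segment. A media segment typically starts styp/moof/mdat, while an initialisation segment starts ftyp/moov.

```python
import struct

def iter_boxes(path):
    """Yield (box_type, size) for each top-level ISO BMFF box in the file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:                      # 64-bit 'largesize' follows the header
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            yield box_type.decode("ascii", "replace"), size
            if size == 0:                      # box runs to the end of the file
                break
            f.seek(size - header_len, 1)       # skip over the box payload

# Hypothetical local file: a CMAF media segment downloaded from a test stream.
for box, size in iter_boxes("segment.cmfv"):
    print(f"{box}: {size} bytes")
```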

Standardising on CMAF allows media profiles to be defined which specify how to encapsulate particular codecs (AV1, HEVC etc.) in the stream. Because it’s a published specification, vendors are able to interoperate. Proof of the value of the WAVE project is the three amendments to the CMAF standard, which John mentions, issued by MPEG as a direct result of WAVE’s work in validating user requirements.

Whilst defining streaming is important in terms of helping in-cloud vendors work together and allowing broadcasters to build systems more easily, it’s vital that decoder devices are on board too, and much work goes into the decoder-device side of things.

On top of dealing with encoding and distribution, WAVE also specifies HTML5 API interoperability, with the aim of defining baseline web APIs to support media web apps and creating guidelines for media web app developers.

This talk was given at the Seattle Video Tech meetup.

Watch now!
Slides from the presentation
Check out the free CTA specs

Speaker

John Simmons
Media Platform Architect,
Microsoft

Video: From WebRTC to RTMP

With the demise of RTMP, what can WebRTC – its closest equivalent – learn from it? RTC stands for Real-Time Communications and hails from the video/voice teleconferencing world. RTC traditionally has ultra-low latency (think sub-second; real-time), so as broadcasters and streaming companies look to reduce latency, it’s the obvious technology to look at. However, RTC comes from a background of small meetings, mixed resolutions and mixed bandwidths, so the protocols underpinning it can lack what broadcast-style streamers need.

Nick Chadwick from Mux looks at the pros and cons of the venerable RTMP (Real Time Messaging Protocol). What was in it that was used and unused? What did it need that it didn’t have? What gap is being left by its phasing out?

Filling these growing gaps is the focus of the streaming community. Whether that comes through WebRTC, fragmented MP4 delivered over WebSockets, Low-Latency HLS (LHLS), Apple’s Low-Latency HLS, SASH, CMAF or something else, the need still has to be met.

Nick finishes with two demos showing capabilities of WebRTC that outstrip RTMP – live mixing in the browser. WebRTC clearly has a future for more adventurous services which don’t simply want to deliver a linear channel to sofa-dwelling humans. But Nick’s message is surely that WebRTC needs to step up to the plate for broadcasters in general, enabling them to achieve < 1-second end-to-end latency in a way that is compatible with broadcast workflows.

Watch now!
Speaker

Nick Chadwick
Software Engineer,
Mux

Video: Broadcast and OTT monitoring: The challenge of multiple platforms


Is it possible to monitor OTT services to the same standard as traditional broadcast services? How can they be visualised, what are the challenges and what makes monitoring streaming services different?

As with traditional broadcast, some broadcasters outsource the distribution of their streaming services to third parties. Whilst this can work well in broadcast, in streaming any channel would be missing a huge opportunity if it didn’t also gather analytics on the viewers using its service. So, to some extent, a broadcaster always wants visibility of the whole chain. Even when distribution is not outsourced and the OTT system has been developed and is run by the broadcaster, at some point a third party will be involved, typically the CDN and/or edge network. A broadcaster would do well to monitor the video at all points through the chain, right up to the edge.

The reason for monitoring is to keep viewers happy and, by doing so, reduce churn. When you have analytics from a player telling you something isn’t right, it’s only natural to want to find out what went wrong, and to know that, you need monitoring in your distribution chain. With that monitoring in place, you can be much more proactive in resolving issues and improve your service overall.

Jeff Herzog from Verizon Digital Media Services explains ways to achieve this and the benefits it can bring. After a primer on HLS streaming, he explains ways to monitor the video itself and also how to monitor everything but the video as a light-touch monitoring solution.

Jeff explains that because HLS is based on playlists and on files being available, you can learn a lot about your service just by monitoring these small text files, parsing them and checking that all the files they reference are available with minimal wait times. By doing this, and with a few other tricks, you can successfully gauge how well your service is working without the difficulty of dealing with large volumes of video data. The talk finishes with some examples of what this monitoring can look like in action.
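As a rough illustration of that light-touch approach (not Jeff’s implementation – the playlist URL and response-time threshold below are placeholders), this Python sketch fetches an HLS media playlist, parses out the segment URIs and issues HEAD requests to confirm each file is reachable and responding quickly.

```python
import time
from urllib.parse import urljoin
from urllib.request import Request, urlopen

PLAYLIST_URL = "https://example.com/live/stream.m3u8"   # placeholder URL
MAX_RESPONSE_SECONDS = 2.0                               # placeholder threshold

def fetch(url, method="GET"):
    """Fetch a URL and return (body, elapsed_seconds)."""
    start = time.monotonic()
    with urlopen(Request(url, method=method)) as resp:
        body = resp.read() if method == "GET" else b""
    return body, time.monotonic() - start

def check_playlist(url):
    body, elapsed = fetch(url)
    print(f"playlist fetched in {elapsed:.2f}s")
    # Every non-comment, non-blank line in a media playlist is a segment URI.
    segments = [line.strip() for line in body.decode().splitlines()
                if line.strip() and not line.startswith("#")]
    for seg in segments:
        seg_url = urljoin(url, seg)
        _, seg_time = fetch(seg_url, method="HEAD")
        status = "OK" if seg_time <= MAX_RESPONSE_SECONDS else "SLOW"
        print(f"{status} {seg_time:.2f}s {seg_url}")

check_playlist(PLAYLIST_URL)
```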

This talk was given at the SMPTE Annual Technical Conference 2018.
For more OTT videos, check out The Broadcast Knowledge’s YouTube OTT playlist.
Speaker

Jeff Herzog
Senior Product Manager, Video Monitoring & Compliance,
Verizon Digital Media Services

Video: What’s the Deal with LL-HLS?

Low latency streaming was moving forward without Apple’s help – but they’ve published their specification now, so what does that mean for the community efforts that were already underway and, in some places, in use?

Apple is responsible for HLS, the most prevalent protocol for streaming video online today. In itself, that’s a great success story, as HLS was ideal for its time. It relied on HTTP, a tried and trusted technology of the day, and the fact that it was file-based rather than a stream pushed from the origin was a key factor in its wide adoption.

As life has moved on and demands have shifted from “I’d love to see some video – any video – on the internet!” to “Why is my HD stream arriving after my flatmate’s TV’s?”, we see that HLS isn’t quite up to the task of low-latency delivery. Using pure HLS as originally specified, a latency of less than 20 seconds was an achievement.

Various methods were therefore employed to improve HLS. These included cutting the duration of each piece of the video, introducing HTTP/1.1’s Chunked Transfer Encoding, early announcement of chunks and more. Using these techniques, Low Latency HLS (LHLS) was able to deliver streams with latencies from 9 down to 4 seconds.
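To illustrate the chunked-transfer idea: rather than waiting for a whole segment file to exist, the player reads the response as it arrives, so a segment can be consumed while it is still being written. Here is a minimal Python sketch with a hypothetical segment URL.

```python
from urllib.request import urlopen

SEGMENT_URL = "https://example.com/live/segment_1234.ts"   # hypothetical URL

# With Chunked Transfer Encoding the server starts sending bytes before the
# segment is complete, so data can be buffered and decoded as it arrives.
with urlopen(SEGMENT_URL) as resp:
    received = 0
    while True:
        chunk = resp.read(16 * 1024)   # read the next piece of the segment
        if not chunk:                  # an empty read means the segment has ended
            break
        received += len(chunk)
        # hand `chunk` to the demuxer/decoder here
        print(f"received {received} bytes so far")
```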

Come WWDC this year, Apple announced its own specification for achieving low-latency streaming, which the community is calling ALHLS (Apple Low-Latency HLS). There are notable differences between Apple’s approach and the one already adopted by the community at large. Given the estimated 1.4 billion active iOS devices and the fact that Apple will use adherence to this specification to certify apps as ‘low latency’, this is something the community can’t ignore.

Zac Shenker from CBS Interactive explains some of this backstory and helps us unravel what it means for us all. Zac first explains what LHLS is and then goes into detail on Apple’s version, which includes interesting mandatory elements like using HTTP/2. Using HTTP/2 and the newer QUIC (which will effectively become HTTP/3) is very tempting for streaming applications, but it requires work on both the server and the player side. Recent tests using QUIC have been, taken as a whole, inconclusive as to whether it has a positive or negative impact on streaming performance; experiments have shown both results.

The talk is a detailed look at the large array of requirements in this specification. The conclusion is general surprise at the number of ‘moving parts’: there is significant work to be done on the server as well as in the player. The server will have to remember state and, due to the use of HTTP/2, it’s not clear that the very small playlist.m3u8 files can be served from a playlist-optimised CDN separately from the video, as is often the case today.

There’s a whole heap of difference between serving a flood of large files and delivering a small, though continually updated, file to thousands of endpoints. As such, CDNs are currently optimised separately for the text playlists and the media files they serve; they may even be delivered by totally separate infrastructures.

Zac explains why this changes with LL-HLS, both in terms of that separation and in the frequency with which the playlist files are updated. He goes on to explore the other open questions, like how easy it will be to integrate Server-Side Ad Insertion (SSAI) and even the appetite for adoption of HTTP/2.
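One example of the server-side state Zac discusses is LL-HLS’s blocking playlist reload: the client asks for a playlist containing a media sequence number and part that don’t exist yet, and the server holds the request open until they do. The sketch below shows only the client side of that exchange; the URL and sequence numbers are placeholders, and a real deployment would also need the HTTP/2 support discussed above.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

PLAYLIST_URL = "https://example.com/live/stream.m3u8"   # placeholder URL

def request_playlist_containing(msn, part):
    """Blocking playlist reload: the _HLS_msn and _HLS_part query directives
    ask the server to respond only once the given media sequence number and
    part have been published."""
    query = urlencode({"_HLS_msn": msn, "_HLS_part": part})
    with urlopen(f"{PLAYLIST_URL}?{query}") as resp:
        return resp.read().decode("utf-8")

# Illustrative numbers: wait for part 2 of media sequence 1234 to appear.
playlist = request_playlist_containing(1234, 2)
print("\n".join(playlist.splitlines()[:10]))   # show the first few lines
```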

Watch now!
Speaker

Zac Shenker
Director of Engineering, Video Experience & Optimization,
CBS Interactive