Video: How Libre Can You Go?


Many companies would love to be using free codecs, unencumbered by patents, rather than paying for HEVC or AVC. Phil Cluff shows that, contrary to popular belief, it is possible to stream with free codecs and get good coverage on mobile and desktop.

Phil starts off by looking at the codecs available and whether they’re patent encumbered, with an eye to how much of the market can actually decode them. Free codecs and containers such as VP8 and WebM are not supported by Safari, which reduces mobile penetration by half. To prove the point, Phil presents the results of his trials using HEVC, AVC and VP8 on all major browsers.
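
As an illustration of the kind of capability probing such trials rest on (this snippet is our own sketch, not code from the talk), a browser can be asked directly which codec/container combinations it will play through the Media Source Extensions API:

    // Probe which codec/container combinations this browser claims to support.
    // MIME strings follow RFC 6381; results vary by browser, OS and hardware.
    const candidates: Record<string, string> = {
      "AVC (H.264) in MP4":  'video/mp4; codecs="avc1.640028"',
      "HEVC (H.265) in MP4": 'video/mp4; codecs="hvc1.1.6.L93.B0"',
      "VP8 in WebM":         'video/webm; codecs="vp8"',
      "VP9 in WebM":         'video/webm; codecs="vp9"',
    };

    for (const [name, mime] of Object.entries(candidates)) {
      // MediaSource.isTypeSupported reports MSE playability, which is what
      // adaptive streaming players actually rely on.
      const ok = "MediaSource" in window && MediaSource.isTypeSupported(mime);
      console.log(`${name}: ${ok ? "supported" : "not supported"}`);
    }

On Safari, the WebM entries have typically come back unsupported – exactly the gap Phil’s results highlight.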

Whilst this initially paints a disappointing picture for streaming with libre codecs on mobile, there is a solution! Phil explains how an idea from several years ago is being reworked into a free streaming protocol, MPEG-SASH, which avoids DASH – itself built on the patent-encumbered ISO BMFF. He then explains how open video players like video.js can be modified to decode libre codecs.
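
The talk stays at the architectural level, but a minimal sketch of the player-side idea – feeding WebM/VP9 segments into a <video> element through Media Source Extensions, the layer that open players such as video.js build on – could look like this (the segment URLs are placeholders of ours):

    // Minimal MSE pipeline: append WebM/VP9 segments to a <video> element.
    const video = document.querySelector("video") as HTMLVideoElement;
    const mime = 'video/webm; codecs="vp9"';

    if (MediaSource.isTypeSupported(mime)) {
      const mediaSource = new MediaSource();
      video.src = URL.createObjectURL(mediaSource);

      mediaSource.addEventListener("sourceopen", async () => {
        const sb = mediaSource.addSourceBuffer(mime);
        // Initialisation segment first, then media segments in playback order.
        for (const url of ["/init.webm", "/seg-1.webm", "/seg-2.webm"]) {
          const data = await (await fetch(url)).arrayBuffer();
          await new Promise<void>((resolve) => {
            sb.addEventListener("updateend", () => resolve(), { once: true });
            sb.appendBuffer(data);
          });
        }
        mediaSource.endOfStream();
      });
    }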

With these two enhancements, we finally see that coverage of up to 80% on mobile is, in principle, possible.

Watch now!
Speaker

Phil Cluff
Streaming Specialist,
Mux

Video: Deploying CMAF In 2019

It’s all very well saying “let’s implement CMAF”, but what’s been implemented so far, and what can you expect in the real world, away from the hype and promises? RealEyes took the podium at the Video Engineering Summit to explain.

CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability, built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. CMAF stands for the Common Media Application Format; it was created to allow both HLS and DASH to be implemented on one common segment format. But the push to reduce latency further and further has meant CMAF is better known for its low-latency form, which can deliver streams with five to ten times lower latency.

John Gainfort takes on explaining CMAF, highlighting all the non-latency-related features before tackling its low-latency form. We look at what it is (a media format) and where it came from (ISO BMFF) before diving into the current possibilities and the ‘to do list’, DRM included.

Before the Q&A, John moves on to how CMAF is implemented to deliver low-latency streams: what to expect in terms of latency today, and the future items which, when achieved, will deliver the full low-latency experience.
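
To make the mechanism concrete: low-latency CMAF delivers each segment as a series of smaller CMAF chunks over HTTP chunked transfer encoding, so a player can start decoding a segment before the encoder has finished writing it. A hedged client-side sketch (the SourceBuffer set-up and URL are assumed):

    // Append a CMAF segment chunk-by-chunk as it arrives over HTTP
    // chunked transfer encoding, instead of waiting for the whole file –
    // the core client-side trick behind low-latency CMAF.
    async function appendChunked(sb: SourceBuffer, url: string): Promise<void> {
      const response = await fetch(url);
      const reader = response.body!.getReader();
      for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        await new Promise<void>((resolve) => {
          sb.addEventListener("updateend", () => resolve(), { once: true });
          sb.appendBuffer(value);
        });
      }
    }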

Watch now!

Speaker

John Gainfort
Development Manager,
RealEyes

Video: Tidying Up (Bits on the Internet)

Netflix’s Anne Aaron explains how VMAF came about and how AV1 is going to benefit both the business and the viewers.

VMAF is a method for computers to calculate the quality of a video in a way which matches a human’s opinion. Standing for Video Multi-Method Assessment Fusion, Anne explains that it’s a combination (a fusion) of more than one metric, each capturing a different aspect of perceived quality. She presents data showing VMAF’s increased correlation with real-life subjective tests.
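
In outline (our paraphrase of the public VMAF design, not a slide from the talk), per-frame elementary metrics – visual information fidelity (VIF), detail loss (DLM) and a motion feature – are fused by a support-vector regressor trained on human opinion scores, and the per-frame predictions are pooled, typically by the arithmetic mean, into a single clip score:

    \mathrm{VMAF}_n = f_{\mathrm{SVM}}\!\left(\mathrm{VIF}_n,\ \mathrm{DLM}_n,\ \mathrm{motion}_n\right),
    \qquad
    \mathrm{VMAF} = \frac{1}{N}\sum_{n=1}^{N}\mathrm{VMAF}_n

Training against human scores is what lets the fused number track subjective opinion more closely than any single metric can.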

Anne’s job is to maximise enjoyment of content through efficient use of bandwidth. She explains that there are many places where wireless data is limited, so getting the maximum amount of video through that bandwidth cap is an essential part of Netflix’s business health.

This ties in with why Netflix is part of the Alliance for Open Media, which is in the process of specifying AV1, the new video codec which promises bitrate improvements over and above HEVC. Anne expands on this and presents the aim of delivering 32 hours of video using AV1 to subscribers on 4 GB data plans.
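
As a rough sanity check on that figure (our arithmetic, not a slide from the talk), spreading a 4 GB monthly allowance over 32 hours of viewing leaves an average budget of roughly

    4\ \text{GB} = 4 \times 8 \times 10^{9}\ \text{bits} = 3.2\times10^{10}\ \text{bits},
    \qquad
    \frac{3.2\times10^{10}\ \text{bits}}{32\ \text{h} \times 3600\ \text{s/h}} \approx 278\ \text{kbps}

– a bitrate at which a codec with AV1’s claimed gains over HEVC becomes essential for watchable quality.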

Watch now!
Speaker

Anne Aaron
Netflix

Video: Implementing AES67 and ST 2110-30 in Your Plant

AES67 is a flexible standard, but with that flexibility comes complexity and nuance. Implementing it within ST 2110-30 takes some care, and this talk covers lessons learnt in doing exactly that.

AES67 is a standard defined by the Audio Engineering Society to enable high-performance audio-over-IP streaming interoperability between various AoIP systems like Dante, WheatNet-IP and Livewire. It provides comprehensive interoperability recommendations in the areas of synchronization, media clock identification, network transport, encoding and streaming, session description, and connection management.

The SMPTE ST 2110 standards suite makes it possible to separately route and break away the essence streams – audio, video, and ancillary data. ST 2110-30 addresses system requirements and payload formats for uncompressed audio streams and refers to a subset of the AES67 standard.

In this video, Dominic Giambo from Wheatstone Corporation discusses tips for implementing the AES67 and ST 2110-30 standards in a lab environment consisting of over 160 devices (consoles, surfaces, hardware and software I/O blades) and three different automation systems. The aim of the test was to pass audio through every single device, creating a very long chain, in order to detect any defects.

The following topics are covered:

  • SMPTE ST 2110-30 as a subset of AES67 (support of the PTP profile defined in SMPTE ST 2059-2, an offset value of zero between the media clock and the RTP stream clock, option to force a device to operate in PTP slave-only mode)
  • The importance of using IEEE-1588 PTP v2 master clock for accuracy
  • Packet structure (UDP and RTP header, payload type)
  • Network configuration considerations (mapping out IP and multicast addresses for different vendors, keeping all devices on the same subnet)
  • Discovery and control (SDP stream description files – an illustrative example follows this list – and configuration of signal flow from sources to destinations)
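
To make the SDP item concrete, here is a minimal, illustrative AES67 / ST 2110-30 stream description (every address, ID and name below is invented for the example). Note a=mediaclk:direct=0, which expresses the zero offset between the media clock and the RTP clock mentioned above, and a=ts-refclk, which ties the stream to the PTP grandmaster:

    v=0
    o=- 1311738121 1311738121 IN IP4 192.168.1.50
    s=Console mix (illustrative)
    c=IN IP4 239.69.1.22/32
    t=0 0
    m=audio 5004 RTP/AVP 96
    a=rtpmap:96 L24/48000/2
    a=ptime:1
    a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
    a=mediaclk:direct=0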

Watch now!

You can download the slides here.

Speaker

Dominic Giambo
Senior Embedded Engineer
Wheatstone Corporation