Video: A Technical Overview of AV1

If there’s any talk that cuts through the AV1 hype, it’s this one. This talk from the @Scale conference starts by re-introducing AV1 and the Alliance for Open Media (AoM), then moves quickly on to the encoding techniques and toolsets now available in AV1.

Starting with the evolution from VP9 to AV1, Google engineer Yue Chen covers:

  • Extended Reference Frames
  • Motion Vector Prediction
  • Dynamic Motion Vector Referencing
  • Overlapped Block Motion Compensation
  • Masked Compound Prediction
  • Warped Motion Compensation
  • Transform (TX) Coding, Kernels & Block Partitioning
  • Entropy Coding
  • AV1 Symbol Coding
  • Level-map TX Coefficient Coding
  • Restoration and Post-Processing
  • Constrained Directional Enhancement Filtering
  • In-loop Restoration & Super Resolution
  • Film Grain Synthesis

The talk finishes by looking at the compression efficiency of AV1 against both HEVC (x265) and VP9 (libvpx), then coding complexity in terms of speed, plus what’s next on the roadmap!

Watch now!

Speaker

Yue Chen
Senior AV1 Engineer,
Google

Video: PTP Management and Media Flow Monitoring for All IP Infrastructures

Black and burst was always a ‘set and forget’ system. PTP, which replaces it, deserves active monitoring – and the same is true of your uncompressed media streams as we hear in this talk from the IP Showcase.

In professional essence-over-IP systems such as those based on SMPTE ST 2110, timing needs to be rock solid. Thanks to the asynchronous nature of IP, many different flows can be carried across a network without having to be concerned with synchronization, but this presents a challenge in the production environment. To provide the necessary “genlock”, a precise timing standard is needed; this is provided by SMPTE ST 2059, which defines how broadcast signals relate to the IEEE 1588-2008 Precision Time Protocol, commonly referred to as PTPv2. This protocol is very different from the analogue Black Burst and Tri-Level signals used in the SDI world, so new tools and skills are required for fault finding.
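
To make the relationship a little more concrete: because signals are phase-aligned to a common epoch, any device that knows the PTP time can work out for itself where the next frame boundary falls, with no extra signalling. The sketch below illustrates that calculation only in outline, not the standard itself; the function name and the example values are invented for illustration.

```python
import math
from fractions import Fraction

def seconds_to_next_frame(ptp_seconds: Fraction, frame_rate: Fraction) -> Fraction:
    """Time until the next frame alignment point, given PTP time since the epoch.

    ST 2059-1 phase-aligns signals to the SMPTE epoch, so frame boundaries fall
    on integer multiples of the frame period. Exact rational arithmetic keeps
    non-integer rates such as 30000/1001 precise. This is a sketch of the idea,
    not an implementation of the standard.
    """
    frames_elapsed = ptp_seconds * frame_rate                   # frames since the epoch
    next_boundary = Fraction(math.ceil(frames_elapsed)) / frame_rate
    return next_boundary - ptp_seconds

# Hypothetical PTP time (TAI seconds since 1970-01-01) and 29.97 fps video
ptp_now = Fraction(1_600_000_000) + Fraction(123, 1000)
print(float(seconds_to_next_frame(ptp_now, Fraction(30000, 1001))))
```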

In the first part of this presentation, Thomas Gunkel from Skyline Communications focuses on best practices to configure, monitor and manage PTP in an all-IP infrastructure, covering the following:

  • PTP protocol vs reality (packet delay variation, network asymmetry, imperfect timestamping)
  • Increasing reliability of PTP (hardware timestamping, using QoS to prioritise PTP traffic, correcting timing intervals)
  • PTP device issues (grandmaster / boundary clock failure, loss of external reference, badly implemented BMCA)
  • PTP network issues (missing / corrupted event messages, increased packet delay variation, network asymmetry, multicast issues)
  • Automating PTP configuration (BMCA settings, messaging rate intervals, communication mode)
  • Automated PTP provisioning (detecting new PTP-aware devices using IS-04 or proprietary protocols, extracting end-to-end PTP topology with LLDP, applying standard PTP profiles)
  • PTP monitoring and control (monitoring every metric related to PTP such as PTP offset, PTP mean path delay and multicast PTP network traffic for all grandmaster, master and slave devices, and preventing slave devices from becoming master) – see the sketch after this list
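
As the final bullet notes, PTP offset and mean path delay are the headline metrics to watch. Both fall out of the four timestamps exchanged in each Sync / Delay_Req cycle, as in the minimal sketch below; the timestamp values are invented, and a real monitoring system would read them from the devices or from packet captures.

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Compute slave offset and mean path delay from one PTP exchange.

    t1: Sync sent by master        t2: Sync received by slave
    t3: Delay_Req sent by slave    t4: Delay_Req received by master
    The calculation assumes a symmetric path; network asymmetry shows up
    directly as an error in the offset, which is one reason to monitor it.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Hypothetical timestamps in seconds
offset, delay = ptp_offset_and_delay(t1=10.000000, t2=10.000150,
                                     t3=10.001000, t4=10.001130)
print(f"offset={offset * 1e6:.1f} us, mean path delay={delay * 1e6:.1f} us")
```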

The second part of this video shows how to track uncompressed media flows in an ST 2110 IP-based media facility using a multi-layer approach, and how to pinpoint any potential issues using a network monitoring system. Topics covered:

  • All IP flows vs SDI signals
  • Essentials for true orchestration (dynamically orchestrated resources and media services, monitoring / controlling infrastructure and media flows, automatic devices detection and provisioning)
  • Detecting issues (wrong DB entries for multicast essences, broadcast controller and SDN controller DBs out of sync, source not active, IGMP join / leave issues, SSM issues, network oversubscription)
  • Media flow tracking (reading cross point status from the SDN controller, comparing this status with the actual network topology, detecting “ghost” streams, using sFlow / NetFlow to track individual multicast flows) – see the sketch after this list
  • Importance of true end-to-end SDN orchestration rather than SDN control (routing protocols which provide feedback)
  • All IP routing procedure (resolving multicast flow topology in combination with label management, checking source, checking destination route, presenting data for root cause analysis on each of these steps)
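
To make the “ghost stream” idea in the media flow tracking bullet concrete, the sketch below compares the SDN controller’s view of which multicast flows should exist with the flows actually observed on the network (for example, aggregated from sFlow / NetFlow samples). The data structures and addresses are hypothetical; the point is simply the set comparison that flags streams present on the wire but unknown to the controller, and expected streams that are not actually flowing.

```python
# Controller's view: multicast group -> expected sender, as read from the
# SDN controller's crosspoint / route database (values are hypothetical).
controller_flows = {
    "239.1.1.1": "cam-01",
    "239.1.1.2": "cam-02",
    "239.1.1.3": "gfx-01",
}

# Multicast groups actually seen carrying traffic, e.g. from sFlow samples.
observed_flows = {"239.1.1.1", "239.1.1.3", "239.9.9.9"}

# Streams on the wire that no controller entry accounts for ("ghost" streams).
ghost_streams = observed_flows - set(controller_flows)

# Expected streams with no traffic (source not active, IGMP issue, etc.).
missing_streams = set(controller_flows) - observed_flows

print("ghost streams:", sorted(ghost_streams))    # ['239.9.9.9']
print("not flowing:", sorted(missing_streams))    # ['239.1.1.2']
```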

Watch now!

You can download the slides from here.

Speaker

Thomas Gunkel
Market Director Broadcast
Skyline Communications

Video: How IP is Revolutionising Sports Video Production

IP production is very important for sports streaming, including esports, where its flexibility is a big plus over SDI infrastructure. This panel discusses NDI and SMPTE ST 2110.

Esports, in particular, uses many cameras – point-of-view cameras, PC outputs and the normal camera positions needed to make a good show – so a technology like NDI really helps keep costs down, since every SDI port is expensive and takes space, and it allows computer devices to ‘natively’ send video without specific hardware.

NDI is an IP specification from NewTek (now owned by VizRT) which can be licensed for free and is included in products from Ross, VizRT, Panasonic, OBS, Epiphan and hundreds more. It allows ultra-low-latency video at 100Mbps or low-latency video at 8Mbps.

The panel discusses the right place and use for NDI compared to SDI. In some places, such as stadia, networking is more convenient, but if you only have a short distance to run, SDI can often be the best plan. Similarly, until NDI version 4, which includes timing synchronisation, ST 2110 has been a better bet for synchronised video for ISO recordings.

For many events which combine many cameras with computer outputs – whether those computers are playing YouTube, running Skype or something else – removing the need to convert to SDI allows the production to be much more flexible.

The panel finishes by discussing audio and taking questions from the floor, covering issues such as embedded alpha, further ST 2110 considerations and UHD workflows.

Watch now!

Speakers

Philip Nelson
President,
Nelco Media
Mark East
Chief Problem Solver,
090 Media
Victor Borachuk
Director/Executive Producer
JupiterReturn
Jack Lavey
Operations Technician,
FloSports
Jon Raidel
Technical Operations Manager,
NFL Networks

Video: Bandwidth Prediction in Low-Latency Chunked Streaming

How can we overcome one of the last big problems in making CMAF generally available: making ABR work properly?

ABR, or Adaptive Bitrate, is a technique which allows a video player to choose which bitrate of video to download from a menu of several options. Typically, the highest bitrate will have the highest quality and/or resolution, with the smallest files being low resolution.

The reason a player needs the flexibility to choose the bitrate of the video is mainly changing network conditions. If someone else on your network starts watching video, you may no longer be able to download video quickly enough to keep watching in full-quality HD and may need to switch down. If they stop, you want your player to switch up again to make the most of the bandwidth available.

Traditionally this is done fairly simply by measuring how long each chunk of video takes to download. Simply put, when you download a file, it arrives as quickly as the network can deliver it, so measuring how long each video chunk takes to reach you gives an idea of how much bandwidth is available; if it arrives slowly, you know you are close to running out. But in low-latency streaming you receive video as quickly as it is produced, so there is very little difference in download times and this breaks the ABR estimation.
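
The sketch below shows the traditional calculation and why it collapses for low-latency chunked transfer: when a chunk is pushed to the player as it is encoded, the download time is pinned to roughly the chunk’s duration, so the “measured” throughput just mirrors the encoding bitrate rather than the network capacity. The numbers are illustrative only.

```python
def measured_throughput_mbps(chunk_bytes: int, download_time_s: float) -> float:
    """Classic segment-based estimate: bits received divided by time taken."""
    return chunk_bytes * 8 / download_time_s / 1e6

# Regular segment: 2 s of ~4 Mbps video fetched from a CDN over a 20 Mbps link
# arrives in about 0.4 s, so the estimate reflects the network capacity.
print(measured_throughput_mbps(chunk_bytes=1_000_000, download_time_s=0.4))  # ~20 Mbps

# Low-latency CMAF chunk: the same data trickles in as it is encoded, so the
# download takes ~2 s however fast the link is, and the estimate degenerates
# to the encoding bitrate (~4 Mbps) - no spare headroom is visible.
print(measured_throughput_mbps(chunk_bytes=1_000_000, download_time_s=2.0))  # ~4 Mbps
```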

Making ABR work for low latency is the topic covered by Ali in this talk at Mile High Video 2019, where he presents some of the findings from his recently published paper, co-authored with, among others, Bitmovin’s Christian Timmerer, which won the DASH-IF Excellence in DASH award.

He starts by explaining how players currently behave with low-latency ABR showing how they miss out on changing to higher/lower renditions. Then he looks at the differences on the server and for the player between non-low-latency and low-latency streams. This lays the foundation to discuss ACTE – ABR for Chunked Transfer Encoding.

ACTE is a method of analysing bandwidth with the assumption that some chunks will be delivered as fast as the network allows and some won’t be. The trick is detecting which chunks actually show the network speed; Ali explains how this is done and shows the results of their evaluation.
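
The details of how ACTE identifies those chunks are in the paper, but the core idea can be sketched roughly: a chunk whose download time is close to its media duration was paced by the encoder and says nothing about capacity, whereas a chunk that arrived much faster than real time did reveal the network speed and can feed the bandwidth estimate. The sketch below is a loose illustration of that filtering step, not the paper’s algorithm; the threshold and sample data are invented for the example.

```python
def usable_bandwidth_samples(chunks, ratio_threshold=0.8):
    """Keep throughput samples only from chunks delivered faster than real time.

    chunks: list of (bytes, download_time_s, media_duration_s) tuples.
    A chunk whose download time is close to its media duration was paced by
    the encoder, so it is discarded; the rest yield throughput samples in bps.
    The 0.8 threshold is an arbitrary value chosen for this sketch.
    """
    samples = []
    for size_bytes, dl_time, duration in chunks:
        if dl_time < ratio_threshold * duration:   # delivered at line speed
            samples.append(size_bytes * 8 / dl_time)
    return samples

# Invented chunk measurements: (bytes, download time, media duration)
chunks = [
    (125_000, 0.05, 0.50),   # burst delivery -> reveals ~20 Mbps
    (125_000, 0.49, 0.50),   # paced with the encoder -> discarded
    (125_000, 0.06, 0.50),   # burst delivery -> reveals ~16.7 Mbps
]

samples = usable_bandwidth_samples(chunks)
if samples:
    print(f"bandwidth estimate: {sum(samples) / len(samples) / 1e6:.1f} Mbps")
```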

Watch now!

Speaker

Ali C. Begen
Technical Consultant and
Computer Science Professor