Video: Keeping Time with PTP

Different from his talk of the same name that we covered last week, Mike Waidson from Telestream explains the fundamentals of PTP, joined by Leigh Whitcomb from Imagine Communications and Robert Welch from Arista. Very few PTP talks include a live BMCA quiz and, with more time than the IP Showcase talks allow, this is a well-paced, deep look into the basics.

Mike starts by reviewing how timekeeping has become ever more accurate, with atomic clocks now the typical reference. In the TV domain, analogue video used black and burst signals, whose subcarrier carried frequency information and allowed equipment to frequency-lock and keep in sync with other signals. NTP has allowed computers and routers on IP networks to keep lock, achieving sub-millisecond synchronisation over LANs. Now we have IEEE 1588 PTP, which harnesses hardware timestamping to deliver sub-microsecond precision.

Traditionally an SPG would create many different synchronising signals, distributed by DAs. With PTP, however, the idea is to put a single time signal onto the network (alongside older signals if necessary). The important thing to remember is that PTP both sends data to and receives data from the endpoints. GPS comprises around 31 active satellites, of which only four are needed for a lock, but other systems such as the Russian GLONASS, the Chinese BeiDou navigation system or the European Galileo can also be used, sometimes in conjunction with each other to improve locking speed or give resilience.

Mike and his co-hosts give an overview of the standards that make all this possible, starting with the PTP standard itself, IEEE 1588-2019, which is built upon by SMPTE ST 2059. The latter is a pair of standards that together ensure broadcast devices can usefully harness PTP, which is a general, cross-industry standard, and trace all signals back to a single point in time in 1970. Whilst this may seem extreme, the benefit is that if we know all possible types of signal were in phase at this one point in time, we can extrapolate how each signal should be phased now and use that information to synchronise the system. Coming to PTP, we hear, are standardised ways to monitor it plus additional security around the standard.
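To make the epoch idea concrete, here is a minimal sketch, assuming a PTP time expressed as seconds since the SMPTE epoch, of how a device could work out where it is in a frame cycle and when the next alignment point falls. The function names and example value are illustrative, not from the talk:

```python
import math

def frame_phase(ptp_time_s: float, frame_rate: float) -> float:
    """How far through the current frame we are (0.0-1.0), assuming all
    signals were aligned at the SMPTE epoch, 1970-01-01 00:00:00 TAI."""
    frames_since_epoch = ptp_time_s * frame_rate
    return frames_since_epoch - math.floor(frames_since_epoch)

def next_alignment_point(ptp_time_s: float, frame_rate: float) -> float:
    """PTP time, in seconds, of the next frame boundary for this rate."""
    return math.ceil(ptp_time_s * frame_rate) / frame_rate

# Hypothetical PTP time; 59.94 Hz is expressed exactly as 60000/1001
t = 1_600_000_000.123456
print(frame_phase(t, 60000 / 1001), next_alignment_point(t, 60000 / 1001))
```

Because every device derives its phase from the same epoch, two devices that have never exchanged anything but PTP time will land on the same frame boundaries.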

The next section looks at the types of grandmaster and the fact that each clock works in its own domain. Typically, your whole system will be in the same domain, but if you have incompatible situations such as older Dante networks, or if you want a testing environment, you can use domains to separate your equipment. The default domain, as defined by SMPTE ST 2059-2, is 127.

Mike then looks at the different PTP message types: Announce, Sync & Follow Up, Delay Request, Delay Response and Management messages (broadcast information, leap seconds, time zone etc.). He then brings some of these up in Wireshark and talks us through the structure and what can be found within.
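As a rough guide to what that Wireshark view shows, here is a minimal sketch of decoding the 34-byte common header that starts every PTP message. The field layout follows IEEE 1588, but the function and dictionary keys are our own illustration:

```python
import struct

MESSAGE_TYPES = {0x0: "Sync", 0x1: "Delay_Req", 0x8: "Follow_Up",
                 0x9: "Delay_Resp", 0xB: "Announce", 0xD: "Management"}

def parse_ptp_header(packet: bytes) -> dict:
    """Decode the 34-byte PTP common header from a UDP payload."""
    (type_byte, version, length, domain, _res, flags, correction,
     _res2, clock_id, port, seq, _control, _log_interval) = \
        struct.unpack("!BBHBBHq4s8sHHBb", packet[:34])
    return {
        "messageType": MESSAGE_TYPES.get(type_byte & 0x0F, "other"),
        "versionPTP": version & 0x0F,
        "messageLength": length,
        "domainNumber": domain,                     # 127 is the ST 2059-2 default
        "correctionField_ns": correction / 2**16,   # stored in 2^-16 ns units
        "sourcePortIdentity": (clock_id.hex(), port),
        "sequenceId": seq,
    }

# Example: a minimal Announce-like header (most fields zeroed)
hdr = bytes([0x0B, 0x02]) + (34).to_bytes(2, "big") + bytes([127]) + bytes(29)
print(parse_ptp_header(hdr))
```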

The most original part of the talk is the live walkthrough of three different scenarios where Leigh and Robert talk through their thinking on which clock will become grandmaster and why. This comes down to their understanding of the order of precedence of the metrics: the manually-allotted priority first, then the class of clock, clock accuracy and other values. One value worth remembering is that if your clock is locked to GPS it will have a clock class of 6, but if it then loses lock, it will become 7.
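That precedence order can be summed up in a few lines. Below is a minimal sketch of the comparison at the heart of the BMCA, ignoring tie-breaks such as stepsRemoved; the class and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class AnnouncedClock:
    priority1: int         # manually allotted, lower wins, compared first
    clock_class: int       # 6 = locked to GPS, 7 = holdover after losing lock
    clock_accuracy: int    # encoded accuracy, lower is better
    variance: int          # offsetScaledLogVariance, lower is better
    priority2: int         # second manual tie-breaker
    clock_identity: bytes  # EUI-64, the final tie-breaker

    def bmca_key(self):
        # Fields in order of precedence; min() stops at the first difference
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.variance, self.priority2, self.clock_identity)

def best_master(candidates):
    return min(candidates, key=AnnouncedClock.bmca_key)

# The GPS-locked clock (class 6) beats the identical one in holdover (class 7)
a = AnnouncedClock(128, 6, 0x21, 0x4E5D, 128, b"\x00" * 8)
b = AnnouncedClock(128, 7, 0x21, 0x4E5D, 128, b"\x00" * 7 + b"\x01")
print(best_master([a, b]) is a)  # -> True
```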

PTP talks are not complete without an explanation of the sync message exchange needed to actually determine the time (and the relative delays needed to compute it), as well as the secondary clock types, boundary and transparent. Boundary clocks take on much of the two-way traffic in PTP, protecting the grandmasters from having to speak directly to, potentially, thousands of devices. Transparent switches simply update the timing messages with the delay the message experienced moving through the switch. Whilst this is useful in keeping the timing accurate, it provides no protection for the grandmasters.
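The arithmetic behind that exchange is compact enough to show. Assuming a symmetric network path, the four timestamps from the Sync/Delay Request round trip give both the follower's clock offset and the mean path delay; the example numbers here are ours, purely for illustration:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """
    t1: grandmaster sends Sync        (grandmaster clock)
    t2: follower receives Sync        (follower clock)
    t3: follower sends Delay_Req      (follower clock)
    t4: grandmaster receives Delay_Req (grandmaster clock)
    Assumes the path delay is the same in both directions.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2   # follower clock error vs grandmaster
    delay  = ((t2 - t1) + (t4 - t3)) / 2   # one-way mean path delay
    return offset, delay

# Example: follower runs 0.5 us fast; network delay is 10 us each way
print(ptp_offset_and_delay(0.0, 10.5e-6, 50.0e-6, 59.5e-6))
# -> approximately (5e-07, 1e-05)
```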

Before the Q&A, the team explain the difference between operating in unicast and multicast, prioritising PTP traffic using Differentiated Services and adding redundancy to the PTP system.
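As an idea of what that prioritisation looks like in practice, this is a minimal sketch of an application marking its PTP packets with a DSCP value so switches can queue them ahead of other traffic. EF is a common choice but your network's policy may differ, and the payload here is a placeholder:

```python
import socket

# EF (Expedited Forwarding, DSCP 46) occupies the top six bits of the
# IP TOS byte, hence the shift by two.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

payload = b"\x00" * 44                      # placeholder message body
sock.sendto(payload, ("224.0.1.129", 319))  # PTP primary multicast group, event port
```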

Watch now!
Free registration required
Speakers

Robert Welch
Technical Solutions Lead,
Arista
Leigh Whitcomb
Principal Engineer,
Imagine Communications
Mike Waidson
Application Engineer,
Telestream

Video: AV1 Commercial Readiness Panel

With two years of development and deployments under its belt, AV1 is still emerging on to the codec scene. That's not to say that it's not in use billions of times a year, but compared to the incumbents, there's still some distance to go. AV1 has been known as very slow to encode and computationally impractical, but today's panel is here to say that's old news and AV1 is now a real-time codec.

Brought together by Jill Boyce of Intel, we hear from Amazon, Facebook, Google, Twitch, Netflix and Tencent in this panel. Intel and Netflix have been collaborating on the SVT-AV1 encoder and decoder framework for two years. SVT-AV1's goal was to be a high-performance, scalable encoder and decoder, using parallelisation to achieve this aim.

Yueshi Shen from Amazon and Twitch is first to present, explaining that for them, AV1 is a key technology in the 5G era. They have put together a 1440p, 120fps gaming demo which has been enabled by AV1. They feel that this resolution and framerate will be a critical feature for Twitch in the next two years as computer games increasingly extend beyond typical broadcast boundaries. Another key aim is achieving an end-to-end latency of 1.5 seconds which, he says, will partly be achieved using AV1. His company has been working with SoC vendors to accelerate the adoption of AV1 decoders, as their proliferation is key to a successful transition to AV1 across the board. Simultaneously, AWS has been adding AV1 capability to MediaConvert and is planning to continue AV1 integration in other turnkey content solutions.

David Ronca from Facebook says that AV1 gives them the opportunity to reduce video egress bandwidth whilst also helping increase quality. For them, SVT-AV1 has brought AV1 into the practical domain: they are able to run AV1 payloads in production as well as launch a large-scale decoder test across a large set of mobile devices.

Matt Frost presents Google Chrome and Android's point of view on AV1. Early adopters, Google have been streaming partly using AV1 since 2018, in resolutions small and large, and have recently added support in Duo, their Android video-conferencing application. As with all such services, the pandemic has shown how important they can be and how important it is that they can scale. Their move to AV1 streaming has had favourable results, which is the start of the return on their investment in the technology.

Google's involvement with the Alliance for Open Media (AOM), along with the other founding companies, was born out of a belief that, in order to achieve the scale needed for video applications, the only sensible future was with cheap-to-deploy codecs, so it made a lot of sense to invest time in the royalty-free AV1.

Andrey Norkin from Netflix explains that they believe AV1 will bring a better experience to their members. Netflix has been using AV1 in streaming since February 2020 on Android devices using a software decoder, which has allowed them to get better quality at lower bitrates than VP9; they are now testing AV1 on other platforms. Intent on using only 10-bit encodes across all devices, Andrey explains that this mode gives the best efficiency. As well as being founding members of AoM, Netflix has also developed AVIF, an image format based on AV1. According to Andrey, it performs better than most other formats out there and, as AVIF works better with text on pictures than other formats, Netflix intend to use it in their UI.

Tencent's Shan Liu explains that they are part of the AoM because video compression is key for many businesses in Tencent's vast empire. Tencent Cloud has already launched an AV1 transcoding service and supports AV1 in VoD.

The panel discusses low-latency use of AV1, with Dave Ronca explaining that, with the performance improvements of the encoder and decoders alongside the ability to tune the decode speed of AV1 by turning certain tools on and off, real-time AV1 is now possible. Amazon is paying attention to low-end, sub-$300 handsets, according to Yueshi, as they believe this is where the most 5G growth will occur, and he cites recent tests showing AV1 decoding using only 3.5 cores on a mobile SoC as encouraging, given that 8 or more is standard. They have now moved on to researching battery life.

The panel finishes with a Q&A touching on encoding speed, the VVC and LCEVC codecs, the Sisvel AV1 patent pool, the next ramp-up in deployments and the roadmap for SVT-AV1.

Watch now!
Please note: After free registration, this video is located towards the bottom of the page
Speakers

Yueshi Shen
Principal Engineer,
AWS & Twitch
David Ronca
Video Infrastructure Team,
Facebook
Matt Frost
Product Manager, Chrome Media Technologies,
Google
Andrey Norkin
Emerging Technologies Team,
Netflix
Dr Shan Liu
Chief Scientist & General Manager,
Tencent Media Lab
Jill Boyce
Intel

Video: IPMX – The Need for a New ProAV Standard

IPMX is an IP specification for interoperable Pro AV equipment. As the broadcast industry moves towards increasing IP deployments based on SMPTE ST 2110 and AMWA's NMOS protocols, there's been a recognition that the Pro AV market needs to do many of the same things broadcast wants to do; moreover, Pro AV has no open standard to achieve this transformation. Whilst there are a number of proprietary alliances which enable widespread use of a single chip or software core, this interoperability comes at a cost and is ultimately underpinned by one company or a group of companies.

Dave Chiappini from Matrox discusses the work of the AIMS Pro AV working group with Wes Simpson from the VSF. Dave underlines that this is a pull to unify the Pro AV industry, helping people avoid investing over and over again in reinventing protocols or reworking their products to interoperate. He feels that 'open standards help propel markets forward', adding energy and avoiding vendor lock-in. This is one reason for the inclusion of NMOS, allowing any vendor to make a control system by working to the same open specification, opening up the market to both small and large companies.

Dave is the first to acknowledge that the Pro AV market's needs are different to broadcast's, and explains that they have calibrated settings, added some, and 'carefully relaxed' parts of the standards. The aim is a specification which allows a single piece of equipment, should the vendor wish to design it this way, to be used in either an IPMX or an ST 2110 system. He explains that relaxing some aspects of the ST 2110 ecosystem helps simplify implementation, which in turn reduces cost.

One key relaxation has been in PTP. A lot of time and effort goes into making PTP work properly within a SMPTE ST 2110 system. Having to do this at an event whilst setting up in a short timespan is not helpful to anyone and, elaborates Dave, a point-to-point video link simply doesn't need high-precision timing. IPMX, therefore, is lenient in its need for PTP: it will use it when it can, but will gracefully reduce accuracy and, when there is no grandmaster, will still continue to function.
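The behaviour Dave describes amounts to a small state machine. This is a hypothetical sketch of that decision logic, not anything taken from the IPMX specification itself:

```python
from enum import Enum, auto

class TimingState(Enum):
    LOCKED = auto()    # grandmaster visible: full PTP accuracy
    HOLDOVER = auto()  # grandmaster lost: free-run on the last correction
    FREERUN = auto()   # never had a grandmaster: internal clock only

def timing_state(gm_visible: bool, ever_locked: bool) -> TimingState:
    """Use PTP when a grandmaster is present, degrade gracefully when it
    disappears, and keep functioning even if one never appears."""
    if gm_visible:
        return TimingState.LOCKED
    return TimingState.HOLDOVER if ever_locked else TimingState.FREERUN
```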

Another difference in the Pro AV market is the need for compression. Whilst there are times when zero compression is needed in both Pro AV and broadcast, Pro AV needs the ability to throw some preview video out to an iPad or similar. This isn't going to work with JPEG XS, the preferred 'minimal compression' codec for IPMX, so a system for including H.264 or H.265 is being investigated, which could have knock-on benefits for broadcast.

HDMI is essential for a Pro AV solution and needs its own treatment. Unlike SDI, it has many resolutions and frame rates. It also has HDCP, so AIMS is now working with the DCP on creating a method of carrying HDCP over ST 2110; it's hoped that this work will also help broadcast use cases. TVs are already replacing SDI monitors, and such interoperability with HDMI should bring down the cost of monitoring for non-picture-critical environments.

Watch now!
Speakers

David Chiappini
Chair, Pro AV Working Group, AIMS
Executive Vice President, Research & Development,
Matrox Graphics Inc.
Wes Simpson
RIST AG Co-Chair, VSF
President & Founder, LearnIPvideo.com

Video: Real-time AV1 in WebRTC

AV1 seems to be shaking off its reputation for slow encoding; it's now only around 2x slower than HEVC. How practical, then, is it to put AV1 into a real-time codec aiming for sub-second latency? This is exactly what the Alliance for Open Media is working on, as parts of AV1 are perfectly suited to the use case.

Dr Alex from CoSMo Software took the podium at the Alliance for Open Media Research Symposium to lay out the whys and wherefores of updating WebRTC to deliver AV1. He started by outlining the different requirements of real-time versus VoD. With non-live content, encoding time is often unrestricted, allowing complex encoding methods to be used to achieve lower bitrates. Even live CMAF streams aiming for a relatively low 3-second latency have time enough for much more complex encoding than real-time. Encoding, ingest, storage and delivery can all be separated into different parts of the workflow for VoD, whereas real-time is forced to collapse logical blocks down as much as possible. Unsurprisingly, Dr Alex identifies latency as the most important driver in the WebRTC use case.

When streaming, ABR isn't quite as simple as with chunked formats: the different bitrate streams need to be generated at the encoder to avoid any transcoding delays. There are two ways of delivering these streams. One is to deliver them as separate streams; the other is to deliver only one, layered stream. The latter method is known as Scalable Video Coding (SVC), which sends a base layer containing a low-resolution version of the video that can be decoded on its own. Within that stream is also the information which builds on top of that video to create a higher-resolution version of the same stream. You can have multiple layers and hence provide information for 3, 4 or more streams.
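To illustrate the layering, here is a minimal sketch with hypothetical resolutions and bitrates; each prefix of the layer list forms an independently decodable stream:

```python
from dataclasses import dataclass

@dataclass
class SvcLayer:
    spatial_id: int  # 0 = base layer, decodable on its own
    width: int
    height: int
    kbps: int        # bitrate of this layer's data alone

# Hypothetical three-layer stream: each enhancement layer carries only
# the extra information needed on top of the layers below it.
LAYERS = [SvcLayer(0, 640, 360, 500),
          SvcLayer(1, 1280, 720, 1200),
          SvcLayer(2, 1920, 1080, 2300)]

# Every prefix of the layer list is an independently decodable stream:
for top in range(len(LAYERS)):
    stack = LAYERS[:top + 1]
    total = sum(layer.kbps for layer in stack)
    print(f"{stack[-1].width}x{stack[-1].height} for {total} kbps total")
```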

Managing which streams get to the decoder is done through an SFU (Selective Forwarding Unit), a server to which WebRTC clients connect to receive just the stream, or the parts of a stream, they need for their current bandwidth capability. It's important to remember that, compared to video conferencing solutions based on WebRTC, streaming using WebRTC scales linearly. Whilst it's difficult to hold a meeting with 50 people in a room, it's possible to optimise what video is sent to everyone by showing only the last 5 speakers in full resolution and the others as thumbnails. Such optimisations are not available for video distribution; rather, SFUs and media servers need to be scaled and cascaded. This should be simple, but testing can be difficult; it's nonetheless necessary to ensure quality and network resilience at scale.
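The forwarding decision itself fits in a few lines. This sketch, reusing the hypothetical per-layer bitrates from above, picks the highest layer stack whose total bitrate fits a subscriber's measured bandwidth:

```python
# Cumulative per-layer costs from the sketch above: (spatial_id, kbps)
LAYERS_KBPS = [(0, 500), (1, 1200), (2, 2300)]

def select_top_layer(layers, available_kbps):
    """Forward the highest layer stack whose total bitrate fits the
    subscriber's bandwidth; None means even the base layer won't fit."""
    best, total = None, 0
    for spatial_id, kbps in layers:  # layers ordered base -> highest
        total += kbps
        if total > available_kbps:
            break
        best = spatial_id
    return best

print(select_top_layer(LAYERS_KBPS, 2000))  # -> 1 (base + first enhancement)
```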

Cisco have already demonstrated the first real-time AV1-based WebRTC system, though without SVC support. Work is ongoing to deliver improvements to the RTP encapsulation of AV1 in WebRTC, for instance providing Decoding Target Information, which embeds information about frames without the need to decode the video itself. This information explains how important each frame is and how it relates to other frames. Such metadata can be used by the SFU or the decoder to understand which frames to drop and which to send or decode.
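To give a feel for how such metadata might be used, this hypothetical, heavily simplified sketch filters frames for a chosen decode target and drops anything whose dependencies were themselves dropped; the real RTP dependency descriptor carries richer information than this:

```python
from dataclasses import dataclass

@dataclass
class FrameInfo:
    """Per-frame metadata of the kind Decoding Target Information
    provides: readable without decoding the video itself."""
    frame_id: int
    spatial_id: int
    temporal_id: int
    depends_on: tuple  # frame_ids this frame needs to decode

def forwardable(frames, max_spatial, max_temporal):
    """SFU-style filter: keep frames within the chosen decode target,
    then drop any frame whose dependencies were themselves dropped."""
    kept = set()
    for f in frames:  # assumed to be in decode order
        in_target = (f.spatial_id <= max_spatial
                     and f.temporal_id <= max_temporal)
        if in_target and all(d in kept for d in f.depends_on):
            kept.add(f.frame_id)
            yield f

frames = [FrameInfo(1, 0, 0, ()), FrameInfo(2, 1, 0, (1,)),
          FrameInfo(3, 0, 1, (1,)), FrameInfo(4, 1, 1, (2, 3))]
# Target: base spatial layer only, all temporal layers -> frames 1 and 3
print([f.frame_id for f in forwardable(frames, 0, 1)])
```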

Watch now!
Download the slides
Speaker

Dr Alex Gouaillard
Video Codec Working Group – Real-time subgroup, Alliance for Open Media
Founder, Director & CEO, CoSMo Software Consulting Pte. Ltd.
Co-founder & CTO, Millicast