Video: Progress Update for the ST 2110 WAN VSF Activity Group

2110 Over WAN Update

Is SMPTE ST 2110 suitable for inter-site connectivity over the WAN? ST 2110 is moving past the early adopter phase, with more and more installations and OB vans bringing 2110 into daily use, but today each site works independently. What if we could maintain a 2110 environment between sites? There are a number of challenges still to be overcome, and moving a large number of essence flows long distances, and between PTP time domains, is one of them.

Nevion’s Andy Rayner is chair of the VSF Activity Group looking into transporting SMPTE ST 2110 over the WAN and is here to give an update on the work in progress, which started 18 months ago. The presentation looks at how to move media between locations, which has been the primary focus to date. It then discusses how control over which media are shared will be handled, as this is a new aspect of the work. Andy starts by outlining the protection offered in the scheme, which supports both ST 2022-7 and FEC, then explains that though FEC is valuable for single links where 2022-7 isn’t viable, only some of the possible ST 2022-5 FEC configurations are supported, in part to keep latency low.

The headline to carrying 2110 over the WAN is that it will be done over a trunk. GRE, originally developed by Cisco, is a widely used trunking technology. Trunking, also known as tunnelling, is a technique for carrying ‘private’ traffic over a network such that a device sending into the trunk doesn’t see any of the infrastructure between the entrance and the exit. It allows, for instance, IPv6 traffic to be carried over IPv4 equipment, where the v4 equipment has no idea about the v6 data since it’s been wrapped in a v4 envelope. Similarly, the IPv6 equipment has no idea that its data is being wrapped and carried by routers which don’t understand IPv6, since the wrapping and unwrapping of the data is done transparently at the handoff.
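As a quick illustration of the encapsulation idea, here is a minimal sketch using the Python scapy library (the addresses are hypothetical): an IPv6 packet is placed inside an IPv4/GRE envelope, and the transit routers only ever parse the outer IPv4 header.

    # Minimal sketch of GRE encapsulation with scapy (hypothetical addresses).
    from scapy.all import IP, IPv6, GRE, Raw

    # The 'inner' packet is what the endpoints care about...
    inner = IPv6(src="2001:db8::1", dst="2001:db8::2") / Raw(b"media payload")
    # ...while the outer IPv4/GRE envelope is all the transit network sees.
    outer = IP(src="192.0.2.1", dst="198.51.100.1") / GRE() / inner
    outer.show()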

In the context of SMPTE ST 2110, a trunk allows one port to be used to create a single connection to the destination, yet carry many individual media streams within. This has the big benefit of simplifying the inter-site connectivity at the IT level, but importantly also means that the single connection is quite high bandwidth. When FEC is applied to a connection, the latency introduced increases as the bit rate reduces. Since ST 2110 carries audio and metadata separately, an FEC-protected stream would have variable latency depending on the type of traffic. Bundling them into one large data stream allows FEC to be applied once and all traffic then suffers the same latency increase. The third reason is to ensure all essences take the same network path. If each connection were separate, it would be possible for some to be routed on a physically different route and therefore be subject to a different latency.
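A rough back-of-the-envelope calculation shows why FEC latency grows as the bit rate falls. Matrix-based FEC such as ST 2022-5 works on a block of L columns by D rows of RTP packets, and the decoder must buffer a whole block before it can correct errors. The matrix and packet sizes below are illustrative assumptions, not figures from the talk.

    # Illustrative Python calculation: time to fill one L x D FEC matrix.
    def fec_buffer_latency_ms(bitrate_bps, l_cols=10, d_rows=10, payload_bytes=1400):
        # The decoder must receive L*D packets before it can correct a loss.
        bits_per_block = l_cols * d_rows * payload_bytes * 8
        return bits_per_block / bitrate_bps * 1000

    print(fec_buffer_latency_ms(2.2e9))  # ~0.5 ms for a ~2.2 Gb/s UHD video flow
    print(fec_buffer_latency_ms(2e6))    # ~560 ms for a ~2 Mb/s audio flow

Applying FEC once to the bundled trunk keeps every essence at the video-like, sub-millisecond end of that range.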

Entering the last part of the talk, Andy switches gears to discuss how site A can control streams in site B. The answer is that it doesn’t ‘control’; rather, there is the concept of requesting streams. Site A will declare what is available and site B can state what it would like to connect to and when. In response, site A can accept and promise to have those sources available to the WAN interface at the right time. When the time comes, they are released over the WAN. This protects the WAN connectivity from being filled with media which isn’t actually being used. These exchanges are mediated and carried out with NMOS IS-04 and IS-05.
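To give a flavour of the mechanics, a scheduled IS-05 connection is a simple REST exchange: the controller PATCHes the receiver’s staged endpoint with the chosen sender and an absolute activation time. The IDs and timestamp below are placeholders.

    PATCH /x-nmos/connection/v1.0/single/receivers/{receiverId}/staged
    {
      "sender_id": "9a1b2c3d-...",
      "master_enable": true,
      "activation": {
        "mode": "activate_scheduled_absolute",
        "requested_time": "1600000000:0"
      }
    }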

Watch now!
Speakers

Andy Rayner
Chief Technologist, Nevion,
Chair, WAN IP Activity Group, VSF
Moderator: Wes Simpson
Founder, LearnIPVideo.com
Co-chair RIST Activity Group, VSF

Video: LL-HLS Discussion with THEO, Wowza & Fastly

Roundtable discussion with Fastly, THEO and Wowza

iOS 14 has finally started to hit devices and, with it, LL-HLS is now available on millions of devices. Low-Latency HLS is Apple’s latest evolution of HLS, a streaming protocol which has been widely used for over a decade. Its typical latency has gradually come down from 60 seconds to between 6 and 15 seconds today. There are still a lot of companies that want to bring that down further, and LL-HLS is Apple’s answer to those who want to operate at around 2-4 seconds of total latency, which matches or beats traditional broadcast.

LL-HLS was introduced last year and had a rocky reception. It came after a community-driven low-latency scheme called LHLS and after MPEG DASH announced CMAF’s ability to hit the same 2-4 second window. Famously, this original context, as well as the technical questions over the new proposal, was summed up well in Phil Cluff’s blog post, which was soon followed by a series of talks trying to make sense of LL-HLS ahead of implementation: the Apple video introducing LL-HLS in its first form, and the reactions from Al Shenker of CBS Interactive, Marina Kalkanis of M2A Media and Akamai’s Will Law, which also nicely sums up the other two contenders. Apple have now changed some of the spec in response to their own further research and external feedback. The changes were received positively and are summed up in THEO CTO Pieter-Jan Speelmans’ recent webinar bringing us the updates.

In this panel, Pieter is joined by Chris Buckley from Fastly Inc. and Wowza’s Jamie Sherry discussing pressing LL-HLS into action. Moderator Alison Kolodny hosts the talk which covers a wide variety of points.

“Wide adoption” is seen as the day-1 benefit. If you support LL-HLS then you know you’re able to hit a large number of iPads, iPhones and Macs. Apple typically sees a high percentage of the userbase upgrade fairly swiftly, easily passing 75% of devices updated within four months of release. The panel then discusses how implementation has become easier given the change in protocol whereby HTTP/2’s push technology was dropped; push would have made typical CDN techniques, like hosting the playlists separately from the media, impossible. Overall, CDN implementation has become more practical since, with preload hints, a CDN can hold many, many connections into it, all waiting for a certain chunk, and collapse them down to a single link to the origin.
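For context, an LL-HLS media playlist advertises short parts and a preload hint along these lines (an illustrative excerpt with made-up URIs and durations):

    #EXTM3U
    #EXT-X-VERSION:9
    #EXT-X-TARGETDURATION:4
    #EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0
    #EXT-X-PART-INF:PART-TARGET=0.334
    #EXTINF:4.0,
    segment100.mp4
    #EXT-X-PART:DURATION=0.334,URI="segment101.part1.mp4"
    #EXT-X-PART:DURATION=0.334,URI="segment101.part2.mp4"
    #EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment101.part3.mp4"

Every player blocked waiting for segment101.part3.mp4 can be parked at the CDN edge and served from a single upstream fetch the moment the part arrives at the origin.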

One aspect of implementation which has improved, we hear from Pieter-Jan, is building effective Adaptive Bit Rate (ABR) switching. With low-latency protocols, you are so close to live that it becomes very hard to download a chunk of video ahead of time and measure the download speed to see if it arrived quicker than realtime; if it did, you’d infer there was spare bit rate. LL-HLS’s use of rendition reports, however, makes that a lot easier. Pieter-Jan also points out that SSAI (server-side ad insertion) is easier with rendition reports.
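Rendition reports appear at the end of each media playlist, roughly as below (again illustrative): each one tells the player the latest media sequence number and part of a sibling rendition, so a bit rate switch can jump straight to the live edge of the new rendition without an extra round trip to discover it.

    #EXT-X-RENDITION-REPORT:URI="../1M/media.m3u8",LAST-MSN=101,LAST-PART=2
    #EXT-X-RENDITION-REPORT:URI="../4M/media.m3u8",LAST-MSN=101,LAST-PART=2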

The rest of the discussion covers device support for LL-HLS, subtitles workflows, the benefits of TLS 1.3 being recommended, and low-latency business cases.

Watch now!
The webinar is free to watch, on demand, in exchange for your email details. The link is emailed to you immediately.
Speaker

Chris Buckley
Senior Sales Engineer,
Fastly Inc.
Pieter-Jan Speelmans
CTO,
THEO Technologies
Jamie Sherry
Senior Product Manager,
Wowza
Moderator: Alison Kolodny
Senior Product Manager of Media Services,
Frame.io

Video: A Snapshot of NMOS: Just the Facts, Please.

NMOS is the family of open specifications which lets equipment from multiple vendors co-operate on a broadcast network, particularly ST 2110, announcing new devices and configuring them. It acts both as a database of what’s on the network and as a way of easily describing settings to be shared between systems. New ST 2110 systems are often specified to be NMOS IS-04 and IS-05 capable.

NMOS IS-04 is the specification which defines the discovery and registration of devices, while IS-05 describes the connection management of those devices. It’s very hard to run a SMPTE ST 2110 system without these, or a proprietary protocol which exchanges the same information; it’s simply not practical to manage these tasks manually at anything more than the smallest scale.
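In practice, IS-04 registration is a simple REST exchange: each node POSTs its resources to the registry and then keeps them alive with regular heartbeats. The sketch below uses a made-up node ID and elides most of the resource fields.

    POST /x-nmos/registration/v1.3/resource
    { "type": "node", "data": { "id": "3b8c5d2e-...", "label": "Camera 1", ... } }

    POST /x-nmos/registration/v1.3/health/nodes/3b8c5d2e-...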

John Mailhot from Imagine Communications delivers a concise summary of these technologies, which may be new to you. He explains that an SDP file is generated for each stream and reviews how you would read one. John also explains that the stack is open source, with the aim of promoting interoperability.
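To give an idea of what John walks through, an SDP for an ST 2110-20 video stream looks something like this (an illustrative example with made-up addresses and PTP clock identity):

    v=0
    o=- 123456 123457 IN IP4 192.0.2.10
    s=Camera 1 Video
    t=0 0
    m=video 50000 RTP/AVP 96
    c=IN IP4 239.100.1.1/64
    a=rtpmap:96 raw/90000
    a=fmtp:96 sampling=YCbCr-4:2:2; width=1920; height=1080; exactframerate=25; depth=10; TCS=SDR; colorimetry=BT709; PM=2110GPM; SSN=ST2110-20:2017; TP=2110TPN
    a=ts-refclk:ptp=IEEE1588-2008:08-00-11-FF-FE-22-04-AF:127
    a=mediaclk:direct=0

The fmtp line carries the video format, while the ts-refclk and mediaclk lines tie the stream to its PTP timing domain.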

At the end of this short talk, John looks at IS-04 and IS-05 in terms of what it takes to implement them in practice.

Watch now!
Speaker

John Mailhot
Systems Architect, IP Convergence,
Imagine Communications

Video: The Video Codec Landscape 2020

2020 has brought a bevy of new codecs from MPEG. These codecs represent a new recognition that the right codec is the one that fits your hardware and your business case. We have the natural evolution of HEVC, namely VVC, which trades increased complexity for impressive bit rate savings. There’s a recognition that sometimes a better codec is one with lower computational cost, namely LCEVC, which enables a step-change in quality for lower-power equipment. And there’s also EVC, which has a licence-free mode to reduce the risk for companies that prefer low-risk deployments.

Christian Feldmann from Bitmovin takes the stage in this video to introduce these three new contenders in an increasingly busy codec landscape. Christian starts by talking about the incumbents, namely AVC, HEVC, VP9 and AV1. He puts their propositions up against the promises of these new codecs, which are all at the point of finalisation/publication. For the current codecs, Christian looks at what the hardware and software support is like, as well as the licensing.

EVC (Essential Video Codec) is the first focus of the presentation; its headline feature is a more reliable licensing landscape. The first offering is the baseline profile, which requires no licence as it uses technologies old enough to be outside of patent protection. The main profile does require licensing but allows much better performance. Furthermore, the advanced tools in the main profile can each be turned off individually, hence avoiding patents that you don’t want to licence. The hope is that this will encourage the patent holders to licence the technology in a timely manner, else the customer can, relatively easily, walk away. The baseline profile alone should provide around 32% better compression than AVC, and the main profile can give up to a 25% benefit over HEVC.

LCEVC (Low Complexity Enhancement Video Coding) is next, a new technique for encoding which is actually two codecs working together. It uses a ‘base’ codec at low resolution, such as AVC, HEVC or AV1. This low-fidelity version is then accompanied by enhancement information so that the low-resolution base can be upscaled to the desired resolution and corrected, with relevant edges and detail added back. The overall effect is that complexity is kept low. It’s designed as a software codec that can fit into almost any hardware by using the hardware decoders in SoCs/CPUs (e.g. Intel Quick Sync) plus the CPU itself, which deals with applying the enhancement. This ability to fit around hardware makes the codec ideal for improving the decoding capability of existing hardware. It stands up well against AVC, providing at least a 36% improvement, and at worst improves slightly upon HEVC bitrates but with much-reduced encoder computation.
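Conceptually, the decode path works along the lines of this Python sketch. It is a deliberate simplification of the real LCEVC toolset: the base_frame array stands in for the output of the hardware base decoder and residuals for the decoded enhancement layer.

    import numpy as np

    def lcevc_style_decode(base_frame, residuals, scale=2):
        # Crude nearest-neighbour upscale of the low-res base to full resolution...
        upscaled = np.kron(base_frame, np.ones((scale, scale)))
        # ...then add the correction layer that restores edges and detail.
        return np.clip(upscaled + residuals, 0, 255)

    # Toy usage: a 2x2 'base' upscaled to 4x4 with a zero enhancement layer.
    base = np.array([[100.0, 120.0], [110.0, 130.0]])
    print(lcevc_style_decode(base, np.zeros((4, 4))))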

VVC (Versatile Video Coding) is discussed by Christian, but not in great detail as Bitmovin will be covering it separately. As an evolution of HEVC, it’s no surprise that bitrate is reduced by at least 40%, though encoding complexity has gone up ten-fold; a similar jump to that from AVC to HEVC. VVC also has some built-in features not delivered as standard before, such as special modes for screen content (for example, computer games) and 360-degree video.

Free to watch now!

Speaker

Christian Feldmann
Lead Encoding Engineer,
Bitmovin