Video: Getting Back Into the Game

The pandemic has obviously hurt live broadcasters, sports in particular, but as the world starts its slow fight back to normality we’re seeing sports back on the menu. How has streaming suffered and benefited? This video looks at how technology has changed in response, how piracy of content has changed and how close we are to business as usual.

Jason Thibeault from the Streaming Video Alliance brings together Andrew Pope from Friend MTS, Brandon Farley from Streaming Global, SSIMWAVE’s Carlos Bacquet, Synamedia’s Nick Fielibert and Will Penson with Conviva to get an overview of the industry’s response to the pandemic over the last year and its plans for the future.

The streaming industry contains a range of companies, including generalist publishers, like many broadcasters, and specialists such as DAZN and NFL Game Pass. During the pandemic, the generalist publishers were able to rely more on their other genres and back catalogues, or even news, which saw a big increase in interest. This is not to say that the pandemic made life easy for anyone. Sports broadcasters were undoubtedly hit, though companies such as DAZN, who show a massive range of sports, were able to dig deep into less mainstream sports from around the world, in contrast with services such as NFL Game Pass which can’t show any new games if the season is postponed. We’ve heard previously how esports benefited from the pandemic.

The panel discusses the changes seen over the last year. Views on security were mixed, with one company seeing little increase in security requests and another seeing a boost in requests for auditing and similar services so that people could be ready for when sports streaming was ‘back’. There was a renewed interest in making sports streaming better, where ‘better’ for some means better scaling, for others lower latency, while many others are looking to bake in consistency and quality; “you can’t get away with ‘ok’ anymore.”

SSIMWAVE pointed out that some customers were having problems keeping channel quality high and were even changing encoder settings to deal with re-runs of their older footage, which was of lower quality than today’s sharp 1080p coverage. “Broadcast has set the quality mark” and streaming is trying to achieve parity. Netflix has shown that good quality ends up on good devices, and they’re not alone in being a streaming service where 50 per cent of content is watched on TVs rather than mobile devices. When your content lands on a TV, there’s no room to compromise on quality.

Crucially, the panel agrees that the pandemic has not been a driver for change. Rather, it’s been an accelerant of change that was already desired and even planned for. If you take the age-old problem of bandwidth in a house with a number of people busy with streaming, video calls and other internet usage, any bitrate you can cut out is helpful to everyone.

Next, Will Penson from Conviva takes us through graphs for the US market showing how sports streaming dropped 60% at the beginning of the lockdowns, only to rebound after spectator-free sporting events started up; it is now running at around 50% higher than before March 2020. News has shown a massive uptick and currently retains a similar increase to sports, the main difference being that it continues to be very volatile. The difficulties of maintaining news output throughout the pandemic are discussed in this video from the RTS.

Before hearing the panel’s predictions, we hear their thoughts on the challenges of improving. One issue highlighted is that sport is much more complex to encode than other genres, for instance, news. In fact, tests show that some sports content scores 25% lower than news for quality, according to SSIMWAVE, who acknowledge that snooker is less challenging than sailing. Delivering top-quality sports content remains a challenge, particularly as the drive for low latency requires smaller and smaller segment sizes, which restrict your options for GOP length and bandwidth.
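
The link between segment size and GOP length can be sketched with some simple arithmetic. This is an illustrative calculation only (the function name and the 50 fps figure are our own): since each segment must begin on a keyframe, the GOP can be no longer than one segment, so shrinking segments for low latency directly caps the keyframe interval.

```python
# Illustrative only: relate low-latency segment duration to the maximum
# GOP (keyframe interval) it allows. Each segment must start on an IDR
# frame, so the GOP cannot be longer than one segment.

def max_gop_frames(segment_duration_s: float, fps: float) -> int:
    """Longest GOP (in frames) that still lets every segment begin on a keyframe."""
    return int(segment_duration_s * fps)

for seg in (6.0, 2.0, 1.0):
    print(f"{seg:>4}s segments @ 50 fps -> GOP <= {max_gop_frames(seg, 50)} frames")
```

Shorter GOPs mean more frequent keyframes, which cost bits, which is why low latency squeezes both GOP length and bandwidth at once.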

To keep things looking good, the panel suggests content-aware encoding, where machine learning analyses the video and feeds back into the encoder settings. Region-of-interest coding is another prospect for sports, where close-ups tend to want more detail in the centre, as you look at the player, while wide shots intend to capture all the detail. WebRTC has been talked about a lot, but not many implementations have been seen. The panel makes the point that advances in scalability have been notable for CDNs specialising in WebRTC, but its scalability still lags behind other technologies by, perhaps, a factor of three. An alternative, Synamedia points out, is HESP. Created by THEOplayer, HESP delivers low-latency, chunked streaming and very low ‘channel change’ times.

Watch now!
Speakers

Andrew Pope
Senior Solutions Architect,
Friend MTS
Brandon Farley
SVP & Chief Revenue Officer,
Streaming Global
Carlos Bacquet
Manager, Sales Engineers,
SSIMWAVE
Nick Fielibert
CTO, Video Network,
Synamedia
Will Penson
Vice President, GTM Strategy & Operations,
Conviva
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Overview of MPEG’s Network-Based Media Processing

Building complex services from microservices is not simple. While making a static workflow can be practical, if time-consuming, making one that can easily be changed to match a business’s changing needs is another matter. If an abstraction layer could be placed over the top of the microservices themselves, that would allow people to concentrate on making the workflow correct and leave the abstraction layer to orchestrate the microservices below. This is what MPEG’s Network-Based Media Processing (NBMP) standard achieves.

Developed to counteract the fragmentation in cloud and single-vendor deployments, NBMP delivers a unified way to describe a workflow with the platform controlled below. Iraj Sodagar spoke at Mile High Video 2020 to introduce NBMP, now published as ISO/IEC 23090-8. NBMP provides a framework that allows you to deploy and control media processing using existing building blocks called functions fed by sources and sinks, also known as inputs and outputs. A Workflow Manager process is used to actually start and control the media processing, fed with a workflow description that describes the processing wanted as well as the I/O formats to use. This is complemented by a Function Discovery API and a Function Repository to discover and get hold of the functions needed. The Workflow Manager gets the function and uses the Task API to initiate the processing of media. The Workflow Manager also deals with finding storage and understanding networking.

Next, Iraj takes us through the framework APIs which allow the abstraction layer to operate, in principle, across multiple cloud providers. The standard contains 3 APIs: Workflow, Task & Function. The APIs follow a CRUD architecture, each having Create, Update, Discover, Delete and similar actions which apply to workflows, tasks and functions, e.g. CreateWorkflow. The APIs can operate synchronously or asynchronously.
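
A minimal model of the Workflow API’s CRUD shape might look like the sketch below. This is not the standard’s actual interface: the class, the method signatures and the simplified description dictionary are all stand-ins for the real Workflow Description Document and REST calls defined in ISO/IEC 23090-8.

```python
# An illustrative, in-memory model of NBMP's Workflow API. The CRUD
# operations mirror those described in the talk (e.g. CreateWorkflow);
# the document structure is a heavily simplified stand-in for a real
# Workflow Description Document.

class WorkflowManager:
    def __init__(self):
        self._workflows = {}
        self._next_id = 0

    def create_workflow(self, description: dict) -> str:
        """CreateWorkflow: register a workflow description, return its id."""
        wid = f"wf-{self._next_id}"
        self._next_id += 1
        self._workflows[wid] = description
        return wid

    def update_workflow(self, wid: str, description: dict) -> None:
        """UpdateWorkflow: replace the description of an existing workflow."""
        self._workflows[wid] = description

    def delete_workflow(self, wid: str) -> None:
        """DeleteWorkflow: stop and remove the workflow."""
        del self._workflows[wid]

wm = WorkflowManager()
wid = wm.create_workflow({
    "input": {"media-parameters": {"protocol": "rtmp"}},   # source
    "processing": {"function": "transcode"},               # from the function repository
    "output": {"media-parameters": {"protocol": "dash"}},  # sink
})
print(wid)  # -> wf-0
```

In a real deployment these calls would be HTTP requests against a Workflow Manager service, which would in turn use the Task API to spin up the processing.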

Split rendering is possible by splitting up the workflow into sub workflows which allows you to run certain tasks nearer to certain resources, say storage, or in certain locations like in the case of edge computing where you want to maintain low-latency by processing close to the user. In fact, NBMP has been created with a view to being able to be used by 5G operators and is the subject of two study items in 3GPP.

Watch now!
Speaker

Iraj Sodagar
Principal Researcher,
Tencent America

Video: ATSC 3.0 in 2021

ATSC 3.0 is an innovative set of standards that gets closer to the maximum possible throughput, AKA the Shannon limit, than 5G and previous technologies. The technology is so flexible that it allows convergence with 5G networks, operation as an SFN with inter-transmitter links, as well as seamless handoff for receivers between the internet and the broadcast transmission. ATSC 3.0 is an IP-based technology that is ready to keep up to date with changing practices and standards, yet lets viewers experience the best of broadcast RF transmission, wireless internet and broadband without having to change what they’re doing or even know which one(s) they’re watching.

This SMPTE event looks at a number of ATSC’s innovations, moderated by SMPTE Toronto section chair Tony Meerakker, and kicks off with Orest Sushko from the Humber Broadcast-Broadband Convergence Lab in Toronto. This is a Canadian initiative to create an environment where real-world ATSC 3.0 testing can be carried out. It’s this type of lab that can help analyse the applications discussed in this video, where different applications are brought into a broadcast RF environment, including integration with 5G networks. It will also drive research into ATSC 3.0 adoption in Canada.

Next is the ATSC president, Madeleine Noland, who introduces what ATSC 3.0 is and why she feels it’s such an innovative standards suite. Created by over 400 engineers throughout the world, Madeleine says that ATSC 3.0 is a state-of-the-art standard that aims to add value to the broadcast service with the idea that broadcast towers are ‘not just TV anymore’. This idea of blurring the lines between traditional RF transmission and other services continues throughout this series of talks.

The aim of ATSC 3.0 is to deliver all television over IP, albeit uni-directional IP. It also uses a whole stack of existing technologies at the application layer, such as HTML5, CSS and JavaScript; these are just three examples of the standards on which ATSC 3.0 is based, and building on other standards increases the ability to deploy quickly. ATSC 3.0 is a suite of tens of standards that describe the physical layer, transport, video & audio use, apps and more. Having many standards within is another way ATSC 3.0 can keep up with changes: by modifying and updating the relevant standards, but also by not being afraid of adding more.

Madeleine says that 62 market areas will be launching, bringing the reach of ATSC 3.0 up to 75% of households in the US under the banner ‘NextGen TV’, which will act as a signpost logo for customers on TVs and associated devices. ATSC 3.0 also exists outside the US, in Korea, where 75% of the population can receive it. Canada is exploring, Brazil is planning, India’s TSDSI is researching and many other countries, such as Australia, are also engaging with the ATSC to consider their options for national deployment against, presumably, DVB-I.

The last point in this section is that when you convert all your transmitters to IP, it seems odd to leave them as a load of isolated ‘nodes’. Madeleine’s point is that a very effective mesh network could be created if only we could connect all these transmitters together. These could then provide some significant national services, which are discussed later in this video.

Interactive TV

Mark Corl is next, talking about his extensive work creating an interactive environment within ATSC 3.0. The aim here was to enhance the viewer/user experience, build better relationships with them and provide an individualised offering including personalised ads and content.

Mark gives an overview of A/344, ATSC 3.0 Interactive Content, and ATSC 3.0 standard A/338, talking about signalling, delivery, synchronisation and error protection, service announcements such as the EPG, content recovery in redistribution scenarios, watermarking, application event delivery, security and more.

Key features of interactivity are the aforementioned use of HTML5, CSS and JavaScript to create seamless and secure delivery of interactive content from broadcast and broadband. Each application lives in its own separate context and features are managed via APIs.

Mark finishes by outlining the use of the Advanced Emergency Alert table (AEAT), which signals everything the receiver needs to know about the AEA message and its associated rich media, and then looks at how, at the client, content/ads can be replaced by manipulating the .mpd manifest file with locally-downloaded content using XLink references.
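
The client-side replacement step can be sketched as follows. This is a hedged illustration only: the MPD below is heavily simplified (a real DASH manifest carries the `urn:mpeg:dash:schema:mpd:2011` namespace and much more), and the URLs and cache path are invented; only the Period/XLink mechanism itself comes from DASH.

```python
# A sketch of swapping a remote ad Period for locally-downloaded content
# by rewriting its XLink reference in a (heavily simplified) DASH .mpd.
# The hostnames and file paths are invented for illustration.
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("xlink", XLINK)

mpd = ET.fromstring(
    '<MPD><Period xmlns:xlink="http://www.w3.org/1999/xlink" '
    'xlink:href="https://ads.example.com/slot1.mpd"/></MPD>'
)

for period in mpd.findall("Period"):
    href = period.get(f"{{{XLINK}}}href")
    if href and href.startswith("https://ads.example.com/"):
        # Point the XLink at content the client has already cached locally,
        # so the player resolves the Period to the replacement ad.
        period.set(f"{{{XLINK}}}href", "file:///cache/local-ad.mpd")

print(ET.tostring(mpd, encoding="unicode"))
```

When the player resolves the Period, it now fetches the local replacement instead of the original remote ad.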

Innovative technologies implemented in ATSC 3.0

Dr. Yiyan Wu takes the podium next, explaining the newest RF techniques used in ATSC 3.0 which are getting it closer to the Shannon limit than similar contemporary technologies such as 4G and 5G New Radio (NR). These include LDPC (Low-Density Parity-Check) codes, which have long been used in DVB-S2 and DVB-T2, Non-Uniform Constellations such as 4096-QAM NUC, as well as the Canadian-invented Layered Division Multiplexing (LDM), which can efficiently combine a robust mobile service and a high-datarate service on top of each other in a single TV channel. This works by having a high-power, robustly-coded signal with a quieter signal underneath which, in good conditions, can still be decoded. The idea is that the robust signal is the HD service transmitted with HEVC SVC (Scalable Video Coding), meaning the UHD layer can be an enhancement layer on top of the HD; there is no need to send the whole UHD signal. Dr. Wu finishes this section by explaining that LDM offers reduced Tx and Rx power.
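
The layering idea can be shown with a toy numerical example. This sketch leaves out everything that makes real LDM work at scale (LDPC coding, OFDM, noise, SVC); it only demonstrates the core trick of stacking a low-power layer under a robust one, with the injection level chosen arbitrarily for illustration.

```python
# A toy illustration of Layered Division Multiplexing: a robust upper-layer
# signal and a lower-power enhancement layer share one channel. A mobile
# receiver slices only the upper layer; a fixed receiver decodes it too,
# cancels it, then slices the residual to recover the lower layer.
# The -5 dB injection level and the symbol values are invented.

injection_db = -5.0                # lower-layer power relative to the upper layer
scale = 10 ** (injection_db / 20)  # convert dB to an amplitude ratio (~0.56)

upper = [ 1, -1,  1,  1, -1]       # robust layer (BPSK symbols)
lower = [-1,  1,  1, -1, -1]       # enhancement layer (BPSK symbols)

channel = [u + scale * l for u, l in zip(upper, lower)]

# Upper layer: treat the quieter lower layer as noise and slice.
rx_upper = [1 if s > 0 else -1 for s in channel]

# Lower layer: cancel the decoded upper layer, then slice the residual.
residual = [s - u for s, u in zip(channel, rx_upper)]
rx_lower = [1 if r > 0 else -1 for r in residual]

print(rx_upper == upper, rx_lower == lower)  # -> True True
```

Both layers occupy the same channel at the same time, which is where the extra capacity Dr. Wu describes comes from.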

Using LDM, we’re actually creating more capacity than we had before. Dr. Wu points out that this can be used for improved services or for an in-band distribution link, i.e. to move live video down through a network of transmitters. While not essential, the fact that an ATSC 3.0 transmitter can operate as part of a single frequency network is very useful, as a weak signal from one transmitter can be boosted by the signal from another.

Dr. Wu next spends time talking about 5G use cases, detailing the history of failed attempts at ‘broadcast’ versions of 3G, 4G and LTE. With 5G USPs such as network slicing, the current broadcast mode of 5G is more likely than ever to be commercially implemented. Called 5G feMBMS, it’s actually a 4G/LTE-based technology for delivery over a 5G network.

One plan for 5G integration, which is possible as ATSC 3.0 shares a timing reference with 5G networks, is for 5G networks to spot when thousands of people are watching the same thing and move that traffic over to the ATSC 3.0 towers, which can multicast it without an issue.

Next Gen commercialisation update

Last in the video we have Anne Schelle, who works with the ATSC as board chair of the ATSC 3.0 Security Alliance. She explains that the number of markets announcing deployment in 2021 is twice that of 2020. Deployment of ATSC 3.0 is going well, and the most common initial use has been to test interactive services. Projected shipping numbers for TVs with ATSC 3.0 built in are positive and, Anne says, the economics for NextGen receiver inclusion are better than they have been previously. Speaking from her security perspective, having built-in content security is new for broadcasters, who welcome it as it helps reduce piracy.

Watch now!
Speakers

Madeleine Noland
President,
ATSC
Mark Corl
Chair, ATSC S38 Specialist Group on Interactive Environment,
SVP Emergent Technology Development, Triveni Digital
Dr. Yiyan Wu
Principal Research Scientist,
Communications Research Centre Canada (CRC)
Anne Schelle
Board Chair, ATSC 3.0 Security Alliance,
Board Member, ATSC,
Managing Director, Pearl TV
Orest Sushko
Project Lead, Humber Broadcast-Broadband Convergence Lab,
Program Coordinator, Film & Multiplatform Storytelling Program, Humber College
Moderator: Tony Meerakker
Chair, SMPTE Toronto Section,
Consultant, Meer Tech Systems

Video: AV1 and ARM

AV1 is no longer the slow codec it was when it was released. Real-time encoding and decoding are now practical with the open-source software implementations rav1e for encoding and dav1d for decoding. We’ve also seen in previous talks that SVT-AV1 provides real-time encoding and that WebRTC now has a real-time implementation of the AV1 codec.

In this talk, rav1e contributor Vibhoothi explains more about these projects and how the ARM chipset helps speed up encoding. The rav1e project started in 2018 with the intention of being a fast, cross-platform AV1 encoder with a small binary, which Vibhoothi says is exactly what we have in 2021; dav1d is the complementary decoder project. AV1 decoding is found in many places now, including in Android Q and Microsoft’s media extension; VLC supports AV1 on Linux and macOS thanks to dav1d; and AV1 is supported in all major browsers, on NVIDIA and AMD GPUs, and on Intel Tiger Lake CPUs. Netflix even uses dav1d to stream AV1 to some mobile devices. Overall, then, we see that AV1 has ‘arrived’ in the sense that it’s in common and increasing use.

The ARM CPU architecture underpins nearly all smartphones and most tablets, so ARM is found in a vast number of devices, but it’s only relatively recently that ARM has made it into mainstream servers. One big milestone has been the release of Neoverse, an ARM chip design for infrastructure. AWS now offers ARM instances with 40% higher performance at 20% lower cost, and these have been snapped up by Netflix but also by a plethora of non-media companies. Recently, Apple made waves with the introduction of the M1, an ARM-based desktop chip whose benchmarks far exceed the previous x86 offerings, which suggests that the future for ARM-based implementations of the rav1e encoder and dav1d decoder is bright.

Vibhoothi outlines how dav1d performs better on ARM than on x86 thanks to improved threading support, hand-written assembly optimisations and 10-bit assembly support. rav1e enjoys wide support in VLC, GStreamer, FFmpeg, libavif and others.

The talk finishes with a range of benchmarks showing how better-than-real-time encoding and decoding is possible and how the number of threads relates to the throughput. Vibhoothi’s final thoughts focus on what’s still missing in the ARM implementations.

Watch now!
Speaker

Vibhoothi
Developer, VideoLAN
Research Assistant, Trinity College Dublin,
Codec Development, rav1e, Mozilla