Video: CEDIA Talk: ATSC 3.0 is HERE – Why It Matters to You


This is the last in the current series of ATSC 3.0 posts. It’s a light but useful talk which aims to introduce people to ATSC 3.0, calling out its features and differences.

Michael, showing off his colour-bars jacket, explains how ATSC 3.0 came about and how ATSC 2.0 never came to pass and ‘is on a witness protection program’. He then explains the differences between ATSC 1.0 and 3.0, discussing the fact that it’s IP-based and capable of UHD and HDR, amongst other things.

The important question is why it’s better, and we see the modulation scheme is an improvement (note that Michael says ATSC 3.0 is based on QAM; it is actually based on OFDM).

The talk finishes by discussing what ATSC 3.0 isn’t, implementation details, and the frequency repack which is happening in the US.

Watch now!

Speaker

Michael Heiss
Principal Consultant,
M. Heiss Consulting

Video: Next Generation Broadcast Platform – ATSC 3.0

Continuing our look at ATSC 3.0, our fifth talk straddles technical detail and basic business cases. We’ve seen talks on implementation experience, such as in Chicago and Phoenix, and now we look at receiving the data with open-source software.

We’ve covered before the importance of ATSC 3.0 in the North American markets and the others that are adopting it. Jason Justman from Sinclair Digital states the business cases and reasons to push for it despite it being incompatible with previous generations. He then discusses what Software Defined Radio is and how it fits into the puzzle, covering the early state of this technology.

With a brief overview of the RF side of ATSC 3.0, which is itself a leap forward, Jason explains how the video layer benefits. Relying on ISO BMFF, Jason introduces MMT (MPEG Media Transport), explaining what it is and why it’s used for ATSC 3.0.

The next section of the talk showcases libatsc3, an open-source library whose goal is to open up ATSC 3.0 to talented software engineers, which Jason demos. The library allows for live decoding of ATSC 3.0, including MMT material.

Jason finishes his talk with a Q&A, including SCTE 34 and an interesting comparison between DVB-T2 and ATSC 3.0, making this a very useful talk to fill in technical gaps that no other ATSC 3.0 talk covers.

Complete slide pack

Watch now!

Speaker

Jason Justman
Senior Principal Architect,
Sinclair Digital

Video: The ST 2094 Standards Suite For Dynamic Metadata

Lars Borg explains to us what problems the SMPTE ST 2094 suite of standards sets out to solve. Looking at the different types of HDR and Wide Colour Gamut (WCG), we quickly see how many permutations there are and how many ways there are to get it wrong.

ST 2094 carries the metadata needed to manage the colour, dynamic range and related data. In order to understand what’s needed, Lars takes us through the details of the HDR implementations, touching on workflows and explaining how the capabilities of your display affect the video.
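For a concrete flavour of what’s at stake, here’s a minimal Python sketch (my illustration, not code from the talk) of the PQ EOTF defined in SMPTE ST 2084, which much of this HDR ecosystem builds on, together with a naive hard clip at an assumed display peak – exactly the sort of crude mapping that ST 2094’s dynamic metadata aims to improve upon:

```python
# PQ EOTF (SMPTE ST 2084) constants
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(signal):
    """Map a PQ code value in [0, 1] to display luminance in cd/m^2 (nits)."""
    p = signal ** (1 / m2)
    return 10000 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

display_peak = 1000.0  # assumed consumer HDR display peak in nits
for code in (0.25, 0.5, 0.75, 1.0):
    nits = pq_eotf(code)
    shown = min(nits, display_peak)  # naive clip: highlight detail above peak is lost
    print(f"PQ {code:.2f} -> mastered {nits:7.1f} nits, displayed {shown:6.1f} nits")
```

Running it shows why display capability matters: content mastered towards PQ’s 10,000-nit ceiling simply clips on a 1,000-nit panel unless smarter, metadata-driven tone mapping steps in.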

We then look at midtones and dynamic metadata before a Q&A.

This talk is as valuable for understanding the whole HDR and WCG ecosystem as it is for understanding ST 2094.

Watch now!

Speaker

Lars Borg
Principal Scientist,
Adobe

Video: WebRTC: The Future Champion of Low Latency


With the continual quest for lower and lower latencies in streamed video, WebRTC is an attractive technology with latencies in the milliseconds rather than seconds. Limelight’s lowest latency offerings are based on WebRTC.

Alex Gouaillard from millicast gives a brief history and the current status of WebRTC, including which browsers are supported. After talking about optimisations that have been made, he talks about Bandwidth Adaptive Media and other use cases to be solved.

Supported codecs and, importantly, Scalable Video Coding support are discussed along with ways of implementing WebRTC. Alex also talks about the testing that’s gone into the standard, looking at bandwidth and latencies.

Lastly, a key question around WebRTC, ‘does it scale?’, is discussed before the conclusion.

Watch it now!

Speaker

Alex Gouaillard
CTO,
millicast

Video: Recent Experiences with ATSC 3.0 from Seoul to Phoenix

This talk is part of a series of talks on ATSC 3.0 we’re featuring here on The Broadcast Knowledge. ATSC 3.0 is a big change in terrestrial television transmission because even over the air, the signal is IP.

In this talk, Joe Seccia from GatesAir, a company famed for its transmission systems, talks us through where the US (and Seoul) are on the way to deploying this technology.

With major US broadcasters having pledged to be on air with ATSC 3.0 by the end of 2020, trials are turning into deployments, and this is a report back on what’s been going on.

Joe covers the history of previous tests and trials before taking us through the architecture of a typical system. After explaining the significance of the move to IP, Joe also covers other improvements such as using OFDM modulation and thus being able to use a single frequency network (SFN). This combination of technologies improves reception and coverage over the 8VSB transmissions which went before it.
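To see why OFDM is what unlocks SFNs, here’s a minimal numpy sketch of the idea (illustrative parameters, not ATSC 3.0’s actual FFT sizes or pilot patterns): a second transmitter’s signal arrives as an echo, and as long as that echo falls within the cyclic prefix it merely scales and rotates each subcarrier rather than corrupting the symbol.

```python
import numpy as np

N_FFT, CP_LEN = 1024, 128   # toy sizes; ATSC 3.0 uses 8K/16K/32K FFT modes

# Random QPSK data, one complex value per subcarrier
bits = np.random.randint(0, 4, N_FFT)
data = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

symbol = np.fft.ifft(data)                       # time-domain OFDM symbol
tx = np.concatenate([symbol[-CP_LEN:], symbol])  # prepend cyclic prefix

# A second SFN transmitter modelled as a half-power echo inside the guard interval
delay = 64
rx = tx + 0.5 * np.roll(tx, delay)

# Receiver drops the prefix and FFTs back to subcarriers
rx_subcarriers = np.fft.fft(rx[CP_LEN:])

# The two-path channel only multiplies each subcarrier by H[k],
# which a standard one-tap equaliser undoes
H = 1 + 0.5 * np.exp(-2j * np.pi * np.arange(N_FFT) * delay / N_FFT)
assert np.allclose(rx_subcarriers / H, data)     # data recovered exactly
```

In a single-carrier system like 8VSB the same echo causes inter-symbol interference; with OFDM plus a guard interval, co-channel transmitters effectively become helpful multipath, which is what makes SFNs practical.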

We also hear about the difference between home and broadcast gateways in the system as well as the Emergency Alert System (EAS) augmentation features which allow a broadcaster to ‘wake up’ TVs and other devices when disasters strike or are predicted.

Watch now!

Speaker

Joe Seccia
Manager, TV Transmission Market and Product Development Strategy,
GatesAir

Video: Colour

With the advent of digital video, the people in the middle of the broadcast chain have little to do with colour for the most part. Yet those in post production, acquisition and decoding/display are finding life more and more difficult as we continue to expand colour gamut and deliver on new displays.

Google’s Steven Robertson takes us comprehensively through the challenges of colour, from the fundamentals of sight to the intricacies of dealing with Rec. 601, Rec. 709, BT.2020, HDR, YUV transforms and all the mistakes people make in between.
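As a taste of the kind of mistake Steven covers, here’s a small sketch (my illustration, not his code) of what happens when video encoded with the Rec. 709 YCbCr matrix is decoded with the Rec. 601 one – a genuinely common error:

```python
import numpy as np

def rgb_to_ycbcr(rgb, kr, kb):
    """Full-range RGB -> YCbCr for the given luma coefficients."""
    r, g, b = rgb
    y = kr * r + (1 - kr - kb) * g + kb * b
    return np.array([y, (b - y) / (2 * (1 - kb)), (r - y) / (2 * (1 - kr))])

def ycbcr_to_rgb(ycc, kr, kb):
    """Inverse transform, again for the given luma coefficients."""
    y, cb, cr = ycc
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / (1 - kr - kb)
    return np.array([r, g, b])

red = np.array([1.0, 0.0, 0.0])
encoded_709 = rgb_to_ycbcr(red, kr=0.2126, kb=0.0722)        # Rec. 709 matrix
decoded_601 = ycbcr_to_rgb(encoded_709, kr=0.299, kb=0.114)  # wrong matrix!
print(decoded_601)  # no longer pure red: the hue has shifted
```

Pure red comes back desaturated and hue-shifted (with green driven negative, i.e. out of gamut), which is why getting the matrix metadata right matters.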

An approachable talk which gives a great overview, raises good points and goes into detail where necessary.

An interesting point of view is that colour subsampling should die. After all, we’re now at a point where we could feed an encoder 4:4:4 video and get it to compress the colour channels more than the luminance channel. Steven says that this would generate more accurate colour than stripping it of a fixed amount of data as 4:2:2 subsampling does.
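To make the argument concrete, here’s a minimal numpy sketch (an illustration, not Steven’s code) of the fixed trade that 4:2:2 makes: half the chroma samples are discarded up front, regardless of how much colour detail the picture actually contains, whereas an encoder fed 4:4:4 could make that trade-off adaptively.

```python
import numpy as np

h, w = 720, 1280
luma = np.random.rand(h, w)       # full-resolution luma plane
cb = np.random.rand(h, w)         # one full-resolution chroma plane (4:4:4)

def subsample_422(plane):
    """Average horizontal pairs: 4:2:2 keeps half the chroma samples."""
    return plane.reshape(h, w // 2, 2).mean(axis=2)

cb_422 = subsample_422(cb)
print(luma.size, cb_422.size)     # 921600 vs 460800: a fixed 50% chroma cut,
                                  # applied whether or not the image needs it
```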

Given at Brightcove HQ as part of the San Francisco Video Tech meet-ups.

Watch now!

Speaker

Steven Robertson
Software Engineer,
Google

Video: Using PTP & SMPTE 2059 – A Practical Experience Perspective

NAB 2019 saw another IP Showcase with plenty of talks on the topic on many people’s minds: PTP and timing in IP systems. It seems there’s a lot which needs to be considered and, truth be told, a lot of people don’t feel they have the complete list of questions to be asking and certainly don’t know all the answers.

So, here, Greg Shay from Telos talks about the learnings from his extensive experience with timing IP signals and with PTP under SMPTE 2059. He hits the following topics:

  • Must you always have a GPS reference for the PTP master?
  • Are PTP-aware switches always necessary?
  • Can you safely not use PTP Peer Delay requests / responses?
  • What is the effect of internal oscillator tolerance and stability when designing PTP client equipment?

To my ears, these are four well-placed questions because I’ve heard them asked; they are current in the minds of people who are grappling with current and prospective IP installations.

Greg gives each one of these due time and we see some interesting facts come out:

  • You don’t always need a time-synchronised PTP master (pretending to be in 1970 can work just fine).
  • Compensating for PTP Peer Delay can make things worse, which seems counter-intuitive to the point of PTP Peer Delay requests.
  • We also see why PTP-aware switches matter, along with a statistical method of managing without them.
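For context on what’s being synchronised, this is the textbook two-way time transfer calculation at the heart of PTP (a generic sketch, not Greg’s code). Note the baked-in assumption that the path is symmetric; queuing in non-PTP-aware switches breaks that symmetry, which is exactly why they matter.

```python
# The textbook PTP offset/delay calculation from the four message timestamps:
# t1 = master sends Sync, t2 = client receives it,
# t3 = client sends Delay_Req, t4 = master receives it.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # client clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # mean one-way path delay
    return offset, delay

# Example: a client clock running 15 us ahead across a 50 us symmetric path
offset, delay = ptp_offset_and_delay(0.0, 65e-6, 100e-6, 135e-6)
print(offset, delay)  # 1.5e-05, 5e-05
```

Any asymmetry between the two directions lands directly in the offset estimate, which is also why Greg’s point about Peer Delay compensation sometimes making things worse is so interesting.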

This is a talk which exemplifies IP talks which ‘go deeper’ than simply explaining the point of standards. Implementation always takes thought – not only in basic architecture but in use-cases and edge-cases. Here we learn about both.

Watch now!

Speaker

Greg Shay
CTO,
The Telos Alliance

Video: ABR Streaming and CDN Performance

Hot on the heels of yesterday’s video all about Adaptive Bitrate (ABR) streaming, we have research engineer Yuriy Reznik from Brightcove looking at the subject in detail. We outlined the use of ABR yesterday, showing how it is fundamental to online streaming.

Brightcove, an online video hosting platform with its own video player, has a lot of experience of delivery over CDNs. We saw yesterday the principles that the player, and to an extent the server, can use to deal with changing network conditions (and, to an extent, changing client CPU load) by going up and down through the ABR ladder. However, this talk focusses on how the CDN in the middle complicates matters as it tries its best to get the right chunks in the right place at the right time.

How often are there ‘cache misses’ where the right file isn’t already in place? And how can you predict what’s necessary?
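As a toy illustration of the problem (my sketch, not Yuriy’s model), consider an edge cache that can hold only a fraction of the chunk catalogue while requests follow a Zipf-like popularity curve:

```python
import random
from collections import OrderedDict

CATALOGUE, CACHE_SIZE, REQUESTS = 10_000, 500, 100_000
weights = [1 / (rank + 1) for rank in range(CATALOGUE)]  # Zipf-like popularity

cache, hits = OrderedDict(), 0
for chunk in random.choices(range(CATALOGUE), weights=weights, k=REQUESTS):
    if chunk in cache:
        hits += 1
        cache.move_to_end(chunk)          # refresh LRU position
    else:
        cache[chunk] = True
        if len(cache) > CACHE_SIZE:
            cache.popitem(last=False)     # evict least recently used

print(f"hit ratio: {hits / REQUESTS:.1%}")
```

Even with strongly skewed popularity, a meaningful share of requests miss and must be fetched from the origin, adding latency – which is why predicting what’s needed is worth the effort.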

Yuriy even goes into detail about how to work out when HEVC deployment makes sense for you. After all, even if you do deploy HEVC, do you need to do it for all assets? And if you only deploy for some assets, how do you know which? Also, when does it make sense to deploy CMAF? In this talk, we hear the answers.
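Yuriy presents his own model in the talk; purely as a flavour of the kind of arithmetic involved, here’s a much-simplified break-even sketch in which every number is an assumption:

```python
# A much-simplified per-asset break-even sketch with made-up numbers --
# Yuriy's actual model is in the talk. HEVC cuts delivery bitrate versus
# H.264 but adds an encoding cost, so it only pays off for assets with
# enough delivery traffic.
ENCODE_COST = 30.0      # extra cost to encode one asset's HEVC ladder ($, assumed)
CDN_RATE = 0.02         # CDN delivery cost per GB ($, assumed)
AVC_GB_PER_VIEW = 1.5   # data delivered per view on the H.264 ladder (assumed)
HEVC_SAVING = 0.40      # fraction of delivery data HEVC saves (assumed)
HEVC_SHARE = 0.5        # fraction of viewers on HEVC-capable devices (assumed)

saving_per_view = CDN_RATE * AVC_GB_PER_VIEW * HEVC_SAVING * HEVC_SHARE
break_even_views = ENCODE_COST / saving_per_view
print(f"HEVC pays off after ~{break_even_views:.0f} views of an asset")
# ~5000 views with these numbers; below that, leave the asset H.264-only
```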

The slides for this talk

Watch the video now!

Speaker

Yuriy Reznik
VP, Research
Brightcove

Video: Adaptive Bitrate Algorithms: How They Work

Streaming on the net relies on delivering video at a bitrate you can handle. Called ‘Adaptive Bitrate’ or ABR, it’s hardly possible to think of streaming without it. While the idea might seem simple at first – just send several versions of your video – it quickly gets nuanced.

Streaming experts Streamroot take us through how ABR works in this talk from Streaming Media East 2016. While the talk is a few years old, the facts are still the same, so this remains a useful talk which not only introduces the topic but goes into detail on how to implement ABR.

The most common streaming format is HLS which relies on the player downloading the video in sections – small files – each representing around 3 to 10 seconds of video. For HLS and similar technologies, the idea is simply to allow the player, when it’s time to download the next part of the video, to choose from a selection of files each with the same video content but each at a different bitrate.
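As a minimal sketch of the player’s side of this (a simple throughput-based heuristic, not Streamroot’s algorithm, with illustrative numbers throughout), the player smooths its bandwidth measurements and picks the highest rendition that fits under a safety margin:

```python
LADDER_KBPS = [400, 800, 1600, 3200, 6000]   # renditions with aligned segments

def update_estimate(prev_kbps, measured_kbps, alpha=0.3):
    """Exponentially weighted moving average of per-segment throughput."""
    return alpha * measured_kbps + (1 - alpha) * prev_kbps

def pick_rendition(est_kbps, margin=0.8):
    """Highest bitrate no more than margin * estimated bandwidth, else the lowest."""
    viable = [b for b in LADDER_KBPS if b <= est_kbps * margin]
    return max(viable) if viable else LADDER_KBPS[0]

est = 6000.0
for measured in (7000, 2500, 1200, 800):     # throughput seen on each download
    est = update_estimate(est, measured)
    print(f"estimate {est:6.0f} kbps -> fetch the {pick_rendition(est)} kbps rendition")
```

Real players layer buffer occupancy, startup behaviour and oscillation damping on top of this, which is where the nuance the talk explores comes in.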

Allowing a player to choose which chunk it downloads means it can adapt to changing network conditions, but it does imply that each file has to contain exactly the same frames of video, else there would be a jump when the next file is played. So we have met our first complication. Furthermore, each encoded stream needs to be segmented in the same way and, in MPEG, where you can only cut files on I-frame boundaries, this means the encoders need to synchronise their GOP structure, giving us our second complication.

These difficulties, many more, and Streamroot’s solutions are presented by Erica Beavers and Nikolay Rodionov, including experiments and proofs of concept they have carried out to demonstrate their efficacy.

Watch now!

Speakers

Erica Beavers
Head of Marketing & Partnerships,
Streamroot
Nikolay Rodionov
Co-Founder, CPO
Streamroot

Video: Google Next 19 – Building a Next-Generation Streaming Platform with Sky

Google Cloud, also called GCP – Google Cloud Platform, continues to invest in Media & Entertainment at a time when many broadcasters, having completed their first cloud projects, are considering ways to ensure they are not beholden to any one cloud provider.

Google’s commitment is evident in their still-recent appointment of ex-Discovery CTO John Honeycutt, this month’s announcement of Viacom’s Google Cloud adoption, and the launch of Anthos, their ‘deploy on any cloud platform’ service.

So it’s no surprise that, here, Google asked UK broadcaster Sky and their technology partner for the project, Harmonic Inc., to explain how they’ve been delivering channels in the cloud and cutting costs.

Melika Golkaram from Google Cloud sets the scene by explaining some of the benefits of Google Cloud for Media and Entertainment, making it clear that, for them, M&E business isn’t simply a ‘nice to have’ on the side of being a cloud platform. Highlighting their investment in undersea cables and globally-distributed edge servers among others, Melika hands over to Sky’s Jeff Webb to talk about how Sky have leveraged the platform.

Jeff explains some of the ways that Sky deals with live sports. Whilst sports require high-quality video and low-latency workflows and have high peak live-streaming audiences, they can also be cyclical, with systems left unused between events. High peak workloads and long periods of equipment left fallow play directly into the benefits of cloud. So we’re not surprised when Jeff says it halved the replacement cost of an ageing system; rather, we want to know more about how they did it.

The benefits that Sky saw revolve around fault healing, geographic resilience, devops, speed of deployment and improved monitoring, including more options to leverage open source. Jeff describes these, and other, drivers before mentioning the importance of being able to move this system between on-premise and different cloud providers.

Before handing over to Harmonic’s Moore Macauley, Jeff shows us the building blocks of the Sky Sports F1 channel in the cloud and discusses ways that fault healing happens. Moore then goes on to show how Harmonic harnessed their ‘VOS’ microservices platform, which deals with ingest, compression, encryption, packaging and origin servers. Harmonic delivered this using GKE, Google Cloud’s managed Kubernetes platform, in multiple regions for fault testing, to allow for A/B testing and much more.

Let’s face it, even after all this time, it can still be tricky getting past the hype of cloud. Here we get a glimpse of a deployed-in-real-life system which not only gives an insight into how these services can (and do) work, but it also plots another point on the graph showing major broadcasters embracing cloud, each in their own way.

Watch now!

Speakers

Jeff Webb
Principal Streaming Architect,
Sky
Moore Macauley
Director, Product Architecture
Harmonic
Melika Golkaram
Customer Engineer,
Google Cloud Media