Webinar: ATSC 3.0 Signaling, Delivery, and Security Protocols

ATSC 3.0 brings IP delivery to terrestrial broadcast. Streaming data live over the air is no mean feat, but it can be achieved with standard protocols such as MPEG DASH. The difficulty is telling the other end what it’s receiving and maintaining security so that no one can insert unintended media or data.

In the second of this webinar series from the IEEE BTS, Adam Goldberg digs deep into two standards which form part of ATSC 3.0 to explain how security, delivery and signalling are achieved. Like other recent standards, such as SMPTE’s ST 2022 and ST 2110, we’re really dealing with a suite of documents. Starting from the root document A/300, there are currently twenty further documents describing the physical layer (as we learnt last week in the IEEE BTS webinar from Sony’s Luke Fay), the management and protocols layer, the application and presentation layer, and the security layer. In this talk Adam, who is chair of the specialist group on ATSC 3.0 security and vice-chair of the one on management and protocols, explains what’s in documents A/331 and A/360, which between them define signalling, delivery and security for ATSC 3.0.

Security in ATSC 3.0
One of the benefits of ATSC 3.0’s move to IP and streaming is that it can build on widely deployed and well-understood standards already in service in other industries. Security is no different, using the same base technology that secure websites use the world over. Still colloquially known by its old name, SSL, encrypted communication with websites has been through several generations since the world first saw ‘HTTPS’ in the address bar. TLS 1.2 and 1.3 are the protocols used to encrypt and authenticate data within ATSC 3.0, along with X.509 certificates and their cryptographic signatures.
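As a concrete illustration, here’s a minimal Python sketch of the client side of such a connection, assuming a receiver fetching signalling over its broadband return path. The URL is hypothetical; the version floor mirrors the TLS 1.2/1.3 requirement described above.

```python
import ssl
import urllib.request

# Hypothetical broadband signalling endpoint, for illustration only.
SIGNALLING_URL = "https://broadcaster.example/atsc3/signalling"

# Build a client context that refuses anything older than TLS 1.2,
# matching the TLS versions described above.
ctx = ssl.create_default_context()            # also loads the trusted CA store
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

with urllib.request.urlopen(SIGNALLING_URL, context=ctx) as resp:
    signalling = resp.read()                  # encrypted and authenticated
```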

Authentication vs Encryption
The importance of authentication alongside encryption is hard to overstate. Encryption gives assurance that no one else could have read the data in transit and, with the integrity checks that accompany it in TLS, that the data wasn’t changed along the way. It provides no assurance, however, that the sender was actually the broadcaster. Certificates are the key to establishing what’s called a ‘chain of trust’. The certificates, which are themselves cryptographically signed, chain back to a stored list of ‘trusted parties’, which means that any data arriving can carry a certificate proving it did, indeed, come from the broadcaster or, in the case of apps, a trusted third party.
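Here’s a sketch of that check using the Python cryptography package, assuming an RSA-signed certificate; the file names are hypothetical. The receiver verifies that the broadcaster’s certificate was signed by a party already in its trusted list:

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical files: the certificate that arrived with the data, and one
# entry from the receiver's stored list of trusted parties.
with open("broadcaster.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("trusted_root.pem", "rb") as f:
    root = x509.load_pem_x509_certificate(f.read())

# Raises InvalidSignature if the trusted root did not sign this certificate.
root.public_key().verify(
    cert.signature,
    cert.tbs_certificate_bytes,
    padding.PKCS1v15(),              # assumes an RSA certificate
    cert.signature_hash_algorithm,
)
print("certificate chains to a trusted party")
```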

Signalling and Delivery
Telling the receiver what to expect and what it’s getting is a big topic, dealt with in many places within the ATSC 3.0 suite. The Service List Table (SLT) provides the data needed for the receiver to get a handle on what’s available very quickly. It points to the correct Service Layer Signaling (SLS) which, for a specific service, provides the detail needed to access the media components within, including the languages available, captions, audio and emergency services.
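To make that concrete, here’s a sketch of a receiver pulling the SLS location out of a much-simplified SLT. The XML is paraphrased from A/331 and trimmed to a few attributes, so treat the exact names as illustrative rather than normative.

```python
import xml.etree.ElementTree as ET

# A much-simplified SLT; real A/331 tables carry many more attributes.
slt_xml = """
<SLT bsid="8086">
  <Service serviceId="5003" shortServiceName="NEWS">
    <BroadcastSvcSignaling slsProtocol="1"
                           slsDestinationIpAddress="239.255.10.4"
                           slsDestinationUdpPort="5004"/>
  </Service>
</SLT>
"""

for svc in ET.fromstring(slt_xml).findall("Service"):
    sig = svc.find("BroadcastSvcSignaling")
    print(f"{svc.get('shortServiceName')}: SLS on "
          f"{sig.get('slsDestinationIpAddress')}:{sig.get('slsDestinationUdpPort')}")
```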

ATSC 3.0 Receiver Protocol Stack

Media delivery is achieved with two technologies: ROUTE (Real-Time Object Delivery over Unidirectional Transport), an evolution of FLUTE which 3GPP specified to deliver MPEG DASH over LTE networks, and MMTP (MPEG Media Transport Protocol), an MPEG standard which, like MPEG DASH, is based on the ISO BMFF container format we covered in a previous video here on The Broadcast Knowledge.
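Since both delivery routes ultimately carry ISO BMFF, it’s worth seeing how simple the container’s framing is. This sketch walks the top-level boxes of a buffer: each box is a big-endian 32-bit size, a four-character type, then the payload, with a 64-bit ‘largesize’ escape for big boxes.

```python
import struct

def iter_boxes(buf: bytes):
    """Yield (type, payload) for each top-level box in an ISO BMFF buffer."""
    pos = 0
    while pos + 8 <= len(buf):
        size, box_type = struct.unpack_from(">I4s", buf, pos)
        header = 8
        if size == 1:  # 64-bit 'largesize' follows the type field
            size, = struct.unpack_from(">Q", buf, pos + 8)
            header = 16
        yield box_type.decode("ascii"), buf[pos + header:pos + size]
        pos += size

# A CMAF media segment typically yields boxes such as 'styp', 'moof', 'mdat'.
```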

Register now for this webinar to find out how this all connects together so that we can have safe, connected television displaying the right media at the right time from the right source!

Speaker

Adam Goldberg
Chair, ATSC 3.0 Specialist Group on ATSC 3.0 Security
Vice-chair, ATSC 3.0 Specialist Group on Management and Protocols
Director Technical Standards, Sony Electronics

Video: Low Latency Streaming

There are two phases to reducing streaming latency. One is to optimise the system you already have; the other is to move to a new protocol. This talk looks at both approaches: achieving parity with traditional broadcast through optimisation, and going better than broadcast by using CMAF.

In this video from the Northern Waves 2019 conference, Koen van Benschop from Deutsche Telekom examines the large, low-cost latency savings you can achieve by optimising your current HLS delivery. With the segment durations originally recommended by Apple being 10 seconds, there are still many services out there starting from a very high latency, so there are savings to be had.

Koen explains how the total latency is made up by looking at the decode, encode, packaging and other latencies. We quickly see that the player buffer is the largest single contributor, followed by the encode latency. We explore the pros and cons of reducing these and see that the overall latency can fall to, or even below, traditional broadcast latency depending, of course, on which type of broadcast (and which country’s) you are comparing it to.
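The arithmetic is worth seeing. Below is a back-of-the-envelope budget with purely illustrative numbers, not Koen’s; the point is simply that the player buffer dwarfs most other terms.

```python
# Purely illustrative numbers; real figures vary by encoder, CDN and player.
latency_budget_s = {
    "capture_and_encode": 1.5,
    "packaging":          0.5,
    "cdn_and_network":    0.5,
    "player_buffer":      3.0,   # typically the largest single contributor
    "decode_and_render":  0.5,
}

total = sum(latency_budget_s.values())
print(f"glass-to-glass latency: {total:.1f} s")   # 6.0 s with these numbers
```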

While optimising HLS/DASH gets you down to a few seconds, there’s a strong desire for some services to beat that. While broadcasters themselves may be reluctant to do this, not wanting to deliver online services quicker than their over-the-air offerings, online sports services such as DAZN can make low latency a USP and deliver better value to fans. After all, DAZN and similar services benefit from low-second latency as it brings them in line with social media, which can be very quick to report key events such as goals and points scored in live matches.

Stefan Arbanowski from Fraunhofer leads us through CMAF, covering what it is, the upcoming second edition and how it works. He covers its ability to be referenced by both .m3u8 (HLS) and .mpd (DASH) playlist/manifest files, and that its segments are fragmented MP4 (fMP4) built on ISO BMFF. One benefit it inherits from DASH is the Common Encryption standard, allowing it to work with PlayReady, FairPlay and other DRM systems.

Stefan then takes a moment to consider WebRTC. Given that it promises latency of less than one second, it can sound like a much better idea, but Stefan outlines his concerns about its ability to scale beyond 200,000 users. He then turns his attention back to CMAF, outlining how the stream is composed and how the player logic works in order to play successfully at low latency.
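The core of that player logic is fetching a segment while the packager is still writing it. Here’s a sketch of the idea, assuming a hypothetical segment URL and a server that emits CMAF chunks over HTTP chunked transfer encoding; the decoder hook is a stand-in.

```python
import requests  # any HTTP client with streaming support would do

def feed_to_decoder(data: bytes):
    """Stand-in for the real decoder hook; hypothetical."""
    print(f"decoding {len(data)} bytes")

# Hypothetical URL of a segment the packager is still appending to.
url = "https://cdn.example/live/video/segment_1042.m4s"

with requests.get(url, stream=True, timeout=10) as resp:
    # Chunks arrive as the encoder produces them, so playback can start
    # well before the full segment exists.
    for chunk in resp.iter_content(chunk_size=None):
        feed_to_decoder(chunk)
```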

Watch now!
Speakers

Koen van Benschop
Senior Manager TV Headend and DRM,
Deutsche Telekom
Stefan Arbanowski
Director Future Applications and Media,
Fraunhofer FOKUS

Video: Specification of Live Media Ingest

“Standardisation is more than just a player format.” There’s so much more to a streaming service than the video; a whole ecosystem needs to work together. In this talk from Comcast’s Mile High Video 2019, we see how different parts of the ecosystem are being standardised for live ingest.

RTMP and Smooth Streaming are being phased out. Without proper support for HEVC, VVC, HDR and the like they are losing relevance, and RTMP is also losing support from the format itself. It’s clear that fragmented MP4 (fMP4) and CMAF are taking hold in their place, so it makes sense for a new ingest standard to coalesce around these formats.

Rufael Mekuria from Unified Streaming explains this effort to create a specification for live media ingest, happening within the DASH Industry Forum (DASH-IF). The work itself started at the end of 2017 with the aim of publishing in summer 2019, supporting both CMAF and DASH/HLS interfaces.

Rufael explains that CMAF ingest uses HTTP POST to move each media stream to the origin packager. The media are separated into video, audio, timed text, subtitle and timed metadata tracks, each transferred separately, an approach which remains compatible with future codecs. He also covers security and timed text before moving on to DASH/HLS ingest, which can itself carry CMAF since HLS supports CMAF segments.
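In spirit, the CMAF ingest interface is little more than the following sketch: POSTing an init segment and then each media segment of a track to the packager. The endpoint and file names are hypothetical, and the real specification adds authentication, retry behaviour and much more.

```python
import requests

# Hypothetical publishing point for one CMAF video track.
ORIGIN = "https://origin.example/ingest/channel1/video_500k.cmfv"

def post_segment(path: str):
    """POST one CMAF segment file to the origin packager."""
    with open(path, "rb") as f:
        r = requests.post(ORIGIN, data=f,
                          headers={"Content-Type": "video/mp4"})
        r.raise_for_status()

post_segment("init.cmfv")                 # initialization segment first
for name in ("seg_0001.cmfv", "seg_0002.cmfv"):
    post_segment(name)                    # then media segments as produced
```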

Reference software is available along with the specification: https://dashif-documents.azurewebsites.net/Ingest/master/DASH-IF-Ingest.pdf

Watch now!
Speaker

Rufael Mekuria
Head of Research & Standardisation,
Unified Streaming

Video: DASH Updates

MPEG DASH is a standardised method for encapsulating media for streaming, similar to Apple’s HLS. Delivered over HTTP (and hence typically TCP), MPEG DASH is a widely compatible way of streaming video and other media over the internet.

MPEG DASH is now on its 3rd edition, the first having been standardised in 2011. This talk starts by explaining what’s new in this edition as of July 2019. Furthermore, amendments are already being worked on which will soon add more features.

Iraj Sodagar explains the upcoming Service Descriptions, which allow the service to carry metadata describing to the player how the publisher intended the media to be shown. Maximum and minimum latency and quality, for instance, can be specified. The talk explains how these are used and why they are useful.
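As a sketch of what a player might do with this, here’s a trimmed-down MPD fragment carrying a latency service description, with values in milliseconds. The element and attribute names follow the feature as described in the talk, so take the exact shape as illustrative.

```python
import xml.etree.ElementTree as ET

# Trimmed-down MPD fragment; latency values are in milliseconds.
mpd = """
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
  <ServiceDescription id="0">
    <Latency min="2000" target="3500" max="6000"/>
  </ServiceDescription>
</MPD>
"""

ns = {"d": "urn:mpeg:dash:schema:mpd:2011"}
lat = ET.fromstring(mpd).find("d:ServiceDescription/d:Latency", ns)
print(f"publisher wants ~{int(lat.get('target')) / 1000:.1f} s latency, "
      f"no more than {int(lat.get('max')) / 1000:.1f} s")
```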

Another powerful metadata feature is the Initialization Set, Group and Presentation, which gives the decoder a ‘heads up’ on what the upcoming media will need for playback. This allows the player to politely decline media it can’t display. For instance, if a decoder doesn’t support AV1, this can be identified up front rather than discovered by downloading a chunk and attempting to decode it.
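The player-side decision reduces to a set check, sketched below with hypothetical codec strings: a device without an AV1 decoder can decline before fetching a single chunk.

```python
# Codecs this hypothetical device can decode (no AV1 here).
SUPPORTED = {"avc1", "hvc1", "mp4a"}

def can_present(announced_codecs):
    """True only if every codec announced for the media is decodable."""
    return all(c.split(".")[0] in SUPPORTED for c in announced_codecs)

print(can_present(["hvc1.1.6.L93.B0", "mp4a.40.2"]))  # True: play it
print(can_present(["av01.0.05M.08"]))                 # False: politely decline
```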

Iraj then explains what will be in the 4th edition, including the above, signalling of leap seconds and much more. This should be published over the next few months.

Amendment 1 is working towards a more accurate timing model for events and defining a specific DASH profile for CMAF (the common segment format which underpins low-latency streaming), which Iraj explains in detail.

Finishing off with session-based DASH operations, a look over the DASH workplan/roadmap, ad insertion, and event and timed metadata processing, this is a great, detailed look at the DASH of today and of 2020.

Watch now!
Speaker

Iraj Sodagar
Independent Consultant