Video: Introduction To AES67 & SMPTE ST 2110

While standardisation of video and audio over IP is welcome, it does leave us with a plethora of standards numbers to keep track of, along with interoperability edge cases to watch out for. Audio-over-IP standard AES67 is part of the SMPTE ST 2110 standards suite and was born largely from RAVENNA, which is still in use in its own right. It’s against this backdrop that Andreas Hildebrand from ALC NetworX, who have been developing RAVENNA for 10 years now, takes the mic to explain how this all fits together. Whilst there are many technologies at play, this webinar focusses on AES67 and ST 2110.

Andreas explains how AES67 started out of a plan to unite the many proprietary audio-over-IP formats. Synchronisation, for instance, was based on PTP – as, we’ll see later, is ST 2110’s. Andreas gives an overview of this synchronisation, then shows how the group looked at each of the OSI layers and defined a technology that could serve everyone. RTP, the Real-time Transport Protocol, has long been used to carry video and audio, so it made a perfect option for the transport layer. Andreas highlights the important timing information in the RTP headers and how streams can be delivered by unicast or IGMP multicast.
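
For the curious, that timing information lives in the fixed 12-byte RTP header defined in RFC 3550. As a minimal sketch – mine, not the talk’s – here’s how those fields could be pulled out in Python:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # always 2 for RTP
        "payload_type": b1 & 0x7F,   # which format the payload carries
        "sequence": seq,             # detects packet loss and reordering
        "timestamp": ts,             # media-clock units, e.g. 48 kHz for AES67 audio
        "ssrc": ssrc,                # identifies the stream source
    }
```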

As for the audio itself, standard PCM is the format of choice. Andreas details the different format options available, such as 24-bit samples with 8 channels and 48 samples per packet. By varying the format permutations, we can increase the sample rate to 96kHz or change the number of audio channels. To signal all of this format information, Session Description Protocol (SDP) messages are exchanged – small text descriptions, defined in RFC 4566, outlining the format of the upcoming audio. For a deeper introduction to IP basics and these topics, have a look at Ed Calverley’s talk.
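
To make that concrete, below is a minimal sketch of what an AES67-style SDP description might look like, built and printed in Python. The addresses, ports and PTP grandmaster ID are made-up example values, not ones from the talk:

```python
# Illustrative AES67-style SDP (RFC 4566). All values are examples.
sdp = "\r\n".join([
    "v=0",
    "o=- 1311738121 1311738121 IN IP4 192.168.1.10",  # origin: session ID and source
    "s=AES67 example stream",                         # session name
    "c=IN IP4 239.69.1.1/32",                         # multicast group for the audio
    "t=0 0",                                          # unbounded session
    "m=audio 5004 RTP/AVP 96",                        # RTP audio on UDP port 5004
    "a=rtpmap:96 L24/48000/8",     # 24-bit linear PCM, 48 kHz, 8 channels
    "a=ptime:1",                   # 1 ms packets, i.e. 48 samples per packet
    "a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-00-00-01:0",  # PTP reference clock
    "a=mediaclk:direct=0",
])
print(sdp)
```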

The second half of the video is an introduction to ST 2110. A deeper dive can be found elsewhere on the site from Wes Simpson.
Andreas starts from the basis of ST 2022-6, showing how that was an SDI-based format in which all the audio, video and metadata were combined together. ST 2110 splits the media into what are known as ‘essences’, allowing them to follow separate workflows without requiring lots of de-embedding and re-embedding processes.

Like most modern standards – ATSC 3.0 is another example – SMPTE ST 2110 is a suite of many standards documents. Andreas takes the time to explain each one, plus those currently being worked on. The first is ST 2110-10, which defines the use of PTP for timing and synchronisation. This uses SMPTE ST 2059 to relate PTP time to the phase of media essences.
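
The elegance of the 2059 approach is that, given a shared PTP time, every device can independently compute the same RTP timestamp for any instant. A minimal sketch of the idea, assuming a 48 kHz audio media clock and the SMPTE epoch of 1970-01-01 TAI:

```python
RATE_HZ = 48_000   # media clock; ST 2110 video instead uses 90 kHz

def rtp_timestamp(ptp_time_s: float) -> int:
    """Media-clock ticks since the SMPTE epoch (1970-01-01 00:00 TAI),
    wrapped into RTP's 32-bit timestamp field."""
    return int(ptp_time_s * RATE_HZ) % 2**32
```

Because sender and receiver can both do this sum, streams can be aligned without ever being compared directly.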

2110-20 is up next: the main standard defining the use of uncompressed video, with headline features such as being raster/resolution agnostic and supporting a range of colour sampling options. 2110-21 defines traffic shaping; Andreas takes time to explain why traffic shaping is necessary and what Narrow, Narrow-Linear and Wide mean in terms of packet timing; a back-of-envelope example follows below. Finishing the video theme, 2110-22 defines the carriage of mezzanine-compressed video. Intended for codecs like TICO and JPEG XS which offer light, fast compression, this is the first time that compressed media has entered the 2110 suite.
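
To see why shaping is needed, consider some rough, illustrative arithmetic (my figures, not the talk’s): an uncompressed HD frame is several megabytes, so every frame bursts into thousands of packets which a sender must pace rather than fire out at line rate.

```python
# Back-of-envelope packet pacing for 1080p60, YCbCr 4:2:2 at 10 bits.
width, height, bits_per_pixel, fps = 1920, 1080, 20, 60
payload_bytes = 1200                                  # typical 2110-20 payload size

frame_bytes = width * height * bits_per_pixel // 8    # 5,184,000 bytes per frame
packets_per_frame = -(-frame_bytes // payload_bytes)  # ceiling division -> 4320
spacing_us = 1 / fps / packets_per_frame * 1e6
print(f"{packets_per_frame} packets per frame, one every {spacing_us:.2f} µs")
```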

2110-30 marks the beginning of the audio standards, describing how AES67 can be used. As Andreas demonstrates, AES67 has some modes which are not compatible with 2110, so he spends time explaining the constraints and how to implement them (summarised in the sketch below). For more detail on this topic, check out his previous talk on the matter. 2110-31 introduces AES3 audio which, as in SDI, provides the ability to carry not only PCM audio but also non-PCM audio such as Dolby E and Dolby D.
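
Those constraints are commonly summarised as conformance levels; as a rough aide-mémoire – the standard itself is the authority on the details – they look something like this:

```python
# Commonly cited ST 2110-30 conformance levels (check the standard for
# the authoritative definitions and the extended AX/BX/CX levels).
LEVELS = {
    "A": {"sample_rate_hz": 48_000, "packet_time_ms": 1.0,   "channels": (1, 8)},
    "B": {"sample_rate_hz": 48_000, "packet_time_ms": 0.125, "channels": (1, 8)},
    "C": {"sample_rate_hz": 96_000, "packet_time_ms": 0.125, "channels": (1, 8)},
}
```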

Finishing up the talk, we hear about 2110-40, which governs transport of ancillary metadata, and get a look at the standards still being written: 2110-23 for a single video essence sent over multiple 2110-20 streams, 2110-24 for transport of SD signals and 2110-41 for transport of extensible, dynamic metadata.

Watch now!
Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX GmbH

Video: ATSC 3.0 Part II – Cutting Edge OFDM with IP

RF, modulation, Single Frequency Networks (SFNs) – there’s a lot to love about this talk, the second in a series of ATSC 3.0 seminars, though much of it is transferable to DVB. Today we’re focussed on transmission, showing how ATSC 3.0 improves on DVB-T2, how it simultaneously delivers feeds with different levels of robustness, the benefits of SFNs and much more.

In the second in this series of ATSC 3.0 talks, GatesAir’s Joe Seccia leads the proceedings, starting by explaining why ATSC 3.0 didn’t simply adopt DVB-T2’s modulation scheme. The answer, explained in detail by Joe, is that by putting in further work, they got closer to the Shannon limit than DVB-T2 does. He goes on to highlight the documents within the ATSC 3.0 suite which define the RF physical layer.

After showing how the different processes, such as convolutional encoding and multiplexing, fit together in the transmission chain, Joe focuses on Layered Division Multiplexing (LDM), where a high-robustness signal is carefully combined with a lower-robustness signal in such a way that, although each overlays the other, there is enough separation for both to be decoded.
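
A toy model of the LDM idea – with an illustrative injection level, not a figure from the talk – looks something like this: the two layers are simply summed, with the enhanced layer held several dB below the core so the core stays decodable.

```python
import numpy as np

injection_db = -5.0                    # enhanced layer level relative to the core
gain = 10 ** (injection_db / 20)

core = np.random.choice([-1.0, 1.0], 1000)       # stand-in symbols, robust layer
enhanced = np.random.choice([-1.0, 1.0], 1000)   # stand-in symbols, capacity layer
combined = core + gain * enhanced                # what actually goes to air

# A receiver decodes the core first, reconstructs and subtracts it,
# then decodes the enhanced layer from the residual.
residual = combined - core
```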

Next we are introduced to PLPs – Physical Layer Pipes. These can also be found in DVB-T2 and DVB-S2 and are logical channels carrying one or more services, with a modulation scheme and robustness particular to that individual pipe. Within those lie Frames and Subframes, and Joe gives a good breakdown of the difference in meaning of the three, the Frame being at the top of the pile containing the other two. We look at how the bootstrap signal, sent with a known modulation scheme and symbol rate, details what’s coming next; it’s this that allows such dynamic working, with streams sent under different modulation settings. The bootstrap is also important as it carries Emergency Alert System (EAS) signalling.

Layered Division Multiplexing returns as the next hot topic and elicits questions from the audience. LDM is important because it allows two streams carrying independent or related broadcasts to be sent at the same time. For instance, this could deliver UHD content with an HD version underneath, the HD modulated to give much better robustness.

Another way of maintaining robustness is to establish an SFN, which is now possible with ATSC 3.0. Joe explains how this works and how the RF from different antennae can help with reception. Importantly, he also outlines how to work out the maximum separation between antennae and talks through different deployment techniques. He then works through some specific cases to understand the transmission power needed.
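
The basic geometry is simple enough to sketch: signals from two transmitters must arrive within the guard interval of one another, or they act as interference rather than reinforcement. With an illustrative guard interval (ATSC 3.0 offers a range of them):

```python
SPEED_OF_LIGHT = 299_792_458        # m/s
guard_interval_s = 300e-6           # illustrative value only

max_path_difference_km = SPEED_OF_LIGHT * guard_interval_s / 1000
print(f"max path difference ≈ {max_path_difference_km:.0f} km")   # ≈ 90 km
```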

As the end of the video nears, Joe talks about MIMO transmission, explaining how this, among other benefits, can allow channel bonding where two 6 MHz channels are treated as a single 12 MHz channel. He talks about how PTP can complement GPS in maintaining timing when diverse systems are linked with Ethernet, and he then finishes with a walkthrough of configuring a system.

Watch now!
Speakers

Joe Seccia
Manager, TV Transmission Market and Product Development Strategy
GatesAir

Video: Everyone is Streaming; Can the Infrastructure Handle it?

How well is the internet infrastructure dealing with the increase in streaming during the Covid-19 pandemic? What have we learnt in terms of delivering services and have we seen any changes in the way services are consumed? This video brings together carriers, vendors and service providers to answer these questions and give a wider picture.

The video starts off by getting different perspectives on how the pandemic has affected each business, sharing key data points. Jeff Budney from Verizon says that carriers have had a ‘whirlwind’ few weeks. Conviva’s José Jesus says that while they are only seeing 3% more devices, there was a 37% increase in hours of video consumed. Peaks due to live sports have gone, but primetime is now more spread out and more stable, a point made by both Jeff Gilbert from Qwilt and José.

“We’ve seen a whole year’s worth of traffic growth…it’s really been incredible” — Jeff Budney, Verizon

While it’s clear that growth has happened, the conversation turns to whether this has caused problems. We hear how some countries did see reductions in quality of experience while others saw none. This experience is showing where the bottlenecks are, whether within ISP infrastructure or in individual players/services which haven’t been well optimised. Indeed, explains Jason Thibeault, Executive Director of the Streaming Video Alliance, the situation seems to be shining a light on the operational resilience, rather than the technical capacity, of ISPs.

Thierry Fautier from Harmonic emphasises the benefits of content-aware encoding, whereby services could reduce bandwidth by “30 to 40 percent”, before talking about codec choice. AVC (a.k.a. H.264) accounts for 90%+ of all HD traffic. Thierry contends that by switching to both HEVC and content-aware encoding, services could reduce their bandwidth by up to a factor of four – reasonable when you compound a roughly 50% saving from HEVC with the 30 to 40% from content-aware encoding.

Open Caching is a working group creating specifications to standardise an interface that lets ISPs pull content from service providers into a local cache. Moving content to the edge in this way helps avoid bottlenecks by locating content as close to viewers as possible.

The elephant in the room is that Netflix reduced quality/bitrate in order to help some areas cope. Verizon’s Jeff Budney points out that this runs contrary to the industry’s approach to deployment, which has assumed there is always the capacity to provide the needed scale. If that’s true, how can one tweet from a European Commissioner have had such an impact? The follow-on point is that if YouTube and Netflix are now sending 25% less data, as reports suggest, other providers’ players will simply take up the slack – that’s how ABR works, with no intent required. If the rest of the industry benefits from the big providers ‘dialling back’, is this an effective measure and is it fair?

The talk concludes by hitting topics including multicast ABR, more intelligent ways to manage large-scale capacity issues, more on Open Caching and delivery protocols.

Watch now!
Speakers

Thierry Fautier
VP Video Strategy, Harmonic Inc.
President-Chair, Ultra HD Forum
Eric Klein
Director, Content Distribution – Disney+/ESPN+, Disney Streaming Services
Co-Chair, Open Cache Working Group, Streaming Video Alliance
José Jesus
Senior Product Manager,
Conviva
Jeff Budney
Manager,
Verizon
Jeffrey Gilbert
VP Strategy and Business Development, CP,
Qwilt
Jason Thibeault
Executive Director,
Streaming Video Alliance

On Demand Webinar: The Technology of Motion-Image Acquisition

A lot of emphasis is put on the tech specs of cameras, but this misses a lot of what makes motion-image acquisition an art form as much as a science. To understand the physics of lenses, it’s vital we also understand the psychology of perception. And to understand what ‘4K’ really means, we need to understand how the camera records the light and how it stores the data. Getting a grip on these core concepts allows us to navigate a world of mixed messages where every camera manufacturer, from webcam to phone, from DSLR to cinema, is vying for our attention.

In the first of four webinars produced in conjunction with SMPTE, Russell Trafford-Jones from The Broadcast Knowledge welcomes SMPTE Fellows Mark Schubin and Larry Thorpe to explain these fundamentals, providing a great intro for those new to the topic and filling in some blanks for those who have heard it before!

Russell will start by introducing the topic and exploring what makes some cameras suitable for certain types of shooting – say, live television – and others for cinema. He’ll talk about the place for smartphones and DSLRs in our video-everywhere culture. Then he’ll examine the workflows needed for different genres, which drive the definitions of these cameras and lenses; if your live TV show is going to be seen 2 seconds later by 3 million viewers, this determines many features of your camera that digital cinema doesn’t have to deal with, and vice versa.

Mark Schubin will be talking about lighting, optical filtering, sensor sizes and lens mounts. Mark spends some time explaining how light is made up and created, whereby the ‘white’ that we see may comprise thousands of wavelengths of light, or just a few. So the type of light can be important for lighting a scene, and knowing about it important for deciding on your equipment. The sensors which receive this light are also well worth understanding. It’s well known that there are red-, green- and blue-sensitive pixels, but less well known is that there is a microlens in front of each one. Granted it’s pricey, but the lens we think most about is just one among several million. Mark explains why these microlenses are there and the benefits they bring.

Larry Thorpe, from Canon, will take on the topic of lenses, starting from the basics of what we’re trying to achieve with a lens and working up to explaining why we need so many pieces of glass to make one. He’ll examine the important aspects of a lens which determine its speed and focal length. Prime and zoom are important types of lens to understand as each represents a compromise. Furthermore, we see that zoom lenses take careful design to ensure that focus is maintained throughout the zoom range, also known as tracking.
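
For reference, the ‘speed’ here is the familiar f-number: focal length divided by the diameter of the entrance pupil. A one-line illustration with made-up values:

```python
def f_number(focal_length_mm: float, pupil_diameter_mm: float) -> float:
    """Lens speed: focal length over entrance-pupil diameter."""
    return focal_length_mm / pupil_diameter_mm

print(f"f/{f_number(50, 25):.1f}")   # a 50 mm lens with a 25 mm pupil is f/2.0
```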

Larry will also examine the outputs of cameras, the most obvious being the SDI out of the CCU for broadcast cameras and the raw output from cinema cameras. For cinema use, maintaining quality is usually paramount so, where possible, nothing is discarded, hence ‘raw’ files, so named because they record, as closely as practical, the actual sensor data received. The broadcast equivalent is predominantly YCbCr with 4:2:2 colour subsampling, meaning the sensor data has been interpreted and processed into pixels and half the colour information has been discarded. This still looks great for many uses, but when you want to put your image through a meticulous post-production process, you need the complete picture.
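
The ‘half the colour information’ figure falls straight out of the J:a:b subsampling notation, as this small illustrative sketch shows:

```python
def samples_per_pixel(scheme: str) -> float:
    """Average samples per pixel for a J:a:b chroma subsampling scheme."""
    j, a, b = (int(x) for x in scheme.split(":"))
    return (j + a + b) / j   # one luma sample per pixel plus two subsampled chroma planes

print(samples_per_pixel("4:4:4"))   # 3.0 - full colour
print(samples_per_pixel("4:2:2"))   # 2.0 - chroma halved, a third less data overall
```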

The SMPTE Core Concepts series of webcasts is free to all and aims to support individuals in deepening their knowledge. This webinar is in collaboration with The Broadcast Knowledge which, by talking about a new video or webinar every day, helps empower each person in the industry by offering a single place to find educational material.

Watch now!
Speakers

Mark Schubin
Engineer and Explainer
Larry Thorpe
Senior Fellow,
Canon U.S.A., Inc.
Russell Trafford-Jones
Editor, The Broadcast Knowledge
Manager, Services & Support, Techex
Exec Member, IET Media