Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, which give an introduction to a large number of topics, are far too rare. For those starting out, or those who need to revise a topic, this really hits the mark, particularly as it covers many new topics.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it's built on separate media (essence) flows. He covers how synchronisation is maintained, gives an overview of the many parts of the SMPTE ST 2110 suite, and talks in more detail about the audio and metadata parts of the standard.

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use, such as BT.709, BT.2100 and P3, before turning to file formats. With the advent of HDR video and displays which can show very bright video, Eric takes some time to explain why this could represent a problem for visual health, as we don't fully understand how displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and how these are standardised.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and to add more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, each in its own way a successor to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to send video reliably and with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers and looking at the interesting problem of front speakers in cinemas. They have long been placed behind the screen, which has meant the screens have to be perforated to let the sound through, and this itself interferes with the sound. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers have to move; but with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group

Video: Broadcast 101 – Audio in an IP Infrastructure

Uncompressed audio has been in the IP game a lot longer than uncompressed video. Because of this long history, it has had the chance to create a fair number of formats ahead of the current standard, AES67. Since many people were trying to achieve the same thing, we find that some formats are partly compatible with AES67, whilst others are not compatible at all.

To navigate this difficult world of compatibility, Axon CTO Peter Schut continues the Broadcast 101 webinar series with this video recorded this month.

Peter starts by explaining the different audio formats available today, including Dante, RAVENNA and others, and outlines the ways in which they do and don't interoperate. After spending a couple of minutes summarising each format individually, including the two SMPTE audio formats, ST 2110-30 and -31, he shows a helpful table comparing them.

Timing is next on the list: Peter discusses PTP and the way SMPTE ST 2059 is used, then covers packet time, explaining how the RTP payload fits into the equation. The payload directly affects the duration of audio you can fit into a packet; that duration is important for keeping latency low and is restricted to either 1 ms or 125 microseconds by SMPTE ST 2110-30.
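To put numbers on that, here is a minimal sketch (Python, not from the webinar) of how packet time translates into samples and payload bytes for 24-bit linear audio at 48 kHz:

    # Sketch: RTP payload size for ST 2110-30 linear PCM (L24) audio.
    # The constants follow the standard's constraints; the helper itself is illustrative.
    SAMPLE_RATE = 48_000     # Hz, baseline sample rate for ST 2110-30
    BYTES_PER_SAMPLE = 3     # L24 = 24-bit linear PCM

    def payload_bytes(packet_time_s, channels):
        """Bytes of audio carried in one RTP packet."""
        samples = int(SAMPLE_RATE * packet_time_s)  # samples per channel per packet
        return samples * channels * BYTES_PER_SAMPLE

    print(payload_bytes(0.001, 8))     # 1 ms packets: 48 samples -> 1152 bytes
    print(payload_bytes(0.000125, 8))  # 125 us packets: 6 samples -> 144 bytes

The shorter packet time trades network overhead for lower latency: six samples per channel arrive every 125 microseconds rather than forty-eight every millisecond.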

Peter finishes up the webinar by going into further detail on the interoperability problems between the formats.

Watch now!

Speaker

Peter Schut
CTO,
Axon

Video: ST 2110-30 and NMOS IS-08 — Audio Transport and Routing

Andreas Hildebrand starts by introducing SMPTE ST 2110 and how it works in terms of sending the essences separately using multicast IP. This talk focusses on the ability of audio-only devices to subscribe to the audio streams without needing the video streams. Andreas then goes on to introduce AES67, a standard which defines interoperability for audio over IP, covering timing, session description, encoding, QoS, transport and much more. Of all the things which are defined in AES67, discovery was deliberately not included, and Andreas explains why.
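To give a flavour of the session description side, below is a hypothetical SDP file for an AES67/ST 2110-30 stream; the addresses and PTP grandmaster ID are invented for illustration:

    v=0
    o=- 1423986 1423994 IN IP4 192.168.1.10
    s=Example AES67 / ST 2110-30 audio
    c=IN IP4 239.69.1.10/32
    t=0 0
    m=audio 5004 RTP/AVP 96
    a=rtpmap:96 L24/48000/8
    a=ptime:1
    a=ts-refclk:ptp=IEEE1588-2008:00-11-22-FF-FE-33-44-55:0
    a=mediaclk:direct=0

The rtpmap line declares 24-bit linear PCM at 48 kHz with 8 channels, ptime gives the 1 ms packet time, and the ts-refclk/mediaclk attributes tie the stream to its PTP reference, which is what lets an audio-only receiver stay in sync without ever touching the video essence.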

Within SMPTE ST 2110, constraints are added to AES67 under the sub-standard ST 2110-30. The different categories A, B and C (and their X counterparts) are explained in terms of how many audio channels may be carried and which packet times are permitted, with the implications of each detailed.
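For reference, the baseline constraints behind those categories can be jotted down as follows (a Python summary based on the published conformance levels; the structure itself is purely illustrative):

    # Illustrative summary of the ST 2110-30 baseline conformance levels.
    # level: (sample rate in Hz, packet time in seconds, max channels)
    LEVELS = {
        "A": (48_000, 0.001,    8),   # 1 ms packets, up to 8 channels
        "B": (48_000, 0.000125, 8),   # 125 us packets, up to 8 channels
        "C": (48_000, 0.000125, 64),  # 125 us packets, up to 64 channels
    }
    # The AX/BX/CX counterparts add 96 kHz sampling on top of these levels.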

As for discovery and other aspects of creating a working system, Andreas looks towards AMWA's NMOS suite, summarising the specifications for Discovery & Registration, Connection Management, Network Control, Event & Tally and Audio Channel Mapping. It's the latter which is the focus of the last part of this talk.

IS-08 defines input and output blocks which allow a channel mapping to be specified. Using IS-05, we can determine which source stream should connect to which destination device. IS-08 then gives the capability to determine which of the audio channels within this stream are mapped to the output(s) of the receiving device and, on top of this, allows mapping from multiple received streams into the output(s) of one device. The talk finishes with a deeper look at this process, including where example code can be found.
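As a mental model for that mapping, here is a small sketch (plain Python data structures, not the actual IS-08 API) of one device drawing channels from two received streams into its four outputs:

    # Hypothetical sketch of an IS-08-style channel map: each output channel
    # of the receiving device points at a (received stream, channel index)
    # pair, or None if left unrouted. Stream and channel names are invented.
    inputs = {
        "streamA": ["mic1", "mic2"],          # 2-channel ST 2110-30 stream
        "streamB": ["FL", "FR", "C", "LFE"],  # 4-channel ST 2110-30 stream
    }

    output_map = {
        0: ("streamB", 0),   # output 0 <- streamB channel 0 (FL)
        1: ("streamB", 1),   # output 1 <- streamB channel 1 (FR)
        2: ("streamA", 0),   # output 2 <- streamA channel 0 (mic1)
        3: None,             # output 3 unrouted
    }

    for out, src in output_map.items():
        label = "unrouted" if src is None else inputs[src[0]][src[1]]
        print(f"output {out}: {label}")

Note how outputs 0-2 mix channels from both streams, which is exactly the multiple-streams-into-one-device capability described above.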

Watch now!

Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX

Video: Implementing AES67 and ST 2110-30 in Your Plant

AES67 is a flexible standard, but with that flexibility comes complexity and nuance. Implementing it within ST 2110-30 takes some care, and this talk covers lessons learnt in doing exactly that.

AES67 is a standard defined by the Audio Engineering Society to enable high-performance audio-over-IP streaming interoperability between various AoIP systems like Dante, WheatNet-IP and Livewire. It provides comprehensive interoperability recommendations in the areas of synchronization, media clock identification, network transport, encoding and streaming, session description, and connection management.

The SMPTE ST 2110 standards suite makes it possible to separately route and break away the essence streams – audio, video, and ancillary data. ST 2110-30 addresses system requirements and payload formats for uncompressed audio streams and refers to the subset of AES67 standard.

In this video, Dominic Giambo from Wheatstone Corporation discusses tips for implementing the AES67 and ST 2110-30 standards in a lab environment consisting of over 160 devices (consoles, surfaces, hardware and software I/O blades) and 3 different automation systems. The aim of the test was to pass audio through every single device, creating a very long chain, in order to detect any defects.

The following topics are covered:

  • SMPTE ST 2110-30 as a subset of AES67 (support of the PTP profile defined in SMPTE ST 2059-2, an offset value of zero between the media clock and the RTP stream clock, option to force a device to operate in PTP slave-only mode)
  • The importance of using IEEE-1588 PTP v2 master clock for accuracy
  • Packet structure (UDP and RTP header, payload type) – see the sketch after this list
  • Network configuration considerations (mapping out IP and multicast addresses for different vendors, keeping all devices on the same subnet)
  • Discovery and control (SDP stream description files, configuration of signal flow from sources to destinations)
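
Picking up the packet-structure point above, a minimal sketch of the 12-byte RTP fixed header that precedes the audio samples might look like this (a Python illustration, not taken from the talk; the field values are invented):

    # Hypothetical sketch of the RTP fixed header carried inside the UDP
    # payload of an ST 2110-30 packet.
    import struct

    def rtp_header(seq, timestamp, ssrc, payload_type=96):
        """Pack the 12-byte RTP fixed header (version 2, no CSRCs)."""
        vpxcc = 2 << 6              # version=2, padding/extension/CC all zero
        m_pt = payload_type & 0x7F  # marker bit clear, dynamic payload type
        return struct.pack("!BBHII", vpxcc, m_pt, seq, timestamp, ssrc)

    # Timestamp advances by 48 per packet for 1 ms packets at 48 kHz.
    hdr = rtp_header(seq=1, timestamp=48, ssrc=0x12345678)
    print(hdr.hex())  # the L24 audio samples follow these 12 bytes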

Watch now!

You can download the slides here.

Speaker

Dominic Giambo
Senior Embedded Engineer
Wheatstone Corporation