Video: AES67/SMPTE ST 2110 Audio Transport & Routing (NMOS IS-08)

Let’s face it, SMPTE ST 2110 isn’t trivial to get up and running at scale. It carries audio as AES67, though with some restrictions which can cause problems for full interoperability with non-2110 AES67 systems. But once all of this is up and running, you’re still lacking discoverability, control and management. These aspects are covered by AMWA’s NMOS IS-04, IS-05 and IS-08 projects.

Andreas Hildebrand, Evangelist at ALC NetworX, takes the stand at the AES exhibition to explain how this can all work together. He starts by reiterating one of the main benefits of the move to 2110 over 2022-6, namely that audio devices no longer need to receive the whole video signal and de-embed the audio. With a dependency on PTP for timing, SMPTE ST 2110-30 and -31 define the carriage of AES67 and AES3 respectively.

We take a look at IS-04 and IS-05 which define registration, discovery and connection management. Using an address usually received from DHCP, new devices on the network put an entry into an IS-04 registry, which can be queried via an API to find out which senders and receivers are available in the system. IS-05 can then use this information to create connections between devices. IS-05, Andreas explains, lets a controller issue a connection request to endpoints asking them to connect; it’s then up to the endpoints themselves to establish the connection as appropriate.
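By way of illustration (not drawn from the talk), here is a minimal sketch of what a controller’s side of that exchange might look like, assuming a registry at a made-up address and the commonly documented IS-04 Query API and IS-05 Connection API paths. All URLs, IDs and version numbers here are illustrative; real deployments discover these via DNS-SD and the registry itself.

# Minimal sketch of an NMOS controller interaction (illustrative only).
# Assumes a registry at registry.example.com and typical IS-04/IS-05 paths.
import requests

QUERY_API = "http://registry.example.com/x-nmos/query/v1.3"

# IS-04: ask the registry which senders and receivers exist in the system
senders = requests.get(f"{QUERY_API}/senders").json()
receivers = requests.get(f"{QUERY_API}/receivers").json()

sender = senders[0]        # pick a sender (hypothetical choice)
receiver = receivers[0]    # pick a receiver to connect it to

# IS-05: PATCH the receiver's staged endpoint on its own Connection API,
# telling it which sender to subscribe to and when to activate. The
# Connection API base URL would normally be found via the receiver's device
# entry in the registry; it is hard-coded here for brevity.
connection_api = "http://receiver.example.com/x-nmos/connection/v1.1"
patch = {
    "sender_id": sender["id"],
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
}
resp = requests.patch(
    f"{connection_api}/single/receivers/{receiver['id']}/staged",
    json=patch,
)
print(resp.status_code, resp.json())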

Once a connection has been made, there remains the problem of audio mapping. Andreas uses the example of a single stream containing multiple channels: where a device only needs one or two of these, IS-08 can be used to tell the receiver which channels it should be decoding. This is ideal when delivering audio to a speaker. Andreas then walks us through worked examples.
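As a rough sketch of the idea, and only that, the request below asks a receiver to route two channels of an eight-channel input to a stereo output. The endpoint path, the body layout and the names "speaker_out" and "stream_in" are my simplified assumptions; the exact API and schema are defined by AMWA IS-08.

# Illustrative IS-08-style channel-mapping request (simplified, not the
# definitive schema - consult AMWA IS-08 for the real API and body format).
# Channels 5 and 6 of the hypothetical input "stream_in" are mapped to the
# left and right legs of the hypothetical output "speaker_out".
import requests

CHANNELMAPPING_API = "http://receiver.example.com/x-nmos/channelmapping/v1.0"

mapping_request = {
    "activation": {"mode": "activate_immediate"},
    "action": {
        "speaker_out": {
            "0": {"input": "stream_in", "channel_index": 4},  # left  <- ch 5
            "1": {"input": "stream_in", "channel_index": 5},  # right <- ch 6
        }
    },
}

resp = requests.post(f"{CHANNELMAPPING_API}/map/activations", json=mapping_request)
print(resp.status_code, resp.json())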

Watch now!
Speakers

Andreas Hildebrand
Ravenna Technology Evangelist,
ALC NetworX

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, which introduce a large number of topics, are far too rare. For those starting out or those who need to revise a topic, this really hits the mark, particularly as many new topics are covered.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and gives an overview of the many parts of the SMPTE ST 2110 suite, before talking in more detail about the audio and metadata parts of the standard.
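To give a feel for the synchronisation point: because every device shares the same PTP-derived clock, each essence flow can stamp its RTP packets from that common time base at its own media clock rate, which is what lets receivers re-align the separate flows. The arithmetic below is purely illustrative and not taken from the video; time.time() stands in for a PTP-disciplined clock.

# Illustrative only: deriving RTP timestamps for separate essence flows from
# a shared PTP-derived time base.
import time

AUDIO_CLOCK_HZ = 48_000   # typical ST 2110-30 audio media clock
VIDEO_CLOCK_HZ = 90_000   # typical ST 2110-20 video media clock

def rtp_timestamp(ptp_seconds: float, clock_rate_hz: int) -> int:
    """Map a common (PTP-derived) time to a 32-bit RTP timestamp."""
    return int(ptp_seconds * clock_rate_hz) % 2**32

now = time.time()  # a real device would read its PTP-locked clock here
print("audio RTP ts:", rtp_timestamp(now, AUDIO_CLOCK_HZ))
print("video RTP ts:", rtp_timestamp(now, VIDEO_CLOCK_HZ))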

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use such as 709, 2100 and P3 before turning to file formats. With the advent of HDR video and displays which can show bright video, Eric takes some time to explain why this could represent a problem for visual health as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity for improving workflows and adding more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers and the interesting problem of forward speakers in cinemas. They have long been placed behind the screen, which has meant screens have to be perforated to let the sound through, which in turn interferes with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers have to move; with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group

Video: Routing AES67

Well ahead of video, audio moved to uncompressed transport over IP and has been reaping the benefits ever since. With workflows that have had longer to evolve and, as has always been the case, a much higher quantity of feeds than video, the solutions on offer are notably more mature.

Anthony from Ward-Beck Systems talks about the advantages of audio over IP and the things which weren’t possible before. In a very accessible talk, you’ll hear as much about soup cans as you will about the more technical aspects, like SDP.

Whilst uncompressed audio over IP started a while ago, it doesn’t mean that it’s not still being developed – in fact it’s the interface with the video world where a lot of the focus is now, with SMPTE ST 2110-30 and -31 determining how audio can flow alongside video and other essences. As has been seen in other talks here on The Broadcast Knowledge, there’s a fair bit to know (here’s a full list).

To simplify this, Anthony, who is also the Vice Chair of AES Toronto, describes the work the AES is doing to certify equipment as AES67 ‘compatible’ – and what that would actually mean.

This talk finishes with a walk-through of a real-world OB deployment of AES67, which included simple touches such as using Google Docs for sharing links as well as more technical techniques such as a virtual sound card.

Packed full of easy-to-understand insights which are useful even to those who live for video, this IP Showcase talk is worth a look.

Watch now!

Speaker

Anthony P. Kuzub
IP Audio Product Manager,
Ward-Beck Systems

Video: AES67 Open Media Standard for Pro-Audio Networks

AES67 is a method of sending audio over IP which was standardised by the Audio Engineering Society as a way of sending uncompressed audio over networks between equipment. It’s become widespread and is part of SMPTE’s professional essences-over-IP standards suite, ST 2110.

Here, Conrad Bebbington gives us an introduction to AES67, explaining why AES67 exists and what it tries to achieve. Conrad then goes on to look at interoperability with other competing standards like Dante. After going into some implementation details, importantly, the video then looks at the ‘Session Description Protocol’, SDP, and the ‘Session Initiation Protocol’, SIP, which are important parts of how AES67 works.
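To give a feel for the SDP side of this, below is a hand-written example of the kind of session description an AES67 sender might advertise (the addresses, session name and PTP clock identity are made up), along with a trivial bit of parsing to pull out the audio format. It is a sketch rather than anything from the video itself.

# A representative (made-up) AES67 sender SDP and a trivial parse of its
# audio format line. Real SDPs are generated by the sending device.
EXAMPLE_SDP = """\
v=0
o=- 1311738121 1311738121 IN IP4 192.168.1.10
s=Example AES67 stream
c=IN IP4 239.69.11.44/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
a=mediaclk:direct=0
"""

def audio_format(sdp: str) -> tuple[str, int, int]:
    """Return (encoding, sample_rate, channels) from the rtpmap attribute."""
    for line in sdp.splitlines():
        if line.startswith("a=rtpmap:"):
            encoding, rate, channels = line.split(" ")[1].split("/")
            return encoding, int(rate), int(channels)
    raise ValueError("no rtpmap attribute found")

print(audio_format(EXAMPLE_SDP))  # ('L24', 48000, 2)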

Other topics covered are:

  • Packetisation – how much audio is in a packet, number of channels etc. (see the sketch after this list)
  • Synchronisation – using PTP
  • What are SDP and SIP and how are they used
  • Use of IGMP multicast
  • Implementation availability in open source software
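As a rough illustration of the packetisation point, the snippet below works out how many payload bytes a packet of L24 audio holds at 48 kHz for a given packet time and channel count. The 1 ms default and the shorter optional packet times come from AES67; the function and parameter names are mine.

# Back-of-the-envelope AES67 packet sizing (illustrative arithmetic only).
# With the default 1 ms packet time at 48 kHz, each packet carries 48 samples
# per channel; L24 audio uses 3 bytes per sample.
def rtp_payload_bytes(sample_rate_hz: int = 48_000,
                      ptime_ms: float = 1.0,
                      channels: int = 2,
                      bytes_per_sample: int = 3) -> int:
    samples_per_packet = int(sample_rate_hz * ptime_ms / 1000)
    return samples_per_packet * channels * bytes_per_sample

print(rtp_payload_bytes())                 # 288 bytes: L24/48k, stereo, 1 ms
print(rtp_payload_bytes(channels=8))       # 1152 bytes: 8 channels, 1 ms
print(rtp_payload_bytes(ptime_ms=0.125))   # 36 bytes: stereo, 125 µs packets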

Watch now!

For a more in-depth look at AES67, watch this video

Speaker

Conrad Bebbington
Software Engineer,
Cisco