Video: Reinventing Intercom with SMPTE ST 2110-30

Intercom systems form the backbone of any broadcast production environment. Great strides have been made in the advancement of these systems, and matrix intercoms are a very mature solution now, with partylines, IFBs and groups, a wide range of connectivity options and easy signal monitoring. However, they have flaws as well: the initial cost is high and there's a lack of flexibility, as system size is limited by the matrix port count. It is possible to trunk multiple frames, but that is difficult, expensive and takes rack space. Moreover, everything cables back to a central matrix, which can be a single point of failure.

In this presentation, Martin Dyster from The Telos Alliance looks at the parallels between the emergence of Audio over IP (AoIP) standards and the development of products in the intercom market. First, a short history of Audio over IP protocols is given, covering Telos Livewire (2003), Audinate Dante (2006), Wheatstone WheatNet (2008) and ALC NetworX Ravenna (2010). With all these protocols available, the question of interoperability arose: if you try to connect equipment using two different AoIP protocols, it simply won't work.

In 2010, the Audio Engineering Society formed the X192 Working Group, which was the driving force behind AES67. The standard was ratified in 2013 and allows audio equipment from different vendors to be interconnected. In 2017, SMPTE adopted AES67 as the audio format for the ST 2110 suite of standards.

Audio over IP replaces the idea of connecting all devices point-to-point with multicast IP flows: all devices are connected via a common fabric, and audio routes are simply flows that go from one device to another. Martin explains how Telos were inspired by this approach to move away from matrix-based intercoms and create a distributed system in which there is no central core and the DSP processing is built into the intercom panels themselves. Each panel contains audio mix engines and a set of AES67 receivers and transmitters which use multicast IP flows. Any ST 2110-30 / AES67 compatible device present on the network can connect with the intercom panels without an external interface; analogue and other baseband audio needs to be converted to ST 2110-30 / AES67 first.
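To make that concrete, here is a minimal Python sketch of what an AES67 / ST 2110-30 receiver does at the network level: it joins a multicast group and unpacks L24 (24-bit PCM) audio from RTP packets. The addresses and stream parameters below are invented for illustration; real devices learn them from the stream's SDP description.

```python
import socket
import struct

# Illustrative placeholders; a real AES67 stream advertises its group,
# port, sample rate and channel count via SDP.
MCAST_GROUP = "239.69.1.1"   # hypothetical multicast group for one flow
PORT = 5004                  # RTP port commonly used by AES67 devices
CHANNELS = 2                 # channel count, normally taken from the SDP
BYTES_PER_SAMPLE = 3         # L24 = 24-bit big-endian linear PCM

# Open a UDP socket and join the multicast group. On the network, "making
# a route" amounts to exactly this: an IGMP membership report.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = socket.inet_aton(MCAST_GROUP) + socket.inet_aton("0.0.0.0")
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet = sock.recv(2048)
    # Minimal RTP parse: the fixed header is 12 bytes (CSRC lists and
    # header extensions are ignored for brevity).
    seq, timestamp = struct.unpack_from("!HI", packet, 2)
    payload = packet[12:]
    frames = len(payload) // (BYTES_PER_SAMPLE * CHANNELS)
    print(f"seq={seq} rtp_ts={timestamp} audio_frames={frames}")
```

A real receiver would also discipline its playout to the PTP-derived media clock; the point here is simply that an audio route is nothing more than subscribing to the right multicast flow.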

Martin finishes his presentation by highlighting the advantages of AoIP intercom systems, including lower entry and maintenance costs, easy expansion (multi-studio or even multi-site) and resilient operation (no single point of failure). Moreover, the adoption of multicast IP audio flows removes the need for DAs, patch bays and centralised routers, which reduces cabling and saves rack space.

Watch now!

Download the slides.

If you want to refresh your knowledge of AES67 and ST 2110-30, we recommend the presentation Video: Deep Dive into SMPTE ST 2110-30, 31 & AES 67 Audio by Leigh Whitcomb.

Speaker

Martin Dyster
VP Business Development
The Telos Alliance

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this, which give an introduction to a large number of topics, are far too rare. For those starting out, or for those who need to revise a topic, this one really hits the mark, particularly as it covers many newer topics.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it's built on separate media (essence) flows. He covers how synchronisation is maintained and gives an overview of the many parts of the SMPTE ST 2110 suite, before talking in more detail about the audio and metadata parts of the standard suite.
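As a rough illustration of the synchronisation model (not taken from the talk): each essence flow stamps its RTP packets with its own media clock counted from the shared PTP epoch, so a receiver can re-align audio, video and metadata that travelled as separate streams. A small Python sketch, using an arbitrary PTP time:

```python
# Sketch of ST 2110-style timestamps: every flow counts its own media
# clock ticks from the common PTP epoch, truncated to RTP's 32-bit field.
PTP_SECONDS = 1_700_000_000.0   # arbitrary TAI seconds since the PTP epoch

def rtp_timestamp(ptp_seconds: float, media_clock_hz: int) -> int:
    return int(ptp_seconds * media_clock_hz) % 2**32

video_ts = rtp_timestamp(PTP_SECONDS, 90_000)  # video flows use a 90 kHz clock
audio_ts = rtp_timestamp(PTP_SECONDS, 48_000)  # ST 2110-30 audio runs at 48 kHz
# Different numbers, same instant: the receiver maps both back to PTP time.
print(video_ts, audio_ts)
```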

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use such as 709, 2100 and P3 before turning to file formats. With the advent of HDR video and displays which can show bright video, Eric takes some time to explain why this could represent a problem for visual health as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and add more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own ways, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers and looking at the interesting problem of front speakers in cinemas. They have long been placed behind the screen, which means the screens have to be perforated to let the sound through, and the screen still interferes with the sound itself. Now that cinema screens are changing to solid displays, not completely dissimilar to large outdoor video displays, the speakers have to move; with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!

Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications

Eric Gsell
Staff Engineer,
Dolby Laboratories

Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR

Yvonne Thomas
Strategic Technologist
Digital TV Group

Video: The 7th Circle of Hell; Making Facility-Wide Audio-over-IP Work

When it comes to IP, audio has always been ahead of video. Whilst audio often makes up for it in scale, its relatively low bandwidth requirements meant computing was up to the task of audio-over-IP long before uncompressed video-over-IP. Despite the early lead, audio-over-IP isn't necessarily trivial, so this talk aims to give you a heads-up on the main hurdles so you can address them right from the beginning.

Matt Ward, Head of Audio at UK-based Jigsaw24, starts this talk by revisiting the reasons to go audio over IP (AoIP). The benefits vary from company to company: for some, reduced cabling is the draw; many hope it will be cheaper; for others, achievable scale is key. Matt's quick to point out the drawbacks we should be cautious of, not least complexity and skills gaps.

Matt fast-tracks us to better installations with a list of easy wins, some of which are basic but disproportionately important as the project continues, e.g. naming paths and devices consistently and keeping IP addresses in logical groups. Others are more nuanced, like ensuring cable performance: for Cat 6 cabling, it's easy to have the installer test each of your cables to confirm that the cable and all of its terminations are still working at peak performance.
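As one illustration of "logical groups", a scheme like the following can be scripted so that device names and addresses always encode role, studio and unit. The subnets and naming pattern here are invented for the example, not taken from the talk.

```python
import ipaddress

# Hypothetical plan: one /24 per device role, host ID derived from studio/unit.
ROLE_SUBNETS = {
    "panel": ipaddress.ip_network("10.10.1.0/24"),
    "node":  ipaddress.ip_network("10.10.2.0/24"),
    "clock": ipaddress.ip_network("10.10.3.0/24"),
}

def plan_device(role: str, studio: int, unit: int) -> tuple[str, str]:
    """Return a (name, address) pair that encodes role, studio and unit."""
    host = studio * 10 + unit          # e.g. studio 2, unit 3 -> host .23
    name = f"{role}-s{studio:02d}-{unit:02d}"
    return name, str(ROLE_SUBNETS[role].network_address + host)

print(plan_device("panel", studio=2, unit=3))  # ('panel-s02-03', '10.10.1.23')
```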

Planning your timing system is highlighted as next on the road to success, with smaller facilities more susceptible to problems if they only have one clock. Any facility's clocking has to be carefully considered, and Matt points out the part PTP's Best Master Clock Algorithm (BMCA) plays in electing which clock the system follows.
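For background: the BMCA elects a grandmaster by comparing the attributes each candidate clock announces, in a fixed order of precedence, which is what lets a backup clock take over automatically when the preferred one disappears. A simplified Python sketch of that dataset comparison (the real algorithm also considers network topology and per-port tie-breaks):

```python
from dataclasses import dataclass

@dataclass
class AnnouncedClock:
    """Fields a PTP clock announces, listed in BMCA precedence order."""
    priority1: int      # operator override; lower wins
    clock_class: int    # traceability, e.g. 6 = locked to GNSS, 248 = default
    accuracy: int       # encoded accuracy; lower is better
    variance: int       # offsetScaledLogVariance; lower is better
    priority2: int      # second operator tie-break
    identity: bytes     # unique EUI-64 identity; final tie-break

    def rank(self):
        return (self.priority1, self.clock_class, self.accuracy,
                self.variance, self.priority2, self.identity)

def best_master(candidates):
    # Lower wins at every level; unique identities guarantee a single winner.
    return min(candidates, key=AnnouncedClock.rank)

gnss_gm = AnnouncedClock(128, 6, 0x21, 0x4E5D, 128, bytes.fromhex("0001020304050607"))
backup  = AnnouncedClock(128, 248, 0xFE, 0xFFFF, 128, bytes.fromhex("0001020304050608"))
assert best_master([gnss_gm, backup]) is gnss_gm  # the GNSS-locked clock is elected
```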

Network considerations are the final stop on the tour, underlining that audio doesn't have to run on its own network as long as QoS is used to maintain performance. Matt details his reasons for keeping Spanning Tree Protocol off unless you explicitly know that you need it on. The talk finishes by discussing multicast distribution and IGMP snooping.
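On the QoS point, sharing a network safely relies on AoIP packets being marked so switches can prioritise them. Below is a minimal Python sketch of the sender side; the DSCP value shown is the commonly cited AES67 default for media (AF41, with EF reserved for PTP), which you should check against your own network's policy.

```python
import socket

DSCP_MEDIA = 34               # AF41, commonly used for AES67 media flows
TOS_MEDIA = DSCP_MEDIA << 2   # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_MEDIA)

# Everything sent from this socket now carries the AF41 marking, so switches
# with a matching QoS policy queue it ahead of best-effort traffic.
sock.sendto(b"\x80" + b"\x00" * 11, ("239.69.1.1", 5004))  # placeholder packet
```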

Watch now!
Speaker

Matt Ward
Head of Audio,
Jigsaw24

Video: How speakers and sound systems work: Fundamentals, plus Broadcast and Cinema Implementations

Many of us know how speakers work, but when it comes to phased arrays or object audio we can lose our footing. Wherever you are on that spectrum, this dive into speakers and sound systems will be beneficial.

Ken Hunold from Dolby Laboratories starts this talk with a short history of sound in both film and TV, unveiling the surprising facts that film reverted from stereo back to mono around the 1950s and that TV stayed mono right up until the 80s. We follow this history up to the present day, with the latest immersive sound systems and multi-channel sound in broadcasting.

Whilst the basics of speakers are fairly widely known, Ken starts by looking at how a basic speaker is set up and at the different shapes and versions of drivers and their enclosures, before looking at column speakers and line arrays.

Multichannel home audio continues to offer many options for speaker positioning and speaker type, including bouncing audio off the ceiling, so Ken explores these options and compares them, including the relatively recent soundbars.

Cinema sound has always been critical to the effect of cinema and foundational to the motivation for people to come together and watch films away from their TVs. There have long been many speakers in cinemas, and Ken charts how this has changed as immersive audio has arrived, enabling an illusion of infinite speakers with sound all around.

In the live entertainment space, sound is different again: the scale is often much bigger and the acoustics very different. Ken talks about the challenges of delivering sound to so many people, keeping the sound even throughout the auditorium and dealing with the delay of the relatively slow-moving sound waves. The talk wraps up with questions and answers.

Watch now!

Speaker

Ken Hunold
Sr. Broadcast Services Manager, Customer Engineering
Dolby Laboratories, Inc.