Video: Secrets of Near Field Monitoring

We don’t need to be running a recording studio to care about speaker placement. Broadcast facilities are full of audio monitoring rooms for a range of uses. The principles discussed in this talk by award-winning studio designer Carl Tatz can be put into practice wherever you want to sit in a room and listen to decent, flat audio.

Joining producer Mike Rodriguez, who moderates this webinar for the Audio Engineering Society (AES), Carl focuses the discussion on getting the right sound in audio control rooms. This is done through the ‘Null Positioning Ensemble’ (NPE), which treats the mixing console, listener and speakers ‘as one’ unit that can be moved around the room. The ensemble places the two speakers about 1.71m apart behind the console, firing across it, with their axes crossing 45cm in front of the console; the listener sits between the console and that crossing point, forming an equilateral triangle with the speakers. By sitting just short of where the speakers’ axes cross, Carl says, you hear the source rather than the speakers, giving the best audio reproduction.
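
Since the listener and both speakers form an equilateral triangle, the basic geometry is easy to work out. Here is a minimal sketch using the 1.71m spacing quoted in the talk; everything else is plain trigonometry, not anything taken from Carl's own tools.

```python
import math

# A minimal sketch of the equilateral-triangle geometry behind the
# Null Positioning Ensemble described above. The 1.71m spacing comes
# from the talk; the rest is basic trigonometry.

SPEAKER_SPACING_M = 1.71   # distance between the two monitors

# In an equilateral triangle each side equals the speaker spacing, so
# the listener sits one spacing away from each speaker. The
# perpendicular distance from the speaker baseline to the apex is
# spacing * sqrt(3) / 2.
apex_distance = SPEAKER_SPACING_M * math.sqrt(3) / 2

print(f"Distance from each speaker to the listener: {SPEAKER_SPACING_M:.2f} m")
print(f"Perpendicular distance from speaker line to apex: {apex_distance:.2f} m")
# With the speakers toed in to cross 45cm in front of the console, the
# listener sits between the console and that crossing point.
```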

This effect works when the tweeters are at the same height as the listener’s ears, says Carl, so the speakers should be adjusted to suit the listener. High frequencies are more directional than lower frequencies, so for accurate listening it’s important the speakers aren’t pointing too far off-axis. Exactly where to place your ensemble can seem daunting, but Carl has a calculator on his website which gives a great start, allowing you to model your room as a rectangle and find out where the null points are going to be. The nulls are where sound cancels out due to reflections, so moving your ensemble to avoid them is the key to a great sound; the sketch below gives a feel for the underlying maths. Carl details how this is done and how to then optimise for the ‘real world’ room rather than the mathematical model.
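
For a feel for what such a calculator does, here is a rough sketch (not Carl's actual tool) of the axial standing-wave modes of a rectangular room; the room dimensions used are hypothetical.

```python
# Axial standing-wave modes of a rectangular room: for each dimension
# L, modes fall at f = n * c / (2 * L). This is textbook acoustics,
# offered only as a rough illustration of what a room-mode calculator
# computes.

SPEED_OF_SOUND = 343.0  # m/s at roughly 20°C

def axial_modes(length_m, count=4):
    """First few axial mode frequencies (Hz) along one room dimension."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

# Hypothetical room dimensions, for illustration only.
room = {"length": 6.0, "width": 4.5, "height": 2.7}

for name, dim in room.items():
    modes = ", ".join(f"{f:.0f} Hz" for f in axial_modes(dim))
    print(f"{name} ({dim} m): {modes}")
# The nth mode along a dimension has pressure nulls at odd multiples
# of L / (2n) from the wall: positions to keep the ensemble away from.
```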

Carl talks about the importance of acoustic treatment to remove reflections and stop the room from being too lively, with some specific suggestions. In general, the aim is to remove first reflections, make the back wall and ceiling acoustically dead, and put bass traps in the corners. This should allow you to clap your hands without hearing any reflections. But you can’t fix every problem with such treatment, Carl says, bringing up a frequency response chart of a typical monitor setup which shows a 10dB dip around 125Hz. This dip is found in all monitoring setups and appears to come from sound from the speakers bouncing off the floor under the console. He says that it needs to be filled in with subwoofers rather than being fixed with EQ or acoustic treatment.
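
The floor bounce behind that dip is straightforward to estimate: where the reflected path is half a wavelength longer than the direct path, the two arrivals largely cancel. A back-of-the-envelope sketch, with assumed speaker height, ear height and listening distance:

```python
import math

# Estimating the floor-bounce cancellation notch. Heights and distance
# below are illustrative assumptions, not figures from the talk.

C = 343.0                 # speed of sound, m/s
speaker_height = 1.2      # tweeter height above floor, m (assumed)
ear_height = 1.2          # listener ear height, m (assumed)
distance = 1.5            # speaker-to-listener distance, m (assumed)

direct = math.hypot(distance, speaker_height - ear_height)
# Mirror-image trick: the floor reflection behaves like a direct path
# from a speaker mirrored below the floor.
reflected = math.hypot(distance, speaker_height + ear_height)

path_diff = reflected - direct
first_notch = C / (2 * path_diff)  # half-wavelength cancellation

print(f"Path difference: {path_diff:.2f} m")
print(f"First cancellation notch: {first_notch:.0f} Hz")
# With these assumed figures the notch lands near 129Hz, close to the
# 125Hz dip Carl highlights.
```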

Watch now!
Speakers

Carl Tatz
Founder,
Carl Tatz Design LLC
Moderator: Mike Rodriguez
Freelance Director & Producer

Video: AES67/ST 2110-30 over WAN

It’s difficult to escape AES67 when dealing with professional audio, particularly as it’s embedded within the SMPTE ST 2110-30 standard. Now, with remote workflows prevalent, moving AES67 over the internet/WAN is needed more and more. This talk brings the good news that it’s certainly possible, though not without some challenges.

Speaking at the SMPTE technical conference, Nicolas Sturmel from Merging Technologies outlines the work being done within the AES SC-02-12M working group to define the best ways of working to enable easy use of AES67 on the WAN. He starts by explaining that AES67 was written expecting short links on a private network that you completely control, which causes problems when using the WAN/internet, with long-distance links on which your bandwidth or choice of protocols can be limited.

Nicolas first urges anyone to check that they actually need AES67 over the WAN at all. Only if you need precise timing (for lip sync, for example) with PCM quality and low latencies, from 250ms down to as little as 5ms, do you really need AES67 instead of other protocols such as ACIP, he explains. The problem is that a ping on the internet, even to somewhere fairly close, can easily take 16 to 40ms for the round trip. A 16ms round trip guarantees at least 8ms of one-way delay, but any one packet could be as late as 20ms; that variation in arrival time is known as Packet Delay Variation (PDV).
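
The arithmetic behind PDV is simple: compare each packet's one-way delay to the fastest packet seen. A minimal sketch with hypothetical delay samples:

```python
# Packet Delay Variation: how far each packet's one-way delay strays
# from the minimum. The delay samples below are illustrative only.

one_way_delays_ms = [8.2, 9.1, 8.4, 19.7, 8.3, 12.5, 8.9]  # hypothetical

base_delay = min(one_way_delays_ms)            # the ~8ms floor from the talk
pdv = [d - base_delay for d in one_way_delays_ms]

print(f"Minimum one-way delay: {base_delay:.1f} ms")
print(f"Worst-case PDV: {max(pdv):.1f} ms")
# A receive buffer has to absorb at least the worst-case PDV, which is
# why WAN latency can't approach the 5ms achievable on a LAN.
```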

Not only do we need to find a way to transmit AES67, but also PTP. The Precision Time Protocol has ways of coping with jitter and delay, but these don’t work well on WAN links, where the delay in one direction may differ from the delay in the other. PTP also isn’t built to deal with the higher delay and jitter involved. PTP over WAN can be done and is one way to deliver a service, but using a GPS receiver at each location is a much better solution, hampered only by cost and one’s ability to see enough of the sky.
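
To see why asymmetry hurts, recall that PTP estimates the clock offset from four timestamps on the assumption that both directions take equally long. The sketch below, with illustrative delays, shows half the asymmetry landing directly in the offset estimate.

```python
# Why asymmetric paths corrupt PTP. IEEE 1588 estimates the slave's
# clock offset from four timestamps, assuming symmetric path delay.
# Delay values below are illustrative.

def ptp_offset(t1, t2, t3, t4):
    """Standard PTP offset estimate: ((t2 - t1) - (t4 - t3)) / 2."""
    return ((t2 - t1) - (t4 - t3)) / 2

fwd_delay = 12.0   # master -> slave path, ms
rev_delay = 4.0    # slave -> master path, ms (asymmetric!)

t1 = 0.0                      # master sends Sync
t2 = t1 + fwd_delay           # slave receives (clocks actually agree)
t3 = t2 + 1.0                 # slave sends Delay_Req 1ms later
t4 = t3 + rev_delay           # master receives

error = ptp_offset(t1, t2, t3, t4)
print(f"Estimated offset: {error:.1f} ms, although the clocks agree")
# The estimate is wrong by (fwd - rev) / 2 = 4ms, which is why a GPS
# reference at each site is the more robust option.
```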

The internet can lose packets. Given a few hours, the internet will nearly always lose packets. To get around this, Nicolas looks at using Forward Error Correction (FEC), whereby you constantly send redundant data. FEC can add up to around 25% extra data so that if any packet is lost, the redundant information can be used to determine the lost values and reconstruct the stream. Whilst this is a solid approach, computing the FEC adds delay, and the extra data being constantly sent adds a fixed uplift to your bandwidth requirement. For circuits that have very few issues this can seem wasteful, but a fixed percentage can also be an advantage on circuits where a predictable bitrate is much more important. Nicolas also highlights that RIST, SRT and ST 2022-7 are other methods that can work well; he talks about these at greater length in his talk with Andreas Hildebrand.
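
As a toy illustration of the FEC idea (real schemes are more elaborate), one parity packet formed as the XOR of four media packets adds 25% overhead and lets any single lost packet in the group be rebuilt:

```python
# Toy XOR-parity FEC: send one parity packet per group of four, so any
# single lost packet in the group can be recovered from the rest.

from functools import reduce

def xor_parity(packets):
    """Byte-wise XOR of all packets in the group."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]   # 4 media packets
parity = xor_parity(group)                     # the 25% bandwidth uplift

# Simulate losing packet 2, then rebuild it from the survivors + parity.
received = [group[0], group[1], group[3], parity]
recovered = xor_parity(received)

assert recovered == group[2]
print(f"Recovered lost packet: {recovered!r}")
```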

Watch now!
Speakers

Nicolas Sturmel
Product Manager, Senior Technologist,
Merging Technologies

Video: AES67/SMPTE ST 2110 Audio Transport & Routing (NMOS IS-08)

Let’s face it, SMPTE ST 2110 isn’t trivial to get up and running at scale. It carries audio as AES67, though with some restrictions which can cause problems for full interoperability with non-2110 AES67 systems. And once all of this is up and running, you’re still lacking discoverability, control and management. These aspects are covered by AMWA’s NMOS IS-04, IS-05 and IS-08 projects.

Andreas Hildebrand, Evangelist at ALC NetworX, takes the stand at the AES exhibition to explain how this can all work together. He starts by reiterating one of the main benefits of the move to 2110 over 2022-6: audio devices no longer need to receive a full video signal and de-embed the audio. With a dependency on PTP, SMPTE ST 2110-30 and -31 define the carriage of AES67 and AES3 respectively.

We take a look at IS-04 and IS-05, which define registration, discovery and connection management. Using an address usually received from DHCP, new devices on the network put an entry into an IS-04 registry, which can be queried through an API to find out which senders and receivers are available in a system. IS-05 can then use this information to create connections between devices: a controller issues a connection request to the endpoints asking them to connect, Andreas explains, and it’s up to the endpoints themselves to initiate the media flow as appropriate.
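
As a hedged sketch of what such a request can look like in practice, a controller PATCHes the receiver's staged endpoint with the sender to connect to and an activation. The host and UUIDs below are hypothetical; the URL and body shape follow the IS-05 single-receiver API.

```python
# Sketch of an IS-05 connection request. Host, port and UUIDs are
# hypothetical; the endpoint path and body fields follow the IS-05
# Connection API for a single receiver.

import requests

RECEIVER_API = "http://receiver.example.com/x-nmos/connection/v1.1"
receiver_id = "3b8f3a7e-0000-0000-0000-000000000001"   # hypothetical UUID

staged = {
    "sender_id": "9f1a2b3c-0000-0000-0000-000000000002",  # found via IS-04
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
}

resp = requests.patch(
    f"{RECEIVER_API}/single/receivers/{receiver_id}/staged",
    json=staged,
    timeout=5,
)
resp.raise_for_status()
print("Receiver accepted the connection request")
```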

Once a connection has been made, there remains the problem of dealing with audio channel mapping. Andreas uses the example of a single stream containing multiple channels: where a device only needs one or two of these, IS-08 can be used to tell the receiver which channels it should be decoding. This is ideal when delivering audio to a speaker. Andreas then walks us through worked examples.
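
A loose sketch of that idea follows: routing two channels of a multi-channel input to a stereo output. It is modelled on the IS-08 channel mapping API, but the endpoint, identifiers and exact field names should be treated as illustrative rather than definitive.

```python
# Illustrative IS-08-style channel mapping: route channels 3 and 4 of
# an incoming stream to a stereo speaker feed. Endpoint, ids and field
# names here are assumptions for illustration, not a definitive schema.

import requests

CHANNEL_MAPPING_API = "http://device.example.com/x-nmos/channelmapping/v1.0"

mapping = {
    "activation": {"mode": "activate_immediate"},
    "action": {
        "speaker_out": {                                       # hypothetical output id
            "0": {"input": "stream_in", "channel_index": 2},   # left  <- channel 3
            "1": {"input": "stream_in", "channel_index": 3},   # right <- channel 4
        }
    },
}

resp = requests.post(
    f"{CHANNEL_MAPPING_API}/map/activations", json=mapping, timeout=5
)
resp.raise_for_status()
print("Channel map staged and activated")
```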

Watch now!
Speakers

Andreas Hildebrand
Ravenna Technology Evangelist,
ALC NetworX

Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this are far too rare, giving an introduction to a large number of topics. For those starting out, or those who need to revise a topic, this really hits the mark, particularly as it covers many newer topics.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite, talking in more detail about the audio and metadata parts of the suite.

Eric Gsell discusses digital archiving and the considerations that come with deciding which formats to use. He explains colour space, the CIE model and the colour spaces we use, such as 709, 2100 and P3, before turning to file formats. With the advent of HDR video and displays capable of showing very bright images, Eric takes some time to explain why this could represent a problem for visual health, as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and add more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer, Linda Gedemer from Source Sound VR introduces immersive audio, measuring sound output (SPL) from speakers, and the interesting problem of front speakers in cinemas. These have long sat behind the screen, which has meant screens had to be perforated to let the sound through, interfering with the sound itself. Now that cinema screens are changing to solid displays, not completely dissimilar to large outdoor video displays, the speakers are having to move; with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist,
Source Sound VR
Yvonne Thomas
Strategic Technologist,
Digital TV Group