Video: How speakers and sound systems work: Fundamentals, plus Broadcast and Cinema Implementations

Many of us know how speakers work, but when it comes to phased arrays or object audio we can start to lose our footing. Wherever you are on that spectrum, this dive into speakers and sound systems will be beneficial.

Ken Hunold from Dolby Laboratories starts this talk with a short history of sound in both film and TV, revealing the surprising facts that film reverted from stereo back to mono around the 1950s and that TV stayed mono right up until the 1980s. We follow this history up to the present day, taking in the latest immersive sound systems and multi-channel sound in broadcasting.

Whilst the basics of speakers are fairly widely known, Ken looks at how they are set up and at the different shapes and versions of basic speakers and their enclosures, before moving on to column speakers and line arrays.

Multichannel home audio continues to offer many options for speaker positioning and speaker type, including bouncing audio off the ceiling, so Ken explores and compares these options, including the relatively recent sound bars.

Cinema sound has always been critical to the effect of cinema and foundational to the motivation for people to come together and watch films away from their TVs. There have long been many speakers in cinemas, and Ken charts how this has changed as immersive audio has arrived, creating the illusion of infinitely many speakers with sound all around.

In the live entertainment space, sound is different again: the scale is often much bigger and the acoustics are very different. Ken talks about the challenges of delivering sound to so many people, keeping the sound even throughout the auditorium and dealing with the delay inherent in relatively slow-moving sound waves. The talk wraps up with questions and answers.
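
As a rough illustration of that delay problem, here is a minimal sketch (not from the talk) of the time alignment a supplementary "delay" speaker needs so that its output arrives in step with the main PA, assuming a nominal speed of sound of 343 m/s; the function name and the 40 m distance are purely illustrative.

```python
# Minimal sketch: time-aligning a supplementary "delay" speaker with the main PA.
# Assumes a nominal speed of sound of 343 m/s (roughly 20 °C in dry air).

SPEED_OF_SOUND_M_PER_S = 343.0

def alignment_delay_ms(distance_from_main_pa_m: float) -> float:
    """Delay (in milliseconds) to apply to a fill/delay speaker placed this far
    in front of the main PA, so both arrivals reach the listener together."""
    return distance_from_main_pa_m / SPEED_OF_SOUND_M_PER_S * 1000.0

if __name__ == "__main__":
    # A delay tower 40 m into the audience needs roughly 117 ms of delay.
    print(f"Delay for a tower 40 m downfield: {alignment_delay_ms(40.0):.1f} ms")
```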

Watch now!

Speakers

Ken Hunold
Sr. Broadcast Services Manager, Customer Engineering
Dolby Laboratories, Inc.

Meeting: How Technology is Changing the Human Voice

Date: May 30th, 18:30 BST
Professor Trevor Cox presents a talk on the changing human voice. The human voice has always been in flux, but over the last hundred years or so, changes have been accelerated by technology. Watch a video of Barcelona, a duet between rock frontman Freddie Mercury and opera soprano Montserrat Caballé, and the difference between old and new singing styles is stark. These differences are not just about taste; they are driven by technology, with amplification freeing pop singers from the athletic task of reaching the back of a venue unaided. This allows someone like Freddie to be much more individualistic.
Actors’ voices have also changed: no longer do we have actors projecting their plummy voices using Received Pronunciation, yet now viewers complain that they can’t understand the naturalistic accents used in modern TV and film. The talk will begin with examples like these to explore the changing voice. It will then speculate about the future of the voice. What technologies might be developed to combat the loss of intelligibility caused by mumbling actors? As conversations with computers become more common, how might that change how we speak? Some have already found that Siri is a useful tool for getting children to improve their diction. ‘Photoshop for voice’ has already been demonstrated. On the surface this is a useful tool for audio editors, but it also allows unscrupulous individuals to fake speech. Rich in sound examples, the talk will draw on Trevor’s latest popular science book, Now You’re Talking (Bodley Head, 2018).

Register now!

Meeting: Designing Sound Studios

Having designed for John Lennon, Pete Townshend and many others, Dr Eddie Veale will identify the various types of studio, review operational needs and the approach to design, and examine the similarities and differences in studio design across different types and genres. This is a joint meeting, bringing together AES South, the IoA Southern Branch and the SMPTE Southampton student chapter for the first time, and is open to the general public – tickets can be booked via Eventbrite.

Time: 18:00 refreshments for 18:30 start
Date: Tuesday 6th March 2018
Location: Palmerston Lecture Theatre, Spark Building, Southampton Solent University

Register Now!

Meeting: Applications of perceptual psychology and neuroscience to audio engineering problems

Date: 29th January, 18:30 GMT
Location: University of York, Department of Theatre, Film and Television

The AES North of England invite Cleopatra Pike and Amy V. Beeston to talk about how human psychology and neuroscience are involved in the design of many audio products. Firstly, they can be used to determine whether the products suit the needs of the people they aim to serve. ‘Human-technology interaction’ research is conducted to ascertain how humans respond to audio products – where they help and where they hinder. However, issues remain with this research, such as getting reliable reports from people about their experience.

Secondly, psychology and neuroscience can be used to solve engineering problems via ‘human inspired approaches’ (e.g. they can be used to produce robots that listen like humans in noisy environments). To fulfil this aim, audio engineers and psychologists must determine the biological and behavioural principles behind how humans listen. However, the human hearing system is a black box that has developed over years of evolution. This makes understanding and applying human principles to technology challenging.

This evening hosts a discussion on some of the benefits and issues involved in an interdisciplinary approach to developing audio products. We include examples from our research investigating how machine listeners might simulate human hearing in compensating for reverberation and spectral distortion, how machine listeners might achieve the perceptual efficiency of humans by optimally combining multiple senses, and how the input from tests on humans can be used to optimise the function of hearing aids.

Register Now!