Thursday, February 7th, 10am PST / 1pm EST / 18:00 GMT. Now available on-demand!
There is so much talk about HDR, wide colour gamut (WCG) and ‘Better Pixels’, and with seemingly every TV interpolating motion up to 100Hz or above, that it’s good to stop and check we know why all of this matters – and, crucially, when it doesn’t.
SMPTE’s new ‘Essential Technology Concepts Webcasts’ are here to help. In this first webcast, David Long looks at the fundamentals of colour, contrast and motion in terms of what we actually see.
This promises to be a great talk and, the chances are, even people who ‘know it already’ will be reminded of a thing or two!
Date: 29th January, 18:30 GMT
Location: University of York, Department of Theatre, Film and Television
The AES North of England invite Cleopatra Pike and Amy V. Beeston to talk about how human psychology and neuroscience are involved in the design of many audio products. Firstly, they can be used to determine whether the products suit the needs of the people they aim to serve. ‘Human-technology interaction’ research is conducted to ascertain how humans respond to audio products – where they help and where they hinder. However, issues remain with this research, such as getting reliable reports from people about their experience.
Secondly, psychology and neuroscience can be used to solve engineering problems via ‘human inspired approaches’ (e.g. they can be used to produce robots that listen like humans in noisy environments). To fulfil this aim, audio engineers and psychologists must determine the biological and behavioural principles behind how humans listen. However, the human hearing system is a black box that has developed over years of evolution, which makes understanding and applying human principles to technology challenging.
The evening will host a discussion of some of the benefits and issues involved in an interdisciplinary approach to developing audio products. We include examples from our research investigating how machine listeners might simulate human hearing in compensating for reverberation and spectral distortion, how machine listeners might achieve the perceptual efficiency of humans by optimally combining multiple senses, and how input from tests on humans can be used to optimise the function of hearing aids.