Video: Routing AES67

Well ahead of video, audio moved to uncompressed over IP and has been reaping the benefits ever since. With longer-established workflows and, as has always been the case, far more feeds to handle than video, the audio solutions are correspondingly more mature.

Anthony from Ward-Beck Systems talks about the advantages of audio over IP and the things which weren’t possible before. In a very accessible talk, you’ll hear as much about soup cans as you will about the more technical aspects, like SDP.

Whilst uncompressed audio over IP started a while ago, that doesn’t mean it’s no longer being developed. In fact, much of the current focus is on the interface with the video world, with SMPTE ST 2110-30 and -31 determining how audio can flow alongside video and other essences. As has been seen in other talks here on The Broadcast Knowledge, there’s a fair bit to know. (Here’s a full list.)

To simplify this, Anthony, who is also the Vice Chair of AES Toronto, describes the work the AES is doing to certify equipment as AES67 ‘compatible’ – and what that would actually mean.

The talk finishes with a walk-through of a real-world OB deployment of AES67, which included simple touches such as using Google Docs to share links as well as more technical techniques such as a virtual sound card.

Packed full of easy-to-understand insights which are useful even to those who live for video, this IP Showcase talk is worth a look.

Watch now!

Speaker

Anthony P. Kuzub
IP Audio Product Manager,
Ward-Beck Systems

Video: AES67 Open Media Standard for Pro-Audio Networks

AES67 is a method of sending audio over IP which was standardised by the Audio Engineering Society as a way of sending uncompressed audio over networks between equipment. It’s become widespread and is part of SMPTE’s professional essences-over-IP standards suite, ST 2110.

Here, Conrad Bebbington gives us an introduction to AES67, explaining why it exists and what it tries to achieve. Conrad then goes on to look at interoperability with other, competing standards like Dante. After covering some implementation details, the video then looks at the ‘Session Description Protocol’, SDP, and the ‘Session Initiation Protocol’, SIP, both of which are important parts of how AES67 works.
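To make the SDP part a little more concrete, here’s a minimal sketch (not taken from the talk) of what an AES67-style stream description can look like, wrapped in a little Python. The multicast address, port and PTP grandmaster ID are invented purely for illustration.

```python
# Illustrative sketch only: a minimal AES67-style SDP description for a
# 2-channel, 24-bit, 48 kHz stream. The addresses, port and PTP grandmaster
# ID below are made up for the example.

EXAMPLE_SDP = "\n".join([
    "v=0",
    "o=- 1423986 1423994 IN IP4 192.168.1.10",   # origin: session id/version and sender IP
    "s=AES67 example stream",                     # human-readable session name
    "c=IN IP4 239.69.1.10/32",                    # multicast group the audio is sent to
    "t=0 0",                                      # unbounded session
    "m=audio 5004 RTP/AVP 96",                    # RTP audio on UDP port 5004, payload type 96
    "a=rtpmap:96 L24/48000/2",                    # payload 96 = linear 24-bit PCM, 48 kHz, 2 channels
    "a=ptime:1",                                  # 1 ms of audio per packet
    "a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0",  # PTP clock the stream is locked to
    "a=mediaclk:direct=0",                        # media clock offset relative to that PTP clock
])

def parse_rtpmap(sdp: str):
    """Pull the encoding, sample rate and channel count out of the rtpmap line."""
    for line in sdp.splitlines():
        if line.startswith("a=rtpmap:"):
            encoding = line.split(" ", 1)[1]      # e.g. "L24/48000/2"
            codec, rate, channels = encoding.split("/")
            return codec, int(rate), int(channels)
    return None

print(parse_rtpmap(EXAMPLE_SDP))   # ('L24', 48000, 2)
```

A receiver given a description like this – whether delivered via SIP or simply shared out of band – knows which multicast group to join, what payload format to expect and which PTP clock to lock to.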

Other topics covered are:

  • Packetisation – how much audio is in a packet, number of channels etc. (a worked example follows this list)
  • Synchronisation – using PTP
  • SDP and SIP – what they are and how they’re used
  • Use of IGMP multicast
  • Implementation availability in open source software
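As a rough illustration of the packetisation point above (an assumption-laden sketch rather than anything from the video), the arithmetic for a 1 ms packet time works out as below; the two-channel, 24-bit stream is just an example.

```python
# Back-of-the-envelope packetisation maths for an AES67-style stream
# (illustration only; the 2-channel count is an assumption for the example).

SAMPLE_RATE_HZ = 48_000      # AES67 streams commonly run at 48 kHz
PACKET_TIME_S = 0.001        # 1 ms of audio per packet
BYTES_PER_SAMPLE = 3         # L24 = 24-bit linear PCM
CHANNELS = 2                 # assumed stereo stream for this example
RTP_HEADER_BYTES = 12        # fixed RTP header, before IP/UDP overhead

samples_per_packet = int(SAMPLE_RATE_HZ * PACKET_TIME_S)           # 48 samples per channel
audio_payload = samples_per_packet * BYTES_PER_SAMPLE * CHANNELS   # 288 bytes of audio
packets_per_second = int(1 / PACKET_TIME_S)                        # 1000 packets every second

print(f"{samples_per_packet} samples/channel per packet")
print(f"{audio_payload} bytes of audio + {RTP_HEADER_BYTES} byte RTP header per packet")
print(f"{packets_per_second} packets per second per stream")
```

Shorter packet times lower latency but multiply the packet rate, which is the trade-off behind the packetisation choices.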

Watch now!

For a more in-depth look at AES67, watch this video.

Speakers

Conrad Bebbington
Software Engineer,
Cisco

Meeting: How Technology is Changing the Human Voice


Date: May 30th, 18:30 BST
Professor Trevor Cox presents a talk on the changing human voice. The human voice has always been in flux, but over the last hundred years or so the changes have been accelerated by technology. Watch a video of Barcelona, a duet between rock frontman Freddie Mercury and opera soprano Montserrat Caballé, and the difference between the old and new singing styles is stark. These differences are not just about taste; they are driven by technology, with amplification freeing pop singers from the athletic task of reaching the back of a venue unaided. This allows someone like Freddie to be much more individualistic.
Actors’ voices have also changed: no longer do we have actors projecting their plummy voices in Received Pronunciation, yet now viewers complain that they can’t understand the naturalistic accents used in modern TV and film. The talk will begin with examples like these to explore the changing voice. It will then speculate about the future of the voice. What technologies might be developed to combat the loss of intelligibility caused by mumbling actors? As conversations with computers become more common, how might that change how we speak? Some have already found that Siri is a useful tool for getting children to improve their diction. ‘Photoshop for voice’ has already been demonstrated: on the surface this is a useful tool for audio editors, but it also allows unscrupulous individuals to fake speech. Rich in sound examples, the talk will draw on Trevor’s latest popular science book, Now You’re Talking (Bodley Head, 2018).

Register now!

AES Talk: Launching an Audio Startup


Talk + Social: Wed 18 April 2018, 19:00 – 21:00 BST
The event will be followed by a social at The Yorkshire Grey on Langham Street.
Location: Fyvie Hall, University of Westminster, 309 Regent Street, W1B 2HW.
A panel discussion exploring the issues surrounding launching an audio-related startup, in a dynamic forum featuring free-flowing discussion and debate with contributions from both the panel and audience members.

Topics:
• What processes are essential in bringing an existing idea to launch?
• What are the most challenging aspects of launching an audio-related startup?
• What are the most common pitfalls experienced by fledgling audio-related startups?
• What role does investment play in launching an audio-related startup, and at what point might it be necessary?
• At what point should the employment of staff be considered?
• How can growth be managed to maximize longevity?

Panel Members:
• Charlie Slee – Managing Director, Big Bear Audio
• Selina Parmar – Talent and People Manager, Founders Factory
• Jon Eades – Director and Co-Founder, The Rattle
• Sarah Yule – Director of Channel Sales, ROLI


Register now!

Meeting: Audio over IP and the Future of Radio

Meeting: Thursday 12th April 2018 | 18:00 for an 18:30 start. Ample refreshments from 18:00.
Location: Palmerston Lecture Theatre, The Spark, Southampton Solent University, SO14 0YN
Click here to register in advance

Two presentations from BBC Research and Development, by Chris Baume and Jamie Laundon, at a joint event of AES South and the SMPTE South Section.

Chris Baume: The Mermaid’s Tears – creating the world’s first live interactive object-based radio drama

Object-based audio is a revolutionary approach to broadcasting that enables the production and delivery of immersive, interactive and accessible listening experiences. Chris will start by presenting an overview of the opportunities and challenges of object-based audio. He will describe how BBC R&D designed and built an experimental radio studio and an end-to-end object-based broadcast chain. Finally, he will discuss how the studio was used to deliver the world’s first live interactive object-based radio drama, as part of the Orpheus collaborative project.

Chris Baume is a Senior Research Engineer at BBC R&D in London, where he leads the BBC’s research into audio production tools and the BBC’s role in the Orpheus EU H2020 project. His research interests include semantic audio analysis, interaction design, object-based audio and spatial audio. Chris is a Chartered Engineer and a PhD candidate at the Centre for Vision, Speech and Signal Processing at the University of Surrey.

Click here to register in advance

Jamie Laundon: Audio over IP and AES67 – learning to play nicely together

As AoIP becomes commonplace across the industry, the BBC’s Jamie Laundon provides an informative summary of the current state of IP audio in the radio studio, how the latest update to AES67 improves interoperability, and how Plugfests are used to identify and resolve issues between different systems. He will also walk us through an example installation, discussing the options and decisions involved in taking your next installation fully IP.

Jamie Laundon is a Senior Technologist at BBC Design and Engineering. He delivers complex technology projects for the BBC’s national radio networks, with a focus on connectivity, workflow design, metadata and networked audio. His 16-year radio career began in UK commercial radio at Heart and LBC in London, before becoming Technical Manager at Galaxy Radio in Yorkshire. He later joined Radio Computing Services (RCS) as an integration specialist working with radio networks across Europe and the Middle East. Jamie is a member of the Engineering innovation team researching BBC Radio’s next-generation “Internet Fit Radio Studios”, with a focus on networked audio interoperability.

Click here to register in advance

Meeting: Applications of perceptual psychology and neuroscience to audio engineering problems

Date: 29th January, 18:30 GMT
Location: University of York, Department of Theatre, Film and Television

The AES North of England invite Cleopatra Pike and Amy V. Beeston to talk about how human psychology and neuroscience are involved in the design of many audio products. Firstly, they can be used to determine whether the products suit the needs of the people they aim to serve. ‘Human-technology interaction’ research is conducted to ascertain how humans respond to audio products – where they help and where they hinder. However, issues remain with this research, such as getting reliable reports from people about their experience.

Secondly, psychology and neuroscience can be used to solve engineering problems via ‘human-inspired approaches’ (e.g. they can be used to produce robots that listen like humans in noisy environments). To fulfil this aim, audio engineers and psychologists must determine the biological and behavioural principles behind how humans listen. However, the human hearing system is a black box that has developed over years of evolution. This makes understanding and applying human principles to technology challenging.

This evening hosts a discussion on some of the benefits and issues involved in an interdisciplinary approach to developing audio products. We include examples from our research investigating how machine listeners might simulate human hearing in compensating for reverberation and spectral distortion, how machine listeners might achieve the perceptual efficiency of humans by optimally combining multiple senses, and how the input from tests on humans can be used to optimise the function of hearing aids.

Register Now!