Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this one, which introduce a large number of topics in a single sitting, are far too rare. For those starting out, or anyone who needs to revise a topic, it really hits the mark, particularly as it takes in many newer subjects.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained, gives an overview of the many parts of the ST 2110 suite, and goes into more detail on the audio and metadata parts of the standard.
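To make the idea of separate flows a little more concrete, here is a minimal sketch of the kind of SDP description an ST 2110 sender might advertise, with video and audio as independent multicast RTP streams that reference the same PTP clock. Every address, payload type and format parameter below is invented for illustration; real devices advertise their own.

  v=0
  o=- 123456 123456 IN IP4 192.168.0.10
  s=Example ST 2110 sender (illustrative only)
  t=0 0
  m=video 5004 RTP/AVP 96
  c=IN IP4 239.10.10.1/64
  a=rtpmap:96 raw/90000
  a=fmtp:96 sampling=YCbCr-4:2:2; width=1920; height=1080; exactframerate=50; depth=10
  a=ts-refclk:ptp=IEEE1588-2008:AC-DE-48-12-34-56-78-9A:0
  a=mediaclk:direct=0
  m=audio 5006 RTP/AVP 97
  c=IN IP4 239.10.10.2/64
  a=rtpmap:97 L24/48000/2
  a=ptime:1
  a=ts-refclk:ptp=IEEE1588-2008:AC-DE-48-12-34-56-78-9A:0
  a=mediaclk:direct=0

The point to notice is that each essence is its own stream on its own multicast group, while the shared ts-refclk lines are what tie everything back to a common PTP timebase.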

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains what a colour space is, the CIE model and the colour spaces we use, such as BT.709, BT.2100 and DCI-P3, before turning to file formats. With the advent of HDR video and displays capable of showing very bright images, Eric takes some time to explain why this could pose a problem for visual health, as we don’t yet fully understand how such displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and how those measurements are standardised.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits the cloud can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and to build more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, each in its own way a successor to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to send video reliably and with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers, and the interesting problem of front speakers in cinemas. They have long sat behind the screen, which means screens have had to be perforated to let the sound through, and that perforation interferes with the sound itself. Now that cinema screens are changing to solid displays, not completely dissimilar to large outdoor video screens, the speakers have to move; with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!

Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group

Video: Routing AES67

Well ahead of video, audio moved to uncompressed over IP and has been reaping the benefits ever since. With longer-established workflows and, as has always been the case, far more feeds to handle than video, the audio solutions have reached a higher level of maturity.

Anthony from Ward-Beck Systems talks about the advantages of audio over IP and the things which weren’t possible before. In a very accessible talk, you’ll hear as much about soup cans as you will about the more technical aspects, like SDP.

Whilst uncompressed audio over IP started a while ago, that doesn’t mean it’s not still being developed. In fact, much of the focus now is on the interface with the video world, with SMPTE ST 2110-30 and -31 determining how audio can flow alongside video and other essences. As has been seen in other talks here on The Broadcast Knowledge, there’s a fair bit to know (here’s a full list).

To simplify this, Anthony, who is also the Vice Chair of AES Toronto, describes the work the AES is doing to certify equipment as AES67 ‘compatible’ – and what that would actually mean.

The talk finishes with a walk-through of a real-world OB deployment of AES67, which included simple touches such as using Google Docs to share links, as well as more technical techniques such as using a virtual sound card.

Packed full of easy-to-understand insights which are useful even to those who live for video, this IP Showcase talk is worth a look.

Watch now!

Speaker

Anthony P. Kuzub
IP Audio Product Manager,
Ward-Beck Systems

Video: AES67 Open Media Standard for Pro-Audio Networks

AES67 is a method of sending uncompressed audio over IP networks between equipment, standardised by the Audio Engineering Society. It’s become widespread and is part of SMPTE’s professional essences-over-IP standards suite, ST 2110.

Here, Conrad Bebbington gives us an introduction to AES67, explaining why it exists and what it tries to achieve. Conrad then goes on to look at interoperability with competing technologies such as Dante. After going into some implementation details, the video looks at the Session Description Protocol (SDP) and the Session Initiation Protocol (SIP), both key parts of how AES67 works.

Other topics covered are:

  • Packetisation – how much audio is in a packet, number of channels etc. (a rough calculation follows this list)
  • Synchronisation – using PTP
  • What are SDP and SIP and how are they used
  • Use of IGMP multicast
  • Implementation availability in open source software
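To make the packetisation point concrete, here’s a rough back-of-the-envelope calculation in Python. It assumes AES67’s default 1 ms packet time at 48 kHz with 24-bit (L24) samples and a stereo pair; the figures are illustrative rather than a statement of what any particular device does.

  # Rough AES67 packet-size arithmetic (illustrative sketch only).
  SAMPLE_RATE_HZ = 48_000   # AES67 baseline sample rate
  PACKET_TIME_S = 0.001     # default 1 ms packet time
  BYTES_PER_SAMPLE = 3      # L24 = 24-bit linear PCM
  CHANNELS = 2              # a stereo pair; AES67 streams can carry more

  samples_per_packet = round(SAMPLE_RATE_HZ * PACKET_TIME_S)        # 48 samples per channel
  payload_bytes = samples_per_packet * BYTES_PER_SAMPLE * CHANNELS  # 288 bytes of audio
  rtp_bytes = 12                                                    # RTP header
  udp_ip_bytes = 8 + 20                                             # UDP + IPv4 headers

  print(f"{samples_per_packet} samples/channel, {payload_bytes} B payload, "
        f"~{payload_bytes + rtp_bytes + udp_ip_bytes} B per packet on the wire, "
        f"{1 / PACKET_TIME_S:.0f} packets/s per stream")

Shorter packet times lower the latency but multiply the packet rate, which is exactly why the packet-time and channel-count choices discussed in the video matter.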

Watch now!

For a more in-depth look at AES67, watch this video

Speaker

Conrad Bebbington
Software Engineer,
Cisco

Meeting: How Technology is Changing the Human Voice


Date: May 30th, 18:30 BST
Professor Trevor Cox presents a talk on the changing human voice. The human voice has always been in flux, but over the last hundred years or so, changes have been accelerated by technology. Watch a video of ‘Barcelona’, a duet between rock frontman Freddie Mercury and opera soprano Montserrat Caballé, and the difference between an old and a new singing style is stark. These differences are not just about taste; they are driven by technology, with amplification freeing pop singers from the athletic task of reaching the back of a venue unaided. This allows someone like Freddie to be much more individualistic.
Actors’ voices have also changed: no longer do we have actors projecting plummy voices in Received Pronunciation, yet now viewers complain that they can’t understand the naturalistic accents used in modern TV and film. The talk will begin with examples like these to explore the changing voice, and will then speculate about its future. What technologies might be developed to combat the loss of intelligibility caused by mumbling actors? As conversations with computers become more common, how might that change how we speak? Some have already found that Siri is a useful tool for getting children to improve their diction. ‘Photoshop for voice’ has already been demonstrated; on the surface this is a useful tool for audio editors, but it also allows unscrupulous individuals to fake speech. Rich in sound examples, the talk will draw on Trevor’s latest popular science book, Now You’re Talking (Bodley Head, 2018).

Register now!