Video: AAC Demystified: How the AAC audio codec works and how to make sense of all its crazy profiles.

The title says it all! Alex Converse, speaking at the San Francisco Video Tech meetup while he was working at Google, discusses the ins and outs of AAC – and since he has implemented an AAC decoder himself, he should know a thing or two about it.

Sure enough, Alex delivers by talking through the different versions of AAC that have been around, from MPEG-2 AAC through to the High Efficiency AAC variants we have seen more recently.

Walking through the AAC encoder block diagram, we look at each of the different sections, from sampling and the MDCT (a Fourier-related transform) to psychoacoustic processing, stereo processing and more.
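
To give a feel for the MDCT stage, here is a minimal, unoptimised sketch in Python. The function names and the overlap-add demo are my own; real AAC encoders use windowed, FFT-based implementations, but the key property – 50%-overlapped blocks that reconstruct exactly when overlap-added – is the same.

```python
import math

def mdct(x):
    """Forward MDCT: 2N time-domain samples -> N coefficients."""
    N = len(x) // 2
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def imdct(X):
    """Inverse MDCT: N coefficients -> 2N aliased time-domain samples."""
    N = len(X)
    return [sum(X[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for k in range(N)) / N
            for n in range(2 * N)]

# Time-domain alias cancellation: each inverse block alone is aliased,
# but adding the overlapping halves of neighbouring blocks cancels the
# aliasing and recovers the original samples.
x = [0.3, -1.2, 0.8, 0.1, -0.5, 0.9, -0.2, 0.4, 1.1, -0.7, 0.6, 0.0]
first, second = imdct(mdct(x[0:8])), imdct(mdct(x[4:12]))
middle = [a + b for a, b in zip(first[4:8], second[0:4])]
# 'middle' matches x[4:8] to within floating-point error
```

Note how the forward transform is lossy per block (8 samples in, 4 coefficients out) yet lossless across overlapped blocks – which is why the MDCT is such a good fit for block-based audio coding.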

We then start to look at the syntax of the way the streams are structured, which brings us into understanding the AAC channel modes and the enhanced encoding and processing mechanisms used by the later versions of AAC, including HE-AAC v2.
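
As a taste of that stream syntax, here is a sketch of parsing an ADTS header, the framing commonly used to carry raw AAC blocks. The field widths follow the published layout, but the helper names are my own and error handling is minimal:

```python
# Illustrative ADTS header parse. The 7-byte header packs a 12-bit
# syncword followed by profile, sample-rate index, channel configuration
# and frame length, among other fields.

SAMPLE_RATES = [96000, 88200, 64000, 48000, 44100, 32000,
                24000, 22050, 16000, 12000, 11025, 8000]

def parse_adts_header(data: bytes) -> dict:
    bits = int.from_bytes(data[:7], "big")

    def field(offset, width):  # offset counted from the MSB of byte 0
        return (bits >> (56 - offset - width)) & ((1 << width) - 1)

    assert field(0, 12) == 0xFFF, "not an ADTS syncword"
    return {
        "mpeg4": field(12, 1) == 0,             # 0 = MPEG-4, 1 = MPEG-2
        "protection_absent": field(15, 1) == 1, # no CRC if set
        "profile": field(16, 2),                # audio object type - 1
        "sample_rate": SAMPLE_RATES[field(18, 4)],
        "channel_config": field(23, 3),
        "frame_length": field(30, 13),          # header + payload, bytes
        "buffer_fullness": field(43, 11),       # 0x7FF signals VBR
        "raw_data_blocks": field(54, 2) + 1,
    }

# Hand-built example header: AAC-LC, 44.1 kHz, stereo, 341-byte frame
hdr = parse_adts_header(bytes([0xFF, 0xF1, 0x50, 0x80, 0x2A, 0xBF, 0xFC]))
```

The channel configuration field here is where the channel modes Alex discusses are signalled (and why values beyond the 3-bit range need an in-band program config element instead).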

Alex finishes with a quick look at low-delay codecs and a Q&A.

A great, detailed overview of AAC – ideal for developers and anyone who needs to fully understand audio.

Watch now!

Speaker

Alex Converse
Senior Software Engineer,
Twitch

Video: CEDIA Talk: ATSC 3.0 is HERE – Why It Matters to You

The last in the current series of ATSC 3.0 posts, this one is a light but useful talk which aims to introduce people to ATSC 3.0, calling out its features and differences.

Michael, showing off his colour-bars jacket, explains how ATSC 3.0 came about and how ATSC 2.0 never came to pass and ‘is on a witness protection program’. He then explains the differences between ATSC 1.0 and 3.0, discussing the fact that it’s IP-based and capable of UHD and HDR, amongst other things.

The important question is why it is better, and we see that the modulation scheme is an improvement (note that Michael says ATSC 3.0 is based on QAM; it is actually based on OFDM, with QAM constellations carried on the individual subcarriers).
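
The QAM/OFDM slip is easy to make because OFDM carries QAM symbols on its subcarriers. A toy sketch of that relationship (all names and parameters are mine, with an arbitrary 8-carrier QPSK setup, nothing like ATSC 3.0's real physical-layer numbers):

```python
import cmath

# An OFDM symbol is built by placing QAM-modulated values on many
# subcarriers and taking an inverse DFT; the receiver strips the cyclic
# prefix, takes a forward DFT, and slices each subcarrier back to the
# nearest constellation point.

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

QPSK = {0: 1 + 1j, 1: -1 + 1j, 2: -1 - 1j, 3: 1 - 1j}  # 4-QAM points

symbols = [0, 3, 1, 2, 2, 0, 3, 1]         # one QAM symbol per subcarrier
subcarriers = [QPSK[s] for s in symbols]
tx = idft(subcarriers)                     # time-domain OFDM symbol
tx = tx[-2:] + tx                          # prepend a short cyclic prefix

rx = dft(tx[2:])                           # strip prefix, demodulate
recovered = [min(QPSK, key=lambda s: abs(QPSK[s] - v)) for v in rx]
# recovered == symbols
```

So "based on QAM" isn't entirely wrong about the constellations, but the transmission scheme itself is OFDM.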

The talk finishes by looking at what ATSC 3.0 isn’t, implementation details, and the frequency repack which is happening in the US.

Watch now!
Speaker

Michael Heiss
Principal Consultant,
M. Heiss Consulting

Video: Next Generation Broadcast Platform – ATSC 3.0

Continuing our look at ATSC 3.0, our fifth talk straddles technical detail and basic business cases. We’ve seen talks on implementation experience, such as in Chicago and Phoenix, and now we look at receiving the data with open-source software.

We’ve covered before the importance of ATSC 3.0 in the North American markets and the others that are adopting it. Jason Justman from Sinclair Digital states the business cases and the reasons to push for it despite it being incompatible with previous generations. He then discusses what Software Defined Radio is and how it fits into the puzzle, covering the early state of this technology.

After a brief overview of the RF side of ATSC 3.0, which is itself a leap forward, Jason explains how the video layer benefits. As ATSC 3.0 relies on ISO BMFF, Jason introduces MMT (MPEG Media Transport), explaining what it is and why it’s used for ATSC 3.0.
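
ISO BMFF itself is a simple nested box structure, which is part of why it underpins so many delivery formats. A minimal sketch of walking top-level boxes (the helper is my own and skips extended 64-bit sizes for brevity):

```python
# Each ISO BMFF box starts with a 32-bit big-endian size (covering the
# whole box, header included) and a 4-character type code; the payload
# of container boxes is simply more boxes.

def walk_boxes(data: bytes):
    boxes, pos = [], 0
    while pos + 8 <= len(data):
        size = int.from_bytes(data[pos:pos + 4], "big")
        if size < 8:          # guard against a malformed/zero size
            break
        box_type = data[pos + 4:pos + 8].decode("ascii")
        boxes.append((box_type, size))
        pos += size
    return boxes

# Hand-built fragment: an 'ftyp' box followed by an empty 'free' box
sample = (b"\x00\x00\x00\x10ftypisom\x00\x00\x00\x01" +
          b"\x00\x00\x00\x08free")
boxes = walk_boxes(sample)
# boxes == [("ftyp", 16), ("free", 8)]
```

MMT packages media in this same box format, which is one reason it slots naturally into ATSC 3.0's IP-based delivery.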

The next section of the talk showcases libatsc3, an open-source library whose goal is to open up ATSC 3.0 to talented software engineers, which Jason demos. The library allows for live decoding of ATSC 3.0, including MMT material.

Jason finishes his talk with a Q&A, including SCTE 34 and an interesting comparison between DVB-T2 and ATSC 3.0, making this a very useful talk that fills in technical gaps no other ATSC 3.0 talk covers.

Complete slide pack

Watch now!
Speaker

Jason Justman
Senior Principal Architect,
Sinclair Digital

Video: The ST 2094 Standards Suite For Dynamic Metadata

Lars Borg explains to us what problems the SMPTE ST 2094 standards suite sets out to solve. Looking at the different types of HDR and Wide Colour Gamut (WCG), we quickly see how many permutations there are and how many ways there are to get it wrong.

ST 2094 carries the metadata needed to manage the colour, dynamic range and related data. In order to understand what’s needed, Lars takes us through the details of the HDR implementations, touching on workflows and explaining how the capabilities of your display affect the video.
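
To see why display capability matters, consider mapping content mastered for a bright display onto a dimmer one. The knee curve below is purely my own toy illustration of the idea, NOT an ST 2094 transform; the point is that the choice of curve per scene is exactly the kind of decision dynamic metadata exists to guide:

```python
# Toy tone map: pass midtones through untouched, roll highlights off
# smoothly into whatever headroom the target display has left.

def tone_map(nits, display_peak, knee=0.75):
    """Map a scene luminance (in nits) onto a display with the given peak."""
    threshold = knee * display_peak
    if nits <= threshold:
        return nits                      # midtones untouched
    # compress everything above the knee into the remaining headroom
    excess = nits - threshold
    headroom = display_peak - threshold
    return threshold + headroom * excess / (excess + headroom)

# A 1000-nit highlight on a 400-nit display is compressed below peak,
# while a 100-nit midtone is left alone.
highlight = tone_map(1000.0, 400.0)
midtone = tone_map(100.0, 400.0)
```

With a static curve, every scene gets the same compromise; per-scene metadata lets the knee and roll-off adapt to what each scene actually contains.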

We then look at midtones and dynamic metadata before a Q&A.

This talk is as valuable for understanding the whole HDR and WCG ecosystem as it is for ST 2094.

Watch now!

Speaker

Lars Borg
Principal Scientist,
Adobe