Video: How speakers and sound systems work: Fundamentals, plus Broadcast and Cinema Implementations

Many of us know how speakers work, but when it comes to phased arrays or object audio we can lose our footing. Wherever you are on that spectrum, this dive into speakers and sound systems will be beneficial.

Ken Hunold from Dolby Laboratories starts this talk with a short history of sound in both film and TV, unveiling the surprising facts that film reverted from stereo back to mono around the 1950s and that TV stayed mono right up until the 1980s. He follows this history up to the present day with the latest immersive sound systems and multi-channel sound in broadcasting.

Whilst the basics of speakers are fairly widely known, Ken looks at how they are constructed and at the different shapes and versions of basic speakers and their enclosures, before moving on to column speakers and line arrays.

Multichannel home audio continues to offer many options for speaker positioning and speaker type, including bouncing audio off the ceiling, so Ken explores and compares these options, including the relatively recent soundbars.

Cinema sound has always been critical to the effect of cinema and foundational to the motivation for people to come together and watch films away from their TVs. There have long been many speakers in cinemas, and Ken charts how this has changed as immersive audio has arrived and enabled an illusion of infinite speakers with sound all around.

In the live entertainment space, sound is different again: the scale is often much bigger and the acoustics very different. Ken talks about the challenges of delivering sound to so many people, keeping the sound even throughout the auditorium and dealing with the delay of the relatively slow-moving sound waves, as the worked example below shows. The talk wraps up with questions and answers.
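To put those slow-moving sound waves in context, here’s a rough worked example (our numbers, not from the talk) of the delay a distant speaker tower must apply so its output lines up with the sound arriving acoustically from the stage; the 343 m/s figure assumes air at about 20°C.

```python
# Back-of-the-envelope delay-tower alignment; illustrative only.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def alignment_delay_ms(distance_m: float) -> float:
    """Delay for a tower placed distance_m downstream of the main PA."""
    return distance_m / SPEED_OF_SOUND * 1000.0

for distance in (17, 34, 68):
    print(f"tower at {distance} m: delay output by ~{alignment_delay_ms(distance):.0f} ms")
```

Without that alignment, a listener near the tower hears its output tens of milliseconds before the stage sound arrives, smearing the audio and pulling the apparent source towards the tower.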

Watch now!

Speaker

Ken Hunold
Sr. Broadcast Services Manager, Customer Engineering
Dolby Laboratories, Inc.

Video: Current Status of ST 2110 over 25 GbE

IT still has some catching up to do. The promise of video over IP and ST 2110 is to benefit from the IT industry’s scale and products, but when it comes to bandwidth, it isn’t always there. This talk looks at 25 gigabit Ethernet (25GbE) network interfaces to see how well they work and whether they’ve arrived on the broadcast market.

Koji Oyama from M3L Inc. explains why the move from 10GbE to 25GbE makes sense; a move which allows more scalability with fewer cables. He then looks at the physical characteristics of the signals, both as 25GbE and linked together into a 100GbE path.
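To see where the ‘fewer cables’ saving comes from, here’s some back-of-the-envelope arithmetic (our illustrative figures, not Koji’s data): an uncompressed 1080p59.94 10-bit 4:2:2 stream carried as ST 2110-20 needs roughly 3 Gb/s once packet overhead is included.

```python
# Rough streams-per-port arithmetic; illustrative figures, not from the talk.
STREAM_GBPS = 3.0   # assumed rate for one uncompressed HD ST 2110-20 stream
HEADROOM = 0.9      # keep ~10% spare for audio, metadata, PTP and bursts

for link_gbps in (10, 25, 100):
    streams = int(link_gbps * HEADROOM / STREAM_GBPS)
    print(f"{link_gbps} GbE: ~{streams} HD video streams per port")
# Prints roughly 3, 7 and 30: one 25GbE cable replaces two or three 10GbE.
```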


We see that the connectors and adapters are highly similar to their 10GbE counterparts, and then look at a cost analysis: what’s actually available on the market now and what is the price difference? Koji also shows us that FPGAs are available with enough capacity to manage several 25GbE ports per chip.

So if the cost seems achievable, perhaps the decision should come down to reliability. Fortunately, Koji has examined the bit error rates and shows data indicating that Reed-Solomon protection, called RS-FEC, is needed. Reed-Solomon is a well-established protection scheme which has been used in CDs, satellite transmissions and many other places where a lightweight algorithm for error recovery is needed. Koji goes into some detail here, explaining RS-FEC for 25GbE.
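To make the Reed-Solomon idea concrete, here’s a minimal, illustrative parity encoder over GF(2^8). To be clear about assumptions: the real 25GbE RS-FEC (IEEE 802.3 Clause 108) uses RS(528,514) over GF(2^10) and is implemented in hardware; this sketch only demonstrates the underlying principle of appending parity symbols which a decoder can later use to correct corrupted data.

```python
# Minimal Reed-Solomon parity generation over GF(2^8); illustrative only.
# Build log/antilog tables with the common primitive polynomial 0x11d.
GF_EXP = [0] * 512
GF_LOG = [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a: int, b: int) -> int:
    """Multiply two field elements via the log tables."""
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def rs_generator_poly(nsym: int) -> list:
    """g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1)); '-' is XOR here."""
    g = [1]
    for i in range(nsym):
        nxt = [0] * (len(g) + 1)
        for j, coef in enumerate(g):
            nxt[j] ^= coef                         # coef * x
            nxt[j + 1] ^= gf_mul(coef, GF_EXP[i])  # coef * a^i
        g = nxt
    return g

def rs_encode(msg: bytes, nsym: int) -> list:
    """Append nsym parity symbols: msg(x) * x^nsym mod g(x)."""
    gen = rs_generator_poly(nsym)
    rem = [0] * nsym
    for byte in msg:
        coef = byte ^ rem[0]
        rem = rem[1:] + [0]
        for i in range(nsym):
            rem[i] ^= gf_mul(gen[i + 1], coef)
    return list(msg) + rem

print(rs_encode(b"ST 2110 over 25GbE", 4))  # message then 4 parity symbols
```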

Koji has also looked into timing, covering synchronisation as well as jitter and wander. He presents the results of monitoring these parameters in 10GbE and 25GbE scenarios.

Finishing up with the physical advantages of moving to 25GbE, such as density and streams-per-port, Koji takes a moment to highlight many of the 25GbE products available at NAB as final proof that 25GbE is increasingly available for use today.

Watch now!

Copy of the presentation

Speaker

Koji Oyama
Director,
M3L

Video: Making Live Streaming More ‘Live’ with LL-CMAF

Squeezing streaming latency down to just a few seconds is possible with CMAF. Bitmovin guides us through what’s possible now and what’s yet to come.

CMAF represents an evolution of the tried and tested technologies HLS and DASH. Built upon the well-worn tenets of HTTP and offering massive scalability, these still-evolving technologies gave birth to Netflix and a whole thriving industry. But the push to reduce latency further and further has resulted in CMAF, which can deliver streams with latencies five to ten times lower.

Paul MacDougall is a Solutions Architect with Bitmovin, so he is well placed to explain the application of CMAF. Starting with a look at what we mean by low latency, he shows that it’s still quite possible to find HLS latencies of up to a minute, though more common latencies are now closer to 30 seconds. Five seconds is the golden target, since it matches many broadcast mechanisms including digital terrestrial transmission, so it’s no surprise that this is where low-latency CMAF is aimed.

CMAF itself is simply a format which unites HLS and DASH under one standard. It doesn’t, in and of itself, make your stream low latency. In fact, CMAF was born out of MPEG’s MP4 standard, officially called the ISO Base Media File Format (ISO BMFF). But you can use CMAF in a low-latency mode, which is what this talk focusses on.
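Because CMAF segments are plain ISO BMFF, their structure is easy to inspect. This sketch (ours, with a hypothetical filename) walks the top-level boxes of a segment; a low-latency segment shows many small ‘moof’/‘mdat’ pairs, one per chunk, instead of a single large pair.

```python
# Walk the top-level ISO BMFF boxes of a CMAF segment; illustrative sketch.
import struct

def walk_boxes(data: bytes):
    """Yield (type, offset, size) for each top-level box."""
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size == 1:    # 64-bit 'largesize' follows the 8-byte header
            size = struct.unpack(">Q", data[offset + 8:offset + 16])[0]
        elif size == 0:  # box extends to the end of the file
            size = len(data) - offset
        yield box_type.decode("ascii", "replace"), offset, size
        offset += size

with open("segment.cmfv", "rb") as f:  # hypothetical low-latency segment
    for box, offset, size in walk_boxes(f.read()):
        print(f"{box} at byte {offset}, {size} bytes")
# Expect a styp, then repeating moof/mdat pairs, one per chunk.
```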

Paul looks at what makes up the latency of a typical feed, discussing encoding times, playback latency and the other key contributors. With this groundwork laid, it’s time to look at the way CMAF is chunked and formatted, showing that the smaller chunk sizes allow the encoder and player to be more flexible, reducing several types of latency to only a few seconds.
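The effect of chunking is easy to model. Here’s a toy calculation (our assumed figures, not Paul’s) comparing a classic player that buffers whole segments with one that renders from the first CMAF chunk:

```python
# Toy end-to-end latency model; all figures are assumptions for illustration.
SEGMENT_S = 4.0        # assumed segment duration
CHUNK_S = 0.5          # assumed CMAF chunk duration
ENCODE_S = 0.5         # assumed encode/package delay
NETWORK_S = 0.3        # assumed delivery delay
BUFFER_SEGMENTS = 3    # classic players often buffer about three segments

classic = BUFFER_SEGMENTS * SEGMENT_S + ENCODE_S + NETWORK_S
chunked = CHUNK_S + ENCODE_S + NETWORK_S  # render from the first chunk

print(f"classic HLS/DASH: ~{classic:.1f}s   LL-CMAF: ~{chunked:.1f}s")
# ~12.8s versus ~1.3s in this idealised model; real LL-CMAF deployments
# land nearer 3-5s once player buffers and CDN behaviour are included.
```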

In order to take full advantage of CMAF, the player needs to understand CMAF, and Paul explains these adaptations before moving on to the limitations and challenges of using CMAF today. One important change, for instance, concerns bandwidth estimation. Chunked streaming players (i.e. HLS) have always timed the download of each chunk to get a feel for whether bandwidth was plentiful (the download was quicker than the time taken to play the chunk) or constrained (the chunk arrived slower than real-time). Based on this, the player could choose to increase or decrease the bandwidth of the stream it was accessing which, in HLS, means requesting a chunk from a different playlist. But when small chunks are delivered with real-time transfer techniques such as HTTP/1.1 Chunked Transfer, the chunks all arrive at the speed they are produced rather than the speed the link could sustain. This makes it very hard to make ABR work for LL-CMAF, though there are approaches being tested and trialled which are not mentioned in the talk.
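To see why the measurement breaks down, consider the classic throughput probe, sketched here with our own names (this is the general technique, not Bitmovin’s player code):

```python
# Classic segment-timing ABR probe, and why chunked transfer defeats it.
import time
import urllib.request

def probe_throughput(url: str, segment_duration_s: float) -> float:
    """Estimate link capacity by timing a whole-segment download."""
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    elapsed = time.monotonic() - start
    if elapsed < segment_duration_s:
        print("downloaded faster than real-time: bandwidth is plentiful")
    else:
        print("downloaded slower than real-time: switch to a lower rendition")
    return len(data) * 8 / elapsed  # bits per second

# With LL-CMAF the origin pushes each chunk over HTTP/1.1 Chunked Transfer
# the moment it is encoded, so a segment takes roughly its own duration to
# arrive no matter how fast the link is. 'elapsed' then reflects the
# encoder's pace rather than network capacity, and the estimate collapses
# to roughly the stream's own bitrate.
```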

Watch now!

Speaker

Paul MacDougall
Solutions Architect,
Bitmovin

Video: AV1 in video collaboration

AV1 is famous for its promise to deliver better compression than HEVC, but also for being far from real-time. This talk includes a demonstration of the world’s first real-time AV1 video call, showing that speed improvements are on the way and, indeed, some have already arrived.

Encoding is split into ‘tools’, so where you might hear of ‘H.264’ or ‘MPEG-2’, these are names for a whole set of different ways of looking at, and squeezing down, a picture. They also encompass the rules of how those tools should act together to form a cohesive encoding mechanism. (To an extent, such codecs tend to define only how the decode should happen, leaving encoding open to innovation.) AV1 contains many tools, many of which are complex and so require a lot of time even from today’s fast computers.

Cisco’s Thomas Davies, who created the BBC’s Dirac codec, now standardised as SMPTE’s VC-2, points out that whilst these tools are individually complex, AV1 has a great many of them, and this diversity of choice is actually a benefit for speed, in particular for the speed of software codecs.

After demonstrating the latency and bandwidth benefits of their live, bi-directional AV1 implementation against AVC, Thomas looks at the deployment possibilities of AV1. The talk finishes with a summary of the benefits AV1 brings, summing up why this new effort from the Alliance for Open Media is worth it.

Watch now!

Speaker

Thomas Davies
Principal Engineer,
Cisco Media Engineering, UK