Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this one, which introduce a large number of topics, are all too rare. For those starting out, or for anyone who needs to revise a topic, this really hits the mark, particularly as it covers many newer topics.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and gives an overview of the many parts of the SMPTE ST 2110 suite, before talking in more detail about the audio and metadata parts of the standard.

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use, such as BT.709, BT.2100 and P3, before turning to file formats. With the advent of HDR video and displays which can show very bright video, Eric takes some time to explain why this could represent a problem for visual health, as we don’t fully understand how displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity for improving workflows and adding more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.
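Both protocols share one core idea: the receiver tracks packet sequence numbers and asks the sender to retransmit only what was lost, rather than pausing the whole stream. A minimal sketch of that NACK-based approach (illustrative only; this is not the SRT or RIST wire format, and `deliver` is a hypothetical helper):

```python
def deliver(sent_packets, lost):
    """Simulate one round trip of NACK-based recovery.

    The receiver gets every sequence number except those in `lost`,
    reports the gaps back (NACKs), the sender retransmits just those,
    and the receiver reorders everything into a complete stream.
    """
    received = [seq for seq in sent_packets if seq not in lost]
    gaps = sorted(set(sent_packets) - set(received))  # NACKed sequence numbers
    received.extend(gaps)                             # retransmissions arrive
    return sorted(received), gaps

# Packet 2 is dropped in transit, NACKed and recovered:
stream, nacks = deliver([0, 1, 2, 3, 4], lost={2})
```

The latency cost of this scheme is the extra round trip per retransmission, which is why both protocols keep a configurable receive buffer sized to the expected link delay.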

Rounding off the primer, Linda Gedemer from Source Sound VR introduces immersive audio, measuring sound output (SPL) from speakers, and the interesting problem of front speakers in cinemas. They have long been placed behind the screen, which means screens have to be perforated to let the sound through, and that interferes with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers are having to move; with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications

Eric Gsell
Staff Engineer,
Dolby Laboratories

Linda Gedemer, PhD
Technical Director, VR Audio Evangelist,
Source Sound VR

Yvonne Thomas
Strategic Technologist,
Digital TV Group

Video: Codec Comparison from TCO and Compression Efficiency Perspective

AVC, now 16 years old, is long in the tooth but supported by billions of devices. The impetus to replace it comes from the drive to serve customers from a lower cost base and with a more capable platform. Cue the new contenders VVC and AV1 – not to mention HEVC. It’s no surprise they compress better than AVC (also known as MPEG-4 Part 10 and H.264), but do they deliver a cost-efficient, legally safe codec on which to build a business?

Thierry Fautier has done the measurements and presents them in this talk. Thierry explains that the tests were done using reference code which, though unoptimised for speed, should represent the best quality possible from each codec, and compared 1080p video; the results are reproduced in full in the IBC conference paper.

Licensing is one important topic, as HEVC is seen by some as a failed codec – not in terms of its compression, but rather in the reticence of many companies to deploy it, due to the business risk of uncertain licensing costs and/or the expense of the known licensing costs. VVC faces the challenge of entering the market while avoiding these concerns, which MPEG is determined to do.

Thierry concludes by comparing AVC against HEVC, AV1 and VVC in terms of deployment dates, deployed devices and the deployment environment. He looks at the challenge of moving large video libraries over to high-complexity codecs due to cost and time required to re-compress. The session ends with questions from the audience.
Watch now!
Speaker

Thierry Fautier
President-Chair at Ultra HD Forum,
VP Video Strategy, Harmonic

Video: Recent trends in live cloud video transcoding using FPGA acceleration

FPGAs are flexible, reprogrammable chips which can do certain tasks faster than CPUs, for example, video encoding and other data-intensive tasks. Once the domain of expensive hardware broadcast appliances, FPGAs are now available in the cloud allowing for cheaper, more flexible encoding.

In fact, according to NGCodec founder Oliver Gunasekara, video transcoding makes up a large percentage of cloud workloads, and this is increasing year on year. The demand for more video, and for more efficiently-compressed video, both push up encoding requirements. HEVC and AV1 need much more encoding power than AVC, but the reduced bitrate can be worth it as long as the transcoding is quick enough and at the right cost.
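That trade-off can be framed as simple arithmetic: bandwidth saved versus extra encoding spend. A back-of-envelope sketch with entirely made-up numbers (the `monthly_saving` helper and all figures are hypothetical, not from the talk):

```python
def monthly_saving(hours_streamed, avc_mbps, bitrate_reduction,
                   cdn_cost_per_gb, extra_encode_cost):
    """Net monthly saving from moving to a more efficient codec.

    Positive means the codec switch pays for itself: the CDN cost of
    the bandwidth saved exceeds the extra transcoding cost.
    """
    saved_mbps = avc_mbps * bitrate_reduction          # bandwidth saved per stream-second
    gb_saved = saved_mbps / 8 / 1000 * 3600 * hours_streamed  # Mbit -> GB over the month
    return gb_saved * cdn_cost_per_gb - extra_encode_cost

# Hypothetical: 10,000 viewer-hours/month at 6 Mbps AVC, a 40% bitrate
# cut, $0.02/GB delivery and $100 of extra encode cost.
net = monthly_saving(10000, 6, 0.4, 0.02, 100)
```

At small scale the extra encode cost dominates; at streaming-platform scale the bandwidth term does, which is the economics driving HEVC and AV1 adoption.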

Oliver looks at how the adoption of new codecs is likely to play out, which will directly feed into the quality of experience: start-up time, visual quality and buffering are all helped by reduced bitrate requirements.

It’s worth looking at the differences and benefits of CPUs, FPGAs and ASICs. The talk examines the CPU-time needed to encode HEVC showing the difficulty in getting real-time frame rates and the downsides of software encoding. It may not be a surprise that NGCodec was acquired by FPGA manufacturer Xilinx earlier in 2019. Oliver shows us the roadmap, as of June 2019, of the codecs, VQ iterations and encoding densities planned.

The talk finishes with a variety of questions, covering the applicability of machine learning to encoding (such as scene detection and upscaling algorithms), C++-to-Verilog conversion, and the need for a CPU for supporting tasks.

Watch now!

Speaker

Oliver Gunasekara
Former CEO, founder & president, NGCodec
Oliver is now an independent consultant.

Video: Towards a healthy AV1 ecosystem for UGC platforms


Twitch is an ambassador for new codecs and puts its money where its mouth is; it is one of the few live streaming platforms which streams with VP9 – and not only that, it does so with cloud FPGA acceleration, thanks to Xilinx’s acquisition of NGCodec.

As such, Twitch has a strong position on AV1. With such a tech-savvy crowd, it streams most of its videos at the highest bitrate (circa 6Mbps). With millions of concurrent streams, Twitch is highly motivated to reduce bandwidth where it can, and adopting new codecs is one way to do that.

Principal Research Engineer Yueshi Shen discusses Twitch’s stance on AV1 and the work the company is contributing in order to get the best product at the end of the process, which will help not only Twitch but the worldwide community. He starts with an overview of Twitch; while many of us are familiar with the site, its scale and needs may be new information that helps frame the rest of the talk.

Reduction in bitrate is a strong motivator, but so is the fact that supporting many codecs is a burden; AV1 promises the possibility of reducing the number of supported codecs/formats. Twitch’s active contribution to AV1 was also driven by ‘hand wave’ latency – a simple method of estimating the approximate latency of a link, which is naturally very important to a live streaming platform. This led to Twitch submitting a proposal for SWITCH_FRAME, a technique, accepted into AV1, which allows the player to change between the different quality/bitrate streams more frequently. This results in a better experience for the user as well as reduced bitrate and buffering.
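The benefit of more frequent switch points is easy to quantify: a player can only change rendition at a switch point, so the wait between deciding to switch and actually switching depends on how often those points occur. A back-of-envelope sketch with hypothetical intervals (not Twitch’s implementation):

```python
import math

def next_switch_delay(request_t: float, interval_s: float) -> float:
    """Delay from a switch request at time request_t until the next
    switch point, given switch points every interval_s seconds."""
    return math.ceil(request_t / interval_s) * interval_s - request_t

# Hypothetical: keyframes every 2 s vs. switch frames every 0.5 s.
# A request at t = 1.2 s waits 0.8 s with keyframes only, but
# 0.3 s when switch frames are available four times as often.
keyframe_wait = next_switch_delay(1.2, 2.0)
switch_frame_wait = next_switch_delay(1.2, 0.5)
```

Averaged over random request times, the expected wait is half the interval, so quadrupling the switch-point frequency cuts the average rendition-change delay by a factor of four without re-sending a full keyframe each time.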

Yueshi then looks at the projected AV1 deployment roadmap and discusses when GPU/hardware support will be available. The legal aspect of AV1 – which promises to be a free-to-use codec – is also discussed, with the news that a patent pool has formed around AV1.

The talk finishes with a Q&A.

Watch now!

Speaker

Yueshi Shen
Principal (Level 7) Research Engineer & Engineering Manager,
Twitch