Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill, whatever their starting point. Videos like this one, which introduce a large number of topics in a single sitting, are far too rare. For those starting out, or those who need to revise a topic, it really hits the mark, particularly as it takes in many newer subjects.

John Mailhot takes the lead on SMPTE 2110 explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained and also gives an overview of the many parts of the SMPTE ST 2110 suite. He talks in more detail about the audio and metadata parts of the standard suite.
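
To make the synchronisation idea concrete, here is a minimal Python sketch (not from the video): each essence flow stamps its RTP packets with timestamps derived from the same PTP wall clock, scaled to that essence's media clock rate, so a receiver can re-align video and audio that travelled as separate streams. The function name and the example timestamp are illustrative only.

```python
# Minimal illustration (not from the video): in ST 2110 each essence flow's
# RTP timestamps are derived from the same PTP (IEEE 1588) clock, counted
# from the PTP epoch and scaled to that essence's media clock rate.
def rtp_timestamp(ptp_time_s: float, media_clock_hz: int) -> int:
    """Scale PTP time to the media clock and wrap into the 32-bit RTP field."""
    return int(ptp_time_s * media_clock_hz) % 2**32

ptp_now = 1_700_000_000.123456               # hypothetical PTP time in seconds
video_ts = rtp_timestamp(ptp_now, 90_000)    # ST 2110-20 video uses a 90 kHz clock
audio_ts = rtp_timestamp(ptp_now, 48_000)    # ST 2110-30 audio commonly uses 48 kHz
# A receiver can map both timestamps back to the same instant and re-align the flows.
```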

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour spaces, the CIE model and the spaces we commonly use such as 709, 2100 and P3, before turning to file formats. With the advent of HDR video and displays capable of showing very bright images, Eric takes some time to explain why this could represent a problem for visual health, as we don't yet fully understand how such displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and how they are standardised.
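
As a concrete taste of the HDR side of that discussion, below is a minimal sketch of the PQ (SMPTE ST 2084) transfer function used by BT.2100, mapping a code value to display light in nits. The constants come from the specification, but the snippet itself is an illustration rather than anything shown in the talk.

```python
# Hedged sketch of the PQ (SMPTE ST 2084) EOTF used by BT.2100 HDR, showing
# how a normalised code value maps to display luminance.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_eotf(v: float) -> float:
    """Map a normalised PQ code value (0..1) to luminance in cd/m^2 (nits)."""
    p = v ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

print(pq_eotf(0.0))   # 0 nits
print(pq_eotf(1.0))   # 10000 nits, the PQ peak
```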

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and to add more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces video codecs AV1 and VVC both, in their own way, successors to HEVC/h.265 as well as the two transport protocols SRT and RIST which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.
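
For a feel of what SRT and RIST have in common, here is a toy Python sketch (not real SRT or RIST code) of the retransmission idea both are built on: the sender keeps recently sent packets and re-sends one when the receiver reports a gap, but only while the configured latency budget still allows it. The class, the method names and the 120 ms figure are purely illustrative.

```python
# Toy illustration of the ARQ idea shared by SRT and RIST: buffer what was
# sent, retransmit on a NACK, but only within the agreed latency window.
import time

class ArqSender:
    def __init__(self, latency_budget_s: float = 0.12):
        self.latency_budget_s = latency_budget_s
        self.sent = {}  # sequence number -> (send_time, payload)

    def send(self, seq: int, payload: bytes) -> bytes:
        self.sent[seq] = (time.monotonic(), payload)
        return payload  # hand to the network

    def handle_nack(self, seq: int):
        entry = self.sent.get(seq)
        if entry is None:
            return None
        send_time, payload = entry
        # Only retransmit if it can still arrive within the latency window.
        if time.monotonic() - send_time < self.latency_budget_s:
            return payload
        return None  # too late; the receiver will conceal the loss
```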

Rounding off the primer is Linda Gedemer from Source Sound VR, who introduces immersive audio, measuring sound output (SPL) from speakers and the interesting problem of front speakers in cinemas. These have long sat behind the screen, which means screens have to be perforated to let the sound through, and that in turn interferes with the sound itself. Now that cinema screens are becoming solid displays, not completely dissimilar to large outdoor video screens, the speakers have to move; with them out of the line of sight, how can we keep the sound in the right place for the audience?
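
Since SPL measurement comes up, a quick illustrative calculation (not from the talk): sound pressure level is simply the measured pressure expressed in decibels relative to the standard 20 µPa reference.

```python
# Minimal sketch: sound pressure level relative to the 20 micropascal reference.
import math

P_REF = 20e-6  # reference pressure in pascals

def spl_db(p_rms_pa: float) -> float:
    return 20 * math.log10(p_rms_pa / P_REF)

print(spl_db(1.0))    # ~94 dB SPL, a common calibration level
print(spl_db(20e-6))  # 0 dB SPL, the reference itself
```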

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group

Video: Codec Comparison from TCO and Compression Efficiency Perspective

AVC, now 16 years old, is long in the tooth but supported by billions of devices. The impetus to replace it comes from the drive to serve customers from a lower cost base and with a more capable platform. Cue the new contenders VVC and AV1 – not to mention HEVC. It's no surprise they compress better than AVC (also known as MPEG-4 AVC and h.264), but do they deliver a cost-efficient, legally safe codec on which to build a business?

Thierry Fautier has done the measurements and presents them in this talk. Thierry explains that the tests were done using reference code which, though unoptimised for speed, should represent the best quality possible from each codec. The comparisons were made on 1080p video and the full results are reproduced in the IBC conference paper.

Licensing is an important topic since HEVC is seen by some as a failed codec, not in terms of its compression but in the reticence of many companies to deploy it, driven by the business risk of uncertain licensing costs and/or the expense of the known ones. VVC faces the challenge of entering the market while avoiding these concerns, something MPEG is determined to achieve.

Thierry concludes by comparing AVC against HEVC, AV1 and VVC in terms of deployment dates, deployed devices and the deployment environment. He looks at the challenge of moving large video libraries over to high-complexity codecs, given the cost and time required to re-compress. The session ends with questions from the audience.
Watch now!
Speaker

Thierry Fautier
President-Chair at Ultra HD Forum,
VP Video Strategy, Harmonic

Video: VVC, EVC, LCEVC, WTF? – An update on the next hot codecs from MPEG


The next-gen codecs are on their way: VVC, EVC and LCEVC. But given we're still getting AV1 up and running, why do we need them and when will they be ready?

MPEG are working hard on 3 new video codecs, one in conjunction with the ITU, so Christian Feldmann from Bitmovin is here to explain what each does, the target market, whether it will cost money and when the standard will be finalised.

VVC – Versatile Video Coding – is a fully featured video codec being worked on as a successor to h.265; indeed, the ITU call it h.266 and MPEG call it MPEG-I Part 3. Christian explains the ways this codec outperforms its peers, including a flexible block partitioning system, motion prediction which can overlap neighbouring blocks and triangle prediction, to name but three.
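
To give a flavour of flexible block partitioning, the sketch below is a toy, not the VVC algorithm: it recursively tries quad and binary splits and keeps whichever a supplied cost function prefers. Real VVC also offers ternary splits and evaluates true rate-distortion cost; the `cost` callback here is a placeholder you would provide.

```python
# Toy sketch of flexible block partitioning: try quad and binary splits
# recursively and keep whichever the supplied cost function prefers.
def partition(x, y, w, h, cost, min_size=4):
    """Return a list of (x, y, w, h) leaf blocks covering the input block."""
    if w <= min_size or h <= min_size:
        return [(x, y, w, h)]
    candidates = {
        "none": [(x, y, w, h)],
        "quad": [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                 (x, y + h // 2, w // 2, h // 2), (x + w // 2, y + h // 2, w // 2, h // 2)],
        "split_h": [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)],
        "split_v": [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)],
    }
    best = min(candidates.values(), key=lambda blocks: sum(cost(*b) for b in blocks))
    if best == candidates["none"]:
        return best
    return [leaf for b in best for leaf in partition(*b, cost, min_size)]

# Example with a made-up cost that charges blocks in one 'detailed' corner more.
cost = lambda x, y, w, h: w * h * (4 if (x < 32 and y < 32) else 1)
print(len(partition(0, 0, 64, 64, cost)))  # prints 4: the corner gets its own block
```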

EVC is Essential Video Coding which, intriguingly, offers a baseline profile which is free to use and a main profile which requires licences. The thinking here is that if you have licensing issues with a particular tool, you have the option of simply turning it off, which could give you extra leverage in patent discussions.

Finally, LCEVC – Low Complexity Enhancement Video Coding – allows enhancement layers to be added on top of existing bitstreams. This can allow UHD to be delivered where only HD was possible before, for example by sharing the decoding between an ASIC and the CPU.
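
The layering idea can be shown with a toy example (not LCEVC itself): a low-resolution base layer plus a residual that restores the detail when added back after upscaling. In real LCEVC both layers are themselves compressed; here the residual is kept losslessly just to make the reconstruction obvious.

```python
# Toy sketch of the enhancement-layer idea: a low-resolution base layer plus
# the residual between the original and the upscaled base.
import numpy as np

def downscale(img):   # crude 2x box downscale
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upscale(img):     # crude 2x nearest-neighbour upscale
    return img.repeat(2, axis=0).repeat(2, axis=1)

original = np.random.rand(8, 8)               # stand-in for a full-resolution frame
base_layer = downscale(original)              # what the existing (e.g. HD) codec carries
enhancement = original - upscale(base_layer)  # residual carried by the enhancement layer
reconstructed = upscale(base_layer) + enhancement
assert np.allclose(reconstructed, original)
```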

These all have different use cases which Christian explains well, plus he brings some test results along showing the percentage improvement over today’s HEVC encoding.

Watch now!
Speaker

Christian Feldmann
Codec Engineer,
Bitmovin

Video: Versatile Video Coding (VVC) Standard on the Final Stretch

We’ve got used to a world of near-universal AVC/h.264 support, but in our desire to deliver better services, we need new codecs. VVC is nearing completion and is attracting increasing attention with its ability to deliver better compression than HEVC in a range of different situations.

Benjamin Bross from the Fraunhofer Institute talks at Mile High Video 2019 about what Versatile Video Coding (VVC) is and the different ways it achieves these results. Benjamin starts by introducing the codec, teasing us with details of machine learning which is used for block prediction and then explains the targets for the video codec.

Next we look at bitrate curves showing how encoding efficiency has improved over the years and where we can expect VVC to fit in, before seeing results from testing the codec as it stands today, which already show an improvement in compression. Encoding complexity and speed are also compared: as expected, complexity has increased and speed has dropped. This is always a challenge at the start of a new codec standard, but it is typically solved in due course. Benjamin also looks at the effect of resolution and frame rate on compression efficiency.
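
Bitrate curves like these are usually compared with the Bjøntegaard delta-rate metric; the sketch below (with hypothetical numbers, not Benjamin's results) shows the standard approach of fitting log-bitrate against quality for two codecs and integrating the difference over the shared quality range.

```python
# Hedged sketch of the Bjøntegaard delta-rate (BD-rate) calculation used to
# compare rate-distortion curves. Input numbers below are purely hypothetical.
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average % bitrate change of 'test' vs 'ref' over the shared PSNR range."""
    lr_ref, lr_test = np.log(rates_ref), np.log(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)     # cubic fit, log-rate vs PSNR
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo, hi = max(min(psnr_ref), min(psnr_test)), min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100          # negative means a bitrate saving

# Hypothetical rate (kbps) / PSNR (dB) points for two codecs: prints about -30%.
print(bd_rate([1000, 2000, 4000, 8000], [34, 37, 40, 43],
              [700, 1400, 2800, 5600], [34, 37, 40, 43]))
```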

Every codec has sets of tools which can be tuned and used in certain combinations to deal with different types of content so as to optimise performance. VVC is no exception and Benjamin looks at some of the highlights:

  • Screen Content Coding – specific tools to encode computer graphics rather than ‘natural’ video. With the sharp edges on computer screens, different techniques can produce better results
  • Reference Picture Rescaling – allows resolution changes within the video stream. This can also be used to deliver multiple resolutions at the same time (a toy sketch of the idea follows this list)
  • Independent Sub-Pictures – separate, independently decodable pictures within the same raster. This allows, for instance, a large resolution to be sent while letting decoders decode only the part of the picture they need.
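
As promised above, here is a toy sketch of the Reference Picture Rescaling idea (not the actual VVC tool, which uses proper resampling filters): when the stream changes resolution, the stored reference frame is resampled to the new size before being used for prediction. The resolutions and the nearest-neighbour resampler are illustrative only.

```python
# Toy sketch: rescale a stored reference frame when the stream's resolution
# changes, so prediction can continue at the new size.
import numpy as np

def resample(frame, new_h, new_w):
    """Crude nearest-neighbour resample standing in for a real filter."""
    ys = (np.arange(new_h) * frame.shape[0] / new_h).astype(int)
    xs = (np.arange(new_w) * frame.shape[1] / new_w).astype(int)
    return frame[np.ix_(ys, xs)]

reference = np.random.rand(540, 960)          # previously decoded frame at 960x540
# The next frame arrives at 1920x1080, so the reference is rescaled first and
# prediction (here just a copy) happens at the new resolution.
rescaled_reference = resample(reference, 1080, 1920)
predicted = rescaled_reference.copy()
print(predicted.shape)                         # (1080, 1920)
```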

Watch now!

Speaker

Benjamin Bross
Product Manager,
Fraunhofer HHI