Video: VVC, EVC, LCEVC, WTF? – An update on the next hot codecs from MPEG


The next-gen codecs are on their way: VVC, EVC and LCEVC. But given we’re still getting AV1 up and running, why do we need them and when will they be ready?

MPEG are working hard on three new video codecs, one of them in conjunction with the ITU, so Christian Feldmann from Bitmovin is here to explain what each does, its target market, whether it will cost money and when each standard will be finalised.

VVC – Versatile Video Coding – is a fully featured video codec being developed as a successor to H.265; indeed, the ITU call it H.266 and MPEG call it MPEG-I Part 3. Christian explains the ways this codec outperforms its peers, including a flexible block-partitioning system, motion prediction which can overlap neighbouring blocks, and triangle prediction, to name but three.
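VVC’s partitioning is far more flexible than HEVC’s quadtree, adding binary and ternary splits, but the basic recursive idea can be sketched. The following toy Python sketch is a hypothetical illustration, not VVC’s actual algorithm: a region is split quadtree-style wherever a ‘detail’ test says a large block won’t do, so flat areas keep large blocks and detailed areas get small ones.

```python
# Hypothetical sketch of recursive block partitioning (simplified quadtree).
# 'is_flat' is an assumed stand-in for the encoder's real split decision.

def partition(x, y, size, is_flat, min_size=4):
    """Return a list of (x, y, size) blocks covering the square region."""
    if size <= min_size or is_flat(x, y, size):
        return [(x, y, size)]          # keep this block whole
    half = size // 2
    blocks = []
    for dy in (0, half):               # recurse into the four quadrants
        for dx in (0, half):
            blocks += partition(x + dx, y + dy, half, is_flat, min_size)
    return blocks

# Example: pretend only the top-left quarter of a 16x16 area is detailed.
detailed = lambda x, y, size: not (x < 8 and y < 8)
blocks = partition(0, 0, 16, detailed)
print(blocks)  # small blocks top-left, three large 8x8 blocks elsewhere
```

The real codec makes these split decisions by rate-distortion cost rather than a simple test, but the recursive structure is the same.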

EVC is Essential Video Coding which, intriguingly, offers a baseline profile which is free to use and a main profile which requires licences. The thinking here is that if you have licensing issues with a feature, you have the option of simply turning it off, which could give you extra leverage in patent discussions.

Finally, LCEVC – Low Complexity Enhancement Video Coding – allows enhancement layers to be added on top of existing bitstreams. Because decoding can be shared between, for example, an ASIC and the CPU, this can enable UHD on devices where only HD was previously possible.

These all have different use cases which Christian explains well, plus he brings some test results along showing the percentage improvement over today’s HEVC encoding.

Watch now!

Speaker

Christian Feldmann
Codec Engineer,
Bitmovin

Video: Deep Neural Networks for Video Coding

Artificial Intelligence, Machine Learning and related technologies aren’t going to go away…the real question is where they are best put to use. Here, Dan Grois from Comcast shows their transformative effect on video.

Some of us can have a passable attempt at explaining what neural networks are, but to start to understand how this technology works, it helps to understand how our own neural networks operate, and this is where Dan starts his talk. By walking us through the workings of our own bodies, he explains how we can get computers to mimic parts of this process. This all starts with creating a single neuron, but Dan goes on to explain the multi-layer perceptron, built by networking many neurons together.
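The single neuron and the multi-layer perceptron Dan describes can be sketched in a few lines of Python. This is a minimal illustration with made-up weights, not anything from the talk itself:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs plus a
    bias, passed through a sigmoid activation (the 'firing' analogy)."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_matrix, biases):
    """A layer is simply many neurons sharing the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A tiny two-layer 'multi-layer perceptron' with illustrative weights:
x = [0.5, -0.2]
hidden = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])
output = layer(hidden, [[2.0, -1.0]], [0.0])
print(output)  # a single value between 0 and 1
```

Training – adjusting those weights from data – is what turns this structure into something useful, but the forward pass above is the whole computational core.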

As we see, piece by piece, examples of what these networks are able to do, we start to see how they can be applied to video. These techniques can be applied to many parts of the HEVC encoding process: for instance, extrapolating multiple reference frames, generating interpolation filters, predicting variations and so on. Doing this, we can achieve around a 10% encoding improvement. Indeed, a Deep Neural Network (DNN) can entirely replace the DCT (Discrete Cosine Transform) widely used in MPEG codecs and beyond. Upsampling and downsampling can also be significantly improved – something that has already been successfully demonstrated in the market.
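As a reminder of what a learned transform would be replacing, here is a naive 1-D DCT-II in Python – a simple textbook sketch, not the optimised form any codec actually uses:

```python
import math

def dct_1d(samples):
    """Naive 1-D DCT-II: expresses a block of samples as a sum of cosine
    frequencies, concentrating energy in a few low-frequency terms."""
    n = len(samples)
    return [sum(s * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, s in enumerate(samples))
            for k in range(n)]

# A flat block puts all its energy in the first (DC) coefficient:
coeffs = dct_1d([10, 10, 10, 10])
print(coeffs)  # DC coefficient 40.0, AC coefficients all ~0
```

That energy compaction is what makes the DCT so compressible; a DNN replacing it must learn a transform that compacts energy at least as well.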

Dan isn’t shy of tackling the reasons we haven’t seen the above gains widely in use: namely memory requirements and high computational cost. But this work is foundational in ensuring that these issues are overcome at the earliest opportunity and in optimising how these techniques can best be implemented today.

The last part of the talk is an interesting look at the logical conclusion of this technology.

Watch now!

Speaker

Dan Grois
Principal Researcher
Comcast

Video: The critical importance of user experience

Using the TV used to be very simple, but in recent years the ways we view content, and the types of interface we use to do so, have proliferated. So how can we keep these interfaces simple and effective?

This panel from IBC’s Content Everywhere Hub, hosted by Ian Nock, Chair of IET Media, looks at how to make video ‘just work’ as the panellists share their experiences.

Gerald Zankl from Bitmovin makes the point that in this transitioning market there is still space for linear news channels, even in the midst of our video-on-demand-based market.

“It becomes a one-to-one conversation,” agrees Renato Bonomini from ContentWise as he explains that there’s a lot of value in having a service you can turn on and rely on to give you content you want through personalisation. “Search is the failure of recommendations”, Renato concludes.

Social media is another good example of why recommendation engines are important, explains Gerald. With so much information coming in, it’s not practical and would be boring to simply go through them arbitrarily. Similarly, video services with hundreds of thousands of assets also require a system to manage which content to surface.

Simon Leadlay from You.i TV points out that “customers’ willingness to pay for 250 services is zero”, meaning people find value in only one or two services and are very willing to move to another app if their experience isn’t good enough.

The panel discusses the relevance of weekly episode releases in 2019 and then moves to bringing multiple companies together to form one service.

Bitmovin’s Gerald discusses giving feedback to the user if, for example, you can detect there are issues with the platform/local wifi etc. Giving them actionable feedback allows them to improve their experience, either directly or by pressuring their providers.

Simon explains that the role of all of the companies on the panel is to fight against challenges such as market fragmentation (CDNs, codecs) so that no one notices they’ve done their job.

This panel concludes with a discussion on (actionable) analytics.

Watch now!

Speakers

Gerald Zankl
Global Head of Inside Sales,
Bitmovin
Renato Bonomini
VP Global PreSales,
ContentWise
Simon Leadlay
VP, Product Market Development
You.i TV
Ian Nock
Chair, IET Media
Chair, Ultra HD Forum

Video: SRT – How the hot new UDP video protocol actually works under the hood

In the West, RTMP is seen as a dying protocol, so the hunt is on for a replacement which can be as widely adopted but keep some of its best parts, including relatively low latency. SRT is a protocol for Secure Reliable Transport of streams over the internet. Does it have a role to play, and how does it work?

Alex Converse from Twitch picks up the gauntlet to dive deep into the workings of SRT to show how it compares to RTMP and specifically how it improves upon it.

RTMP falls short in many ways; two to focus on are that the spec has stopped moving forward and that it doesn’t work well over problematic networks. So Alex takes a few minutes to explain where SRT has come from, the importance of it being open source, and how to get hold of the code and more information.

Now Alex starts his dive into the detail, reminding us about UDP, TS packets and Ethernet MTUs as he goes down. We look at how SRT data packets are formed, which helps explain some of the features and sets us up for a more focussed look.
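As a flavour of that detail, a bit of back-of-the-envelope arithmetic shows why SRT’s commonly used payload size of 1316 bytes falls out of the Ethernet MTU and the 188-byte MPEG-TS packet (header sizes below assume IPv4 with no options):

```python
# How many 188-byte MPEG-TS packets fit in one SRT packet over Ethernet?

ETHERNET_MTU = 1500   # typical Ethernet payload limit, in bytes
IPV4_HEADER = 20      # IPv4 header without options
UDP_HEADER = 8
SRT_HEADER = 16       # SRT data-packet header

room = ETHERNET_MTU - IPV4_HEADER - UDP_HEADER - SRT_HEADER
ts_packets = room // 188
payload = ts_packets * 188

print(room, ts_packets, payload)  # 1456 bytes of room -> 7 TS packets -> 1316 bytes
```

Seven TS packets per SRT packet keeps everything inside a single unfragmented UDP datagram, which matters because fragmentation would multiply the cost of every lost frame on the wire.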

SRT, like other similar protocols which create their resilience by retransmitting missing packets, needs to use buffers in order to have a chance of sending the missing data before it’s needed at the decoder. Alex takes us through how the sender and receiver buffers work so we understand the behaviour in different situations.
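The receiver side of that buffering can be sketched in a few lines. This is a toy model of the general retransmission idea, not SRT’s actual implementation (which also tracks timestamps, latency windows and loss lists):

```python
class ReceiverBuffer:
    """Toy receiver buffer: packets are stored by sequence number, any
    gap behind the newest packet is reported as missing (to be NAKed),
    and only contiguous packets are released to the decoder."""

    def __init__(self):
        self.packets = {}        # seq -> payload
        self.next_expected = 0   # next seq the decoder needs

    def receive(self, seq, payload):
        """Store a packet; return the missing seqs to request again."""
        self.packets[seq] = payload
        highest = max(self.packets)
        return [s for s in range(self.next_expected, highest)
                if s not in self.packets]

    def deliver(self):
        """Hand contiguous packets to the decoder, stopping at a gap."""
        out = []
        while self.next_expected in self.packets:
            out.append(self.packets.pop(self.next_expected))
            self.next_expected += 1
        return out

buf = ReceiverBuffer()
buf.receive(0, "p0")
missing = buf.receive(2, "p2")   # packet 1 never arrived
print(missing)                   # [1] -- would trigger a NAK for seq 1
print(buf.deliver())             # only ["p0"]: delivery stalls at the gap
buf.receive(1, "p1")             # the retransmission arrives
print(buf.deliver())             # ["p1", "p2"] -- the gap is filled
```

The buffer’s depth is effectively the latency budget: the deeper it is, the more time there is for a NAK and retransmission to fill a gap before the decoder needs that packet.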

Fundamental to the whole protocol are packet acknowledgements (ACKs) and negative acknowledgements (NAKs), which feature heavily before we discuss handshaking as we start our ascent from the depths of the protocol. As much as acknowledgements provide the reliability, encryption provides the ‘secure’ in Secure Reliable Transport. We look at the approach taken to encryption and how it relates to current encryption for websites.

Finally, Alex answers a number of questions from the audience as he concludes this talk from the San Francisco Video Tech meet-up.

Watch now!

Speaker

Alex Converse
Streaming Video Software Engineer,
Twitch