Video: Deep Neural Networks for Video Coding

Artificial Intelligence, Machine Learning and related technologies aren’t going to go away…the real question is where they are best put to use. Here, Dan Grois from Comcast shows their transformative effect on video.

Some of us could have a passable attempt at explaining what neural networks are, but to understand how this technology works, it helps to first understand how our own neural networks work, and this is where Dan starts his talk. By walking us through the workings of our own bodies, he explains how we can get computers to mimic parts of this process. It all starts with modelling a single neuron, and Dan then explains the multi-layer perceptron, built by networking many of them together.
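
To make those building blocks concrete, here is a minimal sketch (my own illustration, not code from the talk) of a single artificial neuron and a tiny multi-layer perceptron built from layers of them:

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: a weighted sum of the inputs plus a bias,
    squashed by a non-linearity (a sigmoid here)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def layer(x, W, b):
    """One layer of a multi-layer perceptron is simply many neurons in
    parallel: one row of W and one element of b per neuron."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

# A tiny two-layer network: 4 inputs -> 3 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
x = rng.random(4)                          # example input vector
W1, b1 = rng.random((3, 4)), rng.random(3)
W2, b2 = rng.random((1, 3)), rng.random(1)
print(layer(layer(x, W1, b1), W2, b2))     # the network's output
```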

As we see, piece by piece, examples of what these networks are able to do, we start to see how they can be applied to video. These techniques can be applied to many parts of the HEVC encoding process: extrapolating multiple reference frames, generating interpolation filters, predicting variations and so on. Doing this, encoding improvements of around 10% can be achieved. Indeed, a Deep Neural Network (DNN) can completely replace the DCT (Discrete Cosine Transform) widely used in MPEG codecs and beyond. Upsampling and downsampling can also be significantly improved – something that has already been successfully demonstrated in the market.
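
For context on what a learned transform would be replacing, here is the textbook 8x8 DCT-II that block-based codecs apply (in integer-approximated form). This is purely an illustration of the conventional transform, not anything from Dan's talk:

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II basis matrix (the transform that, in
    integer-approximated form, underpins MPEG-family codecs)."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

def dct2(block):
    """2D DCT of a square block: the 1D transform applied to rows, then columns."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

# An 8x8 block of pixel values; a trained DNN would take the place of the
# fixed matrix multiplications below.
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
print(np.round(dct2(block), 1))
```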

Dan isn’t shy of tackling the reasons we haven’t seen the above gains widely in use: memory requirements and high computational cost. But this work is foundational in ensuring that these issues are overcome at the earliest opportunity and in optimising today’s implementations as far as possible.

The last part of the talk is an interesting look at the logical conclusion of this technology.

Watch now!

Speaker

Dan Grois
Principal Researcher
Comcast

Video: The critical importance of user experience

Using the TV used to be very simple, but in recent years the ways we view content, and the types of interface we use to do so, have proliferated. So how can we keep these interfaces simple and effective?

This panel from IBC’s Content Everywhere Hub, hosted by Ian Nock, Chair of IET Media, looks at how to make video ‘just work’ as the panellists share their experiences.

Gerald Zankl from Bitmovin makes the point that, even in the midst of our video-on-demand-based market, there is still space for linear news channels.

“It becomes a one-to-one conversation,” agrees Renato Bonomini from ContentWise as he explains that there’s a lot of value in having a service you can turn on and rely on to give you the content you want through personalisation. “Search is the failure of recommendations”, Renato concludes.

Social media is another good example of why recommendation engines are important, explains Gerald. With so much information coming in, it’s neither practical nor enjoyable to simply go through it all arbitrarily. Similarly, video services with hundreds of thousands of assets also require a system to decide which content to surface.

Simon Leadlay from You.i TV points out that “customers’ willingness to pay for 250 services is zero”, meaning people find value in one or two services and are very willing to move to another app if their experience isn’t good enough.

The panel discusses the relevance of weekly episode releases in 2019 and then moves to bringing multiple companies together to form one service.

Bitmovin’s Gerald discusses giving feedback to the user if, for example, you can detect issues with the platform or their local wifi. Giving them actionable feedback allows them to improve their experience, either directly or by putting pressure on their providers.

Simon explains that the role of all the companies on the panel is to fight against challenges such as the fragmentation of the market (CDNs, codecs) so that no one notices they’ve done their job.

This panel concludes with a discussion on (actionable) analytics.

Watch now!

Speakers

Gerald Zankl
Global Head of Inside Sales,
Bitmovin
Renato Bonomini
VP Global PreSales,
ContentWise
Simon Leadlay
VP, Product Market Development
You.i TV
Ian Nock
Chair, IET Media
Chair, Ultra HD Forum

Video: SRT – How the hot new UDP video protocol actually works under the hood

In the west, RTMP is seen as a dying protocol, so the hunt is on for a replacement which can be as widely adopted but keep some of its best parts, including relatively low latency. SRT is a protocol for Secure, Reliable Transport of streams over the internet, so does it have a role to play, and how does it work?

Alex Converse from Twitch picks up the gauntlet to dive deep into the workings of SRT to show how it compares to RTMP and specifically how it improves upon it.

RTMP fails in many ways; two to focus on are that the spec has stopped moving forward and that it doesn’t work well over problematic networks. So Alex takes a few minutes to explain where SRT has come from, the importance of it being open source, and how to get hold of the code and more information.

Now Alex starts his dive into the detail, reminding us about UDP, TS packets and Ethernet MTUs as he goes. We look at how SRT data packets are formed, which helps explain some of the features and sets us up for a more focused look.
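
As a rough guide to what those data packets look like, here is a sketch of decoding the 16-byte SRT data packet header, based on the publicly documented protocol draft rather than anything shown in the talk:

```python
import struct

def parse_srt_data_header(packet: bytes):
    """Rough decode of the 16-byte SRT data packet header (per the public
    SRT protocol draft); illustrative only, not production code."""
    w0, w1, timestamp, dst_socket_id = struct.unpack("!IIII", packet[:16])
    return {
        "is_control": bool(w0 >> 31),            # F bit: 0 = data, 1 = control
        "seq_number": w0 & 0x7FFFFFFF,           # 31-bit packet sequence number
        "packet_position": (w1 >> 30) & 0x3,     # PP: position within a message
        "in_order": bool((w1 >> 29) & 0x1),      # O: deliver in order
        "key_flags": (w1 >> 27) & 0x3,           # KK: which key encrypted the payload
        "retransmitted": bool((w1 >> 26) & 0x1), # R: set when this is a resend
        "msg_number": w1 & 0x03FFFFFF,           # 26-bit message number
        "timestamp_us": timestamp,               # microseconds since the socket started
        "dst_socket_id": dst_socket_id,
        "payload": packet[16:],                  # e.g. up to 7 x 188-byte TS packets
    }
```

Feeding the first 16 bytes captured off the wire into a function like this is enough to see the sequence numbering and retransmission flag that the rest of the talk builds on.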

SRT, like other similar protocols which create their resilience by retransmitting missing packets, needs to use buffers in order to have a chance of sending the missing data before it’s needed at the decoder. Alex takes us through how the sender and receiver buffers work to help us understand the behaviour in different situations.
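
As a crude model of the receiver’s side of this (my own simplification, not Alex’s description of the real implementation), packets sit in a buffer for a fixed latency so that gaps in the sequence numbers can be NAKed and retransmitted before the decoder needs them:

```python
class ReceiveBuffer:
    """Toy model of an SRT-style receiver buffer: packets are delayed by a
    fixed latency so there is time to retransmit anything that went missing.
    Illustrative simplification only."""

    def __init__(self, latency_ms=120):
        self.latency_ms = latency_ms
        self.buffer = {}    # seq -> (deliver_at_ms, payload)
        self.next_seq = 0   # next sequence number owed to the decoder

    def on_packet(self, seq, payload, now_ms):
        # Each packet is scheduled for delivery one latency window after it
        # arrives, leaving room for retransmissions of anything lost.
        self.buffer.setdefault(seq, (now_ms + self.latency_ms, payload))

    def nak_list(self, highest_seq_seen):
        """Gaps in the sequence numbers: these get NAKed back to the sender."""
        return [s for s in range(self.next_seq, highest_seq_seen + 1)
                if s not in self.buffer]

    def pop_ready(self, now_ms):
        """Hand packets to the decoder in order once their deadline passes
        (a fuller model would also skip packets that never arrived)."""
        out = []
        while self.next_seq in self.buffer and self.buffer[self.next_seq][0] <= now_ms:
            out.append(self.buffer.pop(self.next_seq)[1])
            self.next_seq += 1
        return out

buf = ReceiveBuffer(latency_ms=120)
buf.on_packet(0, b"frame0", now_ms=0)
buf.on_packet(2, b"frame2", now_ms=5)      # packet 1 is missing
print(buf.nak_list(highest_seq_seen=2))    # -> [1], ask the sender to resend it
print(buf.pop_ready(now_ms=130))           # -> [b"frame0"] once its deadline passes
```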

Fundamental to the whole protocol are packet acknowledgements and negative acknowledgements, which feature heavily before we discuss handshaking as we start our ascent from the depths of the protocol. Just as acknowledgements provide the ‘reliable’, encryption provides the ‘secure’ in Secure Reliable Transport. We look at the approach taken to encryption and how it relates to the encryption currently used for websites.

Finally, Alex answers a number of questions from the audience as he concludes this talk from the San Francisco Video Tech meet-up.

Watch now!

Speaker

Alex Converse
Streaming Video Software Engineer,
Twitch

Video: Transporting ST 2110 Over WAN

Is SMPTE ST 2110 suitable for inter-site connectivity over the WAN? As ST 2110 continues to mature and the first facilities go live, bringing 2110 into daily use, there are still a number of challenges to be overcome, and moving a large number of essence flows long distances, and between PTP time domains, is one of them.

Nevion’s Andy Rayner presents the work the VSF is doing to recommend transport of ST 2110 over the WAN, outlining where they have got to and what has been recommended to date.

The talk starts with SMPTE 2022-7 seamless protection, which is recommended for dealing with path breaks. To compensate for transmission errors, FEC is recommended, and Andy explains the parameters needed.
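
The principle behind 2022-7 protection is simple enough to sketch: send identical RTP packets down two diverse paths and let the receiver keep the first copy of each sequence number to arrive. The toy model below is my own illustration of that idea, not text from the VSF recommendation:

```python
def seamless_merge(arrivals):
    """Toy model of SMPTE ST 2022-7 seamless protection: the same RTP packets
    travel over two diverse paths and the first copy of each sequence number
    to arrive wins, so a break on either path goes unnoticed.
    `arrivals` is an iterable of (path, seq, payload) in arrival order.
    (A real receiver would track a bounded window of sequence numbers.)"""
    delivered = set()
    for path, seq, payload in arrivals:
        if seq not in delivered:
            delivered.add(seq)
            yield seq, payload

# Path A's copy of packet 2 is lost, but path B's copy fills the gap.
arrivals = [("A", 1, b"p1"), ("B", 1, b"p1"),
            ("B", 2, b"p2"),
            ("A", 3, b"p3"), ("B", 3, b"p3")]
print([seq for seq, _ in seamless_merge(arrivals)])   # -> [1, 2, 3]
```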

Key to the inter-site transport is trunking, whereby the individual essences are combined into one flow. This has a number of advantages: reducing the number of flows makes life simpler for service providers, all essences now share the same signal path from site to site, and FEC protection can be applied more efficiently.

The trunks are made using GRE – Generic Routing Encapsulation – which is a pre-existing IT standard for grouping lots of traffic into a single tunnel whilst preserving the data inside. This then appears at the other end of the trunk with the same IP information as if nothing had happened. Andy looks at the extra encapsulation headers needed to make this work and goes on to discuss payload lengths, as we need to keep them short enough not to result in fragmented packets.
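
As a back-of-the-envelope illustration of why those payload lengths matter (my own arithmetic, assuming a 1500-byte Ethernet MTU, IPv4 and a basic 4-byte GRE header; the actual recommendation may use different figures):

```python
# Rough budget for carrying an ST 2110 RTP packet inside a GRE trunk without
# fragmenting the outer packet. Header sizes are assumptions (IPv4, basic GRE
# with no optional fields); the real recommendation may differ.
MTU        = 1500   # typical Ethernet payload size on the WAN
OUTER_IP   = 20     # outer IPv4 header added by the tunnel
GRE_HEADER = 4      # basic GRE header, no key/sequence options
INNER_IP   = 20     # the original IPv4 header of the essence packet
INNER_UDP  = 8
RTP_HEADER = 12

max_inner_packet = MTU - OUTER_IP - GRE_HEADER
max_rtp_payload  = max_inner_packet - INNER_IP - INNER_UDP - RTP_HEADER

print(f"Largest encapsulated packet without fragmentation: {max_inner_packet} bytes")
print(f"Leaving roughly {max_rtp_payload} bytes of RTP payload per packet")
# -> 1476 and 1436 bytes respectively, under these assumptions
```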

Timing, as ever, is important, meaning the recommendation is to align all essences before sending them into the trunk, though Andy looks at alternatives. Also of key concern is compression, as there will be times when uncompressed video simply requires too much bandwidth to be carried over the WAN. JPEG 2000 and, now, JPEG XS are available for this task.

Andy covers timing, discovery, control, security and conversion to and from 2022-6 before finishing the talk by taking questions.

Watch now!

Speaker

Andy Rayner
Chief Technologist,
Nevion