Video: Mobile and Wireless Layer 2 – Satellite/ATSC 3.0/M-ABR/5G/LTE-B

Wireless internet is here to stay and, as it improves, it opens new opportunities for streaming and broadcasting. With SpaceX delivering between 20 and 40ms of latency, we see that even satellite can be relevant for low-latency streaming. Indeed, radio (RF) is the focus of this talk, which discusses how 5G, LTE, 4G, ATSC 3.0 and satellite fit into delivering streaming media to everyone.

LTE-B, in the title of this talk, refers to LTE Broadcast, also known as eMBMS (Evolved Multimedia Broadcast Multicast Services), delivered over LTE technology. Matt Stagg underlines the importance of LTE-B, saying “Spectrum is finite and you shouldn’t waste it sending unicast”. Using LTE-B, we can achieve a one-to-many push with orchestration on top. Routers do need to support this and UDP transport, but this is a surmountable challenge.
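To picture the one-to-many principle, here’s a minimal sketch of pushing a media segment to a multicast group over UDP. It’s only an analogy: eMBMS does its multicasting inside the LTE radio network rather than over plain IP multicast, and the group address, port and file name below are arbitrary example values.

```python
import socket

# Illustrative only: one sender, many receivers, no per-viewer unicast.
# Group address, port and segment name are arbitrary examples.
MCAST_GROUP = "239.1.2.3"
MCAST_PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

with open("segment.ts", "rb") as f:
    # 1316 bytes = 7 x 188-byte MPEG-TS packets per datagram
    while chunk := f.read(1316):
        sock.sendto(chunk, (MCAST_GROUP, MCAST_PORT))
```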

Matt explains that BT ran a trial of LTE-B with the BBC. The major breakthrough was that they could ‘immediately’ deliver the output of an EVS direct to the fans in the stadium. For BT, the problem came with hitting critical mass. Matt makes the point that it’s not just sports; Love Island can get the same viewership. But with no support from Apple, the number of compatible devices isn’t high enough.

“Spectrum is finite and you shouldn’t waste it sending unicast”

Matt Stagg

Turning to the panel, which includes Synamedia’s Mark Myslinski and Jack Arky from Verizon Wireless, Matt says that, in general, bandwidth capacity to the edge in the UK is not a big issue since there is usually dark fibre, but hosting content at the edge alone doesn’t hit the spot because of the RAN. 5G has helped us move on beyond that.

Jack from Verizon explains that multi-access edge compute (MEC) is enabled by the low latency of 5G. We need to move as much as is sensible to the edge to keep the delay down. Later in the video, we hear that XR (extended reality) and AR (augmented reality) are two technologies which will likely depend on cloud computation to get the level of graphical accuracy necessary. This will, therefore, require a low-latency connection.

For Mark, the most important technology being rolled out is actually ATSC 3.0. Much discussed since NAB 2015, stability has come to the standard and it’s now in use in South Korea and increasingly in the US. ATSC 3.0, as Mark explains, is a complementary, fully-IP technology that fits alongside 5G. He even talks about how 5G and ATSC 3.0 could co-exist thanks to the open way the standards were created.

The session ends with a Q&A.

Watch now!
Speakers

Mark Myslinski
Broadcast Solutions Manager,
Synamedia
Jack Arky
Senior Engineer, Product Development
Verizon Wireless
Matt Stagg
Director, Mobile Strategy
BT Sport
Dom Robinson
Co-Founder, Director and Creative Firestarter
id3as

Video: DVB-I. Linear Television with Internet Technologies

Outside of computers, life is rarely binary. There’s no reason for all TV to be received online, like Netflix or iPlayer, or all over-the-air by satellite or DVB-T. In fact, by using a hybrid approach, broadcasters can reach more people and deliver more services than before including securing an easier path to higher definition or next-gen pop-up TV channels.

Paul Higgs explains the work DVB have been doing to standardise a way of delivering this promise: linear TV with internet technologies. DVB-I is split into three parts:

1. Service discovery

DVB-I lays out ways to find TV services including auto-discovery and recommendations. The A177 Bluebook provides a mechanism to find IP-based TV services. Service lists bring together channels and geographic information, whereas service list registries are specified to provide a place to go to discover service lists.
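As a rough illustration of service discovery, the sketch below fetches and walks a service list. The registry URL is hypothetical and the element names are simplified; the authoritative schema is defined in the A177 Bluebook.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical service list location; a real client would discover this
# via a service list registry.
SERVICE_LIST_URL = "https://example.com/dvb-i/servicelist.xml"

with urllib.request.urlopen(SERVICE_LIST_URL) as resp:
    root = ET.fromstring(resp.read())

# Strip XML namespaces for brevity; a production client should honour them.
for el in root.iter():
    el.tag = el.tag.split("}")[-1]

# Tag names simplified from the A177 schema for illustration.
for service in root.iter("Service"):
    print(service.findtext("ServiceName"))
```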

2. Delivery
Internet delivery isn’t a reason for low-quality video. It should be as good as, or better than, traditional methods because, at the end of the day, viewers don’t actually care which medium was used to receive the programmes. Streaming with DVB-I is based on MPEG DASH and defined by DVB-DASH (Bluebook A168). Moreover, DVB-I services can be simulcast so they are co-timed with broadcast channels. Viewers can, therefore, switch between broadcast and internet services.
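To make the hybrid idea concrete, here’s a rough sketch of how a client might choose between a broadcast service instance and its co-timed DASH simulcast. The service model and priority scheme are simplified illustrations, not the real A177 data model.

```python
from dataclasses import dataclass

# Simplified stand-in for a DVB-I service instance; fields are illustrative.
@dataclass
class ServiceInstance:
    delivery: str   # "dvb-t", "dvb-s" or "dash"
    uri: str
    priority: int   # lower value preferred

def pick_instance(instances, broadcast_available):
    # Fall back to the co-timed DASH simulcast when broadcast is unusable.
    usable = [i for i in instances
              if broadcast_available or i.delivery == "dash"]
    return min(usable, key=lambda i: i.priority)

instances = [
    ServiceInstance("dvb-t", "dvb://ch1", priority=1),
    ServiceInstance("dash", "https://cdn.example.com/ch1.mpd", priority=2),
]
print(pick_instance(instances, broadcast_available=False).uri)
```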

3. Presentation
Naturally, a plethora of metadata can be delivered alongside the media for use in EPGs and on-screen displays, including logos, banners, programme guide data and content protection information.

Paul explains that this is brought together with three tools: the DVB-I reference client player, which works on Android and HbbTV, DVB-DASH reference streams and a DVB-DASH validator.

Finishing up, Paul adds that network operators can take advantage of the complementary DVB Multicast ABR specification to reduce the bitrate into the home. DVB-I will be expanded in 2021 and beyond to include targeted advertising, home re-distribution and delivering IP video over traditional over-the-air broadcast networks.

Watch now!
Speaker

Paul Higgs
Chairman – TM-I Working Group, DVB Project
Vice President, Video Industry Development, Huawei

Video: Cloud Encoding – Overview & Best Practices

There are so many ways to work in the cloud. You can use a monolithic solution which does everything for you but which, by its very nature, is almost guaranteed to under-deliver on features in one way or another for any non-trivial workflow. Or you could pick best-of-breed functional elements and plumb them together yourself. With the former, you have a fast time to market and built-in simplicity along with some known limitations. With the latter, you may have exactly what you need, to the standard you wanted, but there’s a lot of work to implement and test the system.

Tom Kuppinen from Bitmovin joins Christopher Olekas from SSIMWAVE, the host of this Kitchener-Waterloo Video Tech talk on cloud encoding. After the initial introduction to ‘middle-aged’ startup Bitmovin, Tom talks about what ‘agility in the cloud’ means, including being cloud-agnostic. This is the as-yet-unmentioned elephant in the room for broadcasters, who are so used to having extreme redundancy. Whether it’s the BBC’s “no closer than 70m” requirement for separation of circuits or the standard deployment methodology for systems using SMPTE’s ST 2110, which have two totally independent networks, putting everything into one cloud provider really isn’t in the same ballpark. AWS has availability zones, of course, which are one of a number of great ways of reducing the blast radius of problems. But surely there’s no better way of reducing the impact of an AWS problem than having part of your infrastructure in another cloud provider.

Bitmovin have implementations in Azure, Google Cloud and AWS along with other cloud providers. In this author’s opinion, it’s a sign of the maturity of the market that this is being thought about, but few companies are truly using multiple cloud providers in an agnostic way; this will surely change over the next 5 years. For reliable and repeatable deployments, API control is your best bet. For detailed monitoring, you will need to use APIs. For connecting together solutions from different vendors, you’ll need APIs. It’s no surprise that Bitmovin say they program ‘API first’; it’s a really important element of any medium-to-large deployment.
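As a flavour of what API-first control looks like, here’s a hypothetical create-then-poll workflow. The endpoint, payload and field names are invented for illustration and are not Bitmovin’s actual API; only the general shape is typical of API-driven encoders.

```python
import requests

# Invented endpoints and payload, purely to show API-driven job control.
API = "https://api.example-encoder.com/v1"
HEADERS = {"X-Api-Key": "YOUR_KEY"}

# Submit an encoding job...
job = requests.post(f"{API}/encodings", headers=HEADERS, json={
    "input": "s3://my-bucket/mezzanine.mp4",
    "codec": "h264",
    "ladder": [{"height": 1080, "bitrate_kbps": 4500},
               {"height": 720, "bitrate_kbps": 2500}],
}).json()

# ...then poll its state rather than clicking around a UI.
status = requests.get(f"{API}/encodings/{job['id']}", headers=HEADERS).json()
print(status["state"])
```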

When it comes to the encoding itself, per-title encoding helps reduce bitrates and storage. Tom explains how it analyses each video and chooses the best combination of parameters for the title. In the Q&A, Tom confirms they are working on implementing per-scene encoding, which promises still more savings.
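For a feel of what per-title analysis involves, here’s a minimal sketch that trial-encodes a source at a few bitrates and keeps the cheapest rung that clears a target VMAF score. It assumes an ffmpeg build with libvmaf and is far simpler than any production implementation.

```python
import re
import subprocess

SOURCE = "source.mp4"   # example mezzanine file
TARGET_VMAF = 93.0      # arbitrary quality target

def encode(bitrate_kbps, out):
    # Trial encode at a single bitrate; audio dropped for simplicity.
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-an", "-c:v", "libx264",
                    "-b:v", f"{bitrate_kbps}k", out], check=True)

def vmaf(distorted):
    # libvmaf treats the first input as distorted, the second as reference.
    r = subprocess.run(["ffmpeg", "-i", distorted, "-i", SOURCE,
                        "-lavfi", "libvmaf", "-f", "null", "-"],
                       capture_output=True, text=True)
    return float(re.search(r"VMAF score: ([\d.]+)", r.stderr).group(1))

for kbps in (1500, 2500, 4000, 6000):
    out = f"trial_{kbps}.mp4"
    encode(kbps, out)
    if vmaf(out) >= TARGET_VMAF:
        print(f"{kbps} kbps is the cheapest trial rung hitting VMAF {TARGET_VMAF}")
        break
```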

To add to the complexity of a best-of-breed encoding solution, using best-of-breed codecs is part and parcel of the value. Bitmovin were early with AV1 and they support VP9 and HEVC. They can also distribute the encoding so that it’s done in parallel by as many cores as needed; their initial AV1 offering spread each encode over more than 200 cores.
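The sketch below illustrates the chunked-parallel idea in a simplified way: split the mezzanine into segments, encode each independently, and stitch afterwards. Keyframe alignment and the final concatenation, which a real system must handle carefully, are glossed over.

```python
import glob
import subprocess
from concurrent.futures import ProcessPoolExecutor

def split(source, seconds=10):
    # Stream-copy the source into fixed-length segments.
    subprocess.run(["ffmpeg", "-y", "-i", source, "-c", "copy",
                    "-f", "segment", "-segment_time", str(seconds),
                    "chunk_%03d.mp4"], check=True)

def encode_chunk(chunk):
    # Each chunk is an independent AV1 encode, so chunks run in parallel.
    out = chunk.replace("chunk", "av1")
    subprocess.run(["ffmpeg", "-y", "-i", chunk, "-c:v", "libaom-av1",
                    "-crf", "30", "-b:v", "0", out], check=True)
    return out

if __name__ == "__main__":
    split("mezzanine.mp4")
    with ProcessPoolExecutor() as pool:
        encoded = list(pool.map(encode_chunk, sorted(glob.glob("chunk_*.mp4"))))
    print(encoded)  # a real pipeline would now concatenate these
```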

Tom talks about how the cloud-based codecs can integrate into workflows and reveals that HDR conversion, instance pre-warming, advanced subtitling support and AV1 improvements are on the roadmap, which leads on to the Q&A. Questions include whether it’s difficult to deploy on multiple clouds, which HDR standards are likely to become the favourites, what the pain points of live streaming are and how to handle metadata.

Watch now!
Speakers

Tom Kuppinen
Senior Sales Engineer,
Bitmovin
Moderator: Christopher Olekas
Senior Software Engineer,
SSIMWAVE Inc.

Video: Machine Learning for Per-title Encoding

AI continues its march into streaming with this new approach to optimising encoder settings to keep the bitrate down and improve quality for viewers. Under its more appropriate name, ‘machine learning’, computers learn how to characterise video and so avoid hundreds of test encodes whilst still determining the best way to encode video assets.

Daniel Silhavy from Fraunhofer FOKUS takes the stand at Mile High Video 2020 to detail the latest techniques in per-title and per-scene encoding. Daniel starts by outlining the problem with fixed ABR ladders: efficiencies come from being flexible both with resolution and with bitrate, which a fixed ladder rules out.

Netflix were the best-known pioneers of the per-title encoding idea where, for each video asset, many, many encodes are done to determine the best overall bitrates to choose. This is great because it allows animation to be treated differently from action films or sports, and efficiency is gained.

However, per-title delivers an average benefit. There are still parts of the video which are simple and could take a reduced bitrate, and parts whose complexity isn’t accounted for. When the bitrate is higher than necessary to achieve a certain VMAF score, Daniel calls this ‘wasted quality’: bitrate was spent making the quality better than it needed to be. Whilst better quality sounds like a boon, it’s not always possible for viewers to see it, hence targeting a lower VMAF.
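A toy calculation makes ‘wasted quality’ concrete: any rung scoring above the target has spent bitrate on quality viewers were never going to notice. The rung numbers below are invented.

```python
# Toy illustration of 'wasted quality': VMAF delivered above the target.
TARGET_VMAF = 93.0

rungs = [(2000, 88.1), (3500, 93.4), (5000, 96.2)]  # (kbps, VMAF), invented

for kbps, score in rungs:
    if score > TARGET_VMAF:
        print(f"{kbps} kbps overshoots the target by "
              f"{score - TARGET_VMAF:.1f} VMAF points")
```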

Naturally, rather than varying the resolution mix and bitrate for each file, it would be better to do it for each scene. Working this way, variations in complexity can be quickly accounted for. This can also be done without machine learning, but more encodes are needed. The rest of the talk looks at using machine learning to take a short-cut through some of that complexity.
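One way to find the scene boundaries at which parameters could change is ffmpeg’s scene-change detection; the sketch below is a simple take on that idea, with an arbitrary 0.4 threshold.

```python
import re
import subprocess

# Detect scene cuts with ffmpeg's select filter; showinfo logs each
# selected frame's timestamp to stderr. Threshold 0.4 is arbitrary.
r = subprocess.run(
    ["ffmpeg", "-i", "source.mp4", "-vf",
     "select='gt(scene,0.4)',showinfo", "-f", "null", "-"],
    capture_output=True, text=True)

cuts = [float(t) for t in re.findall(r"pts_time:([\d.]+)", r.stderr)]
print("scene boundaries (s):", cuts)
```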

The standard workflow is to perform a complexity analysis on the video, working out a VMAF score at various bitrate and resolution combinations. This produces a ‘convex hull estimation’, allowing determination of the best parameters, which then feed into the production encoding stage.
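The sketch below shows the selection step in miniature, using a Pareto-front simplification of the convex hull: from trial (bitrate, VMAF, resolution) points, keep only those that no other point beats on both bitrate and quality. All scores are invented.

```python
# Invented trial-encode results: (bitrate_kbps, vmaf, resolution)
points = [
    (1000, 78.0, "540p"), (1500, 85.0, "540p"),
    (1500, 83.0, "720p"), (2500, 91.0, "720p"),
    (2500, 89.5, "1080p"), (4500, 95.5, "1080p"),
]

def pareto(points):
    # Keep a point unless another point is at least as cheap and better.
    keep = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] > p[1] for q in points if q != p):
            keep.append(p)
    return sorted(keep)

for kbps, score, res in pareto(points):
    print(f"{res} @ {kbps} kbps -> VMAF {score}")
```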

Machine learning can replace the section which predicts the best bitrate-resolution pairs. Fed with some details of the content’s complexity, it can avoid multiple encodes and deliver a list of parameters to the encoding stage. Moreover, it can also receive feedback from the player, allowing further optimisation of this prediction module.
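As a sketch of what such a prediction module could look like, the code below trains a regressor to map cheap complexity features to VMAF, so candidate bitrate-resolution pairs can be scored without trial encodes. The features, model choice and synthetic data are assumptions for illustration, not Fraunhofer’s published design.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training rows:
# [spatial_complexity, temporal_complexity, log_bitrate, height] -> VMAF
X = rng.random((500, 4))
y = 60 + 35 * X[:, 2] - 20 * X[:, 1] + rng.normal(0, 2, 500)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# Score candidate bitrate-resolution pairs for a new title, then pick the
# hull from the predictions instead of running real encodes.
candidates = rng.random((6, 4))
print(model.predict(candidates))
```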

Daniel shows a demo of this working, where we see that the end result has fewer rungs on the ABR ladder, a lower-resolution top rung and fewer resolutions in general, some repeated at different bitrates. This is in common with the findings of Facebook, which we covered last week, who found that if they removed their ‘one bitrate per resolution’ rule they could improve viewers’ experience. In total, for an example Fraunhofer received from a customer, they saw a 53% reduction in the storage needed.

Watch now!
Download the slides
Speakers

Daniel Silhavy
Scientist & Project Manager,
Fraunhofer FOKUS