Video: Don’t let latency ruin your longtail: an introduction to “dref MP4” caching

So it turns out that simply having an .mp4 file isn’t enough for a snappy streaming experience. MP4s work well for streaming, but for very fast start times there’s optimisation work to be done.

Unified Streaming’s Boy van Dijk refers to how MP4s are put together (the ISO Base Media File Format, AKA ISO BMFF) to explain how just restructuring the data can speed up your time-to-play.

Part of the motivation to optimise is financial: storing media on Amazon’s S3 is relatively cheap and can deal with a decent amount of throughput, but it costs latency. The way to work around this, explains Boy, is to bring the metadata out of the media so you can cache it separately and, if possible, elsewhere. Within the spec is the ability to bring the index information out of the original media and into a separate file known as the dref, after the Data Reference box.
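To see what is being separated, it helps to look at the ISO BMFF layout itself: an MP4 is a sequence of length-prefixed boxes, with the index metadata (`moov`) sitting apart from the bulky media samples (`mdat`). Below is a minimal, illustrative Python sketch (not Unified Streaming’s implementation) that walks the top-level boxes of a synthetic file:

```python
import struct

def parse_boxes(data):
    """Walk top-level ISO BMFF boxes: each box starts with a 4-byte
    big-endian size (which includes the 8-byte header) followed by a
    4-byte type code such as 'moov' or 'mdat'."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, = struct.unpack(">I", data[offset:offset + 4])
        box_type = data[offset + 4:offset + 8].decode("ascii")
        boxes.append((box_type, size))
        offset += size
    return boxes

def make_box(box_type, payload):
    """Build a box from its 4-byte type and payload."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# A synthetic file: small 'moov' metadata next to a comparatively large
# 'mdat' media box -- the part that can stay in slow, cheap storage.
mp4 = (make_box(b"ftyp", b"isom\x00\x00\x00\x01")
       + make_box(b"moov", b"\x00" * 32)
       + make_box(b"mdat", b"\x00" * 1024))

print(parse_boxes(mp4))  # [('ftyp', 16), ('moov', 40), ('mdat', 1032)]
```

In a real file the `moov` box is a tiny fraction of the total size, which is why caching the index separately from the media pays off.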

Boy explains that by working statelessly we can see why latency is reduced: typically three requests would be needed, but these can be collapsed into a single one. Moreover, stateless architectures scale better.

The longtail of your video library is affected most by this technique as it is, by proportion, the largest part of the library yet gets the fewest requests. Storing the metadata closer, or on faster storage, can vastly reduce startup times. DREF files point to the media data, allowing a system to bring just the index closer. For a just-in-time packaging system, drefs work as a middle-man. The beauty is that a DREF file is only a few tens of megabytes for a film of many gigabytes.

Unified Origin, across different tests, saw reductions of 1160ms to 15ms, 185ms to 13ms and 240ms to 160ms, depending on what exactly was being tested, which Boy explains in more detail in the talk. Overall, they have shown that there’s a non-trivial improvement in startup delay.

Watch now!
Download a detailed presentation
Speaker

Boy van Dijk
Streaming Solutions Engineer,
Unified Streaming

Video: Mobile and Wireless Layer 2 – Satellite/ATSC30/M-ABR/5G/LTE-B

Wireless internet is here to stay and, as it improves, it opens new opportunities for streaming and broadcasting. With SpaceX delivering between 20 and 40ms of latency, we see that even satellite can be relevant for low-latency streaming. Indeed, radio (RF) is the focus of this talk, discussing how 5G, LTE, 4G, ATSC and satellite fit into delivering streaming media to everyone.

LTE-B, in the title of this talk, refers to LTE Broadcast, also known as eMBMS (Evolved Multimedia Broadcast Multicast Services), delivered over LTE technology. Matt Stagg underlines the importance of LTE-B saying “Spectrum is finite and you shouldn’t waste it sending unicast”. Using LTE-B, we can achieve a one-to-many push with orchestration on top. Routers do need to support this and UDP transport, but this is a surmountable challenge.

Matt explains that BT did a trial of LTE-B with the BBC. The major breakthrough was that they could ‘immediately’ deliver the output of an EVS direct to the fans in the stadium. For BT, the problem came with hitting critical mass. Matt makes the point that it’s not just sports; Love Island can get the same viewership. But with no support from Apple, the number of compatible devices isn’t high enough.

“Spectrum is finite and you shouldn’t waste it sending unicast”

Matt Stagg

Turning to the rest of the panel, which includes Synamedia’s Mark Myslinski and Jack Arky from Verizon Wireless, Matt says that, in general, bandwidth capacity to the edges in the UK is not a big issue since there is usually dark fibre, but hosting content at the edge doesn’t hit the spot due to the RAN. 5G has helped us move on beyond that.

Jack from Verizon explains that multi-access edge compute is enabled by the low latency of 5G. We need to move as much as is sensible to the edge to keep the delay down. Later in the video, we hear that XR (extended reality) and AR (augmented reality) are two technologies which will likely depend on cloud computation to achieve the level of graphical accuracy necessary. This will, therefore, require a low-latency connection.

For Mark, the most important technology being rolled out is actually ATSC 3.0. Much discussed at NAB 2015, stability has come to the standard and it’s now in use in South Korea and increasingly in the US. ATSC 3.0, as Mark explains, is a complementary, fully-IP technology to fit alongside 5G. He even talks about how 5G and ATSC could co-exist due to the open way the standards were created.

The session ends with a Q&A.

Watch now!
Speakers

Mark Myslinski
Broadcast Solutions Manager,
Synamedia
Jack Arky
Senior Engineer, Product Development
Verizon Wireless
Matt Stagg
Director, Mobile Strategy
BT Sport
Dom Robinson
Co-Founder, Director and Creative Firestarter
id3as

Video: Low Latency Live from a Different Vantage Point

Building a low-latency live streaming platform is certainly possible nowadays, but not without its challenges and compromises. Traditionally, HLS-style delivery keeps latency high because chunk sizes are between 5 and 10 seconds. Pushing that down to 2 seconds, generally seen as the minimum viable chunk size, can then cause problems estimating bandwidth and thus break ABR.
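The connection between chunk size and latency can be seen with back-of-envelope arithmetic (the figures below are assumed rules of thumb, not from the talk): players commonly buffer around three segments before starting playback, so live latency scales with segment duration.

```python
def approx_live_latency(segment_duration_s, buffered_segments=3):
    """Back-of-envelope live latency in seconds: players commonly hold
    roughly three segments of buffer before playback begins."""
    return segment_duration_s * buffered_segments

print(approx_live_latency(6))  # 18 -> traditional HLS territory
print(approx_live_latency(2))  # 6  -> near the minimum viable chunk size
```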

Tackling these challenges are a host of technologies such as CMAF, Low-Latency HLS (LHLS) and Apple’s LLHLS but this talk takes a different approach to deliver streams with only 3-4 seconds of latency.

Michelle Munson from Eluvio explains that, theoretically, you could stream chunks in real time and the delay would be the propagation time over the internet. In reality, though, encoding and transcoding delays add up, and the CDN can gradually add drift, pushing the signal as much as 15 seconds behind. ABR is tricky when delivering chunks in a streamed manner because the standard method of determining available bandwidth, measuring the download time, breaks when all chunks arrive in real time.
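The measurement problem can be shown with simple arithmetic (hypothetical figures): when a segment is pushed in real time, its download lasts as long as its media duration, so the ‘measured’ throughput collapses to the encode bitrate rather than reflecting the link capacity.

```python
segment_bytes = 2_000_000        # a hypothetical 2 MB segment
segment_duration = 4.0           # seconds of media it contains

# Buffered delivery: the whole segment already sits on the CDN, so the
# download is limited only by link speed.
link_capacity = 20_000_000 / 8   # a 20 Mbit/s link, in bytes/s
buffered_download_time = segment_bytes / link_capacity
buffered_estimate = segment_bytes / buffered_download_time   # ~ link capacity

# Real-time delivery: bytes arrive only as fast as they are produced,
# so the download lasts the full segment duration.
realtime_estimate = segment_bytes / segment_duration         # ~ encode bitrate

print(round(buffered_estimate * 8 / 1e6, 1))  # 20.0 (Mbit/s)
print(round(realtime_estimate * 8 / 1e6, 1))  # 4.0 (Mbit/s)
```

The player sees 4 Mbit/s either way, so it can never discover that the link could sustain a higher rung.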

Tackling this, Michelle introduces the decentralised fabric which Eluvio has put together, which uses dispersed nodes to hold data, acting in some ways as a CDN, but the trick here is that the nodes work together to share video. Each node can transcode just in time and can also create playlists on demand from the distributed metadata in response to client requests. Being able to bring things together dynamically and on the fly removes a lot of latency pinch points from the system.

The result is a system which can deliver content from the encoder to the nodes in around 250ms, then a further 50ms or so for distribution. To make ABR easier, the player works one segment behind live, so it always has a whole segment to download as quickly as it can, thus enabling ABR to work normally.
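Once the player fetches a complete, already-finished segment one step behind the live edge, ordinary throughput-based rate selection works again. A minimal sketch (hypothetical bitrate ladder, not Eluvio’s code):

```python
ladder_bps = [800_000, 2_500_000, 5_000_000]  # assumed ABR rungs in bit/s

def pick_rung(measured_bps, safety=0.8):
    """Ordinary throughput-based ABR: choose the highest rung that fits
    within a safety margin of the measured download rate; fall back to
    the lowest rung if nothing fits."""
    usable = measured_bps * safety
    candidates = [b for b in ladder_bps if b <= usable]
    return max(candidates) if candidates else min(ladder_bps)

print(pick_rung(4_000_000))   # 2500000
print(pick_rung(10_000_000))  # 5000000
```

Because the segment is fully available, the measured rate again reflects link capacity, and the safety margin absorbs short-term variation.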

Michelle finishes by highlighting the results of testing both over time and at scale. The results show that node load stayed low and even in both scenarios, while delivering 3.5 seconds of latency.

Watch now!
Speakers

Michelle Munson
CEO and Founder,
Eluvio

Video: DVB-I. Linear Television with Internet Technologies

Outside of computers, life is rarely binary. There’s no reason for all TV to be received online, like Netflix or iPlayer, or all over-the-air by satellite or DVB-T. In fact, by using a hybrid approach, broadcasters can reach more people and deliver more services than before, including an easier path to higher definition or next-gen pop-up TV channels.

Paul Higgs explains the work DVB have been doing to standardise a way of delivering this promise: linear TV with internet technologies. DVB-I is split into three parts:

1. Service discovery

DVB-I lays out ways to find TV services, including auto-discovery and recommendations. The A177 Bluebook provides a mechanism to find IP-based TV services. Service lists bring together channels and geographic information, whereas service list registries are specified to provide a place to go to discover service lists.

2. Delivery
Internet delivery isn’t a reason for low-quality video. It should be as good or better than traditional methods because, at the end of the day, viewers don’t actually care which medium was used to receive the programmes. Streaming with DVB-I is based on MPEG DASH and defined by DVB-DASH (Bluebook A168). Moreover, DVB-I services can be simulcast so they are co-timed with broadcast channels. Viewers can, therefore, switch between broadcast and internet services.

3. Presentation
Naturally, a plethora of metadata can be delivered alongside the media for use in EPGs and on-screen displays, including logos, banners, programme guide data and content protection information.

Paul explains that this is brought together with three tools: the DVB-I reference client player, which works on Android and HbbTV, DVB-DASH reference streams and a DVB-DASH validator.

Finishing up, Paul adds that network operators can take advantage of the complementary DVB Multicast ABR specification to reduce bitrate into the home. DVB-I will be expanded in 2021 and beyond to include targeted advertising, home re-distribution and delivering video in IP over traditional over-the-air broadcast networks.

Watch now!
Speaker

Paul Higgs
Chairman – TM-I Working Group, DVB Project
Vice President, Video Industry Development, Huawei