Video: Transforming the Distribution and Economics of Internet Video

Replacing CDNs in streaming would need a fundamental change in the way we store and access video on the internet, but this is just what Eluvio’s technology offers, along with in-built authentication, authorisation and DRM. There’s a lot to unpack about this distributed ‘content fabric’ built on an Ethereum-protocol blockchain.

Fortunately, Eluvio co-founder Michelle Munson is here to explain how this decentralised technology improves on the status quo and show us what it’s being used for. Today’s streaming technology is based on preparing, packaging, transcoding and pushing data out through CDNs to viewers at home. Whilst this works, it doesn’t necessarily deliver consistent, low delay and, as we saw when Netflix and Facebook reduced their streaming bitrates at the beginning of the pandemic, it can be quite a burden on networks.

This content fabric, Michelle explains, is a different approach where video is stored natively over the internet, creating a ‘software substrate’. The result doesn’t use traditional transcoding services, CDNs or databases. Rather, we end up with a decentralised data distribution and storage protocol delivering just-in-time packaging. The content fabric is split into four layers: one deals with metadata, another contains code which controls the transformation and delivery of media, a third ‘contract’ layer controls access and proves the content’s provenance, and a final layer holds the media itself. The contract layer is based on the Ethereum technology which runs the cryptocurrency of the same name. The fabric is a ledger, with content versioned within the ledger history.
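As a rough mental model of those four layers (a conceptual sketch only; the class and field names below are invented for illustration and are not Eluvio’s actual data model or API), you can picture a single content object whose parts are addressed and versioned together:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the four layers described above -- not Eluvio's real
# data structures. All names here (ContentObject, AccessContract) are hypothetical.

@dataclass
class AccessContract:
    """Contract layer: Ethereum-style rules deciding who may view or transform."""
    contract_address: str
    allowed_operations: tuple = ("view",)

@dataclass
class ContentObject:
    metadata: dict                       # metadata layer: titles, rights, technical info
    code: dict                           # code layer: named transforms run just-in-time
    contract: AccessContract             # contract layer: access control and provenance
    media_parts: list = field(default_factory=list)  # media layer: references to raw shards
    version: int = 1

    def new_version(self, **changes) -> "ContentObject":
        """Versioning sketch: an edit creates a new version rather than overwriting,
        mirroring how the fabric keeps content history in its ledger."""
        return ContentObject(
            metadata={**self.metadata, **changes.get("metadata", {})},
            code=self.code,
            contract=self.contract,
            media_parts=changes.get("media_parts", self.media_parts),
            version=self.version + 1,
        )
```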

Michelle points out that with blockchain contracts baked into all the media data, there is inherent access control at every part of the network, so viewers only need an Ethereum-style ‘ticket’ to watch content directly. Their access is view-only and, whilst it passes through the data and code layers, there is no extra infrastructure to build on top of your streaming infrastructure. Each person can even have their own individually-watermarked version, as delivered through Eluvio’s work on MGM’s online premiere of the recent Bill & Ted film.
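A loose sketch of that ‘ticket’ idea is shown below. The token format, signing scheme and function names are hypothetical stand-ins chosen for this example, not Eluvio’s actual mechanism; the point is simply that any node can verify a short-lived, view-only credential without extra infrastructure:

```python
import hashlib
import hmac
import time

SECRET = b"demo-signing-key"  # stand-in for the contract-backed signing authority

def issue_ticket(viewer_id: str, content_id: str, ttl_s: int = 3600) -> str:
    """Mint a short-lived, view-only ticket bound to one viewer and one title."""
    expiry = int(time.time()) + ttl_s
    payload = f"{viewer_id}:{content_id}:view:{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_ticket(ticket: str, content_id: str) -> bool:
    """Any node can verify the ticket locally before serving media."""
    try:
        viewer_id, cid, op, expiry, sig = ticket.rsplit(":", 4)
    except ValueError:
        return False
    payload = f"{viewer_id}:{cid}:{op}:{expiry}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and cid == content_id
            and op == "view"
            and int(expiry) > time.time())

ticket = issue_ticket("viewer-123", "example-premiere")
print(check_ticket(ticket, "example-premiere"))  # True while unexpired
```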

Eluvio currently have a group of globally-deployed hubs in internet exchange sites which operate the fabric. These hold media shards and blobs of code which can operate on the media to provide just-in-time delivery, with the ability to create slices and overlays inherent in the delivery mechanism. When a player wants access to video, it issues a request with its authorisation information; the fabric responds and drives the output. Because of the code layer, the inputs and outputs of the system are industry standard, with manipulation done internally.

Before finishing by talking about the technology’s use within MGM and other customers, Michelle summarises its capabilities: it simplifies workflows and can deliver a consistently low, global time to first byte, with VoD and live workflows interchangeable. Whilst previous distribution protocols have, Michelle asserts, failed at scale, Eluvio’s fabric can scale without the significant burdens of file IO.

Watch now!
Speaker

Michelle Munson
CEO and Founder,
Eluvio

Video: Don’t let latency ruin your longtail: an introduction to “dref MP4” caching

So it turns out that simply having an .mp4 file isn’t enough for a quick start to playback. MP4s work well for streaming, but to achieve very fast start times there’s optimisation work to be done.

Unified Streaming’s Boy van Dijk refers to how MP4s are put together (AKA ISO BMFF) to explain how simply restructuring the data can speed up your time-to-play.

Part of the motivation to optimise is financial: storing media on Amazon’s S3 is relatively cheap and can deal with a decent amount of throughput, but it costs latency. The way to work around this, explains Boy, is to bring the metadata out of the media so you can cache it separately and, if possible, elsewhere. Within the spec is the ability to bring the index information out of the original media and into a separate file via the dref, the Data Reference box.
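To see why splitting the index out is feasible, the sketch below walks the top-level ISO BMFF boxes of an MP4: the ‘moov’ box (which holds the sample tables a packager needs) is small compared with ‘mdat’ (the media itself), so referencing the media from a separate, easily cached file is practical. This is a generic box walker for illustration, not Unified Streaming’s dref tooling:

```python
import struct

def top_level_boxes(path: str):
    """Yield (box_type, size_in_bytes) for each top-level ISO BMFF box in an MP4."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:                      # 64-bit "largesize" follows the header
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            yield box_type.decode("latin-1"), size
            if size == 0:                      # box extends to the end of the file
                break
            f.seek(size - header_len, 1)       # skip the box payload

# Typical result: 'moov' (the index a packager needs) is comparatively tiny,
# 'mdat' (the media) is the bulk of the file -- which is what makes caching
# the index separately from the media worthwhile.
# for box, size in top_level_boxes("movie.mp4"):
#     print(box, size)
```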

Boy explains how working statelessly reduces latency: typically three requests would be needed, but these can be collapsed into just one. Moreover, stateless architectures scale better.

The longtail of your video library benefits most from this technique as it is, by proportion, the largest part of the library but gets the fewest requests. Storing the metadata closer, or in faster storage, can vastly reduce startup times. Dref files point to the media data, allowing a system to bring that data closer; for a just-in-time packaging system, drefs work as a middle-man. The beauty is that the dref for a film of many gigabytes is only a few tens of megabytes.
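A minimal sketch of that middle-man role is below, assuming a hypothetical index format and helper functions (this is not Unified Origin’s implementation): the small index sits in fast local cache while the bulky media stays in object storage, and a segment request turns into a single ranged read of the media.

```python
import json

# Hypothetical sketch: a tiny dref-style index cached locally, pointing into
# media that remains in slower, cheaper object storage such as S3.
LOCAL_INDEX_CACHE: dict[str, dict] = {}   # a few tens of MB per title in fast storage

def load_index(title: str, fetch_from_origin) -> dict:
    """Return the cached index for a title, fetching it only once."""
    if title not in LOCAL_INDEX_CACHE:
        LOCAL_INDEX_CACHE[title] = json.loads(fetch_from_origin(f"{title}.dref.json"))
    return LOCAL_INDEX_CACHE[title]

def serve_segment(title: str, segment: int, fetch_from_origin, fetch_range) -> bytes:
    """Package one segment just in time using the cached index plus a single
    ranged read of the media -- no per-request parsing of the full MP4."""
    index = load_index(title, fetch_from_origin)
    offset, length = index["segments"][segment]   # byte range inside the media file
    return fetch_range(index["media_url"], offset, length)
```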

In different tests, Unified Origin saw reductions from 1160ms to 15ms, 185ms to 13ms and 240ms to 160ms, depending on what exactly was being tested, which Boy explains in more detail in the talk. Overall, they have shown a non-trivial improvement in startup delay.

Watch now!
Download a detailed presentation
Speaker

Boy van Dijk
Streaming Solutions Engineer,
Unified Streaming

Video: Low Latency Live from a Different Vantage Point

Building a low-latency live streaming platform is certainly possible nowadays, but not without its challenges and compromises. Traditionally, HLS-style delivery keeps latency high because chunk sizes are between 5 and 10 seconds. Pushing that down to 2 seconds, generally seen as the minimum viable chunk size, can then cause problems estimating bandwidth and thus break ABR.

Tackling these challenges is a host of technologies such as CMAF, Low-Latency HLS (LHLS) and Apple’s LLHLS, but this talk takes a different approach to deliver streams with only 3-4 seconds of latency.

Michelle Munson from Eluvio explains that, theoretically, you could stream chunks in real time and the delay would simply be the propagation time over the internet. In reality, though, encoding and transcoding delays add up, and the CDN can gradually drift the signal out to 15 seconds behind live. ABR is also tricky when chunks are delivered in a streamed manner because the standard method of determining available bandwidth, measuring the download time of each chunk, breaks when all chunks arrive in real time.
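The bandwidth-estimation problem can be shown with a few lines of arithmetic (a simplified model, not any particular player’s ABR logic): when a chunk is pushed at the live encoding rate, its download time roughly equals its duration, so the naive throughput estimate collapses to the bitrate of the rendition you are already watching and says nothing about available headroom.

```python
def naive_throughput_kbps(chunk_size_kbit: float, download_time_s: float) -> float:
    """Classic ABR estimate: bits received divided by the time taken to receive them."""
    return chunk_size_kbit / download_time_s

# A 2-second chunk of a 3,000 kbps rendition pulled from a buffered CDN edge
# in 0.4s genuinely reflects ~15,000 kbps of available bandwidth:
print(naive_throughput_kbps(2 * 3000, 0.4))   # 15000.0

# The same chunk delivered in real time arrives over ~2s no matter how much
# capacity the network really has, so the estimate just echoes the current
# rendition's bitrate and ABR has nothing to switch up on:
print(naive_throughput_kbps(2 * 3000, 2.0))   # 3000.0
```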

Tackling this, Michelle introduces us to the decentralised fabric which Eluvio have put together. It uses dispersed nodes to hold data, acting in some ways as a CDN, but the trick here is that the nodes work together to share video. Each node can transcode just in time and can also create playlists on demand from the distributed metadata in response to client requests, as sketched below. Being able to bring things together dynamically and on the fly removes a lot of latency pinch points from the system.
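The playlist part can be pictured roughly as follows. This is a simplified, generic HLS-style sketch built from made-up segment metadata, not Eluvio’s actual playlist generation; a real fMP4 playlist would also carry an initialisation-segment tag.

```python
# Generic sketch: build an HLS-style media playlist on demand from segment
# metadata held by the node -- illustrative only, not Eluvio's format.
def build_media_playlist(segments: list[dict], target_duration: int = 2) -> str:
    lines = [
        "#EXTM3U",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{segments[0]['seq']}",
    ]
    for seg in segments:
        lines.append(f"#EXTINF:{seg['duration']:.3f},")
        lines.append(seg["uri"])   # may point at a shard transcoded just in time
    return "\n".join(lines) + "\n"   # live playlist: no #EXT-X-ENDLIST

print(build_media_playlist([
    {"seq": 100, "duration": 2.0, "uri": "seg_100.ts"},
    {"seq": 101, "duration": 2.0, "uri": "seg_101.ts"},
]))
```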

The result is a system which can deliver content from the encoder to the nodes in around 250ms, with a further 50ms or so for distribution. To make ABR easier, the player works one segment behind live so it always has a whole segment to download as quickly as it can, enabling ABR to work normally.
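A sketch of that player-side choice (hypothetical; not Eluvio’s player code): by requesting the most recent completed segment rather than the one still being produced, the whole segment can be fetched as fast as the network allows, so the download-time measurement becomes meaningful again.

```python
def segment_to_request(newest_available: int) -> int:
    """Stay one segment behind the live edge so a complete segment exists and
    can be downloaded at full speed, making throughput measurable again."""
    return max(0, newest_available - 1)

# With 2-second segments this trades roughly one segment of extra latency for
# a working bandwidth estimate -- part of the end-to-end figure quoted below.
```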

Michelle finishes by highlighting the results of testing both over time and at scale. In both scenarios, node load stayed low and even while delivering 3.5 seconds of latency.

Watch now!
Speaker

Michelle Munson
CEO and Founder,
Eluvio

Video: The Future of Online Video

There are few people who should build their own CDN, contends Steve Miller-Jones from Limelight Networks. If you want to send a parcel, you use a parcel delivery service. So if you want to stream video, use a content delivery network tuned for video. This video looks at the benefits of using CDNs.

John Porterfield welcomes Steve to the JP’sChalkTalks YouTube channel, starting with a basic outline of CDNs. Steve explains that the aim of a CDN is to re-deliver the same content as many times as possible by itself, without having to go back to a central store, or even back to the publisher, to get the video chunk that’s been requested. If your player is a few seconds behind someone else’s in the same geography, the CDN should be able to deliver you those same chunks almost instantly from somewhere geographically close to you.
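In essence an edge cache behaves like the toy sketch below (a generic illustration, not Limelight’s software): once one viewer in a region has pulled a chunk through, every nearby viewer asking for the same chunk within its lifetime is served locally rather than from the origin.

```python
import time

class EdgeCache:
    """Toy CDN edge: serve repeated requests for the same chunk locally
    instead of going back to the origin every time."""

    def __init__(self, ttl_s: float = 30.0):
        self.ttl_s = ttl_s
        self.store = {}   # chunk URL -> (time cached, chunk bytes)

    def get(self, chunk_url: str, fetch_from_origin) -> bytes:
        cached = self.store.get(chunk_url)
        if cached and time.time() - cached[0] < self.ttl_s:
            return cached[1]                      # cache hit: no origin trip
        body = fetch_from_origin(chunk_url)       # cache miss: fetch once...
        self.store[chunk_url] = (time.time(), body)
        return body                               # ...then reuse for nearby viewers
```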

Steve explains that, in the Limelight State of Online Video 2020 Annual Report, rebuffering remains the main frustration with streaming services, holding at approximately 44% for the last 3 years when taken as a global average. Contrary to popular belief, the older generation is more tolerant of rebuffering than younger viewers.

As well as maintaining a steady feed, low latency remains important. Limelight is able to deliver CMAF down to around 3 seconds of latency, or WebRTC with sub-second latency. To go along with this sub-second video streaming, Limelight also offer sub-second data sharing between players, which Steve explains is an important feature allowing services to develop interactivity, quizzes, community engagement and many other business cases.

Lastly, Steve outlines the importance of edge computing as a future growth area for CDNs. The first iteration of cloud computing succeeded by taking computing into central locations and away from individual businesses. This worked well for many, for financial reasons, because it freed organisations from managing some aspects of their own infrastructure and enabled services to scale. However, the logic deciding what happens next was always run in that one central place. If you’re in Australia and the cloud location is in the EU, that’s a long wait until you get the result of the decision that needs to be made. Edge computing allows small parts of logic to live in the part of a CDN closest to you. The majority of a service’s infrastructure may well be in the US, but some of the CDN, be it CloudFront, Limelight or another, will be in Australia itself, meaning that pushing as much of your service as you can to the edge results in significant improvements in speed and latency.
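The latency argument is easy to put rough numbers on. The figures below are illustrative assumptions, not measurements from the talk: a round trip from Australia to an EU region is commonly a few hundred milliseconds, while a round trip to an in-country edge node can be a couple of orders of magnitude shorter, and that cost is paid on every request that needs a decision.

```python
# Illustrative round-trip times only -- not figures from the talk.
RTT_MS = {"eu-central-origin": 280, "sydney-edge": 15}

def decision_latency_ms(logic_location: str, round_trips_per_request: int = 1) -> int:
    """Each round trip to wherever the per-request logic lives is paid every time."""
    return RTT_MS[logic_location] * round_trips_per_request

print(decision_latency_ms("eu-central-origin"))  # ~280ms before any media flows
print(decision_latency_ms("sydney-edge"))        # ~15ms when the logic runs at the edge
```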

Watch now!
Speakers

Steve Miller-Jones
VP Strategy & Industry,
Limelight Networks
John Porterfield
Technology Evangelist,
JP’sChalkTalks YouTube Channel