Video: Transforming the Distribution and Economics of Internet Video

Replacing CDNs in streaming would need a fundamental change in the way we store and access video on the internet, but this is just what Eluvio’s technology offers along with in-built authentication, authorisation and DRM. There’s a lot to unpack about this distributed ‘content fabric’ built on an Ethereum-protocol blockchain.

Fortunately, Eluvio co-founder Michelle Munson is here to explain how this decentralised technology improves on the status quo and to show us what it’s being used for. Today’s streaming technology is based on preparing, packaging and transcoding video, then pushing it out through CDNs to viewers at home. Whilst this works, it doesn’t necessarily deliver consistent, low delay and, as we saw when Netflix and Facebook reduced their streaming bitrates at the beginning of the pandemic, it can be quite a burden on networks.

This content fabric, Michelle explains, is a different approach where video is stored natively over the internet, creating a ‘software substrate’. The result doesn’t use traditional transcoding services, CDNs or databases. Rather, we end up with a decentralised data distribution and storage protocol delivering just-in-time packaging. The content fabric is split into four layers: one deals with metadata, another contains code which controls the transformation and delivery of media, the third is the ‘contract’ layer which controls access and proves content, and the final layer holds the media itself. The contract layer is based on the Ethereum technology which runs the cryptocurrency of the same name. The fabric is a ledger, with the content being versioned within the ledger history.
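
To make the four-layer split concrete, here is a toy Python model of a content object. This is purely illustrative and not Eluvio’s actual data structures, but it shows how metadata, code, contract and media parts could travel together, with each commit appended to a ledger rather than overwriting history.

```python
# Illustrative only: a toy model of the four fabric layers described above.
from dataclasses import dataclass, field

@dataclass
class ContentVersion:
    metadata: dict      # searchable descriptive data
    code: bytes         # instructions run just-in-time to transform/deliver media
    contract: str       # address of the Ethereum-style access contract
    media_parts: list   # hashes of the raw media shards

@dataclass
class ContentObject:
    ledger: list = field(default_factory=list)  # append-only version history

    def commit(self, version: ContentVersion) -> int:
        """Append a new version; earlier versions remain in the ledger."""
        self.ledger.append(version)
        return len(self.ledger) - 1  # the version's position in the history
```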

Michelle points out that with blockchain contracts baked into all the media data, there is inherently access control at all parts of the network. Viewers only need an Ethereum-style ‘ticket’ to watch content directly, and their access is view-only. Because this authorisation passes through the data and code layers, there is no extra infrastructure to build on top of your streaming infrastructure, and each person can have their own individually-watermarked version, as demonstrated by Eluvio’s work on MGM’s online premiere of the recent Bill & Ted film.
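
The ‘ticket’ idea can be sketched in a few lines. This is not Eluvio’s API; it is a generic signed-token pattern (issue_ticket and authorise are invented names) showing how a short-lived, view-only grant can be checked anywhere in the network without extra infrastructure.

```python
# A minimal sketch of 'ticket'-gated playback, not Eluvio's actual API.
import base64, hashlib, hmac, json, time

SECRET = b"issuer-signing-key"  # held by whoever grants access

def issue_ticket(content_id: str, viewer: str, ttl_s: int = 3600) -> str:
    """Sign a short-lived ticket naming the content and the viewer."""
    claims = {"cid": content_id, "sub": viewer, "exp": int(time.time()) + ttl_s}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def authorise(ticket: str, content_id: str) -> bool:
    """Any node holding the key can check the ticket before serving media."""
    payload, sig = ticket.encode().split(b".")
    expected = base64.urlsafe_b64encode(hmac.new(SECRET, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False  # tampered with, or not issued by us
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["cid"] == content_id and claims["exp"] > time.time()
```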

Eluvio currently have a group of globally-deployed hubs in internet exchange sites which operate the fabric. These contain media shards and blobs of code which can operate on the media to provide just-in-time delivery as necessary, with the ability to create slices and overlays inherent in the delivery mechanism. When a player wants access to video, it issues a request with its authorisation information; the fabric responds, with the code layer driving the output. Because of this layer of code, the inputs and outputs of the system are industry standard, with all manipulation done internally.
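
As a rough sketch of the just-in-time idea, and not Eluvio’s implementation, the pattern is ‘package on first request, cache thereafter’; make_segment below is a hypothetical stand-in for the real transcode/package step.

```python
# Just-in-time packaging pattern: render a segment only when first asked.
from functools import lru_cache

@lru_cache(maxsize=1024)
def serve_segment(content_id: str, rendition: str, index: int) -> bytes:
    # Runs only on a cache miss; repeat requests for the same
    # (content, rendition, segment) tuple are answered from memory.
    return make_segment(content_id, rendition, index)

def make_segment(content_id: str, rendition: str, index: int) -> bytes:
    """Hypothetical stand-in: fetch media shards and package one segment."""
    return f"{content_id}/{rendition}/seg{index}".encode()  # placeholder bytes
```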

Before finishing by talking about the technology’s use within MGM and by other customers, Michelle summarises the capabilities: the fabric simplifies workflows and can deliver a consistently low, global time to first byte, with VoD and live workflows interchangeable. Whilst Michelle asserts that previous distribution protocols have failed at scale, Eluvio’s fabric can scale without the significant burdens of file I/O.

Watch now!
Speaker

Michelle Munson
CEO and Founder,
Eluvio

Video: The Future of Online Video

There are few people who should build their own CDN, contends Steve Miller-Jones from Limelight Networks. If you want to send a parcel, you use a parcel delivery service. So if you want to stream video, use a content delivery network tuned for video. This video looks at the benefits of using CDNs.

John Porterfield welcomes Steve to the YouTube channel JP’sChalkTalks, starting with a basic outline of CDNs. Steve explains that the aim of a CDN is to re-deliver the same content as many times as possible by itself, without having to go back to a central store, or even back to the publisher, to get the video chunk that’s been requested. If your player is a few seconds behind that of someone else in the same geography, the CDN should be able to deliver you those same chunks almost instantly from somewhere geographically close to you.
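
A minimal sketch of that behaviour follows; the origin URL and in-memory cache are invented for illustration. The edge fills its cache from the origin once, then answers every later request for the same chunk locally.

```python
# Toy edge cache: serve from local cache on a hit, fill from origin on a miss.
import urllib.request

CACHE: dict[str, bytes] = {}
ORIGIN = "https://origin.example.com"  # hypothetical origin server

def get_chunk(path: str) -> bytes:
    if path in CACHE:          # hit: re-delivered locally,
        return CACHE[path]     # no origin traffic at all
    with urllib.request.urlopen(ORIGIN + path) as resp:  # miss: cache fill
        CACHE[path] = resp.read()
    return CACHE[path]

# Two nearby viewers a few seconds apart request the same chunk;
# only the first request ever reaches the origin:
# get_chunk("/live/chunk_001.m4s"); get_chunk("/live/chunk_001.m4s")
```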

Steve explains that in the Limelight State of Online Video 2020 Annual Report, rebuffering remains the main frustration with streaming services, holding at approximately 44% for the last three years when taken as a global average. Contrary to popular belief, the older generation is more tolerant of rebuffering than younger viewers.

As well as maintaining a steady feed, low latency remains important. Limelight is able to deliver CMAF down to around 3 seconds of latency, or WebRTC with sub-second latency. To go along with this sub-second video streaming, Limelight also offer sub-second data sharing between players which, Steve explains, is an important feature allowing services to develop interactivity, quizzes, community engagement and many other business cases.
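
Limelight’s real-time data sharing is its own product, but the underlying fan-out pattern can be sketched with the third-party websockets package: every message one player sends, a quiz answer say, is pushed straight to all other connected players.

```python
# Toy fan-out server, not Limelight's API. Requires 'pip install websockets'.
import asyncio
import websockets

PLAYERS = set()

async def handle(ws):
    PLAYERS.add(ws)
    try:
        async for message in ws:
            # Relay to every other connected player in one step
            websockets.broadcast(PLAYERS - {ws}, message)
    finally:
        PLAYERS.remove(ws)

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()  # serve forever

# asyncio.run(main())
```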

Lastly, Steve outlines the importance of edge computing as a future growth area for CDNs. The first iteration of cloud computing succeeded by taking computing into central locations and away from individual businesses. This worked well for many organisations for financial reasons, because it freed them from managing some aspects of their own infrastructure and enabled scaling of services. However, the logic of what happened next was always run in this one central place. If you’re in Australia and the cloud location is in the EU, then that’s a long wait until you get the result of the decision that needs to be made. Edge computing allows small parts of logic to live in the part of a CDN closest to you. This could well mean that the majority of a service’s infrastructure is in the US, but part of the CDN, be it CloudFront, Limelight or another, will be in Australia itself. Pushing as much of your service as you can to the edge will therefore result in significant improvements in speed and reductions in latency.
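
As an illustration of the kind of small decision worth moving to the edge (all names and values here are invented), a handler running in an Australian PoP can answer a per-request choice locally instead of waiting on a round trip to a distant region.

```python
# Invented example of request logic that benefits from running at the edge.
REGION_LADDERS = {
    "AU": ["1080p", "720p", "480p"],   # tuned for local access networks
    "EU": ["2160p", "1080p", "720p"],
}

def edge_handler(request: dict) -> dict:
    """Runs in the PoP closest to the viewer, e.g. in Australia."""
    region = request.get("geo", "EU")
    ladder = REGION_LADDERS.get(region, REGION_LADDERS["EU"])
    return {"status": 200, "renditions": ladder}

print(edge_handler({"geo": "AU"}))  # answered locally, no trip to the EU
```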

Watch now!
Speakers

Steve Miller-Jones
VP Strategy & Industry,
Limelight Networks
John Porterfield
Technology Evangelist,
JP’sChalkTalks YouTube Channel

Video: CDNs: Building a Better Video Experience

With European CDN spend estimated to reach $7bn by 2023, an increase of $1.2bn in only three years, it’s clear there is no relenting in the march towards IP. In fact, that’s a guiding principle of the BBC’s transmission strategy, as we hear from this panel which brings together three broadcasters, beIN, Globo and the BBC, to discuss how they’re using CDNs at the moment and their priorities for the future.

Carlos Octavio introduces Globo’s massive scale of programming for Brazil and Latin America. Producing 26,000 hours of content annually, they aim to differentiate themselves as much with the technology of their offerings as with the content, and this thirst for differentiation drives their CDN strategy. Brazil is a massive country, so covering the footprint is hard. Octavio explains that they have created their own CDN to support Globo Play, based on four tiers running from their two super PoPs in Rio and São Paulo down to edge caches. He shows that they are able to achieve the same response times as the major CDN companies in the region. For overflow capacity, Globo uses a multi-CDN approach.

Bhavesh Patel talks about the sports and news output of beIN, both of which are bursty in nature. Whilst traffic for sporting events can be forecast, with news this is often not possible. This, plus the wide variability of customers’ home bandwidth, are drivers in choosing which CDNs to partner with. Over the next twelve months, Bhavesh explains, beIN’s focus will move to bringing down latency on their system as a whole, not on a service-by-service level. They also expect to continue modifying their ABR ladders to follow viewers as they shift from second screens to 60-inch TVs.

The BBC’s approach to distribution is explained by Paul Tweedy. Whilst the BBC is still well known as a linear, public broadcaster, it has been using online distribution for 25 years and continues to innovate in that space. Two important aspects of their strategy are being on as many devices as practical and ensuring the quality of the online experience is comparable to the linear services. The BBC has been using multiple CDNs for many years now; what changes is the balance and what they use the CDNs for. They cover a lot of sports, explains Paul, which leads to short-term scaling difficulties, but long-term scaling difficulties are equally on his mind due to what the BBC calls the ‘glide path to IP’. This is the acknowledgement that, at some point, it won’t be financially viable to run transmitters and IP will be the wise way to use the licence fee on which the BBC depends. Doing this, clearly, will demand IP delivery at many times the scale currently being used. Multicast ABR, covered in yesterday’s article, is one way in which this may be mitigated and fits into a multi-CDN strategy.

Looking at today’s streaming services, Paul and his colleagues aim to get analytics from every player on every device wherever possible. Big data techniques are used to understand these logs along with server-side, client-to-edge and edge-to-origin logs. This information, along with sports schedules, feeds capacity planning, though many news events are much harder to plan for. It’s these unplanned, high-peak events which drive the BBC’s build-up of internal monitoring tools to help them understand what is working well under load and what’s starting to feel the strain, so they can take action to ensure quality is maintained even through these times of intense interest. The BBC manage their baseline capacity with their own CDN, called BIDI, which allows an easier-to-forecast budget. Multiple third-party CDNs are then the key to providing the variable and peak capacities needed.
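
The capacity planning that falls out of those logs can be back-of-envelope simple; the figures below are invented, but they show the shape of the calculation behind splitting a forecast peak between a baseline in-house CDN and third-party burst capacity.

```python
# Back-of-envelope capacity planning. Every figure is invented for illustration.
concurrent_viewers = 2_000_000   # forecast peak for a big sports event
avg_bitrate_mbps   = 5.0         # average rendition actually delivered
baseline_share     = 0.4         # fraction carried by the in-house CDN

peak_gbps = concurrent_viewers * avg_bitrate_mbps / 1000
print(f"Forecast peak:     {peak_gbps:,.0f} Gbps")                          # 10,000
print(f"In-house baseline: {peak_gbps * baseline_share:,.0f} Gbps")         #  4,000
print(f"Third-party burst: {peak_gbps * (1 - baseline_share):,.0f} Gbps")   #  6,000
```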

As we head into the Q&A, Limelight’s Steve Miller-Jones outlines the company’s strengths, including their focus on adding capabilities on top of a ‘typical’ CDN: for instance, running applications on the CDN, which is particularly useful as part of edge compute, and their ability to run WebRTC at scale, which not many CDNs are built to do. The Q&A sees the broadcasters outlining what they particularly look for in a CDN and how they leverage AI. Globo anticipate using AI to help them predict traffic demand, beIN see it providing automated highlights, whilst the BBC see it enabling easier access to their deep archives.

Watch now!
Free registration
Speakers

Carlos Octavio
Head of Architecture and Analytics,
Globo
Bhavesh Patel
Global Digital Director,
beIN MEDIA GROUP
Paul Tweedy
Lead Architect, Online Technology Group,
BBC Design + Engineering
Steve Miller-Jones
Vice President of Product Strategy,
Limelight Networks

Video: Making a case for DVB-MABR

Multicast ABR (mABR) is a way of delivering traditional HTTP-based streams like HLS and DASH over multicast. On a managed telco network, the services are multicast to thousands of homes and only within the home itself does the stream get converted back to unicast HTTP. Devices in the home then access streaming services in exactly the same way as they would Netflix or iPlayer over the internet, but the content is served locally. Streaming is a point-to-point service so each device takes its own stream: if you have three devices in the home watching a service, you’ll be sending three streams out to them. With mABR, the core network only ever sees one stream to the home and the scaling to individual devices is done internally. Not only does this help remove peaks in traffic, it significantly reduces the load on the upstream networks and origin servers and smooths out bandwidth use.
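
A stripped-down gateway illustrates the idea. This sketch invents its own framing and addresses (real DVB-MABR deployments use ROUTE/FLUTE-style object delivery, not raw UDP payloads): one thread joins the multicast group and caches segments, while an ordinary HTTP server hands them to however many devices ask.

```python
# Toy home mABR gateway: one multicast stream in, unicast HTTP out.
import socket, struct, threading
from http.server import BaseHTTPRequestHandler, HTTPServer

SEGMENTS: dict[str, bytes] = {}  # segment name -> payload, filled from multicast

def receive_multicast(group: str = "239.1.1.1", port: int = 5000) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(65535)
        name, _, payload = data.partition(b"\n")  # invented toy framing
        SEGMENTS[name.decode()] = payload

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):  # every device in the home fetches over plain HTTP
        seg = SEGMENTS.get(self.path.lstrip("/"))
        self.send_response(200 if seg else 404)
        self.end_headers()
        if seg:
            self.wfile.write(seg)

# threading.Thread(target=receive_multicast, daemon=True).start()
# HTTPServer(("", 8080), Gateway).serve_forever()
```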

This video from DVB lays out the business cases enabled by mABR. DVB has approved the mABR specification, which is now going for standardisation within ETSI. It’s already gained some traction with deployments in the field, so this talk looks at what the projects that drive the continued growth of mABR may look like.

Williams Tovar starts by making the case for OTT over satellite. With OTT services continuing to take viewing time away from traditional broadcast services, satellite providers are working to ensure they retain relevance and offer value. Carrying these OTT services is thus clearly beneficial for them, but why would anyone want to receive OTT by satellite? On top of the mABR benefits briefly outlined above, this business case recognises that not everyone is served by a good internet connection. Distributing OTT by satellite can provide high-bitrate OTT experiences to areas with bad broadband and could also be an efficient way to deliver to large public venues such as hotels and ships.

Julien Lemotheux from Orange presents a business case for next-generation IPTV. The idea here is to bring down the cost of STBs by replacing CA security with DRM and replacing the chipset with a cheaper, less specialised one. As DASH and HLS streaming are CPU-based, well-understood tasks, general mass-produced chipsets can be used, which are cheaper, and removing CA removes some hardware from the box. Also to be considered is that the OTT ecosystem continually sees innovation, so delivering services in the same format allows providers to keep their offerings up to date without custom development in the IPTV software stack.

Xavier Leclercq from Broadpeak looks next at scaling ABR delivery. This business case considers what the ultimate situation will be regarding MPEG-2 transport streams and ABR: why don’t we provide all services as Netflix-style ABR streams? One reason is that the scale is enormous: with one connection per device, CDNs and national networks would still not be able to cope. Another is that the QoS of MPEG-2 transport streams is very good; whilst it is possible to have bad reception, there is little else that causes interruption to the stream.

mABR can address both of these challenges. By delivering one stream to each home and having the local gateway do the scaling, mass delivery of streamed content becomes both predictable and practical. Whilst there is still a lot of bandwidth involved, the load on the CDNs is much more controlled and, with lower peaks, the CDN cost is reduced, as this is normally based on maximum throughput. mABR can also be delivered with a higher QoS than public internet traffic, giving it better reliability which could move it into the realm of traditional transport-stream-based services. Xavier explains that if you put the gateway within a TV, you are able to deliver a set-top-box-less service, whilst if you want to address all the devices in your home, you can provide a separate gateway.
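
A worked comparison makes the scaling argument concrete. The figures are invented, but the ratio is the point: unicast load grows with every device, whilst mABR grows only with the number of homes.

```python
# Invented figures comparing per-device unicast with per-home mABR delivery.
homes            = 1_000_000
devices_per_home = 3
stream_mbps      = 8          # one ABR rendition

unicast_gbps = homes * devices_per_home * stream_mbps / 1000
mabr_gbps    = homes * 1 * stream_mbps / 1000  # one stream per home

print(f"Unicast into the network: {unicast_gbps:,.0f} Gbps")  # 24,000
print(f"mABR into the network:    {mabr_gbps:,.0f} Gbps")     #  8,000
```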

Before the video finishes with a Q&A session, Williams delivers the business case for backhauling to CDNs over satellite and IP backhaul for 5G networks; the two use cases have similarities. The CDN backhauling example looks at using satellite to deliver directly and efficiently to CDN PoPs in hard-to-reach areas which may have limited internet links, with the satellite delivering a high-bandwidth set of streams to many PoPs at once. A similar issue presents itself with 5G: there is so much bandwidth available over the air that there is a concern about getting enough data into the transmitter. Whether by satellite or IP multicast, mABR could be used for CDN backhauling to 5G networks, delivering into a Mobile Edge Computing (MEC) cache. A further benefit of doing this is avoiding issues with CDN and core network scalability where, again, keeping the individual requests and streams away from the CDN and the core network is a big benefit.

Watch now!
Download the slides from this video
Speakers

Williams Tovar
Solution Pre-Sales Manager,
ENENSYS Technologies
Julien Lemotheux
Standardisation Expert,
Orange Labs
Xavier Leclercq
VP Business Development,
Broadpeak
Moderator: Christophe Berdinat
Chairman CM-I MABR, DVB
Innovation and Standardisation Manager, ENENSYS