Video: The Future of Online Video

There are few people who should build their own CDN, contends Steve Miller-Jones from Limelight Networks. If you want to send a parcel, you use a parcel delivery service. So if you want to stream video, use a content delivery network tuned for video. This video looks at the benefits of using CDNs.

John Porterfield welcomes Steve to the YouTube channel JP’sChalkTalks, starting with a basic outline of CDNs. Steve explains that the aim of a CDN is to re-deliver the same content as many times as possible by itself, without having to go back to a central store, or even back to the publisher, to get the video chunk that’s been requested. If your player is a few seconds behind someone else’s in the same geography, then the CDN should be able to deliver you those same chunks almost instantly from somewhere geographically close to you.
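
To make the idea concrete, here is a minimal sketch of that behaviour, assuming a simple in-memory cache keyed by chunk path; the class, URLs and paths are illustrative only, not Limelight's implementation.

```python
# Minimal sketch of the caching behaviour Steve describes: an edge node serves
# a requested chunk from its own cache where possible, only fetching from the
# origin (or a parent tier) on a miss. Names and structure are illustrative only.
import urllib.request

class EdgeCache:
    def __init__(self, origin_base):
        self.origin_base = origin_base   # e.g. "https://origin.example.com"
        self.store = {}                  # chunk path -> bytes

    def get_chunk(self, path):
        if path in self.store:           # cache hit: served locally, no trip upstream
            return self.store[path]
        # Cache miss: fetch once from the origin, then reuse for every nearby viewer
        with urllib.request.urlopen(self.origin_base + path) as resp:
            data = resp.read()
        self.store[path] = data
        return data

# Two players in the same geography requesting the same chunk: the second
# request is answered from the edge almost instantly.
edge = EdgeCache("https://origin.example.com")
# chunk_a = edge.get_chunk("/live/chan1/seg_001.m4s")   # miss: goes to origin
# chunk_b = edge.get_chunk("/live/chan1/seg_001.m4s")   # hit: served from the edge
```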

Steve explains that in the Limelight State of Online Video 2020 Annual Report, rebuffering remains the main frustration with streaming services, holding at approximately 44% as a global average for the last three years. Contrary to popular belief, the older generation is more tolerant of rebuffering than younger viewers.

As well as maintaining a steady feed, low latency remains important. Limelight is able to deliver CMAF down to around 3 seconds of latency, or WebRTC with sub-second latency. To go along with this sub-second video streaming, Limelight also offers sub-second data sharing between players, which Steve explains is an important feature allowing services to develop interactivity, quizzes, community engagement and many other business cases.
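
The kind of data fan-out Steve describes can be pictured with the small sketch below. This is purely illustrative and not Limelight's actual data-sharing API: just a broker pushing the same small message to every connected player at once.

```python
# Rough sketch of low-latency data sharing alongside the video: a hub pushes the
# same small message (a quiz question, a poll, a score update) to every connected
# player at once. Illustrative only; not Limelight's data-sharing API.
import asyncio, json, time

class DataChannelHub:
    def __init__(self):
        self.subscribers = set()          # one queue per connected player

    def subscribe(self):
        q = asyncio.Queue()
        self.subscribers.add(q)
        return q

    def publish(self, event: dict):
        msg = json.dumps({**event, "sent_at": time.time()})
        for q in self.subscribers:        # fan out to every player
            q.put_nowait(msg)

async def player(name, hub):
    q = hub.subscribe()
    msg = await q.get()
    data = json.loads(msg)
    latency_ms = (time.time() - data["sent_at"]) * 1000
    print(f"{name} received {data['type']!r} after {latency_ms:.1f} ms")

async def main():
    hub = DataChannelHub()
    players = [asyncio.create_task(player(f"player{i}", hub)) for i in range(3)]
    await asyncio.sleep(0.1)              # let players subscribe
    hub.publish({"type": "quiz_question", "text": "Who scores next?"})
    await asyncio.gather(*players)

asyncio.run(main())
```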

Lastly Steve outlines the importance of edge computing as a future growth area for CDNs. The first iteration of cloud computing succeeded by moving computing away from individual businesses and into central locations. This worked well for many, largely for financial reasons: it freed organisations from managing some aspects of their own infrastructure and enabled services to scale. However, the logic that decided what happened next always ran in that one central place. If you’re in Australia and the cloud location is in the EU, that’s a long wait for the result of each decision that needs to be made. Edge computing allows small pieces of logic to live in the part of a CDN closest to you. The majority of a service’s infrastructure may well stay in the US, but some of the CDN, be it CloudFront, Limelight or another, will be in Australia itself, so pushing as much of your service as you can to the edge results in significant reductions in latency.
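
The toy sketch below illustrates the principle: a small piece of decision logic that can run at whichever PoP is nearest the viewer. The function and latency figures are assumptions for illustration, not numbers from the talk.

```python
# Toy illustration of the edge-compute idea: a small decision that can run at the
# nearest PoP instead of every request crossing the globe to one central region.
# The logic and latency figures below are illustrative assumptions only.
def geo_restrict(request: dict) -> dict:
    """Decide at the edge whether a viewer may play this asset."""
    allowed = request["viewer_country"] in request["licensed_countries"]
    return {"status": 200 if allowed else 403}

# Rough effect on round-trip time for a viewer in Australia (illustrative figures):
ROUND_TRIP_TO_EU_MS = 300     # central cloud region on the other side of the world
ROUND_TRIP_TO_EDGE_MS = 15    # nearby edge PoP in Australia

request = {"viewer_country": "AU", "licensed_countries": {"AU", "NZ"}}
print(geo_restrict(request))  # {'status': 200}
print(f"Central decision: ~{ROUND_TRIP_TO_EU_MS} ms; edge decision: ~{ROUND_TRIP_TO_EDGE_MS} ms")
```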

Watch now!
Speakers

Steve Miller-Jones
VP Strategy & Industry,
Limelight Networks
John Porterfield
Technology Evangelist,
JP’sChalkTalks YouTube Channel

Video: CDNs: Building a Better Video Experience

With European CDN spend estimated to reach $7bn by 2023, an increase of $1.2bn in only three years, it’s clear there is no relenting in the march towards IP. In fact, that’s a guiding principle of the BBC’s transmission strategy as we hear from this panel, which brings together three broadcasters, beIN, Globo and the BBC, to discuss how they’re using CDNs at the moment and their priorities for the future.

Carlos Octavio introduces Globo’s massive scale of programming for Brazil and Latin America. Producing 26,000 hours of content annually, they aim to differentiate themselves as much with the technology of their offerings as with the content. This thirst for differentiation drives their CDN strategy. Brazil is a massive country, so covering the footprint is hard. Octavio explains that they have created their own CDN to support Globo Play, based on four tiers from their two super PoPs in Rio and São Paulo down to edge caches. Octavio shows that they are able to achieve the same response times as the major CDN companies in the region. For overflow capacity, Globo uses a multi-CDN approach.

Bhavesh Patel talks about the sports and news output of beIN, both of which are bursty in nature. Whilst traffic for sporting events can be forecast, with news this is often not possible. This, plus the wide variability of customers’ home bandwidth, drives their choice of which CDNs to partner with. Over the next twelve months, Bhavesh explains, beIN’s focus will move to bringing down latency on their system as a whole, not on a service-by-service level. They also expect to keep modifying their ABR ladders to follow viewers as they shift from second screens to 60-inch TVs; an illustrative example of such a change follows below.
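
As a purely hypothetical illustration of what modifying a ladder might mean in practice (these renditions and bitrates are not beIN's), the top of the ladder gains higher-resolution rungs as large TVs dominate viewing:

```python
# Hypothetical ABR ladders, re-weighted as viewing shifts from phones to big TVs:
# low rungs are consolidated and higher-resolution, higher-bitrate rungs are added.
mobile_era_ladder = [
    (416,  234,   300),   # (width, height, kbps)
    (640,  360,   800),
    (960,  540,  1800),
    (1280, 720,  3200),
]

big_screen_ladder = [
    (640,  360,   800),
    (960,  540,  1800),
    (1280, 720,  3200),
    (1920, 1080, 5800),
    (3840, 2160, 16000),  # UHD rung for 60-inch TVs
]
```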

The BBC’s approach to distribution is explained by Paul Tweedy. Whilst the BBC is still well known as a linear, public broadcaster, it has been using online distribution for 25 years and continues to innovate in that space. Two important aspects of their strategy are being on as many devices as practical and ensuring the quality of the online experience meets, or is at least comparable to, that of the linear services. The BBC has been using multiple CDNs for many years now; what changes is the balance and what they use CDNs for. They cover a lot of sports, explains Paul, which leads to short-term scaling difficulties, but long-term scaling difficulties are equally on his mind due to what the BBC calls the ‘glide path to IP’. This is the acknowledgement that, at some point, it won’t be financially viable to run transmitters and IP will be the wise way to use the licence fee on which the BBC depends. Doing this, clearly, will demand IP delivery at many times the current scale. Yesterday’s article on multicast ABR is one way in which this may be mitigated and fits into a multi-CDN strategy.

Looking at today’s streaming services, Paul and his colleagues aim to get analytics from every player on every device wherever possible. Big data techniques are used to understand these logs along with server-side, client-to-edge and edge-to-origin logs. This information, along with sports schedules, feeds capacity planning, though many news events are much less easy to plan for. It’s these unplanned, high-peak events which drive the BBC’s build-up of internal monitoring tools to help them understand what is working well under load and what’s starting to feel the strain, so they can take action to ensure quality is maintained even through these times of intense interest. The BBC manage their baseline capacity with their own CDN, called BIDI, which allows an easier-to-forecast budget. Multiple third-party CDNs are then the key to providing the variable and peak capacities needed.

As we head into the Q&A, Limelight’s Steve Miller-Jones outlines the company’s strengths, including their focus on adding capabilities on top of a ‘typical’ CDN: for instance, running applications on the CDN, which is particularly useful as part of edge compute, and their ability to run WebRTC at scale, which not many CDNs are built to do. The Q&A sees the broadcasters outlining what they particularly look for in a CDN and how they leverage AI. Globo anticipate using AI to help them predict traffic demand, beIN see it providing automated highlights, whilst the BBC see it enabling easier access to their deep archives.

Watch now!
Free registration
Speakers

Carlos Octavio
Head of Architecture and Analytics,
Globo
Bhavesh Patel
Global Digital Director,
beIN MEDIA GROUP
Paul Tweedy
Lead Architect, Online Technology Group,
BBC Design + Engineering
Steve Miller-Jones
Vice President of Product Strategy,
Limelight Networks

Video: Making a case for DVB-MABR

Multicast ABR (mABR) is a way of delivering traditional HTTP-based streams like HLS and DASH over multicast. On a managed telco network, the services are multicast to thousands of homes and only within the home itself does the stream get converted back to unicast HTTP. Devices in the home then access streaming services in exactly the same way as they would Netflix or iPlayer over the internet, but the content is served locally. Streaming is a point-to-point service, so each device takes its own stream: if you have three devices in the home watching a service, you’ll be sending three streams out to them. With mABR, the core network only ever sees one stream to the home and the scaling to individual devices is done internally. Not only does this remove peaks in traffic and smooth out bandwidth use, it significantly reduces the load on the upstream networks and the origin servers.
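
To illustrate the gateway's role, here is a heavily simplified sketch: join a multicast group, collect segments as they arrive, and serve them to players in the home over ordinary unicast HTTP. It ignores the real DVB-mABR packaging, repair and manifest handling, and the address, port and framing are assumptions for illustration.

```python
# Very simplified sketch of an in-home mABR gateway: one multicast copy arrives,
# any number of players fetch it locally over unicast HTTP. Not the DVB-mABR spec;
# the group address, port and segment framing are illustrative assumptions.
import socket, struct, threading
from http.server import BaseHTTPRequestHandler, HTTPServer

MCAST_GROUP, MCAST_PORT = "239.1.1.1", 5000
segments = {}                                  # segment name -> bytes

def receive_multicast():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        payload, _ = sock.recvfrom(65535)
        # Assume trivial framing: "<segment-name>\n<segment-bytes>" per datagram
        name, _, body = payload.partition(b"\n")
        key = name.decode()
        segments[key] = segments.get(key, b"") + body

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        name = self.path.lstrip("/")
        if name in segments:                   # serve locally to each player
            self.send_response(200)
            self.end_headers()
            self.wfile.write(segments[name])
        else:
            self.send_error(404)

threading.Thread(target=receive_multicast, daemon=True).start()
HTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```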

This video from DVB lays out the business cases which are enabled by mABR. DVB has approved the mABR specification, which is now going for standardisation within ETSI. It’s already gained some traction with deployments in the field, so this talk looks at the kinds of projects that may drive the continued growth of mABR.

Williams Tovar starts by making the case for OTT over satellite. With OTT services continuing to take viewing time away from traditional broadcast services, satellite providers are working to ensure they retain relevance and offer value. Delivering these OTT services is thus clearly beneficial, but why deliver them over satellite? On top of the mABR benefits briefly outlined above, this business case recognises that not everyone is served by a good internet connection. Distributing OTT by satellite can provide high-bitrate OTT experiences to areas with poor broadband, and could also be an efficient way to deliver to large public places such as hotels and ships.

Julien Lemotheux from Orange presents a business case for next-generation IPTV. The idea here is to bring down the cost of STBs by replacing CA security with DRM and replacing the chipset with a cheaper, less specialised one. As DASH and HLS streaming are CPU-based tasks and well understood, general mass-produced chipsets can be used, which are cheaper, and removing CA removes some hardware from the box. Also to be considered is that the OTT ecosystem is continually seeing innovation, so delivering services in the same format allows providers to keep their offerings up to date without custom development in the IPTV software stack.

Xavier Leclercq from Broadpeak looks next at scaling ABR delivery. This business case considers what the end state will be for MPEG-2 transport streams versus ABR. Why don’t we provide all services as Netflix-style ABR streams? One reason is that the scale is enormous: with one connection per device, CDNs and national networks would still not be able to cope. Another is that the QoS for MPEG-2 transport streams is very good and, whilst it is possible to have bad reception, there is little else that causes interruption to the stream.

mABR can address both of these challenges. By delivering one stream to each home and having the local gateway do the scaling, mass delivery of streamed content becomes both predictable and practical. Whilst there is still a lot of bandwidth involved, the load on the CDNs is much more predictable, and with lower peaks the CDN cost is reduced, as this is normally based on the maximum throughput. mABR can also be delivered with a higher QoS than public internet traffic, giving it better reliability, which could move it into the realm of traditional transport-stream-based services. Xavier explains that if you put the gateway within a TV, you can deliver a set-top-box-less service, whilst if you want to address all devices in your home, you can provide a separate gateway.

Before the video finishes with a Q&A session, Williams delivers the business case for backhauling over satellite for CDNs and IP backhaul for 5G networks. The two use cases are similar. The CDN backhauling example looks at using satellite to deliver directly and efficiently to CDN PoPs in hard-to-reach areas which may have limited internet links; the satellite could deliver a high-bandwidth set of streams to many PoPs at once. A similar issue presents itself with 5G: with so much radio bandwidth available, the concern is getting enough data into the transmitter. Whether by satellite or IP multicast, mABR could be used for CDN backhauling to 5G networks, delivering into a Mobile Edge Computing (MEC) cache. A further advantage of doing this is avoiding issues with CDN and core network scalability where, again, keeping the individual requests and streams away from the CDN and the core network is a big benefit.

Watch now!
Download the slides from this video
Speakers

Williams Tovar
Solution Pre-sales Manager,
ENENSYS Technologies
Julien Lemotheux
Standardisation Expert,
Orange Labs
Xavier Leclercq
VP Business Development,
Broadpeak
Moderator: Christophe Berdinat
Chairman CM-I MABR, DVB
Innovation and Standardisation Manager, ENENSYS

Video: Layer 4 in the CDN

Caching is a critical element of the streaming video delivery infrastructure, but with the proliferation of streaming services, managing caching is complex and problematic. Open Caching is an initiative by the Streaming Video Alliance to bring this under control allowing ISPs and service providers a standard way to operate.

By caching objects as close to the viewer as possible, you reduce round-trip times, which lowers latency and can improve playback. More importantly, moving the point at which content is distributed closer to the customer allows you to reduce your bandwidth costs and create a more efficient delivery chain.

This video sees Disney Streaming Services, ViaSat and Stackpath discussing Open Caching with Jason Thibeault, Executive Director of the Streaming Video Alliance. Eric Klein from Disney explains that one driver for Open Caching comes from content producers, who find it hard to scale delivery of content in a consistent manner across many different networks; standardising the interfaces will help remove this barrier to scale. Alongside the drive from content producers are the needs of the network operators, who are interested in moving caching onto their networks, which reduces back-and-forth traffic and can help cope with peaks.

Dan Newman from Viasat builds on these points, looking at the edge storage project. This is a project to move caching to the edge of the network, an extension of the original Open Caching concept. The idea stretches to putting caching directly into the home. One use of this, he explains, is to cache UHD content which would otherwise be too large to download over lower-bandwidth links.

Josh Chesarek from StackPath says that their interest in being involved in the Open Caching initiative is to get consistency and interoperability between CDNs. The Open Caching group is looking at creating standard APIs for capacity, configuration and so on. Eric also underlines the interest in interoperability, pointing to the close work being done with the IETF to find better standards on which to base their work.

Looking at the test results, the average bitrate increases by 10% when using Open Caching, and there is also a 20-40% improvement in connection use and rebuffer ratio, which shows viewers are seeing an improved experience. Viasat have used multicast ABR plus Open Caching. This shows there’s certainly promise behind the work that’s ongoing. The panel finishes by looking towards what’s next in terms of the project and CDN optimisation.

Watch now!
Speakers

Eric Klein
Director, CDN Technology,
Disney+
Dan Newman
Product Manager,
Viasat
Josh Chesarek
VP, Sales Engineering & Support
Stackpath.com
Jason Thibeault
Executive Director, Streaming Video Alliance