Video: CDNs: Building a Better Video Experience

With European CDN spend estimated to reach $7bn by 2023, an increase of $1.2bn in only three years, it’s clear there is no relenting in the march towards IP. In fact, that’s a guiding principle of the BBC’s transmission strategy, as we hear from this panel which brings together three broadcasters, beIN, Globo and the BBC, to discuss how they’re using CDNs at the moment and their priorities for the future.

Carlos Octavio introduces Globo’s massive scale of programming for Brazil and Latin America. Producing 26,000 hours of content annually, they aim to differentiate themselves as much by the technology of their offerings as by the content, and this thirst for differentiation drives their CDN strategy. Brazil is a massive country, so covering the footprint is hard. Octavio explains that they have created their own CDN to support Globo Play, built on 4 tiers running from their two super PoPs in Rio and São Paulo down to edge caches. Octavio shows that they are able to achieve the same response times as the major CDN companies in the region. For overflow capacity, Globo uses a multi-CDN approach.

Bhavesh Patel talks about the sports and news output of beIN, both of which are bursty in nature. Whilst traffic for sporting events can be forecast, with news this is often not possible. This, plus the wide variability of customers’ home bandwidth, drives the choice of which CDNs to partner with. Over the next twelve months, Bhavesh explains, beIN’s focus will move to bringing down latency on their system as a whole, not on a service-by-service level. They also expect to continue modifying their ABR ladders to follow viewers as they continue their shift from second screens to 60-inch TVs.

The BBC’s approach to distribution is explained by Paul Tweedy. Whilst the BBC is still well known as a linear, public broadcaster, it has been using online distribution for 25 years and continues to innovate in that space. Two important aspects of their strategy are being on as many devices as practical and ensuring the quality of the online experience is comparable to the linear services. The BBC has been using multiple CDNs for many years now; what changes is the balance and what they use CDNs for. They cover a lot of sport, explains Paul, which leads to short-term scaling difficulties, but long-term scaling difficulties are equally on his mind due to what the BBC calls the ‘glide path to IP’. This is the acknowledgement that, at some point, it won’t be financially viable to run transmitters and IP will be the wiser way to use the licence fee on which the BBC depends. Doing this, clearly, will demand IP delivery at many times the current scale. Yesterday’s article on multicast ABR is one way in which this may be mitigated and fits into a multi-CDN strategy.

Looking at today’s streaming services, Paul and his colleagues aim to get analytics from every player on every device wherever possible. Big data techniques are used to understand these logs alongside server-side, client-to-edge and edge-to-origin logs. This information, along with sports schedules, feeds capacity planning, though many news events are far harder to plan for. It’s these unplanned, high-peak events which drive the BBC’s build-up of internal monitoring tools, helping them understand what is working well under load and what’s starting to feel the strain so they can act to ensure quality is maintained through these times of intense interest. The BBC manage their baseline capacity with their own CDN, called BIDI, which covers predictable needs and allows an easier-to-forecast budget. Multiple third-party CDNs are then the key to providing the variable and peak capacities needed.
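As a simple illustration of the baseline-plus-overflow idea, here is a minimal sketch that fills an in-house CDN first and spills anything above its capacity onto third-party CDNs. The capacity figures and CDN names are invented for the example and are not the BBC’s actual numbers.

```python
# Illustrative baseline-plus-overflow allocation across an in-house CDN and
# third-party CDNs. All capacities and names are invented for this example.
def allocate(demand_gbps, cdns):
    """Fill each CDN in priority order up to its stated capacity."""
    allocation, remaining = {}, demand_gbps
    for name, capacity_gbps in cdns:
        used = min(remaining, capacity_gbps)
        allocation[name] = used
        remaining -= used
    allocation["unserved"] = remaining
    return allocation

# The in-house CDN covers the predictable baseline; third parties take peaks.
cdns = [("in-house", 600), ("third-party-A", 400), ("third-party-B", 400)]
print(allocate(250, cdns))    # quiet day: baseline CDN absorbs everything
print(allocate(1100, cdns))   # big event: demand spills onto third parties
```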

As we head into the Q&A, Limelight’s Steve Miller-Jones outlines the company’s strengths, including their focus on adding abilities on top of a ‘typical’ CDN: for instance, running applications on the CDN, which is particularly useful as part of edge compute, and running WebRTC at scale, which not many CDNs are built to do. The Q&A sees the broadcasters outlining what they particularly look for in a CDN and how they leverage AI. Globo anticipate using AI to help them predict traffic demand, beIN see it providing automated highlights, whilst the BBC see it enabling easier access to their deep archives.

Watch now!
Free registration
Speakers

Carlos Octavio
Head of Architecture and Analytics,
Globo
Bhavesh Patel
Global Digital Director,
beIN MEDIA GROUP
Paul Tweedy
Lead Architect, Online Technology Group,
BBC Design + Engineering
Steve Miller-Jones
Vice President of Product Strategy,
Limelight Networks

Video: Making a case for DVB-MABR

Multicast ABR (mABR) is a way of delivering traditional HTTP-based streams like HLS and DASH over multicast. On a managed telco network, the services are multicast to thousands of homes and only within the home itself is the stream converted back to unicast HTTP. Devices in the home then access streaming services in exactly the same way as they would Netflix or iPlayer over the internet, but the content is served locally. Streaming is a point-to-point service, so each device takes its own stream; if you have 3 devices in the home watching a service, you’ll be sending 3 streams out to them. With mABR, the core network only ever sees one stream to the home and the scaling out to individual devices is done internally. Not only does this remove peaks in traffic and smooth out bandwidth use, it significantly reduces the load on the upstream networks and the origin servers.
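To make the home-gateway role concrete, here is a minimal, hypothetical sketch of the concept: segments arrive once over a multicast group, are cached, and are re-served to in-home players as ordinary unicast HTTP. The addresses, port and one-datagram-per-segment framing are assumptions purely for illustration, not how any particular DVB-mABR implementation works.

```python
# Hypothetical sketch of an mABR home gateway: receive DASH/HLS segments once
# over multicast, cache them locally, and serve them to in-home players as
# plain unicast HTTP. Group address, port and framing are illustrative only.
import socket
import struct
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

MCAST_GROUP, MCAST_PORT = "239.1.1.1", 5004   # assumed addresses
segment_cache = {}                            # URL path -> segment bytes

def receive_multicast():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        # Toy framing: "<segment path>\n<payload>" in one datagram per segment.
        datagram, _ = sock.recvfrom(65535)
        path, _, payload = datagram.partition(b"\n")
        segment_cache[path.decode()] = payload

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Players on the home network request segments exactly as they would
        # from an internet CDN; the gateway answers from its local cache.
        segment = segment_cache.get(self.path)
        if segment is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(segment)))
        self.end_headers()
        self.wfile.write(segment)

if __name__ == "__main__":
    threading.Thread(target=receive_multicast, daemon=True).start()
    HTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```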

This video from DVB lays out the business cases which are enabled by mABR. DVB has approved the mABR specification, which is now going for standardisation within ETSI. It’s already gained some traction with deployments in the field, so this talk looks at the projects likely to drive the continued growth of mABR.

Williams Tovar starts by making the case for OTT over satellite. With OTT services continuing to take viewing time away from traditional broadcast services, satellite providers are working to ensure they retain relevance and offer value. Delivering these OTT services over satellite is thus clearly attractive to them, but why would a viewer want it? On top of the mABR benefits briefly outlined above, this business case recognises that not everyone is served by a good internet connection. Distributing OTT by satellite can provide high-bitrate OTT experiences to areas with poor broadband and could also be an efficient way to deliver to large public places such as hotels and ships.

Julien Lemotheux from Orange presents a business case for next-generation IPTV. The idea here is to bring down the cost of STBs by replacing CA security with DRM and replacing the chipset with a cheaper, less specialised one. As DASH and HLS playback are well-understood, CPU-based tasks, cheaper general-purpose, mass-produced chipsets can be used, and removing CA removes some hardware from the box. Also to be considered is that the OTT ecosystem is continually seeing innovation, so delivering services in the same format allows providers to keep their offerings up to date without custom development in the IPTV software stack.

Xavier Leclercq from Broadpeak looks next at Scaling ABR Delivery. This business case considers what the ultimate situation will be regarding MPEG-2 transport streams and ABR: why don’t we provide all services as Netflix-style ABR streams? One reason is that the scale would be enormous; with one connection per device, CDNs and national networks would not be able to cope. Another is that the QoS for MPEG-2 transport streams is very good and, whilst it is possible to have bad reception, there is little else that causes interruption to the stream.

mABR can address both of these challenges. By delivering one stream to each home and having the local gateway do the scaling, mass delivery of streamed content becomes both predictable and practical. Whilst there is still a lot of bandwidth involved, the load on the CDNs is much more controlled and, with lower peaks, the CDN cost is reduced as this is normally based on the maximum throughput. mABR can also be delivered with a higher QoS than public internet traffic, giving it better reliability which could move it into the realm of traditional transport-stream-based services. Xavier explains that if you put the gateway within a TV you can deliver a set-top-box-less service, whilst if you want to address all devices in your home you can provide a separate gateway.
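As a rough, back-of-the-envelope illustration of that scaling argument (all figures below are assumed, not from the talk): unicast delivery grows with the number of devices watching, while mABR grows only with the number of homes, and it is the peak that CDN pricing typically tracks.

```python
# Back-of-the-envelope comparison of unicast ABR vs mABR for a big live event.
# All figures are illustrative assumptions, not measured data.
homes = 1_000_000          # homes tuned in to the event
devices_per_home = 2.5     # average concurrent players per home
top_profile_mbps = 8       # bitrate of the highest ABR rendition

unicast_peak_gbps = homes * devices_per_home * top_profile_mbps / 1000
mabr_peak_gbps = homes * top_profile_mbps / 1000   # one stream per home

print(f"Unicast peak into the access network: {unicast_peak_gbps:,.0f} Gbps")
print(f"mABR peak into the access network:    {mabr_peak_gbps:,.0f} Gbps")
# Because CDN pricing is commonly tied to peak throughput, the lower, more
# predictable mABR peak is what brings the delivery cost down.
```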

Before the video finishes with a Q&A session, Williams delivers the business cases for backhauling over satellite for CDNs and IP backhaul for 5G networks; the two have similarities. The CDN backhauling example looks at using satellite to deliver efficiently and directly to CDN PoPs in hard-to-reach areas which may have limited internet links; the satellite could deliver a high-bandwidth set of streams to many PoPs. A similar issue presents itself with 5G: there is so much bandwidth available over the air that getting enough data into the transmitter becomes a concern. Whether by satellite or IP multicast, mABR could be used for CDN backhauling to 5G networks, delivering into a Mobile Edge Computing (MEC) cache. A further advantage is avoiding issues with CDN and core network scalability where, again, keeping the individual requests and streams away from the CDN and the network is a big benefit.

Watch now!
Download the slides from this video
Speakers

Williams Tovar
Solution Pre-sales Manager,
ENENSYS Technologies
Julien Lemotheux
Standardisation Expert,
Orange Labs
Xavier Leclercq
VP Business Development,
Broadpeak
Moderator: Christophe Berdinat
Chairman CM-I MABR, DVB
Innovation and Standardisation Manager, ENENSYS

Video: Layer 4 in the CDN

Caching is a critical element of the streaming video delivery infrastructure, but with the proliferation of streaming services, managing caching is complex and problematic. Open Caching is an initiative by the Streaming Video Alliance to bring this under control, giving ISPs and service providers a standard way to operate.

By caching objects as close to the viewer as possible, you can reduce round-trip times, which cuts latency and can improve playback. More importantly, moving the point at which content is distributed closer to the customer allows you to reduce your bandwidth costs and create a more efficient delivery chain.

This video sees Disney Streaming Services, ViaSat and Stackpath discussing Open Caching with Jason Thibeault, Executive Director of the Streaming Video Alliance. Eric Klein from Disney explains that one driver for Open Caching comes from content producers, who find it hard to scale delivery of content in a consistent manner across many different networks; standardising the interfaces will help remove this barrier to scale. Alongside the drive from content producers are the needs of the network operators, who are interested in moving caching onto their networks, which reduces back-and-forth traffic and can help cope with peaks.

Dan Newman from Viasat builds on these points, looking at the edge storage project. This is a project to move caching to the edge of the networks, an extension of the original open caching concept, and the idea stretches to putting caching directly into the home. One use of this, he explains, is to cache UHD content which would otherwise be too big to deliver over lower-bandwidth links.

Josh Chesarek from StackPath says that their interest in being involved in the Open Caching initiative is to get consistency and interoperability between CDNs; the Open Caching group is looking at creating standard APIs for capacity, configuration and so on. Eric also underlines the interest in interoperability, pointing to the close work being done with the IETF to find better standards on which to base their work.

Looking at the test results, the average bitrate increases by 10% when using open caching, and there is also a 20-40% improvement in connection use and rebuffer ratio, which shows viewers are seeing an improved experience. Viasat have also combined multicast ABR with open caching. This shows there’s certainly promise behind the work that’s ongoing. The panel finishes by looking towards what’s next in terms of the project and CDN optimisation.

Watch now!
Speakers

Eric Klein
Director, CDN Technology,
Disney+
Dan Newman
Product Manager,
Viasat
Josh Chesarek
VP, Sales Engineering & Support,
Stackpath.com
Jason Thibeault
Executive Director, Streaming Video Alliance

Video: CDN Trends in FPGAs & GPUs

As technology continues to improve, immersive experiences are all the more feasible. This video looks at how CDNs can play their part in enabling technologies which rely on fast, local compute and, as with many internet services, low latency.

Greg Jones from Nvidia and Nehal Mehta from Intel give us the lowdown in this video on what’s happening today to enable low-latency CDNs and what the future might look like. Intel, owner of FPGA maker Altera, and Nvidia are both interested in how their products can be of as much service at the edge as in the core datacentres.

Greg is involved in XR development at Nvidia. ‘XR’ is a term which refers to an outcome rather than any specific technology. Ostensibly ‘eXtended’ reality, it includes some VR, some augmented reality and anything else which helps improve the immersive experience. Greg explains the importance of getting the ‘motion to photon’ delay to within 20ms. CDNs can play a role in this by moving compute to the edge, which tracks with the current trend towards reducing backhaul; edge computation is already on the rise.
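To see why that pushes compute to the edge, here is a rough motion-to-photon budget; every stage timing below is an assumption for illustration, not a figure from the talk. Once tracking, rendering and encode/decode are accounted for, only a few milliseconds remain for the network round trip, which effectively rules out a distant data centre.

```python
# Illustrative motion-to-photon budget for remotely rendered XR.
# All stage timings are assumptions for this example, not measured figures.
budget_ms = 20.0
stages_ms = {
    "sample head pose and transmit": 1.0,
    "render frame on remote GPU": 7.0,
    "encode frame": 3.0,
    "decode and display on headset": 4.0,
}
network_allowance_ms = budget_ms - sum(stages_ms.values())
print(f"Left for network round trip: {network_allowance_ms:.1f} ms")
# ~5 ms of round trip keeps the compute within a nearby edge or CDN PoP;
# a distant core data centre would spend that on propagation delay alone.
```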

Greg also touches on recent power improvements in newer GPUs. This is similar to what we heard the other day from Gerard Phillips of Arista, who said that switch manufacturers are still using process technology that CPUs were on several years ago, meaning there’s plenty in the bank for speed increases over the coming years. According to Greg, the same is true for GPUs. Moreover, it’s important to compare compute per watt rather than in absolute terms.

Nehal Mehta explains that, in the same way that GPUs can offload certain tasks from the CPU, so can FPGAs. At scale, this can be critical for tasks like deep packet inspection, encryption or even dynamic ad insertion at the edge.

The second half of the video looks at what’s happening during the pandemic. Nehal explains that the need for encryption has increased, and Greg sees that large engineering functions are now, or are soon likely to be, done in the cloud. Greg sees XR going a long way to helping people collaborate around a large digital model, which may help to reduce travel.

The last point made regards all-day video conferencing leaving people wanting “more meaningful interactions”. We are seeing attempts at richer and richer meeting experiences, both with and without XR.
Watch now!
Speakers

Greg Jones
Global Business Development, XR
NVIDIA
Nehal Mehta
Director Visual Cloud, CDN Segment,
Intel
Moderator: Tim Siglin
Founding Executive Director,
Help Me Stream