Video: Scaling of Live Streaming on the Ingest Side

You can quickly and easily ‘scale up’ in the cloud, but how? Life is seldom as easy as just clicking a button and, when you do find a button, chances are it will help you scale your outputs. But what happens when you need to scale your inputs? What should you consider when creating your scaling architecture in the cloud? Why is scaling down more difficult than scaling up for a peak? This webinar highlights what you need to know.

Karel Boek from Raskenlund starts by explaining that, while CDNs allow scaling for delivery to end-users, there are fewer solutions for scaling your ingest. Even if you’re streaming using WebRTC, which isn’t cacheable via CDNs, there are companies such as NanoCosmos who will scale that for you. For ingest, though, scaling quickly becomes bespoke.

There is, Karel explains, the option to outsource the entire operation to AWS. For many this is, on the face of it, ideal as there’s not that much work to be done. However, you may need more customisation than is possible on such a general service and, more importantly, there’s a problem which also affects the second option, creating some of your own workflows but using the cloud to scale.

The problem with cloud autoscalers is that they’re built for HTTP. Karel details how they look at metrics from your servers to determine the point at which to scale: metrics such as the number of HTTP connections, CPU usage, bandwidth etc. Although Google does allow custom metrics, you may quickly find that a key metric such as GPU load isn’t supported, leaving you to scale without the most important data driving the decision-making. Worse, when it comes to scaling down, autoscalers don’t understand ingest. As ingest streams stop, the autoscaler could see a server which is taking a feed but showing very low utilisation, and kill it while it is still distributing that stream.
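To make the difference concrete, here is a minimal, hypothetical sketch of an ingest-aware termination check. Unlike a generic HTTP autoscaler, it refuses to kill any server still carrying a live feed, however idle the server looks; the `server` fields are stand-ins for whatever your media server’s stats API actually exposes.

```python
# Hypothetical sketch: scale-down logic that understands ingest.
# A generic HTTP autoscaler would terminate on low CPU alone; here,
# any server still holding a live contribution feed is protected.

def safe_to_terminate(server) -> bool:
    if server.active_ingest_sessions > 0:
        return False   # terminating now would cut off a live stream
    if server.cpu_percent > 20:
        return False   # still doing meaningful work
    return True        # genuinely idle: safe to drain from the pool
```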

Building your own system is the only way to fully mitigate or remove these problems, as it puts you in full control, letting you create a system sensitive to the ‘unusual’ metrics of ingesting streams, which are very different from the HTTP file serving that many autoscalers are built around. Karel looks at the elements of such a solution, including load balancers, proxy servers and an algorithm which listens to metrics and makes up- and down-scaling decisions.
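As a sketch of what such an algorithm might look like, assuming hypothetical `pool` and `server` objects wrapping your cloud provider’s API and your media servers’ stats:

```python
# Hypothetical sketch of an ingest-aware autoscaling loop. It keys off
# GPU load and live session counts (the metrics generic HTTP
# autoscalers often can't see) and never terminates a busy server.

SCALE_UP_GPU = 80     # average % GPU load that triggers growth
SCALE_DOWN_GPU = 30   # average % GPU load that permits shrinking
MAX_SERVERS = 50      # hard ceiling (see the pitfalls below)

def autoscale_tick(pool):
    avg_gpu = sum(s.gpu_percent for s in pool.servers) / len(pool.servers)
    if avg_gpu > SCALE_UP_GPU and len(pool.servers) < MAX_SERVERS:
        pool.launch_server()   # booting takes minutes, so act early
    elif avg_gpu < SCALE_DOWN_GPU:
        idle = [s for s in pool.servers if s.active_ingest_sessions == 0]
        if idle:
            pool.drain_and_terminate(idle[0])  # only ever remove empty servers
```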

Karel advises writing down the logic for when and how to scale up so that it’s clear and well thought through. Similarly, you need a strategy for load balancing (i.e. why is round-robin the right or wrong choice for you?) and a plan for scaling down. In order to scale down with minimal impact, you need to scale up well: use as many clues as you can to group similar feeds onto similar servers, as sketched below. This means a whole server is more likely to fall free than if you mix and match long-lived and short-lived feeds on the same server, say.
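Here is a toy placement function along those lines, assuming hypothetical `feed` and `server` objects that expose a coarse feed classification:

```python
# Toy sketch: pack feeds of the same class (e.g. expected duration,
# codec, bitrate tier) onto the same servers, so that when one class
# of feeds ends, whole machines fall empty and can be scaled away.

def pick_server(servers, feed):
    same_class = [s for s in servers
                  if s.feed_class == feed.feed_class and s.has_capacity()]
    if same_class:
        # Fill the fullest matching server first to concentrate load
        return max(same_class, key=lambda s: s.session_count)
    empty = [s for s in servers if s.session_count == 0]
    if empty:
        empty[0].feed_class = feed.feed_class
        return empty[0]
    raise RuntimeError("no capacity left: scale up rather than place")
```

Contrast this with round-robin, which spreads long- and short-lived feeds evenly across the pool and tends to leave every machine slightly busy, so no machine can ever be terminated.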

Finally, Karel details the three main pitfalls: scaling down, the time taken to scale up (can you wait 3 minutes?), and setting upper limits on your scaling to prevent your algorithm autoscaling you into debt by spinning up tens, hundreds or thousands of unnecessary servers.
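That last safeguard is cheap insurance. A hypothetical guard like the one below means a runaway algorithm costs you tens of servers rather than thousands:

```python
MAX_SERVERS = 40   # hard ceiling: a bug should cost tens of servers, not thousands

def clamp_scale_request(current_count, requested_new):
    """Cap how many servers the algorithm may add in one step."""
    allowed = max(0, MAX_SERVERS - current_count)
    return min(requested_new, allowed)
```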

Watch now!
Speakers

Karel Boek
CEO,
Raskenlund

Video: CMAF And The Future Of OTT

Why is CMAF still ‘the future’ of OTT? Published in 2018, CMAF has been around for a while now, so what are the challenges and hurdles holding up implementation? Are there reasons not to use it at all? CMAF is a way of encoding and packaging media which can then be sent using MPEG DASH or HLS, the latter being the path Disney+ has chosen, for instance.

This panel from Streaming Media West Connect, moderated by Jan Ozer, discusses CMAF use within Akamai, Netflix, Disney+, and Hulu. Peter Chave from Akamai starts off by making the point that CMAF is important to CDNs because if companies are able to use just one CMAF file as the source for different delivery formats, this reduces storage costs for customers and makes each individual file more popular, increasing the chance of it being available in the CDN (particularly at the edge) and reducing cache misses. Akamai have had to do some work to ensure that CMAF is carried throughout the CDN efficiently and that manifests are correctly checked.
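As an illustration of the ‘one file, several formats’ idea, recent FFmpeg builds can write CMAF (fragmented MP4) segments once and emit both a DASH manifest and an HLS playlist that reference them. The sketch below wraps that in Python; the paths are placeholders and this is our example rather than anything the panellists specifically run.

```python
import subprocess

# Sketch: repackage already-encoded media into one set of CMAF
# segments referenced by both a DASH manifest (stream.mpd) and an
# HLS playlist (master.m3u8). Assumes a recent FFmpeg build.

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c", "copy",                  # repackage only, no re-encode
    "-f", "dash",
    "-seg_duration", "4",          # 4-second segments
    "-dash_segment_type", "mp4",   # fMP4/CMAF segments rather than WebM
    "-hls_playlist", "1",          # also emit an HLS playlist
    "out/stream.mpd",
], check=True)
```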

Disney+, explains Bill Zurat, is 100% HLS CMAF. Benefiting from the long experience of the Disney Streaming Services teams (formerly BAMTECH), but also from setting up a brand new service, Disney were able to bring in CMAF from the start. There are issues in ensuring end-device support, but as part of the launch, a number of devices which couldn’t support either the protocol or the DRM needed were sunsetted.

Hulu is an aggregator, so they have strong motivation to normalise inputs, we hear from Hulu’s Nick Brookins. But they also originate programming along with live streaming, so CMAF has an important part to play on the way in as well as the way out. Hulu dynamically regenerates their manifests, so they can iterate easily as they roll out. They are currently part way through the rollout and will achieve full CMAF compatibility within the next 18 months.

The conversation turns to DRM. CMAF supports two encryption modes: CTR, which most DRM systems have historically used, and CBC (also known as CBCS), which is the mode Apple adopted. AV1 supports both, but the recommendation has been to use CBC, which appears to have been universally followed to date, explains Netflix’s Cyril Concolato. Netflix have been using AV1 since it was finalised and are aiming to have most titles transitioned to CMAF by 2021.

Peter comments from Akamai’s position that they see a number of customers who, like Disney+ and Peacock, have been able to enter the market recently and move straight to CMAF, but there is a whole continuum of companies who are restricted by their workflows and their viewers’ devices in moving to CMAF.

Low-latency streaming is one topic which invigorates minds and debates for many in the industry. Netflix, being purely video on demand, is not interested in low-latency streaming. Hulu is, however, as is Disney Streaming Services, but Bill cautions against rushing to the bottom in terms of latency. Quality of experience is improved by extra latency, both in terms of reduced rebuffering and, in some cases, picture quality. Much of Disney Streaming Services’ output needs to match cable, rather than meet over-the-air latencies or lower.

The panel session finishes with a quick-fire round of questions from Jan and the audience covering codec strategy, whether their workflows have changed to incorporate CMAF, just-in-time vs static packaging, and what customers get out of CMAF.

Watch now!
Speakers

Cyril Concolato
Senior Software Engineer,
Netflix
Peter Chave
Principal Architect,
Akamai
Nick Brookins
VP, Platform Services Group,
Hulu
Bill Zurat
VP, Core Technology,
Disney Streaming Services
Moderator: Jan Ozer
Contributing Editor, Streaming Media
Owner, StreamingLearningCenter.com

Video: Super Resolution: What’s the buzz and why does it matter?

“Enhance!” the captain shouts as the blurry image on the main screen becomes sharp and crisp again. This was sci-fi – and it still is sci-fi – but super-resolution techniques are showing that it’s really not that far-fetched. Able to increase the sharpness of video, machine learning can enable upscaling from HD to UHD as well as increasing the frame rate.

Bitmovin’s Adithyan Ilangovan is here to explain the success they’ve seen with super-resolution and, though he concentrates on upscaling, this is just as relevant to improving downscaling. Here are our previous articles covering super-resolution.

Adithyan outlines two main enablers of super-resolution which allow it to displace traditional methods such as bicubic and Lanczos. The first is the advent of machine learning, which now has a good foundation of libraries and documentation for coders, making it fairly accessible to a wide audience. The second is the proliferation of GPUs and, particularly in mobile devices, neural engines. Using the GPUs inside CPUs or in desktop PCI slots allows the analysis to be done locally, without transferring great amounts of video to the cloud solely for processing or identification. And if your workflow is in the cloud, it’s now easy to rent GPUs and FPGAs to handle such workloads.

Using machine learning doesn’t only allow for better upscaling on a frame-by-frame basis; it can also form a view of the whole file, or at least the whole scene. With a better understanding of the type of video it’s analysing (cartoon, sports, computer screen etc.) it can tune the upscaling algorithm to deal with that content optimally.
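As a flavour of how accessible the tooling has become, here is a minimal frame-by-frame sketch using OpenCV’s dnn_superres module (from the opencv-contrib-python package) with a pre-trained ESPCN model. The model and frame paths are placeholders, and this is our illustration rather than Bitmovin’s pipeline.

```python
import cv2

# Minimal super-resolution sketch: upscale one frame 2x using a
# pre-trained ESPCN model via OpenCV's dnn_superres module.
# Requires opencv-contrib-python; the .pb model is downloaded separately.

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x2.pb")          # placeholder path to the model file
sr.setModel("espcn", 2)              # model name and scale factor

frame = cv2.imread("frame_hd.png")   # e.g. a 1920x1080 frame
upscaled = sr.upsample(frame)        # -> 3840x2160
cv2.imwrite("frame_uhd.png", upscaled)
```

Content-aware tuning, as described above, would then amount to selecting a model trained for the detected content class (anime, sport, screen content) before upscaling.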

Anime has seen a lot of tuning for super-resolution. Due to anime’s long history, there are a lot of old cartoons which are both noisy and low resolution; they are still enjoyed now but would benefit from more resolution to match the screens we now routinely use.

Adithyan finishes by asking how we should best take advantage of super-resolution. Codecs such as LCEVC use it directly within the codec itself, but for systems with pre- and post-processing around the encoder, Adithyan suggests it’s viable to consider reducing the bitrate to cut CDN costs, knowing that, with super-resolution at the decoder, the video quality can actually be maintained.
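One way to sanity-check that trade-off, under the same OpenCV/ESPCN assumptions as the sketch above: halve the resolution (standing in for a cheaper stream), restore it with super-resolution, and compare the result against a plain bicubic upscale.

```python
import cv2

# Sketch: simulate sending fewer pixels and recovering them at the
# decoder. Downscale a frame 2x, upscale it back with ESPCN, and
# compare PSNR against a plain bicubic upscale of the same frame.
# Assumes even frame dimensions so the sizes line up.

original = cv2.imread("frame_uhd.png")
h, w = original.shape[:2]
small = cv2.resize(original, (w // 2, h // 2), interpolation=cv2.INTER_AREA)

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x2.pb")          # placeholder model path
sr.setModel("espcn", 2)
restored = sr.upsample(small)

bicubic = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)
print("bicubic PSNR:", cv2.PSNR(original, bicubic))
print("espcn   PSNR:", cv2.PSNR(original, restored))
```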

The video ends with a Q&A.

Watch now!
Download the slides
Speaker

Adithyan Ilangovan
Encoding Engineer,
Bitmovin

Video: Super-Aggregation & Cloud-Powered IPTV – A Case Study

What is super-aggregation and why is it necessary for the media & entertainment industry? Surveys show that 47% of US citizens are frustrated by having to subscribe to multiple services rather than just one as they used to. Moreover, 57% are annoyed when content they like moves to another platform. This indicates there’s an opportunity in the market for services which aggregate multiple offerings into one, meeting the needs of consumers and expanding the reach of streaming providers.

This webinar from IBC 365 is hosted by Andy Waltenspiel, who talks to TiVo, Liberty Latin America and Velocix about how they’ve worked together to launch a super-aggregation service for Latin American countries. Chris Thun from TiVo explains that he feels ‘universal search’ is essential, as is ‘universal browse’: a recommendation engine which understands you and helps you discover content from all your services.
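As a toy illustration of what universal search implies architecturally, with entirely hypothetical per-provider catalogue APIs:

```python
# Toy sketch of universal search: fan a query out to each provider's
# catalogue and merge everything into one ranked list, so the viewer
# sees results regardless of which service owns the title.

def universal_search(query, providers):
    merged = []
    for provider in providers:
        for item in provider.search(query):        # hypothetical API
            merged.append((item.relevance, item.title, provider.name))
    merged.sort(reverse=True)                      # best matches first
    return merged
```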

Liberty’s Edwin Elberg explains that this project came about when Liberty Latin America split from Liberty Global. Unusually, this meant they had to learn how to ‘scale down’ their operations. But it also gave them an opportunity to differentiate themselves through a super-aggregation project which relied on the strengths of TiVo and Velocix: TiVo handling the customer-facing elements, and Velocix managing the technical integration of the project and the delivery of a number of workflow features such as ingest, recording and delivery to any device.

Jim Brickmeier from Velocix explains that their choice to use the cloud was made with flexibility and reactiveness as the priorities. Telcos which have their own network, though, may still want to use on-prem solutions, since delivering to their own network is so much cheaper that way, and they can run many of the same technologies on-prem that they would in the cloud.

From a business perspective, having a super-aggregator is a benefit for many of the streaming services because Liberty is a big, trusted name with many subscribers. They already have a commercial and brand relationship with millions of people, which significantly reduces the friction of signing up for new content, particularly in a part of the world where credit card ownership is far from universal.

Overall, the service seeks to differentiate itself and retain subscribers by being the only player in the market combining a user experience at that level with great content. Edwin Elberg points out that setting up the many relationships behind the service is far from easy, involving numerous contracts and careful planning to deal with international sales tax rules. Onboarding a new partner, he suggests, should become quicker, and he thinks the future lies in a standard for both the content provider and the aggregator to conform to.

Watch now!
Speakers

Chris Thun
Vice President, Product,
TiVo
Edwin Elberg
Sr. Director, Product Development,
Liberty Latin America
Jim Brickmeier
Chief Product and Marketing Officer,
Velocix
Andy Waltenspiel
General Manager,
Waltenspiel Management Consulting