Video: Broadcast in the cloud!

Milan Video Tech’s back with three takes on putting broadcast into the cloud. So often we see the cloud as ‘for streaming’. That’s not today’s topic; we’re talking ingest and live transmission in the cloud. Andrea Fassina from videodeveloper.io introduces the three speakers, who share their tips for doing cloud well: using KPIs, using the cloud to be efficient, agile and able to scale and, finally, running your live linear channels through the cloud as part of their transmission path.

First up is Christopher Brähler from SDVI who looks at how they helped a customer become more efficient, be agile and scale. His first example shows how, using a cloud workflow in AWS built on services such as Lambda, the customer was able to reduce human interaction with a piece of content during ingest by 80%. The problem was that every piece of content took two hours to ingest, mainly because people had to watch for problems. Christopher shows how this process was automated. He highlights some easy wins gained by front-loading the process with MediaInfo, which can easily detect obvious problems like incorrect duration, codec etc. Christopher then shows how the rest of the workflow used AWS components and Lambda to decide whether to transcode or rewrap files and then pass them on to a full QC process. The reduction was profound and, whilst this could have been achieved with similar MAM-style processing on-premise, being in the cloud enables the next two benefits.
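The talk doesn’t show any code, but the kind of front-loaded check Christopher describes is easy to picture. Below is a minimal sketch, assuming a Lambda-style handler and the pymediainfo wrapper around MediaInfo; the accepted-codec list, thresholds and routing labels are illustrative, not SDVI’s actual workflow.

```python
# A minimal sketch of a front-loaded ingest check, assuming a Lambda-style
# handler and the pymediainfo wrapper around MediaInfo. The codec list,
# thresholds and routing decisions are illustrative only.
from pymediainfo import MediaInfo

ACCEPTED_VIDEO_CODECS = {"AVC", "HEVC", "ProRes"}   # assumption for illustration

def check_asset(local_path, expected_duration_s, tolerance_s=2.0):
    """Return a routing decision before any human looks at the file."""
    media = MediaInfo.parse(local_path)
    general = next(t for t in media.tracks if t.track_type == "General")
    video = next((t for t in media.tracks if t.track_type == "Video"), None)

    if video is None:
        return "reject: no video track"

    duration_s = float(general.duration or 0) / 1000.0   # MediaInfo reports ms
    if abs(duration_s - expected_duration_s) > tolerance_s:
        return f"reject: duration {duration_s:.1f}s vs expected {expected_duration_s}s"

    if video.format not in ACCEPTED_VIDEO_CODECS:
        return "transcode"        # send down a transcode branch

    return "rewrap"               # container-only fix, then on to QC

def handler(event, context):
    # In a real workflow the file would first be fetched from S3 (omitted here).
    decision = check_asset(event["local_path"], event["expected_duration_s"])
    return {"decision": decision}
```

Checks like these catch the obvious failures up front, so the expensive, slower QC stages only run on files that stand a chance of passing.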

The next example shows how the same customer was able to adjust quickly to a new demand on the workflow when they found that some arriving files weren’t compatible with their ingest process, due to a bug in a certain vendor’s software which was going to take months to fix. Using the same workflow, they were able to branch out, using MediaInfo to determine whether that vendor’s software was involved. If it was, the file would be sent down a newly-created path in the workflow that worked around the problem. The benefit of this being in the cloud touches on the third example – scalability. Being in the cloud, it didn’t really matter how much or how little this new branch was used. When it wasn’t being used, it cost nothing; if it was needed a lot, it would scale up.
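A branch like that could hinge on something as simple as MediaInfo’s ‘writing application’ field. This is a hypothetical sketch of the routing decision; the vendor string is invented for illustration.

```python
# Hypothetical routing check: divert files written by the buggy vendor tool
# down a workaround branch. The vendor string is made up for illustration.
from pymediainfo import MediaInfo

BUGGY_WRITING_APP = "ExampleVendor Packager 2.3"   # hypothetical

def route(local_path):
    general = next(t for t in MediaInfo.parse(local_path).tracks
                   if t.track_type == "General")
    writing_app = general.writing_application or ""
    if BUGGY_WRITING_APP in writing_app:
        return "workaround-branch"   # newly created path that fixes the files
    return "normal-ingest"
```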

The third example came when this customer merged with another large broadcaster. The cloud-based workflow meant that they were able to scale up easily and put a massive library of content through ingest in a matter of two or three months, rather than the year or more it would otherwise have taken on dedicated equipment.

Next up is Luca Moglia from Akamai who shares his experience of getting great value out of cloud infrastructure. Security should be the basis of any project, whether it’s on the internet or not, so it’s no surprise that Luca starts with the mandate to ‘secure all connections’. Whilst he focuses on the streaming use case, his points can be generalised to programme contribution. He splits the chain into the ‘first mile’ (origin/DC to cloud/CDN), the ‘middle mile’ (cloud/CDN to edge) and the ‘last mile’, which is delivery from the edge to the viewer. Luca looks at options for securing these segments, such as ‘AWS Connect’ and equivalent services for Azure & GCP. He looks at using private network interconnections (PNIs) for CDNs and then examines options for the last mile.

His other pieces of advice are to offload as much of the ‘origin’ as you can, meaning reduce the load on your origin server by using an origin gateway and also a multi-CDN strategy. Similarly, he suggests offloading as much logic to the edge as is practical. After all, the viewer’s round-trip time (RTT) to the edge is about as low as it can practically be, so it’s better to keep two-way traffic there than deeper in the CDN, particularly as the edge is usually within the viewer’s ISP.

Another plea is to remember that CMAF is not just there to reduce latency. Luca emphasises all the other benefits, which matter beyond low-latency use cases, such as being able to use the same segments to deliver both HLS and DASH streams. Sharing the same segments allows CDNs to cache more effectively, which is a win for everyone. It also reduces storage costs and brings all DRM under CENC, a single mechanism supporting several different DRM systems.
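To make the ‘same segments, two formats’ point concrete, here is a small sketch that builds an HLS media playlist for fMP4/CMAF segments; a DASH MPD’s SegmentTemplate could reference exactly the same init.mp4 and seg_*.m4s files. Filenames and durations are made up for illustration.

```python
# Illustrative only: build an HLS media playlist that points at CMAF (fMP4)
# segments. A DASH MPD's SegmentTemplate could reference exactly the same
# init.mp4 and seg_*.m4s files, so only one copy needs storing and caching.
SEGMENT_DURATION = 4.0  # seconds, assumption for illustration

def hls_media_playlist(segment_names, init_name="init.mp4"):
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:7",                       # fMP4 segments need version 7+
        f"#EXT-X-TARGETDURATION:{int(SEGMENT_DURATION)}",
        f'#EXT-X-MAP:URI="{init_name}"',          # shared CMAF init segment
    ]
    for name in segment_names:
        lines.append(f"#EXTINF:{SEGMENT_DURATION:.3f},")
        lines.append(name)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

print(hls_media_playlist([f"seg_{i:05d}.m4s" for i in range(1, 4)]))
```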

Luca finishes his presentation by suggesting we look at the benefits of using HTTP/2 and HTTP/3 to reduce round trips and, in theory, speed up delivery. Similarly, he talks about the TCP congestion-control algorithm BBR, which should improve throughput.
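On the client side, trying HTTP/2 is straightforward. A minimal sketch using the httpx library (installed with its optional http2 extra) is below; the URL is a placeholder. BBR itself is enabled in the server’s kernel rather than in client code.

```python
# Minimal sketch: fetch segments over a single multiplexed HTTP/2 connection
# using httpx (pip install "httpx[http2]"). The URL is a placeholder.
import httpx

with httpx.Client(http2=True) as client:
    for i in range(1, 4):
        r = client.get(f"https://cdn.example.com/live/seg_{i:05d}.m4s")
        print(r.http_version, r.status_code, len(r.content))
```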

Last to speak is Davide Maggioni from Sky Italia who shows us how they quickly transitioned to a cloud workflow for NOWTV and SKYGO when asked to move to HD, maintain costs and make the transition quickly. They developed a plan to move metadata enrichment, encryption, encoding and DRM into the cloud. This helped them reduce procurement overhead and cut deployment time.

Key to the project was taking an ‘infrastructure as code’ approach whereby everything is configured via API, driven by automated code. This reduces mistakes, increases repeatability and also allowed them to deploy pop-up channels more easily.
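Sky Italia’s tooling isn’t shown in the talk, but the idea is that a channel becomes a declarative description handed to code that calls the cloud provider’s APIs. The sketch below is hypothetical: the spec fields and the deploy steps stand in for real API calls.

```python
# Hypothetical 'infrastructure as code' sketch: a pop-up channel described as
# data, deployed and torn down by code rather than by hand. The spec fields
# and deploy steps stand in for real cloud API calls (e.g. via boto3).
from dataclasses import dataclass

@dataclass
class ChannelSpec:
    name: str
    resolution: str              # e.g. "1920x1080"
    video_bitrates_kbps: tuple
    drm: str                     # e.g. "cenc"
    popup: bool = False          # temporary channel, torn down after the event

def deploy(spec: ChannelSpec):
    # In a real system each step would be an idempotent API call, so the same
    # spec can be re-applied to rebuild the channel identically.
    print(f"creating encoder for {spec.name} at {spec.resolution}")
    print(f"creating packager with ladder {spec.video_bitrates_kbps}")
    print(f"configuring DRM: {spec.drm}")

deploy(ChannelSpec("popup-sport-1", "1920x1080", (6000, 3000, 1200), "cenc", popup=True))
```

Because the channel is just data plus repeatable code, spinning up a pop-up channel for an event and removing it afterwards carries little of the risk of a hand-built deployment.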

Davide takes us through the diagrams and the ways in which they are able to deploy permanent and temporary channels, showing ‘mezzanine’ encoding on-premise, manipulation done in the cloud, and then a return to on-premise ahead of transmission to the CDN.

Watch now!
Speakers

Christopher Brähler
Director of Product Management,
SDVI Corporation
Davide Maggioni
OTT & Cloud Process and Delivery,
Sky Italia
Luca Moglia
Media Solutions Engineer,
Akamai
Andrea Fassina
Freelance Developer,
https://videodeveloper.io

Video: Player Optimisations

If you’ve ever tried to implement your own player, you’ll know there’s a big gap between understanding the HLS/DASH spec and getting an all-round great player. Finding the best, most elegant, ways of dealing with problems like buffer exhaustion takes thought and experience. The same is true for low-latency playback.

Fortunately, Akamai’s Will Law is here to give us the benefit of his experience implementing his own players and helping customers monitor the performance of theirs. At the end of the day, the player is the ‘kingpin’ of streaming, comments Will. Without it, you have no streaming experience. All other aspects of the stream can be worked around or mitigated, but if the player’s not working, no one watches anything.

Will’s first tip is to implement ‘segment abandonment’. This is when a video player foresees that downloading the current segment is taking too long: if it carries on, the player will run out of video to play before the segment has arrived. A well-programmed player will spot this and try to continue the download of the segment from another server or CDN. However, Will says that many players simply continue to wait for the download and, in the meantime, playback stalls.
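Will doesn’t show an implementation, but the decision itself is simple arithmetic. A sketch of the kind of check a player might run while a segment downloads is below; the failover to another CDN is only indicated by the return value, and the numbers are illustrative.

```python
# Sketch of a segment-abandonment check, run periodically during a download.
# If the projected time to finish fetching the segment exceeds the video left
# in the buffer, abandon and re-request the segment from an alternate CDN.
def should_abandon(bytes_received, bytes_total, elapsed_s, buffer_remaining_s,
                   safety_margin_s=0.5):
    if bytes_received == 0:
        return elapsed_s > buffer_remaining_s          # nothing has arrived at all
    throughput = bytes_received / elapsed_s            # bytes per second so far
    remaining_s = (bytes_total - bytes_received) / throughput
    return remaining_s > (buffer_remaining_s - safety_margin_s)

# e.g. 0.4 MB of a 2 MB segment after 1.5 s, with 2 s of buffer left:
if should_abandon(0.4e6, 2e6, 1.5, 2.0):
    print("abandon: retry this segment from the alternate CDN or a lower bitrate")
```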

Tip two is about ABR switching in low-latency, chunked-transfer streams. The playback buffer needs to be longer than the chunk duration; without this precaution, there isn’t enough time for the player to decide to switch down a layer. Will shows a diagram of how a 3-second playback buffer can recover as long as it uses 2-second segments.

Will’s next two suggestions are to embed your initialisation segment in the manifest by base64-encoding it. This makes the manifest larger but removes the round trip which would otherwise be needed to request it. That can significantly improve startup performance: the RTT could be a quarter of a second, which is a big deal for low-latency streams and for anyone who wants a short time-to-play. Similarly, advises Will, make those initial requests in parallel; don’t wait for the init file to download before requesting the first media segment.
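The parallel-request idea looks something like the sketch below; the URLs are placeholders and a real player would of course use its own networking stack.

```python
# Sketch of parallel start-up requests: fetch the init segment and the first
# media segment at the same time instead of serially. URLs are placeholders.
import concurrent.futures
import urllib.request

def fetch(url):
    with urllib.request.urlopen(url) as r:
        return r.read()

init_url = "https://cdn.example.com/live/init.mp4"
first_seg_url = "https://cdn.example.com/live/seg_00001.m4s"

with concurrent.futures.ThreadPoolExecutor() as pool:
    init_future = pool.submit(fetch, init_url)
    seg_future = pool.submit(fetch, first_seg_url)
    init_data, seg_data = init_future.result(), seg_future.result()

# If the init segment is instead base64-embedded in the manifest, only the
# media segment needs a network round trip:
# import base64; init_data = base64.b64decode(embedded_init_string)
```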

Whilst many of the points in this talk focus on the player itself, Will says it’s wise for the player to provide metrics back to the CDN, carried in request headers or query args. This data can help the CDN serve media more intelligently. For instance, the player could send the segment duration to the CDN; knowing how long the segment is, the CDN can compare it with the download time to understand whether it’s serving the data too slowly. Perhaps the simplest idea is for the player to pass back a GUID which the CDN can write into its logs. This helps identify which of the millions of log lines are relevant to your player, so you can run your own analysis on a player-by-player level.
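In practice that can be as little as a few extra request headers. The header names below are invented for illustration; the point is simply that the CDN can log and act on whatever the player chooses to send.

```python
# Sketch of a player attaching metrics to each segment request so the CDN can
# log and act on them. The header names here are invented for illustration.
import uuid
import urllib.request

SESSION_ID = str(uuid.uuid4())   # one GUID per playback session, echoed in CDN logs

def request_segment(url, segment_duration_s, buffer_level_s):
    req = urllib.request.Request(url, headers={
        "X-Session-Id": SESSION_ID,                         # ties CDN log lines to this player
        "X-Segment-Duration": f"{segment_duration_s:.3f}",  # lets the CDN spot slow delivery
        "X-Buffer-Level": f"{buffer_level_s:.1f}",
    })
    with urllib.request.urlopen(req) as r:
        return r.read()
```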

Will’s other points include advice on how to avoid starting playback at the lowest bandwidth and working up, which doesn’t look great and is often unnecessary: the player could run its own speed test, or the CDN could advise based on the initial requests. He also advises never trusting the system clock; use an external clock instead.

Regarding playback latency, it pays to be careful when starting out. If you blindly start an HLS stream, your latency will vary within the duration of a segment. Will advocates using HEAD requests to see when the next chunk is available and only then starting playback. Another technique is to vary your playback rate so you can ‘catch up’. The benefit of rate adjustment is that you can ask all your players to sit a certain latency behind realtime, so they stay close to synchronous.
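A minimal sketch of that rate adjustment is below; the gain and the rate limits are illustrative values, not ones given in the talk.

```python
# Sketch of latency-locked playback-rate adjustment: nudge the playback rate
# towards 1.0 +/- a few percent so every player converges on the same target
# latency behind realtime. Gain and rate limits are illustrative values.
def playback_rate(current_latency_s, target_latency_s,
                  gain=0.05, min_rate=0.95, max_rate=1.05):
    error = current_latency_s - target_latency_s     # positive => we're behind
    rate = 1.0 + gain * error                        # speed up if behind, slow if ahead
    return max(min_rate, min(max_rate, rate))

print(playback_rate(4.0, 3.0))   # 1.05  -> play slightly faster to catch up
print(playback_rate(2.5, 3.0))   # 0.975 -> slow down a touch
```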

Two great tips which are often overlooked: request multiple GOPs at once. This helps open up the TCP window, giving you a more efficient download. For mobile, it can also help the battery by allowing the radio to cycle on and off more efficiently. Will mentions that, when it comes to GOPs, for some applications it’s important to look at exactly how long your GOP should be; usually, aligning the segment duration with an integer number of audio frames is the way to choose it.
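The audio-alignment point is easy to check with a little arithmetic, assuming AAC’s 1024-sample frames at 48 kHz:

```python
# Why segment duration should be a whole number of audio frames: an AAC frame
# is 1024 samples, which at 48 kHz is ~21.333 ms, so a 2 s segment is 93.75
# frames (not aligned) while 1.92 s is exactly 90 frames.
SAMPLES_PER_FRAME = 1024
SAMPLE_RATE = 48000
frame_duration_s = SAMPLES_PER_FRAME / SAMPLE_RATE   # ~0.021333 s

for segment_s in (2.0, 1.92, 3.84):
    frames = segment_s / frame_duration_s
    aligned = abs(frames - round(frames)) < 1e-6
    print(f"{segment_s:>5} s -> {frames:.2f} audio frames, aligned={aligned}")
```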

The talk finishes with an appeal to move to CMAF containers for streaming as they allow you to deliver HLS and DASH streams from the same media segments and move to a common DRM scheme. Will says that CBCS-encrypted content is now becoming nearly all-pervasive. Finally, Will gives some tips on how players can best analyse which CDN to use in multi-CDN environments.

Watch now!
Speaker

Will Law
Chief Architect,
Akamai

Video: CMAF And The Future Of OTT

Why is CMAF still ‘the future’ of OTT? Published in 2018, CMAF’s been around for a while now, so what are the challenges and hurdles holding up implementation? Are there reasons not to use it at all? CMAF is a way of encoding and packaging media which can then be delivered using MPEG-DASH or HLS, the latter being the path Disney+ has chosen, for instance.

This panel from Streaming Media West Connect, moderated by Jan Ozer, discusses CMAF use within Akamai, Netflix, Disney+ and Hulu. Peter Chave from Akamai starts off making the point that CMAF is important to CDNs because, if companies are able to use just one set of CMAF files as the source for different delivery formats, this reduces storage costs for customers and makes each individual file more popular, increasing the chance of it being available in the CDN (particularly at the edge) and reducing cache misses. They’ve had to do some work to ensure that CMAF is carried through the CDN efficiently and that manifests are correctly checked.

Disney+, explains Bill Zurat, is 100% HLS CMAF. Benefiting from the long experience of the Disney Streaming Services teams (formerly BAMTECH), but also from being a newly built service, Disney were able to bring in CMAF from the start. There are issues in ensuring end-device support but, as part of the launch, a number of devices which couldn’t support either the protocol or the DRM needed were sunsetted.

Hulu is an aggregator, so they have strong motivation to normalise inputs, we hear from Hulu’s Nick Brookins. But they also originate programming along with live streaming, so CMAF has an important part to play on the way in and the way out. Hulu dynamically regenerates their manifests, so they can iterate easily as they roll out. They are currently part-way through the rollout and will achieve full CMAF compatibility within the next 18 months.

The conversation turns to DRM. CMAF supports two encryption modes, CTR and CBC (also known as CBCS); CBCS is the mode adopted by Apple, whilst CTR has historically been used elsewhere. AV1 supports both, but the recommendation has been to use CBC, which appears to have been followed universally to date, explains Netflix’s Cyril Concolato. Netflix have been using AV1 since it was finalised and are aiming to have most titles transitioned to CMAF by 2021.

Peter comments, from Akamai’s position, that they see a number of customers who, like Disney+ and Peacock, have been able to enter the market recently and move straight to CMAF, but there is a whole continuum of companies who are restricted by their workflows and their viewers’ devices in moving to CMAF.

Low-latency streaming is one topic which invigorates minds and debates for many in the industry. Netflix, being purely video on demand, is not interested in low-latency streaming. However, Hulu is, as is Disney Streaming Services, though Bill cautions us against rushing to the bottom in terms of latency. Quality of experience is improved with extra latency, both in terms of reduced rebuffering and, in some cases, picture quality. Much of Disney Streaming Services’ output needs to match cable, rather than meeting over-the-air latencies or less.

The panel session finishes with a quick-fire round of questions from Jan and the audience covering codec strategy, whether their workflows have changed to incorporate CMAF, just-in-time vs static packaging, and what customers get out of CMAF.

Watch now!
Speakers

Cyril Concolato
Senior Software Engineer,
Netflix
Peter Chave
Principal Architect,
Akamai
Nick Brookins
VP, Platform Services Group,
Hulu
Bill Zurat
VP, Core Technology,
Disney Streaming Services
Moderator: Jan Ozer
Contributing Editor, Streaming Media
Owner, StreamingLearningCenter.com

Video: Optimising Video for Everyone at Once

CDNs are all about scale. Their raison d’être is to help you scale, but that’s no trivial task, which is why companies like Akamai exist: so you only have to concentrate on your core product which, for this talk, is online streaming. Akamai’s main game is to move the content you provide to them to the ‘edge’ of the network, as close to the user as possible.

The pandemic certainly put the CDNs, as well as the telcos, through their paces. In this talk, Peter Chave from Akamai talks about the challenges in the scale they’re achieving on a day-to-day basis. Whilst it was lucky that 2020 was due to be a ‘big’ year in terms of sporting events, the Olympics being but one example, meaning that large capacity had already been planned for, the whole industry has had to iterate to get things right as the load has shifted and increased.

In March, Akamai saw a year’s worth of growth. The shift in traffic was not just in magnitude but also in the balance of upload vs download. With video conferencing and VPNs used all the more, the asymmetric design of consumer internet services was put to the test.

Peter explains that companies like Netflix volunteered to reduce the burden by reducing bitrates. This was done in two main ways. One was simply to remove the top rendition from manifests. The other was to update players to be much more conservative as they work their way up through the bitrates. It has also made some companies consider a switch to HEVC or other codecs which, whilst not an immediate fix, can reduce overall bitrates across a service.
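The ‘remove the top level’ idea is easy to picture as a small manifest filter. Below is a sketch for an HLS multivariant playlist; the parsing is deliberately simplistic and the example playlist is made up.

```python
# Sketch: drop the highest-bandwidth variant from an HLS multivariant playlist.
# Parsing is deliberately simplistic (assumes each #EXT-X-STREAM-INF line is
# followed by its URI on the next line); real manifests need a proper parser.
import re

def drop_top_rendition(playlist_text):
    lines = playlist_text.strip().splitlines()
    variants = []                                   # (bandwidth, stream-inf line index)
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            bw = int(re.search(r"BANDWIDTH=(\d+)", line).group(1))
            variants.append((bw, i))
    if len(variants) < 2:
        return playlist_text                        # nothing sensible to remove
    _, top = max(variants)
    keep = [l for i, l in enumerate(lines) if i not in (top, top + 1)]
    return "\n".join(keep) + "\n"

example = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
720p.m3u8
"""
print(drop_top_rendition(example))
```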

The CDN can also adjust manifests, which is much more flexible: rather than editing a central file, the CDN can rewrite manifests on the fly at the edge, in certain geographies and at certain times of day. Lastly, Peter explains how Akamai have been throttling the speed at which video chunks are served. When a player has far more bandwidth available than it needs for a video, there is no reason for it to download chunks at 100Mbps, so throttling the download speed helps reduce peaks.
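Pacing like that amounts to capping the rate at which bytes are handed to the client. A toy sketch is below; real CDNs do this inside their delivery stack, and the 20 Mbit/s cap here is purely illustrative.

```python
# Toy sketch of paced chunk delivery: hand the segment out in slices, sleeping
# between them so the effective rate never greatly exceeds the cap. A real CDN
# does this in its delivery stack; the 20 Mbit/s cap here is illustrative.
import time

def paced_chunks(data, max_bps=20_000_000, slice_bytes=64 * 1024):
    bytes_per_second = max_bps / 8
    for offset in range(0, len(data), slice_bytes):
        chunk = data[offset:offset + slice_bytes]
        yield chunk
        time.sleep(len(chunk) / bytes_per_second)   # pace to the cap

# e.g. a 2 MB segment served at ~20 Mbit/s takes ~0.8 s instead of line speed:
total = sum(len(c) for c in paced_chunks(b"\x00" * 2_000_000))
```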

Watch now!
Speakers

Peter Chave
Principal Architect,
Akamai Technologies