Video: Broadcast in the cloud!

Milan Video Tech’s back with three takes on putting broadcast into the cloud. So often we see the cloud as ‘for streaming’. That’s not today’s topic; we’re talking ingest and live transmissions in the cloud. Andrea Fassina from videodeveloper.io introduces the three speakers who share their tips for doing cloud well: using KPIs, using the cloud to be efficient, agile and to scale and, finally, running your live linear channels through the cloud as part of their transmission path.

First up is Christopher Brähler from SDVI who looks at how they helped a customer become more efficient, be agile and scale. His first example shows how, using a cloud workflow in AWS built on many AWS services such as Lambda, the customer was able to reduce human interaction with a piece of content during ingest by 80%. The problem was that every piece of content took two hours to ingest, mainly because people had to watch for problems. Christopher shows how this process was automated. He highlights some easy wins from front-loading the process with MediaInfo, which can easily detect obvious problems such as an incorrect duration or codec. Christopher then shows how the rest of the workflow used AWS components and Lambda to decide whether to transcode or rewrap files before passing them on to a full QC process. The reduction was profound and, whilst this could have been achieved with similar MAM-style processing on-premise, being in the cloud enables the next two benefits.
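To give a flavour of the kind of front-loaded check Christopher describes, here’s a minimal sketch (not the customer’s actual code) that shells out to the MediaInfo CLI and rejects obviously broken files before the heavier QC and transcode stages. The expected codec and minimum duration are invented thresholds for illustration.

```python
import json
import subprocess

def probe(path):
    """Run the MediaInfo CLI and return its JSON report as a dict."""
    out = subprocess.run(["mediainfo", "--Output=JSON", path],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def quick_checks(path, expected_codec="ProRes", min_duration_s=60):
    """Cheap, front-loaded sanity checks run before any expensive QC steps.
    Thresholds here are illustrative assumptions, not real house rules."""
    problems = []
    for track in probe(path)["media"]["track"]:
        if track["@type"] == "General":
            if float(track.get("Duration", 0)) < min_duration_s:
                problems.append("duration shorter than expected")
        if track["@type"] == "Video":
            if expected_codec not in track.get("Format", ""):
                problems.append(f"unexpected video codec {track.get('Format')}")
    return problems

if __name__ == "__main__":
    issues = quick_checks("incoming/episode_001.mxf")
    print("Reject early:" if issues else "Pass to full QC", issues)
```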

The next example is how the same customer was able to quickly adjust to a new demand on the workflow when they found that some files arriving weren’t compatible with their ingest process due to a bug in a certain vendor’s software which was going to take months to fix. Using this same workflow they were able to branch out, using MediaInfo to determine whether that vendor’s software was involved. If it was, the file was sent down a newly-created path in the workflow that worked around the problem. The benefit of this being in the cloud touches on the third example: scalability. Being in the cloud, it didn’t really matter how much or how little this new branch was used. When it wasn’t being used, it cost nothing. If it was needed a lot, it would scale up.

The third example is when this customer merged with another large broadcaster. The cloud-based workflow meant that they were able to scale up easily and put a massive library of content through ingest in a matter of two or three months, rather than the year or more it would otherwise have taken on dedicated equipment.

Next up is Luca Moglia from Akamai who shares his experience of getting great value out of cloud infrastructure. Security should be the basis of any project, whether it’s on the internet or not, so it’s no surprise that Luca starts with the mandate to ‘secure all connections’. Whilst he focuses on the streaming use case, his points can be generalised to programme contribution. He splits the chain into the ‘first mile’ (origin/DC to cloud/CDN), the ‘middle mile’ (cloud/CDN to edge) and the ‘last mile’, the delivery from the edge to the viewer. Luca looks at options to secure these segments such as AWS Direct Connect and the equivalent services for Azure & GCP. He looks at using private network interconnections (PNIs) for CDNs and then examines options for the last mile.

His other pieces of advice are to offload as much ‘origin’ as you can, meaning to reduce the load on your origin server by using an Origin Gateway but also a multi-CDN strategy. Similarly, he suggests offloading as much logic to the edge as is practical. After all, the viewer’s round-trip time (RTT) to the edge is the lowest practical, so two-way traffic is better handled there than deeper into the CDN, since the edge is usually within the same ISP.

Another plea is to remember that CMAF is not just there to reduce latency. Luca emphasises all the other benefits which aren’t only important for low-latency use cases, such as being able to use the same segments to deliver both HLS and DASH streams. Being able to share the same segments allows CDNs to cache better, which is a win for everyone. It also reduces storage costs and brings all DRM under CENC, a single mechanism supporting several different DRM systems.

Luca finishes his presentation by suggesting we look at the benefits of HTTP/2 and HTTP/3 to reduce round trips and, in theory, speed up delivery. Similarly, he talks about the BBR TCP congestion-control algorithm which should improve throughput.

Last to speak is Davide Maggioni from Sky Italia who shows us how they quickly transitioned to a cloud workflow for NOWTV and SKYGO when asked to move to HD, maintain costs and make the transition quickly. They developed a plan to move metadata enrichment, encryption, encoding and DRM into the cloud. This helped them reduce procurement overhead and allowed them to reduce deployment time.

Key to the project was taking an ‘infrastructure as code’ approach whereby everything is configured by API and run from automated code. This reduces mistakes, increases repeatability and also allowed them to deploy pop-up channels more easily.
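To make ‘everything is configured by API’ concrete, here is a hypothetical sketch of how a pop-up channel might be declared as code and pushed through a REST provisioning API. The endpoint, payload fields and token are all invented for illustration and are not Sky Italia’s actual interfaces.

```python
import requests

# Hypothetical channel definition, kept in version control alongside the code
POPUP_CHANNEL = {
    "name": "popup-sport-01",
    "source": "mezzanine-onprem-feed-7",
    "video": {"codec": "h264", "resolution": "1920x1080", "bitrate_kbps": 8000},
    "drm": {"scheme": "cenc"},
    "lifetime_hours": 48,
}

def deploy(channel, api="https://provisioning.example/v1/channels", token="REDACTED"):
    """Create a channel purely through the (hypothetical) provisioning API,
    so every deployment is repeatable and reviewable as code."""
    r = requests.post(api, json=channel,
                      headers={"Authorization": f"Bearer {token}"},
                      timeout=30)
    r.raise_for_status()
    return r.json()["id"]

if __name__ == "__main__":
    print("Deployed channel", deploy(POPUP_CHANNEL))
```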

Davide takes us through the diagrams and ways in which they are able to deploy permanent and temporary channels, showing ‘mezzanine’ encoding on-premise, manipulation done in the cloud, and then a return to on-premise equipment ahead of transmission to the CDN.

Watch now!
Speakers

Christopher Brähler
Director of Product Management,
SDVI Corporation
Davide Maggioni
OTT & Cloud Process and Delivery,
Sky Italia
Luca Moglia
Media Solutions Engineer,
Akamai
Andrea Fassina
Freelance Developer,
https://videodeveloper.io

Video: Player Optimisations

If you’ve ever tried to implement your own player, you’ll know there’s a big gap between understanding the HLS/DASH spec and getting an all-round great player. Finding the best, most elegant, ways of dealing with problems like buffer exhaustion takes thought and experience. The same is true for low-latency playback.

Fortunately, Akamai’s Will Law is here to give us the benefit of his experience implementing his own players and helping customers monitor the performance of theirs. At the end of the day, the player is the ‘kingpin’ of streaming, comments Will. Without it, you have no streaming experience. All other aspects of the stream can be worked around or mitigated, but if the player’s not working, no one watches anything.

Will’s first tip is to implement ‘segment abandonment’. This is when a video player foresees that downloading the current segment is taking too long; if it continues, the player will run out of video to play before the segment has arrived. A well-programmed player will spot this and try to continue the download of the segment from another server or CDN. However, Will says that many players will simply continue to wait for the download and, in the meantime, playback will fail.
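A back-of-the-envelope version of that decision, assuming the player can measure bytes received so far and knows how much buffered video it has left, might look like this sketch (illustrative only, not code from Will’s talk; the safety factor is an assumption):

```python
def should_abandon(bytes_total, bytes_received, elapsed_s, buffer_remaining_s,
                   safety_factor=1.2):
    """Return True if finishing this segment at the current rate would likely
    drain the playback buffer before the segment arrives."""
    if elapsed_s <= 0 or bytes_received <= 0:
        return False
    rate = bytes_received / elapsed_s                # observed throughput, bytes/s
    eta = (bytes_total - bytes_received) / rate      # time left to finish download
    return eta * safety_factor > buffer_remaining_s  # abandon and retry elsewhere

# e.g. 4 MB segment, 1 MB received after 2 s, only 3 s of buffer left -> abandon
print(should_abandon(4_000_000, 1_000_000, 2.0, 3.0))
```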

Tip two is about ABR switching in low-latency, chunked transfer streams. The playback buffer needs to be longer than the chunk duration. Without this precaution, there will not be enough time for the player to make the decision to switch down layers. Will shows a diagram of how a 3-second playback buffer can recover as long as it uses 2-second segments.
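A trivial sketch of that rule of thumb follows; the one second of ‘decision overhead’ is my own assumption rather than a figure from the talk, but it shows why a 3-second buffer works with 2-second segments:

```python
def min_buffer_for_switching(segment_duration_s, decision_overhead_s=1.0):
    """In low-latency chunked transfer, the playback buffer needs to cover at
    least one whole segment plus the time needed to notice trouble and fetch
    a lower-bitrate alternative (overhead value is an assumption)."""
    return segment_duration_s + decision_overhead_s

print(min_buffer_for_switching(2.0))  # ~3 s buffer with 2 s segments
```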

Will’s next two suggestions concern startup. The first is to put your initialisation chunk in the manifest by base64-encoding it. This makes the manifest larger but removes the round trip which would otherwise be needed to request the chunk; that can significantly improve startup performance, as the RTT could be a quarter of a second, which is a big deal for low-latency streams and anyone who wants a short time-to-play. The second, advises Will, is to make those initial requests in parallel: don’t wait for the init file to be downloaded before requesting the media segment.
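As an illustration of the init-segment trick, a packaging script could base64-encode the init data and inline it as a data: URI rather than leaving it as a separate request. This sketch shows only the encoding step, not a complete manifest:

```python
import base64

def init_segment_as_data_uri(path):
    """Base64-encode an init segment so it can be embedded in the manifest
    (e.g. as a data: URI) instead of costing the player an extra round trip."""
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:video/mp4;base64,{payload}"

# The manifest gets bigger, but the player can initialise the decoder immediately.
print(init_segment_as_data_uri("video_init.mp4")[:80] + "...")
```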

Whilst many of the points in this talk focus on the player itself, Will says it’s wise for the player to provide metrics back to the CDN, hidden in request headers or query arguments. This data can help the CDN serve media more intelligently. For instance, the player could send the segment duration to the CDN. Knowing how long the segment is, the CDN can compare it to the download time to understand whether it’s serving the data too slowly. Perhaps the simplest idea is for the player to pass back a GUID which the CDN can put in its logs. This helps identify which of the millions of lines of logs are relevant to your player so you can run your own analysis on a player-by-player level.
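A sketch of a player tagging its segment requests with a session GUID and the segment duration so the CDN can log them and judge delivery speed; the header names here are invented for illustration, as players and CDNs agree their own conventions:

```python
import time
import uuid
import requests

SESSION_ID = str(uuid.uuid4())  # one GUID per playback session, echoed into CDN logs

def fetch_segment(url, segment_duration_s):
    """Request a media segment, passing hints the CDN can use server-side.
    Header names are illustrative, not a standard."""
    headers = {
        "X-Player-Session": SESSION_ID,
        "X-Segment-Duration": str(segment_duration_s),
    }
    start = time.monotonic()
    r = requests.get(url, headers=headers, timeout=10)
    r.raise_for_status()
    download_s = time.monotonic() - start
    # If download_s approaches segment_duration_s, delivery can't sustain realtime.
    return r.content, download_s
```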

Will’s other points include advice on how to avoid starting playing at the lowest bandwidth and working up. This doesn’t look great and is often unnecessary. The player could run its own speed test or the CDN could advise based on the initial requests. He advises never trusting the system clock; use an external clock instead.

Regarding playback latency, it pays to be wise when starting out. If you blindly start an HLS stream, your latency will vary within the duration of a segment. Will advocates HEAD requests to see when the next chunk is available and only then starting playback. Another technique is to vary your playback rate so you can ‘catch up’. The benefit of using rate adjustment is that you can ask all your players to sit at a certain latency behind realtime so they are close to synchronous.
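Here is a toy version of that catch-up logic: nudge the playback rate up or down a few percent depending on how far the player is from the target latency. The tolerance and adjustment values are illustrative, not figures from the talk:

```python
def playback_rate(current_latency_s, target_latency_s,
                  tolerance_s=0.5, max_adjust=0.05):
    """Return a playback rate close to 1.0 that gently pulls the player back
    towards the target latency, keeping many players near-synchronous."""
    error = current_latency_s - target_latency_s
    if abs(error) <= tolerance_s:
        return 1.0                      # close enough: play at normal speed
    if error > 0:
        return 1.0 + max_adjust         # behind target: speed up slightly
    return 1.0 - max_adjust             # ahead of target: slow down slightly

print(playback_rate(current_latency_s=6.2, target_latency_s=5.0))  # -> 1.05
```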

Two great tips which are often overlooked: request multiple GOPs at once. This helps open up the TCP window, giving you a more efficient download. For mobile, it can also help battery life by allowing the radio to be cycled on and off more efficiently. Will mentions that, when it comes to GOPs, for some applications it’s important to look at exactly how long your GOP should be. Usually, aligning the segment duration with an integer number of audio frames is the way to choose it.
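The audio-alignment point is easy to check numerically: an AAC frame is 1024 samples, so at 48 kHz it lasts about 21.33 ms, and a nominal 2-second segment would contain 93.75 frames. A short sketch that snaps a target duration to a whole number of audio frames:

```python
def aligned_segment_duration(target_s, sample_rate=48000, samples_per_frame=1024):
    """Snap a target segment duration to the nearest whole number of audio
    frames (AAC uses 1024 samples per frame) so segments cut cleanly."""
    frame_s = samples_per_frame / sample_rate            # ~21.333 ms at 48 kHz
    frames = round(target_s / frame_s)                   # 2 s -> 93.75 -> 94 frames
    return frames, frames * frame_s

frames, duration = aligned_segment_duration(2.0)
print(f"{frames} audio frames = {duration:.6f} s")       # 94 frames = 2.005333 s
```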

The talk finishes with an appeal to move to CMAF containers for streaming as they allow you to deliver HLS and DASH streams from the same media segments and move to a common DRM; Will says that CBCS-encrypted content is now becoming nearly all-pervasive. Finally, Will gives some tips on how players can best analyse which CDN to use in multi-CDN environments.

Watch now!
Speaker

Will Law
Chief Architect,
Akamai

Video: Fibre Optics in the LAN and Data Centre

Fibres are the lifeblood of the major infrastructure broadcasters have today. But do you remember your SC from your LC connectors? Do you know which cable types are allowed in permanent installations? Did you know you can damage connectors by mating the wrong fibre endings? For some buildings, there’s only one fibre and connector type, making patch cable selection all the easier. However, there are always exceptions and, when it comes to ordering more, do you know what to look out for to get exactly the right ones?

This video from Lowell Vanderpool takes a swift, but comprehensive, look at fibre types, connector types, light budgets, ferrule types and SFPs. Delving straight in, Lowell quickly establishes the key differences between single-mode and multi-mode fibre, with the latter using wider-diameter fibres. This keeps costs down but, compared to single-mode fibre, it can’t transmit as far. Due to their lower cost, multi-mode fibres are common within the datacentre, so Lowell takes us through the multimode cable types from the legacy OM1 to the latest OM5 cable.

OM1 cable was rated for 1Gb, but the currently used OM3 and OM4 fibre types can carry 10Gb up to 550m. Multimode fibres are typically colour-coded, with OM3 and OM4 being ‘aqua’. OM5 is the latest cable to be standardised and can support Short Wavelength Division Multiplexing (SWDM), whereby four wavelengths are sent down the same fibre, giving an overall bandwidth of 4x10Gb = 40GbE. For longer distances, the yellow OS1 and, more recently, OS2 single-mode fibre types will achieve up to 10km.

Lowell explains that whilst 10km is far enough for many inter-building links, the distance quoted is a maximum which excludes the losses incurred as light leaves one fibre and enters another at connection points. Lowell has an excellent graphic which shows the overall light ‘budget’, how each connector represents a major drop in signal and how each interface will also reflect small amounts of the signal back up the fibre.
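A simplified version of that budget arithmetic follows; the loss figures and the 10.5 dB budget are typical example values, not numbers from the video, so check the datasheets for a real link:

```python
def link_loss_db(fibre_km, connectors, splices,
                 fibre_loss_db_per_km=0.35, connector_loss_db=0.3,
                 splice_loss_db=0.1):
    """Total attenuation over a single-mode link using typical loss figures."""
    return (fibre_km * fibre_loss_db_per_km
            + connectors * connector_loss_db
            + splices * splice_loss_db)

# e.g. an 8 km inter-building run with 4 patch connectors and 2 splices
loss = link_loss_db(8, connectors=4, splices=2)
budget = 10.5   # dB: example transmitter launch power minus receiver sensitivity
print(f"loss {loss:.1f} dB, margin {budget - loss:.1f} dB")
```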

Having dealt with the inside of the cables, Lowell brings up the important topic of the outer jacket. All cables have different options for the outer jacket (for electrical cables, usually called insulation). These outer jackets allow for varying amounts of flexibility, water-tightness and armouring. Sometimes forgotten is that they also have different properties in the event of fire. Depending on where a cable is run, there are different rules on how flame retardant it must be. For instance, in the plenum of a room (false ceiling/wall) or in a riser the requirements differ from those for patching between racks. In some areas keeping smoke low is important; in others, ensuring fire doesn’t travel between areas is the aim, so Lowell cautions us to check the local regulations.

The final part of the video covers connectors, ferrules and SFPs. Connectors come in many types although, as Lowell points out, LC is the most popular in server rooms. LC connectors can come in pairs, locked together and called ‘duplex’, or individually, known as ‘simplex’. Lowell looks at pretty much every type of connector you might encounter, from the legacy metal bayonet and screw connectors (ST, FC) to the low-insertion-loss, capped E-2000 connector used for single-mode cables and popular in telco applications. Lowell gives a close look at MTP and MPO connectors, which combine 1×12 or 2×12 fibres into one connector, making for a very high-capacity connection. We also see how the fibres can be broken out individually at the other end into a breakout cassette.

The white, protruding end to a connector is called the ferrule and contains the fibre in the centre. The solid surround is shaped and polished to minimise gaps between the two fibre ends and to fully align the fibre ends themselves. Any errors will lead to loss of light due to it spilling out of the fibre or to excessive light bouncing back down the cable. Lowell highlights the existence of angled ferrules which will cause damage if mated with flat connectors.

The video finishes with a detailed talk through the make-up of an SFP (Small Form-factor Pluggable) transceiver, looking at what is going on inside. We see how the incoming data needs to be serialised, how heat dissipation and optical lanes are handled, plus how that affects the cost.

Watch now!
Speaker

Lowell Vanderpool
Technical Trainer,
Lowell Vanderpool YouTube Channel

Video: NMOS – Ready, Steady, Go!

We have NMOS IS-04, -05, -06, -07… all the way to IS-10. Is it possibly too complex? Each NMOS specification brings an important feature to an IP (SMPTE ST 2110) workflow and not every system needs each one, so life can become confusing. To help, NVIDIA (who own Mellanox) have been developing an open-source project which allows for quick and easy deployment of an NMOS test system.

Kicking off the presentation, Félix Poulin explains how the EBU Pyramid for Media Nodes shows that SMPTE ST 2110 depends on a host of surrounding technologies to create a large system: discovery and registration, channel mapping, event and tally, network control, security and more. Félix shows how AMWA’s BCP-003-01 gives guidelines on securing NMOS communications, and how IS-09 allows a node to join the system, collect system parameters and then register itself in the IS-04 database. IS-05 and IS-06 allow end-points to be connected, either through IGMP with IS-05 or by an SDN controller using IS-06. IS-08 allows for audio mapping/shuffling, with BCP-002-01 marking which streams belong together and can be taken as a bundle. IS-07 gives a way for event and tally information to be passed from place to place.
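To give a flavour of what one of these specifications looks like in practice, here is a rough sketch of a controller staging and immediately activating a receiver via the IS-05 Connection API. The node address is made up and the payload is trimmed to the essentials, so check the AMWA IS-05 specification for the full schema:

```python
import requests

NODE = "http://node.example"  # an NMOS Node exposing the IS-05 Connection API

def activate_receiver(receiver_id, sender_id, sdp_text):
    """Stage and immediately activate a connection on a receiver via IS-05.
    Payload trimmed for illustration; see AMWA IS-05 for the full schema."""
    staged = {
        "master_enable": True,
        "sender_id": sender_id,
        "transport_file": {"type": "application/sdp", "data": sdp_text},
        "activation": {"mode": "activate_immediate"},
    }
    url = f"{NODE}/x-nmos/connection/v1.0/single/receivers/{receiver_id}/staged"
    r = requests.patch(url, json=staged, timeout=5)
    r.raise_for_status()
    return r.json()
```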

There’s a lot going on and already published, so getting started can seem quite daunting. For that reason, there is now an ‘NMOS at a glance‘ document on the NMOS website. Gareth Sylvester-Bradley from Sony looks at the ongoing work within NMOS, such as finalising IS-10 and BCP-003-02, both of which will enable secure authorisation of clients in the system, and explains how AMWA works and ensures the NMOS activity groups are heading in the right direction with sufficient business cases and participation. He also outlines the importance of the NMOS testing tool and the criteria used for quality and adoption. Gareth finishes by discussing the other in-progress work from NMOS, including work on EDID connection management as part of the pro-AV IPMX project.

Finally, Richard Hastie introduces ‘Easy-NMOS’, which provides very easy deployment of IS-04, -05 & -09 along with BCP-003-01 and BCP-002-01. Introduced in 2019 by Mellanox, now part of NVIDIA, this easy-to-deploy, containerised set of three ‘servers’ quickly and easily stands up these technologies, including a test suite. It doesn’t move media, but it creates valid NMOS nodes and includes an MQTT broker. One container holds the NMOS registry, controller and MQTT broker, one is a virtual node and the last is the NMOS testing service. Richard walks us through the four-line install and brief configuration before demonstrating how to use it.

Watch now!
Speakers

Félix Poulin
Director, Media Transport Architecture & Lab
CBC/Radio-Canada
Gareth Sylvester-Bradley
Principal engineer,
Sony EPE
Richard Hastie
Senior Sales Director, Mellanox Business Development
NVIDIA