Video: Building Media Systems in the Cloud: The Cloud Migration Challenge

Peter Wharton from TAG V.S. starts us on our journey to understanding how we can take real steps towards deploying a project in the cloud. He outlines five steps: evaluation, building a knowledge base, building for scale, optimisation and, finally, ‘realising full cloud potential’. Peter says that the first step, which he dubs ‘Will It Work?’, is about scoping out what you see the cloud delivering to you; what is the future that the move to cloud will give you? You can then evaluate the activities in your organisation that are viable options to move to the cloud, with the aim of finding quick, easy wins.

Peter’s next step in embracing the cloud is to begin the transformation in earnest by owning it and starting the move not through technical actions, but through the people. It’s a case of addressing the culture of your organisation, changing the lens through which people think and, for larger companies, creating a ‘centre of excellence’ around cloud deployments. A big bottleneck for some organisations is siloing, which is sometimes deliberate, sometimes unintentional. When a broadcast workflow needs to go to the cloud, this can bring together many different parts of the company, often more than if it were on-prem, so Peter identifies ‘cross-functional leadership’ as an important step in starting the transformation. He also highlights cost modelling as an important factor at this stage. A clear understanding of the costs, and savings, that the move will realise is an important motivational factor, but should also be used to correctly set expectations. Not getting the modelling right at this stage can significantly weaken traction as the process continues. Peter also talks about the importance of creating ‘key tenets’ of your migration.

Direct link

End-to-end migration is the promise, if you can bring your organisation along with you on this journey, when you start looking at bringing full workflows into the cloud and deploying them in production. To do that, Peter suggests validating your solution at scale, finding ways of testing it well above the levels you need on day one. Another aspect is creating cloud-first workflows rather than taking existing workflows and making the cloud follow the same procedures – to do so would be to miss out on much of the value of the cloud transition. This step will mark the start of you seeing the value of setting your key tenets, but you should feel free to ‘break rules and make new ones’ as your understanding evolves.

The last two stages revolve around optimising and achieving the ‘full potential’ of the cloud. As such, this means taking what you’ve learnt to date and using that to remake your solutions in a better, more sustainable way. Doing this allows you to hone them to your needs but also introduce a more stable approach to implementation such as using an infrastructure-as-code philosophy. This is all topped off by the last stage which is adding cloud-only functionality to the workflows you’ve created such as using machine learning or scaling functions in ways that are seldom practical for on-prem solutions.
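The infrastructure-as-code philosophy mentioned above can be illustrated with a minimal sketch: rather than configuring servers by hand, you describe the resources you need as version-controlled code and render a deployable template from it. The resource names and instance types below are hypothetical, and the template mimics the general shape of a CloudFormation document rather than any specific broadcaster's deployment.

```python
import json

def transcoder_template(instance_type: str, count: int) -> dict:
    """Build a minimal, CloudFormation-style template describing a fleet
    of transcoder instances. Everything here is illustrative -- the point
    is that the infrastructure is declared in reviewable, repeatable code,
    not clicked together in a console."""
    return {
        "Description": "Transcode fleet, declared as code",
        "Resources": {
            f"Transcoder{i}": {
                "Type": "AWS::EC2::Instance",
                "Properties": {"InstanceType": instance_type},
            }
            for i in range(count)
        },
    }

# Changing capacity is now a one-line code change, not a manual task.
template = transcoder_template("c5.xlarge", 3)
print(json.dumps(template, indent=2))
```

Because the template is generated from code, scaling the fleet or changing the instance type becomes a reviewable diff, which is exactly the repeatability the talk is pointing at.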

These steps are important for any organisation wanting to embrace the cloud, but Peter reminds us that it’s not just end users making the transition; vendors are too. Most technology suppliers have products that pre-date today’s cloud technologies and are having to make their own journey, which can start with short-term fixes to ‘make it work’ and move their existing code to the cloud. They will then need to work on their pricing models and cloud security, which Peter calls the ‘Make it Viable’ stage. Only then can they start to properly leverage cloud capabilities such as scaling and, if they are able to progress further, become a cloud-native, fully cloud-optimised solution. However, these latter two steps can take a long time for some suppliers.

Peter finishes the video talking about the difference in perspective between legacy vendors and cloud-native vendors. For example, legacy vendors may still be thinking about site visits, whereas cloud-native vendors don’t need them, and they will be charging on a subscription model rather than large capex pricing. Peter summarises his talk by underlining the need to set your vision, agree on your key tenets for migration, invest in the team, keep your teams small and accountable, and seek partners that not only understand the cloud but match your aims for the future.

Watch now!

Speakers

Peter Wharton
Director of Corporate Strategy,
TAG V.S.

Video: Standardising Microservices on the Basis of MCMA

Microservices are a way of splitting up large programs and systems into many, many smaller parts. Building up complex workflows from these single-function modules has many benefits, including simpler programming and testing, seamless zero-downtime upgrades, scalability and the ability to run in the cloud. Microservices were featured last week on The Broadcast Knowledge. They do present challenges, though, such as orchestrating hundreds of processes into a coherent media workflow.

The EBU is working with SMPTE and the Open Services Alliance for Media on a cloud-agnostic open source project called MCMA, the Media Cloud and Microservice Architecture. The MCMA project isn’t a specification; rather, it is a set of software providing tools to enable a move to microservices. We hear from Alexandre Rouxel from the EBU and Loïc Barbou from Bloomberg that this project started out of broadcasters’ need for a scalable infrastructure that could sit on a variety of cloud infrastructures.

Direct link

What is a service? The project has created a standard idea of a service that contains standard operations. Part of the project is a set of libraries, for NodeJS and .NET, that deal with the code needed time and time again, such as logging, handling data repositories and security. Joost Rovers explains how the Job Processor and Service Registry work together to orchestrate the media workflows and ensure there’s a list of every microservice available, along with how to communicate with it. MCMA places shims in front of cloud services on GCP, AWS, Azure etc. so that each service looks the same. Joost outlines the libraries and modules available for MCMA and how they could be used.
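The service registry idea described above can be sketched in a few lines. MCMA’s actual libraries target NodeJS and .NET; the Python below is not the MCMA API but an illustration of the pattern: each microservice registers its name, endpoint and the job types it accepts, and the orchestrator looks up who can run a given job. All service names and URLs are invented for the example.

```python
class ServiceRegistry:
    """Toy version of a service registry: each microservice registers
    what job types it accepts and where it can be reached."""

    def __init__(self):
        self._services = []

    def register(self, name, endpoint, job_types):
        self._services.append(
            {"name": name, "endpoint": endpoint, "job_types": set(job_types)}
        )

    def find(self, job_type):
        """Return the endpoint of every service able to run this job type."""
        return [s["endpoint"] for s in self._services
                if job_type in s["job_types"]]

registry = ServiceRegistry()
registry.register("transcode-service", "https://transcode.example/jobs",
                  ["TranscodeJob"])
registry.register("qc-service", "https://qc.example/jobs",
                  ["QAJob", "TranscodeJob"])

# A job processor would consult the registry rather than hard-coding
# endpoints, which is what makes services swappable and scalable.
print(registry.find("TranscodeJob"))
```

The shims mentioned in the talk play a similar role one level down: they make AWS, Azure and GCP services present the same interface, so entries in the registry look identical regardless of which cloud actually runs them.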

Watch now!
Speakers

Loïc Barbou
Consultant,
Bloomberg
Alexandre Rouxel
Data Scientist & Project Coordinator,
EBU
Joost Rovers
Managing Director,
Rovers IT

Video: Best Practices for End-to-End Workflow and Server-Side Ad Insertion Monitoring

This video from the Streaming Video Alliance, presented at Mile High Video 2020, looks at the results of recent projects documenting best practices for two important activities: server-side ad insertion (SSAI) and end-to-end (E2E) workflow monitoring. First off is E2E monitoring, which defines a multi-faceted approach to making sure you’re delivering good-quality content well.

This part of the talk is given by Christopher Kulbakas, who introduces us to the document published by the Streaming Video Alliance covering monitoring best practices. The advice centres on three principles: creating a framework, deciding on metrics, and correlation. Christopher explains the importance of monitoring video quality after a transcode or encode, since it’s easy to take a sea of green from your transport layer to mean that viewers are happy. If your encode looks bad, viewers won’t be happy just because the DASH segments were delivered impeccably.

The guidance helps you monitor your workflow. ‘End to end’ doesn’t imply the whole delivery chain, only ensuring that the part you are responsible for is adequately monitored.

Christopher unveils the principles behind the modular monitoring across the workflow and tech stack:
1) Establish monitoring scope
Clearly delineate your responsibility from that of other parties. Define exactly how and to what standard data will be handled between the parties.

2) Partition workflow with monitoring points
Now your scope is clear, you can select monitoring points before and after key components such as the transcoder.

3) Decompose tech stack
Here, think of each point in the workflow to be monitored as a single point in a stack of technology. There will be content needing a perceptual quality monitor, Quality of Service (QoS) and auxiliary layers such as player events, logs and APIs which can be monitored.

4) Describe Methodology
This stage calls for documenting the what, where, how and why of your choices, for instance explaining that you would like to check the manifest and chunks on the output of the packager. You’d do this with HTTP-GET requests for the manifest and chunks for all rungs of the ladder. After you have finished, you will have a whole set of reasoned monitoring points which you can document and also share with third parties.

5) Correlate results
The last stage is bringing together this data, typically by using an asset identifier. This way, all alarms for an asset can be grouped together and understood as a whole workflow.
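Principles 2, 4 and 5 can be sketched together: a monitoring point after the packager issues HTTP GET requests for the manifest and a segment from each rung of the ladder, and tags every result with an asset identifier so alarms can later be correlated. This is an illustrative probe, not the Alliance’s reference code; the URLs are invented, and the `fetch` parameter stands in for a real HTTP client (e.g. `lambda url: urllib.request.urlopen(url).status`).

```python
def probe_packager_output(asset_id, urls, fetch):
    """Monitoring-point sketch: request the manifest and one chunk per
    rung of the ladder, recording the asset identifier with each result
    so all alarms for an asset can be grouped later (principle 5).
    `fetch` is any callable mapping a URL to an HTTP status code."""
    results = []
    for url in urls:
        status = fetch(url)
        results.append({"asset_id": asset_id, "url": url, "ok": status == 200})
    return results

# A fake fetcher stands in for real HTTP requests so the sketch is
# self-contained; in production this would hit the packager's output.
statuses = {"https://cdn.example/live.mpd": 200,
            "https://cdn.example/1080p/seg1.m4s": 200,
            "https://cdn.example/720p/seg1.m4s": 404}
report = probe_packager_output("asset-42", list(statuses), statuses.get)
failures = [r["url"] for r in report if not r["ok"]]
print(failures)  # only the 720p rung is missing its segment
```

Keeping the fetch function injectable is also what makes such a probe easy to test, which matters once you have dozens of monitoring points scattered across the workflow.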

End-to-End Server-Side Ad Monitoring

The last part of this talk is from Mourad Kioumgi from Sky who walks us through a common scenario and how to avoid it. An Ad Buyer complains their ad didn’t make it to air. Talking to every point in the chain, everyone checks their own logs and says that their function was working, from the schedulers to the broadcast team inserting the SCTE markers. The reality is that if you can’t get to the bottom of this, you’ll lose money as you lose business and give refunds.

The Streaming Video Alliance considered how to address this through better monitoring and is creating a blueprint and architecture to monitor SSAI systems.

Mourad outlines these possible issues that can be found in SSAI systems:
1) Duration of content is different to the ad duration.
2) Chunks/manifest are not available or poorly hosted
3) The SCTE marker fails to reach downstream systems
4) Ad campaigns are not fulfilled despite being scheduled
5) Ad splicing components fail to create personalised manifests
6) Over-compression of the advert.

Problems 2, 3, 5 and 6 can be caught by the proposed monitoring, which revolves around adding the Creative ID and AdID into the manifest file. This way problems can be correlated, which particularly improves the telemetry back from the player: it can deliver a problem report and specify which asset was affected. Other monitoring probes are added to watch the manifests and to run automatic audio and video quality metrics. Sky successfully implemented this as a proof of concept, with two vendors working together, resulting in a much better overview of their system.
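The correlation step described above amounts to grouping events from different probes by the ad identifier carried in the manifest, so one ad’s problems read as a single story rather than disconnected alarms. A minimal sketch, with invented event shapes and IDs:

```python
from collections import defaultdict

def correlate_by_ad_id(events):
    """Group monitoring events from different probes (manifest probe,
    player telemetry, quality metrics) by the ad identifier that the
    manifest carries, so each ad's alarms can be read together."""
    by_ad = defaultdict(list)
    for event in events:
        by_ad[event["ad_id"]].append(event)
    return dict(by_ad)

events = [
    {"ad_id": "AD123", "source": "manifest-probe", "ok": True},
    {"ad_id": "AD123", "source": "player", "ok": False, "error": "stall"},
    {"ad_id": "AD999", "source": "quality-probe", "ok": True},
]
grouped = correlate_by_ad_id(events)
print(sorted(grouped))        # ['AD123', 'AD999']
print(len(grouped["AD123"]))  # two probes reported on AD123
```

With this grouping, the scenario from the opening anecdote changes: instead of every team checking its own logs in isolation, the report for a missing ad shows exactly which probe in the chain first saw a failure.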

Mourad finishes his talk looking to the future: creating an ad monitoring framework and distributing an agreed framework document of best practices.

Watch now!
Speakers

Christopher Kulbakas
Project Lead, Senior Systems Designer, Media Technology & Infrastructure,
CBC/Radio Canada
Mourad Kioumgi
VOD Solutions Architect,
Sky

Video: A Broadcaster’s Guide To Microservices

Promising to simplify programming and improve resilience and redundancy, microservices have been moving into the media and broadcast space for several years now, adopted by names such as EVS and Imagine Communications as well as being the focus of a joint project between SMPTE, the EBU and the Open Services Alliance. This video explains what microservices are and why they can work well for media & entertainment.

In this video from nxtedition, Roger Persson introduces colleagues Robert Nagy and Jesper Ek to answer questions on microservices. Microservices are individual, independent programs that talk together to create a whole system. They stand in contrast to monolithic programs, which have many functions in one piece of software. Splitting each function of a program into its own microservice is similar to, but better than, modularising a monolithic program, since you can run any number of microservices on any hardware, leaving you able to create a dynamic and scalable program across many servers and even locations.

Jesper explains that microservices are a type of modular software design, but with the lines between the modules better defined. The benefit comes from the simplicity of microservices. Modules can still be complex, but microservices are small and simple, having only one function. This makes testing microservices as part of the development workflow simpler, and when it comes to extending software, the work is easier. Using microservices does require a well-organised development department, but with that organisation come many benefits for the company. One thing to watch out for, though, is that although microservices themselves are simple, the more you have, the more complex your system is. Complex systems require careful planning no matter how they are implemented. The idea, though, is that this planning is made all the easier by the modular approach of microservices.

With so many small services being spawned, finishing and being respawned, Roger asks whether an orchestration layer is necessary. Robert agrees this is for the best, but points out that orchestration can take many forms: from ‘schedulers’ such as Docker Swarm or Kubernetes, which take your instruction on which microservices are needed on which hardware with which properties, up to more complex options which abstract the workflow from the management of the microservices themselves. This can work in real-time, ensuring that the correct microservices are created for the workflow options being requested.

The ease of managing a live microservice-based system is explored next. Each part is so small, and will typically be running several times over, that services can be updated while they are live with no impact on the running service. You can bring up a new version of a microservice and, once that is running, kill off the old instances either naturally as part of the workflow (in general, services will never run more than 15 minutes before being closed) or explicitly. Failover, similarly, can work simply by detecting a hardware failure and spawning new services on another server.
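The zero-downtime update pattern described above can be sketched in a few lines: start new-version instances first, and only then retire the old ones, so capacity never drops during the swap. The `Pool` class is a made-up stand-in for an orchestrator’s view of one microservice, not any real scheduler API.

```python
class Pool:
    """Minimal stand-in for an orchestrator's view of one microservice:
    just the list of instances currently running."""

    def __init__(self, instances):
        self.instances = list(instances)

    def spawn(self, version):
        self.instances.append(version)

    def kill(self, instance):
        self.instances.remove(instance)

def rolling_update(pool, new_version):
    """Sketch of a live update: bring up new-version instances first,
    then drain the old ones, so the service never stops."""
    old = list(pool.instances)
    for _ in old:
        pool.spawn(new_version)   # pool temporarily runs both versions
    for instance in old:
        pool.kill(instance)       # old instances retired afterwards

pool = Pool(["v1", "v1", "v1"])
rolling_update(pool, "v2")
print(pool.instances)  # ['v2', 'v2', 'v2']
```

Failover follows the same shape: on detecting a dead server, the orchestrator simply spawns replacement instances in a pool on other hardware, which is why the short-lived, stateless nature of these services matters so much.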

Because of this indifference to the underlying hardware, microservices can be spun up anywhere, whether on-premise or in the cloud. Cloud-only services are certainly an option, but many companies find that low-latency, high-bandwidth operations are still best done on-premise, close to the source of the video, with the cloud used to offload peaks or for storage.

As ever, there’s no one solution that fits everyone. The use of microservices is a good option and should be considered by vendors creating software. For customers, typically other aspects of the solution will be more important than the microservice approach, but deal-breaking features may be made possible or improved by the vendor’s choice to use microservices.

Watch now!
Speakers

Robert Nagy
Lead Developer & Co-founder,
nxtedition
Jesper Ek
Senior Developer,
nxtedition
Roger Persson
Sales Marketing Manager,
nxtedition