Video: Investigating Media Over IP Multicast Hurdles in Containerized Platforms

As video infrastructures have converged with enterprise IT, they have started incorporating technologies and methods typical of data centres. First came virtualisation, allowing COTS (Commercial Off-The-Shelf) components to be used. Then came the move towards cloud computing, taking advantage of economies of scale.

However, these innovations did little to address the dependence on monolithic software projects that impeded change and innovation. Early strategies for Video over IP were based on virtualised hardware and IP gateway cards. As the digital revolution progressed with the emergence of OTT players, container-based microservices were developed, with the aim of shortening the cycle of software updates and enhancements.

Containers insulate application software from the underlying operating system, removing the dependence on hardware, and can be enhanced without changing the underlying operational fabric. This provides the foundation for more loosely coupled and distributed microservices, where applications are broken into smaller, independent pieces that can be deployed and managed dynamically.

Modern containerized server software methods such as Docker are very popular in OTT and cloud solutions, but not in SMPTE ST 2110 systems. In the video above, Greg Shay explains why.

Docker can package an application and its dependencies in a virtual container that can run on any Linux server. It uses the resource isolation features of the Linux kernel and a union-capable file system to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. Docker can run more applications on the same hardware than VMs can, makes it easy for developers to quickly create ready-to-run containerized applications, and makes managing and deploying applications much easier.

However, there is currently a major issue with using Docker for ST 2110 systems: Docker containers do not work with multicast traffic. The root of the problem lies in the way the Linux kernel handles multicast routing. It is possible to wrap a VM around each Docker container just to achieve independent multicast network routing by emulating a full network interface, but this defeats the purpose of capturing and delivering the behaviour of the containerized product as a self-contained software deliverable.
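To make the problem concrete, here is a minimal sketch of how a receiver joins a multicast group on Linux (the group address, port, and function name are illustrative, not taken from any ST 2110 product). On a bare-metal host the IGMP join works as expected; run inside a container on Docker's default bridge network, the join is not propagated to the physical network, so the media stream never arrives:

```python
import socket
import struct

# Illustrative values only - a real ST 2110 receiver would take these
# from SDP/NMOS signalling.
MCAST_GROUP = "239.1.1.1"   # example multicast group address
MCAST_PORT = 5004           # common RTP media port

def open_multicast_socket(group: str, port: int) -> socket.socket:
    """Open a UDP socket and join a multicast group via IGMP.

    On a bare Linux host the kernel sends an IGMP membership report
    on the chosen interface. Inside a default Docker bridge network
    that join stays on the container's virtual interface, so multicast
    from the physical network is never delivered to the socket.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # mreq = multicast group address + local interface (INADDR_ANY)
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Usage (blocks until a multicast datagram arrives):
# s = open_multicast_socket(MCAST_GROUP, MCAST_PORT)
# data, addr = s.recvfrom(2048)
```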

There is a quick-and-dirty partial shortcut which enables a container to connect to all the networking resources of the Docker host machine, but it neither isolates containers behind their own IP addresses nor lets them use their own ports. You don’t really get the clean structure of ‘multiple products in multiple containers’, which defeats the purpose of containerized software.

You can see the slides here.

Watch now!

Speaker

Greg Shay
CTO
The Telos Alliance

Video: Microservices & Media: Are we there yet?

Microservices split large applications into many small, simple, autonomous sections. This can be a boon, but this simplicity hides complexity. Chris Lennon looks at both sides to find the true value in microservices.

By splitting a program or service into many small blocks, each of those blocks becomes simpler, so testing each block becomes simpler. Updating one block hardly affects the system as a whole, leading to quicker and more agile development and deployment. Indeed, many success stories are attributed to microservices. Less vocal are those who have had failures or increased operational problems due to their use.

Like any technology, there are ‘right’ and ‘wrong’ times and places to deploy it. Chris, from MediAnswers, explains where he sees the break-even line between deploying and not deploying microservices, gives his reasons, which include hidden complexity and your team’s ability to deal with these many services, and covers some of the fallacies at play which tend to work against you.

A group has started up within SMPTE which aims to reduce the friction in implementing microservices, including general interoperability and interoperability across OSes. This should reduce the work needed to get microservices from different vendors working together as one.

Chris explains the work to date and the plans for the future for this working group.

Watch now!
Speakers

Chris Lennon
President & CEO
MediAnswers

Video: OTT Moves Toward Microservices


Using microservices is a way of architecting your software platform to be nimble and simple, and is just as applicable to on-premise platforms as to the cloud. As scaling is important for OTT providers, it’s not surprising that much work is being done in the OTT sector to utilise microservice architectures.

Even companies that are not yet actively operating on a microservices architecture are looking for vendors who at least have a strategy to cater for it in the future. This session examines the core benefits (including redundancy, DevOps, scalability and self-healing), the different approaches (including containerisation and orchestration via Docker, Kubernetes and Mesos, as well as native microservices models like Erlang), and the complexities of migrating a generic architecture to a microservices architecture.

This panel covers:

    • Why is OTT so suited to microservices?
    • How microservices enable companies to be flexible to changing customer demands
    • How microservices reduce complexity
    • Benefits of continuous deployment

plus much more!

Watch now!

Moderator: Dom Robinson, Director and Creative Firestarter – id3as, UK & Contributing Editor, StreamingMedia.com, UK
Stefan Lederer, CEO & Co-Founder – Bitmovin, USA
Steve Miller-Jones, Vice President of Product Strategy – Limelight Networks, UK
Xiaomei Lio, Senior Software Engineer, Netflix
Mark Russell, Chief Technology & Strategy Officer, MediaKind
Olivier Karra, Director of OTT & IPTV Solutions, Marketing, Harmonic Inc.

Video: How the BBC Built a Massive Media Pipeline Using Microservices

The BBC iPlayer is the biggest audio and video-on-demand service in the UK. It receives 10 million video playback requests every day and the service publishes over 10,000 hours of media every week.

Moving iPlayer to the cloud has enabled the BBC to shorten the time-to-market of content from 10 hours to 15 minutes.

In this session, the BBC’s lead architect, Stephen Godwin, describes the approach behind the iPlayer architecture, which uses Amazon SQS and Amazon SNS in several ways to improve elasticity, reliability and maintainability. You’ll see how the BBC uses AWS messaging to choreograph the 200 microservices in the iPlayer pipeline, maintain data consistency as media traverses the pipeline, and refresh caches to ensure timely delivery of media to users.
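The choreography pattern described above can be sketched in miniature. This is an illustrative toy, not the BBC’s actual code: in production, an SNS topic fans each message out to SQS queues that individual microservices poll, and the stage names below (`transcode`, `package`, the cache subscriber) are invented for the example:

```python
from collections import deque

class Topic:
    """Toy stand-in for an SNS topic: fans each published message
    out to every subscribed queue (the queues play the role of SQS)."""
    def __init__(self):
        self.queues = []
    def subscribe(self, queue):
        self.queues.append(queue)
    def publish(self, message):
        for q in self.queues:
            q.append(dict(message))  # each subscriber gets its own copy

# Each "microservice" polls its queue, does one step, and publishes
# the result so downstream services can react - choreography via
# messages rather than central orchestration.
def transcode(job):
    job["renditions"] = ["1080p", "720p"]
    return job

def package(job):
    job["manifest"] = job["id"] + ".mpd"
    return job

media_ready = Topic()   # "new media arrived" events
transcoded = Topic()    # "renditions ready" events

transcode_q, package_q, cache_q = deque(), deque(), deque()
media_ready.subscribe(transcode_q)
transcoded.subscribe(package_q)
transcoded.subscribe(cache_q)   # a second subscriber: cache refresher

media_ready.publish({"id": "episode-42"})

job = transcode(transcode_q.popleft())
transcoded.publish(job)

packaged = package(package_q.popleft())
print(packaged["manifest"])   # episode-42.mpd
print(len(cache_q))           # 1 - the cache service was also notified
```

The key property this illustrates is that adding a new pipeline stage only requires subscribing another queue to a topic; no existing service needs to change.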

This is a rare opportunity to see the internal workings and best practices of one of the largest on-demand content delivery systems operating today.

Watch now!