Video: Open Source Streaming

Open source software can be found powering streaming solutions everywhere. Veterans of the industry on this panel at Streaming Media West give us their views on how to successfully use open source in on-air projects whilst minimising risk.

The Streaming Video Alliance’s Jason Thibeault starts by finding out how much the panelists and their companies use open source in their work and expands upon that to ask how much the support model matters. After all, some projects offer paid support built on free software, whereas others rely on free, community-provided support. The feeling is that it really depends on the community: is it large and is it active? Not least of the considerations is that, in a corporate setting, if the community is quick to criticise, is it right to ask your staff to go through layers of ‘you’re a newbie’ and other types of pushback each time they need to get an answer?

Another key question is whether we should give back to the open source community and, if so, how. The panel discusses the difficulties in contributing code but also covers the importance of other ways of contributing – particularly when the maintainer is a single individual. Contributing money is an obvious, but often forgotten, way to help, while writing documentation and answering questions on the support forums are just as valuable. This all makes for a vibrant community and increases the chances that other companies will adopt the project into their workflows…which then makes the community all the stronger.

With turn-key proprietary solutions ready to be deployed, Jason asks whether open source actually saves money on the occasions that you can, indeed, find a proprietary solution that fits your requirements.

Lastly, the panel talks about the difficulty in balancing adherence to standards with the speed at which open source communities can move. They can easily deliver the full extent of the standard to date and then move on to fixing the remaining problems not yet addressed by the developing standard. Whilst this is good, they risk implementing in ways which may cause issues later when the standard finally catches up.

The panel session finishes with questions from the audience.

Watch now!
Speakers

Steve Heffernan
Head of Product,
Mux
Yuriy Reznik
Head of Research,
Brightcove
Rob Dillon
Dillon Media Ventures
Rema Morgan-Aluko
Engineering,
FandangoNOW
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: What to do after per-title encoding

Per-title encoding is a common method of optimising quality and compression by changing the encoding options on a file-by-file basis. Although some would say the advent of per-scene encoding is the death knell for per-title encoding, either is much better than the more traditional approach of applying exactly the same settings to every video.

This talk with Mux’s Nick Chadwick and Ben Dodson looks at what per-title encoding is and how to go about doing it. The initial work involves doing many encodes of the same video and analysing each for quality. This allows you to work out which resolutions and bitrates to encode at and how to deliver the best video.
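
As a rough illustration of that brute-force sweep – not Mux’s actual pipeline – the sketch below encodes one source at a small grid of resolutions and bitrates and scores each rendition with VMAF. The ladder, filenames and log parsing are assumptions for illustration, and it assumes an ffmpeg build with libvmaf available on the path.

```python
# Hypothetical brute-force per-title analysis: encode a grid of renditions,
# then score each against the source with VMAF.
import re
import subprocess

SOURCE = "source.mp4"              # assumed mezzanine file
LADDER = {                         # candidate (height -> kbps) pairs to test
    1080: [3000, 4500, 6000],
    720:  [1500, 2500, 3500],
    540:  [800, 1200, 1800],
}

def encode(height, kbps):
    out = f"test_{height}p_{kbps}k.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-vf", f"scale=-2:{height}",
        "-c:v", "libx264", "-b:v", f"{kbps}k", "-an", out,
    ], check=True, capture_output=True)
    return out

def vmaf(distorted):
    # Scale the rendition back to the source size, then run libvmaf,
    # which prints "VMAF score: NN.NN" in ffmpeg's log output.
    proc = subprocess.run([
        "ffmpeg", "-i", distorted, "-i", SOURCE,
        "-lavfi", "[0:v][1:v]scale2ref[dis][ref];[dis][ref]libvmaf",
        "-f", "null", "-",
    ], capture_output=True, text=True)
    match = re.search(r"VMAF score: ([\d.]+)", proc.stderr)
    return float(match.group(1)) if match else None

results = []
for height, rates in LADDER.items():
    for kbps in rates:
        results.append((height, kbps, vmaf(encode(height, kbps))))

# These (bitrate, quality) points feed the convex-hull analysis discussed below.
for height, kbps, score in sorted(results, key=lambda r: r[1]):
    print(f"{height}p @ {kbps} kbps -> VMAF {score}")
```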

Ben Dodson explains the way they implemented this at Mux using machine learning. This was done by getting computers to ‘watch’ videos and extract metadata. That metadata can then be used to inform the encoding parameters without the computer watching the whole of a new video.
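
A toy sketch of that general idea – emphatically not Mux’s model – is to learn a mapping from cheap, quickly-extracted content features to encoding parameters, so a new title doesn’t need the full brute-force sweep. The features, training values and model choice below are invented for illustration.

```python
# Hypothetical model: predict a bitrate target from per-title features.
from sklearn.linear_model import LinearRegression

# Invented per-title features: (spatial complexity, temporal complexity, grain level)
X_train = [
    [0.2, 0.1, 0.0],   # e.g. a talking-head clip
    [0.7, 0.8, 0.3],   # e.g. a sports clip
    [0.5, 0.4, 0.9],   # e.g. a grainy film clip
]
# Target: the 1080p bitrate (kbps) that hit a chosen VMAF in the brute-force sweep
y_train = [1200, 5200, 4300]

model = LinearRegression().fit(X_train, y_train)

new_title = [[0.6, 0.5, 0.1]]   # features extracted without a full set of test encodes
print(f"Predicted 1080p bitrate: {model.predict(new_title)[0]:.0f} kbps")
```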

Nick takes some time to explain Mux’s ‘convex hulls’ which give a shape to the content’s performance at different bitrates and help visualise the optimum encoding parameters for the content. Moreover, we see that, using this technique, we can explore how to change resolution to create the best encode. This doesn’t always mean reducing the resolution; there are some surprising circumstances when it makes sense to start at high resolutions, even for low bitrates.
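
A simplified, Pareto-front version of the convex-hull idea can be sketched as follows: pool every (bitrate, quality) point from every resolution and keep only those that no cheaper point beats, which tells you which resolution to use at each bitrate. The sample measurements are invented; in practice they come from a sweep like the one above.

```python
# Keep the upper envelope of (bitrate, quality) points across all resolutions.
def pareto_hull(points):
    """points: list of (bitrate_kbps, vmaf, label). Returns the upper envelope."""
    hull = []
    best_quality = -1.0
    for bitrate, quality, label in sorted(points):   # ascending bitrate
        if quality > best_quality:                   # better than anything cheaper
            hull.append((bitrate, quality, label))
            best_quality = quality
    return hull

measurements = [
    (800, 71.0, "540p"), (1200, 78.5, "540p"), (1800, 82.0, "540p"),
    (1500, 80.0, "720p"), (2500, 87.5, "720p"), (3500, 90.0, "720p"),
    (3000, 88.0, "1080p"), (4500, 93.0, "1080p"), (6000, 95.5, "1080p"),
]

for bitrate, quality, label in pareto_hull(measurements):
    print(f"{bitrate} kbps -> use {label} (VMAF {quality})")
```

Note how the chosen resolution changes as you move along the hull, which is exactly the behaviour the talk visualises.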

The next stage after per-title encoding is to segment the video and encode each segment differently. Nick explores this and explains how to deliver different resolutions throughout the stream, seamlessly switching between them. Ben takes over and explains how this can be implemented and how to choose the segment boundaries correctly, again using a machine learning approach to the analysis and decision making.
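
The sketch below shows the shape of that per-segment step: split the source at scene boundaries and encode each piece with its own parameters. The boundary times and the parameter-picking rule are placeholders; in the talk both come from the analysis and machine-learning stages, and real delivery also needs aligned GOPs and careful packaging for seamless switching.

```python
# Hypothetical per-segment encode: one ffmpeg pass per scene with its own settings.
import subprocess

SOURCE = "source.mp4"
# Assumed scene boundaries in seconds (in practice: scene detection / ML output)
BOUNDARIES = [0.0, 12.4, 31.0, 58.7, 90.0]
# Assumed per-segment complexity scores from the analysis stage
complexities = [0.3, 0.8, 0.4, 0.7]

def params_for_segment(complexity):
    """Placeholder decision rule standing in for the trained model."""
    return ("1280x720", "2500k") if complexity < 0.5 else ("1920x1080", "5000k")

for i, (start, end) in enumerate(zip(BOUNDARIES, BOUNDARIES[1:])):
    size, bitrate = params_for_segment(complexities[i])
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE, "-ss", str(start), "-to", str(end),
        "-s", size, "-c:v", "libx264", "-b:v", bitrate, "-an",
        f"segment_{i:03d}.mp4",
    ], check=True)
```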

Watch now!
Speakers

Nick Chadwick
Software Engineer,
Mux
Ben Dodson
Data Scientist,
Mux

Video: A Standard for Video QoE Metrics

This is a standard in progress for quality of experience (QoE) metrics such as concurrent viewers and rebuffering time, being developed under the CTA standards body. The goal of the group is to come up with a standard set of player events, metrics and terminology around QoE in streaming. Even concurrent viewers isn’t as easy to define as it sounds: if the user has paused, are they concurrently viewing the video? Buffer underruns are variously called rebuffering, stalling or waiting. The group is intentionally focussing on what viewers actually see and experience; QoS, by contrast, measures how well the platform is performing, which is not necessarily the same as what viewers experience.

The standard has ideas of different levels. There are player properties and events, which are standardised ways of signalling that certain things are happening. Session Metrics are also defined, which can then feed into Aggregate Metrics. The first set of metrics includes things such as playback failure percentage, average playback stalled rate, average startup time and playback rate, with the aim of setting a baseline and starting to get feedback from companies as they implement these seemingly simple metrics.
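
As a hedged sketch of how that session-to-aggregate rollup might look, the snippet below computes the three aggregate metrics named above from a few per-session values. The field names and exact definitions here are illustrative assumptions; the CTA specification itself is the authority.

```python
# Illustrative rollup of per-session QoE values into aggregate metrics.
from dataclasses import dataclass

@dataclass
class Session:
    playback_failed: bool      # playback never started or ended in an error
    startup_time_s: float      # time from play request to first frame
    stalled_time_s: float      # total rebuffering time
    watch_time_s: float        # total playing time

sessions = [
    Session(False, 1.2, 0.0, 600.0),
    Session(False, 3.4, 8.0, 1200.0),
    Session(True, 0.0, 0.0, 0.0),
]

started = [s for s in sessions if not s.playback_failed]

playback_failure_pct = 100.0 * sum(s.playback_failed for s in sessions) / len(sessions)
avg_startup_time = sum(s.startup_time_s for s in started) / len(started)
# Stalled rate: share of viewing time spent rebuffering, averaged over sessions
avg_stalled_rate = sum(s.stalled_time_s / (s.stalled_time_s + s.watch_time_s)
                       for s in started) / len(started)

print(f"Playback failure: {playback_failure_pct:.1f}%")
print(f"Average startup time: {avg_startup_time:.2f}s")
print(f"Average playback stalled rate: {avg_stalled_rate:.2%}")
```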

This first release can be found on GitHub.

Watch now!
Speaker

Steve Heffernan
Co-Founder, Head of Product,
Mux

Video: The Evolution of Video APIs

APIs underpin our modern internet and particularly our online streaming services, which all rely on them. An API is a way for two different programs or services to communicate with each other: allowing access, sharing the locations of videos, providing recommendations and so on.

Phil Cluff from Mux takes a look at the evolution of these APIs, showing the simple ones and the complex, and how they have changed as time has gone on, culminating in advice for the API writers of today and tomorrow.

Security is a big deal and increasingly in focus for video companies. Whilst the API itself is usually served over secure transport, the service still needs to authenticate users, and the use of DRM needs to be considered. Phil talks about this and ultimately the question comes down to what you are trying to protect and what your attack surface is.
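
One common, lighter-weight protection in this space is a signed, expiring playback URL, so the API can hand out links that can’t be shared indefinitely. The sketch below is a generic illustration of that idea; the signing scheme and parameter names are assumptions, not any particular platform’s API.

```python
# Illustrative HMAC-signed, expiring playback URL.
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"rotate-me"   # shared between the API and the edge/CDN

def sign_playback_url(base_url: str, video_id: str, ttl_s: int = 300) -> str:
    expires = int(time.time()) + ttl_s
    payload = f"{video_id}:{expires}".encode()
    token = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    query = urlencode({"expires": expires, "token": token})
    return f"{base_url}/{video_id}/manifest.m3u8?{query}"

def verify(video_id: str, expires: int, token: str) -> bool:
    if time.time() > expires:
        return False
    expected = hmac.new(SECRET, f"{video_id}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

print(sign_playback_url("https://stream.example.com", "abc123"))
```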

APIs tend to come in two types, explains Phil: Video Platform APIs and ‘Encoding’ APIs. Encoding APIs are more than pure encoding APIs; transcoding, packaging, file transfer and other features are built in to most ‘encoding’ services. Video Platform APIs typically cover a whole platform, so they also include CDN, analytics, cataloguing, playback and much more.

In terms of advice, Phil explains that APIs can enable ‘normal’ coders – meaning people who aren’t interested specifically in video – to use video in their programs. This can be done through well thought out APIs which make good decisions behind the scenes and use sensible defaults.
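
To make that point concrete, here is a sketch of what such an API client might look like: the only required input is the source location, while everything video-specific has a sensible default that an expert can still override. The endpoint, field names and defaults are hypothetical, invented purely for illustration.

```python
# Hypothetical 'video for non-video developers' client with sensible defaults.
import json
from urllib import request

API = "https://api.example-video.com/v1/assets"   # invented endpoint

def create_asset(source_url: str, **overrides) -> dict:
    body = {
        "input": source_url,
        # defaults a non-video developer never has to think about
        "video_codec": "h264",
        "renditions": "auto",          # let the service build the ladder
        "packaging": ["hls", "dash"],
        "playback_policy": "signed",
    }
    body.update(overrides)             # specialists can still take control
    req = request.Request(API, data=json.dumps(body).encode(),
                          headers={"Content-Type": "application/json"},
                          method="POST")
    with request.urlopen(req) as resp:
        return json.load(resp)

# Usage: one line for the common case, an override for the specialist.
# create_asset("https://example.com/mezzanine.mov")
# create_asset("https://example.com/mezzanine.mov", video_codec="hevc")
```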

The API is so important, asserts Phil, that it should be considered part of the product and treated with similar care. It should be planned, resourced properly, created as part of a dialogue with customers and, most importantly, revisited later to be upgraded and improved.

Phil finishes the talk with a number of other pieces of advice and answers questions from the floor.

Watch now!

Speaker

Phil Cluff
Streaming Specialist,
Mux