Video: How Big Buck Bunny helped track down bugs

Early in the development of a streaming service, using a ‘mock’ player is a good way to test your code quickly; indeed, it’s common to create mock API endpoints too. The trouble with mocks is that they are too perfect: they lack the complex behaviour of a real player playing real media.

Evan Farina from LinkedIn explains what his team found while developing the LinkedIn video player. Communication from media players comes in the form of tracking events that indicate status such as ‘playing’ or ‘duration’. So when a player starts playing a video stream, a tracking event comes back to indicate success or otherwise. When using a mock, if your program works, so will the tracking event. In the real world, these values might be empty, contain invalid data or be functioning just as expected.

Swapping your mock for real media has benefits. Evan explains that no longer needing to create and maintain a mock is a time saver; after all, a mock is code too, and you need to make sure it’s working and keeping up with production changes. But the main benefit is catching player integration issues and race conditions before releasing to production. Evan’s team found errors in tracking events, inefficiencies in the way media was being loaded and a previously unknown dependency on free CPU. All of this was caught before deploying into production.

How can you switch from mocks to real media? One goal is to ensure your tests aren’t reliant on timing, meaning both your tests and the player need to be fully asynchronous. To do this, Evan’s team created ‘helpers’ which control media playback (play, pause, seek etc.) and wait for a result, with a timeout in case the expected tracking event never arrives.
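
As a sketch of that pattern (the helper and player API here are hypothetical, not LinkedIn’s actual code), such a helper might start playback and then await the corresponding tracking event with a timeout, rather than sleeping for a fixed period:

```python
import asyncio

class PlayerTestHelper:
    """Hypothetical test helper: wraps a real player and waits for
    tracking events instead of relying on fixed sleeps."""

    def __init__(self, player):
        self.player = player
        self.events = asyncio.Queue()
        # Assumes the player exposes a tracking-event callback hook.
        player.on_tracking_event(self.events.put_nowait)

    async def play_and_wait(self, event_name, timeout=10.0):
        """Start playback, then wait until the named tracking event
        arrives, raising if it never does."""
        self.player.play()
        deadline = asyncio.get_running_loop().time() + timeout
        while True:
            remaining = deadline - asyncio.get_running_loop().time()
            if remaining <= 0:
                raise TimeoutError(f"never received '{event_name}'")
            event = await asyncio.wait_for(self.events.get(), remaining)
            if event.get("name") == event_name:
                return event
```

Because the helper only returns when the real event arrives (or times out), the test never depends on how long the player takes internally.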

Evan finishes with some tips for scaling from his team. First, there’s no point making your tests take longer than necessary, so keep media short. Second, store media locally to avoid network traffic not directly related to the test. Third, log all events since, at some point, these will be critical to understanding a problem. Lastly, monitor your test servers to make sure they’re not struggling to cope with the load, which could skew the results of some of your tests.

Watch now!
Speakers

Evan Farina
Software Engineer,
LinkedIn

Video: How to Up Your Sports Streaming Game

As countries seek to wrest themselves from lockdowns, however long that takes, the name of the game will be to come out big and make the most of the renewed freedoms. Streaming has certainly seen a boost over the last year despite the challenges, but to make the most of that as public life switches up a gear, now’s the time to up your game. Sports streaming is likely to see a gradual increase in the number of live fixtures to cover, and employees should be able to find productivity gains in working more closely with their colleagues when the time is right to share space again.

In this panel from Streaming Media Connect, Jeff Jacobs from VENN talks to Magnus Svensson from Eyevinn Technology, Ali Hodjat from Intertrust Technologies, Live Sports’ Jef Kethley and Darcy Lorincz from Engine Media. Magnus kicks off the discussion highlighting the state of the sports streaming industry and the trends he’s seeing. Magnus says that streaming providers are moving away from mimicking broadcast services and innovating in their own right. The younger audience is still more interested in highlights clips than older viewers, and esports, with its on-screen chat and interactivity, represents a big departure from what we are used to from broadcasters. Low-latency streaming remains important, but keeping feeds synchronised within the home is often seen as more important than the absolute latency.

Jef speaks about the complete cloud infrastructure he built for the Drone Racing League (DRL), which gave a computer to each player and ran the program and drone simulation in the cloud. Looking to the future, he sees streaming as now allowing monetisation of newer sports. Now that it’s easier and/or cheaper to produce lower-interest sports, they can be economical to monetise and deliver even to a small audience.

Darcy represents workflows where AI is doing the work: AI understands the goals, the numbers on shirts and much of the action within a game. Darcy’s trying to find as many things as possible that AI can do to reduce our reliance on humans. Visualisation of data has grown in demand, making these stats easily digestible for viewers by overlaying information in new ways on to the screen.

Ali’s view is from the security angle; he’s been focussed on protecting live sports. With the push to lower and lower latencies, the value of the streams has increased as they’re more useful for betting. At the same time, lower latency makes it harder to add encryption. On top of encryption, watermarking individual feeds and quickly identifying them online is a major focus. Protection, though, needs to extend from the media back to the website itself, the payment gateway, the applications and much else.

The panel session discusses low latency, the pros and cons of remote working, co-streaming and low-latency backhaul/contribution, and finishes with a round of advice for your own service.

Watch now!
Speakers

Magnus Svensson
VP Sales and Business Development,
Eyevinn Technology
Ali Hodjat
Director Product Marketing,
Intertrust
Jef Kethley
Executive Director / President,
LiveSports, LLC
Darcy Lorincz
Global head of Esports & Business Development,
Engine Media Inc.
Moderator: Jeff Jacobs
General Manager,
VENN

Video: Time and timing at VidTrans21

Timing is both everything and nothing. Although much fuss is made of timing, often it’s not important; but when it is, it can be absolutely critical. Helping us navigate the broadcast chain’s varying dependence on a central, co-ordinated time source is Nevion’s Andy Rayner in this talk at the VSF’s VidTrans21. When it comes down to it, you need time for coordination: in the 1840s, the UK introduced ‘Railway time’, bringing each station’s clock into line with GMT to coordinate people and trains.

For broadcast, working with multiple signals in a low-latency workflow, such as in a vision or audio mixer, is when we’re most likely to need synchronisation. Andy shows us some of the original television technology where the camera had to be directly synchronised to the display. This is the era timing came from, built on by analogue video and RF transmission systems whose components’ timing relied on those earlier in the chain. Andy brings us into the digital world, reminding us of the ever-useful blanking areas of the video raster which we packed with non-video data. Now, as many people move to SMPTE’s ST 2110, there is still a timing legacy: some devices still generate data with gaps where the blanking of the video would be, even though 2110 has no blanking. This means we have to have timing modes for linear and non-linear delivery of video.

In ST 2110 every packet is marked with a reduced-resolution timestamp derived from PTP, the Precision Time Protocol (see all our PTP articles). This allows highly accurate alignment of essences when bringing them together, as even a slight offset between audio streams can create comb filtering and destroy the sound. The idea of the PTP timestamp is to stamp the time the source was acquired, but Andy laments that in ST 2110 it’s hard to keep this timestamp since interim functions (e.g. graphics generators) may restamp the PTP, breaking the association.
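
In concrete terms, the ‘reduced resolution’ comes from the 32-bit RTP timestamp field, which carries PTP time expressed in media-clock ticks. A simplified sketch of that relationship (ignoring TAI/UTC offset details):

```python
def rtp_timestamp(ptp_seconds: float, media_clock_hz: int) -> int:
    """Express PTP time in media-clock ticks, truncated to the
    32-bit RTP timestamp field (hence 'reduced resolution')."""
    ticks = int(ptp_seconds * media_clock_hz)
    return ticks & 0xFFFFFFFF  # wraps roughly every 13 hours at 90 kHz

# Video essences use a 90 kHz clock; PCM audio typically uses its
# sample rate, e.g. 48 kHz, so the same instant gets different tick
# values but still refers to one moment of acquisition.
video_ts = rtp_timestamp(10.0, 90_000)   # 900000
audio_ts = rtp_timestamp(10.0, 48_000)   # 480000
```

Because both timestamps are derived from the same PTP instant, a receiver can line up the video and audio packets exactly, which is why a device restamping mid-chain is so unhelpful.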

Taking a step back, though, content can now be delivered to the home up to a minute late, which underlines that relative timing is what’s most important. This is a lesson learnt many years back when VR/AR was first being used in studios: whole sections of the gallery were run several frames delayed relative to the rest of the facility to account for the processing delay. Today this is more common, as is remote production, which takes this fixed time offset to the next level. Andy highlights NMOS IS-07, which allows you to timestamp button presses and other tally info, allowing this type of time-offset working to succeed.
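
The idea behind IS-07 is simply that the event carries its own creation timestamp, so a downstream stage can realign it against delayed media by applying a fixed offset. A loose sketch of the concept (field names here are illustrative, not the exact IS-07 schema):

```python
import time

def tally_event(source_id: str, pressed: bool) -> dict:
    """Stamp a button press at source so later stages can realign it
    against delayed video (illustrative fields, not real IS-07)."""
    now_ns = time.time_ns()  # ideally PTP-derived time, not wall clock
    return {
        "identity": {"source_id": source_id},
        "event_type": "boolean",
        "payload": {"value": pressed},
        # seconds:nanoseconds, the style IS-07 uses for timestamps
        "timing": {"creation_timestamp": f"{now_ns // 10**9}:{now_ns % 10**9}"},
    }
```

A gallery running three frames behind the facility can then act on the event at `creation_timestamp` plus its known processing delay, rather than on arrival.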

The talk finishes with the work of the GCCG Activity Group at the VSF, of which Andy is the co-chair. This group is looking at how to get essences into and out of the cloud. Andy spends some time talking about the tests done to date and the fact that PTP doesn’t exist in the cloud (it may be available for select customers); in fact, you may have to live with NTP-derived time. Dealing with this is still a lively discussion in progress and Andy welcomes participants.

Watch now!
Speakers

Andy Rayner
Co-Chair, Ground-Cloud-Cloud-Ground Activity Group, VSF
Chief Technologist, Nevion

Video: A Broadcaster’s Guide To Microservices

Promising to simplify programming and improve resilience and redundancy, microservices have been moving into the media and broadcast space for several years now, adopted by names such as EVS and Imagine Communications as well as being the focus of a joint project between SMPTE, the EBU and the Open Services Alliance. This video explains what microservices are and why they can work well for media & entertainment.

In this video from nxtedition, Roger Persson introduces colleagues Robert Nagy and Jesper Ek to answer questions on microservices. Microservices are individual, independent programs that talk together to create a whole system. They stand in contrast to monolithic programs, which have many functions in one piece of software. Splitting each function of a program into its own microservice is similar to, but better than, modularising a monolithic program, since you can run any number of microservices on any hardware, leaving you able to create a dynamic and scalable program across many servers and even locations.

Jesper explains that microservices are a type of modular software design, but with the lines between the modules better defined. The benefit comes from the simplicity of microservices. Modules can still be complex, but microservices are small and simple, having only one function. This makes testing microservices as part of the development workflow simpler and, when it comes to extending software, the work is easier. Using microservices does require a well-organised development department, but with that organisation come many benefits for the company. One thing to watch out for, though, is that although microservices themselves are simple, the more you have, the more complex your system is. Complex systems require careful planning no matter how they are implemented; the idea, though, is that this planning is made all the easier by the modular approach of microservices.

With so many small services being spawned, finishing and being respawned, Roger asks whether an orchestration layer is necessary. Robert agrees this is for the best, but points out that orchestration can take many forms, from ‘schedulers’ such as Docker Swarm or Kubernetes, which take your instruction on which microservices are needed on which hardware with which properties, up to more complex options which abstract the workflow from the management of the microservices themselves. This can work in real time, ensuring that the correct microservices are created for the workflow options being requested.

The ease of managing a live microservice-based system is explored next. Each part is so small, and will typically be running several times over, that services can be updated while they are live with no impact on the running service. You can bring up a new version of a microservice and, once that is running, kill off the old ones either naturally as part of the workflow (in general, services will never run more than 15 minutes before being closed) or explicitly. Failover, similarly, can work simply by detecting a hardware failure and spawning new services on another server.
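
That live-update pattern can be sketched as follows (the orchestrator API here is invented for illustration; it is not nxtedition’s code nor a real scheduler API):

```python
def rolling_update(orchestrator, service, new_version):
    """Bring up new-version instances alongside the old ones, wait
    until they're healthy, then retire the old instances."""
    old = orchestrator.instances(service)
    new = [orchestrator.spawn(service, new_version) for _ in old]
    for instance in new:
        orchestrator.wait_healthy(instance)
    for instance in old:
        orchestrator.stop(instance)  # or let them exit naturally
    return new
```

Because both versions overlap briefly, the service as a whole never goes down; failover is the same pattern with the spawn triggered by a health check instead of an operator.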

Because of this indifference to the underlying hardware, microservices can be spun up anywhere, whether on-premise or in the cloud. Cloud-only services are certainly an option, but many companies find that low-latency, high-bandwidth operations are still best done on-premise, close to the source of the video. The cloud can be used to offload peaks or for storage.

As ever, there’s no one solution that fits everyone. The use of microservices is a good option and should be considered by vendors creating software. For customers, typically other aspects of the solution will be more important than the microservice approach, but deal-breaking features may be made possible or improved by the vendor’s choice to use microservices.

Watch now!
Speakers

Robert Nagy
Lead developer & Co-founder,
nxtedition
Jesper Ek
Senior Developer,
nxtedition
Roger Persson
Sales Marketing Manager,
nxtedition