Video: Synchronising Geo-Redundant Origins

Synchronised origins in streaming mean that a player can switch from one origin to another without errors and without restarting decoding, allowing a much more seamless viewing experience. Adam Ross, speaking from his experience on the Comcast linear video packaging team, takes us through the pros and cons of two approaches to synchronisation. This discussion centres on video going into an encoder, then a transcoder and finally a packager. The video is either split from a single source, which helps keep the video and audio clocks aligned, or the clocks are aligned in the encoder or transcoder through communication between sites A and B.

Keeping segments aligned isn’t too difficult, as we just need to keep the naming the same and keep the segments timed together. Whilst that’s not trivial, manifests have many more layers of metadata to synchronise, in the form of short-term metadata like the content currently present in the manifest and long-term metadata like the DASH Period. For DASH streams, the Period@id and Period@start need to be the same, and SegmentTimelines need to have the same start number mapping to the same content. For HLS, the variant playlists need to be the same, as does the sequence numbering.
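As a minimal illustration of the kind of check this implies (a sketch of my own, not Adam’s actual tooling), comparing the Period@id and Period@start values of two origins’ MPDs might look like this:

```python
import xml.etree.ElementTree as ET

NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

def period_fingerprint(mpd_xml: str):
    """Extract the (Period@id, Period@start) pairs from an MPD document."""
    root = ET.fromstring(mpd_xml)
    return [(p.get("id"), p.get("start")) for p in root.findall("mpd:Period", NS)]

def origins_in_sync(mpd_a: str, mpd_b: str) -> bool:
    """Two origins are aligned (at this level) when their Period ids and starts match."""
    return period_fingerprint(mpd_a) == period_fingerprint(mpd_b)
```

A real check would go further, comparing SegmentTimelines and start numbers too, but the principle is the same: the metadata a player relies on must be byte-for-byte interchangeable between sites.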


Adam proposes two methods of doing this. The first is co-operative packaging, where each site sends metadata between the packagers so that each makes the same, more informed decisions. However, this is complicated to implement and produces a lot of cross-site traffic, which can introduce latency at the live point. The alternative is a minimal synchronisation strategy, which relies much more on determinism: given the same output from the transcoder, the packagers should make the same decisions. Each packager does still need to look at the other’s manifest to ensure it stays in sync, and it can resync when the drift isn’t deemed impactful. Overall, this second method is much simpler.
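To see why determinism makes the minimal approach workable, here is a hypothetical sketch (the function names are my own, not from the talk): if each packager derives segment numbers and names purely from the media timestamps it receives, two sites fed identical transcoder output will name segments identically with no cross-site traffic at all.

```python
def segment_number(pts_seconds: float, segment_duration: float = 6.0) -> int:
    # Purely a function of the media timestamp, so any packager fed the
    # same transcoder output computes the same number independently.
    return int(pts_seconds // segment_duration)

def segment_name(stream_id: str, pts_seconds: float) -> str:
    # Deterministic naming means site A and site B publish identical URLs.
    return f"{stream_id}_{segment_number(pts_seconds):08d}.m4s"
```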

Watch now!
Speaker

Adam Ross
Formerly Software Engineer, Comcast

Video: Digital Media Trends of 2020

Research from before and during the pandemic paints a clear picture of how streaming has changed. This Deloitte research looked at ad-supported and subscription VOD across demographics, as well as at how the film industry has fared as cinemas have remained closed in most places.

Jeff Loucks presents the results of surveys taken in the United States before the lockdown and then again in May and October 2020. The youngest demographic tracked is Gen Z, born between 1997 and 2006; the oldest is the ‘matures’, who are older than 73. The most critical measurement is the amount of money people have in their pocket. Around half said their finances were unchanged, and up to 39% said their pay packet had reduced either somewhat or significantly, though this fell to 29% in October.

When including streaming music, video games and audiobooks, US consumers had an average of 12 entertainment subscriptions, which reduced to 11 by October. Concentrating on paid video subscriptions only, the average grew from 3 to 5 over the period of the research, with millennials leading the charge at up to 7 services. However, churn also increased. Jeff explains that this is partly because free trials come to an end, but also because people judge services as too expensive. It seems that a certain amount of experimentation is going on, with people testing new combinations of services to find the mix that suits them.


Jeff makes the point that there are around 300 paid streaming services in the US market which is ‘too many to stick around’. Whilst it’s clear that streaming providers are giving consumers the types of services they’ve been wanting from cable providers for years, they are bringing a burden of complexity with them, too.

Hulu and YouTube are two services that give the flexibility of watching either an ad-supported or an ad-free version of the service. Across the market, 60% of people use at least one free ad-supported service. Whilst Hulu’s ad-supported tier isn’t free, giving these options is a great way to cater to different tastes. The Deloitte research showed that whilst Gen Z and Millennials would prefer to pay for an ad-free service, older ‘boomers’ and ‘matures’ would rather use an ad-supported one. Furthermore, when given the option to pay a little for half the ads, customers prefer the extremes rather than the halfway house. Overall, 7 minutes of ads an hour is the number people say strikes the right balance, with 14 being too many.

Films have been hit hard by the pandemic: by the end of the research period, 35% of people said they had paid to watch a new release on a streaming platform, up 13% from May, and 90% said they would likely do it again. Theatrical release windows have been under examination for many years now, but the pandemic really forced the subject. The percentage of revenue made during the ‘DVD release’ window has gone down over the decades. Nowadays, a film makes most of its money, 45%, during its theatrical release window, with the ‘TV’ revenue being squeezed down 10% to 18% of the overall total. It’s clear, then, that studios will be careful with that 45% share to ensure it’s suitably replaced as they move ahead with their 2022 plans.

Each genre has its own fingerprint, with comedies and dramas making proportionally less money at the box office than animations and action movies, for instance. So whilst we may see notable changes in distribution windows, they may be more aggressive for some releases than others once the pandemic has less of a say in studios’ plans.

This video is based on research that can be read in much more detail here:

Digital Media Trends Consumption Habits Survey

Future of the Movie Industry

Watch now!
Speakers

Dr. Jeff Loucks
Executive Director,
Deloitte Center for Technology, Media & Telecommunications

Video: Best Practices for End-to-End Workflow and Server-Side Ad Insertion Monitoring

This video from the Streaming Video Alliance, presented at Mile High Video 2020, looks at the results of recent projects documenting best practices for two important activities: server-side ad insertion (SSAI) and end-to-end (E2E) workflow monitoring. First up is E2E monitoring, which defines a multi-faceted approach to making sure you’re delivering good-quality content well.

This part of the talk is given by Christopher Kulbakas, who introduces us to the document published by the Streaming Video Alliance covering monitoring best practices. The advice centres on three principles: creating a framework, deciding on metrics, and correlation. Christopher explains the importance of monitoring video quality after a transcode or encode, since it’s easy to take a sea of green from your transport layer to indicate that viewers are happy. If your encode looks bad, viewers won’t be happy just because the DASH segments were delivered impeccably.

The guidance helps you monitor your workflow. ‘End to end’ doesn’t imply the whole delivery chain, only ensuring that the part you are responsible for is adequately monitored.

Christopher unveils the principles behind the modular monitoring across the workflow and tech stack:
1) Establish monitoring scope
Clearly delineate your responsibility from that of other parties. Define exactly how and to what standard data will be handled between the parties.

2) Partition workflow with monitoring points
Now your scope is clear, you can select monitoring points before and after key components such as the transcoder.

3) Decompose tech stack
Here, think of each point in the workflow to be monitored as a single point in a stack of technology. There will be a content layer needing a perceptual quality monitor, a Quality of Service (QoS) layer, and auxiliary layers such as player events, logs and APIs which can be monitored.

4) Describe Methodology
This stage calls for documenting the what, where, how and why of your choices, for instance explaining that you would like to check the manifest and chunks on the output of the packager. You’d do this with HTTP-GET requests for the manifest and chunks for all rungs of the ladder. After you have finished, you will have a whole set of reasoned monitoring points which you can document and also share with third parties.

5) Correlate results
The last stage is bringing together this data, typically by using an asset identifier. This way, all alarms for an asset can be grouped together and understood as a whole workflow.
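The probing described in step 4 can be sketched as follows. This is my own illustration rather than anything from the document, and the `fetch` hook is an addition so the probe can be exercised without a network: the probe simply HTTP-GETs each manifest and chunk URL for every rung of the ladder and records the result.

```python
from urllib.request import urlopen

def probe(urls, fetch=None):
    """HTTP-GET each manifest/chunk URL and record whether it succeeded."""
    if fetch is None:
        def fetch(url):
            with urlopen(url, timeout=5) as resp:
                return resp.status == 200
    results = {}
    for url in urls:
        try:
            results[url] = fetch(url)
        except Exception:
            results[url] = False  # an unreachable URL counts as a failed check
    return results
```

In a real deployment, the results would be tagged with the asset identifier so they can be correlated in step 5.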

End-to-End Server-Side Ad Monitoring

The last part of this talk is from Mourad Kioumgi from Sky, who walks us through a common scenario and how to avoid it: an ad buyer complains their ad didn’t make it to air. Talking to every point in the chain, everyone checks their own logs and says their part was working, from the schedulers to the broadcast team inserting the SCTE markers. The reality is that if you can’t get to the bottom of this, you’ll lose money through lost business and refunds.

The Streaming Video Alliance considered how to address this through better monitoring and are creating a blueprint and architecture to monitor SSAI systems.

Mourad outlines these possible issues that can be found in SSAI systems:
1) Duration of content is different to the ad duration
2) Chunks/manifests are not available or are poorly hosted
3) The SCTE marker fails to reach downstream systems
4) Ad campaigns are not fulfilled despite being scheduled
5) Ad splicing components fail to create personalised manifests
6) Over-compression of the advert

Problems 2, 3, 5 and 6 can be caught by the proposed monitoring, which revolves around adding the Creative ID and AdID into the manifest file. This way, problems can be correlated, which particularly improves the telemetry back from the player, which can deliver a problem report specifying which asset was affected. Other monitoring probes are added to watch the manifests and to compute automatic audio and video quality metrics. Sky successfully implemented this as a proof of concept, with two vendors working together, resulting in a much better overview of their system.
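As an illustration only (the tag name and attribute syntax below are hypothetical, not from the SVA blueprint), embedding the IDs in the manifest lets a probe or player pull them back out for correlation:

```python
import re

# Hypothetical manifest tag carrying the IDs; a real deployment would agree a format.
AD_TAG = re.compile(r"#EXT-X-ASSET:CREATIVE-ID=(?P<creative>[\w-]+),AD-ID=(?P<ad>[\w-]+)")

def ad_ids_in_manifest(manifest: str):
    """Collect (creative_id, ad_id) pairs so a player's problem report
    can be correlated back to the exact ad asset that was playing."""
    return [(m.group("creative"), m.group("ad")) for m in AD_TAG.finditer(manifest)]
```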

Mourad finishes his talk by looking at the future: creating an ad monitoring framework and distributing an agreed framework document covering best practices.

Watch now!
Speakers

Christopher Kulbakas
Project Lead, Senior Systems Designer, Media Technology & Infrastructure,
CBC/Radio Canada
Mourad Kioumgi
VOD Solutions Architect,
Sky

Video: How Big Buck Bunny helped track down bugs

Early in the development of a streaming service, using a ‘mock’ player is a good way to be able to quickly test your code; in fact, it’s common to create mock API endpoints too. The trouble with mocks is that they are too perfect: they lack the complex behaviour of a real player playing real media.

Evan Farina from LinkedIn explains what they’ve found while developing the LinkedIn video player. Communication from media players comes in the form of tracking events that indicate status such as ‘playing’ or ‘duration’. So when a player initiates playback of a video stream, a tracking event comes back to indicate success or otherwise. When using a mock, if your program works, so will the tracking event. In the real world, these values might be empty, contain invalid data or be functioning just as expected.

Once you’re in production, swapping your mock for real media has benefits. Evan explains that no longer needing to create and maintain a mock is a time saver; after all, it is code, and you need to make sure it’s working and keeping up with production changes. But the main benefit is catching player integration issues and race conditions before releasing to production. Evan’s team found errors in tracking events, inefficiencies in the way media was being loaded and a previously unknown dependency on free CPU. All of this was possible before deploying into production.

How can you switch from mocks to real media? One goal is to ensure your tests aren’t reliant on any timing, meaning your tests and the player need to be fully asynchronous. To do this, Evan’s team created ‘helpers’ which control media playback (play, pause, seek, etc.) and also wait for a result, with a timeout in case they never receive the resultant tracking event.
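A minimal asyncio sketch of such a helper follows. This is my own illustration of the pattern; LinkedIn’s helpers aren’t public, and their player is JavaScript rather than Python, but the idea is the same: drive the player, then await the expected tracking event with a timeout so a missing event fails the test instead of hanging it.

```python
import asyncio

class PlayerHarness:
    """Waits asynchronously for tracking events, with a timeout
    in case the expected event never arrives."""

    def __init__(self):
        self._events: "asyncio.Queue[str]" = asyncio.Queue()

    def emit(self, event: str) -> None:
        # Called when the (real) player fires a tracking event.
        self._events.put_nowait(event)

    async def wait_for(self, expected: str, timeout: float = 5.0) -> bool:
        # Drain events until we see the one we want, or give up after `timeout`.
        try:
            while True:
                event = await asyncio.wait_for(self._events.get(), timeout)
                if event == expected:
                    return True
        except asyncio.TimeoutError:
            return False
```

A test would then `await harness.wait_for("playing")` after starting playback, with no sleeps or hard-coded timings anywhere.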

Evan finishes with some tips for scaling from his team. Firstly, there’s no point making your tests take longer than necessary, so keep media short. Second: store media locally to avoid network traffic not directly related to the test. Log all events since, at some point, they will be critical to understanding a problem. Lastly, he says to monitor your test servers, since it’s important to make sure they’re not struggling to cope with load, which could affect the results of some of your tests.

Watch now!
Speakers

Evan Farina
Software Engineer,
LinkedIn