Video: Synchronising Geo-Redundant Origins

Synchronised origins in streaming mean that a player can switch from one origin to another without errors or having to restart decoding, allowing a much more seamless viewing experience. Adam Ross, speaking from his experience on the Comcast linear video packaging team, takes us through the pros and cons of two approaches to synchronisation. The discussion centres on video going into an encoder, transcoder and then packager. The video is either split from a single source, which helps keep the video and audio clocks aligned, or the clocks are aligned in the encoder or transcoder through communication between site A and site B.

Keeping segments aligned isn’t too difficult as we just need to keep the naming the same and keep them timed together. Whilst not trivial, manifests have many more layers of metadata to be synchronised, in the form of short-term metadata, like the content currently present in the manifest, and long-term metadata, like the DASH Period. For DASH streams, the Period@id and Period@start need to be the same, and SegmentTimelines need to have the same start number mapping to the same content. For HLS, the variant playlists need to be the same, as does the sequence numbering.
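
To make that concrete, here is a minimal sketch, assuming Python and two already-fetched MPD documents, of the kind of check that tells you whether two origins’ DASH manifests line up on Period@id, Period@start and the SegmentTimeline. It illustrates the idea rather than anything shown in the talk.

```python
import xml.etree.ElementTree as ET

NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

def manifest_fingerprint(mpd_xml: str):
    """Collect the fields that must match across origins: Period@id,
    Period@start and the segment start times in each SegmentTimeline."""
    root = ET.fromstring(mpd_xml)
    fingerprint = []
    for period in root.findall("mpd:Period", NS):
        segment_times = [s.get("t")
                         for s in period.findall(".//mpd:SegmentTimeline/mpd:S", NS)]
        fingerprint.append((period.get("id"), period.get("start"), segment_times))
    return fingerprint

def origins_in_sync(mpd_a: str, mpd_b: str) -> bool:
    """True if both origins' manifests describe the same timeline."""
    return manifest_fingerprint(mpd_a) == manifest_fingerprint(mpd_b)
```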

Adam proposes two methods of doing this. The first is Co-operative Packaging, where each site sends metadata between the packagers so that each makes the same, more informed decisions. However, this is complicated to implement and produces a lot of cross-site traffic, which can introduce latency at the live point. The alternative is a Minimal Synchronisation strategy which relies much more on determinism: given the same output from the transcoder, the packagers should make the same decisions. Each packager does still need to look at the other’s manifest to ensure it stays in sync, and it can resync where the correction isn’t deemed too impactful. Overall, this second method is much simpler.
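
As a sketch of the determinism the second approach leans on, assuming a fixed segment duration and an agreed epoch (both illustrative values, not from the talk), each packager can derive identical segment numbers and names purely from the transcoder’s timestamps:

```python
SEGMENT_DURATION_MS = 2_000   # agreed, fixed segment duration (illustrative)
EPOCH_MS = 0                  # agreed reference point for numbering (illustrative)

def segment_number(pts_ms: int) -> int:
    """The segment index is a pure function of the media timestamp."""
    return (pts_ms - EPOCH_MS) // SEGMENT_DURATION_MS

def segment_name(variant: str, pts_ms: int) -> str:
    """Both sites produce identical names for identical transcoder output,
    so no cross-site coordination is needed."""
    return f"{variant}_{segment_number(pts_ms):010d}.m4s"

# The same transcoded timestamp at site A and site B yields the same name:
assert segment_name("video_1080p", 1_617_283_250_000) == \
       segment_name("video_1080p", 1_617_283_250_000)
```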

Watch now!
Speaker

Adam Ross
Formerly Software Engineer, Comcast

Video: Best Practices for End-to-End Workflow and Server-Side Ad Insertion Monitoring

This video from the Streaming Video Alliance, presented at Mile High Video 2020, looks at the results of recent projects documenting best practices for two important activities: server-side ad insertion (SSAI) and end-to-end (E2E) workflow monitoring. First up is E2E monitoring, which defines a multi-faceted approach to making sure you’re delivering good-quality content well.

This part of the talk is given by Christopher Kulbakas, who introduces us to the document published by the Streaming Video Alliance covering monitoring best practices. The advice centres on three principles: creating a framework, deciding on metrics, and correlation. Christopher explains the importance of monitoring video quality after a transcode or encode, since it’s easy to take a sea of green from your transport layer to indicate that viewers are happy. If your encode looks bad, viewers won’t be happy just because the DASH segments were delivered impeccably.

The guidance helps you monitor your workflow. ‘End to end’ doesn’t imply the whole delivery chain, only ensuring that the part you are responsible for is adequately monitored.

Christopher unveils the principles behind the modular monitoring across the workflow and tech stack:
1) Establish monitoring scope
Clearly delineate your responsibility from that of other parties. Define exactly how and to what standard data will be handled between the parties.

2) Partition workflow with monitoring points
Now that your scope is clear, you can select monitoring points before and after key components such as the transcoder.

3) Decompose tech stack
Here, think of each point in the workflow to be monitored as a single point in a stack of technology. There will be a content layer needing a perceptual quality monitor, a Quality of Service (QoS) layer, and auxiliary layers such as player events, logs and APIs which can be monitored.

4) Describe Methodology
This stage calls for documenting the what, where, how and why of your choices, for instance explaining that you would like to check the manifest and chunks on the output of the packager. You’d do this with HTTP GET requests for the manifest and chunks for all rungs of the ladder, as in the sketch after this list. Once you have finished, you will have a whole set of reasoned monitoring points which you can document and also share with third parties.

5) Correlate results
The last stage is bringing together this data, typically by using an asset identifier. This way, all alarms for an asset can be grouped together and understood as a whole workflow.
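
As a sketch of step 4’s methodology, assuming a hypothetical packager origin and ladder (the URL and rung names below are invented for illustration), a monitoring point might simply HTTP-GET the manifest and a chunk per rung and record the status and size:

```python
import urllib.request

LADDER = ["1080p", "720p", "480p"]        # hypothetical rungs of the ABR ladder
ORIGIN = "https://example.com/channel1"   # hypothetical packager output

def probe(url: str) -> tuple[int, int]:
    """Fetch a URL and return (HTTP status, body size) for the monitor."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status, len(resp.read())

def check_packager_output():
    """HTTP-GET the manifest and a segment of every rung; alerting and
    correlation are left out of this sketch."""
    results = {"manifest": probe(f"{ORIGIN}/manifest.mpd")}
    for rung in LADDER:
        # A real probe would parse the manifest for the newest segment name;
        # here a fixed, hypothetical segment URL stands in for it.
        results[rung] = probe(f"{ORIGIN}/{rung}/latest.m4s")
    return results
```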

End-to-End Server-Side Ad Monitoring

The last part of this talk is from Mourad Kioumgi from Sky, who walks us through a common scenario and how to avoid it. An ad buyer complains their ad didn’t make it to air. Talking to every point in the chain, everyone checks their own logs and says that their function was working, from the schedulers to the broadcast team inserting the SCTE markers. The reality is that if you can’t get to the bottom of this, you’ll lose money as you lose business and give refunds.

The Streaming Video Alliance has considered how to address this through better monitoring and is creating a blueprint and architecture for monitoring SSAI systems.

Mourad outlines these possible issues that can be found in SSAI systems:
1) Duration of content is different to the ad duration.
2) Chunks/manifest are not available or poorly hosted
3) The SCTE marker fails to reach downstream systems
4) Ad campaigns are not fulfilled despite being scheduled
5) Ad splicing components fail to create personalised manifests
6) Over-compression of the advert.

Problems 2, 3, 5 and 6 can be caught by the proposed monitoring, which revolves around adding the Creative ID and Ad ID into the manifest file. This way, problems can be correlated, which particularly improves the telemetry back from the player, which can deliver a problem report and specify which asset was affected. Other monitoring probes are added to watch the manifests and produce automatic audio and video quality metrics. Sky successfully implemented this as a proof of concept with two vendors working together, resulting in a much better overview of their system.
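
A minimal sketch of that correlation idea, assuming the Ad ID and Creative ID are carried as custom attributes on an HLS EXT-X-DATERANGE tag (the attribute names, IDs and player report below are hypothetical, not Sky’s implementation):

```python
import re

# Hypothetical HLS excerpt where the SSAI manifest manipulator has tagged
# the ad break with the Ad ID and Creative ID as custom attributes.
PLAYLIST = ('#EXT-X-DATERANGE:ID="ad-break-1",START-DATE="2020-08-01T10:00:00Z",'
            'DURATION=30.0,X-AD-ID="AD123",X-CREATIVE-ID="CR456"')

def ad_ids(playlist: str):
    """Extract (ad_id, creative_id) pairs from tagged ad breaks."""
    return re.findall(r'X-AD-ID="([^"]+)",X-CREATIVE-ID="([^"]+)"', playlist)

def correlate(player_report: dict, playlist: str):
    """Join a player-side problem report to the campaign it affected."""
    for ad_id, creative_id in ad_ids(playlist):
        if player_report.get("ad_id") == ad_id:
            return {"ad_id": ad_id, "creative_id": creative_id,
                    "error": player_report["error"]}
    return None

print(correlate({"ad_id": "AD123", "error": "buffering during ad"}, PLAYLIST))
```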

Mourad finishes his talk by looking at the future: creating an ad monitoring framework and distributing an agreed framework document for best practices.

Watch now!
Speakers

Christopher Kulbakas
Project Lead, Senior Systems Designer, Media Technology & Infrastructure,
CBC/Radio Canada
Mourad Kioumgi
VOD Solutions Architect,
Sky

Video: Overview of MPEG’s Network-Based Media Processing

Building complex services from microservices is not simple. While making a static workflow can be practical, though time-consuming, making one that can easily be changed to match a business’s changing needs is another matter. If an abstraction layer could be placed over the top of the microservices themselves, it would allow people to concentrate on making the workflow correct and leave the abstraction layer to orchestrate the microservices below. This is what MPEG’s Network-Based Media Processing (NBMP) standard achieves.

Developed to counteract the fragmentation in cloud and single-vendor deployments, NBMP delivers a unified way to describe a workflow with the platform controlled below. Iraj Sodagar spoke at Mile High Video 2020 to introduce NBMP, now published as ISO/IEC 23090-8. NBMP provides a framework that allows you to deploy and control media processing using existing building blocks called functions fed by sources and sinks, also known as inputs and outputs. A Workflow Manager process is used to actually start and control the media processing, fed with a workflow description that describes the processing wanted as well as the I/O formats to use. This is complemented by a Function Discovery API and a Function Repository to discover and get hold of the functions needed. The Workflow Manager gets the function and uses the Task API to initiate the processing of media. The Workflow Manager also deals with finding storage and understanding networking.

Next, Iraj takes us through the framework APIs which allow the abstraction layer to operate, in principle, across multiple cloud providers. The standard contains three APIs: Workflow, Task and Function. Each follows a CRUD pattern, with create, retrieve (discover), update and delete actions that apply to workflows, tasks and functions, e.g. CreateWorkflow. The APIs can operate synchronously or asynchronously.
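
As a rough sketch of driving the Workflow API, here is a hypothetical CreateWorkflow call; the JSON below only gestures at NBMP’s workflow description document (the real schema is defined in ISO/IEC 23090-8) and the endpoint path is an assumption:

```python
import json
import urllib.request

# Loosely-shaped workflow description: real NBMP descriptors differ in detail.
WORKFLOW_DESCRIPTION = {
    "general": {"id": "abr-transcode-package", "name": "ABR packaging"},
    "input": [{"stream-id": "contribution-1", "protocol": "rtp"}],
    "output": [{"stream-id": "hls-out", "protocol": "http"}],
    "processing": {"functions": ["transcode", "package"]},
}

def create_workflow(api_base: str) -> bytes:
    """CreateWorkflow: POST the workflow description to the Workflow API
    so the Workflow Manager can instantiate and connect the tasks."""
    req = urllib.request.Request(
        f"{api_base}/workflows",              # hypothetical endpoint path
        data=json.dumps(WORKFLOW_DESCRIPTION).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()   # the created workflow resource, per CRUD semantics
```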

Split rendering is possible by splitting the workflow into sub-workflows, which allows you to run certain tasks nearer to certain resources, say storage, or in certain locations, as in the case of edge computing where you want to maintain low latency by processing close to the user. In fact, NBMP has been created with a view to being usable by 5G operators and is the subject of two study items in 3GPP.

Watch now!
Speaker

Iraj Sodagar
Principal Researcher
Tencent America

Video: Comparison of EVC and VVC against HEVC and AV1

AV1’s royalty-free status continues to be very appealing, but in raw compression is it now losing ground to newer codecs such as VVC? EVC has also introduced a royalty-free model which could further detract from AV1’s appeal, and is certainly an improvement over HEVC’s patent debacle. Within broadcast, we have very much moved into an ecosystem of patents rather than the MPEG-2/AVC ‘monoculture’ of the 90s. What better way to get a feel for the codecs than to put them to the test?

Dan Grois from Comcast has been looking at the new codecs VVC and EVC against AV1 and HEVC. VVC and EVC were both released last year and, alongside LCEVC, are the three most recent video codecs from MPEG (VVC was a collaboration between MPEG and the ITU). In the same way that HEVC is known as H.265, VVC can be called H.266, and it draws its heritage from HEVC too. EVC, on the other hand, is a new beast whose roots are certainly shared with much of MPEG’s previous DCT-based codecs, but uniquely it has a mode that is totally royalty-free. Moreover, its higher-performance mode, which does include patented technology, can be configured to exclude any individual patents you don’t wish to use, adding some confidence that businesses remain in control of their liabilities.

Dan starts by outlining the main features of the four codecs, discussing their partitioning methods and prediction capabilities, which range from inter-picture and intra-picture prediction to predicting chroma from the luma picture. Some of these techniques have been tackled in previous talks, such as this one, also from Mile High Video, this EVC overview and, finally, this excellent deep dive from SMPTE into all of the codecs discussed today plus LCEVC.

Dan explains the testing he did, which was based on the reference encoder models. These are encoders that implement all of the features of a codec but are not necessarily optimised for speed like a real-world implementation would be. Part of the work of delivering real-world implementations is using sophisticated optimisations to get the maths done quickly, and some is choosing which parts of the standard to implement. A reference encoder doesn’t skimp on implementation complexity, and there is seldom much time to optimise speed. However, they are well known and can be used to benchmark codecs against each other.

AV1 was tested in two configurations since it needs special treatment in this comparison. Dan explains that AV1 doesn’t have the same approach to GOPs as MPEG, so it’s well known that fixing its QP will make it inefficient; however, this is what’s necessary for a fair comparison. In addition, it was also run in VBR mode, which allows it to use its GOP structure to the full, such as AV1’s invisible frames, which carry data that can be referenced by other frames but which are never actually displayed.

The videos tested range from 4K 10-bit down to low-resolution 8-bit. As expected, VVC outperforms all the other codecs. Against HEVC, it’s around 40% better, though carrying with it a factor of 10 increase in encoding complexity. Note that these objective metrics tend to underrepresent subjective results by 5-10%. EVC consistently achieved 25 to 30% improvements over HEVC with only 4.5x the encoder complexity. As expected, AV1’s fixed-QP mode underperformed and increased the data rate on anything which wasn’t UHD material, but when run in VBR mode it managed 20% over HEVC with only a 3x increase in complexity.
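
The talk reports these gains as percentage bitrate differences; comparisons like this are typically computed as a Bjøntegaard Delta rate (BD-rate) over matched rate/quality points. Here is a minimal sketch with made-up rate/PSNR numbers purely for illustration, not figures from the talk:

```python
import numpy as np

def bd_rate(anchor, test):
    """Bjøntegaard Delta rate: average bitrate difference (%) of a test codec
    against an anchor at equal quality, from (kbps, PSNR) points."""
    r1, q1 = np.log10([p[0] for p in anchor]), [p[1] for p in anchor]
    r2, q2 = np.log10([p[0] for p in test]), [p[1] for p in test]

    # Fit a cubic polynomial of log-bitrate as a function of quality
    p1, p2 = np.polyfit(q1, r1, 3), np.polyfit(q2, r2, 3)

    # Integrate both curves over the overlapping quality range
    lo, hi = max(min(q1), min(q2)), min(max(q1), max(q2))
    int1 = np.polyval(np.polyint(p1), hi) - np.polyval(np.polyint(p1), lo)
    int2 = np.polyval(np.polyint(p2), hi) - np.polyval(np.polyint(p2), lo)

    # Average difference in log-bitrate, converted to a percentage
    avg_diff = (int2 - int1) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# Hypothetical rate/PSNR points (kbps, dB) purely for illustration
hevc = [(1000, 36.0), (2000, 38.5), (4000, 40.8), (8000, 42.6)]
vvc  = [(600, 36.1), (1200, 38.7), (2400, 41.0), (4800, 42.8)]
print(f"BD-rate of VVC vs HEVC: {bd_rate(hevc, vvc):.1f}%")  # negative = bitrate saving
```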

Watch now!
Speaker

Dan Grois
Principal Researcher,
Comcast