Video: Examining the OTT Technology Stack

This video looks at the whole streaming stack, asking what’s here now, which trends are coming to the fore and how things will be done better in the future. Whatever part of the stack you’re optimising, it’s vital to have a way to measure the viewer’s QoE (Quality of Experience). In most workflows, a lot of work goes into implementing redundancy so that the viewer sees no impact despite problems happening upstream.

The Streaming Video Alliance’s Jason Thibeault digs deeper with Harmonic’s Thierry Fautier, Brenton Ough from Touchstream, SSIMWAVE’s Hojatollah Yeganeh and Damien Lucas from Ateme.

Talking about codecs, Thierry makes the point that only 7% of devices currently support AV1 and, with 10 billion devices in the world supporting AVC, he sees a lot of benefit in continuing to optimise AVC rather than waiting for VVC support to become commonplace. When asked to identify trends in the marketplace, the panel picks out the move to the cloud as a big influence, driving not only the ability to scale but also the functions themselves. Gone are the days, Brenton says, when vendors would ‘lift and shift’ into the cloud. Rather, products are becoming cloud-native, a vital step towards functions and products that take full advantage of the cloud, such as being able to swap the order of functions in a workflow. Just-in-time packaging is cited as one example.


Other changes include server-side ad insertion (SSAI), which works a lot better in the cloud, and sub-partitioning of viewers, where you deliver different ads to different people, which is now more practical. Real-time access to CDN data, giving you near-immediate feedback into your streaming process, is another game-changer that is increasingly available.

Open Caching is discussed by the panel as a vital step forward and one of many areas where standardisation is desperately needed. ISPs are fed up, we hear, with each service bringing its own caching box; it’s time ISPs took a cloud-based approach to their infrastructure and enabled multi-use servers, potentially containerised, to ease this ‘bring your own box’ mentality and to take back control of their internal infrastructure.

HDR gets a brief mention in light of the Euro soccer championships currently on air and the Tokyo Olympics soon to start. Thierry says 38% of Euro viewership is over OTT and HDR is increasingly common, though SDR is still in the majority. HDR is more complex than simply upping the resolution and requires much more care over the screen on which it’s watched. This makes adopting HDR more difficult, which may be one reason adoption is not yet higher.

The discussion turns to uses for ‘edge’ processing, which the panel agrees is a really important part of cloud delivery. Processing API requests at the edge, SSAI and content blackouts are all examples of where the lower-latency response of edge compute works really well in the workflow. The session ends with a Q&A.

Watch now!
Speakers

Thierry Fautier
VP Video Strategy.
Harmonic Inc.
Damien Lucas
CTO,
Ateme
Hojatollah Yeganeh
Research Team Lead
SSIMWAVE
Brenton Ough
CEO & Co-Founder,
Touchstream
Moderator: Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Getting Back Into the Game

The pandemic has obviously hurt live broadcasters, sports broadcasters in particular, but as the world starts its slow fight back to normality we’re seeing sports back on the menu. How has streaming suffered and benefited? This video looks at how technology has changed in response, how piracy of content has changed and how close we are to business as usual.

Jason Thibeault from the Streaming Video Alliance brings together Andrew Pope from Friend MTS, Brandon Farley from Streaming Global, SSIMWAVE’s Carlos Bacquet, Synamedia’s Nick Fielibert and Conviva’s Will Penson to get an overview of the industry’s response to the pandemic over the last year and its plans for the future.

The streaming industry spans a range of companies, from generalist publishers, like many broadcasters, to specialists such as DAZN and NFL Game Pass. During the pandemic, the generalist publishers were able to rely more on their other genres and back catalogues, or even news, which saw a big increase in interest. This is not to say the pandemic made life easy for anyone. Sports broadcasters were undoubtedly hit, though companies such as DAZN, which shows a massive range of sports, were able to dig deep into less mainstream sports from around the world, in contrast with services such as NFL Game Pass which can’t show any new games if the season is postponed. We’ve heard previously how esports benefited from the pandemic.

The panel discusses the changes seen over the last year. Views on security were mixed, with one company seeing little increase in security requests and another seeing a boost in requests for auditing and the like, so that people could be ready for when sports streaming was ‘back’. There was renewed interest in how to make sports streaming better, where better for some means better scaling, for others lower latency, while many others are looking to bake in consistency and quality; “you can’t get away with ‘ok’ anymore.”

SSIMWAVE pointed out that some customers were having problems keeping channel quality high and were even changing encoder settings to deal with re-runs of older footage, which was of poorer quality than today’s sharp 1080p coverage. “Broadcast has set the quality mark” and streaming is trying to achieve parity. Netflix has shown that good quality ends up being watched on good devices; they’re not alone in being a streaming service where 50 per cent of content is watched on TVs rather than streaming devices. When your content lands on a TV, there’s no room for compromise on quality.

Crucially, the panel agrees that the pandemic has not been a driver for change. Rather, it has been an accelerant of change that was already desired and even planned for. Take the age-old problem of bandwidth in a house with a number of people streaming, on video calls and otherwise using the internet: any bitrate you can cut out is helpful to everyone.

Next, Carlos from Conviva takes us through graphs for the US market showing how sports streaming dropped 60% at the beginning of the lockdowns, only to rebound once spectator-free sporting events started up; it is now running at around 50% higher than before March 2020. News has shown a massive uptick and currently retains a similar increase to sports, the main difference being that it continues to be very volatile. The difficulties of maintaining news output throughout the pandemic are discussed in this video from the RTS.

Before hearing the panel’s predictions, we hear their thoughts on the challenges in improving. One issue highlighted is that sport is much more complex to encode than other genres, for instance news. In fact, tests show that some sports content scores 25% lower than news for quality, according to SSIMWAVE, who acknowledge that snooker is less challenging than sailing. Delivering top-quality sports content remains a challenge, particularly as the drive for low latency requires smaller and smaller segment sizes, which restrict your options for GOP length and bandwidth.
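To make that constraint concrete, here’s a minimal sketch (my own, not from the panel) of how a 2-second, low-latency segment target pins the GOP length in an FFmpeg/x264 HLS encode; the file names, frame rate and bitrate are placeholder assumptions.

```python
import subprocess

FPS = 50
SEGMENT_SECONDS = 2            # low-latency target; shorter segments force shorter GOPs
GOP = FPS * SEGMENT_SECONDS    # every segment must start on an IDR frame

subprocess.run(
    [
        "ffmpeg", "-y", "-i", "match.mp4",       # placeholder input
        "-c:v", "libx264", "-b:v", "4M",
        "-g", str(GOP),                           # keyframe interval = one segment
        "-keyint_min", str(GOP),                  # disallow shorter GOPs
        "-sc_threshold", "0",                     # no extra keyframes on scene cuts
        "-f", "hls", "-hls_time", str(SEGMENT_SECONDS),
        "out.m3u8",
    ],
    check=True,
)
```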

To keep things looking good, the panel suggests content-aware encoding, where machine learning analyses the video and feeds back into the encoder settings. Region-of-interest coding is another prospect for sports, where close-ups tend to want more detail in the centre, as you look at the player, while wide shots aim to capture detail everywhere. WebRTC has been talked about a lot, but not many implementations have been seen. The panel makes the point that advances in scalability have been noticeable for CDNs specialising in WebRTC, but scalability still lags behind other tech by perhaps a factor of three. An alternative, Synamedia points out, is HESP. Created by THEOplayer, HESP delivers low-latency, chunked streaming and very fast ‘channel change’ times.

Watch now!
Speakers

Andrew Pope
Senior Solutions Architect,
Friend MTS
Brandon Farley
SVP & Chief Revenue Officer,
Streaming Global
Carlos Bacquet
Manager, Sales Engineers,
SSIMWAVE
Nick Fielibert
CTO, Video Network
Synamedia
Will Penson
Vice President, GTM Strategy & Operations,
Conviva
Jason Thibeault
Executive Director,
Streaming Video Alliance

Video: Per-Title Encoding in the Wild

How deep do you want to go to make sure viewers get the absolute best quality streamed video? Over the past few years it has become common not simply to choose 7 bitrates for a streamed service and encode everything to those, but rather to at least vary the bitrates for each video. In this talk we examine why stopping there leaves bitrate savings on the table, savings which, in turn, mean lower bitrates for your viewers, faster time-to-play and an overall better experience.

Jan Ozer starts with a look at the evolution of bitrate optimisation. It started with Beamr and, everyone’s favourite, FFmpeg, both of which re-encode every frame until they get the best quality. FFmpeg’s CRF mode changes the quantiser parameter for each frame to maintain the same quality throughout the whole file, though with a variable bitrate. Beamr would encode each frame repeatedly, reducing the bitrate until it hit the desired quality. These worked well but missed out on a big trick…
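As a minimal illustration of the CRF approach (my own sketch, not Jan’s), this is how FFmpeg’s constant-quality mode might be driven from Python; the file paths and CRF value are placeholder assumptions.

```python
import subprocess

def encode_crf(source: str, output: str, crf: int = 23) -> None:
    """Encode with x264 in CRF mode: constant quality, variable bitrate.

    Lower CRF means higher quality and larger files; the encoder varies
    the quantiser per frame to hold perceived quality roughly constant.
    """
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", source,          # input file (placeholder path)
            "-c:v", "libx264",     # H.264/AVC encoder
            "-crf", str(crf),      # constant rate factor (quality target)
            "-preset", "medium",   # speed/efficiency trade-off
            "-c:a", "copy",        # leave audio untouched
            output,
        ],
        check=True,
    )

encode_crf("source.mp4", "crf23.mp4")
```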

Over the years, it has become clear that sometimes 720p at 1Mbps looks better than 1080p at 1Mbps. This isn’t always the case and depends on the source footage; much rolling news will differ from premium sports content in terms of sharpness and temporal complexity. So, really, the resolution needs to be assessed alongside the data rate. This idea fed into Netflix’s per-title encoding. By re-encoding a title hundreds of times at different resolutions and data rates, they were able to determine the ‘convex hull’, a curve showing the optimum balance between quality, bitrate and resolution. That was back in 2015. Moving beyond that, we’ve started to consider more factors.
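A much-simplified sketch of the per-title idea: run trial encodes across a grid of resolutions and bitrates, score them, and keep only the rungs where spending more bitrate actually buys more quality. The measure_quality callable is a hypothetical stand-in for a metric such as VMAF run on those trial encodes.

```python
# Simplified per-title ladder construction: for each candidate bitrate keep
# the best-scoring resolution, then walk up the bitrates and drop any rung
# that doesn't improve quality. measure_quality(res, kbps) is assumed to
# return a quality score for a trial encode of this specific title.

RESOLUTIONS = [(1920, 1080), (1280, 720), (960, 540), (640, 360)]
BITRATES_KBPS = [400, 800, 1500, 2500, 4000, 6000]

def build_ladder(measure_quality):
    best_per_bitrate = []
    for kbps in BITRATES_KBPS:
        scored = [(measure_quality(res, kbps), res) for res in RESOLUTIONS]
        quality, res = max(scored)            # best resolution at this bitrate
        best_per_bitrate.append((kbps, res, quality))

    ladder, last_quality = [], float("-inf")
    for kbps, res, quality in best_per_bitrate:
        if quality > last_quality:            # more bitrate must buy more quality
            ladder.append((kbps, res))
            last_quality = quality
    return ladder
```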

The next evolution is fairly obvious, really: make these evaluations not for each video but for each shot. Doing this, Jan explains, offers bitrate improvements of 28% for AVC and more for other codecs. It’s more complex than per-title because the stream itself changes, for instance GOP sizes, so whilst we know this is something Netflix is using, there are no commercial implementations available yet.
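Conceptually, per-shot optimisation just runs the same search for every shot rather than once per title; the rough sketch below reuses build_ladder from the previous snippet, with detect_shots and encode_range as hypothetical helpers since, as noted, no commercial implementation exists.

```python
# Rough sketch only: optimise each shot independently so encoder settings
# (and even resolution) can change at shot boundaries. detect_shots() and
# encode_range() are hypothetical helpers; build_ladder() is the per-title
# search sketched above, here fed with per-shot quality measurements.

def encode_per_shot(source, detect_shots, encode_range, shot_quality):
    renditions = []
    for start, end in detect_shots(source):              # (start, end) in seconds
        measure = lambda res, kbps: shot_quality(source, start, end, res, kbps)
        ladder = build_ladder(measure)
        renditions.append(encode_range(source, start, end, ladder))
    return renditions                                     # stitched per rung downstream
```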

Pushing these ideas further, perhaps the streaming service should take into account the device on which you are viewing. Some TVs typically only ever take the top two rungs of the ladder, yet many mobile devices have low-resolution screens and never get around to pulling the higher bitrates. Profiling a device based on either its model or its historic activity allows you to offer different ABR ladders and so deliver a better experience.
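Here’s what device-aware ladder selection might look like; the device classes, rung cut-offs and use of historically observed throughput are illustrative assumptions rather than any particular service’s logic.

```python
# Sketch: trim the ABR ladder offered to a client based on a crude device
# profile and, optionally, the best throughput previously seen from it.

FULL_LADDER = [(6000, "1080p"), (4000, "1080p"), (2500, "720p"),
               (1500, "720p"), (800, "540p"), (400, "360p")]

def ladder_for(device_model, max_seen_kbps=None):
    if device_model.startswith("tv-"):
        # Big screens: keep the top rungs, drop the very low ones.
        ladder = [rung for rung in FULL_LADDER if rung[0] >= 1500]
    elif device_model.startswith("phone-lowres-"):
        # Small, low-resolution screens rarely benefit from 1080p rungs.
        ladder = [rung for rung in FULL_LADDER if rung[0] <= 2500]
    else:
        ladder = list(FULL_LADDER)

    if max_seen_kbps:  # historic throughput for this device, if known
        ladder = [rung for rung in ladder if rung[0] <= max_seen_kbps] or ladder[-1:]
    return ladder

print(ladder_for("phone-lowres-a1", max_seen_kbps=1600))
```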

All of this needs to be enabled by automatic, objective metrics, so the metrics need to look at the right aspects of the video. Jan explains that PSNR and MS-SSIM, though tried and trusted in the industry, only measure spatial information. Jan gives an overview of the alternatives. VMAF, he says, adds a detail-loss metric, but it’s not until we get to Brightcove’s PW-SSIM that aspects such as device information are taken into account. SSIMPLUS does this and also considers wide colour gamut, HDR and frame rates. Similarly, ATEME’s ‘Quality Vector’ considers frame rate and HDR.
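For reference, PSNR itself is simple to compute, which is part of why it persists despite the limitations described; here is a small NumPy sketch for a single pair of 8-bit frames.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """PSNR between two frames of identical shape (8-bit assumed).

    Purely spatial: it compares pixel differences frame by frame and knows
    nothing about motion, viewing device or display characteristics, which
    is the gap the newer metrics try to close.
    """
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")              # identical frames
    return 10 * np.log10(peak ** 2 / mse)
```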

Dr. Abdul Rehman follows Jan with an introduction to SSIMWAVE’s technologies, focusing on their ability to understand what quality the viewer will see. This allows a provider to choose whether to deliver a quality of, say, ‘70’ or ‘80’. Each service is different and its demographics will expect different things. It’s important to meet viewer expectations to avoid churn, but it’s in everyone’s interest to keep the data rate as low as possible.

Abdul gives the example of banding, something that is not easily picked up by many metrics and so can be introduced as the encode optimiser keeps reducing the bitrate, oblivious to the obvious banding. He says that since SSIMPLUS is not referenced to a source, it can give an accurate viewer score whatever the source material. Remember that if you use PSNR, you are comparing against your source; if the source is poor, your PSNR score might still end up close to the maximum. The trouble is, your viewers will still see the poor video you send them, not caring whether that’s due to encoding or a bad source.

The video ends with a Q&A.

Watch now!
Speakers

Jan Ozer
Principal, Streaming Learning Center
Contributing Editor, Streaming Media
Abdul Rehman
CEO,
SSIMWAVE

Video: Cloud Encoding – Overview & Best Practices

There are so many ways to work in the cloud. You can use a monolithic solution that does everything for you, which by its nature is almost guaranteed to under-deliver on features in one way or another for any non-trivial workflow. Or you can pick best-of-breed functional elements and plumb them together yourself. With the former, you get a fast time to market and built-in simplicity along with some known limitations. With the latter, you may get exactly what you need, to the standard you wanted, but there’s a lot of work to implement and test the system.

Tom Kuppinen from Bitmovin joins Christopher Olekas from SSIMWAVE, host of this Kitchener-Waterloo Video Tech talk on cloud encoding. After the initial introduction to ‘middle-aged’ startup Bitmovin, Tom talks about how ‘agility in the cloud’ means being cloud-agnostic. This is the as-yet-unmentioned elephant in the room for broadcasters, who are so used to having extreme redundancy. Whether it’s the BBC’s “no closer than 70m” requirement for separation of circuits or the standard deployment methodology for SMPTE ST 2110 systems, which run two totally independent networks, putting everything into one cloud provider really isn’t in the same ballpark. AWS has availability zones, of course, which are one of a number of great ways of reducing the blast radius of problems. But surely there’s no better way of reducing the impact of an AWS problem than having part of your infrastructure in another cloud provider.

Bitmovin has implementations on Azure, Google Cloud and AWS, along with other cloud providers. In this author’s opinion, it’s a sign of the maturity of the market that this is being thought about, but few companies are truly using multiple cloud providers in an agnostic way; this will surely change over the next five years. For reliable and repeatable deployments, API control is your best bet. For detailed monitoring, you will need APIs. For connecting together solutions from different vendors, you’ll need APIs. It’s no surprise that Bitmovin say they program ‘API first’; it’s a really important element of any medium-to-large deployment.
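To give a flavour of API-first control, here is a hedged sketch of submitting an encode job to a REST endpoint from Python; the URL, header and payload fields are entirely hypothetical placeholders, not Bitmovin’s actual API.

```python
import json
import urllib.request

# Hypothetical REST call illustrating API-first workflow control; the
# endpoint, payload fields and header names are placeholders, not any
# vendor's real API.

API_BASE = "https://api.example-encoder.com/v1"   # placeholder endpoint
API_KEY = "REPLACE_ME"                            # placeholder credential

def submit_job(source_url, preset="per-title-h264"):
    payload = json.dumps({"input": source_url, "preset": preset}).encode()
    request = urllib.request.Request(
        f"{API_BASE}/encodings",
        data=payload,
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:   # returns job metadata
        return json.load(response)
```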

When it comes to the encoding itself, per-title encoding helps reduce bitrates and storage. Tom explains how it analyses each video and chooses the best combination of parameters for that title. In the Q&A, Tom confirms they are working on per-scene encoding, which promises further savings still.

To add to the complexity of a best-of-breed encoding solution, using best-of-breed codecs is part and parcel of the value. Bitmovin were early with AV1 and also support VP9 and HEVC. They can distribute the encoding so that it runs in parallel across as many cores as needed; their initial AV1 offering was spread over more than 200 cores.
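The split-encode-stitch pattern behind that kind of parallelism can be sketched on a single machine; a cloud implementation would fan the middle step out across many instances rather than local processes. File names, chunk length and codec settings below are placeholder assumptions.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

# Sketch of chunked parallel encoding: split the mezzanine into chunks,
# encode the chunks concurrently, then concatenate the results.

def split(source, chunk_seconds=60):
    subprocess.run(
        ["ffmpeg", "-y", "-i", source, "-c", "copy", "-map", "0",
         "-f", "segment", "-segment_time", str(chunk_seconds),
         "-reset_timestamps", "1", "chunk_%03d.mp4"],
        check=True,
    )

def encode_chunk(chunk):
    out = chunk.replace("chunk_", "enc_")
    subprocess.run(["ffmpeg", "-y", "-i", chunk,
                    "-c:v", "libx264", "-crf", "23", out], check=True)
    return out

def encode_parallel(chunks):
    with ProcessPoolExecutor() as pool:       # one local process per core
        return list(pool.map(encode_chunk, chunks))

def concatenate(encoded_chunks, output="final.mp4"):
    with open("list.txt", "w") as f:
        f.writelines(f"file '{name}'\n" for name in encoded_chunks)
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "list.txt", "-c", "copy", output], check=True)
```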

Tom talks about how the cloud-based codecs integrate into workflows and reveals that HDR conversion, instance pre-warming, advanced subtitling support and AV1 improvements are on the roadmap, which leads on to the Q&A. Questions include whether it’s difficult to deploy on multiple clouds, which HDR standards are likely to become the favourites, what the pain points are around live streaming, and how to handle metadata.

Watch now!
Speakers

Tom Kuppinen
Senior Sales Engineer,
Bitmovin
Moderator: Christopher Olekas
Senior Software Engineer,
SSIMWAVE Inc.