Video: Per-Title Encoding in the Wild

How deep do you want to go to make sure viewers get the absolute best quality streamed video? Over the past few years it’s become common not simply to pick, say, seven bitrates for a streaming service and encode everything to them, but at least to vary the bitrate for each video. In this talk we examine why stopping there leaves bitrate savings on the table which, in turn, means bandwidth savings for your viewers, faster time-to-play and an overall better experience.

Jan Ozer starts with a look at the evolution of bitrate optimisation. It started with Beamr and, everyone’s favourite, FFmpeg, both of which can re-encode every frame until they reach the best quality. FFmpeg’s CRF mode changes the quantiser parameter for each frame to maintain the same quality throughout the whole file, albeit with a variable bitrate. Beamr encodes each frame repeatedly, reducing the bitrate until it hits the desired quality. These approaches worked well but missed out on a big trick…
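
As a concrete illustration of the CRF approach Jan describes, here is a minimal sketch of a constant-quality encode driven from Python. The file names, CRF value and preset are illustrative choices, not anything prescribed in the talk.

```python
import subprocess

# Constant-quality (CRF) encode: the encoder varies the quantiser frame by
# frame to hold quality roughly constant, so the bitrate floats with the
# content. "input.mp4", the CRF value and the preset are illustrative.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264",
    "-crf", "23",          # lower CRF = higher quality and higher bitrate
    "-preset", "medium",
    "-c:a", "copy",
    "output_crf23.mp4",
], check=True)
```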

Over the years, it’s become clear that sometimes 720p at 1Mbps looks better than 1080p at 1Mbps. This isn’t always the case and depends on the source footage: rolling news differs a lot from premium sports content in terms of sharpness and temporal complexity. So, really, resolution needs to be assessed alongside data rate. This thinking fed into Netflix’s per-title encoding. By re-encoding a title hundreds of times at different resolutions and data rates, they were able to determine the ‘convex hull’, a curve showing the optimum balance between quality, bitrate and resolution. That was back in 2015. Moving beyond that, we’ve started to consider more factors.
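
To make the per-title idea concrete, here is a rough sketch of the brute-force approach: encode a set of candidate (resolution, bitrate) pairs, score each rendition against the source, and keep the best-scoring resolution at each bitrate. This is not Netflix’s actual pipeline; the ladder values are invented and `measure_quality` is a hypothetical stand-in for a metric such as VMAF.

```python
import subprocess

# Candidate ladder of (height, bitrate in kbps) pairs; values are illustrative.
CANDIDATES = [
    (1080, 6000), (1080, 4500), (1080, 3000),
    (720, 3000), (720, 2000), (720, 1000),
    (540, 1500), (540, 1000), (540, 600),
]

def encode(src, height, kbps):
    """Encode one candidate rendition with FFmpeg and return its file name."""
    out = f"enc_{height}p_{kbps}k.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale=-2:{height}",
        "-c:v", "libx264", "-b:v", f"{kbps}k",
        out,
    ], check=True)
    return out

def measure_quality(reference, distorted):
    """Hypothetical helper: return a quality score for `distorted` measured
    against `reference`, e.g. VMAF from a tool of your choice."""
    raise NotImplementedError

def per_title_hull(src):
    """For each bitrate keep the resolution that scores best; the winning
    points approximate the convex hull of quality vs bitrate for this title."""
    best = {}
    for height, kbps in CANDIDATES:
        rendition = encode(src, height, kbps)
        score = measure_quality(src, rendition)
        if kbps not in best or score > best[kbps][1]:
            best[kbps] = (height, score)
    return best
```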

The next evolution is fairly obvious really: make these evaluations not per video but per shot. Doing this, Jan explains, offers bitrate improvements of 28% for AVC and more for other codecs. It’s more complex than per-title because the stream itself changes, for instance in GOP sizes, so whilst we know Netflix is using it, there are currently no commercial implementations available.
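
Per-shot optimisation first needs shot boundaries. One rough way to approximate them, sketched below, is FFmpeg’s scene-change score; this is only an illustration rather than the method Netflix uses, and the 0.4 threshold is an arbitrary example value. Each shot could then be run through a per-shot version of the ladder search above.

```python
import re
import subprocess

def detect_shot_changes(src, threshold=0.4):
    """Return approximate shot-change timestamps (in seconds) using FFmpeg's
    scene-change score; the threshold is an illustrative choice."""
    result = subprocess.run(
        ["ffmpeg", "-i", src,
         "-vf", f"select='gt(scene,{threshold})',showinfo",
         "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # showinfo logs each selected frame to stderr, including its pts_time.
    return [float(t) for t in re.findall(r"pts_time:([0-9.]+)", result.stderr)]
```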

Pushing these ideas further, perhaps the streaming service should take into account the device on which you are viewing. Some TVs typically only ever take the top two rungs of the ladder, yet many mobile devices have low-resolution screens and never get around to pulling the higher bitrates. Profiling a device, based on either its model or its historic activity, allows you to offer different ABR ladders and a better experience.
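
As a simple illustration of device-aware ladders, the sketch below trims the rungs a client is offered based on what its screen can usefully display; the ladder values and device profiles are invented for the example.

```python
# Full ladder as (height, bitrate in kbps) pairs; values are illustrative.
FULL_LADDER = [(2160, 16000), (1080, 6000), (1080, 4500),
               (720, 3000), (720, 2000), (540, 1200), (360, 600)]

# Hypothetical device profiles, derived from model data or historic activity.
DEVICE_MAX_HEIGHT = {"smart_tv": 2160, "tablet": 1080, "phone": 720}

def ladder_for(device_class):
    """Offer only the rungs this class of device can usefully display."""
    max_height = DEVICE_MAX_HEIGHT.get(device_class, 1080)
    return [(h, b) for h, b in FULL_LADDER if h <= max_height]

print(ladder_for("phone"))  # [(720, 3000), (720, 2000), (540, 1200), (360, 600)]
```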

All of this needs to be enabled by automatic, objective metrics, so the metrics need to measure the right aspects of the video. Jan explains that PSNR and MS-SSIM, though tried and trusted in the industry, only measure spatial information. He gives an overview of the alternatives. VMAF, he says, adds a detail loss metric, but it’s not until we get to PW-SSIM from Brightcove that aspects such as device information are taken into account. SSIMPLUS does this and also considers wide colour gamut, HDR and frame rates. Similarly, ATEME’s ‘Quality Vector’ considers frame rate and HDR.
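
For reference, FFmpeg can score a rendition against its source directly; the command below follows the documented usage of the psnr filter (swap in ssim, or libvmaf if your FFmpeg build includes it, for the other metrics mentioned). The file names are illustrative.

```python
import subprocess

# Compare an encoded rendition (first input) against its source (second
# input); the psnr filter prints an average score summary to stderr.
subprocess.run([
    "ffmpeg", "-i", "rendition.mp4", "-i", "source.mp4",
    "-lavfi", "psnr", "-f", "null", "-",
], check=True)
```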

Dr. Abdul Rehman follows Jan with his introduction to SSIMWAVE’s technologies and focuses on their ability to understand what quality the viewer will see. This allows a provider to choose whether to deliver a quality of ’70’ or, say, ’80’. Each service is different and the demographics will expect different things. It’s important to meet viewer expectation to avoid churn, but it’s in everyone’s interest to keep the data rate as low as possible.

Abdul gives the example of banding, which is not easily picked up by many metrics and so can be introduced as the encode optimiser keeps reducing the bitrate, oblivious to the obvious banding. He says that since SSIMPLUS is not referenced to a source, it can give an accurate viewer score no matter the source material. Remember that if you use PSNR, you are comparing against your source; if the source is poor, your PSNR score might end up close to the maximum. The trouble is, your viewers will still see the poor video you send them, not caring whether this is due to encoding or a bad source.

The video ends with a Q&A.

Watch now!
Speakers

Jan Ozer
Principal, Streaming Learning Center
Contributing Editor, Streaming Media
Abdul Rehman
CEO,
SSIMWAVE

Video: State of the Streaming Market 2021

Streaming Media is back to take the pulse of the streaming market, following on from their recent mid-year survey measuring the impact of the pandemic. This is the third annual snapshot of the state of the streaming market, which will be published by Streaming Media in March. To give us this sneak peek, Eric Schumacher-Rasmussen is joined by colleague Tim Siglin and Harmonic Inc.’s Robert Gambino.

They start off with a look at the demographics of the respondents. It’s no surprise that North America is well represented, as Streaming Media is US-based and both the USA and Canada have very strong broadcast markets in terms of publishers and vendors. Europe is represented to the tune of 14%, and South America’s representation has doubled, which is in line with other trends showing notable growth in the South American market. In terms of individuals, exec-level and ‘engineering’ respondents were equally balanced, with a few changes in the types of institutions represented: education and houses of worship have both grown in representation since the last survey.

Of responding companies, 66% said that they both create and distribute content, a percentage that continues to grow. This is indicative, the panel says, of the barrier to entry for distribution continuing to fall: CDNs are relatively low cost and time to market can be measured in weeks. Asked which type of streaming they are involved in, live and on-demand were almost equal for the first time in this survey’s history. Robert says he’s seen a lot of companies taking to the cloud to deliver pop-up channels, and also that streaming ecosystems are better attuned to live video than they used to be.

Reading the news, it seems that there’s a large migration into the cloud, but is that shown in the data? When asked about their plans to move to the cloud, around a third had already moved but only a quarter said they had no plans, which means there is plenty of room for growth for both cloud platforms and vendors. In terms of the service itself, video quality was the top ‘challenge’ identified, followed by latency, scalability and buffering respectively. Robert points out that better codecs delivering lower bitrates help alleviate all of these problems, as well as improving time-to-play and reducing bandwidth and storage costs.

There have been a lot of talks on dynamic server-side ad insertion in 2020, including for use with targeted advertising, but who’s actually adopting it? Over half of respondents indicated they weren’t going to move into that sphere, likely because many governmental and educational services don’t need advertising in the first place. But 10% are planning to implement it within the next 12 months, which represents a doubling of adoption, so growth is not slow. Robert’s experience is that many people in ad sales are still used to selling on aggregate and don’t understand the power of targeted advertising and, indeed, how it works. Education, he feels, is key to continuing growth.

The panel finishes by discussing what companies hope to get out of the move to virtualised or cloud infrastructure. Flexibility comes in just above reliability, with cost savings only third. Robert comes back to pop-up channels which, based on the release of a new film or a sports event, have proved popular and are a good example of the flexibility that companies can easily access and monetise. There are a number of companies heavily investing in private cloud as well as those migrating to public cloud. Either way, these benefits are available to companies who invest and, as we’re seeing in South America, cloud can offer an easy on-ramp to expanding both the scale and feature set of your infrastructure without large capex projects. It’s the flexibility of the solution which is driving expansion and improvements in quality and production values.

Watch now!
Speakers

Tim Siglin
Contributing Editor, Streaming Media Magazine
Founding Executive Director, HelpMeStream
Robert Gambino
Director of Solutions,
Harmonic Inc.
Moderator: Eric Schumacher-Rasmussen
Editor, Streaming Media

Video: Mobile and Wireless Layer 2 – Satellite/ATSC30/M-ABR/5G/LTE-B

Wireless internet is here to stay and, as it improves, it opens new opportunities for streaming and broadcasting. With SpaceX delivering between 20 and 40ms latency, we see that even satellite can be relevant for low-latency streaming. Indeed, radio (RF) is the focus of this talk, which discusses how 5G, LTE, 4G, ATSC and satellite fit into delivering streaming media to everyone.

LTE-B, in the title of this talk, refers to LTE Broadcast, also known as eMBMS (Evolved Multimedia Broadcast Multicast Services), delivered over LTE technology. Matt Stagg underlines the importance of LTE-B, saying “Spectrum is finite and you shouldn’t waste it sending unicast”. Using LTE-B, we can achieve a one-to-many push with orchestration on top. Routers do need to support this and UDP transport, but this is a surmountable challenge.

Matt explains that BT ran a trial of LTE-B with the BBC. The major breakthrough was that they could ‘immediately’ deliver the output of an EVS direct to the fans in the stadium. For BT, the problem came with hitting critical mass. Matt makes the point that it’s not just sports; Love Island can get the same viewership. But with no support from Apple, the number of compatible devices isn’t high enough.

“Spectrum is finite and you shouldn’t waste it sending unicast”

Matt Stagg

The discussion then turns to the rest of the panel, which includes Synamedia’s Mark Myslinski and Jack Arky from Verizon Wireless. Matt says that, in general, bandwidth capacity to the edge in the UK is not a big issue since there is usually dark fibre, but hosting content at the edge doesn’t hit the spot on its own because of the RAN; 5G has helped us move beyond that.

Jack, from Verizon, explains that multi-access edge compute is enabled by the low latency of 5G: we need to move as much as is sensible to the edge to keep the delay down. Later in the video, we hear that XR (mixed reality) and AR (augmented reality) are two technologies which will likely depend on cloud computation to achieve the level of graphical accuracy necessary, and will therefore require a low-latency connection.

From Verizon’s perspective, the most important technology being rolled out is actually ATSC 3.0. Much discussed at NAB in 2015, the standard has now stabilised and is in use in South Korea and increasingly in the US. ATSC 3.0, as Mark explains, is a complementary, fully-IP technology that fits alongside 5G. He even talks about how 5G and ATSC could co-exist thanks to the open way the standards were created.

The session ends with a Q&A

Watch now!
Speakers

Mark Myslinski
Broadcast Solutions Manager,
Synamedia
Jack Arky
Senior Engineer, Product Development
Verizon Wireless
Matt Stagg
Director, Mobile Strategy
BT Sport
Dom Robinson
Co-Founder, Director and Creative Firestarter
id3as

Video: Where The Puck Is Going: What’s Next for Esports & Sports Streaming

How is sports streaming changing as the pandemic continues? Esports has the edge on physical sports as it allows people to compete from diverse locations, but both physical sports and esports benefit from bringing people into one place and letting the fans see the players.

This panel from Streaming Media, moderated by Jeff Jacobs, looks at how producers, publishers, streamers and distributors reacted to 2020 and where they’re positioning themselves to be ahead in 2021. The panel opens by looking at the tools and the preferred workflows; there are so many ways to do remote production. Sam Asfahani from OS Studios explained how they had already adopted some remote workflows to keep costs down, but he has been impressed by the number of innovations released which help improve remote production. He explains they have a physical NDI control room where they also use vMix for contribution. The changed workflows during the pandemic have convinced them that the second control room they were planning to build should now be in the cloud.

Aaron Nagler from Cheesehead TV discussed how he’s stopped flying to watch games and instead watches in sync with his co-presenter using LiveX Director. Within a few milliseconds, he is seeing the same footage, so they can both present and comment in real time. Intriguingly, Tyler Champley from Poker Central explains that, for them, remote production hasn’t been needed since the tournaments have been cancelled and they use their studio facilities. Their biggest issue is that their players need to be in the same room to play the game, close to each other and without masks.


The panel discusses what will stick after the pandemic. Sam makes the point that he used to pay $20,000 for a star to stay overnight and be part of the show; the pandemic has made it so that sports stars are happy to be paid $5,000 for two hours on a programme without having to leave their house, and the show saves money too. He feels this will continue to be an option on an ongoing basis, though the panel notes that contributors’ technical capability is limited without anyone else there to help, even for top-dollar talent. Tyler says that his studio has been more in demand during Covid, so his team has become better at tear-downs to accommodate multiple uses. And lastly, the panel makes the point that hybrid programme-making models are going to continue.

After some questions from the audience, the panel comments on future strategies. Sean Gardner from Xilinx talks about the need for, and arrival of, newer codecs such as AV1 and LCEVC, which can help deliver lower bitrates and/or lower latency. Aaron mentions that he’s seen ways of gamifying the streams which he hasn’t used before and which help with monetisation. And Sam leaves us with the thought that game APIs can help create fantastic productions when they’re done well, but he sees an even better future where APIs allow information to be fed back into the game, creating a two-way event between the fans and the game.

Watch now!
Speakers

Moderator: Jeff Jacobs
Executive Vice President & General Manager,
VENN
Aaron Nagler
Co-Founder,
Cheesehead TV
Sam Asfahani
CEO,
OS Studios
Sean Gardner
Snr Manager, Market Development & Strategy, Cloud Video,
Xilinx
Tyler Champley
VP Marketing & Audience Development,
Poker Central