Video: Meeting the Multi-Platform, Multi-Device Challenge

OTT has changed over the last decade, going from a technical marvel to a massive market in its own right with significant reach and technical complexity. There are now many ways to ‘go to market’ and get your content in front of your viewers. Managing the strategy, the preparation & delivery of content, as well as the player ecosystem, is a big challenge, and one under discussion by this Streaming Media panel of experts: Ian Nock from Fairmile West, Remi Beaudouin from Ateme, Pluto TV’s Tom Schultz and Jeff Allen from ShortsTV.

Introduced by moderator Ben Schwarz, Jeff launches straight into a much-needed list of definitions. Video on demand, VOD, is well understood, but its subgenres are simultaneously similar and important to differentiate. AVOD means advertising-funded, SVOD is subscription-funded and TVOD, not mentioned in the video, is transactional VOD, otherwise called Pay TV. As Jeff shows next, if you have an SVOD channel on someone else’s platform such as Amazon Prime, your strategy may be different, so calling this out separately is useful. A new model has appeared called FAST, which stands for ‘Free Ad-Supported TV’: a linear service that is streamed with dynamic ad insertion. To be clear, this is not the same as AVOD, since AVOD implies choosing each and every show you want to watch, whereas FAST simulates the feel of a traditional linear TV channel. Lastly, Jeff calls out the usefulness and uniqueness of the social platforms, which are rarely a major source of income for larger companies but can form an important part of curating a following and leading viewers to your services.


Jeff finishes up by explaining some of the differences in strategy for launching in these different ways. For instance, for a traditional linear channel you would want to make sure you have a large amount of new material, but for an ad-supported channel on another platform you may be much more likely to hold back content. FAST channels, by contrast, are typically more experimental and niche-branded. Jeff looks at real examples from the History Channel, MTV and AMC before walking through the thinking for his own fictional service.

Next up is Ian Nock, Chair of the Ultra HD Forum’s interoperability working group, looking at how to launch a service with next-generation features such as HDR, UHD or high frame rates. He outlines the importance of identifying your customers because, by doing that, you can understand the likely device population in your market, their average network performance and the prevalence of software versions. These are all big factors in understanding how you might be able to deliver your content and the technologies you can choose from to do so. For UHD, codec choice is an important part of delivery, as is the display format such as HDR10, HDR10+ etc. Ian also talks about needing a ‘content factory’ to seamlessly transcode assets into and out of next-generation formats, remembering that for each UHD/HDR viewer you’re still likely to have ten who need SDR. Ian finishes off by discussing the delivery of higher frame rates and the importance of next-generation audio.
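
To make that ‘content factory’ point concrete, here is a minimal sketch of the kind of job it has to automate. This is illustrative Python rather than anything from the talk: the file names are hypothetical and it assumes an ffmpeg build with the zscale and tonemap filters, using a commonly cited filter chain to derive a BT.709 SDR rendition from an HDR10 master alongside the UHD output.

    import subprocess

    # Hypothetical asset: a single HDR10 (PQ/BT.2020) mezzanine feeds every rendition.
    HDR10_MASTER = "master_uhd_hdr10.mov"

    # Commonly used ffmpeg/zscale chain to tone-map PQ/BT.2020 down to BT.709 SDR.
    SDR_TONEMAP = (
        "zscale=t=linear:npl=100,format=gbrpf32le,"
        "zscale=p=bt709,tonemap=hable:desat=0,"
        "zscale=t=bt709:m=bt709:r=tv,format=yuv420p,"
        "scale=1920:1080"
    )

    def transcode(source, output, video_codec, video_filter=None):
        """Run one rendition; a real content factory would queue and monitor these jobs."""
        cmd = ["ffmpeg", "-y", "-i", source]
        if video_filter:
            cmd += ["-vf", video_filter]
        cmd += ["-c:v", video_codec, "-c:a", "aac", output]
        subprocess.run(cmd, check=True)

    # UHD/HDR rendition for the minority of next-generation-capable devices
    # (HDR metadata signalling is omitted here for brevity).
    transcode(HDR10_MASTER, "uhd_hdr10.mp4", "libx265")

    # SDR/HD rendition for the roughly ten-times-larger SDR audience.
    transcode(HDR10_MASTER, "hd_sdr.mp4", "libx264", SDR_TONEMAP)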

Wrapping up the video, Ateme’s Remi raises discussion points on the continuing need for balance between active and passive TV, the lack of customisation of TV services, increasing sensitivities on the part of both customers and streaming providers around sharing analytics, and the need to find a way to make streaming more environmentally friendly. Lastly, Tom talks about how Pluto TV is a service very much based on data: though privacy is upheld as very important, decisions are very quantitative. He’s seen usage patterns change over the past year, for instance the move from mobiles to second screens (i.e. tablets). Delivering DRM to many different platforms is a challenge, but he’s focused on ensuring there is zero friction for customers. Since it’s an AVOD service, it’s vitally important to use the analytics to identify problems, to ensure channel changes are fast and to have end-to-end playback traceability.

Watch now!
Speakers

Tom Schultz
Director of Engineering – Native Apps,
Pluto TV
Ian Nock
Founder & Principal Consultant,
Fairmile West
Jeff Allen
President,
ShortsTV
Remi Beaudouin
Chief Strategy Officer,
ATEME
Moderator: Ben Schwarz
CTO,
innovation Consulting

Video: Multiple Codec Live Streaming At Twitch

Twitch is constantly searching for better and lower-cost ways of streaming, and its move to include VP9 was one of the most high-profile examples. In this talk, a team of Twitch engineers examine the reasons for this and other moves.

Tarek Amara first takes to the stage to introduce Twitch and its scale before looking at the codecs available and the fragmentation of support, as well as the drivers to improve the video delivered to viewers in terms of frame rate and resolution in addition to quality. The discussion turns to the reasons to implement VP9, and we see that if HEVC had been chosen instead, less than 3% of viewers would have been able to receive it.

Nagendra Babu explains the basic architecture employed at Twitch before going on to explain the challenges they met in testing and developing the backend and app. He also talks about the difficulty of running multiple transcodes in the cloud. FPGAs are an important tool for Twitch, and Nagendra discusses how they deal with programming them.

The last speaker is Nikhil, who talks about delivering VP9 in fragmented MP4 (fMP4) rather than transport stream, and then outlines the pros and cons of fMP4 before handing the floor to the audience.

Watch now!
Speakers

Tarek Amara
Principal Video Specialist,
Twitch
Nikhil Purushe
Senior Software Engineer,
Twitch
Nagendra Babu
Senior Software Engineer,
Twitch

Video: Deploying CMAF In 2019

It’s all very well saying “let’s implement CMAF”, but what’s been implemented so far and what can you expect in the real world, away from hype and promises? RealEyes took the podium at the Video Engineering Summit to explain.

CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability and built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. CMAF stands for the Common Media Application Format, created to allow both HLS and DASH to be implemented on one common standard. But the push to reduce latency further and further has meant CMAF is better known for its low-latency form, which can be used to deliver streams with five to ten times lower latencies.

John Gainfort tackles explaining CMAF, highlighting all the non-latency-related features before turning to its low-latency form. We look at what it is (a media format) and where it came from (ISO BMFF) before diving into the current possibilities and the ‘to do’ list, DRM among them.

Before the Q&A, John then moves on to how CMAF is implemented to deliver low-latency streams: what to expect in terms of latency and the future items which, when achieved, will deliver the full low-latency experience.

Watch now!

Speaker

John Gainfort
Development Manager,
RealEyes

Video: Making Live Streaming More ‘Live’ with LL-CMAF

Squeezing streaming latency down to just a few seconds is possible with CMAF. Bitmovin guides us through what’s possible now and what’s yet to come.

CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability and built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. But the push to reduce latency further and further has resulted in CMAF, which can be used to deliver streams with five to ten times lower latencies.

Paul MacDougall is a Solutions Architect with Bitmovin, so he is well placed to explain the application of CMAF. Starting with a look at what we mean by low latency, he shows that it’s still quite possible to find HLS latencies of up to a minute, though more common latencies now are closer to 30 seconds. But 5 seconds is the golden latency which matches many broadcast mechanisms, including digital terrestrial, so it’s no surprise that this is where low-latency CMAF is aimed.

CMAF itself is simply a format which unites HLS and DASH under one standard. It doesn’t, in and of itself, mean your stream will be low latency. In fact, CMAF was born out of MPEG’s MP4 standard, officially called ISO BMFF. But you can use CMAF in a low-latency mode, which is what this talk focusses on.

Paul looks at what makes up the latency of a typical feed, discussing encoding times, playback latency and the other key contributors. With this groundwork laid, it’s time to look at the way CMAF is chunked and formatted, showing that the smaller chunk sizes allow the encoder and player to be more flexible, reducing several types of latency down to only a few seconds.
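
As a rough back-of-the-envelope illustration of why the chunk size matters, the figures below are assumed for the sake of the arithmetic rather than taken from the talk; only the buffering term changes, but it dominates the end-to-end number.

    # Assumed, illustrative figures (not from the talk).
    segment_duration = 6.0     # seconds per segment in a traditional HLS/DASH ladder
    segments_buffered = 3      # a player typically holds ~3 full segments before playing
    chunk_duration = 0.5       # seconds per CMAF chunk in low-latency mode
    chunks_buffered = 3        # a low-latency player only needs a few chunks in hand
    encode_and_package = 1.0   # assumed encoder + packager delay
    cdn_and_network = 1.0      # assumed CDN/network delay

    classic = encode_and_package + cdn_and_network + segments_buffered * segment_duration
    ll_cmaf = encode_and_package + cdn_and_network + chunks_buffered * chunk_duration

    print(f"traditional segmented delivery: ~{classic:.0f} s behind live")  # ~20 s
    print(f"low-latency CMAF (chunked):     ~{ll_cmaf:.1f} s behind live")  # ~3.5 s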

In order to take full advantage of CMAF, the player needs to understand it, and Paul explains these adaptations before moving on to the limitations and challenges of using CMAF today. One important change, for instance, concerns adaptive bitrate (ABR) switching. Segment-based streaming players (i.e. HLS) have always timed the download of each segment to get a feel for whether bandwidth was plentiful (the download was quicker than the time taken to play the segment) or constrained (the segment arrived slower than real time). Based on this, the player could choose to increase or decrease the bandwidth of the stream it was accessing which, in HLS, means requesting segments from a different playlist. With low-latency CMAF, because the small chunks are pushed using real-time transfer techniques such as HTTP/1.1 chunked transfer, they all arrive paced at roughly the encoder’s real-time rate rather than at the link’s full speed. This makes it very hard to make ABR work for LL-CMAF, though there are approaches being tested and trialled which aren’t mentioned in the talk.
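
A minimal sketch of that measurement problem, in illustrative Python rather than any real player code, shows why the classic throughput estimate stops reflecting true capacity once chunks are paced at the encoder’s real-time rate:

    def estimated_throughput_bps(segment_size_bytes, download_time_s):
        """Classic ABR heuristic: bits received divided by the time taken to receive them."""
        return segment_size_bytes * 8 / download_time_s

    # Traditional HLS/DASH: a whole 6 s, 3 Mbit/s segment arrives as fast as the link allows.
    segment_bits = 3_000_000 * 6                        # ~18 Mbit of media
    link_capacity_bps = 20_000_000                      # assumed 20 Mbit/s of real capacity
    download_time = segment_bits / link_capacity_bps    # ~0.9 s to fetch a 6 s segment
    print(estimated_throughput_bps(segment_bits / 8, download_time))  # ~20 Mbit/s: capacity is visible

    # LL-CMAF over HTTP chunked transfer: chunks are pushed at the encoder's real-time pace,
    # so the same 6 s of media takes ~6 s to arrive however fast the link really is.
    print(estimated_throughput_bps(segment_bits / 8, 6.0))            # ~3 Mbit/s: capacity is hidden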

Watch now!

Speakers

Paul MacDougall Paul MacDougall
Solutions Architect,
Bitmovin