Everyone has a go-to program or three they use for problem solving. Here is a review of a whole swathe of diagnostic programs out there for live streaming.
There are known favourites like Wireshark, FFPlay and MediaInfo, free applications such as Eyevinn Technology’s Segment Analyser, and the open-source YUView. It also covers paid programs like Elecard’s Stream Analyser and Telestream Switch.
This talk by David Hassoun, CEO of RealEyes Media, is well worth a look because there is bound to be something there you didn’t know about – and who knows how useful that will be to you!
Server-Side Ad Insertion (SSAI) is the best defence against ad-blockers, but switching in an ad at source can be tricky, particularly in low-latency streams. This talk from the OTT Leadership Summit at Streaming Media East brings together leaders in the field to explain where they’re up to in delivering this technology and the benefits they see.
Magnus Svensson tells us about the instrumental role Eyevinn Technology, the consultancy which runs the technical conference Streaming Tech Sweden, has played in Sweden in creating an open standard for all the broadcasters to work to, agreeing how to track SSAI so that the correct payments can be made. Magnus also talks about aligning SCTE insertion with the MPEG structure and the importance of correct preparation of the source video.
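To illustrate the alignment problem Magnus mentions: a splice point signalled by SCTE can fall anywhere in time, but the ad can only be switched in on a segment (IDR) boundary, so the splice time has to be snapped to one. A minimal sketch, assuming fixed two-second segments (real packagers work from the actual GOP structure):

```python
# Sketch: snapping a signalled splice time to the nearest segment boundary.
# The two-second segment duration is an assumption, not mandated anywhere.

SEGMENT_DURATION = 2.0  # seconds


def snap_to_segment(splice_time: float, segment_duration: float = SEGMENT_DURATION) -> float:
    """Return the start time of the segment boundary nearest the splice point."""
    return round(splice_time / segment_duration) * segment_duration


# A splice signalled at 63.3s lands on the 64s boundary with 2s segments.
print(snap_to_segment(63.3))  # 64.0
```

The gap between the signalled time and the snapped time is one reason source preparation matters: if the encoder places IDR frames at the intended splice points, no snapping is needed at all.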
Tony Brown from Newsy talks about the centralised nature of SSAI making management easier and gives an overview of decisioning, which brings together buyers and sellers of ads. Tony also discusses other analytics such as adjacency and targeting.
Jason Justman of Sinclair Broadcast Group explains SCTE insertion and talks about the technical difficulties in reacting to live changes in programming.
Geir Magnusson, Jr. from fuboTV covers the difficulty of preparing ads quickly enough for thousands or millions of streams to receive customised SSAI ads at the same time, and discusses his strategy of pre-fetching ads from the ad server to prepare them ahead of time. Geir also highlights a misunderstanding that can exist: streaming provides the same video and programme experience as traditional broadcast, but not all ad buyers understand how much more targeting is possible – even with SSAI.
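The pre-fetch strategy Geir describes amounts to warming a small cache before the break, so that millions of concurrent viewers do not all hit the ad server at splice time. A hypothetical sketch (the class and ad names are illustrative, not fuboTV’s implementation):

```python
# Sketch: warming an ad cache ahead of a signalled break so the splice-time
# lookup is a cache hit. All names here are illustrative.

class AdCache:
    def __init__(self, fetch):
        self._fetch = fetch   # callable that pulls a creative from the ad server
        self._cache = {}

    def prefetch(self, ad_ids):
        """Called when a break is signalled upstream, before the splice point."""
        for ad_id in ad_ids:
            if ad_id not in self._cache:
                self._cache[ad_id] = self._fetch(ad_id)

    def get(self, ad_id):
        """At splice time this is a cache hit; falls back to a live fetch."""
        return self._cache.get(ad_id) or self._fetch(ad_id)


# Usage: the 'ad server' here is just a stand-in function.
cache = AdCache(fetch=lambda ad_id: f"creative-for-{ad_id}")
cache.prefetch(["ad-123", "ad-456"])
print(cache.get("ad-123"))  # creative-for-ad-123
```

The hard part in practice, as the talk makes clear, is that each viewer may need a different, personalised creative, which multiplies the work this cache has to absorb.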
AV1 and VVC are both new codecs on the scene. Codecs touch our lives every day, both at work and at home; they are the only way anyone receives audio and video online or on television. Altogether, then, they’re pretty important, and finding better ones generates a lot of opinion.
So what are AV1 and VVC? VVC is one of the newest codecs on the block and is undergoing standardisation in MPEG. VVC builds on the technologies standardised by HEVC but adds many new coding tools. The standard is likely to enter its draft phase before the end of 2019, resulting in it being officially standardised around a year later. For more info on VVC, check out Bitmovin’s VVC intro from Demuxed.
AV1 is a new but increasingly known codec, famous for being royalty free and backed by Netflix, Apple and many other big hyperscale players. There have been reports that, though there is no royalty levied on it, patent holders have still approached big manufacturers to discuss financial reimbursement, so its ‘free’ status is a matter of debate. Whilst there is a patent defence programme, it is not known if it’s sufficient to insulate larger players. Much further along than VVC, AV1 has already had a code freeze, and companies such as Bitmovin have been working hard to reduce the encode times – widely known to be very long – and create live services.
Here, Christian Feldmann from Bitmovin gives us the latest status on AV1 and VVC. Christian discusses AV1’s tools before moving on to VVC’s, pointing out the similarities that exist. Whilst AV1 is already supported in well-known browsers, VVC is only at the beginning.
There’s a look at the licensing status of each codec, followed by an introduction to EVC – Essential Video Coding. This has a royalty-free baseline profile, so is of interest to many. Christian also shares results from a Technicolor experiment.
There are two main modern approaches to low-latency live streaming. One is CMAF, which uses fragmented MP4s to allow frame-by-frame delivery of chunks of data. Similar to HLS, this is becoming a common ‘next step’ for companies already using HLS. Keeping the chunk size down reduces latency, but it remains doubtful whether sub-second streaming is practical in real-world situations.
Steve Miller Jones from Limelight explains the WebRTC solution to this problem. As a protocol that streams directly from source to destination, WebRTC is capable of sub-second latency and seems a better fit. Limelight differentiate themselves by offering a scalable WebRTC streaming service with Adaptive Bitrate (ABR). ABR is traditionally not available with WebRTC, and Steve Miller Jones uses this as an example of where Limelight is helping this technology achieve its true potential.
Comparing and contrasting Limelight’s solution with HLS and CMAF, we can see the benefits of WebRTC and that it’s equally capable of supporting features like encryption, geoblocking and the like.
Ultimately, the importance of latency and the scalability you require may be the biggest factor in deciding which way to go with your sub-second live streaming.
Streaming on the net relies on delivering video at a bandwidth you can handle. Called ‘Adaptive Bitrate’ or ABR, it’s hardly possible to think of streaming without it. While the idea might seem simple initially – just send several versions of your video – it quickly gets nuanced.
Streaming experts Streamroot take us through how ABR works at Streaming Media East from 2016. While the talk is a few years old, the facts are still the same so this remains a useful talk which not only introduces the topic but goes into detail on how to implement ABR.
The most common streaming format is HLS which relies on the player downloading the video in sections – small files – each representing around 3 to 10 seconds of video. For HLS and similar technologies, the idea is simply to allow the player, when it’s time to download the next part of the video, to choose from a selection of files each with the same video content but each at a different bitrate.
Allowing a player to choose which chunk it downloads means it can adapt to changing network conditions, but it does imply that each file has to contain exactly the same frames of video, else there would be a jump when the next file is played. So we have met our first complication. Furthermore, each encoded stream needs to be segmented in the same way, and in MPEG, where you can only cut files on I-frame boundaries, this means the encoders need to synchronise their GOP structure – giving us our second complication.
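The selection step itself can be sketched simply: given the measured throughput, the player picks the highest rendition that fits, with some headroom so a small dip doesn’t immediately stall playback. A minimal illustration (the bitrate ladder and the 0.8 safety factor are assumptions for the sketch, not Streamroot’s algorithm):

```python
# Sketch: pick the highest-bitrate rendition that fits the measured bandwidth,
# leaving headroom so a small throughput dip doesn't cause a rebuffer.

RENDITIONS_KBPS = [400, 800, 1600, 3200, 6000]  # illustrative ladder
SAFETY_FACTOR = 0.8                             # assumed headroom; tune per player


def choose_rendition(measured_kbps: float) -> int:
    budget = measured_kbps * SAFETY_FACTOR
    fitting = [r for r in RENDITIONS_KBPS if r <= budget]
    # Below the bottom of the ladder, fall back to the lowest rendition.
    return max(fitting) if fitting else min(RENDITIONS_KBPS)


print(choose_rendition(2500))  # 1600: budget is 2000, so 1600 is the best fit
print(choose_rendition(300))   # 400: below the ladder, take the lowest rung
```

Real players layer much more on top of this – buffer occupancy, throughput smoothing, switch-frequency limits – which is exactly the nuance the talk goes into.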
These difficulties, and many more, along with Streamroot’s solutions, are presented by Erica Beavers and Nikolay Rodionov, including experiments and proofs of concept they have carried out to demonstrate their efficacy.
Date: Friday, March 29th 2019
Time: 11am PT / 2pm ET / 18:00 GMT
NAB is coming around again and the betting has started on what the show will bring. Whilst we can look to last year for hints, here editors from Streaming Media come together to discuss the current trends in the industry and how they will be represented at NAB.
Some highlights of the conversation will be:
What HEVC solutions people are showing – the ongoing codec wars are captivating to most people as AV1 tries – and gradually succeeds – to break its ‘too slow’ label, whilst HEVC continues to grow acceptance with its ‘ready to deploy’ label despite the fees.
UHD production and delivery – We know that production houses prefer to capture at higher resolutions as it increases the value of their content and gives them more options in editing. But how far is UHD developing further down the chain? Is it just for live sports?
Live Streaming – SRT is bound to keep making waves at NAB as Haivision plans its biggest event yet, discussing the many ways it’s being used. SRT delivers encrypted, reliable streams – while there are competitors, SRT continues to grow apace.
NDI – This compressed but ultra-low-latency codec continues to impress for live production workflows – particularly live events – though it’s not clear how much, if at all, it will make its way into top-tier broadcasters.
Much more will be on the cards, so register now for this session on Friday March 29th.
There are two ways to stream video online: either pushing from the server to the device, as with WebRTC, MPEG transport streams and similar technologies, or allowing the receiving device to request chunks of the stream, which is how the majority of internet streaming is done – using HLS and similar formats.
Chunk-based streaming is generally seen as the more scalable of these two methods but suffers extra latency due to buffering several chunks, each of which can represent between 1 and, typically, 10 seconds of video.
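The latency cost of that buffering is easy to estimate: roughly the number of buffered chunks times the chunk duration, which is why shrinking the chunks helps so much. A back-of-the-envelope sketch (the three-chunk buffer is an assumed, common player default; encode and CDN delays are excluded):

```python
# Rough estimate of the buffering component of live-stream latency.
# Assumes the player holds a fixed number of chunks before playback starts.

def buffering_latency(chunk_seconds: float, buffered_chunks: int = 3) -> float:
    """Seconds of latency contributed by the player's chunk buffer."""
    return chunk_seconds * buffered_chunks


print(buffering_latency(6.0))   # 18.0s: classic HLS with 6-second segments
print(buffering_latency(0.5))   # 1.5s: CMAF-style chunks shrink the same buffer
```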
CMAF is one technology here to change that by allowing players to buffer less video. How does it achieve this? And, perhaps more importantly, can it really cut costs? Iraj Sodagar from NexTreams is here to explain in this talk from Streaming Media West 2018.
A brief history of CMAF (Common Media Application Format)
The core technologies (ISO BMFF, Codecs, captions etc.)
Using microservices is a way of architecting your software platform to be nimble and simple, and it is just as applicable to on-premise platforms as to the cloud. As scaling is important for OTT providers, it’s not surprising that much work is being done in the OTT sector to utilise microservice architectures.
Even companies that are not yet actively operating on a microservices architecture are looking for vendors who at least have a strategy to cater to it in the future. This session will examine the core benefits (including redundancy, DevOps, scalability, and self-healing), the different approaches (including containerisation and orchestration via Docker, Kubernetes, and Mesos, as well as native microservices models like Erlang), and the complexities of migrating a generic architecture to a microservices architecture.
This panel covers:
Why is OTT so suited to microservices?
How microservices enable companies to be flexible to changing customer demands
Storing, preparing, and delivering media content securely involves leveraging systems that can scale and ensure top-of-the-line security. This webinar explores doing that by implementing workflows in AWS’s highly available, scalable, and secure cloud services, such as Amazon S3 for storage, Amazon Elastic Transcoder for transcoding, and Amazon CloudFront for delivery.
This session came from the Discovery Track at Streaming Media East and shows how you can build a media stack on AWS and use JW Player to deliver protected HTTP Live Streams (HLS) to various devices, including iOS, Android, and Windows desktops.
Amazon Web Services WAF
Nobody wants to find out about a big play or major news event on Twitter before they see it in their video stream, so reducing latency is crucial for OTT services’ success. Likewise, ultra-low latency is crucial for interactive streaming applications. Depending on your use case, a few seconds of latency might be fine, or you might need to try to hit that sub-second target.
Learn which technologies and solutions are best for your business, and make sure your viewers get their video on time, every time. In this webinar, you’ll learn the following:
Why it’s important to evaluate and improve latency end-to-end, including software and services, encoder, platform, and player
How to decide which technology and solution is best for your use case (e.g. CMAF, HLS/DASH, WebRTC, Websocket)
How chunked CMAF offers a standards-based approach that allows latency to be decoupled from segment duration
How chunked CMAF leverages existing CDN HTTP capacity to provide low-latency solutions at high scale
How WebRTC can be used to deliver live video with sub-second latency at scale, and provide rich, interactive experiences for live streaming applications
How a single misconfigured component can undo any other effort to achieve low latency
How integrated solutions create new business opportunities for low latency interactive use cases
How to achieve low latency across all platforms and devices
Moderator: Eric Schumacher-Rasmussen