Video: Sub-Second Live Streaming: Changing How Online Audiences Experience Live Events

There are two main modern approaches to low-latency live streaming. The first is CMAF, which uses fragmented MP4s to allow frame-by-frame delivery of chunks of data. Because it builds on the same segment-based delivery as HLS, it is becoming a common ‘next step’ for companies already using HLS. Keeping the chunk size down reduces latency, but it remains doubtful whether sub-second streaming is practical in real-world situations.
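
To make the chunk-size point concrete, here is a rough back-of-the-envelope sketch in Python. The durations are assumptions for illustration, not figures from the video: with whole-segment delivery the first bytes of a segment only leave the encoder once the segment is complete, whereas CMAF chunked encoding can release each fragmented-MP4 chunk as soon as it is finished.

    # Illustrative only: assumed durations, not measurements from the talk.
    SEGMENT_DURATION_S = 6.0   # a typical whole-segment length
    CHUNK_DURATION_S = 0.5     # an assumed CMAF chunk length

    # Time until the first media of a new segment can start leaving the encoder.
    first_bytes_segment_based = SEGMENT_DURATION_S   # must wait for the full segment
    first_bytes_cmaf_chunked = CHUNK_DURATION_S      # only wait for the first chunk

    print(f"Whole-segment delivery: first bytes after ~{first_bytes_segment_based:.1f} s")
    print(f"CMAF chunked delivery:  first bytes after ~{first_bytes_cmaf_chunked:.1f} s")
    # Encoder look-ahead, CDN propagation and the player's own buffer all add on
    # top of this, which is why end-to-end sub-second delivery remains doubtful.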

Steve Miller-Jones from Limelight explains the WebRTC solution to this problem. Because WebRTC is pushed from the source to the destination rather than requested in chunks, it too is capable of sub-second latency and seems a better fit. Limelight differentiates itself by offering a scalable WebRTC streaming service with Adaptive Bitrate (ABR). ABR is traditionally not available with WebRTC, and Steve Miller-Jones uses this as an example of where Limelight is helping the technology achieve its true potential.

Comparing and contrasting Limelight’s solution with HLS and CMAF, we can see the benefits of WebRTC and that it is equally capable of supporting features such as encryption and geoblocking.

Ultimately, how much latency matters to you and how much scalability you require may be the biggest factors in deciding which approach to take for sub-second live streaming.

Watch now!

Speakers

Steve Miller-Jones
VP Product Strategy,
Limelight Networks

Video: Using CMAF to Cut Costs, Simplify Workflows & Reduce Latency

There are two ways to stream video online: either pushing it from the server to the device, as WebRTC, MPEG transport streams and similar technologies do, or allowing the receiving device to request chunks of the stream, which is how the majority of internet streaming is done using HLS and similar formats.

Chunk-based streaming is generally seen as the more scalable of the two methods, but it suffers extra latency because players buffer several chunks, each of which can represent between 1 and, typically, 10 seconds of video.
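
As a rough worked example (the buffer depth of three chunks is an assumption based on traditional player behaviour; the chunk durations are the ones quoted above), the buffer alone puts a floor under the latency:

    # Back-of-the-envelope sketch; the buffer depth is an assumption.
    BUFFERED_CHUNKS = 3                      # players commonly buffer a few chunks

    for chunk_duration_s in (1, 2, 6, 10):   # between 1 and, typically, 10 seconds
        latency_floor_s = BUFFERED_CHUNKS * chunk_duration_s
        print(f"{chunk_duration_s:>2} s chunks -> at least {latency_floor_s:>2} s of buffering latency")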

CMAF is one technology here to change that by allowing players to buffer less video. How does it achieve this? And, perhaps more importantly, can it really cut costs? Iraj Sodagar from NexTreams explains how in this talk from Streaming Media West 2018.

Iraj covers:

  • A brief history of CMAF (Common Media Application Format)
  • The core technologies (ISO BMFF, codecs, captions, etc.)
  • Media Data Object (Chunks, Fragments, Segments)
  • Different ways of video delivery
  • Switching Sets (for ABR)
  • Content Protection
  • CTA WAVE project
  • WAVE content specifications
  • Live Linear Content with WAVE & CMAF
  • Low-latency CMAF usage
  • HTTP 1.1 Chunked Transfer Encoding (see the sketch after this list)
  • MPEG-DASH
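
To give a flavour of those last few bullets, here is a minimal Python sketch of a client reading a CMAF segment served with HTTP/1.1 chunked transfer encoding, so data can be handled before the whole segment has been written. This is an illustration rather than code from the talk, and the URL is hypothetical.

    import requests

    SEGMENT_URL = "https://example.com/live/channel1/seg_1234.m4s"  # hypothetical URL

    # stream=True keeps the connection open and hands us data as the server
    # flushes each HTTP chunk, rather than waiting for the complete body.
    with requests.get(SEGMENT_URL, stream=True, timeout=10) as resp:
        resp.raise_for_status()
        received = 0
        for data in resp.iter_content(chunk_size=None):
            received += len(data)
            # A real player would append `data` to its source buffer and start
            # decoding as soon as a complete CMAF chunk (moof + mdat) is present.
            print(f"received {received} bytes so far")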

Watch now!

Speaker

Iraj Sodagar
Independent Consultant
Multimedia System Architect, NexTreams

Video: Live Streaming with VP9 at Twitch TV

Tarek Amara from Twitch explains the platform’s move from a single codec (H.264) to multiple codecs in order to provide viewers with an optimal viewing experience.

In this session, Tarek shares findings on VP9’s suitability for live streaming and the technical and industrial challenges such a move involves, covering:

  • VP9 encoding performance
  • Device and player support
  • Bandwidth savings
  • The role of FPGAs
  • An overview of how the transcoding platform needs to change to enable VP9 encoding and delivery at scale

This presentation is from the Video Engineering Summit at Streaming Media West 2018.

Watch now!

Speaker

Tarek Amara
Senior Video Specialist,
Twitch TV/Amazon

Video: Best Practices for Advanced Software Encoder Evaluations

Streaming Media East brings together Beamr, Netflix, BAMTECH Media and SSIMWAVE to discuss the best ways to evaluate software encoders, and we see there is much overlap with hardware encoder evaluation, too.

The panel gets into detail covering:

  • Test Design
  • Choosing source sequences
  • Rate Control Modes
  • Bit Rate or Quality Target Levels
  • Offline (VOD) vs. Live (Linear)
  • Discrete vs. Multi-resolution/Bitrate
  • Subjective vs. objective measurements
  • Encoding Efficiency vs. Performance
  • Video vs. Still frames
  • PSNR Tuning (a simple PSNR calculation is sketched after this list)
  • Evaluation at Encode Resolution vs. Display Resolution
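
As a taste of the objective side of that comparison, here is a minimal PSNR calculation in Python. It is a toy sketch with random data rather than the panel's tooling, and perceptual metrics go well beyond it, but PSNR is one of the simplest full-reference measures.

    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB between two same-shaped 8-bit frames."""
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(peak ** 2 / mse)

    # Toy usage: random frames stand in for a source frame and its decoded encode.
    ref = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
    noise = np.random.randint(-3, 4, ref.shape)
    dist = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, dist):.2f} dB")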

Watch now for this comprehensive ‘How To’!

Speakers

Dr. Anne Aaron
Director of Video Algorithms,
Netflix
Scott Labrozzi
VP Video Processing, Core Media Video Processing,
BAMTECH Media
Dr. Zhou Wang
Chief Science Officer,
SSIMWAVE
Moderator: Tom Vaughan
VP Strategy,
Beamr