Video – Live Streaming with VP9 at Twitch TV

Tarek Amara from Twitch explains their move from a single codec (H.264) to multiple codecs to give viewers an optimal viewing experience.

In this session, Tarek shares findings on VP9’s suitability for live streaming and the technical and industrial challenges such a move involves. He covers:

  • VP9 encoding performance
  • Device and player support
  • Bandwidth savings
  • The role of FPGAs
  • How the transcoding platform needs to change to enable VP9 encoding and delivery at scale

This presentation is from the Video Engineering Summit at Streaming Media West 2018.

Watch now!

Speaker

Tarek Amara
Senior Video Specialist,
Twitch TV/Amazon

Video: ST 2110 over WAN

Andy Rayner from Nevion looks at using SMPTE ST 2110 on a Wide Area Network (WAN).

While using ST 2110 is a much discussed topic in the studio or within a building, there are extra difficulties in putting it between buildings, cities and countries with some saying it shouldn’t even be done. Here, Andy examines how you can do it whilst acknowledging the industry still has some decisions to make.

Topics discussed include:

  • SMPTE ST 2022-7 – dual flows
  • FEC use on ST 2110
  • Flow Trunking
  • Conversions to and from 2110 and 2022-6
  • Light/Mezzanine Compression
  • PTP Trunking and GPS-locked PTP
  • Multiple Timing Domains
  • Discovery & Control between buildings

Watch now!

Speaker

Andy Rayner,
Chief Technologist,
Nevion

Video: SCTE-35 In-band Event Signalling in OTT

Alex Zambelli from Hulu presents SCTE-35 at the Seattle Video Tech Meetup.

Alex looks at what SCTE and SCTE-35 are and introduces ad insertion. With the foundation in place, he then looks through the message structures to show the commands and descriptors possible.
Finishing off with SCTE-35 signalling in MPEG-DASH and HLS, Alex covers the topic admirably for live streaming!
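For a flavour of the HLS signalling Alex discusses: SCTE-35 messages are commonly surfaced in HLS playlists via EXT-X-DATERANGE tags, with the binary splice_info_section hex-encoded in the SCTE35-OUT (and matching SCTE35-IN) attributes. A hypothetical ad-break cue might look like this (the ID, timestamps and payload are invented for illustration; the payload is the hex of the SCTE-35 message, truncated here):

```
#EXT-X-DATERANGE:ID="splice-4001",START-DATE="2018-11-15T20:00:00Z", \
  PLANNED-DURATION=60.0,SCTE35-OUT=0xFC3025...
```

In MPEG-DASH the equivalent role is played by Event elements in an EventStream, carrying the SCTE-35 message in binary or XML form.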

Watch now!

Speaker

Alex Zambelli
Senior Product Manager,
Hulu

Video: Automated Tagging of Image and Video Collections using Face Recognition

Andrew Brown and Ernesto Goto from the University of Oxford discuss real-world examples of using machine learning to detect faces in archives. Working with the British Film Institute (BFI) and BBC News, they show the value of facial recognition and metadata comparisons.

Andrew Brown, given the cast lists of thousands of films, shows how they not only discovered errors and forgotten cast members but also developed a searchable interface to find every instance of an actor.

Ernesto Goto shows the searchable BBC News archive interface he developed, which uses Google Images results for a famous person to find all occurrences of them in over 10,000 hours of video and jump straight to that point in each video.
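Query-by-example search of this kind typically reduces to nearest-neighbour search over face embeddings: every face detected in the archive is turned into a vector, and query faces are matched against them by cosine similarity. A minimal sketch of the matching step (the function name and threshold are illustrative; real systems like Oxford VGG's use CNN face embeddings):

```python
import numpy as np

def cosine_search(query_vecs, archive_vecs, threshold=0.75):
    """Rank archive face embeddings against query embeddings.

    query_vecs:   (q, d) embeddings of the query face crops
    archive_vecs: (n, d) embeddings of faces found in the archive
    Returns indices of archive faces whose best cosine similarity
    to any query face exceeds the threshold, most confident first.
    """
    # L2-normalise so that a dot product equals cosine similarity
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    a = archive_vecs / np.linalg.norm(archive_vecs, axis=1, keepdims=True)
    sims = a @ q.T                 # (n, q) cosine similarities
    best = sims.max(axis=1)        # best match per archive face
    hits = np.where(best >= threshold)[0]
    return hits[np.argsort(-best[hits])]
```

Each matched archive face carries a timecode in practice, which is how the interface can jump straight to the moment in the video where the person appears.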

A great video from the No Time To Wait 3 conference which looked at all aspects of archives for preservation.

Watch now!