Video: Predicting Viewer Attention in Video for use in Compression

Video compression is a never-ending endeavour with hundreds of possible techniques. Some are not yet in use because they are waiting for computers to catch up or, as in this case, for someone to find the best way to apply new techniques, such as machine learning, to the task.

In this talk from Streaming Tech Sweden 2018, Fritz Barnes from Entecon explains that region of interest compression – where you compress the image more in areas where the viewer won’t be looking – can significantly help reduce bitrate.

Fritz looks at techniques to analyse video and work out where people will be looking. This technique, called ‘saliency detection’, has been made practical by machine learning. He introduces convolutional neural networks, the extensive training material, and the model used to learn from it. Optical flow, a way to encode the motion in the video, also forms part of the approach.
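The saliency models discussed in the talk are learned CNNs. As a hand-crafted point of comparison (not the method from the talk), here is a minimal NumPy sketch of the classic spectral-residual saliency approach (Hou & Zhang, 2007), which estimates where attention is likely to fall in a single frame:

```python
import numpy as np

def spectral_residual_saliency(img):
    """Saliency map via the spectral-residual method (Hou & Zhang, 2007).

    img: 2-D greyscale array. Returns a saliency map of the same shape,
    normalised to [0, 1]. A minimal sketch -- production saliency models
    such as those in the talk are learned CNNs, not this hand-crafted method.
    """
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-9)
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its local average (3x3 box blur).
    kernel = np.ones((3, 3)) / 9.0
    pad = np.pad(log_amp, 1, mode="edge")
    blurred = sum(
        pad[i:i + log_amp.shape[0], j:j + log_amp.shape[1]] * kernel[i, j]
        for i in range(3) for j in range(3)
    )
    residual = log_amp - blurred
    # Back to the spatial domain: keep the residual amplitude, original phase.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)
    return sal

sal = spectral_residual_saliency(np.random.rand(64, 64))
```

A map like this (or a CNN-derived one) can then drive region-of-interest compression by lowering the quantiser in high-saliency blocks and raising it elsewhere.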

The talk finishes by looking at the results of this technique; both the successes and problems.

Watch now!
Free registration required
Streaming Tech Sweden is an annual conference run by Eyevinn Technology in Sweden. Talks are recorded and are available to delegates for several months and are then freely available. Whilst registration is required on the website, it is free to register and to watch this video.

Video: The Evaluation Process of Ultra Low-Latency Solutions

Patrick Debois summarises the world of low-latency players from his perspective of wanting to deploy his own solution.

Low- and ultra low-latency is an emerging market in the sense that there are few standards and getting the solutions working at scale and across all platforms is difficult and is a ‘work in progress’. As such, selecting a player is a compromise and there are many issues at play.

Patrick covers the following aspects:

  • DIY vs ‘as a service’ models
  • Different methods of ingest (replacing RTMP?)
  • Platform support & SDK size
  • Synchronised play
  • Ad support
  • Hangouts
  • Stream protection
  • Redundancy
  • Load testing
  • Streaming costs & pricing models
  • Network compatibility of non-HTTP-based solutions
  • ABR support
  • Debug options and detecting stream failure
  • Quality analytics & monitoring support
  • Support

Watch now!
Speakers

Patrick Debois
CTO & Co-founder,
Zender

Video: Delivering for Large-Scale Events

From event to event it’s no surprise that streaming traffic increases, but this look at the 2018 World Cup shows a very sharp rise that beat many expectations. Joachim Hengge tells us what the World Cup looked like from Akamai’s perspective.

Joachim takes us through the stats for streaming the World Cup, which peaked at 23Tbps of throughput with nearly 10 million concurrent viewers. The bandwidth was significantly higher than for the last World Cup and, looking at the data, we can learn a few more things about the market.
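As a rough back-of-the-envelope check on those headline figures (assuming the peak throughput and peak concurrency coincided, which the talk doesn’t guarantee), the implied average bitrate per viewer works out as:

```python
# Headline numbers from the talk: 23 Tbps peak, ~10 million concurrent viewers.
peak_bps = 23e12   # 23 Tbps of throughput
viewers = 10e6     # ~10 million concurrent streams

avg_bitrate_mbps = peak_bps / viewers / 1e6
print(f"~{avg_bitrate_mbps:.1f} Mbps average per viewer")  # ~2.3 Mbps
```

Around 2.3 Mbps per stream is consistent with a mix of ABR renditions rather than everyone on the top-quality tier.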

After a match-by-match breakdown, we look at a system architecture for one customer who delivered the World Cup, highlighting the importance of stable content ingest, latency and broadcast quality. Encoding and packaging into HLS with 4-second chunks were done on site, with the rest happening within Akamai and being fed to other CDNs. Joachim pulls this together into three key recommendations for anyone looking at streaming large events before delving into some Sweden-specific streaming stats, where over 81% of feeds were played back at the highest quality.

Watch now!
Free registration required

This talk is from Streaming Tech Sweden, an annual conference run by Eyevinn Technology. Videos from the event are available to paid attendees but are released free of charge after several months. As with all videos on The Broadcast Knowledge, this is available free of charge after registering on the site.

Speaker

Joachim Hengge
Senior Product Manager, Media Services,
Akamai

Video: User-Generated HDR is Still Too Hard

HDR and wide colour gamuts are difficult enough in professional settings – how can YouTube get it right with user-generated content?

Steven Robertson from Google explains the difficulties YouTube has faced dealing with HDR, both in its original productions and in user-generated content (UGC). These difficulties stem from the Dolby PQ way of looking at the world, with fixed absolute brightnesses all the way up to 10,000 nits, and from wider colour gamuts (WCG) such as Display P3 and BT.2020.
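The “fixed brightnesses” point comes from PQ being an absolute transfer function: a code value always maps to the same luminance in nits, regardless of the display. A small sketch of the SMPTE ST 2084 (PQ) EOTF, using the constants from the specification, shows the 10,000-nit ceiling:

```python
def pq_eotf(signal):
    """SMPTE ST 2084 (Dolby PQ) EOTF: non-linear signal in [0, 1] ->
    absolute luminance in cd/m^2 (nits), up to the 10,000-nit ceiling
    mentioned in the talk. Constants are from the ST 2084 specification."""
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    e = signal ** (1 / m2)
    return 10000 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)

print(pq_eotf(1.0))    # 10000.0 nits at full code value
print(pq_eotf(0.508))  # ~100 nits -- in the region of typical SDR peak white
```

Because the mapping is absolute, content graded for a 10,000-nit reference can look very different on the far dimmer consumer displays Steven discusses, which is part of why UGC HDR goes wrong so easily.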

Viewing conditions have been a challenge right from the beginning of TV but ever more so now with screens of many different shapes and sizes being available with very varied abilities to show brightness and colour. Steven spends some time discussing the difficulty of finding a display suitable for colour grading and previewing your work on – particularly for individual users who are without a large production budget.

Interestingly, we then see that one of the biggest difficulties is visual perception itself: colours seen immediately after bad colours look much better than they should. HDR can deliver colours that are extremely bright and extremely wrong. Steven shows real examples from YouTube where the brain has been tricked into thinking colour and brightness are correct when they clearly are not.

Whilst it’s long been known that HDR and WCG are inextricably linked with human vision, this is a great insight into tackling this at scale and the research that has gone on to bring this under automated control.

Watch now!
Free registration required

This talk is from Streaming Tech Sweden, an annual conference run by Eyevinn Technology. Videos from the event are available to paid attendees but are released free of charge after several months. As with all videos on The Broadcast Knowledge, this is available free of charge after registering on the site.

Speaker

Steven Robertson
Software Engineer, YouTube Player Infrastructure
Google