Delivering great-quality live video without breaking the bank is difficult. This talk looks at the different ways companies are dealing with this challenge.
NGCodec’s founder, Oliver Gunasekara, starts by quantifying the millions of dollars a single company can spend each year just on delivering its video, then introduces the difficulties of CPU encoding compared with dedicated chips (ASICs) and looks at how FPGAs fit in between the two. Cloud-based FPGAs are available on AWS, Baidu, Alibaba and others.
After covering Twitch’s move to VP9 on FPGA, the talk finishes by looking at on-premise deployment, where Oliver compares the total cost of ownership of CPU-based servers with that of Xilinx FPGAs.
How can we make video more appealing to humans? We’ve evolved to live a certain way and this has defined – and will continue to define – our video technologies. Mux founder Jon Dahl talks to us here about the ways in which human physiology drives viewing habits.
Vertical vs. horizontal video, angular resolution and how the typical viewing distances of computers, TVs and other devices affect what resolution we can actually perceive are all discussed. Jon then moves on to frequencies, in both audio and video, where frame rates and flicker are important and where physics comes into play alongside biology.
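To make the angular-resolution argument concrete, here is a minimal sketch (not from the talk) estimating how many pixels per degree a display delivers at a given viewing distance; the screen sizes, distances and the rough 60 pixels-per-degree acuity limit are illustrative assumptions.

```python
import math

def pixels_per_degree(screen_width_m, horizontal_pixels, viewing_distance_m):
    """How many of the display's pixels fit into one degree of visual angle."""
    pixel_pitch = screen_width_m / horizontal_pixels  # width of one pixel in metres
    degrees_per_pixel = math.degrees(2 * math.atan(pixel_pitch / (2 * viewing_distance_m)))
    return 1 / degrees_per_pixel

# A ~55" (1.2 m wide) UHD TV viewed from 2.5 m vs. a 0.35 m-wide laptop screen at 0.5 m.
print(round(pixels_per_degree(1.2, 3840, 2.5)))   # ~140 ppd: beyond the ~60 ppd the eye resolves
print(round(pixels_per_degree(0.35, 1920, 0.5)))  # ~48 ppd: extra resolution would still be visible
```

With these assumed numbers, the TV’s extra UHD pixels are largely imperceptible at that distance, while the close laptop screen could still benefit from more – the viewing-distance effect discussed above.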
Even for the experienced, this talk is bound to bring something new and is a great tour of the fundamentals of the visual perception that our industry relies on and strives to please day in, day out.
This talk was given at Streaming Tech Sweden, an annual conference from Eyevinn Technology. Streamed on their own video platform, talks are initially available exclusively to conference attendees but are released free-to-view during the subsequent year. Free registration is required to watch the videos.
Low-latency streaming is always a compromise, but what can be done to keep QoE high?
This on-demand webinar looks at CMAF and presents some real-world data on this low-latency technique. It starts by explaining that CMAF is a low-latency streaming technology similar to HLS and other streaming protocols in which the video is delivered as small files. Olivier and Alain from Harmonic explain how this is done, look at some of the trade-offs and compromises required, and introduce techniques to keep QoE high. They also look at deployment in the cloud vs. on-premise.
Pieter-Jan Speelmans talks about playback trade-offs and optimisations within the player. CMAF allows the buffer to be reduced: on a poor network your buffer may end up similar to ‘normal’, but on good networks it can be brought down significantly. He also talks about how ABR switching is affected by GOP length, even with CMAF.
Viaccess-Orca explains how DRM works with CMAF and looks at some of the challenges, including licence acquisition time and the risk of overloading licence servers at the start of events. Akamai’s Will Law explains some benefits of CMAF, the near-real-time delivery that HTTP/1.1 chunked transfer enables, and how downloading chunks at full speed leads to problems when the same broadband link is used by several clients.
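To make the chunked-transfer point concrete, here is a minimal, hypothetical sketch (not from the webinar) of a client reading a low-latency CMAF segment over HTTP/1.1 chunked transfer using Python’s requests library, so decoding can begin before the whole segment has been written; the URL and the feed_decoder stand-in are placeholders.

```python
import requests  # pip install requests

# Hypothetical URL of a CMAF segment that the packager is still writing.
SEGMENT_URL = "https://example.com/live/video_1080p/segment_1234.m4s"

def feed_decoder(data: bytes):
    print(f"received {len(data)} bytes")  # stand-in for the player's demux/decode pipeline

def read_segment_as_it_arrives(url):
    """Hand each HTTP chunk to the decoder as soon as it arrives,
    rather than waiting for the complete segment file."""
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=None):  # None = yield data as it is received
            feed_decoder(chunk)

read_segment_as_it_arrives(SEGMENT_URL)
```

Note that each chunk arrives at full link speed, which is exactly the situation Will Law warns about when several clients share one broadband connection.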
There are lots of good talks on CMAF, but this is one of the few which discusses CMAF not as theory, but as it is deployable today.
Netflix has famously moved into original content, but less well known are its innovations behind the scenes in production workflows.
Eric Reinecke looks at the challenges of moving media and of correctly picking exactly the right media to move. He looks at the different ways of exchanging editorial data – the venerable EDL, Avid’s more recent AAF and Final Cut’s XML – discussing the pros and cons of each.
The talk then moves on to OpenTimelineIO, an API and interchange format for editorial cut information which was designed to help departments in animation studios work together. The project is hosted by Pixar, and companies like Netflix are finding uses for the API outside of animation. Eric shows demos of how he’s using it within Netflix and ends with a call to get involved!
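To give a flavour of the API, here is a minimal, hypothetical sketch (not from the talk) using OpenTimelineIO’s Python bindings to read an EDL, walk its clips and write the cut back out in the native .otio interchange format; the file names are placeholders.

```python
import opentimelineio as otio  # pip install OpenTimelineIO

# Read an existing cut; an EDL adapter ships with the library.
timeline = otio.adapters.read_from_file("conform_cut.edl")  # hypothetical file

# Walk every clip on every track and print where it sits in its parent track.
for clip in timeline.each_clip():
    print(clip.name, clip.range_in_parent())

# Write the same cut back out as .otio, OpenTimelineIO's native interchange format.
otio.adapters.write_to_file(timeline, "conform_cut.otio")
```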
Sports broadcasting has always been at the forefront of technology, both by developing products specifically for the sporting market – such as sports graphics, annotation and ball tracking – and by pressing nearly any new technology that comes along into production.
The result of this relentless thirst for technology is productions that get better year by year, made in more innovative and often lower-cost ways.
Remote production (known as REMI in North America) has long been a buzzword in sports broadcasting, but it has taken a long time to take hold. This is partly because the technologies needed to do it really well and really seamlessly are only just becoming established, and partly because sports workflows, from both a technology and a business-needs perspective, are so different from company to company that no single remote-production model fits them all.
However, there are ever-stronger pushes towards remote production, which is increasingly bringing it into day-to-day use in many companies. Kiswe Mobile joins us on this webinar to explain their experience in enabling remote production.
AI is seen as an important tool in sports broadcasting. With so much data, both visual and textual, AI will increasingly be an excellent tool for parsing and interpreting these large data sets, whether that is simply producing better stats and analytics or combing through thousands of hours of footage to find and log interesting events between players, ball-possession stats and the like.
IBC brings in Jérôme Wauthoz from Tedial and production consultant Mike Ruddell to share their experience of making the sport on our screens as great as it can be at a cost that broadcasters can afford.
CMAF brings low-latency streams of less than 4 seconds into the realm of possibility; WebRTC pushes that below a second – but which is the right technology for you?
Date: June 12th 2019 Time: 11am PST / 2pm EST / 19:00 BST
CMAF represents an evolution of the tried-and-tested technologies HLS and DASH. With massive scalability, built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. The push to reduce latency further and further has resulted in CMAF, which can be used to deliver streams with five to ten times lower latency.
WebRTC is a Google-backed streaming protocol with the traditional meaning of streaming: it pushes a stream to you, as opposed to the HLS-style methods of making small files available for download and reassembly into a stream. One benefit of this is extremely low latency of 1 second or less. Used widely by Google Hangouts and Facebook Messenger, WebRTC is increasingly an option for more broadcast-style streaming services, from live sports and music to gaming and gambling.
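As a rough, illustrative calculation (not from the webinar), the latency gap between the approaches largely comes down to how much media the player must buffer before playback can start: whole segments, sub-second chunks, or individual frames. The figures below are typical values rather than measurements.

```python
def hls_dash_latency(segment_s=6.0, buffered_segments=3, encode_package_s=1.0):
    """Classic segmented streaming: the player waits for whole segments
    and typically buffers several of them before playback starts."""
    return encode_package_s + segment_s * buffered_segments   # ~19 s with these defaults

def cmaf_low_latency(chunk_s=0.5, buffered_chunks=3, encode_package_s=1.0):
    """Chunked CMAF: segments still exist, but sub-second chunks are
    forwarded and played before each segment is complete."""
    return encode_package_s + chunk_s * buffered_chunks       # ~2.5 s with these defaults

def webrtc_latency(encode_s=0.05, network_rtt_s=0.05, jitter_buffer_s=0.2):
    """WebRTC: frames are pushed as they are encoded, with only a small jitter buffer."""
    return encode_s + network_rtt_s + jitter_buffer_s         # ~0.3 s with these defaults

print(hls_dash_latency(), cmaf_low_latency(), webrtc_latency())
```

With these assumed numbers, chunked CMAF lands in the ‘five to ten times lower’ range quoted above, while WebRTC sits comfortably under a second.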
Both have advantages and drawbacks, so Wowza’s Barry Owen and Anne Balistreri are here to help navigate the ins and outs of both technologies, plus answer your questions.
JPEG XS is a brand-new, ultra-low-latency standard delivering JPEG 2000 quality with 1000x lower latency: microseconds instead of milliseconds. This mezzanine compression standard promises compression ratios of up to 10:1, resolutions of up to 8K plus HDR, and frame rates from 24 to 120 fps.
Jean-Baptiste Lorent from intoPIX shows how JPEG XS can be used within the SMPTE ST 2110 stack. ST 2110-22 allows compressed video essence to be transported as an alternative to uncompressed essence – all the other elementary streams stay the same; only the video RTP payload changes. This approach saves a lot of bandwidth while keeping all the existing advantages of moving from SDI to IP.
Based on TICO, which arrived in products four or more years ago to allow HD equipment to support UHD workflows, JPEG XS was also designed for visually lossless quality and for maintaining that quality over multiple re-encoding stages. The combination of very low, microsecond-scale latency and relatively low bandwidth makes it ideal for remote production of live events.
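As a back-of-the-envelope illustration (not from the talk) of what a 10:1 mezzanine ratio means in an ST 2110 context, here is the bandwidth arithmetic for 10-bit 4:2:2 video; the figures ignore blanking and protocol overheads and are only approximate.

```python
def active_video_gbps(width, height, fps, bits_per_sample=10, samples_per_pixel=2):
    """Uncompressed active-picture rate for 4:2:2 video: on average two samples
    per pixel (one luma plus one alternating chroma sample). Blanking ignored."""
    return width * height * samples_per_pixel * bits_per_sample * fps / 1e9

uhd = active_video_gbps(3840, 2160, 60)   # ~9.95 Gbit/s uncompressed
hd  = active_video_gbps(1920, 1080, 60)   # ~2.49 Gbit/s uncompressed

# At roughly 10:1, a UHD flow drops to about 1 Gbit/s - small enough to carry
# several flows on a 10 GbE link that struggles to fit even one uncompressed UHD signal.
print(f"UHD: {uhd:.2f} -> {uhd/10:.2f} Gbit/s   HD: {hd:.2f} -> {hd/10:.2f} Gbit/s")
```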
MPEG DASH is a standardised, widely-supported protocol for networked streaming – but how can you spot problems and tell if you or another vendor have implemented it right?
This webinar, run by HbbTV – an initiative aimed at merging over-the-air broadcast with broadband delivery (which includes both file download and streaming) – sets out to explain how you can test your DASH streaming using the new tools now available. For instance, HbbTV and DVB have collaborated on a DASH validation tool which checks MPDs, segments and more to be sure that a stream is compliant with both DVB and HbbTV specifications.
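To illustrate the kind of checks such a validator automates (this sketch is not the HbbTV/DVB tool itself), here are a few basic MPD sanity checks written against Python’s standard library; the manifest file name is a placeholder and real validation goes much deeper.

```python
import xml.etree.ElementTree as ET

DASH_NS = "{urn:mpeg:dash:schema:mpd:2011}"

def basic_mpd_checks(path):
    """A handful of the structural checks a DASH validator performs on an MPD."""
    mpd = ET.parse(path).getroot()
    problems = []

    if not mpd.tag.endswith("MPD"):
        problems.append("root element is not <MPD>")
    if "profiles" not in mpd.attrib:
        problems.append("mandatory MPD@profiles attribute is missing")
    if mpd.get("type", "static") == "dynamic" and "availabilityStartTime" not in mpd.attrib:
        problems.append("dynamic (live) MPD without availabilityStartTime")

    for period in mpd.findall(f"{DASH_NS}Period"):
        for adaptation_set in period.findall(f"{DASH_NS}AdaptationSet"):
            if not adaptation_set.findall(f"{DASH_NS}Representation"):
                problems.append("AdaptationSet with no Representation")

    return problems or ["no problems found by these basic checks"]

print(basic_mpd_checks("manifest.mpd"))  # hypothetical manifest
```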
Bringing together the experience of Bob Campbell from Eurofins, Waqar Zia from Nomor Research and Juha Joki from Sofia Digital, anyone who develops for, or provides services based on DASH will benefit from this webinar.
Everyone has a go-to program or three they use for problem solving. Here is a review of a whole swathe of diagnostic programs out there for live streaming.
There are well-known favourites like Wireshark, FFplay and MediaInfo, free applications such as Eyevinn Technology’s Segment Analyser and the open-source YUView, and the talk also covers paid-for programs like Elecard’s Stream Analyser and Telestream Switch.
This talk by David Hassoun, CEO of RealEyes Media, is well worth a look because there is bound to be something there you didn’t know about – and who knows how useful that will be to you!
There continues to be fervent activity in codec development and it’s widely expected that there won’t be a single successor to AVC (H.264). Vying for one of the spots are AV1 and MPEG’s VVC.
In this talk at SMPTE 2018, Julien Le Tanou from MediaKind compares the coding tools used by VVC and AV1 and explains the methodology he uses to compare the two codecs. We see the increase in decoding time, relative to HEVC, required for VVC as well as for the famously slow AV1, and we see the bitrate savings of each, with VVC performing better.
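Comparisons like this are normally reported as BD-rate (Bjøntegaard delta rate): the average bitrate difference between two codecs at the same objective quality, obtained by fitting and integrating each codec’s rate-distortion curve. Below is a minimal sketch of that standard calculation; the rate/PSNR points are entirely made up and this is not the exact methodology of the talk.

```python
import numpy as np

def bd_rate(ref_points, test_points):
    """Bjøntegaard delta rate: average % bitrate change of 'test' relative to
    'ref' at equal quality. Each point is a (bitrate_kbps, psnr_db) pair."""
    r_ref = np.log([rate for rate, _ in ref_points])
    q_ref = [q for _, q in ref_points]
    r_tst = np.log([rate for rate, _ in test_points])
    q_tst = [q for _, q in test_points]

    # Fit log-rate as a cubic polynomial of quality, integrate over the
    # overlapping quality range, then convert the average back from log domain.
    p_ref, p_tst = np.polyfit(q_ref, r_ref, 3), np.polyfit(q_tst, r_tst, 3)
    lo, hi = max(min(q_ref), min(q_tst)), min(max(q_ref), max(q_tst))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_tst = np.polyval(np.polyint(p_tst), hi) - np.polyval(np.polyint(p_tst), lo)
    return (np.exp((int_tst - int_ref) / (hi - lo)) - 1) * 100

# Made-up rate-distortion points, purely to show the call:
hevc = [(1000, 36.0), (2000, 38.5), (4000, 40.5), (8000, 42.0)]
vvc  = [(700, 36.2), (1400, 38.7), (2800, 40.7), (5600, 42.2)]
print(f"BD-rate of VVC vs HEVC: {bd_rate(hevc, vvc):.1f}%")  # negative = bitrate saving
```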
Julien also presents subjective results, which do not correlate with the objective results, and explains the reasons for this.