Low-latency streaming is always a compromise, but what can be done to keep QoE high?
This on-demand webinar looks at CMAF and presents some real-world data on this low-latency technique. It starts by explaining that CMAF is a low-latency streaming technology which, like HLS and other HTTP streaming protocols, delivers the video as a series of small files. Olivier and Alain from Harmonic explain how this is done, look at the trade-offs and compromises involved, and introduce techniques to keep QoE high. They also compare deployment in the cloud with deployment on premises.
Pieter-Jan Speelmans talks about playback trade-offs and optimisations within the player. CMAF allows the player buffer to be reduced: on a poor network the buffer may need to stay close to ‘normal’, but on good networks it can be brought down significantly. He also talks about how ABR switching is impacted by GOP length even in CMAF.
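To make the GOP-length point concrete, here is a minimal sketch, not code from the talk: even though CMAF chunks can arrive every few hundred milliseconds, a new quality can only start decoding at the next IDR frame, so the worst-case delay before a switch reaches the screen scales with GOP (and segment) duration. The durations used are purely illustrative.

```python
# Minimal sketch (not code from the talk): why GOP length still matters for ABR in CMAF.
# Chunks may arrive every few hundred milliseconds, but a quality switch can only take
# effect at the next segment boundary (IDR frame), so the worst-case delay before the
# new quality reaches the screen is governed by GOP/segment duration, not chunk duration.

def worst_case_switch_delay(gop_duration_s: float, decided_at_s: float) -> float:
    """Time from the ABR decision until the next IDR, i.e. until the switch can render."""
    return gop_duration_s - decided_at_s

for gop_duration in (1.0, 2.0, 4.0):           # illustrative GOP/segment durations
    delay = worst_case_switch_delay(gop_duration, decided_at_s=0.3)
    print(f"GOP {gop_duration:.0f}s -> up to {delay:.1f}s before the new quality is visible")
```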
Viaccess-Orca explains how DRM works with CMAF and looks at some of the challenges, including licence acquisition time and the risk of overloading licence servers at the beginning of events. Akamai’s Will Law explains some of the benefits of CMAF, the near-real-time delivery enabled by chunked transfer encoding (HTTP/1.1), and how downloading chunks at full speed leads to problems when the same broadband link is shared by several clients.
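As a rough illustration of that chunked-transfer delivery, here is a minimal sketch, not code from the webinar, that reads an HTTP/1.1 chunked response as the data arrives rather than waiting for the complete segment; the segment URL is hypothetical.

```python
# Minimal sketch of chunked-transfer delivery (not code from the webinar). With low-latency
# CMAF the segment is published while still being encoded, so the client reads the HTTP/1.1
# chunked response as CMAF chunks arrive instead of waiting for the complete file.
# The URL below is purely illustrative.
import requests

SEGMENT_URL = "https://example.com/live/video_1080p/segment_1001.m4s"  # hypothetical

with requests.get(SEGMENT_URL, stream=True, timeout=10) as response:
    response.raise_for_status()
    # chunk_size=None yields data as the server flushes it, roughly one CMAF chunk at a time
    for chunk in response.iter_content(chunk_size=None):
        # A real player would append each CMAF chunk (moof+mdat pair) to its source buffer
        # immediately, which is what keeps the end-to-end latency low.
        print(f"received {len(chunk)} bytes")
```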
There are lots of good talks on CMAF, but this is one of the few which treats CMAF not as theory, but as something deployable today.
Sports broadcasting has always been at the forefront of technology, both by developing products specifically for the sporting market, such as sports graphics, annotation and ball tracking, and by pressing nearly any new technology that comes along into production.
The result of this relentless thirst for technology is, year by year, better and better productions made in more innovative and often lower-cost ways.
Remote production (known as REMI in North America) has long been a buzzword in sports broadcasting, but it has taken a long time to take hold. This is partly because the technologies needed to do it really well and really seamlessly are only just becoming established, and partly because sports workflows, from both a technology and a business perspective, differ so much from company to company that no single remote production model fits everyone.
However, the push towards remote production is growing ever stronger, bringing it into day-to-day use at many companies. Kiswe Mobile joins us on this webinar to explain their experience in enabling remote production.
AI is seen as an important tool in sports broadcasting. With so much data, both visual and textual, AI will increasingly be an excellent way to parse and interpret these large data sets, whether that is simply to produce better stats and analytics, or to comb through thousands of hours of footage looking for, and logging, interesting events between players, ball-possession stats and so on.
IBC brings in Jérôme Wauthoz from Tedial and production consultant Mike Ruddell to share their experience of making the sport on our screens as great as it can be, at a cost that broadcasters can afford.
CMAF brings low-latency streams of less than 4 seconds into the realm of possibility; WebRTC pushes that below a second. But which is the right technology for you?
Date: June 12th 2019 Time: 11am PST / 2pm EST / 19:00 BST
CMAF represents an evolution of the tried and tested technologies HLS and DASH. With massive scalability built upon the well-worn tenets of HTTP, Netflix and a whole industry were born and are thriving on these still-evolving technologies. The push to reduce latency further and further has resulted in CMAF, which can be used to deliver streams with five to ten times lower latencies.
WebRTC is a Google-backed streaming protocol in the traditional sense of streaming: it pushes a stream to you, as opposed to the HLS-style approach of making small files available for download and reassembly into a stream. One benefit of this is extremely low latencies of 1 second or less. Used widely by Google Hangouts and Facebook Messenger, WebRTC is increasingly an option for more broadcast-style streaming services, from live sports and music to gaming and gambling.
Both have advantages and drawbacks, so Wowza’s Barry Owen and Anne Balistreri are here to help navigate the ins and outs of both technologies and answer your questions.
JPEG XS is a brand-new, ultra-low-latency standard delivering JPEG 2000 quality with 1000x lower latency: microseconds instead of milliseconds. This mezzanine compression standard promises compression ratios of up to 10:1, resolutions of up to 8K plus HDR, and frame rates from 24 to 120 fps.
Jean-Baptiste Lorent from intoPIX shows how JPEG XS can be used with the SMPTE ST 2110 stack. Part 22 of ST 2110 allows compressed video essence to be transported as an alternative to uncompressed essence; all the other elementary streams stay the same, only the video RTP payload changes. This approach saves a lot of bandwidth while keeping all the existing advantages of moving from SDI to IP.
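To put a rough number on that saving, here is a back-of-the-envelope sketch, not figures from the talk, comparing uncompressed UHD over ST 2110-20 with JPEG XS at 10:1 over ST 2110-22. It counts active video only and ignores packetisation overhead.

```python
# Back-of-the-envelope sketch (not figures from the talk): approximate bandwidth saving
# when uncompressed UHD over ST 2110-20 is replaced by JPEG XS at 10:1 over ST 2110-22.
# Active video only; RTP/UDP/IP overhead and blanking are ignored.

width, height, fps = 3840, 2160, 60      # UHD at 60 frames per second
bits_per_pixel = 20                      # 10-bit 4:2:2 sampling
compression_ratio = 10                   # JPEG XS mezzanine compression, up to ~10:1

uncompressed_bps = width * height * fps * bits_per_pixel
jpeg_xs_bps = uncompressed_bps / compression_ratio

print(f"Uncompressed (ST 2110-20): {uncompressed_bps / 1e9:.2f} Gbit/s")
print(f"JPEG XS (ST 2110-22):      {jpeg_xs_bps / 1e9:.2f} Gbit/s")
```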
Based on TICO, which arrived in products four or more years ago to allow HD equipment to support UHD workflows, JPEG XS was also designed for visually lossless quality, and for maintaining that quality over multiple re-encoding stages. The combination of very low, microsecond latency and relatively low bandwidth makes it ideal for remote production of live events.
MPEG DASH is a standardised, widely-supported protocol for networked streaming – but how can you spot problems and tell whether you or another vendor has implemented it correctly?
This webinar, run by HbbTV – an initiative aimed at merging over-the-air broadcast with broadband delivery (which includes both file download and streaming) – sets out to explain how you can test your DASH streams using newly available tools. For instance, HbbTV and DVB have collaborated on a DASH validation tool which checks MPDs, segments and more to make sure a stream is compliant with both the DVB and HbbTV specifications.
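The HbbTV/DVB validator is far more thorough than anything shown here, but as an illustration of the kind of checks involved, below is a minimal sketch using only Python’s standard library; the specific checks are simplified assumptions, not the tool’s actual rules.

```python
# Minimal illustration (not the HbbTV/DVB validator): a few simplified sanity checks on a
# DASH MPD using only the Python standard library.
import sys
import xml.etree.ElementTree as ET

MPD_NS = "urn:mpeg:dash:schema:mpd:2011"

def basic_mpd_checks(path: str) -> list:
    """Return a list of obvious problems found in the MPD; a real validator does far more."""
    issues = []
    root = ET.parse(path).getroot()
    if root.tag != f"{{{MPD_NS}}}MPD":
        issues.append("Root element is not an MPD in the expected DASH namespace")
    if root.get("type", "static") == "dynamic" and root.get("availabilityStartTime") is None:
        issues.append("Dynamic (live) MPD is missing availabilityStartTime")
    periods = root.findall(f"{{{MPD_NS}}}Period")
    if not periods:
        issues.append("MPD contains no Period")
    for period in periods:
        for adaptation_set in period.findall(f"{{{MPD_NS}}}AdaptationSet"):
            if not adaptation_set.findall(f"{{{MPD_NS}}}Representation"):
                issues.append("AdaptationSet with no Representation")
    return issues

if __name__ == "__main__":
    for problem in basic_mpd_checks(sys.argv[1]) or ["No obvious problems found"]:
        print(problem)
```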
Bringing together the experience of Bob Campbell from Eurofins, Waqar Zia from Nomor Research and Juha Joki from Sofia Digital, this webinar will benefit anyone who develops for, or provides services based on, DASH.
Webinar Date: Thursday May 30th 2019
Time: 7am PT / 10am ET / 15:00 BST (duration: 4 hours)
AWS is synonymous with cloud computing, so an insight into managing media on AWS is an insight into cloud computing in general. AWS is offering a 4-hour showcase of implementing content creation, distribution and supply chain workflows in the cloud.
The online event starts with a keynote on the motivations for moving your workflows into the cloud and how AWS meets them. After that, there are three tracks, one for each of these topics.
The complete list is available here. AWS Elemental dominates the distribution track explaining the use cases that can be met and going through the many in-cloud transcoding options.
The creation and supply chain tracks finish with customer spotlights from FuseFX and Deluxe respectively. For anyone considering a move to the cloud for any part of their operation, these sessions should shed light on what is actually achievable and what is still wishful thinking.
Webinar date: Thursday May 30th 2019
Time: 16:00 BST / 11 am EST / 8 am PDT
Experienced advice is on hand in this webinar for those producing in HDR and UHD. Productions are always trying to raise the quality of acquisition in order to deliver better quality to viewers, to enhance creative possibilities and to maximise financial gain by future-proofing their archives. But this push always brings challenges in production, and the move to UHD and HDR is no different.
HDR and UHD are not synonymous, but they often go hand in hand. This is partly because the move to UHD is a move to improve quality, yet time and again we hear that increasing resolution in and of itself is not always an improvement. Rather, the ‘better pixels’ mantra seeks to improve the video using a combination of resolution, frame rate, HDR and Wide Colour Gamut (WCG). So, when it’s possible, HDR and WCG are often combined with UHD.
In this webinar, we hear the challenges met on the way to success by director and producer Pamela Ann Berry and The Farm Group. Register to hear them share their tips and tricks for better UHD and HDR production.
ISO BMFF is a standardised MPEG media container developed from Apple’s QuickTime format, and is the basis for cutting-edge low-latency streaming as much as it is for tried and trusted MP4 video files. Here we look into why we have it, what it’s used for and how it works.
ISO BMFF provides a structure to place around timed media streams whilst accommodating the metadata we need for professional workflows. Key to its continued utility is its extensible nature, allowing additional abilities, such as new codecs and metadata types, to be added as they are developed.
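As a rough illustration of that structure (a simplification, not production code), the sketch below walks the top-level boxes of an ISO BMFF file: every box starts with a 32-bit big-endian size and a four-character type, and boxes nest inside one another, which is part of what makes the format so extensible.

```python
# Rough illustration of ISO BMFF's box structure (a simplification, not production code).
# Every box begins with a 32-bit big-endian size and a four-character type (ftyp, moov,
# mdat, ...); boxes nest inside one another, which is what makes the format so extensible.
import struct
import sys

def list_top_level_boxes(path: str) -> None:
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:                       # 64-bit "largesize" follows the type field
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            print(f"{box_type.decode('ascii', 'replace')}: {size} bytes")
            if size == 0:                       # a size of 0 means the box runs to end of file
                break
            f.seek(size - header_len, 1)        # skip the payload to reach the next sibling

if __name__ == "__main__":
    list_top_level_boxes(sys.argv[1])           # works on any .mp4 or CMAF .m4s file
```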
ATSC 3.0’s streaming mechanism, MMT, is based on ISO BMFF, as is the low-latency streaming format CMAF, which shows that despite being over 18 years old the ISO BMFF container is still highly relevant.
Thomas Stockhammer is the Director of Technical Standards at Qualcomm. He explains the container format’s structure and origin before explaining why it’s ideal for CMAF’s low-latency streaming use case, finishing off with a look at immersive media in ISO BMFF.
AV1 and VVC are both new codecs on the scene. Codecs touch our lives every day, both at work and at home: they are the only way anyone receives audio and video online and on television. So, all together, they’re pretty important, and finding better ones generates a lot of opinion.
So what are AV1 and VVC? VVC is one of the newest codecs on the block and is undergoing standardisation in MPEG. VVC builds on the technologies standardised by HEVC but adds many new coding tools. The standard is likely to enter its draft phase before the end of 2019, resulting in it being officially standardised around a year later. For more info on VVC, check out Bitmovin’s VVC intro from Demuxed.
AV1 is a new but increasingly well-known codec, famous for being royalty-free and backed by Netflix, Apple and many other big hyperscale players. There have been reports that, although there is no royalty levied on it, patent holders have still approached big manufacturers to discuss financial reimbursement, so its ‘free’ status is a matter of debate. Whilst there is a patent defence programme, it is not known whether it’s sufficient to insulate larger players. Much further on than VVC, AV1 has already had a code freeze, and companies such as Bitmovin have been working hard to reduce its encode times, widely known to be very long, and to create live services.
Here, Christian Feldmann from Bitmovin gives us the latest status on AV1 and VVC. Christian discusses AV1’s coding tools before discussing VVC’s, pointing out the similarities between them. Whilst AV1 is already supported in well-known browsers, VVC support is only just beginning.
There’s a look at the licensing status of each codec before a look at EVC, which stands for Essential Video Coding. EVC has a royalty-free baseline profile, so it is of interest to many. Christian shares results from a Technicolor experiment.
Controlling services by voice is on the rise. Recently we have seen Google move all their Nest hardware control into Google Assistant, and the abilities of Alexa and Siri continue to grow. Smart speakers and voice-controlled AI assistants have seen rapid adoption in homes, with the UK the biggest adopter: voice assistant devices are now used in more than a quarter of all households.
With a shift away from the on-screen EPG and clunky remote controls to a world where any content is a voice command away, who owns the voice interface with the consumer and the vast amount of valuable data it creates? Does this put more power in the hands of the Silicon Valley tech giants as their voice assistants and AI algorithms become a new gatekeeper? And how should content owners respond?
This webinar explores the value of voice control for content, and finds the best strategies for broadcasters and platform operators to develop voice interfaces and maintain control of the user experience.