This webinar brings together Support Partners and Microsoft to explain the term ‘intelligent cloud’ and how it can help creative teams produce higher-quality, more innovative content by augmenting human ingenuity, manage content better, and grow audiences while increasing advertising and subscription revenue.
The panel will cover:
– Haivision’s SRT Hub, intelligent media routing and cloud-based workflows
– Highlights from partners such as Avid, Telestream and Wowza.
– New production workflows for remote live production, sports and breaking news.
– Connected production: a process that aids production collaboration and management by removing the traditional information and creative silos that exist today, while driving savings and efficiencies from script to screen.
Real-world solutions to real-world streaming latency in this panel from the Content Delivery Summit at Streaming Media East. With everyone chasing reductions in latency, many with the goal of matching traditional broadcast latencies, there is a heap of tricks and techniques at each stage of the distribution chain to speed things up.
The panel starts by surveying the way these companies are already serving video. Comcast, for example, are reducing latency by extending their network to edge CDNs. Anevia identified encoding as the number-one introducer of latency, with packaging second.
Bitmovin’s Igor Oreper talks about Periscope’s work with low-latency HLS (LHLS) explaining how Bitmovin deployed their player with Twitter and worked closely with them to ensure LHLS worked seamlessly. Periscope’s LHLS is documented in this blog post.
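The LHLS idea, as documented in Periscope’s blog post and later implemented in players such as hls.js, is to advertise segments in the playlist before they are fully written, so the player can request them immediately and receive bytes over HTTP chunked transfer as they are produced. A hypothetical playlist sketch (segment names, durations and the exact tags used are illustrative, not taken from Periscope’s deployment):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:100
#EXTINF:2.000,
segment100.ts
#EXTINF:2.000,
segment101.ts
#EXT-X-PREFETCH:segment102.ts
#EXT-X-PREFETCH:segment103.ts
```

The two prefetch entries point at segments still being encoded; the server answers those requests with chunked transfer encoding so playback can begin before the segment is complete.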
The panel shares techniques for avoiding latency, such as keeping ABR ladders small to ensure CDNs cache all the segments. Damien from Anevia points out that low latency can quickly become pointless if a low-latency stream arrives on an iPhone well before it arrives on an Android device; relative latency between viewers really matters, often more than absolute latency.
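A rough sketch of why small ladders help: fewer renditions means less data per channel competing for edge-cache space, so every segment is more likely to stay hot at the edge. All numbers below are illustrative assumptions, not figures from the panel.

```python
# Estimate the edge-cache footprint of a live ABR ladder over a sliding
# window of segments: fewer renditions -> smaller footprint -> segments
# are more likely to all stay cached, keeping latency down.

def cache_footprint_mb(ladder_kbps, segment_s, window_segments):
    """Total data held per channel for a sliding live window, in MB."""
    total_kbits = sum(ladder_kbps) * segment_s * window_segments
    return total_kbits / 8 / 1000  # kbits -> MB (decimal)

full_ladder = [6000, 4000, 2500, 1200, 800, 400]  # six renditions
trimmed     = [6000, 2500, 800]                   # three renditions

print(cache_footprint_mb(full_ladder, 2, 5))  # ~18.6 MB per channel
print(cache_footprint_mb(trimmed, 2, 5))      # ~11.6 MB per channel
```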
The importance of HTTP and the version is next up for discussion. HTTP 1.1 is still widely used but there’s increasing interest in HTTP 2 and QUIC which both handle connections better and reduce overheads thus reducing latency, though often only slightly.
The panel finishes with a Q&A after discussing how to operate in multi-CDN environments.
Esports is here to stay and brings a new dimension to big events, combining the usual challenges of producing and broadcasting events at scale with less usual challenges such as non-standard resolutions and frame rates. This session from the IBC 2019 conference looks at the reality of bringing such events to life.
The talk starts with a brief introduction to some Esports-only terms before heading into the discussion, starting with Simon Eicher, who talks about his switch toward typical broadcast tools for Esports, which has helped drive better production values and storytelling. Maxwell Trauss from Riot Games explains how they incubated a group of great producers and were able to keep production values high by having them work on shows remotely worldwide.
Blizzard produces a clean ‘world feed’ which is shared worldwide for regions to regionalise with graphics and local language before broadcasting to their audiences. To create better storytelling, Blizzard have their own software which interprets the game data and presents it to the production staff in a more consumable way.
Observers are people who control in-game cameras; a producer can call out to any one of them. The panel talks about how separating the players, the observers and the crowd allows them to change the delay between what’s happening in the game and when each of these groups sees it. At the beginning of the event, this creates the opportunity to move the crowd backwards in time so that players don’t get tipped off, and the players can similarly be isolated from the observers for the same effect. By the end of the game, however, the delays have been wound down to bring everyone back into present time for a tense finale.
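The delay trick described above can be sketched as a simple ramp: each feed starts behind the live game state and is brought back to zero offset by the end of the match. The feed names and starting delays below are hypothetical, purely to illustrate the mechanism.

```python
# Ramp each feed's delay from its starting offset down to zero over the
# course of a match, so everyone is back in sync for the finale.

def delay_s(initial_delay_s, progress):
    """Current delay for a feed.

    progress: 0.0 at the start of the match, 1.0 at the end."""
    return initial_delay_s * (1.0 - progress)

# Hypothetical starting offsets: crowd furthest behind, observers live.
feeds = {"crowd": 90.0, "players": 30.0, "observers": 0.0}

for progress in (0.0, 0.5, 1.0):
    offsets = {name: delay_s(d, progress) for name, d in feeds.items()}
    print(progress, offsets)
# At progress 1.0 every feed is at 0 s: everyone is back in present time.
```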
Corey Smith from Blizzard explains their cloud setup, in which graphics are added to clean feeds in the cloud; ultimately this would mean sending a single clean feed out of the venue. ESL, on the other hand, choose to create their streams locally.
Ryan Chaply from Twitch explains their engagement models, some of which reward viewers simply for watching. Twitch’s real-time chat also changes the way productions are made because the producers have direct feedback from the viewers. This leads, day by day, to tweaks to the format: a production may stop doing a certain thing by day three if it’s not well received; conversely, when something is a hit, they can capitalise on it.
Ryan also talks about what Twitch are weighing up in terms of when to start using UHD. Riot’s Maxwell questions whether fans really want 4K at the moment; while acknowledging it’s an inevitability, he asks whether the priority is actually having more and better stats.
The panel finishes with a look to the future, the continued adoption of broadcast into Esports, timing in the cloud and dealing with end-to-end metadata and a video giving a taste of the Esports event.
FPGAs are flexible, reprogrammable chips which can do certain tasks faster than CPUs, for example, video encoding and other data-intensive tasks. Once the domain of expensive hardware broadcast appliances, FPGAs are now available in the cloud allowing for cheaper, more flexible encoding.
In fact, according to NGCodec founder Oliver Gunasekara, video transcoding makes up a large percentage of cloud workloads, and this is increasing year on year. The demand for more video and the demand for more efficiently compressed video both push up encoding requirements. HEVC and AV1 both need much more encoding power than AVC, but the reduced bitrate can be worth it as long as the transcoding is quick enough and at the right cost.
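The “worth it at the right cost” trade-off can be made concrete with a break-even calculation: the extra encoding cost is paid once per stream, while the bandwidth saving grows with every viewer-hour delivered. All the figures below (a ~40% HEVC bitrate saving, $2/h extra transcode cost, $0.02/GB delivery) are illustrative assumptions, not NGCodec’s numbers.

```python
# Break-even sketch: how many viewer-hours before HEVC's bandwidth
# savings repay the extra transcoding cost versus AVC.

def breakeven_viewer_hours(extra_encode_cost, avc_kbps, hevc_kbps,
                           delivery_cost_per_gb):
    """Viewer-hours needed for delivery savings to cover extra encode cost."""
    # Bitrate saving per viewer-hour, converted from kbit to GB (decimal).
    saved_gb_per_hour = (avc_kbps - hevc_kbps) * 3600 / 8 / 1e6
    return extra_encode_cost / (saved_gb_per_hour * delivery_cost_per_gb)

# Assumed inputs: 5 Mbps AVC vs 3 Mbps HEVC, $2/h extra encode, $0.02/GB.
print(breakeven_viewer_hours(2.0, 5000, 3000, 0.02))  # ~111 viewer-hours
```

Past that break-even point the more expensive codec is a net saving, which is why large audiences justify heavier encoding.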
Oliver looks at how the adoption of new codecs is likely to play out, which feeds directly into quality of experience: start-up time, visual quality and buffering are all helped by reduced bitrate requirements.
It’s worth looking at the differences and benefits of CPUs, FPGAs and ASICs. The talk examines the CPU-time needed to encode HEVC showing the difficulty in getting real-time frame rates and the downsides of software encoding. It may not be a surprise that NGCodec was acquired by FPGA manufacturer Xilinx earlier in 2019. Oliver shows us the roadmap, as of June 2019, of the codecs, VQ iterations and encoding densities planned.
The talk finishes with a variety of questions covering the applicability of machine learning to encoding (such as scene detection and upscaling algorithms), C++-to-Verilog conversion, and the need for a CPU for supporting tasks.
Former CEO, founder & president, NGCodec
Oliver is now an independent consultant.
Views and opinions expressed on this website are those of the author(s) and do not necessarily reflect those of SMPTE or SMPTE Members.
This website is presented for informational purposes only. Any reference to specific companies, products or services does not represent promotion, recommendation, or endorsement by SMPTE.