Video: Remote Production In Pajamas

Remote production (AKA REMI) has been discussed for a long time – but what’s practical today? Teradek, Brandlive and Vimond share their experiences making it work.

The main benefit of remote production is reducing costs by keeping staff at base instead of sending them to the event. Switching video, adding graphics and publishing are all possible in the cloud, but how practical this is, and which people stay behind, depends very much on the company: its quality standards, its workflows, the complexity of the programme and so on.

This panel from Streaming Media East looks at when remote production is appropriate, how present a service provider needs to be, redundancy and the role of standards, in a wide-ranging discussion of the topic.

Watch now!

Speakers

Jon Landman
VP of Sales,
Teradek
Megan Wagoner
VP of Sales,
Vimond
Mark Adams
SVP Sales & Marketing,
Brandlive
Kevin McCarthy
Moderator
Director of Production,
VideoLink LLC

Webinar: Building Tomorrow’s OTT Platforms

Discover the critical success factors that broadcasters and platform owners, who are investing millions in building and upgrading OTT platforms, need to achieve in order to compete successfully with a growing array of digital competitors and deliver compelling user experiences.

Many of these broadcasters are beginning to move from their initial OTT offerings to more mature services that can scale for the future and meet the requirements of demanding viewers and regulators.

This webinar uncovers the essential parts of a flourishing OTT service, including:
– Delivering content at scale as more viewing and live events move to OTT
– Ensuring a class-leading user experience and quality
– Using analytics to maximise revenue and engagement
– Ensuring cost efficiency in the OTT workflow
– Securing platforms and content against piracy and malicious attacks

Register now!

Speakers

Natalie Billingham
Vice President, Media & Carrier EMEA,
Akamai
Raphaël Goldwaser
Lead Video Architect,
France Télévisions
Chris Wood
Chief Technology Officer,
Spicy Mango

Video: Deploying WebRTC In A Low-Latency Streaming Service

WebRTC is an under-appreciated streaming protocol with sub-second latency. Several startups are working hard to harness this technology, created by Google for video conferencing, for live streaming.

When you look at the promised latencies, you can see why. CMAF, the lowest-latency protocol for live streaming using HLS-style chunked file delivery, is gaining wider adoption and provides a very impressive latency reduction, but it typically bottoms out at between 2 and 4 seconds. To get below a second, WebRTC is almost the only option out there.

In this talk, Millicast CTO Dr. Alex Gouaillard looks at the misunderstandings and misinformation that are out there regarding WebRTC. Dr. Alex covers WebRTC now having ABR, its use over multiple hops, the testing ecosystem and much more.

Dr. Alex also covers the lessons learnt over the last two years of developing and implementing the standard, and finishes by looking to the future, which will bring in QUIC, AV1 and WebAssembly (WASM).

Watch now!
Speaker

Alex Gouaillard
Founder & CTO,
Millicast

Video: What’s the Deal with LL-HLS?

Low latency streaming was moving forward without Apple’s help – but they’ve published their specification now, so what does that mean for the community efforts that were already underway and, in some places, in use?

Apple is responsible for HLS, the most prevalent protocol for streaming video online today. In itself, it’s a great success story as HLS was ideal for its time. It relied on HTTP, a tried and trusted technology of the day, and the fact it was file-based, instead of a stream pushed from the origin, was a key factor in its wide adoption.

As life has moved on and demands have shifted from “I’d love to see some video – any video – on the internet!” to “Why is my HD stream arriving after my flatmate’s TV’s?”, we see that HLS isn’t quite up to the task of low-latency delivery. Using pure HLS as originally specified, a latency of less than 20 seconds was an achievement.

Various methods were therefore employed to improve HLS. These ideas included cutting the duration of each piece of the video, introducing HTTP/1.1’s Chunked Transfer Encoding, early announcement of chunks and many others. Using these, and other, techniques, Low Latency HLS (LHLS) was able to deliver latencies from 9 down to 4 seconds.
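The community-LHLS techniques just described can be sketched as a hand-built media playlist. This is a minimal illustration, not code from the talk: the segment names and durations are hypothetical, and the helper function is invented for the example.

```python
# Sketch of a community-LHLS style media playlist (hypothetical values).
# Two of the techniques mentioned above are visible here:
#  - short segment durations (2s targets instead of the classic 6-10s)
#  - early announcement of a segment still being encoded, which the server
#    then delivers via HTTP/1.1 Chunked Transfer Encoding as it grows.

def build_lhls_playlist(media_sequence, complete_segments, in_progress_segment):
    """Build a simple live playlist that advertises an in-progress segment."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:2",  # short segments cut end-to-end latency
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for name in complete_segments:
        lines += ["#EXTINF:2.000,", name]
    # Early announcement: the player can request this segment immediately
    # and receive its bytes over chunked transfer as they are produced.
    lines += ["#EXTINF:2.000,", in_progress_segment]
    return "\n".join(lines)

playlist = build_lhls_playlist(100, ["seg100.ts", "seg101.ts"], "seg102.ts")
print(playlist)
```

The key trick is the last entry: the player fetches `seg102.ts` before it is finished, so playback can start chasing the live edge rather than waiting for whole segments.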

Come WWDC this year, Apple announced their specification on achieving low latency streaming which the community is calling ALHLS (Apple Low-latency HLS). There are notable differences in Apple’s approach to that already adopted by the community at large. Given the estimated 1.4 billion active iOS devices and the fact that Apple will use adherence to this specification to certify apps as ‘low latency’, this is something that the community can’t ignore.

Zac Shenker from Comcast explains some of this backstory and helps us unravel what this means for us all. Zac first explains what LHLS is and then goes into detail on Apple’s version, which includes interesting, mandatory, elements like using HTTP/2. Using HTTP/2 and the newer QUIC (which will become effectively HTTP/3) is very tempting for streaming applications, but it requires work on both the server and the player side. Recent tests using QUIC have been, when taken as a whole, inconclusive in terms of working out whether it has a positive or a negative impact on streaming performance; experiments have shown both results.
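To give a flavour of what Apple’s specification adds, here is a sketch of an LL-HLS style playlist using the partial-segment tags the spec defines (`EXT-X-SERVER-CONTROL`, `EXT-X-PART-INF`, `EXT-X-PART`, `EXT-X-PRELOAD-HINT`). The segment and part names, durations and the builder function are all hypothetical; this is an illustration of the idea, not a conformant implementation.

```python
# Sketch of an Apple LL-HLS style playlist (hypothetical names and values).
# The spec splits each segment into sub-second "parts" and lets the server
# hint at the next part before it exists, so players can request it early.

def build_llhls_playlist(msn, parts_per_segment=4, part_target=0.5):
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:6",
        "#EXT-X-TARGETDURATION:2",
        # Server capabilities: blocking playlist reload and hold-back tuning.
        f"#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK={3 * part_target}",
        f"#EXT-X-PART-INF:PART-TARGET={part_target}",
        f"#EXT-X-MEDIA-SEQUENCE:{msn}",
    ]
    for p in range(parts_per_segment):
        lines.append(
            f'#EXT-X-PART:DURATION={part_target},URI="seg{msn}.part{p}.mp4"'
        )
    # Tell the player which part to ask for next, before it is complete.
    lines.append(
        f'#EXT-X-PRELOAD-HINT:TYPE=PART,URI="seg{msn}.part{parts_per_segment}.mp4"'
    )
    return "\n".join(lines)

print(build_llhls_playlist(200))
```

Compared with the community approach, the parts and preload hints are what push achievable latency below the duration of a full segment.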

The talk is a detailed look at the large array of requirements in this specification. The conclusion is a general surprise at the number of ‘moving parts’, given there is significant work to be done on both the server and the player. The server will have to remember state and, due to the use of HTTP/2, it’s not clear that the very small playlist.m3u8 files can be served from a playlist-optimised CDN separately from the video, as is often the case today.
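The server-side state requirement comes largely from blocking playlist reload. Instead of polling, a player asks for a future playlist version via the `_HLS_msn` and `_HLS_part` query parameters from Apple’s spec, and the server holds the request open until that version exists. The toy origin below is an invented in-memory model to show the decision the server has to make, not real server code.

```python
# Sketch of LL-HLS blocking playlist reload (parameter names from Apple's
# spec; the origin logic is a hypothetical in-memory model for illustration).
from urllib.parse import urlencode

def blocking_reload_url(base, msn, part=None):
    """URL a player would request to wait for media sequence `msn` (and part)."""
    params = {"_HLS_msn": msn}
    if part is not None:
        params["_HLS_part"] = part
    return f"{base}?{urlencode(params)}"

class ToyOrigin:
    """Minimal stand-in for an origin deciding whether to respond or hold."""
    def __init__(self, current_msn, current_part):
        self.current_msn = current_msn
        self.current_part = current_part

    def can_respond(self, requested_msn, requested_part=0):
        # Respond immediately only if the requested version already exists;
        # a real server would otherwise park the connection until it does --
        # which is exactly the per-stream state a plain file server lacks.
        return (self.current_msn, self.current_part) >= (requested_msn, requested_part)

origin = ToyOrigin(current_msn=200, current_part=2)
print(blocking_reload_url("https://example.com/live.m3u8", 201, 0))
print(origin.can_respond(200, 1))  # already published -> respond now
print(origin.can_respond(201, 0))  # still in the future -> hold the request
```

This is what makes a stateless, cache-everything CDN a poor fit for the playlist path: every held request is a promise the origin has to track.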

There’s a whole heap of difference between serving a flood of large files and delivering a small, though continually updated, file to thousands of endpoints. As such, CDNs are currently optimised separately for the text playlists and the media files they serve. They may even be delivered by totally separate infrastructures.

Zac explains why this changes with LL-HLS both in terms of separation but also in the frequency of updating the playlist files. He goes on to explore the other open questions like how easy it will be to integrate Server-Side Ad Insertion (SSAI) and even the appetite for adoption of HTTP/2.

Watch now!
Speaker

Zac Shenker
Director of Engineering, Video Experience & Optimization,
CBS Interactive