Video: Demystifying Video Delivery Protocols

Let’s face it: there are a lot of streaming protocols out there, both for contribution and distribution. Internet ingest over RTMP is being displaced by RIST and SRT, whilst low-latency players such as CMAF and LL-HLS are vying for position as they try to oust HLS and DASH in existing services streaming to the viewer.

This panel, hosted by Jason Thibeault from the Streaming Video Alliance, talks about all these protocols and attempts to put each in context, both in the broadcast chain and in terms of its features. Two of the main contribution technologies are RIST and SRT, both UDP-based protocols that recover lost packets by having the receiver re-request them from the sender. This gives them very high resilience to packet loss – ideal for internet deployments.
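
To make that mechanism concrete, below is a minimal Python sketch of NACK-based retransmission (ARQ). It is not SRT’s or RIST’s actual wire format – the ports, the bare 4-byte sequence header and the lack of timers, pacing and reorder buffers are all simplifications for illustration:

```python
import socket
import struct

# Sketch of NACK-based retransmission (ARQ), the recovery idea shared by
# SRT and RIST. NOT either protocol's real wire format: ports, the bare
# 4-byte sequence header and the missing timers are illustrative only.

DATA_PORT, NACK_PORT = 5000, 5001   # hypothetical port numbers


def receiver_loop() -> None:
    data_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data_sock.bind(("0.0.0.0", DATA_PORT))
    nack_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    expected = 0
    while True:
        packet, (sender_ip, _) = data_sock.recvfrom(2048)
        seq = struct.unpack("!I", packet[:4])[0]   # 4-byte sequence number
        if seq > expected:
            # A gap in the sequence numbers means packets were lost in
            # transit: re-request each missing packet from the sender.
            for missing in range(expected, seq):
                nack_sock.sendto(struct.pack("!I", missing),
                                 (sender_ip, NACK_PORT))
        expected = max(expected, seq + 1)
        deliver(packet[4:])   # real receivers buffer and reorder first


def sender_nack_loop(sent: dict, data_sock: socket.socket) -> None:
    # The sender keeps recently sent packets keyed by sequence number
    # and resends any packet that a NACK asks for.
    nack_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    nack_sock.bind(("0.0.0.0", NACK_PORT))
    while True:
        nack, (receiver_ip, _) = nack_sock.recvfrom(4)
        seq = struct.unpack("!I", nack)[0]
        if seq in sent:
            data_sock.sendto(sent[seq], (receiver_ip, DATA_PORT))


def deliver(payload: bytes) -> None:
    pass   # placeholder: hand the payload on to the decoder
```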

First, we hear about SRT from Maxim Sharabayko. He lists some of the 350 members of the SRT Alliance, a group of companies who deliver SRT in their products and collaborate to ensure interoperability. Maxim explains that SRT, based on the UDT protocol, can handle live streaming for contribution as well as optimised file transfer. He also explains that it’s free for commercial use and can be found on GitHub. SRT has been featured a number of times on The Broadcast Knowledge; for a deeper dive, have a look at videos such as this one, or the ones under the SRT tag.

Next, Kieran Kunhya explains that RIST was a response to an industry request for a vendor-neutral protocol for reliable delivery over the internet or other dedicated links. Not only does vendor neutrality help remove reluctance among users and vendors to adopt the technology, but interoperability is also a key benefit. Kieran calls out hitless switching across multiple ISPs and cellular bonding as important features of RIST. For a summary of all of RIST’s features, read this article. For videos with a deeper dive, have a look at the RIST tag here on The Broadcast Knowledge.
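
The idea behind hitless switching is simple to sketch: the sender duplicates the stream across two links and the receiver delivers whichever copy of each packet arrives first, so either path can fail without a glitch. The Python below illustrates that principle only – the ports and packet format are hypothetical, and this is not RIST’s actual implementation:

```python
import select
import socket
import struct

# Sketch of hitless switching: the same stream arrives over two links
# (e.g. two ISPs) and the first copy of each packet wins, duplicates are
# dropped. Ports and packet layout are hypothetical, for illustration.

PORT_A, PORT_B = 6000, 6001   # one port per network path


def hitless_receive() -> None:
    socks = []
    for port in (PORT_A, PORT_B):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", port))
        socks.append(s)

    seen = set()   # delivered sequence numbers (bounded in real code)
    while True:
        ready, _, _ = select.select(socks, [], [])
        for s in ready:
            packet, _ = s.recvfrom(2048)
            seq = struct.unpack("!I", packet[:4])[0]
            if seq not in seen:        # first arrival wins
                seen.add(seq)
                deliver(packet[4:])


def deliver(payload: bytes) -> None:
    pass   # placeholder: pass the payload on to the decoder
```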


Barry Owen represents WebRTC in this webinar, though Wowza deals with many protocols in its products. WebRTC’s big advantage is sub-second delivery, which is not possible with either CMAF or LL-HLS. Whilst it’s heavily used for video conferencing, for which it was invented, a number of companies in the streaming space are using it for delivery to the user because of its almost instantaneous delivery speed. Unlike with CMAF and LL-HLS, a perfect rendition of the video isn’t guaranteed, but for auctions, gambling and interactive services, latency is always king. For contribution, Barry explains, the flexibility of being able to contribute from a browser can be enough to make this a compelling technology, although it does bring with it quality/profile/codec restrictions.

Josh Pressnell and Ali C. Begen talk about the protocols used for delivery to the user. Josh explains how Smooth Streaming has exited the scene, ceding ground to DASH, CMAF and HLS. They discuss how the lack of a true CENC – Common Encryption – mechanism leads to duplication of assets, and how many streaming services similarly have to hold duplicate assets to cover the range of devices they target.
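
For context, applying Common Encryption itself is straightforward; the duplication the panel describes arises because FairPlay devices require the cbcs scheme whilst many Widevine and PlayReady devices have historically required the AES-CTR ‘cenc’ scheme, so the same content often has to be encrypted twice. As a hedged illustration (not a command from the panel), FFmpeg’s MP4 muxer can apply CENC like this, with dummy key values:

```sh
# Illustrative only: apply CENC (AES-CTR scheme) to an MP4 without
# re-encoding. The 128-bit key and KID below are dummy hex values.
ffmpeg -i input.mp4 -c copy \
  -encryption_scheme cenc-aes-ctr \
  -encryption_key 00112233445566778899aabbccddeeff \
  -encryption_kid 112233445566778899aabbccddeeff00 \
  encrypted.mp4
```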

Looking ahead, the panel is buoyed by the promise of QUIC. There is some concern that QUIC, the Google-invented protocol for HTTP delivery over UDP, is being standardised by the IETF whilst simultaneously being modified by Google separately. But the prospect of a UDP-style mode and the higher efficiency seems to instil hope in all the participants of the panel.

Watch now to hear all the details!
Speakers

Ali C. Begen
Technical Consultant, Comcast
Kieran Kunhya
Founder & CEO, Open Broadcast Systems
Director, RIST Forum
Barry Owen
VP, Solutions Engineering, Wowza Media Systems
Joshua Pressnell
CTO, Penthera Technologies
Maxim Sharabayko
Senior Software Developer, Haivision
Moderator: Jason Thibeault
Executive Director, Streaming Video Alliance

Video: Remote editing, storage, cloud dynamics & reopening production

The pandemic pulled the rug from under the feet of the production industry, in both film and television. The scramble to finish projects and fill TV schedules has resulted in a lot of creative ideas and a surge in remote editing. This panel looks at the benefits of this way of working and considers whether it will continue once restrictions are lifted.

In this video, we hear from Sony, Teradici, Lou Wirth Productions, EditShare and PADEM Group on the gaping hole in workflows left by the pandemic and how the industry has bridged the gap with remote editing.

Moderated by Allan McLennan of PADEM Media Group, an executive board director of the IET Media technology network, the panel answers questions like “What are the challenges of moving to remote editing?” and “Can remote editing open up diversity in this part of the industry?”, and looks to the future in terms of new technologies for meeting streaming demand.

“One of the challenges with a technology transition is people often need a motivation”

Stephen Tallamy, EditShare

“It’s easy to keep doing the thing you used to do until you’re forced to do it,” explains EditShare’s Stephen Tallamy. But the panel doesn’t see the pandemic as just something that forced a change; rather, they see the benefits in the move towards remote editing and remote collaboration. David Rosen from Sony was positive, saying that “Creative resources can be anywhere and the elimination of having to move those people to where the content is…is a significant advantage.” From his perspective, increasing numbers of customers have cloud as part of their workflow.

“Never again,” my customers are saying. “Never again will I be in a situation where I cannot get access to my content.”

David Rosen, Sony

The panel’s discussion moves to remote editing, the practice of giving editors access to remote computers which run the editing software and have access to the relevant media. The editor’s local computer then becomes a window onto an edit suite in a different building, or in the cloud. Ian Main from Teradici explains that a company can open an edit station up to an editor anywhere in the world, which is why this is such an important part of enabling work to continue in an emergency. Teradici specialises in developing and deploying high-performance remote control of PCs, and Stephen Tallamy speaks from the experience of using it to enable remote editing workflows on AWS, other cloud providers and data centres.

“The production side shut down, but the post-production side accelerated.”

Ian Main, Teradici

Lou Wirth, award-winning editor and producer, joins the panel as someone who has continued to edit locally. “For producers who were forced to go into a remote editing situation, they may have always been on the fence about it”, Lou says, “…If it was a good experience, they would see the advantages of it and continue.” Indeed, the consensus does seem to be that much of what’s happening now will be fed back into workflows of the future, even when restrictions are lifted.

Listen to the whole discussion which includes a look ahead to IBC.

Watch now!
Speakers

Ian Main
Technical Marketing Principal, Teradici
David Rosen
VP, Cloud Applications & Solutions, Sony
Stephen Tallamy
Chief Technology Officer, EditShare
Lou Wirth
Head Story Editor, Lou Wirth Productions
Moderator: Allan McLennan
Chief Executive, Global Market Technologist, PADEM Media Group
Executive Board Director, IET Media technology network

Video: Maintaining Colour Spaces

Getting colours right is tricky. Many of us get away without considering colour spaces in both our professional and personal lives. But if you’ve ever wanted to print a logo in exactly the right colour, you may have found out the hard way that the colour in your JPEG doesn’t always match the CMYK of the printer. Here, of course, we’re talking about colour in video: with SD’s Rec. 601 and HD’s Rec. 709 colour spaces, how do we keep colours correct?

Rec. ITU-R BT.601-7, also known as Rec. 601, is the colour space standardised for SD video; Rec. ITU-R BT.709-6, also known as Rec. 709, is typically used for HD video. For anyone who wants to brush up on what a colour space is, check out this excellent talk from Vimeo’s Vittorio Giovara. He’s a great communicator, and we have a number of his other talks on The Broadcast Knowledge.

In this talk, starting 28 minutes into the Twitch feed, Matt Szatmary exposes a number of problems. The first is the inconsistent, and sometimes wrong, way that browsers interpret colours in videos. The second is that FFmpeg only maintains colour space information in certain circumstances. Lastly, he shows the colour changes that can occur when you’re not careful about maintaining the ‘chain of custody’ of colour space information.

Matt starts by explaining that the ‘VUI’, the Video Usability Information, found in AVC and HEVC conveys colour space information among other things, such as aspect ratio. VUI parameters were new in AVC; they are not used by the encoder itself, but they tell decoders what to take into account during the decoding process. We then see a live demonstration of Matt using FFmpeg to move videos through different colour spaces and the immediate results in different browsers.
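
As a flavour of the kind of command involved (this one is illustrative, not taken from the talk), FFmpeg can both convert the pixels into BT.709 and write the matching colour metadata into the output, so the VUI tells the truth about the video it describes:

```sh
# Illustrative only: convert into BT.709 and tag the output to match.
# The colorspace filter converts the actual pixel data; the three
# output options write the corresponding metadata into the stream.
ffmpeg -i input.mp4 \
  -vf colorspace=all=bt709 \
  -c:v libx264 \
  -colorspace bt709 -color_primaries bt709 -color_trc bt709 \
  output.mp4
```

Doing only one half of this – converting without retagging, or retagging without converting – is exactly the kind of mismatch Matt demonstrates in the browsers.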

This is an illuminating talk for anyone who cares about displaying the correct colours and brightnesses, particularly given how many processes are based on FFmpeg. Matt demonstrates how to ensure FFmpeg maintains the correct information.

Watch now!
Download the scripts used in the video
Speakers

Matt Szatmary
Senior Video Encoding Engineer, Mux

Video: A State-of-the-Industry Webinar: Apple’s LL-HLS is finally here

Even after restrictions are lifted, it’s estimated that overall streaming subscriptions will remain 10% higher than before the pandemic. We’ve known for a long time that streaming is here to stay, and viewers want their live streams to arrive quickly and on a par with broadcast TV. There have been a number of attempts at this: the streaming community extended HLS to create LHLS, which brought latency down considerably without making major changes to the de facto standard.

MPEG’s DASH has also created a standard for low-latency streaming, allowing CMAF to be used to push latency down even further than LHLS. Then Apple, the inventors of the original HLS, announced Low-Latency HLS (LL-HLS). We’ve looked at all of these previously here on The Broadcast Knowledge. This Online Streaming Primer is a great place to start. If you already know the basics, there’s no one better than Will Law to explain the details.

The big change that’s happened since Will Law’s talk above is that Apple have revised their original plan. This talk from Pieter-Jan Speelmans, CTO and founder of THEOplayer, explains how Apple have modified their approach to low latency. Starting with a reminder of the latency problem with HLS, Pieter-Jan explains how Apple originally wanted to implement LL-HLS with HTTP/2 push and the problems that caused. This has changed now, and this talk gives us a first glimpse of how well the revised approach works.

Pieter-Jan talks about how LL-DASH streams can be repurposed for LL-HLS, explains the protocol overheads and discusses the optimal settings for segment and part length. He explains how segment length plays into overall latency as well as start-up latency and the ability to navigate the ABR ladder without buffering.
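
To picture how parts relate to segments, here’s a simplified, illustrative LL-HLS media playlist; the URIs, sequence numbers and durations are invented. Completed segments are advertised as normal, the in-flight segment is exposed as short parts, and the preload hint lets a player request the next part before it exists:

```
#EXTM3U
#EXT-X-TARGETDURATION:4
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=3.0
#EXT-X-PART-INF:PART-TARGET=1.0
#EXT-X-MEDIA-SEQUENCE:266
#EXTINF:4.0,
segment266.mp4
#EXT-X-PART:DURATION=1.0,URI="segment267.part0.mp4",INDEPENDENT=YES
#EXT-X-PART:DURATION=1.0,URI="segment267.part1.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment267.part2.mp4"
```

Shorter parts reduce latency but increase the number of requests – the trade-off behind the overheads and optimal settings Pieter-Jan discusses.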

There was initially a lot of frustration within the community at the way Apple introduced LL-HLS, both because of how it was approached and because of the problems implementing it. Now that the technical issues have been, at least partly, addressed, this is the first of hopefully many talks looking at the reality of the latest version. With an expected ‘GA’ date of September, it’s not long before nearly all Apple devices will be able to receive LL-HLS, and using the protocol will need to be part of the playbook of many streaming services.

Watch now to get the full detail

Speaker

Pieter-Jan Speelmans
CTO & Founder, THEOplayer