Video: SRT – How the hot new UDP video protocol actually works under the hood

In the West, RTMP is seen as a dying protocol, so the hunt is on for a replacement that can be as widely adopted while keeping some of its best parts, including relatively low latency. SRT is a protocol for Secure, Reliable Transport of streams over the internet, so does it have a role to play, and how does it work?

Alex Converse from Twitch picks up the gauntlet and dives deep into the workings of SRT to show how it compares to RTMP and, specifically, how it improves upon it.

RTMP fails in many ways; two to focus on are that the spec has stopped moving forward and that it doesn’t work well over problematic networks. So Alex takes a few minutes to explain where SRT has come from, the importance of it being open source, and how to get hold of the code and more information.

Now, Alex starts his dive into the detail, reminding us about UDP, TS packets and Ethernet MTUs as he goes down. We look at how SRT data packets are formed, which helps explain some of the features and sets us up for a more focussed look.
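As a rough illustration of what goes over the wire, here’s a hypothetical TypeScript sketch of assembling an SRT data packet from the header fields described in the public spec; the function name and values are mine, and the flag fields are simplified away:

```typescript
// Hypothetical sketch of building an SRT data packet, based on the header
// layout in the public SRT spec (simplified: flags and message number left
// at zero). Function name and example values are illustrative.
function buildSrtDataPacket(
  seqNo: number,        // 31-bit packet sequence number; top bit 0 marks a data packet
  timestampUs: number,  // microseconds since the SRT socket started
  destSocketId: number, // identifies the destination SRT socket
  payload: Uint8Array,  // e.g. seven 188-byte MPEG-TS packets = 1316 bytes
): Uint8Array {
  const header = new DataView(new ArrayBuffer(16));
  header.setUint32(0, seqNo & 0x7fffffff); // F bit = 0 => data packet
  header.setUint32(4, 0);                  // PP/O/KK/R flags + message number (omitted)
  header.setUint32(8, timestampUs >>> 0);
  header.setUint32(12, destSocketId >>> 0);

  const packet = new Uint8Array(16 + payload.length);
  packet.set(new Uint8Array(header.buffer), 0);
  packet.set(payload, 16);
  return packet;
}

// Seven TS packets (7 × 188 = 1316 B) + 16 B SRT header + 8 B UDP + 20 B IP
// = 1360 B, which fits comfortably inside a 1500-byte Ethernet MTU.
const packet = buildSrtDataPacket(1, 0, 0x2a, new Uint8Array(7 * 188));
console.log(packet.length); // 1332
```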

SRT, as with other similar protocols that create their resilience by retransmitting missing packets, needs to use buffers in order to have a chance to resend the missing data before it’s needed at the decoder. Alex takes us through how the sender and receiver buffers work so we can understand the behaviour in different situations.
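To make the idea concrete, here’s a toy sketch of a latency-window receiver buffer; the 120 ms default and all names are illustrative, not SRT’s actual implementation:

```typescript
// Toy model of an SRT-style receiver buffer: packets are held for a fixed
// latency window before being released to the decoder, giving lost packets
// time to be retransmitted. The 120 ms figure and all names are illustrative.
interface BufferedPacket { seqNo: number; arrivalMs: number; data: Uint8Array; }

class ReceiverBuffer {
  private packets = new Map<number, BufferedPacket>();
  private nextSeqNo = 0;
  constructor(private latencyMs: number = 120) {}

  insert(pkt: BufferedPacket): void {
    this.packets.set(pkt.seqNo, pkt); // out-of-order arrivals are fine
  }

  // Release every in-order packet whose latency window has expired. A gap
  // (a still-missing packet) blocks release until a retransmission fills it;
  // real SRT will eventually give up and drop it, which is omitted here.
  release(nowMs: number): Uint8Array[] {
    const ready: Uint8Array[] = [];
    for (;;) {
      const pkt = this.packets.get(this.nextSeqNo);
      if (!pkt || nowMs - pkt.arrivalMs < this.latencyMs) break;
      ready.push(pkt.data);
      this.packets.delete(this.nextSeqNo);
      this.nextSeqNo++;
    }
    return ready;
  }
}
```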

Fundamental to the whole protocol are the packet acknowledgements and negative acknowledgements, which feature heavily before we discuss handshaking as we start our ascent from the depths of the protocol. As much as acknowledgements provide the reliability, encryption provides the ‘secure’ in Secure Reliable Transport. We look at the approach taken to encryption and how it relates to the encryption currently used for websites.
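As a simplified sketch of the receiver side of this, gap detection over sequence numbers is what triggers a negative acknowledgement; the function names here are hypothetical:

```typescript
// Sketch of the receiver-side loss detection that drives negative
// acknowledgements: a jump in sequence numbers means the packets in between
// went missing and should be requested again. Names are mine; real SRT also
// timestamps losses and re-sends NAKs periodically.
function detectLoss(lastSeqNo: number, newSeqNo: number): number[] {
  const lost: number[] = [];
  for (let s = lastSeqNo + 1; s < newSeqNo; s++) lost.push(s);
  return lost;
}

function onDataPacket(
  state: { lastSeqNo: number },
  seqNo: number,
  sendNak: (seqNos: number[]) => void,
): void {
  const lost = detectLoss(state.lastSeqNo, seqNo);
  if (lost.length > 0) sendNak(lost); // the NAK lists the missing sequence numbers
  state.lastSeqNo = Math.max(state.lastSeqNo, seqNo);
}
```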

Finally, Alex answers a number of questions from the audience as he concludes this talk from the San Francisco Video Tech meetup.

Watch now!

Speaker

Alex Converse
Streaming Video Software Engineer,
Twitch

Video: How Libre Can You Go?

Many companies would love to be using free codecs, unencumbered by patents, rather than paying for HEVC or AVC. Phil Cluff shows that, contrary to popular belief, it is possible to stream with free codecs and get good coverage on mobile and desktop.

Phil starts off by looking at the codecs available and whether they’re patent-encumbered, with an eye on how much of the market can actually decode them. Free codecs and containers such as VP8 and WebM are not supported by Safari, which cuts mobile penetration roughly in half. To prove the point, Phil presents the results of his trials using HEVC, AVC and VP8 on all the major browsers.
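As a concrete illustration, a player can probe for this support at runtime with the standard MediaSource API; the codec strings below are typical examples rather than Phil’s exact test matrix:

```typescript
// Sketch of probing the browser for royalty-free codec support before
// choosing a stream, using the standard MediaSource API. The codec strings
// are typical examples, not an exhaustive list.
const candidates = [
  'video/webm; codecs="vp8"',
  'video/webm; codecs="vp9"',
  'video/webm; codecs="av01.0.05M.08"', // an example AV1 codec string
  'video/mp4; codecs="avc1.42E01E"',    // H.264 Baseline, the patent-encumbered fallback
];

for (const type of candidates) {
  const playable =
    typeof MediaSource !== 'undefined' && MediaSource.isTypeSupported(type);
  console.log(`${type}: ${playable ? 'playable' : 'not supported'}`);
}
```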

Whilst this initially leaves a disappointing result for streaming with libre codecs on mobile, there is a solution! Phil explains how an idea from several years ago is being reworked to provide a free streaming protocol, MPAG-SASH, which avoids using DASH, itself based on the patent-encumbered ISO BMFF. He then explains how open video players like video.js can be modified to decode libre codecs.

With these two enhancements, we finally see that coverage of up to 80% on mobile is, in principle, possible.

Watch now!
Speaker

Phil Cluff
Streaming Specialist,
Mux

Video: What’s the Deal with LL-HLS?

Low latency streaming was moving forward without Apple’s help – but they’ve published their specification now, so what does that mean for the community efforts that were already under way and, in some places, in use?

Apple is responsible for HLS, the most prevalent protocol for streaming video online today. HLS is in itself a great success story, as it was ideal for its time: it relied on HTTP, a tried and trusted technology of the day, and the fact that it was file-based, rather than a stream pushed from the origin, was a key factor in its wide adoption.

As life has moved on and demands have shifted from “I’d love to see some video – any video – on the internet!” to “Why is my HD stream arriving behind my flatmate’s TV?”, we see that HLS isn’t quite up to the task of low-latency delivery. Using pure HLS as originally specified, a latency of less than 20 seconds was an achievement.

Various methods were therefore employed to improve HLS. These ideas included cutting the duration of each piece of the video, introducing HTTP/1.1’s Chunked Transfer Encoding, early announcement of chunks, and many others. Using these and other techniques, Low Latency HLS (LHLS) was able to deliver streams of 9 down to 4 seconds of latency.
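To illustrate why chunked transfer encoding matters here, this hypothetical Node.js origin starts serving a segment before it has finished being produced; all timings, sizes and names are made up:

```typescript
// Minimal Node.js sketch of chunked transfer for low-latency HLS: the origin
// starts sending a segment while the encoder is still producing it, so the
// player receives the first chunks seconds before the segment completes.
import http from 'node:http';

http.createServer((req, res) => {
  // Omitting Content-Length makes Node use Transfer-Encoding: chunked.
  res.writeHead(200, { 'Content-Type': 'video/mp4' });
  let chunkNo = 0;
  const timer = setInterval(() => {
    res.write(Buffer.from(`chunk-${chunkNo++}`)); // stand-in for a real CMAF chunk
    if (chunkNo === 12) { clearInterval(timer); res.end(); }
  }, 500); // a new chunk every 500 ms rather than one 6-second segment
}).listen(8080);
```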

Come WWDC this year, Apple announced its specification for achieving low-latency streaming, which the community is calling ALHLS (Apple Low-latency HLS). There are notable differences between Apple’s approach and that already adopted by the community at large. Given the estimated 1.4 billion active iOS devices, and the fact that Apple will use adherence to this specification to certify apps as ‘low latency’, this is something the community can’t ignore.

Zac Shenker from CBS Interactive explains some of this backstory and helps us unravel what it means for us all. Zac first explains what LHLS is and then goes into detail on Apple’s version, which includes interesting mandatory elements such as using HTTP/2. Using HTTP/2 and the newer QUIC (which will effectively become HTTP/3) is very tempting for streaming applications, but it requires work on both the server and the player side. Recent tests using QUIC have been, taken as a whole, inconclusive as to whether it has a positive or a negative impact on streaming performance; experiments have shown both results.

The talk is a detailed look at the large array of requirements in this specification. The conclusion is general surprise at the number of ‘moving parts’, given there is significant work to be done on the server as well as in the player. The server will have to remember state and, due to the use of HTTP/2, it’s not clear that the very small playlist.m3u8 files can be served from a playlist-optimised CDN separately from the video, as is often the case today.
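To see why the server must hold state, here’s a sketch of the blocking playlist reload that Apple’s spec describes, where the player names the media sequence number and part it is waiting for; the URL and values are invented:

```typescript
// Sketch of an ALHLS-style blocking playlist request: the player asks the
// origin to hold the response until a given media sequence number (and part)
// has been published, which is why the server must keep state. The
// _HLS_msn/_HLS_part query parameters come from Apple's spec; the URL is
// made up.
const msn = 271; // the next media sequence number the player expects
const part = 2;  // the next partial segment within that sequence number

const url =
  `https://example.com/live/playlist.m3u8?_HLS_msn=${msn}&_HLS_part=${part}`;
const res = await fetch(url); // the server blocks until that part exists
const playlist = await res.text();
console.log(playlist); // an updated playlist containing the requested part
```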

There’s a whole heap of difference between serving a flood of large files and delivering a small, though continually updated, file to thousands of endpoints. As such, CDNs are currently optimised separately for the text playlists and the media files they serve; the two may even be delivered by totally separate infrastructures.

Zac explains why this changes with LL-HLS, both in terms of that separation and in the frequency of updating the playlist files. He goes on to explore other open questions, such as how easy it will be to integrate Server-Side Ad Insertion (SSAI) and even the appetite for adopting HTTP/2.

Watch now!
Speaker

Zac Shenker
Director of Engineering, Video Experience & Optimization,
CBS Interactive

Video: Understanding Video Performance: QoE is not QoS

Mux’s Justin Sanford explains the difference between Quality of Service and Quality of Experience, the latter being about the entire viewer experience. Justin looks at startup time, showing that it’s a combination of a number of factors, which can include loading a web page, demonstrating how dependent your player is on the whole ecosystem.

Justin discusses rebuffering and what ‘quality’ means when we talk about streaming: quality is a combination of encoding quality and resolution, but also whether the playback judders.

“Not every optimisation is a tradeoff, however startup time vs. rebuffering is a canonical tradeoff.”

Justin Sanford, Mux

Finally, we look at ways of dealing with this, including gathering analytics, standards for measuring quality of experience, and understanding the types of issues your viewers care most about.
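As a sketch of what that analytics gathering can look like inside the player itself, the standard HTMLVideoElement events are enough to measure startup time and rebuffering; the variable names here are mine:

```typescript
// Sketch of gathering two QoE metrics in the player, using standard
// HTMLVideoElement events: startup time (play requested -> first frame
// playing) and accumulated rebuffering time. Names are illustrative.
const video = document.querySelector('video')!;
let playRequestedAt = 0;
let stallStartedAt = 0;
let startupMs = 0;
let rebufferMs = 0;

video.addEventListener('play', () => {
  playRequestedAt = performance.now();
});
video.addEventListener('waiting', () => {
  stallStartedAt = performance.now(); // playback stalled: a rebuffer begins
});
video.addEventListener('playing', () => {
  if (startupMs === 0) startupMs = performance.now() - playRequestedAt;
  if (stallStartedAt > 0) {
    rebufferMs += performance.now() - stallStartedAt;
    stallStartedAt = 0;
  }
});
```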

From San Francisco Video Tech.

Watch now!

Speaker

Justin Sanford
Product Manager,
Mux