Video: Latency Still Sucks (and What You Can Do About It)

The streaming industry is on an ever-evolving quest to reduce latency, to bring it in line with – or beat – linear broadcasts and to allow business models such as gaming to flourish. When streaming started, latencies of a minute or more were not uncommon and, whilst there are some simple ways to improve that, getting down to the latency of digital TV, approximately 5 seconds, is not without challenges. Whilst a target of 5 seconds works for many use cases, it’s still not enough for auctions, gambling or ‘gamification’, which need sub-second latency.

In this panel, Jason Thibeault explores how to reduce latency with Casey Charvet from Gigcasters, Rob Roskin from CenturyLink and Haivision VP Engineering Marc Cymontkowski. This wide-ranging discussion covers CDN caching, QUIC and HTTP/3, encoder settings, segmented vs. non-segmented streaming, and ingest and distribution protocols.

Key to the discussion is differentiating the ingest protocol from the distribution protocol. Often, just getting the content into the cloud quickly is enough to bring the latency into the customer’s budget. Marc from Haivision explains how SRT achieves low-latency contribution. SRT allows unreliable networks like the Internet to be used for reliable, encrypted video contribution. Created by Haivision and now an open-source technology with an IETF draft spec, SRT continues to develop and add features, and the alliance of its users continues to grow. SRT is a ‘re-request’ technology, meaning it achieves its reliability by re-requesting from the encoder any data it missed. This is in contrast to TCP/IP, which acknowledges every single packet of data and resends anything whose acknowledgement doesn’t arrive. Doing it the SRT way makes the protocol much more efficient and better able to cope with real-time media. SRT can also encrypt all traffic which, when sending over the internet, is extremely important even if you’re not sending live sports. In this video, Marc makes the point that SRT also recovers the timing of the stream so that the data comes out of the SRT pipe in the same ‘shape’ as it went in. Particularly with VBR encoding, your decoder needs to receive the same peaks and troughs as the encoder created, saving it from having to buffer the input as much. With all this included, SRT manages to deliver a transport latency of around 2.5 times the round trip time.
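To make that round-trip-time relationship concrete, here’s a minimal sketch – not taken from the video – of how the 2.5x figure might be turned into an SRT latency setting using srt-live-transmit, the reference tool that ships with the open-source SRT package. The hostnames, ports and RTT value are purely illustrative.

```python
import subprocess

rtt_ms = 80                          # measured round-trip time to the receiver
srt_latency_ms = int(2.5 * rtt_ms)   # latency budget, per the figure quoted in the talk

# Take an MPEG-TS stream arriving on local UDP port 1234 and push it over SRT
# (caller mode) to a receiver, allowing srt_latency_ms for any re-requests.
subprocess.run([
    "srt-live-transmit",
    "udp://:1234",                                               # local UDP/TS input
    f"srt://ingest.example.com:9000?latency={srt_latency_ms}",   # SRT output, latency in ms
], check=True)
```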

Haivision are members of RIST, which is a similar technology to SRT. Marc explains that RIST is approaching the problem from a standards perspective, taking IETF RFCs and applying them to RTP. SRT took a more pragmatic way forward by creating an implementation of the features and making it open source for interoperability.

The video finishes with a Q&A covering HTTP Header compression, recommended size of HLS chunks, peer-to-peer streaming and latency requirements for VoD.

Watch now!
Speakers

Rob Roskin
Principal Solutions Architect,
Level3 Communications
Marc Cymontkowski
VP Engineering – Cloud,
Haivision
Casey Charvet
Managing Director,
Gigcasters
Jason Thibeault
Executive Director,
Streaming Media Alliance

Video: How to Build an SRT Streaming Flow from Encoder to Edge

SRT is an enabler for contribution over the internet – whether point to point, or cloud egress/ingress. In recent weeks here on The Broadcast Knowledge we have seen different takes on how SRT, short for Secure Reliable Transport, and RIST can be used, including from Open Broadcast Systems.

Here, Karel Boek, CEO of Norwegian streaming consultancy Raskenlund, explains SRT and builds a workflow as a live demo, showing how you can implement it quickly and easily. He starts by explaining where SRT sits and what it’s for. SRT makes contribution over the internet possible because it has a very light-touch way of recovering missing packets, which are inevitable on internet links. Karel covers Haivision’s creation of SRT and the SRT Alliance that has grown out of it, which now boasts 350 members. The protocol being open source – and now an IETF Draft – means that a lot of companies have been happy to adopt it. There are frequent plugfests – one has just concluded – where vendors test compatibility with the increasing set of features offered in SRT.

‘Secure’ is the ‘S’ in SRT’s name because the stream can be easily encrypted as part of the protocol. This is an important aspect in enabling sports and enterprise contribution via the cloud, giving the assurance that no-one can watch the feed before it gets to its destination.
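As an illustration – and not anything shown in the talk – here’s how that encryption is typically switched on with the open-source srt-live-transmit tool: a pre-shared passphrase is set on both ends and the handshake fails if they don’t match. Hostnames and the passphrase itself are hypothetical.

```python
import subprocess

passphrase = "correct-horse-battery"   # 10-79 characters, shared out-of-band

# Sender: encrypt the stream with AES-128 (pbkeylen=16); 24 or 32 selects AES-192/256.
sender = subprocess.Popen([
    "srt-live-transmit",
    "udp://:1234",
    f"srt://receiver.example.com:9000?passphrase={passphrase}&pbkeylen=16",
])

# Receiver: must present the same passphrase or the connection is rejected.
receiver = subprocess.Popen([
    "srt-live-transmit",
    f"srt://:9000?passphrase={passphrase}",
    "udp://127.0.0.1:5678",
])
```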

‘Reliable’ is the key offer from SRT, as unreliability is the number one problem with the internet and other networks: not all packets get delivered. TCP/IP is a great protocol on which most webpages are delivered. It’s fantastic for file delivery since every single packet gets acknowledged, so a file simply can’t arrive at the other end without being completely intact. Live streams can’t afford the overhead of counting in and counting out every packet, so SRT’s ability to re-request only the missing packets is very important. It should be noted that this ARQ-style ability is also found in Zixi and RIST.
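To illustrate why re-requesting only the gaps is so much lighter than acknowledging everything, here’s a toy Python sketch – it isn’t SRT code, just a simulation of the idea: the receiver notices which sequence numbers never arrived and asks for those alone.

```python
import random

def send_with_rerequests(num_packets: int, loss_rate: float = 0.05) -> int:
    """Return how many control messages were needed to get every packet across."""
    delivered = set()
    control_messages = 0
    outstanding = list(range(num_packets))            # packets still to (re)send

    while outstanding:
        # Simulate a lossy link: each packet survives with probability 1 - loss_rate.
        delivered.update(seq for seq in outstanding if random.random() > loss_rate)

        # The receiver sends a single re-request (NAK) listing the gaps it noticed.
        outstanding = [seq for seq in range(num_packets) if seq not in delivered]
        if outstanding:
            control_messages += 1                     # one NAK per round, not one ACK per packet

    return control_messages

if __name__ == "__main__":
    naks = send_with_rerequests(10_000)
    print(f"10,000 packets delivered with only {naks} re-request messages")
    print("An acknowledge-every-packet scheme would have sent 10,000 ACKs")
```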

Karel compares SRT with other protocols including RTMP and MPEG-2 Transport Streams, amongst others. He is careful to separate out HLS, MPEG-DASH and WebRTC as ‘last-mile protocols’, differentiating between the protocols content providers use to move video around as part of production and those used for distribution. RTMP’s use is still notable but diminishing, particularly in the European and American markets. MPEG-TS over UDP, though, is still the best way to deliver within a building. Outside the building, you would then want to protect it at least with FEC or SMPTE 2022-7 or, better, with a protocol such as RIST or SRT. Karel mentions the RIST Simple Profile which, by design, omits certain features he notes as missing; we’ve heard here on The Broadcast Knowledge that these have since been delivered, as planned, in the Main Profile.

In the final part of this talk, Karel builds, live, an example workflow which combines Wowza and SRTHub end to end. This is a great way of demonstrating how quickly you can create a workflow with SRT. There are plenty of SRT-enabled encoders and senders, which is one of the ways we can judge the success of the SRT Alliance. Similarly, whilst Haivision’s SRTHub is a useful product which brings things together in the cloud or on-prem, Techex’s MWEdge and Videoflow’s DVG can do similar or more, each with their own advantages.
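For readers who want to try the same shape of workflow without Wowza or SRTHub, here’s a minimal stand-in built only with the open-source srt-live-transmit tool: a gateway listens for the incoming SRT stream and hands it on locally, while the venue end wraps the encoder’s UDP output in SRT and pushes it up. Hostnames, ports and the latency value are hypothetical.

```python
import subprocess

# On the gateway (cloud or on-prem): listen for the incoming SRT stream and
# forward it to a local UDP port for the packager or playout system to read.
gateway = subprocess.Popen([
    "srt-live-transmit",
    "srt://:9000?latency=200",          # no host given, so it waits to be called
    "udp://127.0.0.1:5000",
])

# At the venue: take the encoder's UDP/TS output and push it (caller mode)
# to the gateway's public address.
contribution = subprocess.Popen([
    "srt-live-transmit",
    "udp://:1234",
    "srt://gateway.example.com:9000?latency=200",
])

gateway.wait()
contribution.wait()
```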

Overall, the takeaway from this talk from Raskenlund is that internet contribution is sorted; it’s now for you to choose how to do it and with whom. To that end, the talk ends with a Q&A from people wondering exactly that.

Watch now!
Speaker

Karel Boek
CEO,
Raskenlund

Video: RIST: Enabling Remote Work with Reliable Live Video Over Unmanaged Networks

Last week’s article on RIST, here on The Broadcast Knowledge, stirred up some interest over whether we view RIST as a rival to SRT & Zixi, or as an evolution of them. Whilst that talk covered the use of RIST and the reasons one company chose to use it, this talk explains what RIST achieves in terms of features, showing that it has ‘Simple’ and ‘Main’ profiles which bring different capabilities to the table.

Rick Ackermans is the chair of the RIST Activity Group, the group that develops the specifications. Rick explains some of the reasons motivating people to look at the internet and other unmanaged networks for moving their video. The traditional circuit-based contribution and distribution infrastructure on which broadcasting has relied has high fixed costs. Whilst this can be fully justifiable for transmitter links, albeit still expensive, for other ad-hoc circuits you are paying all the time for something which is only occasionally used. Meanwhile, satellite space in the C-band is shrinking, squeezing people out. And, of course, remote working is much in the spotlight, so technologies like RIST which don’t have high latency (unlike HLS) are in demand.

RIST manages to solve many of the problems with using the internet, such as protecting your content from theft and from packet loss. It’s a joint effort between many companies including Zixi and Haivision. The aim is to create choice in the market by removing vendor bias and control; vendors are more likely to implement an open specification than one which has ties to another vendor, so this should open up the market, creating more demand for this type of solution.

In the next section, we see how RIST as a group is organised and how it fits into the Video Services Forum (VSF). We then look at the profiles available in RIST. A full implementation aims to be a three-layer onion with the ‘Simple Profile’ at its core. This has basic network resilience and interoperability. On top of that, the ‘Main Profile’ is built, adding encryption, authentication and other features. The future will see an ‘Enhanced Profile’ which may bring with it channel management.

Rick then dives down into each of these profiles to uncover the details of what’s there and to explain the publication status. The Simple Profile allows full RTP interoperability for use as a standard sender, but also adds packet recovery plus seamless switching. The Main Profile introduces the use of GRE tunnels, where a single connection is set up between two devices. Like a cable, multiple signals can then be sent down it together. From an IT perspective this makes life much easier, as the number of streams is invisible to the network, so firewall configuration, for example, is made all the simpler. It also means that by just running encryption on the tunnel, everything is encrypted with no further complexity. Encryption works better on higher-bitrate streams so, again, running it on the aggregate has a benefit over running it on each stream individually. Rick talks about the encryption modes, with DTLS and pre-shared key being available, as well as the all-important, but often neglected, step of authenticating – ensuring you are sending to the endpoint you expected to be sending to.
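The tunnelling idea is easier to picture with a toy example. The sketch below is not the RIST wire format – it simply shows the principle Rick describes: several streams share one connection by being prefixed with a stream ID, and encryption is applied once to the aggregate rather than per stream (the XOR here is only a placeholder for the DTLS or pre-shared-key encryption the Main Profile actually uses).

```python
import struct

def mux_into_tunnel(packets_by_stream: dict) -> list:
    """Wrap each packet with a stream-ID header so many streams share one tunnel."""
    tunnel_payloads = []
    for stream_id, packet in packets_by_stream.items():
        header = struct.pack("!HI", stream_id, len(packet))   # stream ID + payload length
        tunnel_payloads.append(header + packet)
    return tunnel_payloads

def encrypt_for_tunnel(payload: bytes, key: bytes) -> bytes:
    """Placeholder for the single encryption step on the tunnel; RIST Main Profile
    uses DTLS or a pre-shared key here, not XOR."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

# Three different feeds go down the same tunnel: the firewall only ever sees
# one connection, and one key protects everything inside it.
streams = {1: b"camera one TS packet", 2: b"camera two TS packet", 3: b"return feed"}
for payload in mux_into_tunnel(streams):
    wire_bytes = encrypt_for_tunnel(payload, key=b"pre-shared-key")
    print(len(wire_bytes), "bytes ready to send over the single tunnel connection")
```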

The last part of the talk covers interoperability, including a comparison between RIST and SRT. Whilst there are many similarities, Rick claims RIST can cope with higher percentages of packet loss. He also says that 2022-7 doesn’t work with SRT, though The Broadcast Knowledge is aware of interoperable implementations which do allow 2022-7 to work over SRT. The climax of this section is an explanation of the setup of the RIST NAB demo, a multi-vendor, international demo which proved the reliability claims. Rick finishes by examining some case studies and with a Q&A.

Watch now!
Speakers

Rick Ackermans
MVA Broadcast Consulting
RIST Activity Group Chair

Video: SRT – The Simple Solution for Your Remote and At-Home Workforce

SRT allows unreliable networks like the Internet to be used for reliable, encrypted video contribution. Created by Haivision and now an open-source technology with an IETF draft spec, SRT continues to develop and add features, and the alliance of its users continues to grow. Haivision are members of RIST, which Kieran Kunhya spoke about in yesterday’s article.

Being open source, SRT is widely deployed across hundreds of manufacturers, so there is a lot of choice, although Haivision do focus on their own products in this video. The important part is how the protocol works to keep the data intact, which is dealt with in the second segment by Haivision’s Selwyn Jans. Lastly, we hear some examples of real-world use cases to whet the appetite and start the thought process about how SRT could benefit you.

The fundamental aspect of SRT, as Selwyn explains, is that the packets are counted in at the receiving end and, if one is missing, it’s re-requested from the source. Whilst this is how normal file transfers work, using TCP, SRT has been optimised to ensure real-time media isn’t unduly delayed. TCP would acknowledge every single packet, with the sender taking note when a packet’s acknowledgement doesn’t arrive. SRT is more efficient: no acknowledgements are sent, only re-requests, which keeps the amount of administrative traffic down. A buffer is set up at the destination so that there is still data available while we’re waiting for these packets to be resent. Depending on the network quality, we may need enough buffer to deal with several re-requests for the same packet.
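As a back-of-envelope illustration (not a formula from the video), the buffer has to be deep enough to cover roughly one round trip per re-request attempt for the same packet, so poorer networks need a bigger buffer:

```python
def minimum_buffer_ms(rtt_ms: float, expected_rerequests: int, jitter_ms: float = 10) -> float:
    """Rough model: one RTT to notice the gap and get the first resend,
    plus one more RTT for each further re-request, plus some jitter headroom."""
    return (1 + expected_rerequests) * rtt_ms + jitter_ms

# A clean link where one resend is usually enough vs. a poor link needing three attempts.
print(minimum_buffer_ms(rtt_ms=80, expected_rerequests=1))   # 170.0 ms
print(minimum_buffer_ms(rtt_ms=80, expected_rerequests=3))   # 330.0 ms
```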

How SRT Works

Selwyn expands upon this re-request mechanism and looks at the way SRT can be sent, or ‘pushed’, as well as working as a ‘listener’ so that the sender waits to be contacted before it starts sending any data. You can choose the best one to use to fit around your firewalls. Where there is a NAT firewall, SRT can always be sent outbound, but accepting incoming connections would need firewall modification. One of the benefits of SRT is its ability to be deployed anywhere, including in a home, quickly and easily, so firewall changes would not be welcome. For a more in-depth description of SRT, check out this talk from SF Video Technology.
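Here’s a quick sketch of those two modes, again using the open-source srt-live-transmit URI syntax rather than anything shown in the video: whichever end sits behind a NAT firewall is usually made the caller, since only outbound traffic is then needed. Hosts and ports are hypothetical.

```python
import subprocess

# Listener: waits to be contacted before any data flows. This end needs its
# firewall to accept inbound UDP on the chosen port.
listener = subprocess.Popen([
    "srt-live-transmit",
    "srt://:9000?mode=listener",
    "udp://127.0.0.1:5000",
])

# Caller: initiates the connection outbound, so it typically works from a home
# or office network without any firewall changes at all.
caller = subprocess.Popen([
    "srt-live-transmit",
    "udp://:1234",
    "srt://gateway.example.com:9000?mode=caller",
])
```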

The last section features Corey Behnke from streaming company Live X talking about where they have been using SRT. Replacing satellite is one important use of SRT since, in many places, there is sufficient bandwidth available to stream over the internet. Before technologies such as SRT, this was likely to lead to break-ups on air, so satellite was the clear winner. Now, there’s money to be saved by not buying satellite space. Cloud ingress and egress is also a very important workflow for SRT and similar protocols. The panelists explain how this works using the Haivision Media Gateway as an example, though products from the likes of Techex and Videoflow can do the same.

Watch now!
Speakers

Marcus Schioler
Vice President, Product Marketing
Haivision
Selwyn Jans
Technical Video Engineer,
Haivision
Corey Behnke
Producer & Co-Founder,
Live X