SRT – How the hot new UDP video protocol actually works under the hood

It’s been a great year at The Broadcast Knowledge growing to over four thousand followers on social media and packing in 250 new articles. So what better time to look back at 2020’s most popular articles as we head into the new year?

It’s fair to say that SRT has seen a lot of interest this year. This was always going to be the case as top-tier broadcasters are now adopting an ‘infrastructure as code’ approach, whereby transmission chains, post-production and live-production workflows are generated via APIs in the cloud, ready for temporary or permanent use. Once seen simply as the place to put your streaming service, the cloud is increasingly viewed as a viable option for nearly all parts of the production chain.

Getting video in and out of the cloud can be done without SRT, but SRT is a great option as it seamlessly corrects for packets which get lost en route. How it does this is the topic of this talk from Alex Converse of Twitch. In the original article on this site, one of the highest-ranking this year, it’s also pitched as an RTMP replacement.
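At its heart, SRT’s recovery mechanism is ARQ (automatic repeat request): the receiver watches sequence numbers, reports any gap back to the sender as a NAK, and the sender retransmits from a buffer of recent packets, all inside a configured latency window. Below is a minimal sketch of the receiver side of that idea, assuming plain integer sequence numbers rather than SRT’s real packet format:

```python
# A minimal sketch of SRT-style loss recovery (ARQ). This is not the real
# libsrt API or wire format: sequence numbers are plain integers and the
# latency window is ignored. The receiver spots gaps in the sequence and
# asks the sender to retransmit them (a NAK); the sender keeps a buffer of
# recent packets so it can re-send.

class SrtLikeReceiver:
    def __init__(self):
        self.expected_seq = 0   # next sequence number due for delivery
        self.pending = {}       # out-of-order packets waiting for a gap to fill

    def on_packet(self, seq, payload, send_nak):
        """Handle one arriving packet; send_nak(missing) asks for retransmits."""
        if seq > self.expected_seq:
            # Report only the numbers we have neither delivered nor buffered.
            missing = [s for s in range(self.expected_seq, seq)
                       if s not in self.pending]
            if missing:
                send_nak(missing)
        self.pending[seq] = payload
        # Release packets to the application strictly in order.
        delivered = []
        while self.expected_seq in self.pending:
            delivered.append(self.pending.pop(self.expected_seq))
            self.expected_seq += 1
        return delivered
```

Real SRT bounds this with the receiver latency window: a packet that cannot be recovered in time is abandoned and playback moves on, trading completeness for a fixed, predictable delay.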

RTMP is still heavily used around the world and, like many established technologies, there’s an element of ‘better the devil you know’ mixed in with the reality that much equipment out there will never be updated to do anything else. However, new equipment is being delivered with technologies such as SRT, which means that getting from your encoder to the cloud can now be done with less latency, better reliability and a wider choice of codecs than RTMP.

SRT, along with RIST, is helping transform the broadcast industry. To learn more, watch Alex’s video and then look at our other articles and videos on the topic.

Speaker

Alex Converse
Streaming Video Software Engineer,
Twitch

Video: Proper Network Designs and Considerations for SMPTE ST-2110

Networks for SMPTE ST 2110 systems can be fairly simple, but that simplicity hides a whole heap of careful considerations. By asking the right questions at the outset, a flexible, scalable network can be built with relative ease.

“No two networks are the same,” cautions Robert Welch from Arista as he introduces the questions he asks at the beginning of designing a network to carry professional media such as uncompressed audio and video. His thinking focusses on the network interfaces (NICs) of the devices: How many are there? Which receive PTP? Which are for management, and how do you want out-of-band/iLO access managed? All of these answers then feed into the workflows that are needed, influencing how the rest of the network is created. The philosophy is to work backwards from the end-nodes that receive the network traffic.

Robert then shows how these answers influence the different networks at play. For resilience, it’s common to have two separate networks at work sending the same media to each end node. Each node then uses ST 2022-7 to find the packets it needs from both networks. This isn’t always possible as there are some devices which only have one interface or simply don’t have -7 support. Sometimes equipment has two management interfaces, so that can feed into the network design.
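The mechanics of ST 2022-7 are easy to picture: both networks carry identical RTP packets, and the receiver keeps whichever copy of each sequence number arrives first. Here is a toy sketch of that merge, assuming packets are simple (sequence number, payload) pairs rather than real RTP:

```python
# Rough sketch of ST 2022-7 'seamless protection switching': the same RTP
# stream arrives over two networks, and the receiver takes the first copy of
# each sequence number it sees, from whichever network delivers it first.

def seamless_merge(red_stream, blue_stream):
    """Merge two iterables of (rtp_seq, payload) tuples, dropping duplicates."""
    seen = set()
    for seq, payload in interleave(red_stream, blue_stream):
        if seq not in seen:        # first copy wins, the duplicate is discarded
            seen.add(seq)
            yield seq, payload

def interleave(a, b):
    """Naive stand-in for packets arriving on two NICs at about the same time."""
    from itertools import zip_longest
    for pair in zip_longest(a, b):
        for pkt in pair:
            if pkt is not None:
                yield pkt
```

A real implementation bounds the set of seen sequence numbers to a sliding window and aligns the two feeds in time, but the first-copy-wins principle is the same.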

PTP is an essential service for professional media networks, so Robert discusses some aspects of implementation. When you have two networks delivering the same media simultaneously, they will both need PTP. For resilience, a network should operate with at least two grandmasters – and usually, two is the best number. Ideally, your two media networks will have no connection between them except for PTP, whereby the amber network can benefit from the PTP from the blue network’s grandmaster. Robert explains how to make this link a pure PTP-only link, stopping it from leaking other information between networks.
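With two grandmasters visible across the linked networks, every device still ends up following the same one, because PTP’s Best Master Clock Algorithm compares the announced clock properties in a fixed order. A simplified sketch of that comparison, with hypothetical example values:

```python
# Simplified sketch of PTP's Best Master Clock Algorithm (BMCA): announce
# messages are compared field by field, lower values winning at each step,
# so every device that can see both grandmasters elects the same one.
def bmca_key(announce):
    return (announce["priority1"], announce["clock_class"],
            announce["clock_accuracy"], announce["variance"],
            announce["priority2"], announce["clock_identity"])

grandmasters = [
    {"name": "GM-blue",  "priority1": 128, "clock_class": 6,
     "clock_accuracy": 0x21, "variance": 0x4E5D, "priority2": 128,
     "clock_identity": "00:0b:00:ff:fe:00:00:01"},
    {"name": "GM-amber", "priority1": 128, "clock_class": 6,
     "clock_accuracy": 0x21, "variance": 0x4E5D, "priority2": 128,
     "clock_identity": "00:0b:00:ff:fe:00:00:02"},
]

best = min(grandmasters, key=bmca_key)   # lower wins at every step
print(best["name"])   # -> GM-blue, decided on the clock-identity tie-break
```

Because the comparison is deterministic, the losing grandmaster simply sits as a hot standby, ready to win the next election if the active one disappears.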

Multicast is a vital technology for 2110 media production, so Robert looks at its incarnation at both layer 2 and layer 3. With layer 2, multicast is handled using multicast MAC addresses. It works well with snooping and a querier, except when it comes to scaling up to a large network or using a number of switches. Robert explains that this is because all multicast traffic needs to be sent through the rendezvous point. If you would like more detail on this, check out Arista’s Gerard Phillips’ talk on network architecture.

Looking at JT-NM TR-1001, the guidelines outlining best practices for deploying 2110 and associated technologies, Robert explains that multicast routing at layer 3 greatly increases stability and enables resilience and scalability. He also takes a close look at the difference between the ‘any-source’ multicast supported by IGMP version 2 and the ability to filter for only specific sources using IGMP version 3.
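That difference shows up right down at the socket level. Here is a sketch of the two kinds of join on Linux, with hypothetical group and source addresses; the source-specific variant is what IGMPv3 adds:

```python
# Sketch of the two kinds of multicast join at the socket level on Linux.
# The group and source addresses are hypothetical; a real ST 2110 receiver
# would join whatever its SDP file advertises.
import socket
import struct

GROUP = "239.1.2.3"        # example multicast group
SOURCE = "192.168.10.50"   # example trusted sender (SSM only)
IFACE = "0.0.0.0"          # let the kernel pick the interface
PORT = 5004

def asm_join():
    """IGMPv2-style any-source join: traffic from *every* sender to the group."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton(IFACE))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

def ssm_join():
    """IGMPv3 source-specific join: the group, but only from one named source."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    # 39 is IP_ADD_SOURCE_MEMBERSHIP on Linux; the field order below matches
    # Linux's struct ip_mreq_source: group, interface, source.
    opt = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)
    mreq = struct.pack("4s4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton(IFACE), socket.inet_aton(SOURCE))
    s.setsockopt(socket.IPPROTO_IP, opt, mreq)
    return s
```

Filtering by source means a receiver is never flooded by senders it didn’t ask for, and it lets the network forward traffic on a (source, group) basis rather than on the group alone.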

Finishing off, Robert talks about the difficulties in scaling PTP since all the replies/requests go into the same multicast group which means that as the network scales, so does the traffic on that multicast group. This can be a problem for lower-end gear which needs to process and reject a lot of traffic.
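A back-of-envelope calculation, with purely illustrative numbers rather than figures from the talk, shows why this bites:

```python
# Back-of-envelope: why PTP delay traffic grows with the network. With
# end-to-end PTP over multicast, every device multicasts its Delay_Req
# messages and the grandmaster multicasts a Delay_Resp for each one, so
# every member of the group sees traffic from every other member.

devices = 500            # hypothetical device count
delay_req_rate = 8       # Delay_Req per second per device (example rate)

msgs_per_sec = devices * delay_req_rate * 2   # each request has a response
print(f"{msgs_per_sec} delay messages/s seen by every member of the group")
# -> 8000 messages/s that even a small end-node must receive and mostly discard
```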

Watch now!
Speaker

Robert Welch
Technical Solutions Lead
Arista Networks

Video: Remote Production in Real Life

Remote production is in heightened demand at the moment, but the trend has been ongoing for several years. With each small advance in technology, it becomes practical for another event to go remote. Remote production solutions have to be extremely flexible, as a remote production workflow for one company won’t work for the next. This is why the move to remote has been gradual over the last decade.

In this RAVENNA webinar with evangelist Andreas Hildebrand, Dirk Sykora from Lawo gives three examples of remote production projects stretching from 2016 to the present day.

The first case study is remote production for Belgian second division football. Working with Belgian telco Proximus along with Videohouse and NEP, Lawo set up remote production for stadia kitted out with 6 cameras and 2 commentary positions. With only 1 Gbit/s connectivity to each stadium, they opted for JPEG 2000 encoding at 100 Mbps, both for the camera feeds out of the stadia and for the two return feeds back in for the commentators.
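A quick budget, using the figures quoted in the talk, shows how those feeds fit the link:

```python
# Link budget for one stadium (figures from the talk: 6 cameras and 2 return
# feeds, each JPEG 2000 at 100 Mbps, over a 1 Gbit/s connection).
cameras, returns, mbps_per_feed, link_mbps = 6, 2, 100, 1000

outbound = cameras * mbps_per_feed   # 600 Mbps out of the stadium
inbound = returns * mbps_per_feed    # 200 Mbps back in
print(f"out: {outbound} Mbps, in: {inbound} Mbps, "
      f"worst direction uses {outbound / link_mbps:.0%} of the link")
# -> the 600 Mbps outbound leg leaves ~400 Mbps of headroom on the 1 Gbps link
```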

The project called for two simultaneous matches feeding into an existing gallery/PCR. Deployment was swift with flightcases deployed remotely and a double set of equipment being installed into the fixed PCR. Overall latency was around 2.5 frames one-way, so the camera viewfinders were about 5 frames adrift once transport and en/decoding delay were accounted for.

The main challenges were with the MPLS network into the stadia, which would spontaneously reroute and become loaded with unrelated traffic at around 21:00. Although there was packet loss, none of it was noticeable on the 100 Mbps J2K feeds. Latency for the commentators was a problem, so some local mixing was needed, and lastly, PTP wasn’t possible over the network. Timing was, therefore, derived from the return video feed into the stadium, which had come from the PTP-locked gallery. This incoming timing was used to lock a locally generated PTP signal.

The next case study is inter-country links for the European Council, connecting its Luxembourg and Brussels buildings. The project was to move all production to a single tech control room in Brussels, and relied on two 10GbE links between the buildings going through an Arista 7280, carrying 18 videos in one direction and two in return. Although initially reluctant to compress, the Council realised after testing that VC-2, which offers around 4x compression, would work well and deliver no noticeable latency (approximately 20 ms end to end). Thanks to using VC-2, the project made low use of the 10Gig links and the Council was able to migrate other business activities onto them. PTP was generated in Brussels, and Luxembourg regenerated its PTP from the Brussels signal for local distribution. Overall latency was 1 frame.
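As a sanity check on those quoted figures, here is what roughly 20 ms of codec latency means in frames; the talk doesn’t state the production frame rate, so both common European rates are shown:

```python
# What does ~20 ms of VC-2 latency mean in frames? The production frame rate
# isn't stated in the talk, so both common European rates are shown.
codec_ms = 20
for fps in (25, 50):
    frames = codec_ms / (1000 / fps)
    print(f"at {fps} fps, {codec_ms} ms = {frames:.1f} frame(s)")
# -> 0.5 frames at 25 fps, 1.0 frame at 50 fps; either way the codec fits
#    within the overall one-frame latency quoted for the whole chain.
```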

Lastly, Dirk outlines the work done for the Belgium Daily News, which had been bought out by DPG Media. This buy-out prompted a move from Brussels to Antwerp, where a new building opened; however, all of the technical equipment remained in Brussels. This led to the decision to remote-control everything in Brussels from Antwerp. The production staff moved to Antwerp, causing some issues with the disconnect between production and technical staff, but also due to personnel relocating and getting used to new facilities.

The two locations were connected with redundant 400GbE infrastructure using IP<->SDI gateways. Latency was 1 frame and, again, PTP at one site was derived from the incoming PTP from the other.

The video finishes with a detailed Q&A.

Watch now!
Speakers

Dirk Sykora
Technical Sales Manager,
Lawo
Andreas Hildebrand
RAVENNA Evangelist,
ALC NetworX

Video: Broadcast in the cloud!

Milan Video Tech’s back with three takes on putting broadcast into the cloud. So often we see the cloud as ‘for streaming’. That’s not today’s topic; we’re talking ingest and live transmissions in the cloud. Andrea Fassina from videodeveloper.io introduces the three speakers, who share their tips for doing cloud well: using KPIs; using the cloud to be efficient, agile and to scale; and, finally, running your live linear channels through the cloud as part of their transmission path.

First up is Christopher Brähler from SDVI, who looks at how they helped a customer become more efficient, agile and able to scale. His first example shows how, using a cloud workflow in AWS that included many AWS services such as Lambda, the customer was able to reduce human interaction with a piece of content during ingest by 80%. The problem was that every piece of content took two hours to ingest, mainly due to people having to watch for problems. Christopher shows how this process was automated. He highlights some easy wins from front-loading the process with MediaInfo, which could easily detect obvious problems like incorrect duration, codec etc. Christopher then shows how the rest of the workflow used AWS components and Lambda to transcode or rewrap files where needed and then pass them on to a full QC process. The reduction was profound and, whilst this could have been achieved with similar MAM-style processing on-premise, being in the cloud allows the next two benefits.
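As a flavour of what such front-loaded checks can look like, here is a small sketch using the pymediainfo wrapper around MediaInfo; the thresholds and codec list are illustrative, not the customer’s actual rules:

```python
# Sketch of 'front-loaded' ingest checks using MediaInfo via the pymediainfo
# wrapper. The tolerance and allowed codecs are illustrative examples only.
from pymediainfo import MediaInfo

def quick_ingest_checks(path, expected_duration_s, allowed_codecs=("ProRes", "AVC")):
    """Return a list of obvious problems, found without a human watching the file."""
    problems = []
    info = MediaInfo.parse(path)
    for track in info.tracks:
        if track.track_type == "General" and track.duration:
            duration_s = float(track.duration) / 1000.0  # MediaInfo reports ms
            if abs(duration_s - expected_duration_s) > 1.0:
                problems.append(
                    f"duration {duration_s:.1f}s != expected {expected_duration_s}s")
        if track.track_type == "Video" and track.format not in allowed_codecs:
            problems.append(f"unexpected video codec: {track.format}")
    return problems
```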

The next example is how the same customer was able to quickly adjust to a new demand on the workflow when they found that some arriving files weren’t compatible with their ingest process, due to a bug in a certain vendor’s software that was going to take months to fix. Using the same workflow, they were able to branch out, using MediaInfo to determine whether that vendor’s software was involved. If it was, the file would be sent down a newly-created path in the workflow that worked around the problem. The benefit of this being in the cloud touches on the third example – scalability. Being in the cloud, it didn’t really matter how much or how little this new branch was used. When it wasn’t being used, it cost nothing; if it was needed a lot, it would scale up.
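Continuing the sketch above, the branch decision itself can be as simple as matching on metadata MediaInfo has already extracted; the vendor string here is a placeholder, not the real product:

```python
# Sketch of the workflow branch: route files from the affected encoder down a
# workaround path. 'BuggyVendorEncoder' is a placeholder, not a real product;
# real code would match on MediaInfo's writing_application field.
def choose_branch(info):
    general = next(t for t in info.tracks if t.track_type == "General")
    writing_app = general.writing_application or ""
    if writing_app.startswith("BuggyVendorEncoder"):
        return "rewrap-workaround"   # the newly created path
    return "standard-ingest"
```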

The third example is when this customer merged with another large broadcaster. The cloud-based workflow meant that they were able to scale up easily and put a massive library of content through ingest in a matter of two or three months, rather than the year or more it would otherwise have taken on dedicated equipment.

Next up is Luca Moglia from Akamai, who shares his experience of getting great value out of cloud infrastructure. Security should be the basis of any project, whether it’s on the internet or not, so it’s no surprise that Luca starts with the mandate to ‘secure all connections’. Whilst he focuses on the streaming use case, his points can be generalised to programme contribution. He splits the chain into the ‘first mile’ (origin/DC to cloud/CDN), the ‘middle mile’ (cloud/CDN to edge) and the ‘last mile’, which is the delivery from the edge to the viewer. Luca looks at options to secure these segments, such as AWS Direct Connect and the equivalent services for Azure and GCP. He looks at using private network interconnects (PNIs) for CDNs and then examines options for the last mile.

His other pieces of advice are to offload as much from the origin as you can, meaning to reduce the load on your origin server by using an origin gateway and a multi-CDN strategy. Similarly, he suggests offloading as much logic to the edge as is practical. After all, the viewer’s round-trip time (RTT) to the edge is the lowest practically achievable, so two-way traffic is better handled there than deeper into the CDN, as the edge is usually within the viewer’s ISP.

Another plea is to remember that CMAF is not just there to reduce latency. Luca emphasises all the other benefits, which matter beyond low-latency use cases, such as being able to use the same segments for delivering HLS and DASH streams. Sharing the same segments allows CDNs to cache better, which is a win for everyone. It also reduces storage costs and brings all DRM under CENC, a single mechanism supporting several different DRM methods.
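To make the caching point concrete, here is a toy sketch of one set of CMAF segments referenced from both packaging formats, with hypothetical file names and both manifests trimmed to the bare minimum:

```python
# One set of CMAF (fragmented MP4) segments, addressed by both an HLS playlist
# and a DASH SegmentTemplate. File names are hypothetical and both manifests
# are trimmed to the bare minimum needed to show the idea.
segments = [f"video_{i:03d}.m4s" for i in range(1, 4)]

hls = "\n".join(
    ["#EXTM3U", "#EXT-X-VERSION:7", '#EXT-X-MAP:URI="init.mp4"']
    + [line for seg in segments for line in ("#EXTINF:4.0,", seg)]
)

# DASH addresses the very same files through a number-based template.
dash = ('<SegmentTemplate initialization="init.mp4" '
        'media="video_$Number%03d$.m4s" duration="4"/>')

print(hls)
print(dash)
```

Because HLS and DASH clients end up requesting identical segment URLs, each segment occupies one cache slot instead of two.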

Luca finishes his presentation by suggesting a look at the benefits of HTTP/2 and HTTP/3 in reducing round trips and, in theory, speeding up delivery. Similarly, he talks about the TCP congestion-control algorithm BBR, which should improve throughput.

Last to speak is Davide Maggioni from Sky Italia, who shows us how they quickly transitioned to a cloud workflow for NOWTV and SKYGO when asked to move to HD, maintain costs and make the transition quickly. They developed a plan to move metadata enrichment, encryption, encoding and DRM into the cloud. This helped them reduce procurement overhead and allowed them to reduce deployment time.

Key to the project was taking an ‘infrastructure as code’ approach, whereby everything is configured by API and run from automated code. This reduces mistakes, increases repeatability and also made it easier to deploy pop-up channels.
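As an illustration of what ‘configured by API, run from automated code’ can mean, here is a hedged sketch of a pop-up channel described as data and created with a single call; the endpoint, payload fields and profile names are hypothetical, not Sky Italia’s actual system:

```python
# Hedged sketch of the 'infrastructure as code' idea: a pop-up channel
# described as data and created through an API. The endpoint and every field
# in the payload are hypothetical, not Sky Italia's actual system.
import requests

POPUP_CHANNEL = {
    "name": "popup-sport-01",
    "input": "srt://mezzanine.example.com:9000",  # on-prem mezzanine feed
    "profiles": ["hd-8mbps", "sd-2mbps"],
    "drm": "cenc",
    "expires": "2020-08-01T00:00:00Z",            # temporary by design
}

def deploy_channel(api_base, token, spec):
    """Create a channel from a declarative spec, failing loudly on errors."""
    r = requests.post(f"{api_base}/channels", json=spec,
                      headers={"Authorization": f"Bearer {token}"},
                      timeout=30)
    r.raise_for_status()   # automation should never hide failures
    return r.json()["channel_id"]
```

Because the whole channel lives in a version-controlled spec, tearing it down or redeploying it identically next season is one more API call rather than a manual build.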

Davide takes us through the diagrams and ways in which they are able to deploy permanent and temporary channels, showing ‘mezzanine’ encoding on-premise, manipulation done in the cloud, and then a return on-premise ahead of transmission to the CDN.

Watch now!
Speakers

Christopher Brähler
Director of Product Management,
SDVI Corporation
Davide Maggioni
OTT & Cloud Process and Delivery,
Sky Italia
Luca Moglia
Media Solutions Engineer,
Akamai
Andrea Fassina
Freelance Developer,
https://videodeveloper.io