Video: Case Study FIS Ski World Championship

There’s a lot to learn when it comes to implementing video over IP, so it’s healthy to stand back from the details and see a working system in use to understand how the theory becomes reality. There’s been a clear change in the tone of conversation at the IP Showcase over the years as we’ve shifted from ‘trust us, this could work’ to ‘this is what it looks like!’ That’s not to say there isn’t plenty still to be done, but this talk about an uncompressed SMPTE ST 2110 remote production workflow is a great example of how the benefits of IP are being realised by broadcasters.
Robert Erickson is with Grass Valley, specialising in sports events such as the FIS Alpine World Ski Championships, which were held in Åre in Sweden, some 600km from Stockholm where Sweden’s public broadcaster SVT is based. With 80 cameras at the championships to be remotely controlled over an uncompressed network, this was no small project. Robert explains that the two locations were linked by a backbone of two 100Gbps circuits.

The principle behind SVT’s project was to implement a system which could be redeployed, wouldn’t alter the viewers’ experience and would reduce staff and equipment on site. Interestingly, the director wanted to be on-site, meaning the production was split between Åre and Stockholm, which of course was where most of the staff and equipment were. The cameras were natively IP, so no converters were needed in the field.

Centralisation, based in Stockholm, was the name of the game, producing an end-to-end IP chain. Network switching was provided by Arista, which aggregated the camera feeds and brought them to Stockholm where the CCUs were located. Robert highlights the benefits of this approach, which include the use of COTS switches, scalability and indifference to the circuits in use. We then have a look inside the DirectIP connection: a 10Gbps ‘pipe’ carrying SMPTE 2022-6 camera and return feeds along with control and talkback, replicating the functionality of a SMPTE camera fibre in IP.
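To put rough numbers on that ‘pipe’, here’s a minimal back-of-the-envelope sketch of how many SMPTE 2022-6 HD flows fit in 10Gbps. The figures are illustrative assumptions, not numbers from the talk:

```python
# Back-of-the-envelope budget for SMPTE 2022-6 flows on a 10GbE link.
# All figures below are illustrative assumptions.

HD_SDI_GBPS = 1.485      # nominal HD-SDI payload rate carried by 2022-6
ENCAP_OVERHEAD = 1.06    # assumed ~6% RTP/UDP/IP encapsulation overhead
LINK_GBPS = 10.0         # DirectIP 'pipe' capacity
HEADROOM = 0.9           # keep ~10% spare for control, talkback and bursts

flow_gbps = HD_SDI_GBPS * ENCAP_OVERHEAD
usable_gbps = LINK_GBPS * HEADROOM
max_flows = int(usable_gbps // flow_gbps)

print(f"Each 2022-6 HD flow: ~{flow_gbps:.2f} Gbps")
print(f"Flows per 10GbE pipe with headroom: {max_flows}")
# -> roughly 5 flows, e.g. a camera feed plus return feeds, with room
#    left over for the control and talkback data mentioned above.
```

On those assumptions, a single 10Gbps connection comfortably carries a camera plus its returns, which is exactly the SMPTE-fibre-style bundle the DirectIP link replicates.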

To finish up, Robert talks about the return vision, including multiviewers, which was sent back to Åre. A Nimbra setup was used to take advantage of a lower-bandwidth circuit, using JPEG 2000 to send the vision back. In addition, it carried the data to connect the vision mixer/switcher at Åre with the switch in Stockholm. This was the only point at which noticeable latency was introduced, to the tune of around 4 frames.

Watch now!
Download the presentation
Speakers

Robert Erickson
Strategic Account Manager, Sports and Venues,
Grass Valley

Video: How to Build an SRT Streaming Flow from Encoder to Edge

SRT is an enabler for contribution over the internet – whether point to point, or cloud egress/ingress. In recent weeks here on The Broadcast Knowledge we have seen different takes on how SRT, short for Secure Reliable Transport, and RIST can be used, including from Open Broadcast Systems.

Here, Karel Boek, CEO of Raskenlund, a Norwegian streaming consultancy, explains SRT and builds a workflow as a live demo showing how you can implement it quickly and easily. He starts by explaining where SRT sits and what it’s for. SRT makes contribution over the internet possible because it has a very light-touch way of recovering the missing packets which are inevitable on internet links. Karel covers Haivision’s creation of SRT and the SRT Alliance that has grown out of it, which now boasts 350 members. The protocol being Open Source – and now an IETF Draft – means that a lot of companies have been happy to adopt it. There are frequent plugfests – one has just concluded – where vendors test compatibility with the increasing set of features offered in SRT.

‘Secure’ is the ‘S’ in SRT’s name because the stream can be easily encrypted as part of the protocol. This is an important aspect in enabling sports and enterprise contribution in the cloud, giving confidence that no-one can watch the feed before it gets to its destination.
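As a sketch of how this typically looks in practice, the snippet below assembles a libsrt-style URI with the `passphrase` and `pbkeylen` query parameters understood by SRT-based tools such as srt-live-transmit and ffmpeg. The host, port and passphrase are made-up example values:

```python
# Illustrative only: switching on SRT's built-in encryption via URI
# parameters. Hostname, port and passphrase are example values.

params = {
    "mode": "caller",
    "latency": 200,              # ms of receive buffer for retransmissions
    "passphrase": "0123456789",  # pre-shared key, 10-79 characters
    "pbkeylen": 32,              # AES key length in bytes: 16, 24 or 32
}
query = "&".join(f"{k}={v}" for k, v in params.items())
url = f"srt://contribution.example.com:4900?{query}"
print(url)
# Both ends must be configured with the same passphrase; anyone tapping
# the link without it sees only encrypted payloads.
```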

‘Reliable’ is the key offer for SRT, as unreliable delivery is the number one problem with the internet and other networks where not all packets arrive. TCP/IP, the protocol on which most webpages are delivered, is fantastic for file delivery since every single packet gets acknowledged and there really isn’t any way that a file can get to the other end without being completely intact. Live streams can’t afford the overhead of counting in and counting out every packet, so SRT’s ability to request only the missing packets is very important. It should be noted that this selective re-request technique, known as ARQ, is also found in Zixi and RIST.
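The idea is easy to sketch. The toy Python below shows the receiver-side logic in spirit: track what arrived and re-request only the gaps, rather than acknowledging every packet as TCP does. Real SRT adds timers, pacing and much more; this is conceptual, not the actual implementation:

```python
# Conceptual sketch of selective retransmission (ARQ) as used by SRT,
# Zixi and RIST: NAK only the missing sequence numbers.

def find_missing(received_seqs, highest_seq):
    """Return the sequence numbers to re-request (the gaps)."""
    return [s for s in range(highest_seq + 1) if s not in received_seqs]

received = {0, 1, 2, 4, 5, 7}          # packets that actually arrived
nak_list = find_missing(received, max(received))
print(f"Re-request only: {nak_list}")  # -> [3, 6]
# A TCP-style transfer would instead acknowledge every one of 0..7,
# which is the per-packet overhead live streams can't afford.
```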

Karel compares SRT with other protocols including RTMP and MPEG-2 Transport Streams, amongst others. He is careful to separate HLS, MPEG-DASH and WebRTC as ‘last-mile protocols’, differentiating those which content providers use to move video around as part of production from those used for distribution. RTMP’s use is still notable but diminishing, particularly in the European and American markets. MPEG-TS over UDP, though, is still the best way to deliver within a building. Outside of the building, you would then want to protect it at least with FEC or SMPTE 2022-7 or, better, with a protocol such as RIST or SRT. Karel notes some features which are absent from RIST’s Simple Profile, omitted by design; we’ve heard here on The Broadcast Knowledge that these features have been delivered as planned in the Main Profile.

In the final part of this talk, Karel builds, live, an example workflow combining Wowza and SRTHub into an end-to-end chain. This is a great way of demonstrating how quickly you can create a workflow with SRT. There are plenty of SRT-enabled encoders and senders, which is one of the ways we can judge the success of the SRT Alliance. Similarly, whilst Haivision’s SRTHub is a useful product which brings things together in the cloud or on-prem, Techex’s MWEdge and Videoflow’s DVG can do similar or more, each with their own advantages.

Overall, the takeaway from this talk from Raskenlund is that internet contribution is a solved problem; it’s now for you to choose how to do it and with whom. To that end, the talk ends with a Q&A from people wondering exactly that.

Watch now!
Speaker

Karel Boek
CEO,
Raskenlund

Video: Versatile Video Coding (VVC)

MPEG’s VVC is the next iteration along from HEVC (H.265). Whilst there are other codecs being finalised, such as EVC and LCEVC, this talk looks at how VVC builds on HEVC but also turns its hand to screen content and VR, becoming a more versatile codec than HEVC and meeting the world’s changing needs. For an overview of these emerging codecs, this interview covers them all.

VVC is a joint project between the ITU-T and MPEG (AKA ISO/IEC). Its aim is a 50% reduction in bitrate for the same picture quality, with the emphasis on higher resolutions, HDR and 10-bit video, while acknowledging that optimising codecs for natural video is no longer the core requirement for a lot of people. Its versatility comes from being able to encode screen content, independent sub-pictures and scalable encoding, among others.
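As a quick worked example of what that 50% target means in practice (the starting bitrate is an assumption for illustration, not a figure from the talk):

```python
# Successive ~50% savings per codec generation, using an assumed
# H.264/AVC 4K bitrate as the starting point.
avc_4k_mbps = 32
hevc_4k_mbps = avc_4k_mbps * 0.5   # HEVC's ~50% saving over AVC
vvc_4k_mbps = hevc_4k_mbps * 0.5   # VVC's ~50% target over HEVC
print(f"AVC {avc_4k_mbps} -> HEVC {hevc_4k_mbps:.0f} -> VVC {vvc_4k_mbps:.0f} Mbps")
```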

Gary Sullivan from Microsoft Technology & Research talks us through what all this means. He starts by outlining the case for a new codec, particularly the reach for another 50% bitrate saving which may come at further computational cost. Gary points out that, as video use continues to increase, anything that can be done to significantly reduce bitrates will either drive down costs or allow people to use video in better ways.

Any codec is a set of tools all working together to create the final product. Some tools are not always needed, say if you are running on a lower-power system, allowing the codec to be tuned for the situation. Gary puts up a list of some of the tools in VVC, many of which are an evolution of the same tool in HEVC, and highlights a few to give an insight into the improvements under the hood.

Gary’s picks of the big hitters in the tool-set are the Adaptive Loop Filter, which reduces artefacts and prediction errors; affine motion compensation, which provides better motion compensation; triangle partitioning mode, a high-computation improvement in inter prediction; bi-directional optical flow (BIO) for motion prediction; and intra-block copy, which is useful for screen content where an identical block is found elsewhere in the same frame.

Gary highlights Screen Content Coding (SCC), which was in HEVC but not in its base profile. This has changed for VVC, so all VVC implementations will have SCC, whereas very few HEVC implementations do. Reference Picture Resampling (RPR) allows the resolution to change from picture to picture, with reference pictures stored at a different resolution from the current picture. And independent sub-pictures allow parts of the video frame to be re-arranged, or only one region to be decoded. This works well for VR and video conferencing, and allows the creation of composite videos without intermediate decoding.

As usual, doing more thinking about how to compress a picture brings further computational demands. MPEG’s LCEVC is the standards body’s way of fighting against this, as notable bitrate improvements are possible even for low-power devices. With VVC, however, versatility is the aim. Decoders see a 60% increase in decode complexity. Whilst MPEG specifications are all about the decoder – hence allowing a lot of ongoing innovation in encoding techniques – current example encoders are about 8 or 9 times slower. Performance is better for screen content and at higher resolutions. Whilst the coding part of VVC is mature, versatility is still being worked on, but the aim is to publish within about 2 months.

The video finishes with a Q&A covering implementing DASH in a low-latency video workflow and how CMAF will be specified to use VVC. Live workflows, Gary explains, always come after the initial file-based work and are best understood after the first attempts at encoder implementations, noting that hardware lags by 2 years. He goes on to explain that chipmakers need to see the demand. At the moment, there is a lot of focus from implementors on AV1, not to mention EVC, so the question is how much demand can be generated.

This talk is based on a talk from Benjamin Bross originally given to an ITU workshop (PDF), then presented at Mile High Video by Benjamin, and was updated by Gary for this conversation with the Seattle Video Tech community.

Bitmovin has an article highlighting many of the improvements in VVC written by Christian Feldmann who has given many talks on both AV1 and VVC.

Watch now!

Speakers

Gary Sullivan
Microsoft Technology & Research

Video: SRT – The Simple Solution for Your Remote and At-Home Workforce

SRT allows unreliable networks like the Internet to be used for reliable, encrypted video contribution. Created by Haivision and now an Open Source technology with an IETF draft spec, the alliance of SRT users continues to grow as the technology develops and adds features. Haivision are also members of RIST, which Kieran Kunhya spoke about in yesterday’s article.

Being open-source, SRT is widely deployed across hundreds of manufacturers, so there is a lot of choice, although Haivision do focus on their own products in this video. The important part is how the protocol works to keep the data intact, which is dealt with in the second segment from Haivision’s Selwyn Jans. Lastly, we hear of some real-world use cases to whet the appetite and start the thought process about how SRT could benefit you.

The fundamental aspect of SRT, as Selwyn explains, is that the packets are counted in at the remote end and, if one is missing, it’s re-requested from the source. Whilst this is how normal file transfers using TCP work, SRT has optimised the approach to ensure real-time media isn’t unduly delayed. TCP acknowledges every single packet, and the sender takes note when an acknowledgement doesn’t arrive. SRT is more efficient: acknowledgements are minimised and only missing packets are re-requested, which keeps overheads down. A buffer is set up at the destination so that there is still data available to play out while waiting for these packets to be resent. Depending on the network quality, the buffer may need to be large enough to cover several re-requests for the same packet.
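A rough sketch of that sizing logic: the receive buffer (SRT’s ‘latency’ setting) must span several round trips so a lost packet can be re-requested more than once. The multiplier below is a rule-of-thumb assumption for illustration, not a figure from the video:

```python
# Rule-of-thumb sizing for the SRT receive buffer: the latency must
# cover one round trip per potential re-request of the same packet.

def suggested_latency_ms(rtt_ms, retransmit_attempts=4):
    """Assumed heuristic: allow for several re-requests of one packet."""
    return rtt_ms * retransmit_attempts

for rtt in (10, 40, 120):  # e.g. metro link, national, intercontinental
    print(f"RTT {rtt:3d} ms -> suggested SRT latency ~{suggested_latency_ms(rtt)} ms")
```

The worse the network, the more attempts the buffer should cover, which is why SRT lets you tune latency per link rather than fixing it in the protocol.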

How SRT Works

Selwyn expands upon this re-request mechanism and looks at the ways SRT can connect: as a ‘caller’ which pushes to the far end, or as a ‘listener’ where the sender waits to be contacted before it starts sending any data. You can choose whichever best fits around your firewalls. Where there is a NAT firewall, SRT can always be sent outbound, but receiving connections would need firewall modification. One of the benefits of SRT is its ability to be deployed anywhere, including in a home, quickly and easily, so firewall changes would not be welcome. For a more in-depth description of SRT, check out this talk from SF Video Technology.
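To make the modes concrete, here’s a minimal sketch using libsrt-style URIs as accepted by tools built on the SRT library, such as ffmpeg and srt-live-transmit. Hostnames and ports are example values:

```python
# Conceptual illustration of SRT connection modes via URI parameters.

# 'Listener': this end waits to be contacted -- typically the end whose
# firewall can be configured to accept an inbound connection.
listener_url = "srt://0.0.0.0:4900?mode=listener&latency=200"

# 'Caller': this end initiates the connection outbound -- ideal behind a
# NAT firewall (e.g. at home), since no inbound rules are needed.
caller_url = "srt://studio.example.com:4900?mode=caller&latency=200"

# SRT also offers 'rendezvous', where both ends call simultaneously,
# which can traverse some NATs with no firewall changes on either side.
rendezvous_url = "srt://peer.example.com:4900?mode=rendezvous"

print(listener_url, caller_url, rendezvous_url, sep="\n")
```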

The last section features Corey Behnke from streaming company Live X talking about where they have been using SRT. Replacing satellite is one important use, since in many places there is sufficient bandwidth available to stream over the internet. Before technologies such as SRT, this was likely to lead to breakups on air, so satellite was the clear winner; now there’s money to be saved by not buying satellite space. Cloud ingress and egress is also a very important workflow for SRT and similar protocols. The panellists explain how this works using the Haivision Media Gateway as an example, though other products from the likes of Techex and Videoflow can do the same.

Watch now!
Speakers

Marcus Schioler
Vice President, Product Marketing
Haivision
Selwyn Jans
Technical Video Engineer,
Haivision
Corey Behnke
Producer & Co-Founder,
Live X