Video: ST 2110 Over WAN — The Conclusion of Act 1

Is SMPTE ST 2110 suitable for inter-site connectivity over the WAN? ST 2110 is putting the early adopter phase behind it, with more and more installations and OB vans bringing 2110 into daily use, yet most sites work independently. Larger broadcasters with multiple facilities spread across one country, or even several, are keen to derive further cost and efficiency savings from their infrastructure. To do this, though, there are a number of challenges still to be overcome, and moving a large number of essence flows long distances and between PTP time domains is one of them.

Nevion’s Andy Rayner is chair of the VSF Activity Group looking into transporting SMPTE ST 2110 over the WAN and is here to give an update on the achievements of the past two years. He underlines that the aim of the ST 2110 over WAN activity group is to detail how to securely share media and control between facilities. The key scenarios being considered are 1) special events/remote production/REMIs, 2) facility sharing within a company, and 3) sharing facilities between companies. He also notes that there is significant crossover between this work and that happening in the Ground-Cloud-Cloud-Ground (GCCG) activity group, which he also co-chairs.

The group has produced drafts of two documents under TR-09. The first, TR-09-01, discusses the data plane and has largely been covered previously. It defines two data protection methods: ST 2022-7, which uses multiple identical flows to deal with packet loss, and a constrained version of the ST 2022-5 FEC standard, which provides low-latency FEC for the protection of individual data streams.
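
To make the ST 2022-7 idea concrete, here is a minimal Python sketch of hitless merging, assuming two copies of the same RTP flow arrive on ports 5004 and 5006; the ports, buffer size and the unbounded 'seen' set are illustrative only:

```python
# Minimal sketch of ST 2022-7-style hitless merging: two identical RTP
# flows arrive over independent paths and the first copy of each packet,
# identified by its RTP sequence number, wins. Reordering and
# sequence-number wrap handling are omitted for brevity.
import socket
import struct

def open_flow(port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    sock.setblocking(False)
    return sock

flows = [open_flow(5004), open_flow(5006)]  # primary and secondary paths
seen = set()  # sequence numbers already forwarded (needs pruning in practice)

def poll_once(output):
    for sock in flows:
        try:
            packet, _ = sock.recvfrom(2048)
        except BlockingIOError:
            continue
        # The RTP sequence number lives in bytes 2-3 of the 12-byte header
        seq = struct.unpack_from("!H", packet, 2)[0]
        if seq not in seen:
            seen.add(seq)
            output(packet)  # first arrival wins; the duplicate is dropped
```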

GRE trunking over RTP was previously announced as the recommended way to move traffic between sites, though Andy notes that no single aspect of the document is mandatory. The benefits of using a trunk are that all traffic is routed down the same path, which helps keep the propagation delay for each essence identical; the bitrate is kept high for efficient application of FEC; the workflow and IT requirements are simpler; and, finally, the trunk has now been specified so that it can transparently carry Ethernet headers between locations.
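
As a rough illustration of the trunking idea, the sketch below wraps an Ethernet frame in a GRE header carried as the payload of an RTP packet. The exact field values and constraints are specified in TR-09-01, so treat the payload type and SSRC here as placeholders:

```python
# Illustrative sketch of trunk-style encapsulation: an Ethernet frame is
# wrapped in a GRE header (protocol type 0x6558, Transparent Ethernet
# Bridging) which is then carried as the payload of an RTP packet.
import struct

def encapsulate(eth_frame: bytes, seq: int, timestamp: int) -> bytes:
    rtp_header = struct.pack(
        "!BBHII",
        0x80,        # version 2, no padding/extension/CSRC
        96,          # dynamic payload type (placeholder)
        seq & 0xFFFF,
        timestamp & 0xFFFFFFFF,
        0x1234ABCD,  # SSRC (placeholder)
    )
    gre_header = struct.pack("!HH", 0x0000, 0x6558)  # flags, Ethernet payload
    return rtp_header + gre_header + eth_frame
```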

Andy also introduces TR-09-02, which covers the sharing of control. The control plane within any one facility is not specified and doesn’t have to be NMOS; however, NMOS specifications such as IS-04 and IS-05 are the basis chosen for control sharing. Andy describes the control as providing a constrained NMOS interface between autonomous locations and discusses how it makes resources and metadata available to the other location, which then has the choice of whether or not to consume the advertised media and control. This allows facilities to pick and choose what is shared.
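
As a hedged sketch of what this could look like to a programmer, the snippet below uses the IS-04 Query API to list advertised senders and the IS-05 Connection API to activate a local receiver; the registry URL, receiver host and receiver ID are placeholders, not anything defined by TR-09-02:

```python
# Minimal sketch of NMOS-style discovery and connection, assuming a
# reachable IS-04 Query API and IS-05 Connection API.
import requests

REGISTRY = "http://registry.remote-facility.example:8080"  # placeholder

# IS-04: list the senders the remote facility has chosen to advertise
senders = requests.get(f"{REGISTRY}/x-nmos/query/v1.3/senders").json()
for sender in senders:
    print(sender["id"], sender["label"])

# IS-05: stage and immediately activate a connection on a local receiver
receiver_host = "http://receiver.local-facility.example:8080"  # placeholder
receiver_id = "00000000-0000-0000-0000-000000000000"  # placeholder UUID
patch = {
    "sender_id": senders[0]["id"],
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
}
requests.patch(
    f"{receiver_host}/x-nmos/connection/v1.1/single/receivers/{receiver_id}/staged",
    json=patch,
)
```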

Watch now!
Speakers

Andy Rayner
Chief Technologist, Nevion,
Chair, WAN IP Activity Group, VSF

Video: Telenor, DR & SVT – Platform modernisation & open source transcoding

In this triple play of short presentations, we hear about three Nordic companies’ work to improve, react to changing conditions and stay relevant by modernising their whole platform, launching new products and developing improved transcoding.

Telenor’s Geir Inge Fevang and Stein Lindman-Johannesen speak first, introducing us to the work done to fully launch the ‘T-We’ streaming service, which provides live and VoD streaming wherever you are. Delivering this service was not without its challenges, they explain, because several years ago they realised that their existing system was too rooted in the delivery of broadcast television to be relevant to streaming, and its complexity was very high. Overall this led to reduced agility in the product offering and would have caused a growing divergence between what Telenor could offer and what the Norwegian public would come to expect. This prompted them to modernise the whole TV chain.

Decommissioning is one way to simplify your video delivery system and Telenor was not shy to do this, retiring analogue TV, their Smartvision platform and their satellite distribution/syndication platform which shared channels with third parties. All the remaining viewers would then be brought onto a new user experience. This was done by launching a new set-top box and updating the software on the existing deployed STBs to bring them in line, as much as possible, with the new service, as well as by launching new clients for mobile and web and modernising the back-end video delivery stack. Geir and Stein wrap up their segment discussing how customer satisfaction varied throughout this experience and the learnings they’ve collected along the way.

Troels Hauch Tornmark from DR presents next, talking about two recent product launches, one of which, he says, was a miss and one a hit. Being a public broadcaster, they have a continuing need to keep quality high and bring television to the public. One way to do that, explains Troels, is to allow it to follow you as you move around Europe. The EU portability regulation provides a legal framework and motivation for streaming services to continue providing access even when you are abroad. DR felt this would be an important and valued option for Danish ex-pats and holidaymakers alike, so they commissioned a project to make this a reality based on a national Danish identity system. This was given a ‘silent’ launch because the project finished last year during the pandemic-related lockdowns. In contrast, Troels details a co-watching product, perfect for lockdown, which they have launched allowing shared, synchronised viewing of programmes no matter where you are, putting text chat alongside the video and ensuring that when one person needs to pause, everyone is paused too. This has now been spun out into its own company called flinge.

Finally, Olof Lindman from Sweden’s SVT talks about Encore, their in-house system which streamlines transcoding and has improved the visual quality of their VoD service. When their previous transcoding software reached end-of-life, SVT trialled using FFmpeg to transcode assets and were very pleased with the results. This led to the creation of Encore, which brings together in-house programming from SVT with open-source tools like FFmpeg, MediaInfo and many more to deliver a transcoding platform. It features queuing of incoming jobs with three priority levels, flexible transcoding using a mixture of templates chosen in response to the media being transcoded, and prioritised output whereby the simplest AVC profiles are published first, with more advanced audio and HEVC versions being encoded and published later when resources allow.
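
As a toy illustration of a prioritised transcoding queue of this kind, and emphatically not Encore’s actual code, here is a sketch using Python’s heapq around FFmpeg; the ladder settings and filenames are hypothetical:

```python
# Illustrative prioritised FFmpeg job queue: lower number = higher
# priority, and a counter keeps FIFO order within a priority level.
import heapq
import itertools
import subprocess

queue, counter = [], itertools.count()

def submit(priority, src, dst, args):
    heapq.heappush(queue, (priority, next(counter), src, dst, args))

def run_next():
    priority, _, src, dst, args = heapq.heappop(queue)
    subprocess.run(["ffmpeg", "-y", "-i", src, *args, dst], check=True)

# Publish a simple AVC rendition first; the HEVC one waits for resources
submit(0, "master.mxf", "out_avc.mp4", ["-c:v", "libx264", "-b:v", "3M"])
submit(2, "master.mxf", "out_hevc.mp4", ["-c:v", "libx265", "-b:v", "1.8M"])
```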

SVT have decided to open-source Encore, which they hope will see continued development from a wider community. So far they have benefitted from the platform as they now have more control and flexibility than before, as well as a much better quality of video output.

Watch now!
Speakers

Geir Inge Fevang
Head of Streaming & TV,
Telenor
Stein Lindman-Johannesen
Head of Content & Recommendations,
Telenor
Troels Hauch Tornmark
Product Manager,
DRTV
Olof Lindman
Video R&D Engineer,
SVT

Video: CMAF with ByteRange – A Unified & Efficient Solution for Low Latency Streaming

Apple’s LL-HLS protocol is the most recent technology offering to deliver low-latency streams of just 2 or 3 seconds to the viewer. Before that, CMAF, used with MPEG DASH, also enabled low-latency streaming. This panel with ATEME, Akamai and THEOplayer asks how they both work and what their differences are, and maps out a way to deliver both at once, covering the topic from the perspectives of the encoder manufacturer, the CDN and the player client.

We start with ATEME’s Mickaël Raulet who outlines CMAF, starting with its inception in 2016 with Microsoft and Apple. CMAF was published in 2018 and most recently received detailed guidelines for low-latency best practice in 2020 from the DASH Industry Forum. He outlines that the idea of CMAF is to build on DASH to find a single way of delivering both DASH and HLS using one set of media, the aim being to minimise hits on the cache as well as storage. Harnessing the ISO BMFF, CMAF adds the ability to break segments into smaller fragments, opening up the promise of low-latency delivery.
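
Since CMAF media is ISO BMFF, each low-latency fragment shows up on the wire as a ‘moof’ plus ‘mdat’ box pair, with several such pairs inside one segment. The following minimal sketch walks the top-level boxes of a segment file (the filename is a placeholder):

```python
# Walk the top-level ISO BMFF boxes of a CMAF segment; each box starts
# with a 32-bit size and a 4-character type.
import struct

def iter_boxes(data: bytes):
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size == 1:  # a 64-bit 'largesize' follows the box type
            size = struct.unpack_from(">Q", data, offset + 8)[0]
        if size < 8:
            break  # size 0 ('to end of file') or malformed; stop this sketch
        yield box_type.decode("ascii", "replace"), offset, size
        offset += size

with open("segment.m4s", "rb") as f:  # placeholder filename
    for box_type, offset, size in iter_boxes(f.read()):
        print(f"{box_type} at byte {offset}, {size} bytes")
```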

Mickaël discusses the methods of getting hold of these short fragments. If you store the fragments separately as well as the whole segment, you double your storage, since the four fragments together make up the segment; it’s better, then, to have all the fragments written as one segment. We see that byterange requests are the way forward, whereby the client asks the server to start delivering a file from a certain number of bytes into it. We can even request data ahead of time, using a preload hint, so that the server can push it as soon as it’s ready.
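
A quick sketch of what such requests look like from a client, assuming a segment exposed over HTTP (the URL and byte offsets are placeholders):

```python
# Fetching fragments of one segment file via HTTP Range requests.
import requests

url = "https://cdn.example.com/video/segment4.m4s"  # placeholder URL

# Fetch only the second one-second fragment, e.g. bytes 250000-499999
part = requests.get(url, headers={"Range": "bytes=250000-499999"})
assert part.status_code == 206  # 206 Partial Content confirms a range reply

# An open-ended range collects the rest of the segment in one request
rest = requests.get(url, headers={"Range": "bytes=500000-"})
```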

Next we hear from Akamai’s Will Law who examines how Apple’s LL-HLS protocol can work within the CDN to serve both DASH and LL-HLS from the same media files. He uses the example of a 4-second segment with four one-second parts. A standard-latency player would want to download the whole 4-second segment whereas a LL-HLS player would want the parts. DASH has similar requirements, so Will focusses on how to bring all of these requirements down into the minimum set of files needed, which he calls a ‘common cache footprint’, using CMAF.
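
A hedged sketch of what the corresponding LL-HLS media playlist might look like, with every part addressed as a byterange of one segment file so the cache need only hold that file; the URIs, durations and offsets are illustrative:

```
#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:4
#EXT-X-PART-INF:PART-TARGET=1.0
#EXT-X-MEDIA-SEQUENCE:4
#EXT-X-PART:DURATION=1.0,URI="seg4.m4s",BYTERANGE="250000@0",INDEPENDENT=YES
#EXT-X-PART:DURATION=1.0,URI="seg4.m4s",BYTERANGE="250000@250000"
#EXT-X-PART:DURATION=1.0,URI="seg4.m4s",BYTERANGE="250000@500000"
#EXT-X-PART:DURATION=1.0,URI="seg4.m4s",BYTERANGE="250000@750000"
#EXTINF:4.0,
seg4.m4s
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="seg5.m4s",BYTERANGE-START=0
```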

He shows how byterange requests work, how to structure them and explains that, to help with bandwidth estimation, the server will wait until the whole of the byterange is available before it sends any data, thus allowing the client to download at wire speed. Moreover, a single open-ended request can deliver the rest of the segment, meaning 7 requests get collapsed into 1 or 2 requests, an important saving for CDNs working at scale. It is possible to use longer GOPs for the 4-second segment than for the 1-second parts, but for this technique to work it’s important to maintain the same structure within the large 4-second segment as in the 1-second parts.

THEOplayer’s Pieter-Jan Speelmans takes the floor next, explaining the view from the player end of the chain. He discusses support for LL-HLS across different platforms such as Android, Android TV, Roku etc. and concludes that there is, perhaps surprisingly, fairly wide support for Apple’s LL-HLS protocol. Pieter-Jan spends some time building on Will’s discussion about reducing request numbers: in browsers, CORS checking can cause extra requests to be needed when using byterange requests. For implementing ABR, it’s important to understand how close you are to the available bandwidth. Pieter-Jan says that you shouldn’t only use the download time to determine throughput, but also metadata from the player, to get as exact an estimate as possible. We also hear about dealing with subtitles, which can need to be on screen longer than the duration of any of the parts, or even than the segment length. These need to be adapted so that they are repeated and each chunk contains the correct information. This can lead to flashing on re-display so, as with many things in modern players, it needs to be dealt with carefully and intentionally to ensure the correct user experience.
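
As a simplified sketch of the throughput problem Pieter-Jan describes: with chunked delivery the connection is often idle while the encoder produces data, so timing the whole response understates the link. Measuring only the active bursts gets closer, and a real player would also fold in its own playback metadata; the gap threshold below is illustrative:

```python
# Toy low-latency throughput estimator: count only the time spent actively
# receiving bytes, ignoring gaps where the connection waits on the encoder.
import time
import requests

def estimate_throughput(url, idle_gap=0.05):
    active_time, received = 0.0, 0
    last = time.monotonic()
    with requests.get(url, stream=True) as resp:
        for chunk in resp.iter_content(chunk_size=16384):
            now = time.monotonic()
            gap = now - last
            if gap < idle_gap:  # ignore idle waits between bursts
                active_time += gap
            received += len(chunk)
            last = now
    return (received * 8) / active_time if active_time else 0.0  # bits/s
```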

The last part of the video is a Q&A which covers:

  • Use of HTTP2 and QUIC/HTTP3
  • Dynamic Ad Insertion for low latency
  • The importance of playlist blocking
  • Player synchronisation with playback rate adjustment
  • Player analytics
  • DRM insertion problems at low-latency

Watch now!
Speakers

Will Law
Chief Architect, Edge Technology Group,
Akamai
Mickaël Raulet
CTO,
ATEME
Pieter-Jan Speelmans
CTO & Founder,
THEOplayer
Video: Uncompressed Video in the Cloud

Moving high-bitrate flows such as uncompressed media through cloud infrastructure, which is designed for scale rather than real-time throughput, requires more thought than simply using UDP and multicast. That traditional approach can certainly work, but is liable to drop the occasional packet, compromising the media.

In this video, Thomas Edwards and Evan Statton outline the work underway at Amazon Web Services (AWS) for reliable real-time delivery. On-prem 2110 network architectures usually have two separate networks; media essences are sent as single, high-bandwidth flows over both, allowing the endpoint to use SMPTE ST 2022-7 seamless switching to deal with any lost packets. Cloud network architectures differ from on-prem networks: they are usually much wider and taller, providing thousands of possible paths to any one destination.

AWS have been working on ways of harnessing these cloud network architectures and have come up with two protocols. The first is Scalable Reliable Delivery (SRD), a protocol created by Amazon which guarantees delivery of packets. Delivery is likely to be out of order, so packet order needs to be restored by a layer above SRD. Amazon have custom network cards called ‘Nitro’ and it’s these cards which run the SRD protocol, keeping the functionality as close to the physical layer as possible.

SRD capitalises on hyperscale networks by splitting each media flow up into many smaller flows. A high-bandwidth uncompressed video flow could be over 1 Gbps; SRD would deliver this over a hundred or more ‘flowlets’, each leaving on a different path. Paths are partially controlled using ECMP (Equal Cost Multipath) routing, whereby the egress port used on a switch is chosen by hashing together a number of parameters such as the source IP and destination port; the sender controls the ECMP path selection by manipulating the packet encapsulation. SRD employs a specialised congestion control algorithm that helps further decrease the chance of packet drops and minimise retransmit times by keeping queuing to a minimum. SRD keeps an eye on the RTT (round trip time) of each of the flowlets and adjusts the bandwidth appropriately. This is particularly useful for dealing with ‘incast congestion’, where many flowlets upstream may end up going through the same, nearly overloaded, interface. In this way, SRD actively works to reduce latency and congestion. SRD also keeps a very small retransmit buffer so that any packets which get lost can be resent quickly. Similar to SRT and RIST, SRD expects to receive acknowledgement packets, and by looking at when these arrive and the timing between packets, RTT and bandwidth estimations can be made.
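
A toy model of the ECMP idea behind this flowlet spraying, not AWS’s implementation: switches pick an egress path by hashing flow identifiers, so a sender can steer each flowlet onto a different path simply by varying its source port:

```python
# Illustrative ECMP path selection: a hash over the flow identifiers
# stands in for a switch's hardware hash function.
import zlib

NUM_PATHS = 16  # possible egress ports at one switch (illustrative)

def ecmp_path(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_PATHS

# Splitting one >1 Gbps flow into 100 flowlets: varying only the source
# port scatters them across the available paths
flowlet_paths = [
    ecmp_path("10.0.0.1", "10.0.0.2", 40000 + flowlet, 5004)
    for flowlet in range(100)
]
print(flowlet_paths)
```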

CDI, the Cloud Digital Interface, is a layer on top of SRD which acts as an interface for programmers. Available on GitHub under a BSD licence, it gives access to the incoming essence streams in a way similar to SMPTE ST 2110, making it easy to deal with pixel data, get access to RGB graphics including an alpha layer, and receive metadata such as subtitles or SCTE 104 signalling.

Watch now!
Speakers

Thomas Edwards
Principal Solutions Architect & Evangelist,
Amazon Web Services
Evan Statton
Principal Architect,
Amazon Web Services (AWS)