Video: Reducing peak bandwidth for OTT

‘Flattening the curve’ isn’t just about dealing with viruses, we learn from Will Law. Rather, it’s also a way to deal with the network congestion brought on by the rise in broadband use during the global lockdown. This technique, along with others such as per-title encoding and removing the top tier, is explored in this video from Akamai and Bitmovin.

Will Law starts the talk by explaining why congestion happens in a world where ABR (adaptive bitrate) streaming is supposed to deal with it. With Akamai’s traffic up by around 300%, it’s perhaps no surprise there’s a contest for bandwidth. Since not all traffic is a video stream, congestion will still happen as players fight with other, static, data transfers. Deeper than that, even with two ABR streams, the congestion-control protocol in use has a big impact, as Will shows with a graph comparing Akamai’s FastTCP with BBR, in which BBR takes all the bandwidth rather than ‘playing fair’.

Using a webpage constructed for the video, Will shows us a baseline video playback and its associated metrics, such as data transferred and bitrate, which he uses to demonstrate the benefits of the different bitrate-reduction techniques. The first is covered by Bitmovin’s Sean McCarthy, who explains Bitmovin’s per-title encoding technology. This approach ensures that each asset has encoder settings tuned to get the best out of the content whilst reducing bandwidth, as opposed to simply setting your encoder to a fairly high, safe, static bitrate for all content no matter how complex it is. Will shows on the demo that the bitrate reduces by over 50%.
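The per-title idea can be sketched in a few lines. This is an illustrative toy, not Bitmovin’s actual algorithm; the complexity score and scaling factors are assumptions made up for the example. The point is simply that a low-complexity asset gets a much cheaper ladder than a one-size-fits-all static one would give it.

```python
# Toy sketch of per-title ladder selection -- NOT Bitmovin's algorithm.
# We scale a generic bitrate ladder by a hypothetical per-asset complexity
# score (0.0 = very simple, 1.0 = very complex), so simple content gets a
# far cheaper top rung than a 'safe' static ladder would.

STATIC_LADDER_KBPS = [6000, 4500, 3000, 1800, 1000, 600]  # fixed 'safe' ladder

def per_title_ladder(complexity: float, floor_kbps: int = 400) -> list[int]:
    """Scale each rung by content complexity, keeping a sensible floor."""
    scale = 0.4 + 0.6 * complexity  # simple content drops to 40% of the static rate
    return [max(floor_kbps, round(r * scale)) for r in STATIC_LADDER_KBPS]

cartoon = per_title_ladder(0.2)   # low-complexity asset
sport   = per_title_ladder(0.95)  # high-complexity asset
print(cartoon[0], sport[0])       # top rungs: 3120 vs 5820 kbps
```

The cartoon’s top rung comes in roughly 48% below the static ladder, in line with the >50% reduction Will demonstrates for suitable content.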

Swapping codecs is an obvious way to reduce bandwidth. Unlike per-title encoding which is transparent to the end-user, using AV1, VP9 or HEVC requires support by the final device. Whilst you could offer multiple versions of your assets to make sure you still cover all your players despite fragmentation, this has the downside of extra encoding costs and time.

Will then looks at three ways to reduce bandwidth by stopping the highest-bitrate rendition from being used. Method one is to manually modify the manifest file. Method two demonstrates how to do the same using the Bitmovin player API, and method three uses the CDN itself to manipulate the manifests. The advantage of doing this in the CDN is the extra flexibility it allows: you can use geolocation rules, for example, to deliver different manifests to different locations.
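For method one, here is a minimal sketch of what editing the manifest amounts to for HLS: dropping the highest-BANDWIDTH variant from a master playlist. The playlist below is invented for the example, and real manifests carry more tag types than this handles.

```python
# Sketch: remove the top rendition from an HLS master playlist.
# Only #EXT-X-STREAM-INF entries are handled; the manifest is illustrative.
import re

MASTER = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1000000,RESOLUTION=640x360
low/index.m3u8
"""

def drop_top_rendition(manifest: str) -> str:
    lines = manifest.strip().split("\n")
    # Each #EXT-X-STREAM-INF tag is followed by its variant's URI line.
    variants = [(i, int(re.search(r"BANDWIDTH=(\d+)", lines[i]).group(1)))
                for i in range(len(lines))
                if lines[i].startswith("#EXT-X-STREAM-INF")]
    top = max(variants, key=lambda v: v[1])[0]
    # Drop the tag line and the URI line that follows it.
    kept = [l for i, l in enumerate(lines) if i not in (top, top + 1)]
    return "\n".join(kept) + "\n"

print(drop_top_rendition(MASTER))  # 1080p variant is gone; 720p and 360p remain
```

A CDN doing this at the edge can apply the same transform conditionally, per region or per client, which is why Will highlights its flexibility.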

The final method to reduce peak bandwidth is to use the CDN to throttle the download speed of the stream chunks. This means that while you may – if you are lucky – have the ability to download at 100Mbps, the CDN only delivers at 3 or 5 times the real-time bitrate. This goes a long way to smoothing out the peaks, which is better for the end user’s equipment and for the CDN. Seen in isolation, this does very little, as the video bitrate and the data transferred remain the same. However, delivering the video in this much more co-operative way is far less likely to cause knock-on problems for other traffic. It can, of course, be used in conjunction with the other techniques. The video concludes with a Q&A.

Watch now!
Speakers

Will Law
Chief Architect,
Akamai
Sean McCarthy
Technical Product Marketing Manager,
Bitmovin

Video: RIST in the Cloud

Cloud workflows are starting to become an integral part of broadcasters’ live production. However, when cloud infrastructure providers such as Google, Oracle or AWS are accessed through the public Internet or leased lines, the quality of the connection is often not sufficient for high-end broadcast video applications.

A number of protocols based on ARQ (Automatic Repeat reQuest) retransmission technology have been created (including SRT, Zixi, VideoFlow and RIST) to solve the challenge of moving professional media over the Internet, which is fraught with dropped packets and unwanted delays. Protocols such as SRT and RIST enable broadcast-grade video delivery at a much lower cost than fibre or satellite links.
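The core ARQ idea is easy to illustrate. This is a toy, not the RIST wire protocol: the receiver tracks RTP-style sequence numbers and reports the gaps as NACKs so the sender retransmits only what was actually lost, rather than everything.

```python
# Toy illustration of negative-acknowledgement ARQ as used by RIST/SRT.
# Real implementations run over UDP/RTP with retransmission timers and
# wrap-around sequence arithmetic; this just shows the gap-detection logic.

def receive(packets: list[int], highest_expected: int) -> tuple[set[int], list[int]]:
    """Return (sequence numbers received, sorted list of gaps to NACK)."""
    got = set(packets)
    missing = [seq for seq in range(highest_expected + 1) if seq not in got]
    return got, missing

got, nacks = receive([0, 1, 3, 4, 7], highest_expected=7)
print(nacks)  # gaps at 2, 5 and 6 -> ask the sender to resend just these
```

Because only the missing packets cross the link again, the overhead stays low even on lossy Internet paths, which is what makes these links viable against fibre and satellite on cost.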

The RIST (Reliable Internet Streaming Transport) protocol has been created as an open alternative to commercial options such as Zixi. This protocol is a merging of technologies from around the industry built upon current standards in IETF RFCs, providing an open, interoperable and technically robust solution for low-latency live video over unmanaged networks.

In this presentation, David Griggs from Amazon Web Services (AWS) talks about how the RIST protocol, combined with cloud technology, is transforming broadcast content distribution. He explains that delivery of live content is essential for broadcasters, who look for ways to deliver it without expensive private fibre optics or satellite links. With unmanaged networks you can get content from one side of the world to the other with very little investment in time and infrastructure, but this is only possible with ARQ-based protocols like RIST.

Next, David discusses the major advantages of cloud technology: it is dynamic and flexible. Historically, dimensioning the entire production environment for peak utilisation was financially challenging. Now it is possible to dimension it for average use while leveraging cloud resources for peak usage, providing a more elastic cost model. Moreover, the cloud is a good place to innovate and experiment because the barrier to entry, in terms of cost, is low. It encourages both customers and vendors to experiment, to be innovative and ultimately to build more compelling, better solutions.

David believes that open and interoperable QoS protocols like RIST will be instrumental in building complex distribution networks in the cloud. He hopes that AWS by working together with Net Insight, Zixi and Cobalt Digital can start to build innovative and interoperable cloud solutions for live sports.

Watch now!

Speaker

David Griggs
Senior Product Manager, Media Services
AWS Elemental

Video: ATSC 3.0 Basics, Performance and the Physical Layer

ATSC 3.0 is a revolutionary technology bringing IP into the realm of RF transmission; it is gaining traction in North America and is deployed in South Korea. Similar to DVB-I, ATSC 3.0 provides a way to unite the world of online streaming with that of ‘linear’ broadcast, giving audiences and broadcasters the best of both worlds. Looking beyond ‘IP’, the modulation schemes provided are much improved over ATSC 1.0, giving much better reception for the viewer and flexibility for the broadcaster.

Richard Chernock, now retired, was the CSO of Triveni Digital when he gave this talk introducing the standard as part of a series of talks on the topic. ATSC, formed in 1982, brought the first wave of digital television to the States and elsewhere, explains Richard as he looks at what ATSC 1.0 delivered and what, we now see, it lacked. For instance, its fixed 19.2Mbps bitrate hardly provides a flexible foundation for a modern distribution platform. We then look at the previously mentioned idea that ATSC 3.0 should glue together live TV, usually via broadcast, with online VoD/streaming.

The next segment of the talk looks at how the standard breaks down into separate standards. Most modern standards, like SMPTE’s 2022 and 2110, are actually a suite of individual standards documents united under one name. Whilst SMPTE 2110-10, -20, -30 and -40 come together to explain how timing, video, audio and metadata work to produce the final result of professional media over IP, ATSC 3.0 similarly has sections explaining how security, applications, the RF/physical layer and management work. Richard follows this up with a look at the protocol stack, which serves to explain which parts are served over TCP, which over UDP, and how the work is split between broadcast and broadband.

The last section of the talk looks at the physical layer: that is to say, how the signal is broadcast over RF and the resultant performance. Richard explains the newer techniques which improve the ability to receive the signal, but highlights that – as ever – it’s a balancing act between reception and bandwidth. ATSC 3.0’s benefit is that the broadcaster gets to choose where on the scale they want to broadcast, tuning for indoor reception, for high bit-rate reception, or anywhere in between. With less than -6dB SNR performance plus EAS wakeup, we’re left with the feeling that this is a large improvement over ATSC 1.0.

The talk finishes with two headlining features of ATSC 3.0. The first is PLPs (Physical Layer Pipes), whereby separate channels can be created within the same RF channel. Each of these can have its own robustness-versus-bitrate tradeoff, which allows one broadcaster to provide a range of types of service. The other is Layered Division Multiplexing, which allows PLPs to be transmitted on top of each other, enabling 100% utilisation of the available spectrum.

Watch now!
Speaker

Dr. Richard Chernock
Former CSO,
Triveni Digital

Video: An Introduction to fibre optic cabling

Many of us take fibre optics for granted, but how much about the basics do we actually know…or remember? You may be lucky enough to work in a company that only uses one type of fibre and connector, but in a job interview it pays to know what happens in the wider world. Fortunately, Phil Crawley is here to explain fibre optics from scratch.

This introduction to fibre looks at the uses for fibre in broadcast. Simply put, we can consider that it’s used in high-speed networking and long-distance cabling of baseband signals such as SDI, audio or RF. The meat of the topic is that there are two types of fibre, multi-mode and single-mode. It’s really important to know which one you’re going to be using; Phil explains why, showing the two ways they manage to keep light moving down the glass and get it to the other end.

The talk looks at the history of multi-mode fibres, which have continued to improve over the years, as recognised by the ‘OM’ number which currently stretches to OM5 (an advance on the OM4 which the talk considers). Since multi-mode has several different versions, it’s possible to have mismatches if you send from one fibre into another. Phil visits these scenarios, explaining how differences in the launch (laser vs. LED) and core diameter all affect the efficiency of moving light from one side of the junction to the other.

On that note, connectors are of key importance, as there’s nothing worse than turning up with a fibre patch lead with the wrong connectors on the end. Phil explains the differences, then looks at how to splice fibres together, the issues that need to be taken care of to do it well, and easy ways to fault-find. Phil finishes the talk explaining how single-mode differs and offers some resources to learn more.

This video was recorded at a Jigsaw24 Tech Breakfast while Phil Crawley was their Chief Engineer. Download the slides

Watch now!
Speaker

Phil Crawley
Lead Engineer, Media Engineers Ltd.
Former Chief Engineer, Jigsaw24