Video: Bandwidth Prediction in Low-Latency Chunked Streaming

How can we overcome one of the last big problems in making CMAF generally available: making ABR work properly?

ABR, Adaptive Bitrate, is a technique which allows a video player to choose which bitrate of video to download from a menu of several options. Typically, the highest bitrate will have the highest quality and/or resolution, with the smallest files being low resolution.

The reason a player needs the flexibility to choose the bitrate of the video is mainly changing network conditions. If someone else on your network starts watching some video, you may no longer be able to download video quickly enough to keep watching in full-quality HD and you may need to switch down. If they stop, you want your player to switch up again to make the most of the bandwidth available.

Traditionally this is done fairly simply by measuring how long each chunk of the video takes to download. Simply put, if you download a file, it will come to you as quickly as the network allows. So measuring how long each video chunk takes to arrive gives you an idea of how much bandwidth is available; if it arrives very slowly, you know you are close to running out of bandwidth. But in low-latency streaming, you are receiving video as quickly as it is produced, so it's very hard to see any difference in download times and this breaks the ABR estimation.
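
To make that concrete, here is a minimal sketch of classic segment-based throughput estimation. The helper names and the 0.8 safety margin are illustrative assumptions, not taken from any particular player.

# Minimal sketch of traditional throughput-based ABR estimation.
# Names and the 0.8 safety margin are illustrative, not from a real player.
import time
import urllib.request

def measure_throughput(url: str) -> float:
    """Download one segment and return the observed throughput in bits per second."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        data = response.read()
    elapsed = time.monotonic() - start
    return len(data) * 8 / elapsed

def pick_rendition(throughput_bps: float, renditions_bps: list[int]) -> int:
    """Pick the highest bitrate that fits under the measurement, keeping a safety margin."""
    safe = throughput_bps * 0.8
    candidates = [r for r in renditions_bps if r <= safe]
    return max(candidates) if candidates else min(renditions_bps)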

Making ABR work for low latency is the topic Ali covers in this talk from Mile High Video 2019, where he presents some of the findings from his recently published paper, co-authored with, among others, Bitmovin's Christian Timmerer, which won the DASH-IF Excellence in DASH award.

He starts by explaining how players currently behave with low-latency ABR, showing how they miss out on switching to higher or lower renditions. Then he looks at the differences, on the server and in the player, between non-low-latency and low-latency streams. This lays the foundation to discuss ACTE – ABR for Chunked Transfer Encoding.

ACTE is a method of analysing bandwidth which assumes that some chunks will be delivered as fast as the network allows and some won't be. The trick is detecting which chunks actually show the network speed; Ali explains how this is done and shows the results of their evaluation.
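
To illustrate the underlying idea only, here is a rough sketch of the concept: chunks that download in roughly their own media duration were throttled by the live encoder, so only faster-than-real-time chunks carry useful bandwidth information. This is not the ACTE algorithm from the paper; the threshold and names are assumptions.

# Rough sketch of filtering chunk measurements in low-latency streaming.
# The 0.8 threshold and helper names are illustrative, not the ACTE algorithm.

def usable_bandwidth_samples(chunks):
    """chunks: list of (size_bytes, download_s, media_duration_s) tuples."""
    samples = []
    for size_bytes, download_s, media_duration_s in chunks:
        # A chunk that took about as long to download as it takes to play was
        # limited by the encoder, so its download time says nothing about the network.
        if download_s < media_duration_s * 0.8:
            samples.append(size_bytes * 8 / download_s)
    return samples

def estimate_bandwidth(chunks):
    samples = usable_bandwidth_samples(chunks)
    if not samples:
        return None  # every chunk was source-limited; keep the previous estimate
    return sum(samples) / len(samples)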

Watch now!

Speaker

Ali C. Begen
Technical Consultant and
Computer Science Professor

Video: Current Status of ST 2110 over 25 GbE

IT still has some catching up to do. The promise of video over IP and ST 2110 is to benefit from the IT industry's scale and products, but when it comes to bandwidth, it isn't always there. This talk looks at 25 gigabit (25GbE) network interfaces to see how well they work and whether they've arrived on the broadcast market.

Koji Oyama from M3L Inc. explains why the move from 10GbE to 25GbE makes sense: it allows more scalability with fewer cables. He then looks at the physical characteristics of the signals, both as 25GbE and when linked together into a 100GbE path.


We see that the connectors and adapters are highly similar and then look at a cost analysis. What’s actually available on the market now and what is the price difference? Koji also shows us that FPGAs are available with enough capacity to manage several ports per chip.

So if the cost seems achievable, perhaps the decision should come down to reliability. Fortunately, Koji has examined the bit error rates and shows data indicating that Reed-Solomon protection, called RS-FEC, is needed. Reed-Solomon is a simple protection scheme which has been used in CDs, satellite transmissions and many other places where a lightweight algorithm for error recovery is needed. Koji goes into some detail here explaining RS-FEC for 25GbE.
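
As a back-of-the-envelope illustration, the numbers below assume the RS(528,514) code over 10-bit symbols commonly associated with 25GbE RS-FEC; treat the figures as indicative rather than taken from Koji's slides.

# Back-of-the-envelope Reed-Solomon correction capability. The (n, k) values
# assume RS(528,514), commonly associated with 25GbE RS-FEC; illustrative only.
n, k = 528, 514          # codeword symbols / message symbols
parity = n - k           # 14 parity symbols per codeword
t = parity // 2          # up to 7 corrupted symbols per codeword can be corrected
overhead = parity / n    # roughly 2.7% of the codeword is protection data

print(f"parity symbols: {parity}, correctable symbols: {t}, overhead: {overhead:.1%}")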

Koji has also looked into timing, covering both synchronisation and jitter and wander. He presents the results of monitoring these parameters in 10GbE and 25GbE scenarios.

Finishing up by highlighting the physical advantages of moving to 25GbE, such as density and streams-per-port, Koji takes a moment to point out many of the 25GbE products available at NAB as final proof that 25GbE is increasingly available for use today.

Watch now!

Copy of the presentation

Speaker

Koji Oyama
Director,
M3L

Video: Network Automation Using Python and Google Sheets

“I’m lazy and I’m a master procrastinator.” If you sympathise, learn how to automate network configuration with some code and spreadsheets.

In this video, the EBU’s Ievgen Kostiukevych presents a simple way to automate basic operations on Arista switches working in a SMPTE ST 2110 environment. This is done with a Python script which retrieves parameters stored in Google Sheets and uses Arista’s eAPI to implement changes to the switch.

The Python script was created as a proof of concept for the EBU’s test lab where frequent changes of VLAN configuration on the switches were required. Google Sheets has been selected as a collaborative tool which allows multiple people to modify settings and keep track of changes at the same time. This approach makes repetitive tasks like adding or changing descriptions of the ports easier as well.

Functionality currently supported:

  • Creating VLANs and modifying their descriptions based on the data in a Google Sheet
  • Changing access VLANs and interface descriptions for the ports based on the data in a Google Sheet
  • Reading interface status and the MAC address table from the switch and writing the data to the spreadsheet

The script can be downloaded from GitHub.
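
For flavour, here is a heavily simplified sketch of the approach using the gspread and pyeapi packages. The sheet name, worksheet layout and switch name are made up for illustration and are not taken from the script on GitHub.

# Heavily simplified sketch: read VLANs from a Google Sheet and apply them to
# an Arista switch over eAPI. Sheet name, layout and switch name are invented.
import gspread
import pyeapi

# Authenticate with a Google service account and read the worksheet.
gc = gspread.service_account(filename="credentials.json")
worksheet = gc.open("st2110-lab-config").sheet1
rows = worksheet.get_all_records()  # e.g. [{"vlan": 110, "name": "PTP"}, ...]

# Connect to the switch via eAPI (connection details in ~/.eapi.conf).
node = pyeapi.connect_to("lab-switch-1")
for row in rows:
    node.config([
        f"vlan {row['vlan']}",
        f"name {row['name']}",
    ])

# Read interface status back from the switch and note it in the sheet.
status = node.enable("show interfaces status")[0]["result"]
worksheet.update_acell("E1", str(len(status["interfaceStatuses"])))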

Speaker

Ievgen Kostiukevych
Senior IP Media Technology Architect and Trainer
EBU

Video: A Study of Protocols for Low Latency Video Transport Over the Internet

Contribution via the internet is tricky but has great promise. With packet loss and jitter all over the place, how can you deliver perfect video?

Ciro Noronha from Cobalt Digital explains the two ways people get around the unreliability of the internet: FEC and retransmission. Forward Error Correction uses some maths to transmit extra data on top of the stream which allows the receiver to recover a limited number of lost packets. This method is standard in satellite transmission, where it is always used to add robustness.
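
As a toy illustration of the principle (not the specific FEC scheme discussed in the talk), a single XOR parity packet per group lets the receiver rebuild any one missing packet:

# Toy XOR-parity FEC: one parity packet per group recovers a single lost packet.
# Illustrates the principle only; real schemes add more structure.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets: list[bytes]) -> bytes:
    return reduce(xor_bytes, packets)

def recover_missing(received: dict[int, bytes], parity: bytes, group_size: int) -> bytes:
    """Rebuild the one missing packet in a group from the parity packet."""
    missing = [i for i in range(group_size) if i not in received]
    assert len(missing) == 1, "XOR parity can only repair a single loss per group"
    return reduce(xor_bytes, list(received.values()) + [parity])

# Sender: four equal-sized packets plus one parity packet.
packets = [bytes([i]) * 188 for i in range(4)]   # 188-byte TS-sized payloads
parity = make_parity(packets)

# Receiver: packet 2 was lost in transit but can be rebuilt.
received = {i: p for i, p in enumerate(packets) if i != 2}
assert recover_missing(received, parity, 4) == packets[2]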

Retransmission is different in that it requires a return channel. When a receiver spots a missing packet, it asks for it to be resent. Because the receiver has to wait for a reply, retransmission protocols like SRT, ARQ and RIST run with a configurable buffer which needs to be big enough for at least one round trip. FEC schemes also require a buffer, as the receiver needs to wait for a number of packets before it can complete the maths required.
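
A quick sketch of the buffer-sizing arithmetic follows; the multiplier for repeated retransmission attempts is an illustrative assumption, not a value from the SRT, ARQ or RIST specifications.

# Back-of-the-envelope retransmission buffer sizing. The rule of thumb is that
# the buffer must cover at least one round trip; allowing several attempts per
# packet multiplies that. The multiplier is an illustrative assumption.
def min_buffer_ms(rtt_ms: float, retransmission_attempts: int = 3) -> float:
    """Smallest receive buffer (ms) leaving room for the given number of
    request/resend cycles on top of the initial round trip."""
    return rtt_ms * (1 + retransmission_attempts)

print(min_buffer_ms(rtt_ms=50))    # 200 ms for a 50 ms round trip
print(min_buffer_ms(rtt_ms=120))   # 480 ms on a long-haul path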

Ciro introduces FEC and ARQ before presenting experiments he's run on both to see the limits of their error-correcting capabilities and their latency. He finishes by explaining what RIST is and its current status.

Bring yourself up to date with RIST!
Watch now!

Speaker

Ciro Noronha
Director of Technology,
Cobalt Digital