If there’s any talk that cuts through the AV1 hype, it’s this one. This talk from the @Scale conference starts by re-introducing AV1 and AoM (the Alliance for Open Media), then moves quickly on to encoding techniques and the toolsets now available in AV1.
Google engineer Yue Chen starts by looking at the evolution from VP9 to AV1.
There is no doubt that streaming video is here to stay. Every month, more consumers log into and subscribe to one or more OTT services. But as those services grow beyond geographical borders, providers must ensure that their offerings can meet the demands of a swelling user base located around the world. Given that this involves using the public Internet to deliver video to different pockets of the globe, OTT operators often struggle to implement the best video delivery architecture: what infrastructure to purchase, where to install it, which partners to employ, and how to ensure the best possible viewer experience. This webinar explores some of the proven methods for scaling video delivery, as well as best practices employed by some of the world’s biggest streamers.
Speakers:
– Head of Exploration, Streaming Video Alliance
– President-Chair at Ultra HD Forum & VP Video Strategy, Harmonic
IP production is very important for sports streaming, including esports, where its flexibility is a big plus over SDI infrastructure. This panel discusses NDI and SMPTE ST 2110.
Esports, in particular, uses many cameras: point-of-view cameras, PC outputs and the normal camera positions needed to make a good show. A technology like NDI really helps keep costs down, since every SDI port is expensive and takes space, and it allows computers to send video ‘natively’ without dedicated hardware.
NDI is an IP video specification from Newtek (now owned by VizRT) which can be licenced for free and is implemented in products from Ross, VizRT, Panasonic, OBS, Epiphan and hundreds more. It allows ultra-low-latency video at 100Mbps or low-latency video at 8Mbps.
The panel discusses where NDI is the right choice compared to SDI. In the right places, such as stadia, networking is more convenient; but if you only have a short distance to run, SDI can often be the better plan. Similarly, until NDI version 4, which adds timing synchronisation, ST 2110 has been a better bet for the synchronised video needed for ISO recordings.
For many events which combine numerous cameras with computer outputs, whether those computers are playing YouTube, Skype or something else, removing the need to convert everything to SDI makes the production much more flexible.
The panel finishes by discussing audio and taking questions from the floor, covering issues such as embedded alpha, further ST 2110 considerations and UHD workflows.
How can we overcome one of the last big problems in making CMAF generally available: making ABR work properly?
ABR (Adaptive Bitrate) is a technique which allows a video player to choose which bitrate of video to download from a menu of several options. Typically, the highest bitrate offers the highest quality and/or resolution, while the smallest files are low resolution.
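The menu of options is often called a bitrate ladder. A minimal sketch of how a player might pick from one is below; the ladder values, the `pick_rendition` name and the 80% safety margin are illustrative assumptions, not something from the talk:

```python
# Hypothetical bitrate ladder: (label, bitrate in kbps), lowest first.
LADDER = [
    ("240p", 400),
    ("480p", 1200),
    ("720p", 3000),
    ("1080p", 6000),
]

def pick_rendition(estimated_kbps, safety=0.8):
    """Pick the highest rendition whose bitrate fits within a safety
    margin of the estimated bandwidth; fall back to the lowest rung."""
    budget = estimated_kbps * safety
    best = LADDER[0]
    for name, kbps in LADDER:
        if kbps <= budget:
            best = (name, kbps)
    return best

# With ~5 Mbps estimated, the 6000 kbps rung exceeds the 4000 kbps
# budget, so the player settles on 720p.
print(pick_rendition(5000))  # ('720p', 3000)
```

Everything hinges on `estimated_kbps` being a trustworthy number, which is exactly what breaks in low-latency streaming, as discussed below.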
The reason a player needs the flexibility to choose the bitrate is mainly changing network conditions. If someone else on your network starts watching video, you may no longer be able to download video quickly enough to keep watching in full-quality HD, and you may need to switch down. If they stop, you want your player to switch back up to make the most of the available bitrate.
Traditionally this is done fairly simply by measuring how long each chunk of video takes to download. Simply put, when you download a file, it arrives as quickly as it can, so measuring how long each video chunk takes to reach you gives an idea of how much bandwidth is available; if it arrives very slowly, you know you are close to running out. But in low-latency streaming, you are receiving video as quickly as it is produced, so it’s very hard to see any difference in download times, and this breaks the ABR estimation.
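The traditional measurement above is just bits delivered over time taken. This sketch shows the calculation and why it collapses for low-latency chunks; the chunk sizes and timings are made-up numbers for illustration:

```python
def measure_throughput_kbps(chunk_bytes, download_seconds):
    """Classic throughput estimate: bits delivered / time taken."""
    return chunk_bytes * 8 / download_seconds / 1000

# A 1 MB chunk fetched at full network speed in 0.5 s implies
# ~16 Mbps of available bandwidth -- plenty of headroom.
fast = measure_throughput_kbps(1_000_000, 0.5)   # 16000.0 kbps

# In a low-latency stream, the same chunk trickles in as it is
# produced, so the transfer takes ~2 s regardless of network
# capacity, and the estimate collapses to the encoding bitrate.
slow = measure_throughput_kbps(1_000_000, 2.0)   # 4000.0 kbps
```

The second figure tells the player nothing about spare capacity, which is why a low-latency player can neither confidently switch up nor see trouble coming before the buffer runs dry.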
He starts by explaining how players currently behave with low-latency ABR, showing how they fail to switch to higher or lower renditions. He then looks at the differences, on both the server and the player, between non-low-latency and low-latency streams. This lays the foundation for discussing ACTE – ABR for Chunked Transfer Encoding.
ACTE is a method of analysing bandwidth built on the assumption that some chunks will be delivered as fast as the network allows and some won’t be. The trick is detecting which chunks actually reflect the network speed; Ali explains how this is done and shows the results of their evaluation.
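ACTE’s actual detection and prediction machinery isn’t reproduced here, but the core idea of separating the two kinds of chunks can be sketched roughly: a chunk that took almost its full media duration to arrive was paced by the encoder (source-limited) and says nothing about spare bandwidth, so only the faster, network-limited chunks are averaged. All names, thresholds and figures below are illustrative assumptions, not the paper’s algorithm:

```python
def usable_samples(chunks, tolerance=0.8):
    """Keep only network-limited chunks: if a chunk downloaded in
    almost its full media duration, it arrived at the encoder's pace
    and is useless for bandwidth estimation."""
    return [c for c in chunks
            if c["download_s"] < tolerance * c["media_s"]]

def estimate_kbps(chunks):
    """Average the throughput of the trustworthy chunks in a window."""
    samples = usable_samples(chunks)
    if not samples:
        return None  # no reliable measurement in this window
    rates = [c["bytes"] * 8 / c["download_s"] / 1000 for c in samples]
    return sum(rates) / len(rates)

# Three 0.5 s chunks: two arrived at production pace, one raced ahead.
window = [
    {"bytes": 250_000, "media_s": 0.5, "download_s": 0.49},  # source-limited
    {"bytes": 250_000, "media_s": 0.5, "download_s": 0.10},  # network-limited
    {"bytes": 250_000, "media_s": 0.5, "download_s": 0.48},  # source-limited
]
# Only the middle chunk counts, giving a ~20 Mbps estimate instead of
# the misleading ~4 Mbps a naive average over all three would suggest.
```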