Video: Towards a healthy AV1 ecosystem for UGC platforms


Twitch is an ambassador for new codecs and puts its money where its mouth is; it is one of the few live streaming platforms which streams with VP9 – and not only that, it does so with cloud FPGA acceleration thanks to Xilinx’s acquisition of NGCODEC.

As such, they have a strong position on AV1. With such a tech-savvy crowd, they stream most of their videos at the highest bitrate (circa 6Mbps). With millions of concurrent videos, they are highly motivated to reduce bandwidth where they can, and adopting new codecs is one way to do that.

Principal Research Engineer Yueshi Shen discusses Twitch’s stance on AV1 and the work they are doing to contribute, in order to get the best result at the end of the process – one which will help not only Twitch but the worldwide community. He starts by giving an overview of Twitch: while many of us are familiar with the site, its scale and needs may be new information, and they drive the understanding of the rest of the talk.

Reduction in bitrate is a strong motivator, but so is the fact that supporting many codecs is a burden; AV1 promises the possibility of reducing the number of supported codecs/formats. Their active contribution to AV1 was also driven by ‘hand wave’ latency: a simple method of determining the approximate latency of a link, which is naturally very important to a live streaming platform. This led to Twitch submitting a proposal for SWITCH_FRAME, a technique, accepted into AV1, which allows the player to change more frequently between the different quality/bitrate streams available. This results in a better experience for the viewer as well as reduced bitrate and smaller buffers.
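
To make the mechanics concrete, here is a minimal, hypothetical sketch of an ABR player loop which is only allowed to change rendition at a switch frame. The bitrate ladder, frame fields and headroom factor are illustrative assumptions, not taken from Twitch’s player or the AV1 reference code.

```python
# Hypothetical ABR loop: change rendition only at AV1 switch frames.
# The ladder, frame fields and headroom factor are illustrative only.

RENDITIONS_KBPS = [1000, 3000, 6000]  # assumed quality/bitrate ladder

def pick_rendition(measured_kbps, headroom=0.8):
    """Highest rendition that fits the measured throughput, with headroom."""
    affordable = [r for r in RENDITIONS_KBPS if r <= measured_kbps * headroom]
    return max(affordable) if affordable else RENDITIONS_KBPS[0]

def play(frames):
    """frames: dicts like {'pts': 0, 'is_switch_frame': True, 'measured_kbps': 4500}."""
    current = RENDITIONS_KBPS[0]
    for frame in frames:
        target = pick_rendition(frame["measured_kbps"])
        # Without switch frames the player must wait for the next keyframe,
        # often seconds away; SWITCH_FRAME lets it move at the next switch
        # frame, so it adapts faster and can run with smaller buffers.
        if target != current and frame["is_switch_frame"]:
            current = target
        yield frame["pts"], current
```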

Yueshi then looks at the projected AV1 deployment roadmap and discusses when GPU/hardware support will be available. The legal aspect of AV1 – which promises to be a free-to-use codec – is also discussed, with the news that a patent pool has formed around AV1.

The talk finishes with a Q&A.

Watch now!

Speaker

Yueshi Shen
Principal (Level 7) Research Engineer & Engineering Manager,
Twitch

Video: BBC Cardiff Central Square – Update

Closely watched throughout the industry, this long-in-the-making project deploys SMPTE ST 2110 throughout a fully green-field development. Its failure would be a big setback for the push towards a completely network-based broadcast workflow.

The BBC Cardiff Central Square project is nearing completion now and is a great example of the early-adopter approach to bringing cutting-edge, complex, large-scale projects to market. They chose a single principal vendor so that they could work closely in partnership at a time when the market for ST 2110 was very sparse. This gave them leverage over the product roadmap and allowed for the tight integration which would be required to bring this project to market.

Nowadays, the market for ST 2110 products continues to mature, and whilst it still has quite a way to go, it has also come a long way in the past four years. Companies embarking on similar projects now have a better choice of products, and some may now feel they can pick ‘best of breed’ rather than taking the BBC approach. Whichever approach is taken, there is still a lot to be gained by following and learning from the mistakes and successes of others. Fortunately, Mark Patrick, Lead Architect on the project, is here to provide an update.

Mark starts by giving an overview of the project, its scale and its aims. He outlines the opportunities and challenges it presents and the key achievements and milestones passed to date.

Live IP has benefits and risks. Mark takes some time to explain the benefits of the flexibility and the increasingly lower cost of the infrastructure, and weighs them against the risks, which include the continually developing standards and the skills challenges.

The progress overview names Grass Valley as the main vendor, with control via BNCS having been designed and virtualised, the ST 2110 network topology deployed, and the final commissioning and acceptance testing now in progress.

The media topology for the system uses the principle of an A and a B network plus a separate control network. It’s fundamentally a leaf-and-spine network, and Mark shows how this links in not only to the Grass Valley equipment but also to the audio equipment via Dante and AES67. Mark takes some time to discuss the separate networks they’ve deployed for the audio part of the project, driven by compatibility issues: within the constraints of this project, it was better to separate the networks than to make the changes necessary to force them together.

PTP timing is discussed with a nod to the fact that PTP design can be difficult, and expensive too. NMOS is also actively being worked on and remains an outstanding issue, both in terms of getting enough vendors to support it and in having compatible systems once an implementation is deployed. This has driven the BBC to use NMOS in a more limited way than desired and to create fall-back systems.
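
For background, the arithmetic PTP itself performs is simple; the difficulty Mark alludes to lies in the network. Here is a minimal sketch of the classic offset/delay calculation, which assumes a symmetric path; that assumption failing is one reason PTP network design is hard and expensive.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic PTP calculation from the four event timestamps.

    t1: master sends Sync, t2: slave receives Sync,
    t3: slave sends Delay_Req, t4: master receives Delay_Req.
    Assumes a symmetric network path; any asymmetry appears
    directly as timing error.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2          # slave clock minus master
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Example: a slave clock running 50 us fast over a 100 us path
print(ptp_offset_and_delay(0, 150, 200, 250))  # -> (50.0, 100.0)
```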

From this we can deduce, if it wasn’t already understood, that interoperability testing is a vital aspect of the project. Mark explains that formalised, IT-style automated testing is really important in creating a uniform way of ensuring problems have been fully addressed and that there are no regressions; ST 2110 systems are complex, and fault-finding can be similarly complex and time-consuming.
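
As a flavour of what such IT-style automated testing can look like, here is a hypothetical pytest-style check against a device’s NMOS IS-04 Node API. The endpoint path is standard IS-04, but the device address is made up and this is not the BBC’s actual test suite.

```python
import requests

NODE = "http://192.0.2.10:3000"   # hypothetical device under test
API = f"{NODE}/x-nmos/node/v1.2"  # standard NMOS IS-04 Node API base path

def test_node_advertises_senders():
    """Regression check: the device must enumerate its senders via IS-04."""
    r = requests.get(f"{API}/senders", timeout=5)
    assert r.status_code == 200
    senders = r.json()
    assert isinstance(senders, list) and senders, "no senders advertised"
```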

Mark leaves us by explaining what keeps him awake at night, which includes items such as the lack of available test equipment, the lack of single-stream UHD support and NMOS. This leads him to a few comments on ST 2110 readiness, such as the need for vendors to put much more effort into configuration and management tools.

Anyone with an interest in IP in broadcast will be very grateful for Mark’s, and the BBC’s, willingness to share the project’s successes and challenges in such a constructive way.

Watch now!

Speaker

Mark Patrick
Lead Architect,
BBC Major Projects Infrastructure

Video: IBC2019 SRT Open Source Technical Panel

SRT allows unreliable networks like the Internet to be used for reliable, encrypted video contribution. Created by Haivision and now an open-source technology, SRT has a continually growing alliance of users as the technology develops and adds features. This panel, from IBC 2019, gives an update on what’s new with SRT and how it’s being used daily in broadcast.

Marc Cymontkowski starts with an overview of the new features of SRT, mentioning its active GitHub repository and pointing to recent advances in the encryption available, upcoming FEC and the beginnings of SMPTE ST 2022-7-like redundancy. He also takes a look at how SRT fares against RTMP, the venerable incumbent technology for contribution of streams over the internet. Official support for RTMP will be coming to an end next year, so there is much interest in what may replace it. Marc makes the case that, over the same link, SRT tends to have a latency of a half to a third that of RTMP and also performs better at higher bitrates.
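
Part of the reason for that latency gap is architectural: rather than TCP’s unbounded retransmission, SRT retransmits within a fixed, configurable latency window and drops what cannot arrive in time. The following is a conceptual sketch of that trade-off, not the actual libsrt implementation:

```python
# Conceptual model of SRT's fixed-latency receive window (not libsrt code).
# A lost packet is recovered only if a retransmission round trip fits inside
# the window; otherwise SRT drops it and carries on, where TCP-based RTMP
# would stall the whole stream behind the missing bytes.

LATENCY = 0.120  # seconds; SRT's configurable latency (example value)

def deliver(packets, rtt=0.040):
    """packets: dicts like {'seq': 1, 'lost': False}, in any order."""
    for p in sorted(packets, key=lambda pkt: pkt["seq"]):
        if not p["lost"]:
            yield p["seq"], "on time"
        elif rtt <= LATENCY:  # retransmission fits within the window
            yield p["seq"], "recovered"
        else:
            yield p["seq"], "dropped; stream keeps running"

print(list(deliver([{"seq": 1, "lost": False}, {"seq": 2, "lost": True}])))
# -> [(1, 'on time'), (2, 'recovered')]
```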

RTP, the Real-Time Transport Protocol, is an important feature when it comes to redundancy. By using RTP’s ability to sequence-stamp each packet, the receiver can take two identical RTP streams – say, from two separate ISPs – and fill in missing packets on one stream from the packets of the other. This is a very powerful way of ensuring reliability over the internet, so Marc makes the point that using SRT doesn’t stop you using RTP.
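
A minimal sketch of that merge, keyed on RTP sequence numbers; a real receiver also has to handle reordering and the 16-bit sequence-number wrap, which this illustration ignores:

```python
def merge_redundant_streams(path_a, path_b):
    """Fill gaps in one RTP stream from an identical stream on another path.

    path_a and path_b map RTP sequence number -> payload for the same
    stream sent over, say, two separate ISPs.
    """
    merged = {}
    for seq in sorted(set(path_a) | set(path_b)):
        # Prefer path A; recover anything it lost from path B.
        merged[seq] = path_a.get(seq, path_b.get(seq))
    return merged

# Packet 2 was lost on path A but arrived via path B:
stream_a = {1: b"p1", 3: b"p3"}
stream_b = {1: b"p1", 2: b"p2", 3: b"p3"}
print(sorted(merge_redundant_streams(stream_a, stream_b)))  # [1, 2, 3]
```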

Simen Frostad then takes to the stage to explain why Bridge Technologies has added SRT support and how the SRT Hub will be a very important step forward. Then it’s Leonardo Chaves’ turn to explain how broadcaster Globo is using SRT to transform its video workflows and reduce OPEX to one third of its satellite costs.

Steve Russell from Red Bee talks about how they use SRT to create new, or lower-cost, circuits and services for their customers. They’re able to use the internet not only for contribution from events but also to safely get video in and out of the cloud.

With these use-cases in mind, the panel opens up to thirty minutes of wide-ranging technical and non-technical questions.

Watch Free Now!
Free registration required
Speakers

Brian Ring
SRT Evangelist,
Ring Digital
Simen Frostad
Chairman & Co-Founder,
Bridge Technologies
Steve Russell
Head of OTT & Media Management Portfolios,
Red Bee Media
Marc Cymontkowski
VP Engineering,
Haivision
Leonardo Chaves
Exec. Manager of New Transmission Technologies,
Globo

Video: Quantitative Evaluation and Attribute of Overall Brightness in an HDR World

HDR has long been heralded as a highly compelling and effective technology: high dynamic range can improve video of any resolution and much better mimics the natural world. Its growth into real-world use remains relatively slow, but it continues to show progress.

HDR is so compelling because it can feed our senses more light, and it’s no secret that TV shops know we like nice, bright pictures on our TV sets. But the reality of production in HDR is that you have to contend with human eyes, which have a great ability to see both dark and bright images – but not at the same time. The eye’s ability to simultaneously distinguish brightness spans about 12 stops, which is only two thirds of its non-simultaneous total range.
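
Since each stop is a doubling of light, those figures translate into contrast ratios as follows; a quick worked example, noting that the two-thirds figure implies a total range of roughly 18 stops:

```python
simultaneous_stops = 12
total_stops = simultaneous_stops * 3 / 2  # two thirds of the total -> 18 stops
print(2 ** simultaneous_stops)            # 4096:1 simultaneous contrast
print(2 ** round(total_stops))            # 262144:1 once the eye adapts
```
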
The fact that our eyes constantly adapt and, let’s face it, interpret what they see makes understanding brightness in videos tricky. There are dependencies on the overall brightness of the picture at any one moment, its recent brightness history, the brightness of locally adjacent parts of the image, the ambient background and much more.

Stelios Ploumis steps into this world of varying brightness to create a way of quantitatively evaluating brightness for HDR. The starting point is the Average Picture Level (APL), which is what the SDR world uses to indicate brightness. With the greater dynamic range in HDR and the way this is implemented, it’s not clear that APL is up to the job.

Stelios explains his work analysing APL in SDR and HDR and shows how simply taking the average of a picture can trick you into seeing two images as practically the same, even though the brain clearly sees one as ‘brighter’ than the other. On the same track, he also explains ways in which we can differentiate signals better, for instance taking into account the spread of the brightness values as opposed to APL’s normalised average of all pixels’ values.
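
A toy illustration of the point, assuming simple normalised luma values (the talk’s exact metrics may differ): two frames with identical APL but very different spreads.

```python
import statistics

def apl(pixels):
    """Average Picture Level: the normalised mean of all pixel values."""
    return sum(pixels) / len(pixels)

def spread(pixels):
    """Standard deviation as one simple measure of brightness spread."""
    return statistics.pstdev(pixels)

# A flat mid-grey frame vs a mix of deep shadows and bright highlights
# which the eye reads as 'brighter', despite the identical average.
flat = [0.5] * 8
punchy = [0.05] * 4 + [0.95] * 4

print(apl(flat), apl(punchy))        # 0.5 0.5  -> APL cannot tell them apart
print(spread(flat), spread(punchy))  # 0.0 0.45 -> spread can
```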

The talk wraps up with a description of how the testing was carried out and a summary of the proposals to improve the quantitative analysis of HDR video.

Watch now!

Speaker

Stelios Ploumis
PhD Research Candidate,
MTT Innovation Inc.