Video: Remote Production in the Cloud for DR and the New Normal

How does NDI fit into the renewed interest in working remotely, operating broadcast workflows remotely and moving workflows into the cloud? Whilst SRT and RIST have ignited imaginations over how to reliably ingest content into the cloud, an MPEG AVC/HEVC workflow doesn’t make sense due to the latencies involved. NDI is a lightly compressed technology with latency low enough to make cloud workflows feel almost immediate.

Vizrt’s Ted Spruill and Jorge Dighero join moderator Russell Trafford-Jones to explore the challenges the pandemic has thrown up and the practical ways in which NDI can meet many of the needs of cloud workflows. We saw in the talk Where can SMPTE ST 2110 and NDI co-exist? that NDI is a tool to get things done, just like ST 2110, and that both have their place in a broadcast facility. This video takes that as read and looks at the practical abilities of NDI both in and out of the cloud.

Taking the form of a demo followed by an extensive Q&A, this talk covers latency, running NDI in the cloud, networking considerations such as layer 2 and layer 3 networks, ease of discovery and routing, contribution into the cloud, use of SRT and RIST, comparison with JPEG XS, speed of deployment and much more!

Click to watch this free, no-registration webcast at SMPTE
Speakers

Jorge Dighero
Senior Solutions Architect,
Vizrt
Ted Spruill
Sales Manager-US Group Stations,
Vizrt
Moderator: Russell Trafford-Jones
Editor, TheBroadcastKnowledge.com
Director of Education, Emerging Technologies, SMPTE
Manager, Support & Services, Techex

Video: Scaling up Anime with Machine Learning and Smart Real Time Algorithms

For too long, video has been dominated by natural scenes, with compression optimised for skin tones. Recently we have seen technologies that take care of displaying other types of video correctly, such as the screen content and computer game optimisations in VVC, and, as we explore in this talk, animation-aware upscaling.

Anime, a Japanese genre of animation, is not, from an objective point of view, very different from most cartoons; the drawing style is black lines on relatively simple, solid areas of colour. Anime itself is a clearly distinct genre whose fans are much more sensitive to quality, but for codecs and scalers, 2D animation in general is a style that easily shows artefacts.

Up- and down-scaling is the process of making an image of, say, 1920×1080 pixels larger, for instance to 3840×2160, or smaller, say to SD resolution. Achieving this without jagged edges or blurriness is difficult; conventional maths can do a decent job but often leaves something to be desired. Christopher Kennedy from Crunchyroll explains the testing he’s done looking at a super-resolution upscaling technique which uses machine learning to improve the quality of upscaled anime video.
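To make the conventional baseline concrete, here is a minimal sketch (not from the talk) of bilinear and bicubic upscaling with OpenCV in Python, taking an assumed 1920×1080 frame up to 3840×2160; the filenames are hypothetical.

```python
# Minimal sketch of conventional upscaling with OpenCV (illustration only):
# bilinear and bicubic interpolation taking a 1920x1080 frame to 3840x2160.
import cv2

frame = cv2.imread("anime_frame_1080p.png")  # hypothetical input frame

# cv2.resize takes the target size as (width, height)
upscaled_bilinear = cv2.resize(frame, (3840, 2160), interpolation=cv2.INTER_LINEAR)
upscaled_bicubic = cv2.resize(frame, (3840, 2160), interpolation=cv2.INTER_CUBIC)

cv2.imwrite("upscaled_bilinear.png", upscaled_bilinear)
cv2.imwrite("upscaled_bicubic.png", upscaled_bicubic)
```

Bicubic generally preserves edges better than bilinear at the cost of more computation, which is exactly the speed-versus-quality trade-off discussed next.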

Waifu2x is an open-source algorithm which uses Convolutional Neural Networks (CNNs) to scale images and remove artefacts. To start with, Christopher explains the background of traditional algorithmic upscaling, discussing the fact that better-looking algorithms take longer to run, so TVs often choose the fastest, leaving them looking pretty bad when fed SD video. It’s better for the streaming provider to spend the time upconverting to 4K, allowing the viewer a better final quality on their set.

Machine learning needs a training set, and one thing which has contributed to waifu2x’s success with anime is that it has been trained only on examples of anime, leaving it well practised in improving this type of image. Christopher presents the results of his tests comparing standard bilinear and bicubic scaling with waifu2x, showing the VMAF, PSNR and SSIM scores.
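As a rough illustration of how such a comparison can be scored (an assumption on my part, not Christopher’s actual test harness), PSNR and SSIM can be computed between an upscaled frame and a native-resolution reference using scikit-image; VMAF is typically measured separately, for example with FFmpeg’s libvmaf filter. The filenames below are hypothetical.

```python
# Sketch of scoring an upscaled frame against a native-resolution reference
# with PSNR and SSIM (illustration only, not the talk's test setup).
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = cv2.imread("native_2160p.png")   # hypothetical ground-truth frame
upscaled = cv2.imread("waifu2x_2160p.png")   # hypothetical upscaled frame

psnr = peak_signal_noise_ratio(reference, upscaled)
ssim = structural_similarity(reference, upscaled, channel_axis=2)

print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```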

Finishing off the video, Christopher talks about the time waifu2x takes to run, the cost of running it in the cloud, and he shares some of the command lines he used.

Watch now!
Speaker

Christopher Kennedy
Staff Video Engineer,
Crunchyroll

Video: Low Latency, Real-Time Streaming & WebRTC

Can any stream be too low-latency? For some, matching broadcast latency is all they need. But for others, particularly gaming, gambling or more interactive services, sub-second is a must and they are happy to swap out parts of their technology stack to make that happen. WebRTC is often seen as the best choice for anyone wanting to achieve an almost instant stream. Started by Google in 2011 for video conferencing applications, WebRTC hit a 1.0 release in 2018 and has been adopted by a number of companies catering to the broadcast market.

WebRTC stands out among the plethora of streaming protocols since it is an actual stream of data and not a series of files transferred just in time. Traditionally, buffers have been heavily used in streaming because it was so hard to get data to the player when the mainstream internet was starting out in the 90s, and again as the mobile internet was establishing itself 10 years later. Whilst those buffers are very helpful in dealing with delayed data, they are a big setback in delivering a low-latency stream. With WebRTC there is very little buffering, so when using the protocol you have to accept that you may not get all your data delivered, and if packets are missing, glitches will be seen. This is one significant difference from MPEG-DASH and HLS, which, thanks to TCP, will show you either a blank screen or a perfect rendition of the file chunk that was sent. This is an example of the compromises of going to sub-second latency: there are no second chances to get the packet again. And whilst this compromise may be a great exchange for an auction site or betting service, for other streaming services it may be better to use CMAF with 3-second latency.

In this talk, Limelight Networks Video Architect Andrew Crowe introduces WebRTC and explains how it can be deployed. He starts by talking about the video codecs it contains. VP9 has recently been added to the options; for a long time it was a VP8-only technology. Andrew explains how the codecs it carries have a knock-on effect on its compatibility with browsers. UDP underlies all low-latency streaming technologies, since the bureaucracy of TCP gets in the way of real-time media streams. Andrew also explains how security pervades WebRTC, from its use of DTLS (which is like HTTPS/TLS for UDP) to SRTP and SRTCP.
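As a hedged sketch of how those pieces surface in practice, here is a minimal example using the aiortc Python library (my choice for illustration, not something used in the talk): generating a local offer shows the codecs on offer in the SDP alongside the DTLS fingerprint that will secure SRTP/SRTCP.

```python
# Minimal sketch with aiortc (assumed library choice, not part of the talk):
# a WebRTC offer already carries codec and DTLS security parameters.
import asyncio
from aiortc import RTCPeerConnection

async def main():
    pc = RTCPeerConnection()
    pc.addTransceiver("video")          # negotiate a video track with whatever codecs the local stack offers

    offer = await pc.createOffer()      # build the SDP offer locally
    await pc.setLocalDescription(offer)

    # The SDP lists the offered codecs (a=rtpmap) and the DTLS fingerprint
    # (a=fingerprint) used to secure the media with SRTP/SRTCP.
    for line in pc.localDescription.sdp.splitlines():
        if line.startswith("a=rtpmap:") or line.startswith("a=fingerprint:"):
            print(line)

    await pc.close()

asyncio.run(main())
```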

The last part of the talk discusses the architectures that the Limelight CDN uses to enable large-scale WebRTC streams, including the need to get through firewalls. Andrew discusses how some features of the technology suit small-scale events but can’t be used with thousands of viewers. He also discusses how adaptive bitrate streams can be delivered; although ABR isn’t part of WebRTC itself, there is enough information available to implement it in addition to the standard stream, as sketched below.
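A hypothetical sketch of that idea, assuming a bitrate ladder and receiver-side throughput and loss measurements (none of which come from the talk), might look like this:

```python
# Hypothetical rendition selection on top of a WebRTC stream: not Limelight's
# implementation, just the kind of logic receiver statistics make possible.
RENDITIONS_KBPS = [800, 1800, 3500, 6000]   # assumed bitrate ladder

def pick_rendition(measured_kbps: float, packet_loss: float, headroom: float = 0.8) -> int:
    """Choose the highest rung that fits inside the measured throughput,
    backing off further when packet loss suggests congestion."""
    budget = measured_kbps * headroom
    if packet_loss > 0.05:                  # sustained loss: halve the budget
        budget *= 0.5
    viable = [r for r in RENDITIONS_KBPS if r <= budget]
    return viable[-1] if viable else RENDITIONS_KBPS[0]

print(pick_rendition(measured_kbps=5000, packet_loss=0.01))   # -> 3500
```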

Watch now!
Speakers

Andrew Crowe
Video Architect,
Limelight Networks

Video: Reliable, Live Contribution over the Internet

For so long we’ve been desperate for a cheap and reliable way to contribute programmes into broadcasters, but it’s only in recent years that using the internet for live-to-air streams has been practical for anyone who cares about staying on-air. Add to that an increasing need to contribute live video into, and out of, cloud workflows, and it’s easy to see why there’s so much energy going into making the internet a reliable part of the broadcast chain.

This free on-demand webcast co-produced by The Broadcast Knowledge and SMPTE explores the two popular open technologies for contribution over the internet, RIST and SRT. There are many technologies that pre-date these, including Zixi, Dozer and QVidium’s ARQ, to name but three. However, as the talk covers, it’s only in the last couple of years that the proprietary players have come together with other industry members to work on an open and interoperable way of doing this.

Russell Trafford-Jones, from UK video-over-IP specialist Techex, explores this topic, starting from why we need anything more than a bit of forward error correction (FEC) before moving on to understanding how these technologies apply to networks other than the internet.

This webcast looks at how SRT and RIST work, their differences and similarities. SRT is a well-known protocol created and open sourced by Haivision which predates RIST by a number of years. Haivision have done a remarkable job of explaining to the industry the benefits of using the internet for contribution as well as proving that top-tier broadcasters can rely on it.

RIST is more recent on the scene: a group effort from companies including Haivision, Cobalt, Zixi and AWS Elemental, to name just a few of the main members, with the aim of making a vendor-agnostic, interoperable protocol. Despite being only 3 years old, the group has already delivered 2 specifications, which Russell explains bring it broadly up to feature parity with SRT, and it is closing in on 100 members.

Delving into the technical detail, Russell looks at how ARQ, the retransmission technique fundamental to all these protocols, works, how to navigate firewalls, the benefits of GRE tunnels and much more! A toy sketch of the ARQ idea follows below.
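For a feel of what ARQ means in practice, here is a toy, hypothetical receiver in Python that tracks sequence numbers and NACKs any gaps so the sender can retransmit them within the latency budget; it illustrates the principle only, not SRT’s or RIST’s actual wire behaviour.

```python
# Toy sketch of the ARQ idea behind SRT/RIST (not either protocol's real format):
# the receiver tracks sequence numbers and asks the sender to retransmit gaps.
class ArqReceiver:
    def __init__(self):
        self.expected_seq = 0
        self.buffer = {}          # out-of-order packets waiting for gaps to be filled

    def on_packet(self, seq: int, payload: bytes) -> list[int]:
        """Store the packet and return the sequence numbers to NACK (request again)."""
        self.buffer[seq] = payload
        missing = [s for s in range(self.expected_seq, seq) if s not in self.buffer]
        # Deliver everything that is now contiguous from expected_seq onwards
        while self.expected_seq in self.buffer:
            self.deliver(self.buffer.pop(self.expected_seq))
            self.expected_seq += 1
        return missing            # the sender retransmits these packets

    def deliver(self, payload: bytes) -> None:
        pass                      # hand contiguous data on to the decoder

rx = ArqReceiver()
rx.on_packet(0, b"a")             # delivered immediately
print(rx.on_packet(2, b"c"))      # -> [1]: packet 1 is missing, NACK it
print(rx.on_packet(1, b"b"))      # -> []: gap filled, 1 and 2 delivered in order
```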

The webcast is free to watch with no registration required.

Watch now!
Speakers

Russell Trafford-Jones
Manager, Support & Services, Techex
Director of Education, Emerging Technologies, SMPTE
Editor, The Broadcast Knowledge