Video: News in the New Norm

Television marches on despite the pandemic, not least for the decimated sports broadcasters, and the mantra is 'the show must go on', so everyone has been trying to find ways to make TV production safe and practical, yet still good! There are so many practical issues behind the camera to consider, from the typically packed OB trucks to simple bathroom sharing in the office. In this video, we hear from BBC News explaining how they have reshaped their production to keep the news reaching the public.

“It’s hard to do your job in these circumstances.”

Morwen Williams

Morwen Williams, Head of UK News Operations at the BBC, describes the workflows that have been created to keep the news on air. The 'Zoom workflow' is now to the fore: just as the 'Dropbox workflow' has, perhaps forever, changed many file-based workflows, the 'Zoom workflow' is doing the same for live production. Morwen is quick to point out that the work is as much technological as practical, with the need for 'long poles' to ensure social distancing for sound engineers and the like. Workflows have had to remove roles, such as vision mixing, or move people to otherwise spare galleries.

Morwen explains that within the mobile journalism team there was a pilot last year to test how well an iPhone X could capture real packages. It produced some good results, which ran on the national news, and is just one example of how the technological groundwork for mobile journalism during this crisis was already being laid.

Meeting virtually has its advantages, we hear: when you have a lot of staff, physical space is hard to acquire at the best of times. Since attendance can never be 100%, it's better to hold meetings more frequently to give people a better chance of attending some. Whilst this is certainly no replacement for meeting in person, it is likely to be retained when that is again possible.

Robin Pembrooke then takes some time to explain the shifts in production that he's seen. All of the digital teams are now working from home; 15,000 people moved from the offices to working from home, a fraught transition but one achieved with no major outages. Radio shows are now often presented and run from home by the presenters themselves. Talkback now takes many forms, whether that be WhatsApp or other more broadcast-focused talkback-over-broadband products.

Watch now!
Speakers

Morwen Williams
Head of UK Operations,
BBC News
Robin Pembrooke
Director, News Product and Systems,
BBC News

Video: Versatile Video Coding (VVC)

MPEG’s VVC is the next iteration on from HEVC (H.265). Whilst other codecs such as EVC and LCEVC are also being finalised, this talk looks at how VVC builds on HEVC but also turns its hand to screen content and VR, becoming a more versatile codec than HEVC and meeting the world’s changing needs. For an overview of these emerging codecs, this interview covers them all.

VVC is a joint project between the ITU-T and MPEG (AKA ISO/IEC). Its aim is a 50% bitrate saving for the same picture quality, with the emphasis on higher resolutions, HDR and 10-bit video, while acknowledging that optimising codecs for natural video is no longer the core requirement for a lot of people. Its versatility comes from being able to encode screen content, independent sub-pictures and scalable encoding, among other features.

Gary Sullivan from Microsoft Technology & Research talks us through what all this means. He starts by outlining the case for a new codec, particularly the reach for another 50% bitrate saving which may come at further computational cost. Gary points out that, as video use continues to increase, anything that can be done to significantly reduce bitrates will either drive down costs or allow people to use video in better ways.

Any codec is a set of tools all working together to create the final product. Some tools are not always needed, say if you are running on a lower-power system, allowing the codec to be tuned for the situation. Gary puts up a list of some of the tools in VVC, many of which are an evolution of the same tool in HEVC, and highlights a few to give an insight into the improvements under the hood.

Gary’s picks of the big hitters in the tool-set are the Adaptive Loop Filter, which reduces artefacts and prediction errors; affine motion compensation, which provides better motion compensation; triangle partitioning mode, a high-computation improvement in inter prediction; bi-directional optical flow (BIO) for motion prediction; and intra-block copy, which is useful for screen content where an identical block is found elsewhere in the same frame.
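
To give a flavour of the last of those, here is a minimal Python sketch of the intra-block-copy idea, purely illustrative rather than VVC's actual algorithm (which operates on reconstructed samples under many constraints): search the already-decoded part of the frame for a block identical to the current one and code just the displacement vector.

```python
import numpy as np

def find_ibc_match(frame, by, bx, size, decoded_rows):
    """Toy intra-block-copy search: look in the already-decoded rows of
    the same frame for a block identical to the current one and return
    the displacement vector, or None if nothing matches."""
    target = frame[by:by + size, bx:bx + size]
    for y in range(decoded_rows - size + 1):
        for x in range(frame.shape[1] - size + 1):
            if np.array_equal(frame[y:y + size, x:x + size], target):
                return (y - by, x - bx)  # only this vector is coded
    return None

# Screen content repeats exactly (text glyphs, UI), so matches are common.
frame = np.zeros((64, 64), dtype=np.uint8)
frame[0:8, 0:8] = 200       # a 'glyph' near the top of the frame
frame[32:40, 16:24] = 200   # the same glyph repeated further down
print(find_ibc_match(frame, 32, 16, 8, decoded_rows=32))  # (-32, -16)
```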

Gary highlights Screen Content Coding (SCC), which existed in HEVC but not in the base profile. This has changed for VVC, so all VVC implementations will support SCC, whereas very few HEVC implementations do. Reference Picture Resampling (RPR) allows the resolution to change from picture to picture, with reference pictures stored at a different resolution from the current picture. And independent sub-pictures allow parts of the video frame to be re-arranged, or only one region to be decoded. This works well for VR and video conferencing, and allows the creation of composite videos without intermediate decoding.
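
As a rough illustration of the RPR idea (nearest-neighbour scaling here for brevity; VVC specifies proper interpolation filters), predicting from a reference at a different resolution just means resampling that reference to the current picture size first:

```python
import numpy as np

def resample_reference(ref, target_h, target_w):
    """Nearest-neighbour resample of a reference picture so it can
    predict a current picture coded at a different resolution."""
    ys = np.arange(target_h) * ref.shape[0] // target_h
    xs = np.arange(target_w) * ref.shape[1] // target_w
    return ref[ys[:, None], xs]

# e.g. a 960x540 reference predicting a 1920x1080 picture lets an
# encoder change resolution mid-stream without inserting an IDR frame.
ref = np.random.randint(0, 256, size=(540, 960), dtype=np.uint8)
print(resample_reference(ref, 1080, 1920).shape)  # (1080, 1920)
```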

As usual, doing more thinking about how to compress a picture brings further computational demands. MPEG’s LCEVC is the standards body’s way of fighting against this, as notable bitrate improvements are possible even on low-power devices. With VVC, however, versatility is the aim. Decoders see a 60% increase in decode complexity and, whilst MPEG specifications are all about the decoder – hence allowing a lot of ongoing innovation in encoding techniques – current encoder examples are about 8 or 9 times slower. Performance is better for screen content and at higher resolutions. Whilst the coding part of VVC is mature, versatility is still being worked on, but the aim is to publish within about 2 months.

The video finishes with a Q&A covering implementing DASH in a low-latency video workflow, how CMAF will be specified to use VVC, and live workflows, which Gary explains always come after the initial file-based work and are best understood after the first attempts at encoder implementations, noting that hardware lags by 2 years. He goes on to explain that chipmakers need to see the demand. At the moment, there is a lot of focus from implementors on AV1, not to mention EVC, so the question is how much demand can be generated.

This talk is based on a talk from Benjamin Bross originally given to an ITU workshop (PDF), then presented at Mile High Video by Benjamin, and was updated by Gary for this conversation with the Seattle Video Tech community.

Bitmovin has an article highlighting many of the improvements in VVC written by Christian Feldmann who has given many talks on both AV1 and VVC.

Watch now!

Speakers

Gary Sullivan
Microsoft Technology & Research

Video: SRT – The Simple Solution for Your Remote and At-Home Workforce

SRT allows unreliable networks like the Internet to be used for reliable, encrypted video contribution. Created by Haivision and now an Open Source technology with an IETF draft spec, the alliance of SRT users continues to grow as the technology continues to develop and add features. Haivision are members of RIST which Kieran Kunhya spoke about in yesterday’s article.

Being open-source, SRT is widely deployed across hundreds of manufacturers, so there is a lot of choice, although Haivision do focus on their own products in this video. The important part is how the protocol works to keep the data intact, which is dealt with in the second segment from Haivision’s Selwyn Jans. Lastly, we hear some examples of real-world use cases to whet the appetite and start the thought process about how SRT could benefit you.

The fundamental aspect of SRT, as Selwyn explains, is that the packets are counted in at the remote end and, if one packet is missing, it’s re-requested from the source. Whilst this is how normal file transfers work, using TCP, SRT has been optimised to ensure real-time media isn’t unduly delayed. TCP acknowledges every single packet, and the sender takes note when an acknowledgement doesn’t arrive. SRT is more efficient: acknowledgements are minimised and only re-requests are sent, which keeps overheads down. A buffer is set up at the destination so that there is still data available to play out while waiting for missing packets to be resent. Depending on the network quality, the buffer may need to be large enough to cover several re-requests for the same packet.
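
As a minimal sketch of that mechanism (illustrative only, not Haivision's implementation; the class and parameter names here are invented for the example), a receiver might track sequence numbers, re-request only the gaps, and hold a latency buffer before playout:

```python
from collections import deque

class ArqReceiver:
    """Toy negative-acknowledgement receiver: packets are counted in by
    sequence number, gaps are re-requested once, and a latency buffer
    gives retransmissions time to arrive before playout."""

    def __init__(self, latency_packets=50):
        self.expected_seq = 0            # next packet due for playout
        self.buffer = {}                 # seq -> payload awaiting playout
        self.latency = latency_packets   # playout delay, in packets
        self.nacked = set()              # already re-requested once
        self.nacks = deque()             # re-requests to send to the source

    def on_packet(self, seq, payload):
        self.buffer[seq] = payload
        # Only gaps generate feedback, unlike TCP's per-packet ACKs.
        for missing in range(self.expected_seq, seq):
            if missing not in self.buffer and missing not in self.nacked:
                self.nacked.add(missing)
                self.nacks.append(missing)

    def pop_ready(self, newest_seq):
        """Release packets older than the latency window; anything still
        missing by then is lost for good."""
        out = []
        while self.expected_seq <= newest_seq - self.latency:
            out.append(self.buffer.pop(self.expected_seq, None))
            self.expected_seq += 1
        return out

rx = ArqReceiver(latency_packets=3)
for seq in (0, 1, 3, 4):                 # packet 2 is lost in transit
    rx.on_packet(seq, f"payload{seq}")
print(list(rx.nacks))                    # [2] -> re-request packet 2
rx.on_packet(2, "payload2")              # retransmission arrives in time
print(rx.pop_ready(newest_seq=4))        # ['payload0', 'payload1']
```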

How SRT Works

Selwyn expands upon this re-request mechanism and looks at the ways SRT can be set up: pushed from the source as a ‘caller’, or working as a ‘listener’ so that the sender waits to be contacted before it starts sending any data. You can choose the mode that best fits around your firewalls. Where there is a NAT firewall, SRT can always be sent out, but receiving requests would need firewall modification. One of the benefits of SRT is its ability to be deployed anywhere, including in a home, quickly and easily, so firewall changes would not be welcome. For a more in-depth description of SRT, check out this talk from SF Video Technology.
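
The two modes map loosely onto familiar socket roles. The sketch below is only an analogy, since SRT actually runs its own handshake over UDP, but it shows why the outbound style works through a NAT firewall unchanged while listening needs an inbound rule:

```python
import socket

def listener(port=9000):
    """Listener: binds and waits to be contacted, so this end needs an
    inbound firewall rule (or a port forward on a NAT)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, _addr = srv.accept()   # blocks until the caller connects
    return conn

def caller(host, port=9000):
    """Caller: initiates the connection outbound, which a typical NAT
    firewall already allows, ideal for a quick deployment from home."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))
    return sock
```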

The last section features Corey Behnke from streaming company Live X talking about where they have been using SRT. Replacing satellite is one important use, since in many places there is sufficient bandwidth available to stream over the internet. Before technologies such as SRT, this was likely to lead to breakups on air, so satellite was the clear winner; now, there’s money to be saved by not buying satellite space. Cloud ingress and egress is also a very important workflow for SRT and similar protocols. The panellists explain how this works using the Haivision Media Gateway as an example, though other products, such as those from Techex and Videoflow, are also available.

Watch now!
Speakers

Marcus Schioler
Vice President, Product Marketing
Haivision
Selwyn Jans
Technical Video Engineer,
Haivision
Corey Behnke
Producer & Co-Founder,
Live X

Video: RIST and Open Broadcast Systems

RIST is a streaming protocol which allows lossy networks such as the internet to be used for critical streaming applications. Called Reliable Internet Stream Transport, it uses ARQ (Automatic Repeat reQuest) retransmission technology to request any data that is lost by the network, creating reliable paths for video contribution.

In this presentation, Kieran Kunhya from Open Broadcast Systems explains why his company has chosen the RIST protocol for their software-based encoders and decoders. Their initial solution for contributing news, sports and linear channels over the public internet was based on FEC (Forward Error Correction), a technique for controlling transmission errors by sending redundant data using an error-correcting code. However, FEC couldn’t cope with large burst losses, there was limited interoperability and the implementation was complex. Protecting the stream by sending the same feed over multiple paths, and/or sending a delayed version of the stream on the same path, carried a heavy bandwidth penalty. This prompted them, instead, to implement an ARQ technique based on RFC 4585 (Extended RTP Profile for Real-time Transport Control Protocol-Based Feedback), which gave them functionality quite similar to basic RIST.
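
A toy example makes the burst-loss weakness concrete. Simple XOR parity over a group of packets (one possible FEC scheme, not necessarily the one Open Broadcast Systems used) can rebuild any one missing packet, but two losses in the same group, exactly what a burst produces, leave nothing to recover, whereas ARQ simply asks again:

```python
def xor_parity(packets):
    """One parity packet per group: the XOR of all equal-length payloads."""
    out = bytearray(packets[0])
    for pkt in packets[1:]:
        for i, byte in enumerate(pkt):
            out[i] ^= byte
    return bytes(out)

def recover(received, parity):
    """Rebuild a single missing packet (marked None) from the rest plus
    the parity; two or more losses in one group cannot be repaired."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        return None  # burst loss: this FEC scheme fails, ARQ would not
    present = [p for p in received if p is not None]
    return xor_parity(present + [parity])

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(group)
print(recover([b"pkt0", None, b"pkt2", b"pkt3"], parity))  # b'pkt1'
print(recover([b"pkt0", None, None, b"pkt3"], parity))     # None: burst
```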

Key to the discussion, Kieran explains why they decided not to adopt the SRT protocol. As SRT is based on a file transfer protocol, it’s difficult or impossible to add features like bonding, multi-network and multi-point support, which were available in RIST from day one. Moreover, RIST has a large IETF heritage from other industries and is vendor-independent. In Kieran’s opinion, SRT will become a prosumer solution (similar to RTMP now for streaming) and RIST will be the professional solution (analogous to MPEG-2 Transport Streams).

Different applications for the RIST protocol are discussed, including 24/7 linear channels for satellite uplink from playout, interactive (two-way) talking heads for news, high bitrate live events and reverse vision lines for monitoring purposes. Also, there is a big potential for using RIST in cloud solutions for live broadcast production workflows. Kieran hopes that more broadcasters will start using spin-up and spin-down cloud workflows, which will help save space and money on infrastructure.

Interestingly, Open Broadcast Systems are not currently interested in the RIST Main Profile (whose main advantages are support for encryption, authentication and in-band data). Kieran explains that to control devices in remote locations you need some kind of off-the-shelf VPN anyway. These systems provide encryption and NAT traversal, so the problem is solved at a different layer of the OSI model, and this gives customers more control over the type of encryption they want.

Watch now!

Speaker

Kieran Kunhya
Founder and CEO,
Open Broadcast Systems