Video: Reliable and Uncompressed Video on AWS

Uncompressed video in the cloud is an answer to dreams many people are yet to have, but the early adopters of cloud workflows, those who are really embedding the cloud into their production and playout efforts, are already asking for it. AWS have developed a way of delivering it between computers within their infrastructure and have invited a vendor to explain how they get this high-bandwidth content in and out.

On The Broadcast Knowledge we don’t normally feature such vendor-specific talks, but AWS is usually the sole exception to the rule, as what’s done in AWS is typically highly informative for workflows on many other cloud providers. In this case, AWS is first to market with an in-cloud, high-bitrate video transfer technology, which is in itself highly interesting.

LTN’s Alan Young is first to speak, telling us about traditional broadcast workflows, giving the example of a stadium feeding the broadcaster’s building, which then sends out the transmission feeds by satellite or dedicated links to the transmission and streaming systems, often located elsewhere. LTN feel this robs the broadcaster of flexibility and of the cost savings available from lower-cost internet links. The hybrid that he sees working in the medium term is feeding the cloud directly from the broadcaster. This allows production workflows to take place in the cloud. After that, the video can either come back to the broadcaster before being passed on to transmission or go directly to one or more of the transmission systems. Alan’s view is that the interconnecting network between the broadcaster and the cloud needs to be reliable, high quality, low-latency and able to handle any bandwidth of signal – even uncompressed.

Once in the cloud, AWS Cloud Digital Interface (CDI) is what allows video to travel reliably from one computer to another. Andy Kane explains the drivers behind creating this product. With the mantra that ‘gigabits are the new megabits’, the team looked at how they could move high-bandwidth signals around AWS reliably, with the aim of abstracting the difficulty of the infrastructure away from the workflow. The driver for uncompressed in the cloud is reducing re-encoding stages, since each one hits latency hard and, for professional workflows, we’re trying to keep latency as close to zero as possible. By creating a default interface, the hope is that inter-vendor working through CDI will help interoperability. LTN estimate their network latency at around 200ms, which is already a fifth of a second, so any more latency on top of that will creep up towards a second quite easily.
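To put that fifth of a second in context, here is a quick back-of-envelope latency budget. Only the 200ms network figure comes from the talk; the per-stage numbers below are illustrative assumptions, not measurements.

```python
# Rough latency budget: LTN's ~200ms network figure is from the talk;
# every other per-stage figure is an illustrative assumption.
stages_ms = {
    "LTN network": 200,
    "contribution encode": 150,   # assumed
    "cloud re-encode": 150,       # assumed; the stage CDI aims to remove
    "distribution encode": 150,   # assumed
    "decode": 100,                # assumed
}
print(f"end-to-end: {sum(stages_ms.values())} ms")  # 750 ms: creeping towards a second
```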

David Griggs explains some of the technical detail of CDI. For instance, it can send data of any format, be that raw packetised video, audio, ancillary data or compressed data, using UDP multicast between EC2 instances within a placement group. With a target latency of less than one frame, it’s been tested up to UHD 60fps and is based on the Elastic Fabric Adapter, a free option for EC2 instances which uses kernel-bypass techniques to speed up and better control network transfers. CPU use scales linearly, so where 1080p60 takes 20% of a CPU, UHD would take 80%. Each stream is expected to have its own CPU.
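That linear scaling is easy to sanity-check, since UHD carries four times the pixels of 1080p. A minimal sketch, using only the 20% baseline quoted in the talk:

```python
# Sanity-check of the linear CPU scaling quoted for CDI: UHD (3840x2160)
# carries 4x the pixels of 1080p, so 4 x 20% = 80%. Only the 20% baseline
# comes from the talk; the proportional model is the stated assumption.
BASE_PIXELS = 1920 * 1080   # 1080p60 reference raster
BASE_CPU = 0.20             # fraction of one CPU, per the talk

def cpu_estimate(width, height, base_pixels=BASE_PIXELS, base_cpu=BASE_CPU):
    """Estimate CPU share for one stream, assuming linear pixel-rate scaling."""
    return base_cpu * (width * height) / base_pixels

print(f"UHD60: {cpu_estimate(3840, 2160):.0%} of one CPU")  # 80%
```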

The video ends with Alan looking at a future where all broadcast functionality can be done in the cloud. For him, it’s an all-virtual future powered by increasingly accessible high-bandwidth internet connectivity coming in at less than the cost of bespoke, direct links. David Griggs adds that this is changing the financing model, moving from a continuing effort to maximise the utilisation of purchased assets to a pay-as-you-go model using just the tools you need for each production.

Watch now!
Download the slides
Please note: if you follow the direct link, the video featured in this article is the seventh on the linked page.

Speakers

David Griggs
Senior Product Manager,
AWS
Andy Kane
Principal Business Development Manager,
AWS
Alan Young
CTO and Head of Strategy,
LTN Global

Video: Line by Line Processing of Video on IT Hardware

If the tyranny of frame buffers is allowed to continue, line-latency I/O is rendered impossible without increasing frame rates to 60fps or, preferably, beyond. In SDI, hardware was able to process video line by line. Now, with uncompressed video over IP, is the same possible with IT hardware?

Kieran Kunhya from Open Broadcast Systems explains how he has been able to develop line-latency video I/O with SMPTE 2110, how he’s coupled that with low-latency AVC and HEVC encoding and the challenges his company has had to overcome.

The commercial drivers for reducing latency are fairly well known. Firstly, for standard 1080i50, typically treated as 25fps, a single frame buffer treats you to a 40ms delay. If a workflow needs multiple buffers, this soon stacks up, so whatever the latency of your codec – uncompressed or JPEG XS, for example – the overall latency will be far above it. In today’s Covid world, companies are looking to cut latency so people can work remotely. This has only intensified the interest that was already there, for the purposes of remote production (REMIs), in having low-latency feeds. Low latency allows full engagement in conversations, which is vital for news anchors to conduct interviews as well as they would in person.
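The 40ms figure is simply one frame period at 25fps, and a short worked example shows how quickly buffering alone eats a latency budget:

```python
# Worked example of the frame-buffer arithmetic: at 25fps, each full-frame
# buffer in the chain adds one frame period of delay.
FPS = 25                 # 1080i50 handled as 25 full frames per second
frame_ms = 1000 / FPS    # 40.0 ms per buffered frame

for buffers in range(1, 5):
    print(f"{buffers} buffer(s): {buffers * frame_ms:.0f} ms")
# 1 buffer: 40 ms ... 4 buffers: 160 ms, before any codec latency at all
```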

IP itself has come into its own during recent times: with no-one around to move an SDI cable, being able to log in and scale SMPTE ST 2110 infrastructure up or down remotely is a major benefit. IT equipment has also been shown to be fairly resilient to supply-chain disruption during the pandemic, says Kieran, the industry being larger and used to scaling up.

Kieran’s approach to receiving ST 2110 deals in chunks of 5 to 10 lines. This gives you time to process the last few lines whilst you are waiting for the next chunk to arrive. This processing can be de-encapsulation, converting the pixel values to another format, or modifying the values to key on graphics.
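As a minimal sketch of that idea, the loop below works through a frame in small line chunks; `recv_lines`, `convert` and `emit` are hypothetical stand-ins rather than names from any real ST 2110 stack:

```python
# A minimal sketch of chunked, line-based ST 2110 receive: work on the
# previous few lines while the next chunk is still arriving on the wire.
# recv_lines/convert/emit are hypothetical callbacks, not a real API.
CHUNK = 8             # lines per chunk, within the 5-10 range from the talk
ACTIVE_LINES = 1080   # active picture lines for a 1080-line raster

def process_frame(recv_lines, convert, emit):
    for first_line in range(0, ACTIVE_LINES, CHUNK):
        lines = recv_lines(first_line, CHUNK)  # de-encapsulated RTP payloads
        emit(convert(lines))                   # e.g. pixel-format conversion
                                               # or keying on graphics
```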

As the world is focussed on delivering in and out of unusual and residential places, low bitrate is the name of the game. So Kieran looks at low-latency HEVC/AVC encoding as part of an example workflow which takes in ST 2110 video at the broadcaster and encodes to MPEG for delivery to the home. In the home, the video is likely to be decoded natively on a computer, but Kieran shows an SDI card which can be used to deliver traditional baseband if necessary.

Kieran talks about the dos and don’ts of encoding and decoding AVC and HEVC at low latency, targeting an end-to-end budget of 100ms. The name of the game is to avoid waiting for whole frames, so refreshing the screen with I-frame information in small slices is one way of keeping the decoder supplied with fresh information without taking the full-frame hit of 40ms (for 1080i50). Audio is best sent uncompressed to ensure its latency stays below that of the video.
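By way of illustration only, this is roughly how slice-based intra refresh looks as standard libx264 options driven from FFmpeg. The talk doesn’t specify Open Broadcast Systems’ actual encoder configuration; the flags are real libx264/FFmpeg options, but the values here are assumptions:

```python
# Illustrative low-latency x264 settings via FFmpeg, not OBE's actual config.
# intra-refresh spreads I-frame data across slices of successive frames,
# avoiding the periodic full-I-frame latency spike described in the talk.
import subprocess

cmd = [
    "ffmpeg",
    "-f", "lavfi", "-i", "testsrc2=size=1920x1080:rate=50",  # stand-in source
    "-c:v", "libx264",
    "-tune", "zerolatency",                      # no lookahead, no B-frames
    "-x264-params", "intra-refresh=1:slices=4",  # rolling intra refresh
    "-f", "mpegts", "udp://203.0.113.10:5000",   # hypothetical destination
]
subprocess.run(cmd, check=True)
```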

Decoding requires careful handling of slice boundaries, ensuring deblocking is used so that no artefacts are seen. Compressed video is often not PTP-locked, which means that delivery into most ST 2110 infrastructures requires frame synchronisation and resampling of the audio.

Kieran foresees increasing use of 2110-to-MPEG Transport Stream-to-2110 workflows during the pandemic and finishes by discussing the tradeoffs of delivering during Covid.

Watch now!
Speaker

Kieran Kunhya
CEO & Founder, Open Broadcast Systems

Video: DOS Gaming Aspect Ratio – 320×200


Occasionally, talks about broadcast topics can be a little dry. Not this one, which discusses aspect ratios. For those who feel they are already well versed in 16:9, 4:3 and the many other standard aspect ratios used in the film and broadcast industries, looking at them through the lens of retro computer gaming will be a breath of fresh air. For those who are new to anything that’s not widescreen 16:9, this is a great intro to a topic of fundamental importance for anyone dealing with video.

This video is no surprise coming from YouTube channel Displaced Gamers who have previously been on The Broadcast Knowledge talking about 525-Line Analog Video and Analog Luma – A History and Explanation of Video. After a brief intro, we quickly start looking at what standard resolutions are today and their aspect ratios.

The aspect ratio of a video describes how wide it is compared to its height. This can be written as an actual ratio of width:height or expressed more mathematically as a decimal, such as 1.778 in the case of 16:9 widescreen. The video discusses how old CRTs displayed video and their use of analogue dials that changed the width and height of the image.
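As a quick worked example of those definitions, and of the non-square-pixel case behind the video’s 320×200 title, the standard relationship PAR = DAR / SAR gives the pixel shape; the 4:3 display assumption here matches the CRTs of the era:

```python
# Worked aspect-ratio arithmetic: ratios as decimals, plus the pixel aspect
# ratio (PAR) of DOS's 320x200 mode shown on a 4:3 CRT. PAR = DAR / SAR.
from fractions import Fraction

print(f"16:9 -> {16 / 9:.3f}")   # 1.778
print(f"4:3  -> {4 / 3:.3f}")    # 1.333

sar = Fraction(320, 200)   # storage aspect ratio of the pixel grid
dar = Fraction(4, 3)       # display aspect ratio of the CRT
par = dar / sar            # pixel aspect ratio
print(f"320x200 on 4:3 -> PAR {par} ({float(par):.3f})")  # 5/6: tall pixels
```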

In today’s world, pixels tend to be square, so those encountering pixels which aren’t square tend to work in archiving and preservation. The reality today, though, is that with so many second-screen devices there are all sorts of resolutions and a variety of aspect ratios. As people working in media and entertainment, we have to understand the impact on the size and shape of the video when displaying it on different screens. This video shows the impacts vividly, using figurines from Doom and comparing them with the in-game graphics, before looking at aspect ratios across the SNES, Amiga and Atari ST as well as IBM DOS.

Watch now!
Speaker

Chris Kennedy
Displaced Gamers, YouTube Channel

Video: What is 525-Line Analog Video?

With an enjoyable retro feel, this accessible video on understanding how analogue video works is useful for those who have to work with SDI rasters, interlaced video, black and burst, subtitles and more. It’ll remind those of us who once knew of a few things since forgotten, and it’s an enjoyable primer on the topic for anyone coming in fresh.

Displaced Gamers is a YouTube channel, and its focus on video games is an enjoyable addition to this video, which starts by explaining why analogue 525-line video is the same as 480i. Using slow-motion footage of a CRT (Cathode Ray Tube) TV, the video explains the interlacing technique and why consoles and computers would often use 240p.
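The numbers behind that equivalence are straightforward, as the short worked example below shows; the line counts are the standard figures for the 525-line system:

```python
# Worked numbers behind "525-line = 480i": of 525 scanned lines, roughly 480
# are visible picture, split across two interlaced fields. Consoles that
# skipped the interlace half-line offset redrew the same field each time: 240p.
TOTAL_LINES = 525
VISIBLE_LINES = 480
FIELDS_PER_FRAME = 2

print(VISIBLE_LINES // FIELDS_PER_FRAME)   # 240 visible lines per field
print(TOTAL_LINES / FIELDS_PER_FRAME)      # 262.5 lines scanned per field
```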

We then move on to timing, looking at the time spent drawing a line of video, 52.7 microseconds, and the need for horizontal and vertical blanking. Blanking periods, the video explains, are there to cover the time the CRT spends moving the electron beam from one side of the screen back to the other. As the beam was steered by electromagnets, it had to be turned off – blanked – while their magnetic level, and hence the position of the beam, was changing.
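Putting numbers to that, the active 52.7 microseconds sits inside a total line period set by the 525-line scan rate; a quick calculation, using the standard NTSC line rate, recovers the blanking interval:

```python
# Line-timing arithmetic for 525-line video: the total line period at the
# standard NTSC line rate, minus the ~52.7us of active picture quoted in
# the video, leaves the horizontal blanking interval.
LINE_RATE_HZ = 15_734.26   # 525 lines x 29.97 frames per second
ACTIVE_US = 52.7           # active picture time per line, per the video

total_line_us = 1e6 / LINE_RATE_HZ
print(f"total line period:   {total_line_us:.1f} us")              # ~63.6 us
print(f"horizontal blanking: {total_line_us - ACTIVE_US:.1f} us")  # ~10.9 us
```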

The importance of these housekeeping manoeuvres for older computers was that this was time they could use to perform calculations, free from the task of writing data into the video buffer. And this wasn’t just useful for computers; broadcasters could use some of the blanking to insert data – and they still do. In this video we see a VHS tape played with the blanking clearly visible and the data lines flashing away.

For those who work with this technology still, for those who like history, for those who are intellectually curious and for those who like reminiscing, this is an enjoyable video and ideal for sharing with colleagues.

Watch now!
Speaker

Chris Kennedy
Displaced Gamers, YouTube Channel