Video: Where can SMPTE 2110 and NDI co-exist?

When are two video formats better than one? Broadcasters have long sought ‘best of breed’ systems, matching equipment as closely as possible to their ideal workflow. In this talk we look at getting the best of both compressed, low-latency video and uncompressed video. NDI, a lightly compressed, ultra-low-latency codec, allows full productions in visually lossless video with a latency of one field. SMPTE’s ST-2110 allows full productions with uncompressed video and almost zero latency.

The panel brings together the EBU’s Willem Vermost, who paints a picture from the perspective of public broadcasters planning their moves into the IP realm; Marc Risby, from UK distributor and integrator Boxer, who brings a more general view of the market’s interest; and Will Waters, who spent many years at Newtek, the company that invented NDI. From them we hear how the two approaches, compressed and uncompressed, complement each other.

This panel took place just after the announcement that Newtek had been bought by VizRT, the graphics vendor. VizRT sees a lot of benefit in being able to work in both types of workflow, for clients large and small, and has made Newtek its own entity under the VizRT umbrella to ensure continued focus.

A key differentiator of NDI is its focus on 1-gigabit networking. Its aim has always been to enable ‘normal’ companies to deploy IP video easily so they can rapidly gain the advantages that IP workflows bring over SDI and other baseband video technologies. A keystone of this strategy is enabling everything to happen on the ordinary 1Gbit switches which are prevalent in most companies today. Other key elements of the codec are: a free software development kit, bi-directionality, resolution independence, audio sample-rate agnosticism, tally support, auto discovery and more.
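As a rough illustration (not from the talk), the arithmetic below shows why 1Gbit Ethernet is workable for NDI but not for uncompressed HD. The NDI bitrate used here is an approximate, commonly quoted ballpark rather than a specification value.

```python
# Back-of-the-envelope bitrate comparison: why NDI's roughly 100-160 Mbit/s
# HD streams fit on ordinary 1 Gbit/s switches while uncompressed video
# (as carried by SMPTE ST 2110-20) generally does not.

def uncompressed_bitrate_mbps(width, height, fps,
                              bits_per_sample=10, samples_per_pixel=2):
    """Rough active-video bitrate for 4:2:2 sampling (2 samples per pixel:
    one luma plus an alternating chroma sample), ignoring blanking and
    RTP/IP overhead."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e6

hd = uncompressed_bitrate_mbps(1920, 1080, 59.94)   # ~2,500 Mbit/s
ndi_hd_estimate = 150                               # Mbit/s, approximate

print(f"Uncompressed 1080p59.94 4:2:2 10-bit: ~{hd:,.0f} Mbit/s")
print(f"NDI 1080p (approx.): ~{ndi_hd_estimate} Mbit/s")
print(f"Streams per 1GbE link: uncompressed={1000 // hd:.0f}, "
      f"NDI~{1000 // ndi_hd_estimate}")
```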

In the talk, we discuss the pros and cons of this approach, where interoperability is assured because everyone has to use the same receive and transmit code, against having a standard such as SMPTE ST-2110. SMPTE ST-2110 has the benefit of being uncompressed, assuring the broadcaster that they have captured the best possible quality of video, and promises better management at scale, tighter integration into complex workflows, lower latency and the ability to treat the many different essences separately. Whilst we discuss many of the benefits of SMPTE ST-2110, you can get a more detailed overview from this presentation from the IP Showcase.

Watch now!

This panel was produced by IET Media, a technical network within the IET which runs events, talks and webinars for networking and education within the broadcast industry. More information

Speakers

Willem Vermost
Senior IP Media Technology Architect,
EBU
Marc Risby
CTO,
Boxer Group
Will Waters
Vice President Of Worldwide Customer Success,
VizRT
Moderator: Russell Trafford-Jones
Exec Member, IET Media
Manager, Support & Services, Techex
Editor, The Broadcast Knowledge

Video: Analog Luma – A History and Explanation of Video

Today’s video covers many video fundamentals, looking at how we see light and how we can represent it in a video signal. Following on from last week’s look at analogue 525-line video, we take a deeper dive into light and colour.

The video starts by examining how white light can be split into colours, called primaries, and how these can be re-combined in different amounts to create different colours. It then moves on to examine how the proportions of colours which create ‘white’ light aren’t as even as you might imagine. This allows us to understand how to create brighter and dimmer light, a quantity called luminance. We’re introduced to the CIE 2D and 3D colour graphs, helping us to understand colour space and colour volume.

Modern video, even if analogue, is acquired with red, green and blue as separate signals. This means that if we want a grey-scale video signal, i.e. luminance only, we need to combine them using the proportions discussed earlier. This weighted version of luminance is what is called ‘luma’, explains the video from the Displaced Gamers YouTube channel.
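As a quick illustration of that weighting, the sketch below uses the Rec. 601 luma coefficients, the ones used for standard-definition video, to show how unevenly red, green and blue contribute to grey:

```python
# A minimal sketch of the 'luma' weighting described in the video: grey-scale
# is not an equal mix of R, G and B but a weighted sum reflecting the eye's
# uneven sensitivity to the three primaries (Rec. 601 coefficients).

def luma_rec601(r, g, b):
    """Weighted sum of gamma-corrected R'G'B' (each 0.0-1.0) giving luma Y'."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green looks far brighter than pure blue at the same signal level:
print(luma_rec601(0.0, 1.0, 0.0))  # 0.587
print(luma_rec601(0.0, 0.0, 1.0))  # 0.114
print(luma_rec601(1.0, 1.0, 1.0))  # 1.0 -- white sums to full luma
```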

On top of human perception, much of the 20th century was dominated by CRT (Cathode Ray Tube) TVs, which don’t respond linearly to voltage: if you double the voltage, the brightness doesn’t necessarily double. To compensate for this, ‘gamma correction’ is applied at acquisition so that playback on a CRT produces a linear response.
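A minimal sketch of that round trip, assuming a simple power-law response with a nominal gamma of 2.2 (real standards use more nuanced, piecewise curves):

```python
# Gamma correction in miniature: the camera applies roughly the inverse of
# the CRT's power-law response so the end-to-end system is linear in light.

GAMMA = 2.2  # nominal CRT exponent; an assumption for illustration

def gamma_encode(linear):          # applied at acquisition
    return linear ** (1.0 / GAMMA)

def crt_response(voltage):         # what the display does to the signal
    return voltage ** GAMMA

linear_light = 0.5
signal = gamma_encode(linear_light)    # ~0.73 -- mid grey sits high in the signal
print(round(crt_response(signal), 3))  # 0.5 -- the round trip is linear
```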

Pleasingly, an oscilloscope is wheeled out next to look at a real analogue video waveform, demonstrating the shape not only of the sync pulses but of the luminance waveform itself, and how it corresponds to what would be seen on a TV screen. The video then finishes with a brief look at how colour is added in NTSC, PAL and SECAM signals. A prelude, perhaps, to a future video.

Watch now!

Speaker

Chris Kennedy
Displaced Gamers, YouTube Channel

Video: NMOS IS-07, GPI Replacement and Much, Much More…

GPI was not without its complexities, but the simplicity of its function, putting a short or a voltage on a wire, is unmatched by any other system we use in broadcasting. So the question here is: how do we do ‘GPI’ with IP, given all the complexity, and perceived delay, of networked communication? CTO of Pebble Beach Systems, Miroslav Jeras, is here to explain.

The key to understanding the power of IS-07, the new NMOS specification for GPI, is to realise that it’s not trying to emulate DC electronics. Rather, by adding the timing information available from the PTP clock, a GPI trigger can become extremely accurate, down to the audio sample, meaning you can now use GPI to signal much more detailed situations. On top of that, IS-07 messages can carry a number of different data types, which expands what these ‘GPI’ messages can express and also helps interoperability between systems.
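To make that concrete, here is an illustrative IS-07-style event sketched in Python. The field names are paraphrased from the specification and the identifiers are made up, so consult the AMWA IS-07 documents for the definitive schema:

```python
# An illustrative IS-07-style event message (a sketch, not the normative
# schema). The points from the talk are visible here: the payload is typed
# ("boolean" rather than a bare contact closure) and the timing block carries
# a PTP-derived TAI timestamp ("<seconds>:<nanoseconds>"), so a receiver can
# act on the event sample-accurately.
import json

event = {
    "identity": {
        # made-up UUIDs, purely for illustration
        "source_id": "9ce6d0d0-76e5-4a52-b8ab-1b3dd4c3b6f8",
        "flow_id":   "68eab742-29a6-44a9-9f9d-0c2ea8f13f21",
    },
    "event_type": "boolean",
    "timing": {
        "creation_timestamp": "1620000000:000000000",  # TAI seconds:nanoseconds
    },
    "payload": {"value": True},   # the 'GPI' state itself
}

print(json.dumps(event, indent=2))
```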

Miroslav explains the ways in which these messages are passed over the network and how IS-07 interacts with other specifications such as IS-05 and BCP-002-01. He explains how IS-07 was used in the Techno Project at tpc, Zurich, and then takes us through a range of examples of how IS-07 can be used, including synchronisation of GUIs and monitoring, as well as routing based on GPI.

Watch now! | Download the slides

Speakers

Miroslav Jeras
CTO,
Pebble Beach Systems

Video: A paradigm shift in codec standards – MPEG-5 Part 2 LCEVC

LCEVC (Low Complexity Enhancement Video Coding) is a low-complexity encoder/decoder in the process of standardisation as MPEG-5 Part 2. Instead of being an entirely new codec, LCEVC improves the detail and sharpness of any base video codec (e.g. AVC, HEVC, AV1, EVC or VVC) while lowering the overall computational complexity, expanding the range of devices that can access high-quality and/or low-bitrate video.

The idea is to use a base codec at a lower resolution and add an additional layer of encoded residuals to correct its artefacts. Details are encoded with a directional decomposition transform using a very small matrix (2×2 or 4×4), which is efficient at preserving high frequencies. As LCEVC uses parallelised techniques to reconstruct the target resolution, it encodes video faster than a full-resolution base encoder.
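The layered idea can be sketched in a few lines of Python. This illustrates the principle only, not V-Nova’s implementation; a real encoder would quantise and entropy-code the residual rather than carry it losslessly:

```python
# Sketch of the LCEVC-style layering: encode a downscaled base, upscale it
# back, and send the residual (the detail the base layer lost) as a separate
# enhancement layer that the decoder adds back on top.
import numpy as np

def downscale2x(img):
    """Average 2x2 blocks -- stands in for the real scaler."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2x(img):
    """Nearest-neighbour upsample -- stands in for the real scaler."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

frame = np.random.rand(8, 8)          # stand-in for a full-resolution frame

base = downscale2x(frame)             # what the base codec (AVC, HEVC...) carries
predicted = upscale2x(base)           # decoder-side reconstruction from the base
residual = frame - predicted          # enhancement layer: mostly small,
                                      # high-frequency values, cheap to transform-code

reconstructed = predicted + residual  # decoder output: base + enhancement
print(np.allclose(reconstructed, frame))  # True here; LCEVC quantises the residual
```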

LCEVC allows enhancement layers to be added on top of existing bitstreams, so, for example, UHD resolution can be delivered where only HD was possible before, thanks to sharing decoding between the ASIC and the CPU. LCEVC can be decoded in lightweight software, even in HTML5.

In this presentation, Guido Meardi from V-Nova introduces LCEVC and answers a few important questions, including: is it suitable for very high quality/bitrate compression, and will it work with future codecs? He also shows performance data and benchmarks for live and VoD streaming, illustrating the compression quality and encoding complexity benefits achievable with LCEVC as an enhancement to H.264, HEVC and AV1.

Watch now!

Speaker

Guido Meardi
CEO and Co-Founder,
V-Nova Ltd.