Video: Introduction To AES67 & SMPTE ST 2110

While standardisation of video and audio over IP is welcome, it does leave us with a plethora of standards numbers to keep track of, along with interoperability edge cases. Audio-over-IP standard AES67 is part of the SMPTE ST 2110 suite and was born largely from RAVENNA, which is still in use in its own right. It’s with this backdrop that Andreas Hildebrand from ALC NetworX, who have been developing RAVENNA for 10 years now, takes the mic to explain how it all fits together. Whilst there are many technologies at play, this webinar focusses on AES67 and ST 2110.

Andreas explains how AES67 started out of a plan to unite the many proprietary audio-over-IP formats. Synchronisation, for instance – as in ST 2110, as we’ll see later – was based on PTP. Andreas gives an overview of this synchronisation and then shows how they looked at each of the OSI layers and defined a technology that could serve everyone. RTP, the Real-time Transport Protocol, has long been used to carry video and audio, so it made a natural choice for transport. Andreas highlights the important timing information in the RTP headers and how streams can be delivered by unicast or IGMP multicast.
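
To make the header discussion concrete, here’s a minimal sketch, not from the talk, of unpacking the fixed 12-byte RTP header defined in RFC 3550 to get at the sequence number and the 32-bit timestamp a receiver uses to place the audio against its clock:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Unpack the fixed 12-byte RTP header defined in RFC 3550."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version":      b0 >> 6,     # always 2 for RTP
        "payload_type": b1 & 0x7F,   # matches the dynamic PT signalled in SDP
        "sequence":     seq,         # spots packet loss and reordering
        "timestamp":    timestamp,   # media-clock ticks, e.g. 48 kHz for AES67 audio
        "ssrc":         ssrc,        # identifies the sending source
    }
```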

As for the audio itself, standard PCM is the format of choice. Andreas details the options available, such as 24-bit audio with 8 channels and 48 samples per packet. By varying the permutations, the sample rate can be increased to 96kHz or the number of audio channels changed. To signal all of this format information, Session Description Protocol (SDP) messages are sent: small text descriptions, defined in RFC 4566, outlining the format of the audio to come. For a deeper introduction to IP basics and these topics, have a look at Ed Calverley’s talk.
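
As an illustration of what such a description looks like, here is a hypothetical AES67-style SDP body held in a Python string; the addresses, session name and PTP grandmaster ID are invented, but the attributes follow RFC 4566 (and RFC 7273 for the clock reference) for a 24-bit, 8-channel, 48kHz stream with 1ms packets – i.e. 48 samples per packet:

```python
# A hypothetical AES67-style session description (RFC 4566), with the clock
# source signalled per RFC 7273. Addresses, names and the grandmaster ID are
# invented for illustration.
EXAMPLE_SDP = """\
v=0
o=- 1423986 1423994 IN IP4 192.168.1.50
s=Example AES67 stream
c=IN IP4 239.69.0.10/32
t=0 0
m=audio 5004 RTP/AVP 98
a=rtpmap:98 L24/48000/8
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-11-22-FF-FE-33-44-55:0
a=mediaclk:direct=0
"""
```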

The second half of the video is an introduction to ST 2110. A deeper dive can be found elsewhere on the site from Wes Simpson.
Andreas starts from ST 2022-6, showing how that was an SDI-based format where all the audio, video and metadata were combined together. ST 2110 brings the splitting of media, known as ‘essences’, which allows them to follow separate workflows without requiring lots of de-embedding and re-embedding processes.

Like many modern standards – ATSC 3.0 being another example – SMPTE ST 2110 is a suite of standards documents. Andreas takes the time to explain each one, as well as those currently being worked on. The first is ST 2110-10, which defines the use of PTP for timing and synchronisation. This uses SMPTE ST 2059 to relate PTP time to the phase of the media essences.
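
The idea can be sketched roughly as follows: because every media clock is taken to be phase-aligned to the common PTP epoch, the expected RTP timestamp at any instant can be derived directly from PTP time. This is a simplification that ignores the precise epoch and TAI/leap-second handling in ST 2059-1:

```python
RTP_WRAP = 2 ** 32  # RTP timestamps are 32 bits and wrap around

def rtp_timestamp_at(ptp_seconds: float, media_clock_hz: int) -> int:
    """Expected RTP timestamp for a given PTP time.

    Simplified view of the ST 2059 idea: every media clock is assumed to have
    been in phase at the common epoch, so the timestamp is simply the elapsed
    time expressed in media-clock ticks, modulo 2^32.
    """
    return int(ptp_seconds * media_clock_hz) % RTP_WRAP

# A 48 kHz audio essence and a 90 kHz video essence share the same reference:
now = 1_650_000_000.5          # an arbitrary PTP time, in seconds
audio_ts = rtp_timestamp_at(now, 48_000)
video_ts = rtp_timestamp_at(now, 90_000)
```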

2110-20 is up next: the main standard defining the use of uncompressed video, with headline features such as being raster/resolution agnostic, supporting a range of colour sampling and more. 2110-21 defines traffic shaping. Andreas takes time to explain why traffic shaping is necessary and what Narrow, Narrow-Linear and Wide mean in terms of packet timing. Finishing the video theme, 2110-22 defines the carriage of mezzanine-compressed video. Intended for light, fast codecs such as TICO and JPEG XS, this is the first time compressed media has entered the 2110 suite.
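
To get a feel for why shaping matters, here’s a rough, illustrative calculation, not taken from the talk, of how many packets one 1080p50 10-bit 4:2:2 frame becomes and how tightly the packets would be spaced if paced evenly across the frame period (roughly the Narrow-Linear idea), whereas a Wide sender is permitted to burst and lean on receiver buffering. The payload size is an assumption for the sake of the arithmetic:

```python
# Rough, illustrative numbers only; the real ST 2110-21 timing model
# (gapped vs linear senders, receiver buffer limits, etc.) is more involved.
WIDTH, HEIGHT = 1920, 1080
BITS_PER_PIXEL = 20          # 10-bit 4:2:2 averages 20 bits per pixel
FPS = 50
PAYLOAD_BYTES = 1200         # an assumed RTP payload size per packet

frame_bytes = WIDTH * HEIGHT * BITS_PER_PIXEL // 8
packets_per_frame = -(-frame_bytes // PAYLOAD_BYTES)   # ceiling division
frame_period_us = 1_000_000 / FPS

# Pacing packets evenly across the whole frame period is roughly the
# Narrow-Linear idea; Wide senders may burst and rely on receiver buffering.
even_spacing_us = frame_period_us / packets_per_frame
print(f"{packets_per_frame} packets per frame, "
      f"~{even_spacing_us:.2f} µs apart if evenly paced")
```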

2110-30 marks the beginning of the audio standards, describing how AES67 can be used. As Andreas demonstrates, AES67 allows some modes which are not compatible with 2110-30, so he spends time explaining the constraints and how to implement them. For more detail on this topic, check out his previous talk on the matter. 2110-31 introduces AES3 audio which, as in SDI, can carry both PCM audio and non-PCM audio such as Dolby E and Dolby D.
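
As a rough aide-memoire of those constraints, the 2110-30 conformance levels are often summarised along the lines below; treat this as a simplified sketch from memory and consult the standard for the normative definitions:

```python
# Commonly quoted ST 2110-30 conformance levels; a simplified summary rather
# than the normative text, so verify against the standard before relying on it.
ST2110_30_LEVELS = {
    "A": {"sample_rate_hz": 48_000, "packet_time_ms": 1.0,   "max_channels": 8},
    "B": {"sample_rate_hz": 48_000, "packet_time_ms": 0.125, "max_channels": 8},
    "C": {"sample_rate_hz": 48_000, "packet_time_ms": 0.125, "max_channels": 64},
}
```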

Finishing up the talk, we hear about 2110-40, which governs the transport of ancillary metadata, and get a look at the standards still being written: 2110-23 (single video essence over multiple 2110-20 streams), 2110-24 (transport of SD signals) and 2110-41 (transport of extensible, dynamic metadata).

Watch now!
Speaker

Andreas Hildebrand
Senior Product Manager,
ALC NetworX GmbH.

On Demand Webinar: The Technology of Motion-Image Acquisition

A lot of emphasis is put on the tech specs of cameras, but this misses much of what makes motion-image acquisition as much an art form as a science. To understand the physics of lenses, it’s vital we also understand the psychology of perception. And to understand what ‘4K’ really means, we need to understand how the camera records the light and how it stores the data. Getting a grip on these core concepts allows us to navigate a world of mixed messages where every camera manufacturer, from webcam to phone, from DSLR to cinema, is vying for our attention.

In the first of four webinars produced in conjunction with SMPTE, Russell Trafford-Jones from The Broadcast Knowledge welcomes SMPTE Fellows Mark Schubin and Larry Thorpe to explain these fundamentals, providing a great intro for those new to the topic and filling in some blanks for those who have heard it before!

Russell will start by introducing the topic and exploring what makes some cameras suitable for some types of shooting, say live television, and others for cinema. He’ll talk about the place for smartphones and DSLRs in our video-everywhere culture. Then he’ll examine the workflows needed for different genres, which drive the definitions of these cameras and lenses. If your live TV show is going to be seen 2 seconds later by 3 million viewers, that will determine many features of your camera that digital cinema doesn’t have to deal with, and vice versa.

Mark Schubin will be talking about lighting, optical filtering, sensor sizes and lens mounts. Mark spends some time explaining how light is created and what it’s made of: the ‘white’ that we see may be made of thousands of wavelengths of light, or just a few. So the type of light can be important for lighting a scene, and knowing about it important when deciding on your equipment. The sensors that then receive this light are also well worth understanding. It’s well known that there are red-, green- and blue-sensitive pixels, but less well known is that there is a microlens in front of each one. Granted, it’s the pricey one, but the lens we think most about is just one among several million. Mark explains why these microlenses are there and the benefits they bring.

Larry Thorpe, from Canon, will take on the topic of lenses, starting from the basics of what we’re trying to achieve with a lens and working up to why we need so many pieces of glass to make one. He’ll examine the important aspects of a lens that determine its speed and focal length. Prime and zoom are important types of lens to understand as each represents a compromise. Furthermore, we see that zoom lenses take careful design to ensure that focus is maintained throughout the zoom range, also known as tracking.

Larry will also examine the outputs of the cameras, the most obvious being the SDI out of the CCU of broadcast cameras and the raw output of cinema cameras. For film use, maintaining quality is usually paramount so, where possible, nothing is discarded; hence ‘raw’ files, so named because they record, as closely as is practical, the actual sensor data received. The broadcast equivalent is predominantly Y’CbCr with 4:2:2 colour subsampling, meaning the sensor data has been interpreted and processed into finished pixels and half of the colour information has been discarded. This still looks great for many uses, but when you want to put your image through a meticulous post-production process, you need the complete picture.
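
As a rough illustration of what that discarding means in data terms, this sketch (invented for this article, not from the webinar) compares the size of one 1080p 10-bit frame with full colour (4:4:4) against its 4:2:2 equivalent; raw files are different again, storing one value per photosite before de-Bayering:

```python
# Illustrative size of one 1920x1080, 10-bit frame before compression.
WIDTH, HEIGHT, BIT_DEPTH = 1920, 1080, 10

full_colour_bits = WIDTH * HEIGHT * 3 * BIT_DEPTH   # 4:4:4 - three samples per pixel
subsampled_bits  = WIDTH * HEIGHT * 2 * BIT_DEPTH   # 4:2:2 - chroma shared between pixel pairs

print(f"4:4:4 frame: {full_colour_bits / 8e6:.2f} MB")
print(f"4:2:2 frame: {subsampled_bits / 8e6:.2f} MB "
      f"({subsampled_bits / full_colour_bits:.0%} of the full-colour size)")
```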

The SMPTE Core Concepts series of webcasts is free to all and aims to help individuals deepen their knowledge. This webinar is in collaboration with The Broadcast Knowledge which, by talking about a new video or webinar every day, helps empower each person in the industry by offering a single place to find educational material.

Watch now!
Speakers

Mark Schubin
Engineer and Explainer
Larry Thorpe
Senior Fellow,
Canon U.S.A., Inc.
Russell Trafford-Jones
Editor, The Broadcast Knowledge
Manager, Services & Support, Techex
Exec Member, IET Media

Video: ATSC 3.0 Basics, Performance and the Physical Layer

ATSC 3.0 is a revolutionary technology bringing IP into the realm of RF transmission; it is gaining traction in North America and is already deployed in South Korea. Similar to DVB-I, ATSC 3.0 provides a way to unite the world of online streaming with that of ‘linear’ broadcast, giving audiences and broadcasters the best of both worlds. Looking beyond ‘IP’, the modulation schemes provided are much improved over ATSC 1.0, giving much better reception for the viewer and flexibility for the broadcaster.

Richard Chernock, now retired, was the CSO of Triveni Digital when he gave this talk introducing the standard as part of a series of talks on the topic. ATSC, formed in 1982, brought the first wave of digital television to the States and elsewhere, explains Richard, as he looks at what ATSC 1.0 delivered and what, we now see, it lacked. For instance, its fixed 19.2Mbps bitrate hardly provides a flexible foundation for a modern distribution platform. We then look at the previously mentioned idea that ATSC 3.0 should glue together live TV, usually via broadcast, with online VoD/streaming.

The next segment of the talk looks at how the standard breaks down into separate documents. Most modern standards, like SMPTE’s 2022 and 2110, are actually a suite of individual standards documents united under one name. Just as SMPTE 2110-10, -20, -30 and -40 come together to explain how timing, video, audio and metadata work to produce professional media over IP, ATSC 3.0 has sections explaining how security, applications, the RF/physical layer and management work. Richard follows this up with a look at the protocol stack, which serves to explain which parts are carried over TCP, which over UDP, and how the work is split between broadcast and broadband.

The last section of the talk looks at the physical layer – that is to say, how the signal is broadcast over RF and the resultant performance. Richard explains the newer techniques which improve the ability to receive the signal, but highlights that, as ever, it’s a balancing act between reception and bandwidth. ATSC 3.0’s benefit is that the broadcaster gets to choose where on that scale they want to broadcast, tuning for indoor reception, for high bit-rate reception, or anywhere in between. With operation possible at below -6dB SNR, plus EAS wake-up, we’re left with the feeling that there is a large improvement over ATSC 1.0.

The talk finishes with two headline features of ATSC 3.0. The first is PLPs, or Physical Layer Pipes, which allow separate channels to be created within the same RF channel. Each of these can have its own robustness-versus-bit-rate trade-off, allowing a range of types of service to be provided by one broadcaster. The other is Layered Division Multiplexing, which allows PLPs to be transmitted on top of each other, enabling 100% utilisation of the available spectrum.

Watch now!
Speaker

Dr. Richard Chernock
Former CSO,
Triveni Digital

Video: An Introduction to fibre optic cabling

Many of us take fibre optics for granted, but how much about the basics do we actually know… or remember? You may be lucky enough to work in a company that only uses one type of fibre and connector, but in a job interview, it pays to know what happens in the wider world. Fortunately, Phil Crawley is here to explain fibre optics from scratch.

This introduction to fibre looks at its uses in broadcast. Simply put, fibres are used for high-speed networking and for long-distance cabling of baseband signals such as SDI, audio or RF. The meat of the topic is that there are two types of fibre, multi-mode and single-mode. It’s really important to know which one you’re going to be using; Phil explains why, showing the two ways they manage to keep light moving down the glass and get it to the other end.

The talk looks at the history of multi-mode fibres, which have continued to improve over the years, as recognised by the ‘OM’ number, currently stretching to OM5 (an advance on the OM4 that this talk considers). Since multi-mode comes in several versions, it’s possible to have mismatches if you send from one fibre into another. Phil visits these scenarios, explaining how differences in launch (laser vs. LED) and core diameter affect the efficiency of moving light from one side of the junction to the other.
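
As a quick, simplified reference, not from the talk, the common multi-mode grades and the single-mode fibre Phil contrasts them with can be summarised roughly like this, with a small helper to flag the core-diameter mismatches he describes:

```python
# Simplified fibre reference: nominal core diameters and typical launch sources.
FIBRE_TYPES = {
    "OM1":         {"core_um": 62.5, "typical_launch": "LED"},
    "OM2":         {"core_um": 50.0, "typical_launch": "LED"},
    "OM3":         {"core_um": 50.0, "typical_launch": "VCSEL laser"},
    "OM4":         {"core_um": 50.0, "typical_launch": "VCSEL laser"},
    "OM5":         {"core_um": 50.0, "typical_launch": "VCSEL laser"},
    "single-mode": {"core_um": 9.0,  "typical_launch": "laser"},
}

def core_mismatch(a: str, b: str) -> bool:
    """True if joining fibre `a` to fibre `b` crosses core diameters (and so loses light)."""
    return FIBRE_TYPES[a]["core_um"] != FIBRE_TYPES[b]["core_um"]

# e.g. patching legacy OM1 into an OM3 run:
assert core_mismatch("OM1", "OM3")
```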

On that note, connectors are of key importance as there’s nothing worse than turning up with a fibre patch lead with the wrong connectors on the end. Phil explains the differences, then looks at how to splice fibres together and the issues that need to be taken care of to do it well, along with easy ways to fault-find. Phil finishes the talk by explaining how single-mode differs and offering some resources to learn more.

This video was recorded at a Jigsaw24 Tech Breakfast while Phil Crawley was their Chief Engineer. Download the slides

Watch now!
Speaker

Phil Crawley
Lead Engineer, Media Engineers Ltd.
Former Chief Engineer, Jigsaw24