Video: SMPTE Technical Primers

The Broadcast Knowledge exists to help individuals up-skill whatever their starting point. Videos like this one, giving an introduction to a large number of topics, are far too rare. For those starting out, or those who need to revise a topic, this really hits the mark, particularly as it covers many new areas.

John Mailhot takes the lead on SMPTE ST 2110, explaining that it’s built on separate media (essence) flows. He covers how synchronisation is maintained, gives an overview of the many parts of the ST 2110 suite and talks in more detail about the audio and metadata parts of the standard.

Eric Gsell discusses digital archiving and the considerations which come with deciding what formats to use. He explains colour space, the CIE model and the colour spaces we use such as 709, 2100 and P3 before turning to file formats. With the advent of HDR video and displays which can show bright video, Eric takes some time to explain why this could represent a problem for visual health as we don’t fully understand how the displays and the eye interact with this type of material. He finishes off by explaining the different ways of measuring the light output of displays and their standardisation.

Yvonne Thomas talks about the cloud, starting by explaining the difference between platform as a service (PaaS), infrastructure as a service (IaaS) and similar cloud terms. As cloud migrations are forecast to grow significantly, Yvonne looks at the drivers behind this and the benefits it can bring when used in the right way. Using the cloud, Yvonne shows, can be an opportunity to improve workflows and add more feedback and iterative refinement into your products and infrastructure.

Looking at video deployments in the cloud, Yvonne introduces the video codecs AV1 and VVC, both, in their own way, successors to HEVC/H.265, as well as the two transport protocols SRT and RIST, which exist to reliably send video with low latency over lossy networks such as the internet. To learn more about these protocols, check out this popular talk on RIST by Merrick Ackermans and this SRT Overview.

Rounding off the primer, Linda Gedemer from Source Sound VR introduces immersive audio, measuring sound output (SPL) from speakers and the interesting problem of front speakers in cinemas. They have long been placed behind the screen, which means the screen has to be perforated to let the sound through, and this interferes with the sound itself. Now that cinema screens are changing to solid screens, not completely dissimilar to large outdoor video displays, the speakers have to move. But with them out of the line of sight, how can we keep the sound in the right place for the audience?

This video is a great summary of many of the key challenges in the industry and works well for beginners and those who just need to keep up.

Watch now!
Speakers

John Mailhot
Systems Architect for IP Convergence,
Imagine Communications
Eric Gsell
Staff Engineer,
Dolby Laboratories
Linda Gedemer, PhD
Technical Director, VR Audio Evangelist
Source Sound VR
Yvonne Thomas
Strategic Technologist
Digital TV Group

Video: Uncompressed Video over IP & PTP Timing

PTP and uncompressed video go hand in hand so this primer on ST 2022 and ST 2110 followed by a PTP deep dive is a great way to gain your footing in the uncompressed world.

In the longest video yet on The Broadcast Knowledge, Steve Holmes delivers two talks and a practical demo on behalf of Tektronix for the SMPTE San Francisco section. He introduces the reasons for moving to uncompressed video over IP and the solutions that make it possible, then goes through the key standards and technologies: from ST 2022, the -6 video and -7 seamless protection switching parts, and from ST 2110, the major parts covering timing, video, audio and metadata.

After that, at the 47 minute mark, Steve introduces the need for PTP by reference to black and burst, and goes on to explain how SMPTE’s ST 2059 brings PTP into the broadcast domain and helps us synchronise uncompressed essences. He covers how PTP actually works, boundary clocks, Grandmaster/Master/Slave clocks and everything else you need to understand the system.
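
As a rough illustration of the delay request-response exchange Steve walks through, the sketch below applies the standard IEEE 1588 offset and path-delay arithmetic to some made-up timestamps; the numbers and function name are purely hypothetical, not taken from the talk.

```python
# Sketch of the IEEE 1588 delay request-response arithmetic.
# t1: master sends Sync          t2: slave receives Sync
# t3: slave sends Delay_Req      t4: master receives Delay_Req
# All timestamps below are invented values in seconds, for illustration only.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (offset_from_master, mean_path_delay)."""
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset_from_master = (t2 - t1) - mean_path_delay
    return offset_from_master, mean_path_delay

# Example: a slave clock 1.5 ms ahead of the master with a 0.5 ms one-way path delay.
offset, delay = ptp_offset_and_delay(t1=10.0000, t2=10.0020, t3=10.0030, t4=10.0020)
print(offset, delay)  # ~0.0015 s offset, ~0.0005 s path delay
```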

This video finishes with plenty of questions plus a look at the GUI of measurement equipment showing PTP in real life.

Watch now!
Speaker

Steve Holmes
Senior Applications Engineer,
Tektronix

Video: Live Closed Captioning and Subtitling in SMPTE 2110-40

The ST 2110-40 standard specifies the real-time, RTP transport of SMPTE ST 291-1 Ancillary Data packets. It allows the creation of IP essence flows carrying the VANC data known from SDI (such as AFD, closed captions or triggering), complementing the existing video and audio portions of the SMPTE ST 2110 suite.

In this video, Bill McLaughlin introduces 2110-40 and shows its advantages for closed captioning. With video, audio and ancillary data broken into separate essence flows, you no longer need full SDI bandwidth to process closed captioning, and transcription can be done by subscribing to a single audio stream whose bandwidth is less than 1 Mbps. That allows for a very high processing density, with up to 100 channels of closed captioning in a 1RU server.
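
To give a feel for how lightweight these flows are to handle, here is a minimal sketch that joins a -40 multicast and reads the standard RTP header of each ancillary-data packet. The group address, port and interface are assumptions for illustration; real flows are described by SDP or assigned via NMOS.

```python
# Sketch: subscribe to a (hypothetical) ST 2110-40 multicast and inspect each packet.
import socket
import struct

GROUP, PORT, IFACE = "239.1.1.40", 5004, "0.0.0.0"  # placeholder values

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Join the multicast group on the chosen interface.
mreq = socket.inet_aton(GROUP) + socket.inet_aton(IFACE)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, _ = sock.recvfrom(2048)
    # Fixed 12-byte RTP header (RFC 3550); the ST 291-1 ANC payload follows it.
    flags, pt_marker, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    marker = pt_marker >> 7            # marker bit
    payload_type = pt_marker & 0x7F    # dynamic payload type signalled in the SDP
    print(f"seq={seq} ts={timestamp} pt={payload_type} m={marker} bytes={len(packet)}")
```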

Another benefit is that a single ST 2110-40 multicast containing closed captioning can be associated with multiple videos (e.g. for two different networks, or dirty and clean feeds), typically using NMOS connection management. This translates into additional bandwidth savings and lower cost, as you don’t need separate CC/subtitling encoders working in the SDI domain.
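
For a sense of how such an association might be made with NMOS IS-05 connection management, the rough sketch below PATCHes a receiver’s staged endpoint to subscribe it to the shared caption flow. The node URL, receiver ID and SDP contents are placeholders, not values shown in the video.

```python
# Sketch: point an ANC receiver at the shared ST 2110-40 flow via NMOS IS-05.
import json
import urllib.request

NODE = "http://10.0.0.20:8080"                      # assumed IS-05 node address
RECEIVER = "0d3c1c2e-0000-0000-0000-000000000001"   # assumed receiver UUID

staged = {
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
    # SDP describing the single shared -40 multicast (placeholder contents).
    "transport_file": {"type": "application/sdp", "data": "v=0\r\n..."},
}

req = urllib.request.Request(
    f"{NODE}/x-nmos/connection/v1.0/single/receivers/{RECEIVER}/staged",
    data=json.dumps(staged).encode(),
    headers={"Content-Type": "application/json"},
    method="PATCH",
)
print(urllib.request.urlopen(req).status)
```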

Test and measurement equipment for ST 2110-40 is still under development. However, with data rates of 50-100 kbps per flow, monitoring is very manageable and you can use COTS equipment and a generic packet analyser like Wireshark, for which a dissector is available on GitHub.

Speaker

Bill McLaughlin
VP Product Development
EEG Enterprises