The SMPTE ST 2110-40 standard specifies the real-time, RTP transport of SMPTE ST 291-1 Ancillary Data packets. It allows creation of IP essence flows carrying the VANC data familiar to us from SDI (like AFD, closed captions or ad triggering), complementing the existing video and audio portions of the SMPTE ST 2110 suite.
This presentation, by Bill McLaughlin from EEG, is an updated tutorial on subtitling, closed captioning, and other ancillary data workflows using the ST 2110-40 standard. Topics include synchronization, merging of data from different sources and standards conversion.
Building on Bill’s previous presentation at the IP Showcase, this talk at NAB 2019 demonstrates a big increase in the number of vendors supporting the ST 2110-40 standard. Previously, a generic packet analyser like Wireshark with a dissector was recommended for troubleshooting IP ancillary data, but now most leading multiviewer / analyser products can display captioning, subtitling and timecode from 2110-40 streams. At the recent “JT-NM Tested Program” event, 29 products passed 2110-40 Reception Validation. Moreover, 27 products passed 2110-40 Transmitter Validation, which means that their output can be reconstructed into SDI video signals with appropriate timing and then decoded correctly.
Bill points out that ST 2110-40 is not really a new standard at this point: it only defines how to carry ancillary data from the traditional payloads over IP. Special care needs to be taken when different VANC data packets are concatenated in the IP domain. Many existing devices are simple ST 2110-40 receivers, which would require a kind of “VANC funnel” to create a combined stream of all the relevant ancillary data, making sure that line numbers and packet types don’t conflict, especially when signals need to be converted back to SDI.
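The “VANC funnel” idea can be sketched in code. The following is a minimal, hypothetical Python illustration (the packet model and function names are illustrative, not from the standard or from Bill’s talk): it merges ANC packets from several flows and flags packets that would collide at the same line and horizontal offset when mapped back to SDI.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of an ST 291-1 ANC packet as carried
# in an ST 2110-40 flow (field names are illustrative, not normative).
@dataclass(frozen=True)
class AncPacket:
    did: int          # Data Identifier
    sdid: int         # Secondary Data Identifier
    line: int         # VANC line the packet maps to on SDI output
    h_offset: int     # horizontal offset within that line
    payload: bytes

def funnel(flows):
    """Merge ANC packets from several 2110-40 flows into one stream,
    raising on packets that would collide on the same SDI line/offset."""
    merged, occupied = [], {}
    for flow_name, packets in flows.items():
        for pkt in packets:
            key = (pkt.line, pkt.h_offset)
            if key in occupied and occupied[key] != flow_name:
                raise ValueError(
                    f"line {pkt.line} offset {pkt.h_offset} already used "
                    f"by flow {occupied[key]!r}; remap before SDI conversion")
            occupied[key] = flow_name
            merged.append(pkt)
    # order as the packets would appear in the VANC space of the output frame
    return sorted(merged, key=lambda p: (p.line, p.h_offset))
```

A real funnel would also have to respect per-DID/SDID rules and field/frame timing; this sketch only shows the conflict-checking step.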
There is a new ST 2110-41 standard being developed for additional ancillary data that does not match up with the ancillary data standardised in ST 291-1. Another idea discussed is to move away from the SDI VANC data format and use a TTML track (Timed Text Markup Language – textual information associated with timing information) to carry ancillary information.
The ST 2110-40 standard specifies the real-time, RTP transport of SMPTE ST 291-1 Ancillary Data packets. It allows creation of IP essence flows carrying the VANC data known from SDI (like AFD, closed captions or ad triggering), complementing the existing video and audio portions of the SMPTE ST 2110 suite.
In this video, Bill McLaughlin introduces 2110-40 and shows its advantages for closed captioning. With video, audio and ancillary data broken into separate essence flows, you no longer need full SDI bandwidth to process closed captioning, and transcription can be done by subscribing to a single audio stream whose bandwidth is less than 1 Mbps. That allows for very high processing density, with up to 100 channels of closed captioning in a 1 RU server.
Another benefit is that a single ST 2110-40 multicast containing closed captioning can be associated with multiple videos (e.g. for two different networks, or dirty and clean feeds), typically using NMOS connection management. This translates into additional bandwidth savings and lower cost, as you don’t need separate CC/Subtitling encoders working in the SDI domain.
Test and measurement equipment for ST 2110-40 is still under development. However, with data rates of 50–100 kbps per flow, monitoring is very manageable and you can use COTS equipment and a generic packet analyser like Wireshark with a dissector available on GitHub.
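To give a feel for what such a dissector looks at, here is a minimal Python sketch (an assumption on my part, not taken from the talk) that parses the fixed payload header of an ST 2110-40 RTP payload as defined in RFC 8331: a 16-bit Extended Sequence Number, a 16-bit Length, an 8-bit ANC_Count, and a 2-bit F field indicating progressive or field 1/field 2.

```python
import struct

def parse_anc_header(payload: bytes):
    """Parse the fixed 8-byte payload header of an ST 2110-40
    (RFC 8331) RTP payload. A sketch for eyeballing captured packets;
    the per-packet ANC data words that follow are not decoded here."""
    if len(payload) < 8:
        raise ValueError("payload shorter than the 8-byte ANC header")
    ext_seq, length = struct.unpack_from(">HH", payload, 0)
    anc_count = payload[4]
    f_bits = payload[5] >> 6   # 0b00 progressive, 0b10 field 1, 0b11 field 2
    return {"ext_seq": ext_seq, "length": length,
            "anc_count": anc_count, "f": f_bits}
```

In practice you would let Wireshark’s dissector do this, but the point stands: the flows are small and simple enough that COTS tooling copes easily.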
As much as video and audio are an essential part of watching a video, increasingly so is Timed Text (AKA Subtitles or Closed Captions). Legally required in some countries, its practical use beyond the hard of hearing is increasingly acknowledged. Whether for a soundless TV in a reception area or to help you follow the programme over the noise, Timed Text is here to stay, online and in traditional broadcast. With the FCC declaring SMPTE-TT a ‘Safe Harbor’ format, it has become a default format for subtitle interchange in the professional world.
In this webinar:
Why did we need a language for Timed Text?
An overview of TTML (Timed Text Markup Language from the W3C)
Examples of TTML
How SMPTE-TT extends TTML
How SMPTE-TT ends up as Closed Captions/CEA-608
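As a taste of the topics above, here is a minimal TTML document (the cue content is my own illustrative example, not material from the webinar) together with a few lines of Python that pull out the timed cues, showing the begin/end structure that SMPTE-TT builds on:

```python
import xml.etree.ElementTree as ET

# Minimal illustrative TTML document: one <p> cue with begin/end times.
TTML = """<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.500">Hello, world.</p>
    </div>
  </body>
</tt>"""

NS = {"tt": "http://www.w3.org/ns/ttml"}

def cues(doc: str):
    """Return (begin, end, text) for each timed paragraph in a TTML doc."""
    root = ET.fromstring(doc)
    return [(p.get("begin"), p.get("end"), p.text or "")
            for p in root.findall(".//tt:p", NS)]
```

SMPTE-TT layers extra metadata and tunnelling attributes on top of documents like this so that the same cues can be mapped down to CEA-608 closed captions.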
Webinar: Thursday 14th December 19:00 GMT / 11am PT
Digging further into how AI can help broadcasters, this webinar shows how AI can generate a wealth of metadata which can improve your workflows and the value of your content.
IBM discusses how Watson can:
Generate closed captioning/subtitles and transcriptions
Improve content search and discovery
Increase relevance of viewer recommendations
Monitor for compliance violations
Automate highlight identification and clipping
Join Dave MacDonald, Vice President of Global Sales for IBM Watson & Cloud Platform, and Eric Schumacher-Rasmussen, Editor at Streaming Media, to talk about applying AI to broadcast.
Views and opinions expressed on this website are those of the author(s) and do not necessarily reflect those of SMPTE or SMPTE Members.
This website is presented for informational purposes only. Any reference to specific companies, products or services does not represent promotion, recommendation, or endorsement by SMPTE.