Video: The Case To Caption Everything

To paraphrase a cliché, “you are free to put black and silence to air, but if you do it without captions, you’ll go to prison.” Captions are useful to the deaf and hard of hearing, as well as to those who are neither. In many places, failing to caption videos is seen as so discriminatory that mandatory quotas exist. The saying above alludes to US federal and local laws which lay down fines for non-compliance, though whether you could truly go to prison is not clear.

The case for captioning:
“13.3 Million Americans watch British drama”

In many parts of the world, ‘subtitles’ means the same as ‘captions’ does in countries such as the US. In this article, I shall use the word ‘captions’ to match the term used in the video. As Bill Bennett from ENCO Systems explains, closed captions are sent as data along with the video, meaning you can ask your receiver to turn the display of the text on or off.

In this talk from the Midwest Broadcast Multimedia Technology Conference, we hear not only why you should caption but are also introduced to the techniques for creating and transmitting captions. Bill starts by introducing us to stenography, the technique of typing on special machines to produce real-time transcripts. This helps explain how resource-intensive creating captions with humans is: it is a highly specialised skill which, alone, makes it difficult for broadcasters to deliver captions en masse.

The alternative, naturally, is to have computers do the task. Whilst they are cheaper, they have problems understanding audio over noise and with multiple people speaking at once. The compromise often used, for instance by BBC Sport, is to have someone re-speak the audio into the computer. This harnesses the strengths of the human brain together with the speed of computing: the re-speaker can enunciate and add emphasis to work around idiosyncrasies in the recognition.
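
As a rough illustration of the fully automatic end of that spectrum, here is a minimal dictation-style sketch in Python. It assumes the third-party SpeechRecognition package (which needs PyAudio for microphone input) and its Google Web Speech backend; these are my assumptions, not tools Bill demonstrates.

    # Minimal automatic-captioning sketch: listen to a microphone (e.g. a
    # re-speaker) in short chunks and print each recognised phrase as a caption.
    import speech_recognition as sr

    recogniser = sr.Recognizer()
    with sr.Microphone() as source:
        recogniser.adjust_for_ambient_noise(source)  # calibrate against room noise
        while True:
            audio = recogniser.listen(source, phrase_time_limit=5)
            try:
                print(recogniser.recognize_google(audio))  # one caption line
            except sr.UnknownValueError:
                pass  # chunk was unintelligible; a human re-speaker would retry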

Bill revisits the numerous motivations for captioning content. He talks about the legal reasons, particularly within the US, but also mentions the usefulness of captions in situations where you don’t want audio from TVs, such as receptions and shop windows, as well as in noisy environments. He also makes the point that once captions exist as data, the broadcaster can use that data for search, sentiment analysis and archive retrieval, among other things.
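
To make that reuse point concrete, here is a toy sketch, with entirely illustrative data, of turning timestamped caption lines into a searchable archive index.

    # A toy inverted index over caption lines, hinting at the search/archive
    # reuse Bill describes; keys are words, values are (programme, timecode).
    from collections import defaultdict

    captions = [
        ("news_0600", "00:01:12", "storm warning issued for the coast"),
        ("news_0600", "00:04:55", "local team wins the championship"),
    ]

    index: dict[str, list[tuple[str, str]]] = defaultdict(list)
    for programme, timecode, text in captions:
        for word in text.lower().split():
            index[word].append((programme, timecode))

    print(index["storm"])  # -> [('news_0600', '00:01:12')]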

Watch now!
Download the presentation
Speaker

Bill Bennett
Media Solutions Account Manager
ENCO Systems

Video: Live Closed Captioning and Subtitling in SMPTE 2110 (update)

The SMPTE ST 2110-40 standard specifies the real-time, RTP transport of SMPTE ST 291-1 Ancillary Data packets. It allows the creation of IP essence flows carrying the VANC data familiar to us from SDI (like AFD, closed captions or ad triggering), complementing the existing video and audio portions of the SMPTE ST 2110 suite.
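
For a flavour of what this looks like on the wire, here is a minimal sketch assuming the RFC 8331 payload layout that ST 2110-40 uses; it parses only the fixed payload header that precedes the ANC data packets, not the 10-bit ANC words themselves.

    # Parse the fixed RFC 8331 payload header of an ST 2110-40 RTP packet.
    import struct

    def parse_st2110_40_header(payload: bytes) -> dict:
        ext_seq, length = struct.unpack_from(">HH", payload, 0)
        anc_count = payload[4]
        f_field = payload[5] >> 6  # 2 bits: 0b00 progressive/unspecified,
                                   # 0b10 field 1, 0b11 field 2
        return {
            "extended_sequence_number": ext_seq,  # extends the RTP sequence number
            "length": length,                     # octets of ANC payload that follow
            "anc_count": anc_count,               # number of ANC packets in this RTP packet
            "f": f_field,
        }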

This presentation, by Bill McLaughlin from EEG, is an updated tutorial on subtitling, closed captioning, and other ancillary data workflows using the ST 2110-40 standard. Topics include synchronization, merging of data from different sources and standards conversion.

Building on Bill’s previous presentation at the IP Showcase, this talk at NAB 2019 demonstrates a big increase in the number of vendors supporting the ST 2110-40 standard. Previously, a generic packet analyser like Wireshark with a dissector was recommended for troubleshooting IP ancillary data, but now most leading multiviewer/analyser products can display captioning, subtitling and timecode from 2110-40 streams. At the recent “JT-NM Tested Program” event, 29 products passed 2110-40 Reception Validation. Moreover, 27 products passed 2110-40 Transmitter Validation, which means that their output can be reconstructed into SDI video signals with appropriate timing and then decoded correctly.

Bill points out that ST 2110-40 is not really a new standard at this point; it only defines how to carry ancillary data from the traditional payloads over IP. Special care needs to be taken when different VANC data packets are concatenated in the IP domain. Many existing devices are simple ST 2110-40 receivers, so a kind of ‘VANC funnel’ is needed to create a combined stream of all the relevant ancillary data, making sure that line numbers and packet types don’t conflict, especially when signals need to be converted back to SDI. A sketch of that merging logic follows below.
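
Here is a minimal sketch of the funnel idea; the packet fields are simplified to plain Python attributes rather than the 10-bit on-wire words, so this shows the conflict check, not a real implementation.

    # Merge ANC packets from several ST 2110-40 receivers into one stream,
    # refusing to emit two packets that claim the same line/offset slot.
    from dataclasses import dataclass
    from typing import Iterable

    @dataclass(frozen=True)
    class AncPacket:
        line: int        # VANC line number
        offset: int      # horizontal offset on the line
        did: int         # Data ID
        sdid: int        # Secondary Data ID
        payload: bytes

    def vanc_funnel(streams: Iterable[Iterable[AncPacket]]) -> list[AncPacket]:
        merged: dict[tuple[int, int], AncPacket] = {}
        for stream in streams:
            for pkt in stream:
                slot = (pkt.line, pkt.offset)
                if slot in merged:
                    raise ValueError(f"VANC slot conflict on line {pkt.line} @ {pkt.offset}")
                merged[slot] = pkt
        # Order as they would be inserted when converting back to SDI
        return sorted(merged.values(), key=lambda p: (p.line, p.offset))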

There is a new ST 2110-41 standard being developed for additional ancillary data which does not match up with the ancillary data standardised in ST 291-1. Another idea discussed is to move away from the SDI VANC data format and use a TTML track (Timed Text Markup Language: textual information associated with timing information) to carry ancillary information.

Watch now!

Download the slides.

Speakers

Bill McLaughlin
VP of Product Development
EEG

Video: Live Closed Captioning and Subtitling in SMPTE 2110-40

The ST 2110-40 standard specifies the real-time, RTP transport of SMPTE ST 291-1 Ancillary Data packets. It allows the creation of IP essence flows carrying the VANC data familiar to us from SDI (like AFD, closed captions or ad triggering), complementing the existing video and audio portions of the SMPTE ST 2110 suite.

In this video, Bill McLaughlin introduces 2110-40 and shows its advantages for closed captioning. With video, audio and ancillary data broken into separate essence flows, you no longer need full SDI bandwidth to process closed captioning, and transcription can be done by subscribing to a single audio stream whose bandwidth is less than 1 Mbps. That allows for very high processing density, with up to 100 channels of closed captioning in a 1RU server.
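
As a back-of-the-envelope check on that figure, here is a sketch assuming a mono 48 kHz/16-bit AES67-style flow with 1 ms packets; these parameters are my assumptions, not numbers from the talk.

    # Rough wire bandwidth of one mono ST 2110-30 style audio flow.
    sample_rate = 48_000          # samples per second
    bits_per_sample = 16          # L16; L24 audio would be 24 bits
    packet_time = 0.001           # 1 ms packets, a common AES67 choice
    payload_bps = sample_rate * bits_per_sample            # 768,000 b/s payload
    packets_per_second = 1 / packet_time
    overhead_bps = packets_per_second * (20 + 8 + 12) * 8  # IPv4 + UDP + RTP headers
    print(f"payload: {payload_bps/1e6:.2f} Mbps, "
          f"on-wire: {(payload_bps + overhead_bps)/1e6:.2f} Mbps")

The payload itself stays well under 1 Mbps; header overhead nudges the on-wire figure slightly above it at this packet time, and either way it is negligible next to the roughly 1.5 Gbps of an uncompressed HD video flow.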

Another benefit is that a single ST 2110-40 multicast containing closed captioning can be associated with multiple videos (e.g. two different networks, or dirty and clean feeds), typically using NMOS connection management. This translates into additional bandwidth savings and lower cost, as you don’t need separate CC/subtitling encoders working in the SDI domain.
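
A minimal sketch of that association step via NMOS IS-05 connection management follows; the node address and UUIDs are entirely hypothetical, and repeating the same PATCH against receivers on different video chains is how one caption multicast can feed several networks.

    # Point an IS-05 receiver at the shared -40 caption sender and activate now.
    import requests

    NODE = "http://192.0.2.10"  # hypothetical IS-05 connection API endpoint
    RECEIVER_ID = "1b1c6a28-0000-4000-8000-000000000001"  # hypothetical receiver
    SENDER_ID = "9e5a7c11-0000-4000-8000-000000000002"    # hypothetical -40 sender

    staged = {
        "sender_id": SENDER_ID,
        "master_enable": True,
        "activation": {"mode": "activate_immediate"},
    }
    url = f"{NODE}/x-nmos/connection/v1.1/single/receivers/{RECEIVER_ID}/staged"
    requests.patch(url, json=staged, timeout=5).raise_for_status()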

Test and measurement equipment for ST 2110-40 is still under development. However, with data rates of 50-100 kbps per flow, monitoring is very manageable and you can use COTS equipment and a generic packet analyser like Wireshark with a dissector available on GitHub.

Speaker

Bill McLaughlin
VP Product Development
EEG Enterprises

Video: SMPTE Timed Text

As much as video and audio are essential parts of a programme, increasingly so is Timed Text (AKA subtitles or closed captions). Legally required in some countries, its practical use beyond the hard of hearing is increasingly acknowledged. Whether for a soundless TV in a reception or to help you follow the programme over the noise, Timed Text is here to stay, online and in traditional broadcast. With the FCC declaring SMPTE-TT a ‘Safe Harbor’ format[1][2], it has become a default format for subtitle interchange in the professional world.

In this webinar:
Why did we need a language for Timed Text?
An overview of TTML (Timed Text Markup Language, from the W3C)
Examples of TTML
How SMPTE-TT extends TTML
How SMPTE-TT ends up as Closed Captions/CEA-608
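
Ahead of the webinar’s own examples, here is a minimal hand-written TTML document, with illustrative times and text, checked for well-formedness in Python; SMPTE-TT layers its extensions on top of exactly this structure.

    # A minimal TTML document with a single timed caption paragraph.
    import xml.etree.ElementTree as ET

    TTML = """<?xml version="1.0" encoding="UTF-8"?>
    <tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
      <body>
        <div>
          <p begin="00:00:01.000" end="00:00:03.500">Captions are useful to everyone.</p>
        </div>
      </body>
    </tt>"""

    root = ET.fromstring(TTML.encode("utf-8"))  # parse; raises if malformed
    ns = "{http://www.w3.org/ns/ttml}"
    for p in root.iter(f"{ns}p"):
        print(p.get("begin"), "->", p.get("end"), ":", p.text)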

Watch now!

[1] https://apps.fcc.gov/edocs_public/attachmatch/FCC-12-9A1.txt
[2] FCC § 79.103 (c)