Video: Everything You Want to Know About Captioning

Legally mandated in many countries, captions or subtitles are a vital part of both broadcast and streaming. But, as so often in broadcast, the lower the bandwidth of a signal, the more complex it can be to manage. Getting subtitles right in all their nuances is hard, whether live or in post-production. And by getting it right, we’re talking about protocol, position, colour, delay, spelling and accuracy. So there’s a whole workflow just for subtitling, which is what this video looks at.

EEG specialise in subtitling solutions, so it’s no surprise that their Sales Associate, Matt Mello, and VP of Product Development, Bill McLaughlin, wanted to run a live Q&A session which, unusually, was a pure one-hour Q&A with no initial presentation. All questions are shown on screen and are answered by Matt and Bill, who look at both the technology and specific products.

They start off by defining the terms ‘closed’ and ‘open’ captioning. Open captions are part of the picture itself, also known as ‘burnt in’. Closed indicates they are hidden: the classic example is closed captions carried in the blanking of TV channels, which are always sent but only displayed when the viewer asks their TV to decode and overlay them on the picture. Whether closed or open, there is always the task of timing the subtitles and merging them into the video so the words appear at the right moment. As for the term subtitles vs captions, this really depends on where you are from. In the UK, ‘subtitles’ is used instead of ‘captions’, with the term ‘closed captions’ specifically referring to the North American closed caption standard. This is as opposed to Teletext subtitles, which are different but still inserted into the blanking of baseband video and only shown when the decoder is asked to display them. The Broadcast Knowledge uses the term subtitles to mean captions.

The duo next talk about live and pre-recorded subtitles, live being the tricky one since generating live subtitles with minimal delay is difficult. The predominant method, which replaced stenography, is to have a person respeak the programme into trained voice recognition software, which introduces a delay. However, the accuracy is much better than having a computer listen to the raw programme sound, which may contain all sorts of accents, loud background noise or overlapping speakers, leaving much to be desired in the result. Automatic solutions, though, don’t need scheduling, unlike humans, and there are now ways to input specialist vocabulary, and indeed scripts, ahead of time to help keep accuracy up.

Accuracy is another topic under the spotlight. Matt and Bill outline that accuracy is measured in different ways, from a simplistic count of the number of incorrect words to weighted measures which look at how important the incorrect words are and how much the meaning has changed. Looking at videos on YouTube, we see that automated captions are generally less accurate than human-curated subtitles, but they do allow YouTube to meet its legal responsibility to stream with captions. Accuracy of around 98% should be taken, they advise, as effectively perfect, with 95% being good; below 85%, there’s a question of whether it’s worth doing at all.
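
To make the simple word-count measure concrete, here’s a minimal sketch of word-level accuracy computed as one minus the word error rate (WER) via edit distance. This is an illustrative implementation of the count-the-wrong-words approach they describe, not EEG’s own metric; the weighted measures they mention, such as the NER model used in live subtitling assessment, go further by scoring how badly each error damages the meaning.

```python
# Minimal word error rate (WER) sketch: word-level edit distance between
# a reference transcript and the caption output. Illustrative only.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1].lower() == hyp[j - 1].lower() else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

accuracy = 1 - word_error_rate("the quick brown fox", "the quick browne fox")
print(f"{accuracy:.0%}")  # 75%: one wrong word out of four in this toy example
```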

When you investigate services, you’ll inevitably see mention of the EIA-608 and EIA-708 caption formats, which are the North American SD and HD standards for carrying captions. These are also used for delivery to streaming services, so they retain relevance even though they originated in broadcast closed captioning. One question asks whether these subtitles can be edited after recording. The answer is ‘yes’ as part of a post-production workflow, but editing the 608/708 data directly won’t work.
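
For a flavour of what 608 data looks like on the wire, here’s a hedged sketch of the one rule every EIA-608 decoder applies: each byte carries seven data bits plus an odd-parity bit in the most significant bit. This is a toy decode of two character bytes, not a full 608/708 parser.

```python
# EIA-608 bytes: 7 data bits plus an odd-parity MSB. Validate and strip.
def decode_608_byte(b: int) -> int:
    if bin(b).count("1") % 2 != 1:  # odd parity: total set bits must be odd
        raise ValueError(f"parity error in byte {b:#04x}")
    return b & 0x7F                 # strip the parity bit, leaving the character

for raw in (0xC8, 0xE9):            # 'H' (0x48) and 'i' (0x69) with parity bits set
    print(chr(decode_608_byte(raw)), end="")
print()                             # prints "Hi"
```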

Other questions cover subtitling in Zoom and other video conferencing apps, delivery of automated subtitles to a scoreboard, RTMP subtitling latency, switching between languages, mixing pre-captioned and live-captioned material, and converting to TTML captions for ATSC 3.0.

Watch now!
Speakers

Bill McLaughlin
VP Product Development
EEG Enterprises
Matthew Mello
Sales Associate
EEG Enterprises

Video: What is 525-Line Analog Video?

With an enjoyable retro feel, this accessible video on how analogue video works is useful for those who have to work with SDI rasters, interlaced video, black and burst, subtitles and more. It’ll remind those of us who once knew this material of a few things since forgotten, and it’s an enjoyable primer on the topic for anyone coming in fresh.

Displaced Gamers is a YouTube channel, and its focus on video games is an enjoyable addition to this video, which starts by explaining why analogue 525-line video is the same as 480i. Using slow-motion footage of a CRT (Cathode Ray Tube) TV, the video explains the interlacing technique and why consoles and computers would often use 240p, as sketched below.
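
As a rough illustration of the interlace-versus-240p point (my own toy model, not code from the video), this sketch shows which scanlines each field paints in the two modes.

```python
# Which scanlines a field paints: interlaced 480i alternates odd and even
# line positions each field, while "240p" repaints the same lines every time.
def field_lines(field: int, interlaced: bool, visible: int = 480) -> range:
    if interlaced:
        return range(field % 2, visible, 2)  # alternate fields offset by one line
    return range(0, visible, 2)              # same 240 lines every field

print(list(field_lines(0, True))[:5])   # [0, 2, 4, 6, 8]  - even field
print(list(field_lines(1, True))[:5])   # [1, 3, 5, 7, 9]  - odd field fills the gaps
print(list(field_lines(1, False))[:5])  # [0, 2, 4, 6, 8]  - 240p hits the same lines
```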

We then move on to timing, looking at the time spent drawing a line of video, 52.7 microseconds, and the need for horizontal and vertical blanking. Blanking periods, the video explains, are there to cover the time the CRT TV would spend moving the electron beam from one side of the screen to the other. As beam deflection was achieved with electromagnets, the beam needed to be turned off – blanked – while their magnetic field, and hence the position of the beam, was changing.
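
The arithmetic behind those figures follows from the standard NTSC line frequency. This quick calculation, using textbook constants plus the 52.7-microsecond active-line figure quoted in the video, reproduces the familiar numbers.

```python
# Back-of-envelope NTSC timing from the standard line frequency.
line_rate_hz = 15_734.264          # NTSC horizontal line frequency
line_us = 1e6 / line_rate_hz       # ~63.56 us per total line
active_us = 52.7                   # visible portion quoted in the video
h_blank_us = line_us - active_us   # ~10.9 us left for horizontal blanking

lines_per_frame = 525
frame_rate = line_rate_hz / lines_per_frame  # ~29.97 frames per second
field_rate = frame_rate * 2                  # ~59.94 interlaced fields per second

print(f"line: {line_us:.2f} us, h-blank: {h_blank_us:.2f} us")
print(f"frame rate: {frame_rate:.2f} Hz, field rate: {field_rate:.2f} Hz")
```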

The importance of these housekeeping manoeuvres for older computers was that this was time they could use to perform calculations, free from the task of writing data into the video buffer. But blanking was not just useful for computers; broadcasters could use some of it to insert data – and they still do. The video shows a VHS tape playing with the blanking clearly visible and the data lines flashing away.

For those who still work with this technology, for those who like history, for those who are intellectually curious and for those who enjoy reminiscing, this is an enjoyable video and ideal for sharing with colleagues.

Watch now!
Speaker

Chris Kennedy
Displaced Gamers, YouTube Channel

Webinar: DVB Subtitling Systems

On-Demand Webinar
This webinar provides an overview of the recent revision of the bitmap subtitle specification and the recent specs for UHD subtitles.

The DVB specification for TTML-based Subtitling Systems, approved in July 2017, has now been complemented by a revision of the existing specification for bitmap subtitles, creating a comprehensive suite of subtitling specifications from DVB. This approval also marks the completion of the current generation of specifications for Ultra High Definition Television – DVB UHD-1.
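
For readers new to TTML, the snippet below shows what a minimal, generic TTML document looks like: timed text expressed as XML. Note this is plain TTML for illustration only, not the specific DVB profile (Bluebook A174, built on EBU-TT-D) the webinar covers.

```python
# A minimal, generic TTML document - illustrative only, not the DVB profile.
minimal_ttml = """<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.500">Hello, world.</p>
      <p begin="00:00:04.000" end="00:00:06.000">A second subtitle.</p>
    </div>
  </body>
</tt>"""
print(minimal_ttml)
```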

The agenda for the webinar is:
• Introduction
• Bitmap subtitle specification (EN 300 743) update (DVB Bluebook A009)
• TTML introduction
• New DVB TTML specification (Bluebook A174)
• Deployment considerations for DVB Subtitling

Experts conducting the webinar include:
Dr. Peter Cherriman, Senior R&D Engineer, BBC Research & Development and Chair of the TM-SUB
Paul Szucs, Senior Manager, Technology Standards, Sony Europe
Stefan Pöschel, Engineer, Production Technologies, IRT

Watch Now!