Why It’s Hard to Re-Use Your Live Captions or Transcripts for Post-Production
Updated: April 16, 2021
What Is Live Captioning?
The dictionary defines captioning as “the title of a scene, the text of a speech, etc., superimposed on the film projected onto the screen.” So what happens when a Deaf or hard of hearing viewer attends a live experience with no written script, such as a theater performance, lecture, class, or council meeting? Live captioning, or real-time captioning, is the instant translation of spoken words into written words. You might be wondering how this is done, or perhaps you assume Siri or Alexa can help us out. It’s a bit more complicated than that.
To create live captions, a trained stenographer uses a special shorthand keyboard (a stenotype) with phonetic keys and specialized software. The phonetic strokes are then translated into captions and displayed on screen.
Why It’s Hard to Re-Use Your Live Captions
Because of the unique process used to create live captions, there are challenges in re-using them (or the resulting transcripts) for post-production captioning. These difficulties include delayed timing, decreased accuracy, and a lack of completeness, all of which cause cognitive dissonance for the viewer.
Although slight, there is often a delay with live captioning. This is inherent to the process: a trained captioner first needs to listen to the content, then type the words, and the computer’s processing time adds further lag. This delay is often not consistent throughout the file, as it would be if the captions were generated by a machine. The lag is frustrating for viewers who can hear or lip read: seeing words on screen that don’t match what is actually being said creates cognitive dissonance, or psychological stress.
It is possible (and not especially hard) to re-sync captions after the fact, so a live transcript can be altered to synchronize correctly. However, if you’re working with a very low-quality or incomplete baseline transcript, it may actually be more efficient to start from scratch. A transcript of at least “B” quality is needed for re-use to save time and make creating accurate captions easier.
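To illustrate what re-syncing involves, here is a minimal Python sketch that shifts every SRT-style timestamp in a transcript earlier by a fixed offset. The fixed offset is a simplifying assumption: as noted above, live-caption lag is rarely uniform, so real re-syncing usually needs variable adjustments per cue (and the `resync_srt` helper name is ours for illustration, not part of any real library).

```python
import re

# Matches SRT timestamps like 00:00:05,500
TIMESTAMP = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def _shift(match, offset_ms):
    """Shift one HH:MM:SS,mmm timestamp earlier by offset_ms, clamping at zero."""
    h, m, s, ms = (int(g) for g in match.groups())
    total = max(0, h * 3600000 + m * 60000 + s * 1000 + ms - offset_ms)
    h, rem = divmod(total, 3600000)
    m, rem = divmod(rem, 60000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def resync_srt(srt_text, offset_ms):
    """Move every cue earlier by offset_ms to compensate for live-caption lag."""
    return TIMESTAMP.sub(lambda m: _shift(m, offset_ms), srt_text)

cue = "1\n00:00:05,500 --> 00:00:08,000\nHello, world.\n"
print(resync_srt(cue, 3000))  # cue now starts at 00:00:02,500
```

A uniform shift like this only fixes a constant lag; when the delay drifts over the course of the file, each cue needs its own correction, which is why re-use still takes real editing work.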
Another challenge with producing captions in real time is accuracy. This is especially true for complicated names or specialized vocabulary, such as in sports. The captioner needs lists of terms and names preloaded into the software to get them right; many times this is not done, or not done well. In addition, skilled stenographers are becoming more and more scarce, which degrades accuracy even further.
Inaccurate captions, like untimely ones, create psychological stress for viewers, who may feel they can’t rely on the captioning. This is especially frustrating for those who need captions as an accommodation to gain equal access to video content.
The process of producing live captions is clearly complex and hard to do in a timely manner. Because live captioning is done by humans, words or phrases are often omitted simply because of the difficulty of keeping up. To make matters worse, accuracy rates for live captioning are typically calculated against what was written, rather than against the complete content. This means accuracy rates for real-time captions are quite skewed once you take the entire original experience into account. One of the main focuses of accessibility laws is to give all users, regardless of ability, as close to an equal experience as possible. Leaving out words puts a Deaf or hard of hearing person at a significant disadvantage relative to their hearing peers, and is frustrating for hearing viewers as well.
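The skew described above is easy to see with a quick calculation. The numbers below are purely illustrative, not real measurements:

```python
# Hypothetical example: 1,000 words were spoken, the live captioner got
# 900 of them on screen, and 890 of those were captioned correctly.
spoken_words = 1000
captioned_words = 900
correct_words = 890

# Accuracy measured only against what was written looks very high...
accuracy_of_written = correct_words / captioned_words  # about 98.9%

# ...but measured against everything actually said, it is much lower.
accuracy_of_spoken = correct_words / spoken_words      # 89.0%

print(f"{accuracy_of_written:.1%} vs {accuracy_of_spoken:.1%}")
```

The 100 words that never made it to the screen simply vanish from the first calculation, which is exactly how omissions inflate live-captioning accuracy claims.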
A Different Standard
It’s sometimes frustrating that repurposing live captions isn’t simpler. Frankly, it comes down to one thing: what is acceptable for live captioning isn’t necessarily acceptable for post-production captioning. People generally assume that live captions are produced by machines, but this is not the case; stenographers work tirelessly to create them. Because the purposes and processes differ, the expectations and standards set for broadcast media differ from those set for static web video.