Why It’s Hard to Re-Use Your Live Captions or Transcripts for Post-Production
Updated: June 3, 2019
What Is Live Captioning?
The dictionary defines captioning as “the title of a scene, the text of a speech, etc., superimposed on the film projected onto the screen.” So what happens when a Deaf or hard of hearing viewer attends a live event with no written script, such as a theater performance, lecture, class, or council meeting? Live captioning, or real-time captioning, instantly translates spoken words into written text. You might be wondering how this is done, or maybe you assume Siri or Alexa can help us out. But it’s a bit more complicated than that.
To create live captions, a trained stenographer uses a stenotype – a special shorthand keyboard with phonetic keys – together with specialized software. The phonetic symbols are then translated into captions and displayed on screen.
Why It’s Hard to Re-Use Your Live Captions
Because of the unique process used to create live captioning, there are challenges when it comes to re-using live captions or transcripts for post-production captioning. These difficulties include a delay in timing, decreased accuracy, and a lack of completeness – all of which cause cognitive dissonance for the viewer.
Although slight, there is often a delay with live captioning. This is inherent to the process: a trained captioner first needs to listen to the content, then type the words, and the computer needs time to process them. Often this delay is not consistent throughout the file, as it would be if introduced by a machine. This lag is frustrating for viewers who can hear or lip read: seeing words on the screen that don’t match what is actually being said creates cognitive dissonance, or psychological stress.
It is possible (and not very hard) to re-sync captions after the fact, shifting the live transcript so it is synchronized correctly. However, if you’re working with a very low-quality or incomplete baseline transcript, it might actually be more efficient to start from scratch. A transcript of at least “B” quality is needed to save time and make creating accurate captions easier.
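To illustrate the re-sync step, here is a minimal sketch in Python that shifts every timestamp in an SRT-format caption file by a fixed offset. The function names and the constant-offset approach are illustrative assumptions; a real re-sync tool would also handle drift that varies over the file, which, as noted above, is common with human-produced live captions.

```python
import re
from datetime import timedelta

# Matches SRT timestamps of the form HH:MM:SS,mmm
TIME_RE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift_timestamp(match, offset_ms):
    """Shift one HH:MM:SS,mmm timestamp by offset_ms milliseconds."""
    h, m, s, ms = (int(g) for g in match.groups())
    total = (timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)
             + timedelta(milliseconds=offset_ms))
    total_ms = max(0, int(total.total_seconds() * 1000))  # clamp at 00:00:00,000
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def resync_srt(srt_text, offset_ms):
    """Apply a constant offset to all timestamps (negative = earlier)."""
    return TIME_RE.sub(lambda m: shift_timestamp(m, offset_ms), srt_text)

caption = "1\n00:00:05,500 --> 00:00:08,000\nHello, world.\n"
# Shift captions 3 seconds earlier to compensate for live-captioning lag:
print(resync_srt(caption, -3000))
```

This corrects a uniform lag only; inconsistent lag is exactly why repurposing live captions usually requires re-timing cue by cue.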
Another challenge with producing captions in real time is accuracy. This is especially true for complicated names or specialized vocabulary, such as in sports. The captioner needs lists of terms and names preloaded into the software to get them right – but many times this is not done, or not done well. In addition, skilled stenographers are becoming more and more scarce, and accuracy suffers further as a result.
Inaccurate captions, like untimely ones, create psychological stress for viewers, who may feel they can’t rely on captioning. This can be quite frustrating for caption users, especially those who need captions as an accommodation to gain equal access to video content.
It is clear that producing live captions is complex and challenging to do in a timely manner. Because live captioning is done by humans, words or phrases are often omitted in the struggle to keep up. To make matters worse, accuracy rates for live captioning are typically calculated against what was written, rather than against the complete content. This means accuracy rates for real-time captions are quite skewed when measured against the entire content of the original experience. One of the main focuses of accessibility laws is to give all users, regardless of ability, as close to an equal experience as possible. Leaving out words puts a Deaf or hard of hearing person at a significant disadvantage compared with their hearing peers, and is frustrating for hearing viewers as well.
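The skew described above is simple arithmetic. The sketch below uses hypothetical numbers (they are not from any real measurement) to show how an accuracy rate computed only over the words the captioner managed to write can look much better than accuracy measured against everything that was actually spoken:

```python
# Hypothetical figures, for illustration only.
spoken_words = 1000   # words in the original audio
written_words = 850   # words the captioner managed to transcribe
correct_words = 800   # of those, written correctly

# The rate typically reported: correct words out of what was written.
accuracy_of_written = correct_words / written_words

# Accuracy against the complete content, counting omissions as errors.
accuracy_of_content = correct_words / spoken_words

print(f"reported: {accuracy_of_written:.1%}")  # reported: 94.1%
print(f"actual:   {accuracy_of_content:.1%}")  # actual:   80.0%
```

With 150 spoken words simply never transcribed, a "94% accurate" live caption file delivers only 80% of the content – which is why such a file rarely meets post-production standards as-is.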
A Different Standard
It’s sometimes frustrating that repurposing live captions isn’t simpler. Frankly put, it comes down to one thing: what’s acceptable for live captioning isn’t necessarily acceptable for post-production captioning. People generally assume that live captions are produced by machines, but this is not the case – stenographers work tirelessly to create them. Because of their differing purposes and processes, the expectations and standards set for broadcast media differ from those set for static web video.