Why It’s Hard to Re-Use Your Live Captions or Transcripts for Post-Production
Updated: April 16, 2021
What Is Live Captioning?
The dictionary defines captioning as “the title of a scene, the text of a speech, etc., superimposed on the film projected onto the screen.” So what happens when a Deaf or hard of hearing viewer attends a live experience with no written script, such as a theater performance, lecture, class, or council meeting? Live captioning, or real-time captioning, is the instant translation of spoken words into written text. You might be wondering how this is done, or perhaps you assume Siri or Alexa can handle it. In reality, it’s a bit more complicated than that.
To create live captions, a trained stenographer uses a stenotype (a specialized shorthand machine with a phonetic keyboard) along with dedicated software. The phonetic symbols are then translated into captions and displayed on screen.
Why It’s Hard to Re-Use Your Live Captions
Because of the unique process used to create live captions, re-using them (or their transcripts) for post-production captioning presents several challenges: a delay in timing, decreased accuracy, and a lack of completeness, all of which create cognitive dissonance for the viewer.
Although slight, there is often a delay in live captioning. This is inherent to the process: a trained captioner must first listen to the content, then type the words, and the computer needs additional time to process and display them. This delay is often inconsistent throughout the file, unlike the uniform lag a machine might introduce. The lag is frustrating for viewers who can hear or lip read; seeing words on the screen that don’t match what is actually being said creates cognitive dissonance, or psychological stress.
It is possible (and not very difficult) to re-sync captions after the fact, altering the live transcript so it is synchronized correctly. However, if you’re working with a very low-quality or incomplete baseline transcript, it may actually be more efficient to start from scratch. A transcript of at least “B” quality is generally needed for re-use to save time and make creating accurate captions easier.
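As an illustration of what re-syncing involves, here is a minimal Python sketch that shifts every timestamp in an SRT-format caption file by a fixed offset. The `resync_srt` helper and the constant-offset approach are assumptions for illustration only, not any vendor's tooling; since live-caption lag is often inconsistent across a file, real re-sync work typically adjusts cues individually rather than applying one global shift.

```python
import re
from datetime import timedelta

# Matches SRT timestamps of the form HH:MM:SS,mmm
TIME_RE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def _shift(match, offset):
    """Shift one matched timestamp by a timedelta, clamping at zero."""
    h, m, s, ms = (int(g) for g in match.groups())
    t = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms) + offset
    total_ms = max(int(t.total_seconds() * 1000), 0)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def resync_srt(srt_text, offset_seconds):
    """Apply a constant offset (in seconds, may be negative) to all
    timestamps in an SRT caption string."""
    offset = timedelta(seconds=offset_seconds)
    return TIME_RE.sub(lambda m: _shift(m, offset), srt_text)

# Example: pull captions 2 seconds earlier to compensate for live lag.
shifted = resync_srt("00:00:05,000 --> 00:00:07,500", -2.0)
# shifted == "00:00:03,000 --> 00:00:05,500"
```

This sketch only addresses timing; as noted above, if the underlying transcript is inaccurate or incomplete, no amount of re-timing will fix it.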
Another challenge with producing captions in real time is accuracy. This is especially true with complicated names or specialized vocabulary, such as in sports. The captioner needs lists of terms and names preloaded into the software in order to get them right; often this is not done, or not done well. In addition, skilled stenographers are becoming increasingly scarce, which hurts accuracy even further.
Inaccurate captions, like untimely ones, create psychological stress for viewers, who may feel they can’t rely on the captioning. This can be quite frustrating for caption users, especially those who need captions as an accommodation to gain equal access to video content.
Clearly, producing live captions is complex and hard to do in a timely manner. Because live captioning is done by humans, words or phrases are often omitted simply due to the difficulty of keeping up. To make matters worse, accuracy rates for live captioning are typically calculated against what was written, rather than against the complete content. This means real-time accuracy rates are quite skewed when the entire original experience is taken into account. One of the main goals of accessibility laws is to give all users, regardless of ability, as close to an equal experience as possible. Leaving out words puts a Deaf or hard of hearing person at a significant disadvantage relative to their hearing peers, and would be quite frustrating for a hearing viewer as well.
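To see how measuring accuracy against what was written (rather than what was spoken) skews the rate, consider a small worked example with made-up numbers, sketched in Python:

```python
# Illustrative (hypothetical) numbers: of 100 words spoken,
# only 90 made it into the captions, and 85 of those are correct.
spoken_words = 100
captioned_words = 90
correct_words = 85

# The usual live-captioning metric: accuracy of what was written.
accuracy_of_written = correct_words / captioned_words  # about 0.944

# Accuracy against everything that was actually spoken.
accuracy_of_spoken = correct_words / spoken_words      # 0.85
```

Under the conventional metric the captions look roughly 94% accurate, but measured against the full spoken content the viewer actually received only 85% of it correctly; the ten omitted words simply vanish from the score.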
A Different Standard
It’s sometimes frustrating that repurposing live captions isn’t simpler. Frankly, it comes down to one thing: what is acceptable for live captioning isn’t necessarily acceptable for post-production captioning. People often assume that live captions are produced by machines, but this is not the case; stenographers work tirelessly to create them. Because of their differing purposes and processes, the expectations and standards set for broadcast media differ from those set for static web video.