Why It’s Hard to Re-Use Your Live Captions or Transcripts for Post-Production
Updated: January 4, 2018
What Is Live Captioning?
The dictionary defines captioning as "the title of a scene, the text of a speech, etc., superimposed on the film projected onto the screen." So what happens when a Deaf or hard of hearing viewer attends a live experience with no written script, such as a theater performance, lecture, class, or council meeting? Live captioning, or real-time captioning, is the instant translation of spoken words into written words. You might be wondering how this is done, or maybe you assume Siri or Alexa can help us out. But it's a bit more complicated than that.
To create live captions, a trained stenographer uses a stenotype, a specialized shorthand keyboard with a phonetic layout, together with unique software. The phonetic symbols are then translated into captions and displayed on screen.
Why It’s Hard to Re-Use Your Live Captions
Because of the unique process used to create live captioning, there are challenges when it comes to re-using live captions or transcripts for post-production captioning. These difficulties include a delay in timing, decreased accuracy, and a lack of completeness – all of which cause cognitive dissonance for the viewer.
Although slight, there is often a delay with live captioning. This is inherent to the process: a trained captioner first needs to listen to the content, then type the words, and the computer then needs time to process and display the output. This delay is often inconsistent throughout the file, unlike the steady offset a machine might produce. The lag is frustrating for viewers who can either hear or lip read. Seeing words on the screen that don't match up with what is actually being said creates cognitive dissonance, or psychological stress.
It is possible (and not very hard) to re-sync captions after the fact, allowing the live transcript to be altered so it is synchronized correctly. However, if you're working with a very low-quality or incomplete baseline transcript, it might actually be more efficient to start from scratch. A transcript of at least "B" quality is needed for re-use to save time and make creating accurate captions easier.
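To illustrate how simple re-syncing can be when the transcript itself is sound, here is a minimal sketch that shifts every timestamp in a caption file by a fixed offset. It assumes the live captions have been exported in the common SubRip (SRT) format; the function names and the offset value are illustrative, not part of any particular captioning product.

```python
import re
from datetime import timedelta

# Matches SRT timestamps of the form HH:MM:SS,mmm
TIME_RE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def _shift(match, offset_ms):
    h, m, s, ms = (int(g) for g in match.groups())
    total = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)
    total += timedelta(milliseconds=offset_ms)
    # Clamp at zero so captions never get negative start times
    total_ms = max(int(total.total_seconds() * 1000), 0)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def resync_srt(srt_text, offset_ms):
    """Shift every timestamp in an SRT file by offset_ms.

    A negative offset moves captions earlier, which is the usual fix
    for live-captioning lag.
    """
    return TIME_RE.sub(lambda m: _shift(m, offset_ms), srt_text)
```

For example, `resync_srt(text, -2500)` would move every caption 2.5 seconds earlier to compensate for a roughly constant captioner delay. Note this only works when the lag is consistent; as discussed above, live-caption lag often drifts, in which case cue-by-cue adjustment is needed.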
Another challenge with producing captions in real time is accuracy. This is especially true for complicated names or specialized vocabulary, such as in sports. The captioner needs lists of terms or names preloaded into the software in order to get them right. Many times this is not done, or it is not done well. In addition, skilled stenographers are becoming more and more scarce, and accuracy suffers further as a result.
Inaccurate captions, like untimely ones, create psychological stress for viewers, who may feel they can't rely on the captioning. This can be quite frustrating for caption users, especially for those who need captions as an accommodation to gain equal access to video content.
It is obvious that the process of producing live captions is complex and challenging to do in a timely manner. Because live captioning is done by humans, words or phrases are often omitted because of the difficulty in keeping up. To make matters worse, accuracy rates for live captioning are typically calculated out of what is written, rather than the complete content. This means accuracy rates for real-time captions are quite skewed when taking into consideration the entire content of the original experience. One of the main focuses in accessibility laws is to allow all users, regardless of ability, to have as close to an equal experience as possible. Leaving out words would leave a Deaf or hard of hearing person at a great disadvantage over their hearing peers, and would be quite frustrating for a hearing viewer as well.
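The skew described above is easy to see with a small worked example. The sketch below compares the two ways of computing an accuracy rate: over only the words the captioner managed to write, versus over everything that was actually spoken (so omissions count as errors). The specific word counts are hypothetical, chosen only to illustrate the gap.

```python
def accuracy_of_written(correct_words, written_words):
    """Accuracy computed only over words the captioner wrote down."""
    return correct_words / written_words

def accuracy_of_spoken(correct_words, spoken_words):
    """Accuracy computed over all words spoken; omitted words count as errors."""
    return correct_words / spoken_words

# Hypothetical session: 1,000 words spoken, 850 captured, 820 captured correctly.
spoken, written, correct = 1000, 850, 820

print(f"Over written words: {accuracy_of_written(correct, written):.1%}")
print(f"Over spoken words:  {accuracy_of_spoken(correct, spoken):.1%}")
```

With these numbers, the captions look roughly 96% accurate when measured against what was written, but only 82% accurate against what was actually said – a meaningful difference for a viewer who depends on captions for the complete content.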
A Different Standard
It's sometimes frustrating that repurposing live captions isn't simpler. Frankly put, it comes down to one thing: what is acceptable for live captioning isn't necessarily acceptable for post-production captioning. People generally assume that live captions are produced by machines, but this is not the case – stenographers work tirelessly to create them. Because of the different purposes and processes involved, the expectations and standards set for live broadcast media differ from those set for static web video.