5 Thoughtful Tips for Closed Captioning Videos
We are thrilled to have Sean Zdenek, one of the premier experts on closed captioning research in the country, present his expertise in the webinar The Future of Closed Captioning in Higher Education.
Sean Zdenek is an Associate Professor of English at Texas Tech and the author of Reading Sounds: Closed-Captioned Media and Popular Culture. His presentation covered how closed captions are used in educational video and how we might optimize the viewing experience with captions.
Watch his full presentation in the video recording below, or read on for five quick tips from Sean’s Q&A with attendees.
1. Is there a rule for how many lines a caption frame can have?
SEAN ZDENEK: Two to three lines per caption frame is average. But I’ve also seen some experts who say, if you need four lines because of the context, do it. That’s rare, though.
If you go to the Captioning Key (captioningkey.org), there’s a pretty good set of style guidelines.
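As a rough illustration of the line-limit guideline above, here is a minimal Python sketch (my own, not from the talk) that flags caption frames exceeding a configurable maximum; the helper names and the three-line default are assumptions, not part of any official tool:

```python
# Hypothetical helper: flag caption frames that exceed the recommended
# two-to-three-line maximum discussed above.
def count_lines(caption_text: str) -> int:
    """Return the number of display lines in one caption frame."""
    return len(caption_text.strip().split("\n"))

def exceeds_limit(caption_text: str, max_lines: int = 3) -> bool:
    """True when a caption frame has more lines than the style guide allows."""
    return count_lines(caption_text) > max_lines

two_line = "I've also seen some experts\nwho say four lines is fine."
four_line = "Line one\nLine two\nLine three\nLine four"

print(exceeds_limit(two_line))   # expect False: within the guideline
print(exceeds_limit(four_line))  # expect True: over the usual limit
```

A check like this could raise `max_lines` to 4 for the rare cases Sean mentions where context demands it.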
2. How can school administrators convince faculty that captioning is not a nice-to-have, but a requirement? Do you have any tips on how to build faculty buy-in?
SEAN ZDENEK: Embrace a universal design framework.
But if you look around the web at some of the accessibility pages for various universities, you see them leaning heavily on universal design, this idea that captions can provide a lot of benefit to students.
And we’re not just talking about a small population. This is something that can benefit a lot of students.
I’d also refer to the power that captioning can deliver through interactive transcripts and other technologies.
Everybody wants to search video, and you can’t really search a video or index a video without captions. It’s all based on caption technology.
Search engines don’t know what’s inside a video unless you can translate that into text, so you can tout the SEO benefits of captioning.
3. You mentioned it makes a difference where on the screen the captioning lines are displayed. Can you elaborate?
SEAN ZDENEK: When two people are talking at the same time, you need to be able to distinguish speaker A from speaker B, especially if this is a short call and response.
So if one speaker says, “how are you?”, and the other says, “fine,” the caption reader needs to be looking at the captions and then also looking at the speakers to see who is saying what. That’s a simple example. I think it just gets more complex from there.
In a single caption at the bottom of the screen, really the only way to distinguish those two lines is with a preceding hyphen or a speaker ID.
It’s much more effective to move those captions underneath each speaker, and that’s what placement is all about.
The best example is what’s called a two shot, when you have two people on the screen at the same time and both of their contributions fit into the same caption. Then you can put each speaker’s contribution underneath each speaker.
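One way to realize the placement Sean describes is with WebVTT’s `position` cue setting, which shifts a cue horizontally across the screen. This is a sketch of my own (the talk does not mention WebVTT): two simultaneous cues, one positioned under each speaker in a two shot, with the timestamps and percentages chosen purely for illustration:

```python
# Sketch: approximate speaker placement with WebVTT "position" cue settings.
# Two cues share the same time range; position:25% sits under the left
# speaker, position:75% under the right speaker.
def cue(start: str, end: str, text: str, position: int) -> str:
    """Format one WebVTT cue with a horizontal position (0-100%)."""
    return f"{start} --> {end} position:{position}% align:center\n{text}\n\n"

vtt = "WEBVTT\n\n"
vtt += cue("00:00:01.000", "00:00:02.500", "- How are you?", 25)  # left speaker
vtt += cue("00:00:01.000", "00:00:02.500", "- Fine.", 75)         # right speaker
print(vtt)
```

Actual rendering varies by player, so placement like this should be checked against the video frame rather than assumed.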
4. Do you have recommendations or resources for producing good captions of non-speech sounds?
SEAN ZDENEK: Yes, the Captioning Key is helpful. But here’s something important to consider.
One of the problems with style guides is that they tend to assume you already have the words and now you just need to format them.
How many lines? What is the reading speed? Where do you break your captions? Do you use parentheses or brackets? Etc. etc.
I don’t know that there are a lot of good resources about rhetorically inventing words for sounds that might be unusual.
This is one of my criticisms of the style guides: not that a style guide should do everything, but they tend to assume that you already have those words.
And sometimes you need help inventing those words, like is this a growl or a roar?
Maybe one good resource might be a new book called Reading Sounds by yours truly, in which I analyze a ton of clips.
5. What are your recommendations for captioning without blocking critical graphics on screen, like for a video of someone giving a PowerPoint presentation?
SEAN ZDENEK: Ah, yes. I’ve seen a lot of student video projects with onscreen text all the way at the bottom of the screen. And then if you want to caption that, the captions often end up covering titles, speaker names, or other information that’s already displayed.
Well, I think you have to make students aware of how that affects captioning. You have to talk about captioning at the very beginning.
That way, they realize that there are going to be words on the screen for those people who need them.
And that might alert them to the fact that you can’t put a hard-coded title in your video at the bottom of the screen. I think you need to reserve a kind of safe space at the very bottom of the screen for closed captioning.
So talking about captioning right away, early on, may be one way: folding it into discussions about how to make a PowerPoint or how to make a video.
Note: an alternative solution is to use vertical caption placement to move captions to the top of the screen when text or other important visuals occupy the usual spot for captions.
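In WebVTT, that kind of vertical placement can be expressed with the `line` cue setting; `line:0` anchors a cue at the top of the video, clear of hard-coded lower-third text. This is a minimal sketch under that assumption (the helper name and timestamps are invented for illustration):

```python
# Sketch: pin a caption to the top of the screen with WebVTT's "line" setting.
# line:0 counts from the top edge of the video area.
def top_cue(start: str, end: str, text: str) -> str:
    """Format a WebVTT cue anchored to the top of the screen."""
    return f"{start} --> {end} line:0\n{text}\n"

cue_text = top_cue("00:00:05.000", "00:00:08.000",
                   "Here you can see the quarterly results.")
print(cue_text)
```

A captioning workflow might switch to top placement only for the cues that overlap onscreen slide text, and leave the rest at the bottom.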
Hungry for more information about closed captioning best practices and resources? Check out our upcoming webinars and register for free.