
The Future of Closed Captioning in Higher Education [Transcript]

LILY BOND: Welcome, everyone. And thank you for joining this webinar entitled The Future of Closed Captioning in Higher Education. I’m Lily Bond from 3Play Media, and I’ll be moderating today.

And I am thrilled to be joined by Sean Zdenek, an associate professor at Texas Tech University, as well as the author of Reading Sounds: Closed-Captioned Media and Popular Culture.

We have about 45 minutes for this presentation, followed by 15 minutes for Q&A. And with that, I’m going to hand it off to Sean, who has a wonderful presentation prepared for you.

SEAN ZDENEK: Yeah. Welcome. I’m excited to be here. I wanted to thank Lily Bond and 3Play Media for working with me, inviting me, and helping me to get this presentation ready for you. Today, let’s talk about the future– what a great topic.

I’m an associate professor, as Lily mentioned, and the author of Reading Sounds. The book covers over 500 examples from popular movies and TV shows. And you can check out those examples, the video clips, on readingsounds.net. So what are we going to cover today? I’d like to look at some definitions. I know that many of you– maybe all of you– already know what closed captioning is. But I think we can push at our definitions a little bit as we work towards possibly more expanded, more robust definitions of closed captioning.

I also want to offer a promising view. I played around with the adjective there. Promising at one point was idealistic or hopeful. I think this is going to be a kind of hopeful or idealistic view of a captioned future.

And then I think I’m going to let some realism come in at the end of the presentation because the future that I would like to see– well, it’s going to take some labor and some policies and some money, as well.

I’d also ask you to keep in mind I’m coming at this from a faculty member’s perspective. I teach courses, undergrad and graduate courses. And I think about captioning. I teach students how to caption.

But I’m not an administrator. And some of those discussions about budgets and so on are just not part of my purview at the moment. But I’m hoping I have some valuable things to offer from a faculty member’s perspective. Towards the end of this presentation, I want to talk about what departments and institutions can do to shape the future.

So we won’t be talking about legal requirements or lawsuits today, although those are really important– especially the first one listed there, Harvard and MIT being sued over the lack of closed captions in some of their publicly accessible online videos and online courses. That’s some important stuff there for us to consider.

But we aren’t going to be talking about specific legal issues and lawsuits today or specific costs or specific technologies or specific third-party vendors. I think names may slip out here and there of technologies and so on. But that’s not the focus, as far as I’m concerned today.

And then there’s demographic info you can find. I was reviewing some demographic info this morning from a document I found, actually, on the 3Play Media website. This is important information, too– how many students with disabilities are attending college, and how have those numbers changed over time.

In this presentation, I added a quote at the bottom of this slide– I think after the presentation was uploaded to SlideShare. But it reflects the importance of making sure that our courses are accessible out of the box. That way, we’re not waiting until somebody with a letter of accommodation comes into our classroom and says, hey, you need to provide captions.

The goal here, and this may be a legal requirement, as well, is to make sure that our courses are inclusive from the start. That way, we’re not retrofitting our courses or making ad hoc accommodations.

So what is closed captioning? There are so many definitions out there. I chose these three. I guess at times, I feel as though definitions could be critiqued a bit or expanded. Sometimes, I feel they’re overly limiting– but just three definitions to put us all on the same page here.

Wikipedia says closed captioning and subtitling are both processes of displaying text. The FCC says closed captioning displays the audio portion of a television program for individuals who are deaf or hard of hearing. And WhatIs.com says closed captions are a text version of the spoken part, developed to aid hearing-impaired people but also useful in a variety of situations.

I think these definitions are OK. The WhatIs definition, I think, would need to be expanded to include non-speech sounds. So we have to be careful when closed captioning is reduced to speech only. If the audience is assumed to be deaf or hard of hearing, then they need access to both speech and non-speech.

I might also point out that the FCC definition and the WhatIs definition both assume that all of the sounds are captioned. The audio portion seems to suggest that the entire audio is captioned, just as the spoken part seems to imply that all of the speech is captioned. And if you watch closed captioning on TV or in the movies, you know that all of the sounds are not accounted for in closed captioning.

Let’s keep going. So my definition, I think, is a bit more open-ended. I hope it’s not too vague. As far as I’m concerned, closed captioning should provide access– access to audiovisual content for viewers who are deaf or hard of hearing.

And my definition can account for things that more sound-based definitions cannot, like silence captions. I love the very concept of a captioned silence because it throws all of our definitions into disarray, I guess. We tend to assume that closed captioning is about sound, but it’s much more than that. There are times when silences need to be captioned.

So my own approach continues to put pressure on some of our assumptions about captioning. And to be quite honest, I’m interested in kind of prodding and poking a little bit by offering some claims that may seem counterintuitive. But I think we need to work towards more robust, more expanded, maybe more supple definitions of closed captioning.

So closed captioning is not simple transcription. A number of people have pointed this out, if by transcription we mean just copying down the words. And if closed captioning is just copying, we could put a team of untrained people on the task. But if it’s much more than that, then we need to keep in mind that captioning is a skill and at times requires some creative solutions, especially when you’re dealing with non-speech sounds.

I like to joke that captioners don’t caption sounds, which I know sounds counterintuitive. But really, captioning is about meaning. And I’ll show you an example in one second.

I realize that captioning is about sound. But there are some really interesting examples of how captioning is more than that or not exactly about copying down what you hear as a captioner.

I’ve also played around with this idea that captions produce a new text. And you can see how this might happen through something like a speaker ID. If you don’t know who’s speaking, the captioner needs to let you know by identifying that speaker by name. These are called speaker identifiers.

So there’s a really quick example– Man of Steel, 2013. The word Superman is only uttered three times. But in the captions, the speaker ID for Superman appears, I think, 14 times if you follow that link.

So the movie itself is really kind of playful and coy about that name. It’s rarely uttered. It’s only uttered at the end of the movie. At one point, Lois can’t say the whole name. She can only say “Super” before she’s interrupted.

But on the caption track, Superman appears all over the place as a speaker ID. And for me, this is one indication of how captions can start to produce a different experience of the text. The experience through captions is different in this way from the experience through sound alone.
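Just to make the speaker ID idea concrete, here is a minimal sketch using WebVTT’s standard voice tag, which is how a web caption file can name a speaker. The timings and cue text below are invented for illustration; they are not from the actual Man of Steel track.

```typescript
// A WebVTT caption snippet (held in a string) showing speaker identifiers
// via the standard <v> voice tag. Timings and dialogue are invented.
const speakerIdExample = `WEBVTT

00:01:02.000 --> 00:01:04.500
<v Superman>You're not alone.

00:01:05.000 --> 00:01:06.500
<v Lois>Super--`;
```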

And then my own work has emphasized a number of what I call transformations of meaning, things that captions do in that move from sound to writing. I say they contextualize, clarify, formalize, equalize, linearize, time-shift, and distill.

And I’m not going to talk about each one of those, although I could if there were questions. But my point here is that there are some significant things happening, especially in pop culture captioning, when you move from sound to writing.

Here’s that example of how captioners don’t caption sounds per se. I like this example. And it was kind of mind-blowing to me because it’s really not a sound at all.

The caption here is “TURNS TAP OFF,” and we can’t see Daniel Radcliffe turn the tap off. So the captioner tells us he turned it off. But that’s not a sound, right? That’s an action.

And maybe I’m splitting hairs here. But the captioner could have said, could have typed, “SQUEAK,” or something like that, whatever the tap sounds like when you shut it off. But that wouldn’t have helped us understand the meaning of that sound in that particular context.

We need to know that he’s done washing his hands. He’s a doctor here. And so “TURNS TAP OFF” is, I think, the more effective solution over trying to describe how the sound sounds.

So captioners don’t caption sounds. They don’t describe sounds per se. But they describe the function and purpose of sound in specific contexts– or so I would argue.

So Joe Clark in the context of placement– we’ll talk about placement in a second– he says it’s not enough to tell us the mere text of what is being said. You see this in different places from captioning advocates.

Here’s a tweet that I added. I don’t think this is on your slide show, but it was a tweet from yesterday quoting Brenda Brueggemann– Chad Iwertz quoting Brenda Brueggemann here.

Transcriptioners are traditionally taught to capture the words– the words– but there’s so much more– so captioning advocates, captioning scholars or researchers, pointing out that captioning can’t be reduced simply to the words. And then I would just add here at the bottom of slide 9 that captioning isn’t just about sound, but it’s also about information about sound, what I call meta-level information.

So a speaker ID isn’t a sound. It’s information about who is speaking. And there are other examples like this, too, like screen placement. Where you place words on the screen has a meaning.

And then I would also include information about our assumptions about the physics and reception of sound itself. So if lips are moving, like mouthed speech, but no sound is coming out, the captioner may have to put a “SILENCE” caption there or just put “MOUTHED.”

So our assumptions about sound and how it works need to be accounted for in closed captioning, along with some of this other meta-level information.

So quality, then– we’re really concerned with accuracy for good reason. But I think there’s room here as we move into the future, or as the future rushes to us, there’s room here to expand or just be reminded that quality captioning is more than accuracy. And I don’t mean to diminish accuracy, especially in higher ed. Of course, accuracy is vitally important.

Accuracy generates a lot of chatter on Twitter and elsewhere. And caption fail videos are lots and lots of fun if you haven’t explored them before. But let’s add on to our list of criteria.

And I like the acronym PACT, P-A-C-T, for Placement, Accuracy, Completeness, and Timing. These are the four criteria from the FCC. And then I believe the FCC also adds some information about or a reference to interface capabilities.

But if you look at something like the Captioning Key style guidelines, 30 pages or so, you begin to get a sense that captioning is more than accuracy and more than just copying down words.

As far as placement goes, I love this phrase, again, from Joe Clark– positioning carries meaning. And it’s not just the words, but where they’re placed on the screen. Especially if you have multiple people talking, you need to be able to distinguish one speaker’s caption from another.

And if you don’t have placement, you have to put them both at the bottom of the screen and use preceding hyphens or something like that. And it can be confusing. It’s much more effective to use positioning to suggest who is speaking in a two shot, for example.
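As a rough sketch of how this works technically in one common format: WebVTT cue settings let a captioner shift each cue horizontally, so each line can sit under its speaker in a two shot. The timings and dialogue below are invented for illustration.

```typescript
// WebVTT cue settings ("position" is the horizontal anchor) used to place
// each speaker's caption under that speaker. All timings and text invented.
const twoShotExample = `WEBVTT

00:00:10.000 --> 00:00:11.500 position:25% align:center
How are you?

00:00:11.700 --> 00:00:12.500 position:75% align:center
Fine.`;
```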

Well, let’s talk about some robust interfaces and giving users control of them. After I cover these two quotes here, I think we’re going to talk about interfaces here in a second.

So within this context, this context in which captioning quality is more than just accuracy, I think it sort of leads to one conclusion that people may need to be trained in order to do it well. Joselia Neves– she’s talking about subtitling for the deaf and hard of hearing, which is the term that’s in use in Europe. So I’m kind of summarizing here and putting closed captioning in there, too.

But I would agree that closed captioning requires special training because there are some complexities here. And then about the economics– I received an email from a professional captioner a couple years ago, and I asked her if I could post it anonymously. It’s a really telling email about the economics of professional offline captioning– captioning for movies and TV shows– and the extent to which money is driving quality.

It turns out that pop-on style captions are more expensive than scroll-up style captions. And that has to do with how long it takes to time the captions. And I know scroll-up is necessary in live programming. But I think you’re starting to see more and more prerecorded shows done in scroll-up because it’s cheaper.

So when we talk about quality and I introduce all of these additional elements, the economic reality kind of creeps in there. You sort of get what you pay for. And you can do it quickly, but it may not have all of these sort of more sophisticated aspects that I think users require.

So about interfaces, then, we can look at YouTube and other places and find a whole bunch of options for customizing closed captions, which is really cool. I think users need to be able to display captions the way that they want. This is about visual design and having control as a user over the design of the captions.

So YouTube offers a number of options that are easy to locate on the interface. Hulu offers a number of options, as well. So if you’re watching The Simpsons late at night, as I was, you might want to change the color, the default color, of the captions because those two yellow colors are almost identical. And I wrote a whole blog post on this because I got really interested in the exact color of yellow for Hulu’s default and then what The Simpsons yellow– what that shade was.

Anyway, so users need to have control over these visual settings. And so I think that’s part of building robust interfaces. As we take some of this information into higher ed, we need to make sure that users can customize the look of the captions.

We can already do this on YouTube. I think we just need to make sure this is a priority in higher ed. I think we can push these ideas a little bit further, making sure that users can access those options.

So it turns out that people are sort of tweeting at the TV manufacturers kind of regularly and tweeting at the game system manufacturers, asking how do you turn the captions off? Sometimes, it’s hard. You got to go five or six menus deep to figure out how to turn the captions off or on– so making sure you can access them, making sure those options are robust.

And then something that I don’t think exists yet is the ability really to carry your preferences with you. If you want yellow captions everywhere with a black background, you should be able to carry those preferences with you– I know this is maybe going too far into the future here– but like a style sheet.

And you can do this if you’re a low-vision user. You can install a customized style sheet and view the web in the way that you want. Perhaps the same should be true with closed captions and visual design.
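As a sketch of what a portable caption style sheet might look like on the web: the standard CSS ::cue pseudo-element restyles WebVTT captions on an HTML5 video, so saved user preferences could be reapplied wherever the user goes. The preference shape and storage key here are assumptions, not an existing standard.

```typescript
// Minimal sketch: reapply a user's saved caption preferences to any HTML5
// video on the page via the standard ::cue pseudo-element. The CaptionPrefs
// shape and the "captionPrefs" storage key are hypothetical.
interface CaptionPrefs {
  color: string;       // e.g. "yellow"
  background: string;  // e.g. "black"
  fontSize: string;    // e.g. "120%"
}

function applyCaptionPrefs(prefs: CaptionPrefs): void {
  const style = document.createElement("style");
  style.textContent = `
    video::cue {
      color: ${prefs.color};
      background-color: ${prefs.background};
      font-size: ${prefs.fontSize};
    }`;
  document.head.appendChild(style);
}

// Load saved preferences, if any, and apply them to every captioned video.
const saved = localStorage.getItem("captionPrefs");
if (saved) applyCaptionPrefs(JSON.parse(saved) as CaptionPrefs);
```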

Placement should be supported. I know placement is going to make interfaces, do-it-yourself interfaces, more complex. Amara.org doesn’t support placement, but placement carries meaning. And I think as we move forward, we need to be thinking about some of these other ways of helping students understand the meaning.

And then also transcripts– transcripts are great for studying. You can make them large– a large-print version for someone with low vision. You can print them out and listen along if this is a lecture– a course lecture. And you can highlight them and put margin notes in them, and other things, as well.

Let’s keep going as we sort of maybe dig deeper into the power of closed captioning. I love this metaphor of baking them in, captions that are baked in rather than added on after the fact.

We can look at interactive transcripts here. And by baked in, I mean the sort of power that’s going to come by thinking about captioning from the start. What do we get if we think about captioning and sort of build that power into our course videos from the start?

Well, we can get interactive transcripts. And a number of companies support interactive transcripts, which are really cool. And if you don’t know what these are, they’re not that new. I think they’re at least five or six years old.

If you don’t know what they are, take a look. You can search the transcript. If you find a keyword, you can click on it and you’ll be taken to the moment in that video where that word is spoken– really kind of cool for an educational context, sort of combining search with the ability to go right to that spot in the video.

This goes hand-in-hand with search engine optimization, of course. You can reach more people if Google can understand or index the content of your videos. If you have a whole course full of videos with interactive transcripts, students can search them. Imagine being able to search an entire semester’s worth of videos.
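For readers who want to see the mechanics, here is a minimal sketch of an interactive transcript, assuming caption cues have already been parsed out of a WebVTT or SRT file into a simple array; the Cue shape is an assumption for illustration.

```typescript
// Render each cue as a clickable span; clicking seeks the video to that
// cue's start time. Assumes cues were parsed from a caption file.
interface Cue {
  start: number; // seconds
  text: string;
}

function renderTranscript(
  video: HTMLVideoElement,
  container: HTMLElement,
  cues: Cue[]
): void {
  for (const cue of cues) {
    const span = document.createElement("span");
    span.textContent = cue.text + " ";
    span.style.cursor = "pointer";
    span.addEventListener("click", () => {
      video.currentTime = cue.start; // jump to the moment the words are spoken
    });
    container.appendChild(span);
  }
}
```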

Let’s say the professor talks about cell mitosis or something like that. You can find all of the instances in which the professor talks about cell mitosis and then, as you can see farther down on this slide, create a mash-up– what 3Play calls media clipping.

And I haven’t played with this personally. But I love the idea of being able to create a kind of mash-up of a number of clips that might all talk about cell mitosis over the course of a semester. And you can see that learning then becomes more engaging and becomes more customized. And it becomes, I think, much more effective than trying to skim videos in order to review a lecture video prior to an exam.
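In the same hypothetical spirit– and not as a description of 3Play’s actual API– a keyword search over a semester’s worth of cues is just a filter, and the matches double as a clip list for a mash-up:

```typescript
// Search every lecture's cues for a keyword; each match is a candidate
// clip (video + start/end time). Data shapes are assumptions.
interface LectureCue {
  videoId: string;
  start: number; // seconds
  end: number;
  text: string;
}

function findClips(cues: LectureCue[], keyword: string): LectureCue[] {
  const needle = keyword.toLowerCase();
  return cues.filter((c) => c.text.toLowerCase().includes(needle));
}

// e.g. findClips(semesterCues, "mitosis") returns every moment the
// professor mentions mitosis, ready to be played back-to-back.
```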

So searchable, interactive, and then something we haven’t really explored. And I know we need to solve the problem of the first caption stream, but there is a lot of power here to work with multiple caption streams.

We have more and more international students. I think that’s a given. It’s certainly a given here at Texas Tech. We have more and more non-native speakers of English.

What about a second track in a different language? I realize this is costly and maybe a bit idealistic. But there’s power here, I think.

What about alternative caption tracks? So we’re talking about higher ed. But in K through 12, what about an easy reading track? I’m not sure exactly what that would look like. We could talk about that together. Instead of a verbatim track, maybe there’s a second track.

So captions become a kind of learning tool, able to meet students where they are.

What else? Personalized video streams and course study tools, I’ve already [AUDIO OUT]. I like the idea of perhaps assigning a caption track to each student. Maybe I talk about this on another slide.

And then I know this is sort of something we can’t necessarily do today, but students who can annotate lecture videos, bookmark them. And maybe this is done through caption technology, creating a kind of personalized caption track underneath the sort of official verbatim track.

I think the sky’s really the limit on what we can do. HDTVs, as you know, support six or seven caption streams. And TED.com supports an unlimited number of subtitle streams. So this is something that could be explored in the future.
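To show how cheap multiple streams already are on the web, here is a sketch that attaches several caption tracks to one HTML5 video, in the spirit of TED.com’s many subtitle streams. The file names, labels, and the personal annotations track are hypothetical.

```typescript
// Attach several caption/subtitle tracks to a single video element.
const video = document.querySelector("video")!;
const streams = [
  { src: "lecture-en.vtt", srclang: "en", label: "English (verbatim)" },
  { src: "lecture-es.vtt", srclang: "es", label: "Español" },
  { src: "lecture-notes.vtt", srclang: "en", label: "My annotations" },
];
for (const s of streams) {
  const track = document.createElement("track");
  track.kind = "captions";
  track.src = s.src;
  track.srclang = s.srclang;
  track.label = s.label;
  video.appendChild(track);
}
```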

Lecture capture– I think students love lecture capture. This is when a course lecture is captured automatically and probably not captioned. You all can correct me on this, but my sense is that course lectures that are captured are only captioned when there is an identifiable need.

But it turns out that hearing students really love lecture capture. If you click on that link or have access to that link on student testimonials at CSUN, I think they’re talking about uncaptioned lectures. But they love lecture capture.

One student says he or she would sign anything to make lecture capture available in all of the courses. And this is just the ability to kind of skim and review lectures at home.

But imagine combining lecture capture with captioning technology, some of the things I was just talking about on the previous slide. First of all, you can search by keyword across multiple lectures. That’s what captioning allows you to do, gives you the power of search.

You can create some of these mash-ups or media clips based on your search queries. You can interact with the transcript via interactive transcripts. You can use the transcript as a study guide. You might be able to add your own annotations or bookmarks as maybe a kind of secondary or tertiary caption track.

You can also see what segments are most popular via heat map technology, which is not new. And I’m not sure what happened to it on Hulu, but I’m going to get to that on the next slide.

Also, professors have access to this data, too, or they could in the future. And they can understand a little better how students are reviewing the lectures, what they’re searching for, and they can adjust their teaching by attending to this data.

Heat maps are pretty cool. I first came across heat maps a number of years ago– maybe 2009 or 2010– through Hulu. This is a way of visualizing what viewers are searching for and viewing in the various shows.

So this heat map example is from an episode of Glee, and it spikes about 30 minutes into the episode. You can see the popularity spikes there. And that’s because that’s where the kids are singing “Defying Gravity.”

So presumably, people are searching for “Defying Gravity,” finding that clip, and watching it. But then other people could come back. Other viewers could come back and see what’s most popular and click on it.

Now, imagine if these are students. You can kind of crowdsource your learning. Maybe you don’t know how to study for the exam. Well, you could see what the class finds most popular.

And this may not be the best way to search, best way to study for an exam. But this could be one way of kind of crowdsourcing learning, by checking out what’s been most popular in the various course lecture videos. And then this data, of course, is interactive. So you can sort of click on any one of those bars.
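The aggregation behind a heat map is simple enough to sketch: bucket viewing events into fixed-size time bins and count them. The event shape here is an assumption, not Hulu’s actual data model.

```typescript
// Count views per 30-second bin; the spikes are the "hot" moments.
interface ViewEvent {
  videoId: string;
  second: number; // position in the video when the event fired
}

function heatMap(events: ViewEvent[], binSeconds = 30): Map<number, number> {
  const bins = new Map<number, number>();
  for (const e of events) {
    const bin = Math.floor(e.second / binSeconds);
    bins.set(bin, (bins.get(bin) ?? 0) + 1);
  }
  return bins; // e.g. a spike near minute 30 marks the most-replayed scene
}
```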

A lot of stuff is happening in higher ed. And I’m not going to talk about all of these. But I just wanted to sort of list a few pedagogical trends that I think have implications for closed captioning– the flipped classroom, especially, because in a flipped classroom, the professor is making a number of videos, usually before the semester starts.

Those may or may not be captioned. And then students watch those videos outside of class. And then in class, something sort of different happens. The professor is not lecturing quite as often– maybe 10 minutes, I’ve read in one scenario. And then the rest of the class time is devoted to– who knows– discussion or course projects or working on math problems or whatever.

But you can see the central role that video plays. And, well, we need to bring captions in, too. And I think we can combine the course lecture in a flipped classroom with some of the power of closed captioning.

Well, let’s keep going here in the interest of time. But there’s a lot going on. We’re not just talking about course lectures. We’re talking about all kinds of video content for a pretty wide population of users.

Well, how do we get there given that we’re not doing all of these things right now? Well, I think we have to address some of the confusion and maybe hesitation, maybe uncertainty that faculty, I think, are feeling.

And you may all want to fill me in on your sense of the reactions here. Faculty have asked questions, I think, at least on my campus, why now?

Here on my campus, we received an email from the president’s office, I believe, saying, hey, caption all of your stuff. And it was kind of sudden. And in the meantime, our university has worked to provide a kind of a richer infrastructure to support captioning.

But why now? Who’s responsible for this? What kind of help am I going to receive? What’s the goal or purpose? What content is covered, and so on?

But faculty especially need support if they have not captioned before, because captioning is time-consuming. Even if you know how to do it, it can still be pretty time-consuming. I think one way forward here is universal design. The problem is inflexible pedagogies and so on– one size fits all, assuming a kind of imaginary average student. Universal design for learning really deserves multiple slides here.

But just sort of quickly, universal design for learning sort of grows out of universal design and really emphasizes options. The word “multiple” appears a lot. The word “option” appears a lot– providing multiple means of representation, not just via audio, but also perhaps via text or via closed captioning. Multiple means of student action and expression because students are diverse. They learn in different ways.

The idea here is that universal design for learning can reach all students– maybe I should have started there– not just students with disabilities. But if you approach pedagogy from a universal design for a learning perspective, the argument is that you have a better chance of reaching a wide variety of learners, both able-bodied and disabled.

And then multiple means of engaging with course content, as well. That can motivate learners and can play to learners’ different styles and strengths.

So closed captioning can help enact UDL here as an alternative to audio or by providing a different modality. Maybe there’s a second language for non-native speakers– providing transcripts that can be customized in appearance because that’s what the user prefers based on his or her learning style or perception abilities, that sort of thing.

This could be one way forward– combining universal design for learning with all of these literacy studies. In the context of closed captioning, nothing has been studied more, I think, than literacy.

These literacy studies go back to the mid-1980s. We have about 30 years of literacy studies in closed captioning. And this could be one way to suggest that captions can benefit everyone.

I think the results are pretty consistently positive, suggesting, at least in the K through 12 arena, that captions can have a positive benefit in a number of areas– retention, note-taking, listening, grades, reading speed, and so on. What we need are literacy studies in higher education, since most, maybe all, of these studies are K through 12-focused– you get to college and you still need to keep working on reading and writing. I found one sort of anecdotal study on grades: grades increased in the section in which the videos were captioned, according to a professor, I think at San Francisco State. But otherwise, it’s all K through 12 stuff.

It’s still pretty promising in terms of the connection between literacy and closed captioning. You put a kid in front of a TV and the captions are on, they might get some benefit from that in terms of literacy. I guess that’s the conclusion there.

There’s a long bibliography link. And then we can also include students who are deaf and hard of hearing, but also other students, too– students with learning disabilities. Students on the autistic spectrum may benefit from having a second stream of input– not just listening, but also reading. Non-native speakers I mentioned a couple of times, and also older or returning students.

Let’s keep going. I would say that captions clarify, contextualize, distill, and formalize. And these functions can promote learning, literacy, and understanding.

Twitter would say something like, let me turn the closed captions on so I don’t miss anything. This is kind of universal design in Twitter terms. But it fits, I think, with the higher ed context, as well.

Students may retain that content a little better if they’re reading it at the same time that they’re listening– not to leave out deaf and hard of hearing students, but to recognize that captioning can benefit a lot of our students.

I’ve included some examples here of additional contexts in which captions can be valuable– ability or disability doesn’t matter here. I think these are just additional examples.

A student studying late at night– so a noisy area or a quiet area are the kind of paradigmatic examples. It seems when we talk about universal design and captioning, we always invoke the noisy bar or the noisy airport. But noisy environments are a good example of a need for closed captioning– maybe riding a bus or something.

In my own work, I’m interested in cultivating what I’ve been calling caption studies. I think this is just an additional way of getting the word out about the power and usefulness of closed captioning.

I think there’s a lot of work that can be done here that can be united under this banner of caption studies, including big data studies. If anyone has access to a large database of captioned files, I’d love to have access to it.

So sirens in closed captioning– do they always wail? And the answer is mostly, they do. I have a small corpus there, but I’d love to explore non-speech siren captions, believe it or not, with a very large corpus.
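As a toy version of that kind of corpus query, assuming the corpus is just an array of caption lines as strings, you could count which verbs follow “siren” inside non-speech captions:

```typescript
// Tally the verbs in bracketed siren captions, e.g. "[SIREN WAILING]".
function sirenVerbs(captions: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  const pattern = /\[sirens?\s+(\w+)/i; // matches "[SIREN WAILS]" etc.
  for (const line of captions) {
    const match = line.match(pattern);
    if (match) {
      const verb = match[1].toLowerCase();
      counts.set(verb, (counts.get(verb) ?? 0) + 1);
    }
  }
  return counts; // per Sean's small corpus, "wailing" would dominate
}
```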

But there are a lot of things that we could do, from literacy studies to usability studies. We could interview and survey students. And we can develop new hardware and software, as well.

So as I head towards a conclusion here– now I worry that I’ve been talking a mile a minute, but I see the time here, so I’m doing OK. Let me conclude with a little dose of reality and talk about creating a departmental culture of accessibility, and then finish with some notes about an institutional culture of accessibility.

I think there are a number of things that departments can do to promote accessibility and promote closed captioning in particular, and as a way to help convince faculty that closed captioning has wider benefits in terms of literacy, in terms of learning, and so on.

And so I list some things there on this slide, slide 26, about making sure that departments are providing regular support and training. We’re moving from a kind of traditional course model to what Randy Bass calls a post-course era. And the post-course era is team-based. It’s partnership-based. The classroom of the future is going to be one in which we will have to work with other entities on campus to make sure our courses are accessible and stuff. But at the departmental level, just making sure faculty know that support is available.

Outside speakers can come in. We had the director of Student Disability Services come to one session. Accessibility workshops– I don’t think we’ve held any recently in our department. We have, but not for captioning. I’m sorry.

So accessibility workshops as part of a list of other kinds of workshops on using technology and design or whatever. I think this semester or this year, a PDF accessibility workshop was offered in our department. Do-it-yourself tutorials, having an accessibility liaison– you can see these are more practical suggestions.

Integrating accessibility into courses– I teach a course on web accessibility in the English department. And it covers disability studies, as well.

Publishing or promoting good work in accessibility– that’s where that link goes, to one of our PhD students who incorporated captioning as a project in an undergraduate course.

Rhetorically widening the audience– I truly believe as a hearing person that captioning has wide benefit at the same time that I recognize that closed captioning’s primary stakeholders are deaf and hard of hearing people. But there’s a wide audience out there that can benefit.

Tracking student data– asking them what they thought about captions.

User testing– we have a usability lab here in our department. We also have eye-tracking technology, as well. I think some of our eye-tracking studies of closed captioning need to be updated.

Anyway, just establishing the kind of culture in which accessibility is part of the normal routine of teaching courses and supporting each other, and keeping in mind that one size doesn’t fit all. There’s tremendous diversity here, almost a really kind of daunting level of diversity among the deaf and hard of hearing population.

Being deaf doesn’t necessarily mean not being able to hear anything. There are a lot of varying levels of hearing ability among the lowercase-d deaf population.

And then you bring in capital-D, cultural Deafness, and things become even more complex. Within the hard of hearing population, you have people who were born hard of hearing and others who might have lost some hearing later in life– so a tremendous amount of diversity within that population, and diversity in class formats and in what faculty are teaching. I just think diversity needs to be kept in mind when supporting faculty and addressing their needs.

Also, copyright– we really didn’t talk about it too much, but copyright is one of those thorns here. I think some accessibility units at some universities won’t caption unless they can secure copyright clearance.

And then there are just different file types and different lengths and different distribution channels, from YouTube to DVDs and so on– so lots of diversity.

There’s a great PowerPoint here in my PowerPoint. I’m linking to another PowerPoint, which I’ve never done before– but it’s a great presentation from a couple of people at UC Boulder. And they’ve got some great scenarios here, just different faculty scenarios, teaching scenarios, that raise some really challenging questions for an accessibility unit on campus.

And they conclude with a number of big ideas. I encourage you to take a look at some of the scenarios– big ideas include, well, complexity, which I’ve mentioned; creating an institutional policy; this notion of hybridity– you’ll see it [AUDIO OUT] institutions– UT, Stanford, my institution, Texas Tech– having contracts with outside vendors, but also having some local in-house captioning solutions.

Captioning is expensive, time-consuming. There are some copyright hurdles that need to be jumped, I guess.

Establishing those partnerships– if we’re moving into a post-course era, we need to establish partnerships with other units on campus so we can ensure that our courses are accessible.

This may also force faculty to resist a little bit, and maybe change how they teach because now they’re doing something that maybe they would not have considered doing but are now required to do. That’s kind of a big topic. And resistance can be a good thing if it changes how people teach for the better.

And then, of course, there are technical challenges, too, like extracting captions from a DVD. Well, [AUDIO OUT] a big problem. But you have to figure it out. And there are all kinds of technical challenges involved in captioning and extracting captions.

As we scale up from the departmental level to the institutional level– well, instead of a step-by-step guide, which might be outside my wheelhouse, I wanted to compile a number of key terms that I came across as I was researching this topic and reflecting on it. You can see some of the same key terms, again, about copyright; multiplicity, referring to multiple contracts; multiple solutions, because the terrain is diverse; hybridity; in-house solutions– maybe captioning labs, which we have here at Texas Tech; but also outside vendors or contractors, like 3Play Media.

Training and quality come up again. Automated workflows– making sure that this process is as seamless as possible. What else? The problems are complex, but also simple.

I could teach anybody to close caption a YouTube video in a few minutes. But I think it would take a lot longer to talk about some of these more complex issues.

So there is a kind of interesting tension here between complexity on the one hand and the simplicity of some of these do-it-yourself tools. Collaboration is important as we move towards this team-based model. Maybe we use a term like partnerships– establishing partnerships with outside entities, as well as other units on our campus.

Communication is always important. And we can continue to rhetorically widen the contexts in which captioning can be valuable in higher education.

Well, I had a question here. But I am certainly interested in continuing the discussion, and also hearing from you about your perspectives. What do you think the future holds, and how do we get there? Thank you.

LILY BOND: Thank you so much, Sean. That was a fabulous presentation, and there are a lot of questions coming in. And Sean, to start out, we have a question here– is there a rule for how many lines a caption frame can have? I’ve seen more in one line. Is that appropriate?

SEAN ZDENEK: Sure, yeah. Two to three is average. And I’ve also seen– and you know what you can look at? If I can type this in here– actually, let me just answer it because now I can’t see how to type it in right away.

If you go to something like captioningkey.org, there’s one pretty good set of style guidelines. I think two to three lines is average. But I’ve also seen some experts who say, you know what? If you need four lines because of the context, you shouldn’t feel as though you can never go higher than two or three. But I think only in rare cases will you see captions four lines high.

LILY BOND: Great.

SEAN ZDENEK: But yeah, definitely go higher than two or three if that’s what you need. I think what’s more important is reading speed. Captioning Key– they start talking about editing captions at 150 or 160 words per minute. The average speaking speed on TV is 141 words per minute, according to a late ’90s study. Thank you.
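For anyone who wants to sanity-check reading speed in their own files, here is a minimal sketch, assuming cues with start and end times in seconds; the 150-160 words-per-minute editing threshold is the Captioning Key figure Sean mentions above.

```typescript
// Compute a cue's reading speed in words per minute.
interface TimedCue {
  start: number; // seconds
  end: number;
  text: string;
}

function wordsPerMinute(cue: TimedCue): number {
  const words = cue.text.trim().split(/\s+/).length;
  return words / ((cue.end - cue.start) / 60);
}

// e.g. a 12-word cue displayed for 4 seconds reads at 180 wpm --
// a candidate for editing or retiming under a 160 wpm guideline.
```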

LILY BOND: Another question here– as a faculty member, maybe you can provide insight into how administrators can convince faculty members that captioning is not a nice-to-have, but rather a requirement. Do you have any tips on how to build faculty buy-in?

SEAN ZDENEK: Sure. I don’t want to repeat some of the things I’ve already said. But I think a universal design framework– I know I mentioned universal design for learning. But if you look around the web at some of the accessibility pages for various universities, you see them leaning heavily on universal design, this idea that captions can provide a lot of benefit to students. And we’re not just talking about a small population– this is something that can benefit a lot of students.

I guess I’d also refer to some of the power that captioning can potentially hold through interactive transcripts and other technologies. Everybody wants to search video, and you can’t really search a video or index a video without captions. It’s all based on caption technology.

Search engines don’t know what’s inside a video unless you can translate that into text. So maybe other powerful features of captioning could be one way in– maybe through case studies, as well, of successful teaching that leveraged closed captioning and search.

I don’t know. I’d be curious to hear what other ideas people had about that, too.

LILY BOND: Thanks. The next question here– you mentioned it makes a difference where on the screen the captioning lines are displayed. Can you elaborate? And when does it appear on the top or the bottom?

SEAN ZDENEK: Yeah. It’d be great to show an example. It would take me a minute to find it, but I do talk about a couple examples in my book.

When two people are talking at the same time, you need to be able to distinguish. You need to be able to distinguish speaker A from speaker B, especially if this is a short call and response.

So if one speaker says, hey, or one speaker says, how are you, and the other says, fine, the caption reader needs to be looking at the captions and then also looking at the speakers to see who is saying what. That’s a simple example. I think it just gets more complex from there.

In a single caption at the bottom of the screen, really the only way to distinguish those two lines is with a preceding hyphen. I guess you could add a speaker ID, too, but that’s not usually what happens. It just adds more text to the caption that you have to read.

It’s much more effective to move those captions underneath each speaker, and that’s what placement is all about. The best example is what’s called a two shot, when you have two people on the screen at the same time and both of their contributions fit into the same caption. Then you can put each speaker’s contribution underneath each speaker. Does that help?

LILY BOND: Yeah, that makes a lot of sense. Thank you. Someone else is asking– there are a few questions kind of about the accessibility team at your school and what that looks like, how many faculty or staff there are, whether the budget was established for your institution or by department. There are just a lot of questions about the structure of accessibility at your university.

SEAN ZDENEK: Yeah, I think that’s great. I think I can offer some help on that. And I’m sorry that I can’t offer a sort of a complete picture.

About two weeks ago, I spoke over the internet through Skype or whatever with the accessibility coordinator for our campus. We had a really nice conversation. She talked about how many videos they are captioning.

But this accessibility coordinator position is associated with Worldwide eLearning, which is like our online college. And it grew out of a piloted captioning lab in the fall of 2014. So in the fall of 2014, Texas Tech– and I don’t know, at some point, I think Texas Tech did partner with 3Play Media. But they also do some in-house stuff. They were using undergraduates as well as some graduate students in the captioning lab.

And then at some point, the captioning lab sort of morphed into or became part of Worldwide eLearning, our online course offerings division, with an accessibility coordinator who oversees it. This isn’t going to help with money or budgeting, but she told me a couple weeks ago that they captioned 400 videos at the beginning of the semester, and now they’ve captioned a total of 586, or 175 hours.

And I’m not sure how many of those hours were captioned by outside vendors and how many were captioned in-house.

LILY BOND: Great. Thank you, Sean. That’s a helpful overview.

Another person has a thoughtful question. Sometimes when people communicate the advantages of captioning for non-disabled students as a way to sell the idea, I think it can unintentionally send the message that providing access to deaf students is not enough reason in and of itself to caption. How might we communicate these other benefits without inadvertently communicating that access for deaf people is not a good enough reason to caption?

SEAN ZDENEK: Yeah, that’s a fantastic question. When I was putting the slides together on rhetorical widening, there are criticisms of this concept of rhetorical widening. I think Jay Dolmage and other rhetorical scholars have called out this practice.

It’s the idea that captioning only becomes valuable when mainstream audiences recognize its value. And I have grappled with that a little bit. That’s why at the beginning of this presentation, and I think in a couple other places, as well, I referred back to what I call the primary stakeholders.

I have an immediate family member who is deaf. This is the context for me. I think these are the most important stakeholders. It all has to sort of begin and end there.

I don’t know if I’m offering a very good answer. I’m aware of some of the criticisms of rhetorical widening. I think universal design needs to go hand-in-hand with the recognition that more and more students are attending college, more and more students with disabilities are attending college. And we need to make sure that we do as much as we can for them, I think, perhaps not losing sight of them.

I don’t know. It’s a hard question. I try in my own presentations to make it clear that this begins with– slide 2 for me included a reference to deaf and hard of hearing students.

At the same time, I’m hearing and I cannot live without captions. I think there’s a way, perhaps, maybe to sort of embrace multiple audiences here without losing sight of the important deaf and hard of hearing audience.

LILY BOND: Thank you, Sean. That’s a really great answer. Another question here– in your captioning lab, do your student workers manually caption the videos or do you use some type of software?

SEAN ZDENEK: I’m not sure. I do know, and I’m sort of scanning my notes here– I do know that we make Movie Captioner available for free. And I’m not sure if that’s what they use. I could look that up and find out.

I do know in some cases, the student captioners are reviewing caption material. And that might be material that has already been captioned by a third party. But I’m sorry. I’m sorry, I don’t know. And I’m curious, so I’m going to find out.

LILY BOND: Great. Thanks, Sean.

Another question here– do you know what department is responsible for ADA compliance at your university or who monitors that compliance?

SEAN ZDENEK: Sure. I think there is someone in the provost’s office who is our ADA coordinator and is the first line of contact. And then I think it moves down through our Student Disability Services office. Don’t hold me to that. But I believe that’s the sort of line of command.

LILY BOND: Great. Thanks. Another question– do you have recommendations or resources for producing good captions of non-speech sounds?

SEAN ZDENEK: Sure. I mentioned Captioning Key a couple times. I think one of the problems with the style guides is that they tend to assume you already have the words– you already have the words, and now you just need to format them. How many lines? What is the reading speed? Where do you break your captions? Do you use parentheses or brackets for information about speech or speaker IDs? And so on.

I don’t know that there are a lot of good resources about rhetorically inventing words for sounds that might be unusual. This is one of my criticisms of the style guides– not that style guides should do everything, but they tend to assume that you already have those words.

And sometimes you need help inventing those words, like is this a growl or a roar? Maybe one good resource might be– here we go– might be a new book called Reading Sounds by yours truly, in which I analyze just a ton of clips.

I don’t know if that’s a good resource. But I think there might be a limitation there in helping people create those words for non-speech sounds.

LILY BOND: I have found your blog very valuable, Sean, in dealing with exactly that issue and looking at sounds that do not have words that accurately depict them. So I do think that’s a great resource.

Another question here is what are your recommendations for captioning without blocking critical graphics, like a video of someone giving a presentation with PowerPoint?

SEAN ZDENEK: That’s great. People like to– if they don’t know about accessibility– I’m not trying to slam people who make videos. But with student video projects, you’ll often see some onscreen text, through iMovie or something, that is all the way at the bottom of the screen. And then if you want to caption that video, the captions end up covering titles, speaker titles, or other information that’s already been put on the screen.

Well, what was the question exactly? Do I have suggestions?

LILY BOND: The question was do you have suggestions– sorry, I’m just looking for it here– yeah, your recommendations for captioning without blocking critical graphics.

SEAN ZDENEK: Yeah. Well, I think you have to remind students. If this is a professor presentation or a student, it doesn’t matter whose presentation. But if these are students in a classroom, I think you have to talk about captioning at the very beginning.

That way, they realize that there are going to be words on the screen for those people who need them. And that might alert them to the fact that you can’t put a title, a hard-coded title, in your video at the bottom of the screen. I think you need to have a kind of safe space at the bottom of the screen or reserve the very bottom of the screen for closed captioning.

So talking about captioning right away, early on, I think may be one way– sort of folding it into discussions about how to make a PowerPoint or how to make video.

LILY BOND: Thanks, Sean. Someone else is asking if you can share any expertise on audio description for blind or visually-impaired students.

SEAN ZDENEK: Well, the companion style guide for Captioning Key is called Description Key. That might be one way to go.

There are other resources out there that I don’t have close at hand that I’ve used in my own teaching talking about audio description with students. But one that comes to mind here right now might be the Description Key.

LILY BOND: Agreed, the Description Key is a great resource for that. So Sean, we’re about out of time. But a few people have asked where they can find your blog. If you want to plug that or put it into that chat window, people would appreciate that link.

SEAN ZDENEK: Sure, yeah. For some reason, I cannot type in here, as I said earlier. Instead of just clicking around here because I haven’t used this interface before, I’ll say readingsounds.net. And Lily, maybe you can type it in for me. I’m sorry.

LILY BOND: Yep, I just sent it out to everyone.

SEAN ZDENEK: And I also just have an old blog from 2009 to the present, just different examples and analyses at seanzdenek– S-E-A-N-Z-D-E-N-E-K– .com. And feel free to contact me about anything you find or other questions or you want to stay in touch and I’d be happy to continue the conversation.

LILY BOND: Well, thank you so much, Sean. That was a great presentation and a great conversation to have. And I appreciate you taking the time to join us today.

SEAN ZDENEK: Great. Thank you for having me.

LILY BOND: And thank you to everyone who attended. A reminder that we will be sending out a recording tomorrow, so keep an eye out for that. And I hope everyone has a great rest of the day.

SEAN ZDENEK: Thank you.