Accessible Video Captioning in Higher Ed and MOOCs [Transcript]
CJ JOHNSON: All right. We’re ready to get started. Welcome to the 3:30 panel. This is Accessible Video Captioning in Higher Ed and MOOCs. I am your friendly moderator, CJ Johnson, founder of 3Play Media. Our company focuses on creating timed text for online video, and using that timed text for video captioning, search, and interactive user experiences.
Our panel today, we really want to focus most of our time on answering the questions you might have about publishing video accessibly. But we want to just start by letting each of our panelists give a brief overview of what they do and describe a component of the accessibility process.
First, Kathy Berger, from the University of New Hampshire, will give us a quick background on rules and regulations surrounding video accessibility. Next, Nicole Croy, from Regis University, will take us through her experience in building a scalable captioning workflow for a lot of videos. You might find, if you’ve tried implementing video accessibility, that captioning one or two videos is much different from captioning 100 or 1,000 or more. And finally, Brett Paci. Got it?
BRETT PACI: Yeah. Pretty close.
CJ JOHNSON: Yes, from MIT OpenCourseWare. He will talk about publishing video with captions and creating interactive user experiences, specifically using YouTube. So with that, I’ll let Kathy take over.
KATHERINE BERGER: Good afternoon, everyone. This is kind of the last session, so we hope we won’t make you fall asleep. But I am Kathy Berger. I am the Director of Disability Services at the University of New Hampshire, about an hour and a half north of here.
I’ve actually been in disability services for just over 10 years now. I was at a small, private college kind of similar to Regis for about three or four years. And since that time, I’ve been at the University of New Hampshire. And so I’ve seen a lot change as far as accessibility, as far as what’s available. A lot has really changed in the very recent past, in the past year or two.
So just some background data. There’s a significant number of people in the United States who have a disability. The last census was done in 2010, so a lot of this information is from that census. I’m going to focus more on students with disabilities and veterans. There are many more veterans taking advantage of the GI Bill in Massachusetts than there are in New Hampshire, so they’re seeing a bigger influx.
But I also look at, higher education is a pathway to a career, to a job, to a paycheck, ultimately. That’s why many of us go to college, because, at the end of the road, we have to work. Our parents tell us we have to go out. We have to work. We have to get a job. We have to get out of the house. We cannot stay with them forever.
And so just a few stats: as a whole, 13% of Americans fail to finish high school. 40% of people with disabilities fail to finish high school. A lot of the stuff that we’re going to talk about does not happen in the high school. So again, it’s not necessarily a lack of education; it’s a lack of being able to access the material.
So 15% of households in America earn less than $15,000. But 50% of American households headed by someone with a disability earn less than $15,000. And so the big thing is that people with disabilities are not employed.
And that is because of the lack of education. It’s not that they can’t handle the material; the material is not accessible to them. So there’s not equal access in that component. Actually, 66% of people with disabilities in America, ages 16 to 64, are unemployed. So I really look at it as: the more accessible we can make things, the more people with disabilities will be employed.
So just a few other stats. Of the 11% of post-secondary students reported as having a disability, the largest group is students with learning disabilities like dyslexia. 20% have attentional disorders. And 15% have psychological disorders.
There’s been a lot of talk, especially in the past couple of years, about veterans returning from war, coming back on the GI Bill, using that, and having access to the classroom. The number one disability is hearing loss. So that’s deafness, hard of hearing.
I always say this word incorrectly, but tinnitus is the most common form of hearing loss. Does anybody not know what tinnitus is? OK. It’s a constant ringing in the ears.
So imagine sitting in a classroom and having constant ringing in the ears, trying to focus through that. I actually know a colleague at another university who used captioning in a very, very intuitive way, as an accommodation. Because the ringing was so disruptive for this veteran, they actually had him sit in another, smaller room and had everything captioned, so that he was seeing things live as they streamed.
So about laws. Laws, everybody knows about the boring laws. But I’ll start with Section 504.
Section 504, of the laws that you see, it’s actually the only law that specifically talks to higher education. People are often very surprised about that. But if you happen to have nothing better to do on a Friday night, and you read over the laws, 504 is the only one that actually has anything to do with higher education specifically written into it.
So 504 is the first anti-discrimination law. And so a person with a disability can sue because of discrimination. The Office for Civil Rights oversees a lot of complaints. So if you’re a faculty member, or if you’re teaching a class and you don’t have material that’s accessible, a student can sue you or the university based on discrimination. And that actually falls under 504.
ADA came after that. And actually, right after that came 508. 508 really pertains to federal entities and programs that receive federal funding. Now of course, there’s conversation surrounding whether the university is a public place. Is the website a public place? That is probably true. And actually, there’s a bunch of litigation in court right now about that.
Everybody knows, probably, the Americans with Disabilities Act. It was amended most recently a couple of years ago. The ADA really supports 504. And that’s where accommodations came from. Accommodations did not necessarily come from 504. They actually came from the Americans with Disabilities Act.
Public colleges and universities fall under Title II. And privates fall under Title III. A lot of private colleges used to say, oh, we don’t have to do this because we’re not under ADA. It’s not true because, if somebody is actually needing accommodations, or if they’re going to sue, it’s going to be anti-discrimination. And that’s 504. So all universities and colleges are covered, especially if they have federal assistance.
Lots and lots of notes. So the most recent one, President Obama signed into law the CVAA. And all these laws don’t take effect the same day. Often, there are things that kind of roll. So things happen every six months. And that’s what’s kind of happening with the law that’s going on now.
It requires captioning for anything live, as well as any full-length produced video. It does not apply to YouTube or small things like that. And I think that’s unfortunate, because a lot of faculty that teach classes are using short YouTube videos and clips. And by law, those don’t have to be captioned. But if you can’t hear it, how do you know what’s going on? You’re left out in the classroom.
So again, phases are happening within this most current law. And I think 2014 is when the final deadline is. So if you don’t know anything about this and you’re going to need captioning, I would suggest you go and read about it, because this is the up-and-coming law that, I think, is going to be litigated and put into effect.
So benefits of captioning, I can go on and on about benefits of captioning. We were talking about it. And Nicole, I think, is going to go into more detail about it. Absolutely everything that’s on the slide are benefits of captioning.
We talk about ESL students, English as a Second Language. On the UNH campus, at least, there’s a huge push. We have a huge influx of students whose English is a second language.
Even though they pass that TOEFL and they get a good score, still, English is very difficult for them. And so the captioning has a huge benefit. They can see it. They can hear it. They can go back, and they can watch it again. Such a huge benefit on the campus.
Again, many veterans are coming back that are hard of hearing. People across the lifespan are living longer, taking classes. Driving here today, just on 95, I saw a sign that says, one in three babies born today will live to be 100. Like, wow. Wow!
There’s going to be a lot of hard of hearing going on. And they’re going to take classes much longer than you and I. So again, access to that and having everything captioned.
The aging population– whoops. Is that me? Large lecture halls, you may not have large lecture halls on your campus. We do. If you’re sitting way in the back of the classroom, even if somebody’s talking clearly, you may not be able to hear that, even with perfect hearing. So again, having captioning available, either while it’s happening, or when students can go back and watch it afterwards, is extremely beneficial.
It does increase learning. I was just talking briefly to Nicole before. And I can’t remember the name. There’s a very small college that actually did a quick little study.
They captioned their lectures for students with disabilities, and actually found that, for all their students, grades increased substantially. And the students in those classrooms who benefited from it, who didn’t have disabilities, were also asking other professors, why aren’t you using this in your classroom?
And I cannot remember the name of the college. The college has implemented it in almost all their classes right now. Again, they’re small. It’s easier to do. Not so easy to do at a large university. But they’ve shown a huge increase.
Grades obviously show learning. But they also were showing that the students were able to actually retain the information for much, much longer. So I want to keep that very brief about the law. And I’ll turn it over to Nicole.
NICOLE CROY: So I’ll give you just a quick background on me. I fall under the College of Professional Studies at Regis. We offer graduate and undergraduate programs. And within that, I work in the Department of Learning Design. And so we’re responsible for the instructional design and development of enterprise-level online and blended courses.
We have approximately 475 courses online. And of those courses, about 60% include video. So we have a substantial source of video.
Like Kathy touched on, accessibility policy and law, there’s a lot of gray areas. There’s a lot to it. It’s hard to interpret. It depends on the expert that you talk to.
So we kind of have taken the stance, whatever the policy says, whether we fall under Section 504, 508, whatever, we feel we have a duty to make all of our content accessible to all learners. So to fulfill that duty, about a year and a half ago, we created an accessibility policy at the college level, because we received some resistance at the university level. So we said, OK, at the college level, we’re going to create a policy to help us guide our course development and make sure that we’re fulfilling that obligation.
We based it on the WCAG 2.0. We used Web Accessibility in Mind, also known as WebAIM, as a resource. So if you’re looking for any resources to shape your own policy, I highly recommend that. We’re also a Quality Matters Program college, and so we use their rubric on accessibility in our policy to guide that as well. And I’ve brought a few copies of our policy, so if you want to grab one to see our rationale and what we did, you’re welcome to do that.
Before implementing that policy, we were providing just an external transcript with all of our videos. So the student would get the video. And then there was a link to a transcript that actually launched in Notepad. And they could go through and read the transcript.
As we were shaping our accessibility policy, we kept going back to– I believe it’s 508 that says, “an equivalent experience for all users.” We were saying, how is this model providing an equivalent experience for, say, a deaf student? They’re having to read the transcript over here. It’s not time synced. So it’s not really the same as watching the video and being able to hear it at the same time.
So because of that, we decided that we needed to closed caption all of our videos. We started out with using a proprietary video format that was able to provide the closed caption files. We purchased their streaming server.
And it worked. It provided closed captioning to students, but it relied on a plug-in. Because it was proprietary, the student was required to keep the latest version of the plug-in on their system.
And as we know, student environments now, there’s a multitude of them, depending on their operating system, the browser they’re using. It just quickly turned into a tech support nightmare, having to keep those latest– every call to the tech support desk was, I can’t see this video. Well, have you downloaded the latest update from the vendor?
So in addition to those tech support issues, the process of actually getting the closed caption video embedded into our online courses was rather cumbersome. The process started out with, we would have to convert the video to that proprietary video format. And then we would upload it to our server.
We were then using a vendor to provide the transcription and closed captioning files. So we would go out and upload the video file to their server. We would then make the request saying, please transcribe this video. About three to four days later, we would receive an email back saying, your transcription’s been done. The files are ready.
We would have to go out to the vendor website, log in again, download the closed caption files, upload those files to our server, and then write the embed code and put it into our courses for the closed captioned video. So you can see, numerous steps. And it was a very manual process.
We were running into issues with the tech staff responsible for doing this. They might get the email and say, oh, you know what, I’ll grab those files tomorrow. And then they’d forget. So we’d have videos out there that didn’t have closed captioning in them.
After having these tech support issues and the time that it was taking, there was some grumbling about, should we even be doing this? And about the same time, our CIO was searching for an across-university platform for video, a way for us to better store our video. And so we looked into and landed on using Kaltura, which is a video platform. And we quickly found out that 3Play Media integrates with Kaltura. So we were given a trial to use the integrated system, and quickly found out that it was the answer to our needs.
So now, this is what our captioning process looks like. We upload the video, no matter what the format, because Kaltura can take a multitude of video formats and ingest it. We upload it to Kaltura.
While we’re there, we add metadata. One of those metadata fields is simply a tag, “3Play.” And what that tag does is ping 3Play’s server to come out and grab that video file and start the transcription.
While that is happening, we grab the embed code of the captioned player, take that, paste that into our HTML for the course, and upload that to our LMS. Within eight hours, two days at the most, 3Play finishes the transcription and pushes the closed captioning back into the Kaltura system, and we have closed captioning.
So it’s really one step on our end. We do it all at once. And it’s pretty seamless. So as you can see, we’ve cut our process substantially.
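To make the tag convention concrete, here is a minimal Python sketch of the decision it drives. The entry fields, function names, and embed URL are all hypothetical; the real handoff happens inside the Kaltura and 3Play platforms, not in code you write yourself.

```python
# Sketch of the tag-triggered captioning workflow described above.
# All names here are illustrative -- not the actual Kaltura/3Play API.

def needs_captioning(entry):
    """Return True if an uploaded video entry is tagged for captioning."""
    tags = {t.strip().lower() for t in entry.get("tags", [])}
    return "3play" in tags and not entry.get("has_captions", False)

def embed_snippet(entry_id, player_id="captionedPlayer"):
    """Build the embed code pasted into the course HTML (illustrative only)."""
    return (f'<iframe src="https://example-video-platform/embed/'
            f'{player_id}/{entry_id}" allowfullscreen></iframe>')

entry = {"id": "vid_001", "tags": ["3Play", "MSOL"], "has_captions": False}
print(needs_captioning(entry))       # True -> captioning service picks it up
print(embed_snippet(entry["id"]))
```

The point of the design is that tagging and embedding both happen once, at upload time; the captions arrive later without any further manual step.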
So then, in addition to streamlining our process and reducing the tech support calls, we realized some secondary benefits. All of our videos are now mobile-ready. So whether a student’s accessing it on an iPhone, iPad, Android, or Windows device, it doesn’t matter. They are going to get that closed captioned video.
And then, in addition, we found that we’ve increased usability for all learners, not just those with disabilities, through the interactive transcript, which I’m going to demo here, maybe. Did it go? So this is a video from our Master’s of Science in Organizational Leadership. Is it going to play for us?
The instructor’s explaining how to– I don’t know that we’re going to get sound– but she’s explaining how to brainstorm using a tree diagram. So it’s a very linear concept of step one, do this, step two, do this. With the interactive transcript, the student’s able to go– and say they missed step two. They can find in the verbiage here, in the transcript– I don’t think I can get that over.
Anyway, they can click on one of the words here and be taken right back to that point in the video. In addition, they’re able to download and print the transcripts. So we’ve heard use cases of students that they print out the transcript and highlight it and use it as a study guide for the assessment later. So it’s really been a feature that students love, not just deaf students.
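The click-to-seek behavior just described can be sketched in a few lines of Python: each transcript cue carries a start time, and a clicked word maps back to a timestamp the player can seek to. The cue data below is invented for illustration.

```python
# Sketch of the interactive transcript: clicking a word seeks the video
# to the cue that contains it. Cue times and text are made up.

def seek_time_for_word(cues, word):
    """Return the start time (seconds) of the first cue containing `word`."""
    word = word.lower()
    for start, text in cues:
        if word in text.lower().split():
            return start
    return None

cues = [
    (0.0,  "First, write your central idea at the root of the tree."),
    (12.5, "Second, branch out with one node per supporting idea."),
    (27.0, "Third, review the diagram and prune weak branches."),
]

print(seek_time_for_word(cues, "branch"))   # 12.5 -> player seeks here
```

A real player would wire this lookup to a click handler and set the video’s playback position to the returned time.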
So we feel like it’s made the process very cost-efficient. Not only have we decreased and streamlined our process for actually implementing this, we’re fulfilling our duty that we set out to make all of our content accessible to all learners. And we’ve enhanced our videos in the long run. So we’re very pleased with the process. Now I’ll turn it over to Brett.
CJ JOHNSON: Oh, PowerPoint crashed. Awesome. OK. [INAUDIBLE]. That’s the fastest it’s ever recovered in the history of PowerPoint.
NICOLE CROY: You’re going to love that I used animation.
CJ JOHNSON: We’ll just get a quick flashback review. It’s all set.
NICOLE CROY: Right, in case you missed anything.
CJ JOHNSON: All right. Ready to go?
BRETT PACI: Yes, sir.
CJ JOHNSON: Good.
BRETT PACI: OK. So again, I’m Brett Paci. I’m the video publication manager for MIT OpenCourseWare. In case you don’t know what OpenCourseWare is, it’s an initiative that started a little over 10 years ago to publish virtually all of MIT’s course content online, for free, openly, and encouraging people to also share these materials. So we basically put up all the course materials that we can for anyone to use. And that’s an important part of this, as you’ll see throughout the presentation.
So as the video publication manager, I’m responsible for arranging capture of the courses, arranging lecture capture, meeting with faculty, then getting the videos edited and published on the site. And we publish our videos to a variety of distribution platforms like YouTube, iTunes U, Internet Archive, and others.
And our current process was, once we published the entire course with all the other lecture notes and assignments (sometimes we get homework that the actual students taking the class did, and we get their permission to post those as well), after we do that and publish the videos, then we begin the process of having them transcribed and subtitled. So we’ve talked a lot, I think, already about workflow, cost, and everything. But obviously, we definitely feel it’s worth doing this for many reasons.
So how do we have our videos transcribed and subtitled? Obviously, we upload them to 3Play, which can be done from YouTube or from any other link. They do the process of having them transcribed.
Now in the past, we’ve always had students review the subtitles after we have 3Play do them. 3Play gets them very accurate. We thought that it was very important to get them to 100% accuracy, but we also realized that’s probably not possible, ever, no matter how many people and no matter how many students you have review the files.
But because of the accuracy that 3Play is able to achieve, we’re moving away from that to a process where we just have the captions published right away, without having anybody review them. And we find that they’re really up to the quality where we’re able to do that comfortably. And that’s going to allow us to actually have the subtitles and transcripts ready as soon as the video publishes.
The nice thing, I think, about this panel is that, obviously, we all care about making videos accessible. None of us here are doing this because we’re forced to by law, because we’re currently not. Now maybe someday we will be, but that’s never been what it’s about for us. We do it just because we want to help the users.
Again, we publish our materials to be freely and openly used and shared by anyone in the world. So obviously, we want to make them as usable as possible. And as it was mentioned, of course, it’s good for the hearing impaired. But also, a big benefit we’re finding is with the non-native English speakers.
To put you a little bit in their shoes– some of you might be in those shoes right now– imagine you’re attempting to learn some difficult subject matter, nuclear physics, for example, at MIT. And it’s hard enough, right? But then you kind of have this second mountain to climb, which is the language itself. So you’re learning difficult subject matter. Then you’re doing it in a language that’s not your mother tongue. And that makes things even more difficult.
So we find that providing transcripts and subtitles helps mitigate that second mountain, that second layer of difficulty, to help put people on a level playing field. And by doing this, you could essentially help people who don’t speak English as their first language bypass that second leg altogether, potentially, if these resources are translated. And so we found that we’ve had many instances– we encourage translation into other languages. We don’t do it ourselves, but we have translation partners that have translated our courses into Chinese and many other languages. And providing the transcript and the subtitles makes that a lot easier.
And recently, we just had a request, for example, to translate one of our courses into Haitian Creole. And so we were able to give them the SRT files that were generated by 3Play to help facilitate that. And of course, even if it’s not translated, as has been discussed, people can slow down. People can read the transcript or look at the subtitles and learn more at their own pace, even if they’re not quite hearing everything or understanding everything as well the first time.
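To see why handing over SRT files speeds translation up, here is a simplified sketch: the timing lines pass through untouched, and only the text is rewritten. Real SRT can carry multi-line cues and formatting that this toy parser ignores, and the sample content is invented.

```python
# Round-trip a (simplified) SRT file: keep the cue numbers and timings,
# rewrite only the text. This is why translators work faster from SRT.

def parse_srt(srt):
    """Parse simplified SRT text into (index, timing, text) triples."""
    cues = []
    for block in srt.strip().split("\n\n"):
        lines = block.split("\n")
        cues.append((int(lines[0]), lines[1], "\n".join(lines[2:])))
    return cues

def translate_srt(srt, translate):
    """Re-emit the SRT with each cue's text run through `translate`."""
    out = []
    for idx, timing, text in parse_srt(srt):
        out.append(f"{idx}\n{timing}\n{translate(text)}")
    return "\n\n".join(out) + "\n"

sample = """1
00:00:01,000 --> 00:00:04,000
Welcome to the course.

2
00:00:04,500 --> 00:00:08,000
Today we cover nuclear physics."""

# A stand-in "translator" -- a real workflow hands the cues to human
# translation partners, as described above.
print(translate_srt(sample, str.upper))
```

Because the timings survive unchanged, the translated file drops straight back into the same video as a new subtitle track.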
Somewhat like what Nicole just mentioned, we also worked with 3Play’s video plug-in. We worked on an experiment– I’ll say at this point, because it’s not yet implemented on our site, but our hope is that it will be– that, again, included the interactive transcript, which, of course, is the blue highlighted text that follows along with the speaker. But something else we added to it, as well, aside from search, this particular video, we added Search this Course.
Sometimes you might come across something, or maybe you come across something in a later lecture, and you really can’t remember what that term is all about. And so you search the rest of the course. And that brings up– I don’t know if you can see in this image, but when you find the word you’re looking for, there are little markers placed throughout the timeline, or the scrubber of the video, so you can just click right to that word.
And when you do the search under Search Course, and let’s say you click on lecture five and you go to that lecture, it will give you those same markers. So it won’t just take you to lecture five, it will take you to lecture five and show you where that keyword or phrase appears in that entire video. So again, that just makes everything a lot more useful for our users.
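The marker behavior can be sketched the same way: search a lecture’s timed transcript for the term, and convert every hit’s start time into a position along the scrubber. The lecture data and duration below are invented for illustration.

```python
# Sketch of "Search this Course" markers: each transcript hit becomes a
# marker on the video scrubber, placed as a fraction of total duration.

def marker_positions(cues, duration, term):
    """Return scrubber positions (0..1) for cues whose text contains term."""
    term = term.lower()
    return [start / duration for start, text in cues if term in text.lower()]

lecture5 = [
    (30.0,  "Let's define the decay constant."),
    (300.0, "The decay constant appears again in the half-life formula."),
    (870.0, "Finally, we measure the half-life experimentally."),
]

print(marker_positions(lecture5, duration=900.0, term="decay constant"))
# markers land 1/30 and 1/3 of the way along the scrubber
```

Running the same search across every lecture in a course is just a loop over per-lecture cue lists, which is how a hit in lecture five can jump you to the exact moments the term appears.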
And when we did that, we asked people to– if you can see the red box here– to take a survey and let us know what they thought about it. We asked people to “describe the impact of the interactive video features on your viewing experience.” And 100% of the responders felt that it either enhanced or significantly enhanced.
And nobody selected the other two options. I don’t even remember what they were. But there was a consensus that it definitely enhanced the experience and made it easier to learn from those materials.
I have a couple of quotes that I can share from that. For one– again, this part here, and I think Nicole just kind of demoed this for us– but the ability to select a word within the transcript and start the video from that point. You probably feel like I do when you watch a video on YouTube, or anywhere, really. It’s a pretty opaque experience. You can’t really see what’s in the video, aside from what’s playing at that very moment. But this is a way of kind of making it more transparent, by being able to search throughout the transcript for that video and just click to wherever it is that you want to go.
And this is interesting for us, and for a lot of the open ed movement and MOOCs as well. OpenCourseWare is not a MOOC, but we are closely connected with edX and what’s going on there. And we often hear that you can’t replicate the classroom experience online, which I agree with. But we try to look at it as, what can we do that can’t be replicated in the classroom?
So some of the things that have been mentioned today, these are things that we can do to slow down the learning experience for people to learn at their own pace and use this kind of interactivity. And that’s something that you really can’t replicate in the classroom. You can’t stop the professor and have them go back. Well, you could try.
Personally, I really like this next one, because of my own background. This is one of the responses. “Because I’m Italian, so it’s easier for me to understand.”
And I wish I had been able to talk to this person and ask them exactly what they meant. But one of the other quotes, I think, explained it well. This person said, “Since I’m not an English–” well again, it’s grammatically a little off– “it helps me understand the class better. It also helps me not to lose focus during the video.”
I don’t know if you’ve ever tried to learn another language. You probably find, in your native language, you can kind of tune out. You can get distracted. You can daydream.
And you can kind of check back in at will. And your brain kind of catches you up on everything that you just tuned out for. But doing that in another language that’s not your own language is a lot more difficult.
So aside from helping you not lose focus, you can kind of go back, as has been mentioned, and see whatever it was you missed. Or if you didn’t quite grasp something, you can very easily get back to it. So again, our audience being the entire world, we hope, again, we try to share these materials with everyone. And we want to make it as easy as possible for them to actually learn from it, and not just present it to them.
So again, there’s not really a whole lot in it for us. There’s nothing in it for us monetarily. But we get hundreds and thousands of thank you emails, thank you posts on our Facebook page. And as you can see from the quotes that I already showed, people definitely appreciate it. And it’s enhancing their learning experience.
But other than that, it also makes your site and your materials more discoverable, searchable, and visible. Any time you have a transcript or an SRT file uploaded to YouTube, it makes that entire video searchable. There are things that you can do on YouTube, but they change all the time. So I can’t really demo it for you, because YouTube’s changed the way it works. And we wish they would stop doing that. But again, it does make your video more searchable in YouTube.
And our hope is that we can create an expectation of this. So we’ve seen this interactive transcript being implemented in various places. EdX, for example, uses the interactive transcript.
And we’re hoping to create an expectation that this is how materials will be presented going forward. And it will be easier for people to convince administrators or whoever they have to convince that it’s worth it to go ahead and do this. And that’s all I have. I’ll give it back to CJ.
CJ JOHNSON: All right. Well, thanks, everyone. That was really great.
And also I should mention that Brett is a master artist. He illustrated his own slides. So those awesome drawings were– he puts a lot of effort into his slides. At this time, I’d like to open it up for questions, if anyone would like to ask about any piece of accessibility. Yes?
AUDIENCE: The transcription services that you guys are using, are they using technology to accomplish the transcription? Or are they using humans to do the transcription?
CJ JOHNSON: All right. First of all, I have to say I am going to repeat each question, so that we have it on record for the camera, not just because I like hearing my own voice. The question is, for the transcription services that you all are using, how much of it is technology, versus how much of it is human?
NICOLE CROY: Well, we’re using 3Play Media. And so I think CJ could probably address that.
CJ JOHNSON: Yeah. From the 3Play perspective, I can say that, in actual percentage terms, we’re a little over 60% automated. And the rest of it is human correction. So by the end of the process, we have 99.6% accuracy. And that’s basically using state-of-the-art speech recognition and machine-learning technology, along with a very efficient human cleanup process.
But you can use vendors across the range there. Some are purely automated. Some are purely human.
AUDIENCE: What’s the price [INAUDIBLE]?
CJ JOHNSON: It kind of depends a little bit on volume and the type of content you have. We’re actually in aisle 500 of the exhibit hall, if you’d like to discuss in more detail. The base price is $2.50 per minute. And it’s prorated to the second. So it’s however much content you have, based on that price point.
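For a rough sense of what that rate means in practice, here is a quick back-of-the-envelope sketch: the quoted $2.50 per minute, prorated to the second, with no volume discount modeled.

```python
# Prorated captioning cost at the quoted base rate of $2.50/minute.

def captioning_cost(seconds, rate_per_minute=2.50):
    """Cost in dollars for a video of the given length, prorated per second."""
    return round(seconds / 60 * rate_per_minute, 2)

print(captioning_cost(45 * 60))       # a 45-minute lecture -> 112.5
print(captioning_cost(12 * 60 + 30))  # a 12:30 clip -> 31.25
```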
AUDIENCE: Nicole, from an institution approach, when your instructors are uploading those videos, does each department have their own line item for captioning? Or is that across the board institutionally that you have a capital budget for that?
NICOLE CROY: It comes out of my department’s budget, the Department of Learning Design. So we’re responsible. We have an enterprise model for our courses. So our facilitators aren’t necessarily uploading their own video.
Like the video that we uploaded is used across the facilitators. Maybe three or four sections will be using that same video. So the cost of that captioning comes from our budget.
KATHERINE BERGER: And I can tell you about UNH, because we use Kaltura integrated with 3Play. A couple of years ago, when we wanted to start in this venture, the IT department, which is significantly large, actually funded most of it. And so now what they’re doing is trying to branch out so that each college will be paying for what is theirs.
AUDIENCE: This question is for all of you. I know we’ve discussed videos and captioning. But what do you guys do to accommodate students who have screen readers?
CJ JOHNSON: I’ll repeat that question. So the question was about accommodations for screen readers for students.
KATHERINE BERGER: Do you want me to go ahead and take that? So that’s one of the things that, whenever IT is out there looking at something, my little voice in the back of their heads is always, make sure it’s accessible with a screen reader.
Does anybody not know what a screen reader is? I don’t want to assume. Everybody knows what a screen reader is? Yes?
And so I ask them to make sure they ask the vendor that question. Can it be used with Window-Eyes? Can it be used with JAWS? And currently, at UNH, we don’t have a problem with that.
NICOLE CROY: And we develop for screen readers, so including alt tags on all images and that type of approach.
CJ JOHNSON: Any other questions? Yes?
AUDIENCE: Have any of you taken the transcriptions one step farther and done any foreign language translations?
CJ JOHNSON: The question was about doing translations of the source transcriptions.
BRETT PACI: Yeah. I know, for us, we don’t actually do the translations ourselves, but we have translation partners. So whenever possible, when we have the courses transcribed, the video lectures transcribed, we give those to whoever will be doing the translation. It makes it a lot quicker. They can do it without it, but obviously, it’s a lot more effort. And it takes a lot longer.
CJ JOHNSON: Any other questions? Question? You got one? Yep?
AUDIENCE: Do you guys ever utilize videos that aren’t solely owned by you? So YouTube content that might not necessarily have a transcript, how do you go about [INAUDIBLE] for that? Do you need permissions from the owner of the video?
CJ JOHNSON: So the question is about using content that isn’t necessarily owned by you as the publisher. If you found, for example, a video on YouTube that doesn’t belong to you, but you want to present it and have that accessible for your students.
NICOLE CROY: That’s actually been a hot debate within our college lately because, with 3Play Media, we do have the ability to get a transcript from an external source. We can just take that link and plug that link in. And 3Play is able to provide a transcript for us.
We’ve consulted a few people. And our copyright expert on campus said that we should at least put it out there and ask is it OK for us to provide the transcript. But we’ve taken the stance that, because we want to make all of our content accessible, we are providing the transcript for those external sources.
But the standard practice is to, at least first, contact the content owner and see if they’ll just say– I mean, I’d say 90% of the time, we get the answer of, oh, sure. Great. Feel free.
AUDIENCE: I just want to ask, how does budget enter into the equation when you decide which videos to caption?
CJ JOHNSON: So the question is about how budget enters into the equation when deciding which videos to caption.
KATHERINE BERGER: Do you want to go first?
BRETT PACI: Yeah. So our goal is to caption all of the video that we produce, especially going forward. My job position didn’t exist two years ago. So there were a lot of videos that were published– we even published them before 3Play was a thing. So we have a lot of backlog of video that hasn’t been captioned. So we’ve been trying to chip away at those, submitting them to 3Play for captions.
As far as budget going forward, we only capture five or six courses of full video lecture per semester, so 10 or 12 courses a year. And it falls within our budget to go ahead and do that. So for that, we don’t really have to think about it.
Going back for older courses, sometimes we look at if the audio quality wasn’t great when it was initially recorded, it’s very helpful to have the transcript as well, even though it still may be somewhat incomplete. But it’s a lot better than not having it. Otherwise, we look at the popularity of a course.
Sometimes we just go by how much feedback we get. The users let us know if there’s a course that they especially would like to see transcribed from the past that hasn’t been. And we try to bump those up to the front.
KATHERINE BERGER: I was just going to say, from where I stand as a disability services provider, if there’s a student in the classroom that has a need, if it’s a needed accommodation, then it has to be done. It does lie in the college. It does not come out of my departmental budget. It comes out of the college’s budget.
So it’s very much like back in the day, when videos weren’t captioned– though some still aren’t. VHSes weren’t captioned. And when professors were teaching, we would say, you have to have it captioned. It’s not my department’s responsibility.
If you own it, it is your responsibility to caption it. If your department owns it, it is your department’s responsibility to caption it. And so, automatically, if it’s a student with a disability, that course gets done. There’s not a question.
But I do know, because UNH is moving toward more and more online courses and they’re beefing up their online offerings, they are also trying to chip away at the backlog a little bit at a time. And the benefit of that is, let’s be honest, there are some courses that are taught the same way just about every time. There’s not a lot of variability from one offering to the next. And so the great benefit is those courses are used over and over and over again. And that’s where it becomes a little more cost-effective to the university.
NICOLE CROY: I would say, at Regis, we’ve taken the stance that we’re going to be proactive, as opposed to reactive. Rather than what Kathy just explained, where you get notification that a student with a disability is taking an online course and you have to scramble to get it to where it meets accessibility, we’ve just said, all of our courses are going to be accessible. So the cost of providing the transcription and the closed captioning is just built into the cost of each course. We have a set amount, this is what we can spend to develop this course, and that’s just part of it.
CJ JOHNSON: Anybody else? Last chance. All right. Well, we have about 10 minutes left, actually. I’ll be milling about a little bit.
And if people want to discuss a little more offline here, I think I’m happy to. And you guys will be here for a little bit. We can maybe talk to you here in the room, until the time’s up. Does that work for everyone?
All right. Well, thanks for everything. Thanks to our awesome panelists for sharing their experiences.