
Advanced Workflows for Closed Captioning [Transcript]

LILY BOND: Welcome, everyone, and thank you for joining this webinar entitled “Advanced Workflows for Closed Captioning.” I’m Lily Bond from 3Play Media, and I’m joined today by Kara Zirkle, who is the IT accessibility coordinator at George Mason University. We have about 45 minutes for this presentation, followed by 15 minutes for Q&A. And then before we get into the presentation, Kara and I just wanted to get a better idea of the audience. So we have a quick poll question for you.

The poll should read, how are you currently captioning your content? And you can select we’re not captioning, we do everything in-house, we use a hybrid of in-house and outsourcing to vendors, we send everything to one vendor, or we use multiple vendors. So I’m going to give everyone just a couple of seconds to answer that, and we will see what the reaction looks like.

So as you can see, really interesting. Kind of across the board. But it looks like there’s definitely some in-house, some vendors, some people who are not captioning yet. So we will be pretty broad in our presentation and make sure we cover all of those different options.

So just to go through an agenda quickly, I am going to go through the laws and lawsuits that impact closed captioning requirements, and then we’re going to break it up by different types of workflow. So I’m going to go through what a workflow using integrations looks like, and then Kara’s going to go through George Mason University’s workflow, their timeline for captioning, and economic analysis. And then I’m going to look at some DIY captioning workflows and how to use an API to automate your process. And then, of course, we’ll have time for Q&A at the end.

So to get everyone on the same page about the accessibility laws and requirements for captioning, the first major accessibility law in the United States was the Rehabilitation Act of 1973, which has two sections that impact closed captioning. So Section 504 is a very broad anti-discrimination law that requires equal access for individuals with disabilities, and it’s very similar to the Civil Rights Act. And then Section 508 was introduced later in 1998 to require that federal communications and information technology be accessible. And so closed captioning requirements are written directly into Section 508 and are often applied to Section 504.

And Section 504 applies to both federally and federally-funded programming, whereas 508 just applies to federal programming. However, many states reference 508 requirements, and if you receive any funding through the Assistive Technology Act, you are also required to comply with Section 508. So these laws do extend pretty broadly across organizations in higher education.

The next major accessibility law in the US was the Americans with Disabilities Act of 1990. It had five sections, and Titles II and III impact closed captioning requirements. So Title II impacts public entities, and Title III impacts commercial entities, and specifically calls out places of public accommodation. So the big question with Title III of the ADA is what constitutes a place of public accommodation.

The ADA was written in 1990 before the internet was as prolific as it is today, and so it wasn’t really written into the law specifically. However, the ADA has been tested against online businesses in the last 10 years or so, and I’m going to go through a couple of those major lawsuits in a minute.

And then finally, the 21st Century Communications and Video Accessibility Act of 2010 covers online video content that previously aired on television. So this is definitely a niche that applies more to the broadcast industry, but the law there is pretty strict, and it now requires that any video clips taken from full-length television programming and posted online be captioned.

So to go through a couple of the lawsuits that call out the Americans with Disabilities Act. Netflix was sued by the National Association of the Deaf in 2012 for failing to provide closed captions for most of its Watch Instantly movies and television shows that were streamed online. So this was really the first time that Title III of the ADA, a place of public accommodation, was applied to an internet-only business. As I mentioned, it was written into the law to apply to physical structures like wheelchair ramps, so testing it against online businesses was new in this case.

And it’s a really landmark lawsuit. Netflix argued that they do not qualify as a place of public accommodation, but the NAD argued that the ADA was meant to grow to expand accommodations as the world changed. And the court ended up ruling in favor of the National Association of the Deaf, saying that the legislative history of the ADA makes clear that Congress intended the ADA to adapt to changes in technology, and excluding businesses that sell services through the internet from the ADA would run afoul of the purposes of the ADA.

So that’s a pretty strong declaration in favor of online businesses being included in the ADA. Netflix ended up settling and agreed to caption 100% of its streaming content, and the case set a really profound precedent for any company streaming video content across industries, including entertainment, education, health care, and corporate training content.

So a more recent lawsuit was the National Association of the Deaf versus Harvard and MIT. They were sued in February of 2015 for providing inaccessible video content that was either not captioned or was inaccurately captioned. So this is the first time outside of the entertainment industry that the accuracy of the captioning was considered in the legal ramifications. They were using crowdsourced or YouTube automatic captions on a lot of their videos, and the court decided that that is not OK. And so the argument here is that educational online videos are a public accommodation, regardless of whether or not the ADA originally applied to the internet.

And in June of last year, the Department of Justice submitted a statement of interest supporting the plaintiffs’ position that Harvard and MIT’s free online courses and lectures discriminate against the deaf and hard of hearing. And they said that the United States respectfully submits the statement of interest to correct Harvard’s misapplication of the primary jurisdiction doctrine and its misunderstanding of the ADA and Section 504.

The final argument for this case was held in September. We are still waiting on a decision, but in February, the judge denied Harvard and MIT’s motion to dismiss the lawsuit. So that indicates that the case will move forward, and hopefully, we will have a decision soon. And regardless, the outcome of this case will have huge implications for higher education.

So now that we have gone through some of the laws, we’re going to jump into the workflows that you can use to implement captioning. And to get a better idea of how you are already prioritizing captioning, we have another poll for you. How do you prioritize content for captioning? You can select we caption all of our content, we caption by request only, we caption as much as we can afford, we caption our most popular content, or we don’t caption anything. I’ll give you a few minutes to answer that, and we’ll see what the audience looks like.

So again, pretty across the board. It definitely looks like most people are captioning by request only, so that’s kind of a reactive approach to captioning. Some people are captioning all or as much as they can afford, and a good number of you aren’t captioning yet. So that’s a great basis for us to use moving forward.

And I’m going to go right into our first workflow option, which is using integrations to implement closed captioning. So what is an integration, and how does it work? An integration allows you to link your video platform or player with your captioning vendor to automate the process of captioning. And it basically makes captioning extremely simple for you. What it does is you first link your accounts. Usually, that requires an API key of some sort that you get from your captioning vendor or from your video platform. And that’s a one-time linking that you will never have to do again.

And from there, you can actually tag your videos for captioning directly within your video platform or lecture capture system. And those videos will be sent directly to your vendor for captioning. They’ll process the files, and when the captions are complete, they’ll post them directly back to the videos, so you never have to worry about uploading or downloading caption formats or how to associate those with your video files. And so it makes it a very simple process on the users’ end.

It’s a little bit more complicated than that. But the main point is that you really only have to worry about that first step. Link your accounts, and upload your video. Then tag it. And then all of this other stuff happens in the background. There is usually an in-depth captioning process, where the vendor will make sure that you receive an accurate file. There is sometimes an editing platform that you can make changes on, but the main point is that it will link those captions to your video file, and any changes that you make will be able to propagate directly to that video.

So now that I’ve gone through integrations, Kara’s going to go through George Mason University’s process, since they do use integrations as one of their captioning implementations.

KARA ZIRKLE: Thank you, Lily. First and foremost, I want to applaud that 20% who caption all of their stuff, and I would love to chat with you sometime after the fact, because we do by request, as well as everything that we can herd into our office through word of mouth or anything else. So we try to capture as much as we can, but we don’t caption everything. So I would love to chat with those folks.

So with that said, moving along, so timelines, workflows, and costs. If we go to the next slide, it is just a small idea to show you everything that we’ve worked towards from the very beginning in 2009 when we started to say, OK, captioning– it is a larger issue. What do we do? How do we get started? We had purchased an application. It’s a server-based software that has a voice recognition to it called DocSoft. And we thought, OK, fantastic. We have this device. Now let’s tell all the faculty and staff how to use it.

No one used it. So everyone started to say, well, you have the technology. You do it for us. That’s commonly what we hear from faculty, because they don’t have time to do any additional work. And so we’re like, OK, what do we need to do to look at this aspect? We had actually submitted a proposal to our provost and president’s group to basically say, if you can provide us a small amount of money– and I say small, but it’s a large amount depending upon what your budget may be– but we were looking at a captioning proposal. And it would be a DIY model.

So that was approved. We hired students. We purchased technology. We did a lot of training, and we launched a pilot. And we had a lot of lessons learned, and as you know, lessons learned don’t always come up with all of the best things in the world. So we did get some backlash on some stuff that I’ll show here in a little bit. But with all of this said and done, it allowed us to hire a full-time accessible media coordinator, who helped with our process within our lessons learned. And we then slowly started to move into a hybrid process. So we did some DIY stuff, still in-house. We started to then say anything under a 15-minute limit we could do in-house, but really, it was becoming more cost-effective to start sending things out.

From there, we actually issued an RFP, and the university implemented Kaltura, which is a video management system. That greatly helped us in regard to captioning, but it was also a larger need for the university as a whole. So we didn’t actually have a hand in Kaltura. The timing was just perfect for everything. So with that request for proposal, it allowed us to then bargain with the vendors to get the best prices possible for our captioning.

We now go out for an invitation for bid every year, and today, we have multiple contracts with vendors, and we also look at things such as turnaround time and cost versus subject matter expertise and topics. We look at the types of applications and APIs that they may have. So if one vendor can give you a cheaper rate per minute for a 24-hour turnaround, but another vendor can give you a cheaper rate for a seven- to 14-day turnaround, both of those are worth looking at, because if you get your request in early enough, you can still save more money, because you’re working with different vendors for different things.

So that’s kind of where we’re at today. We don’t do anything in-house, unless it’s for the simple fact of we’re looking for captions that might already be out there on the web or anything else from there. So looking at the next slide, to give you an idea, because we presented on this model for quite some time, but we always have something that we’re learning from.

So when we first started our process way back when, things slowly kind of changed a little bit through spring 2014. And in our top, you can see a very convoluted model to where we tried to have a very simple process, but at the same time, it depended on if we did it in-house, if we used DocSoft, if we used YouTube, and then also– oh, by the way, if we used a third party, and it kind of came out through Kaltura or anything else.

Now we’ve really simplified that method to where it goes more of a straight line, and our accessible media coordinator preps the file and either sends it out for the third party use or uploads it to the YouTube or Kaltura, and then sends the link or pushes the video out from there. So it’s a much more streamlined process. The accessible media coordinator, Courtney Shewak– if you have any direct questions, she’s really the best person to talk to, and she lives and breathes this every day.

So if you go to the next slide, please, we actually try to do a cost comparison. So you can see– and this is where I say lessons learned– we don’t always come out on top. Our very first year, we actually lost money based on what we thought would be a good process with students in-house doing a lot of the work. And as we all know, students can be hit or miss. They have a very high turnover rate because of graduation and other things, but we also had different learning curves, to where we had one student who was a powerhouse. He double- and triple-timed the other students. Sometimes it’s a Mac versus a PC and the different technologies that are available, so we had to learn all of these things as a growing process.

So we started out with $2.94 a minute for our FY2012. We are now up to our FY16, and we are now completely down to $1.39. So you can see across the board our savings. It’s starting to go down some, but that’s not surprising, just for the simple fact of you can see that our total jobs are starting to go a little bit lower from last year, but our total in minutes are still starting to stay about the same. And this actually isn’t the most updated number that we’ve had. The last time I did this is when Lily and I were at CSUN, so we probably have a good 30 to 50, give or take, that we can probably add to this, if not more.

So if we go to the next slide, what do the overall numbers show? What we try to do is show the completed media requests. We have a website with a very streamlined process to where, regardless of whether you’re faculty or staff, an instructional designer, or an administrative office with a promotional video you want to put on your website, we have a one-stop shop. You can go to our website, and under About Us, request services, and we have an online form that people can submit. It asks you for all the information that we need. And by implementing that– that was between 2014 and 2015– you can also see a little bit more of a rise in numbers, because it helped make everything one simple area.

You go to the next slide. Another thing that has also helped us with our numbers is that our overall university has had a web audit. This has been about a year or two in the making now, but we actually started to review Priority 1 and 2 websites for accessibility.

Now, you might ask me, why am I talking about websites when this is a captioning presentation? But what the website overhaul has actually done is allow us to use their marketing to go out to the different academic units and administrative offices and talk to them about videos on their websites. We were really looking mostly at the Distance Education videos and things like that, and by using this website overhaul, we were able to get out to a lot of different schools and talk to them about the accessibility of the videos that were going to be on their websites.

So if you go to the next slide, please. A breakdown of what this shows is, in part one, we have our compliance for Distance Education versus our compliance for our website. So you can see over the couple of years, our simple 17 and 12 jumped to 40 and then 62 requests as of FY16. So you can see a larger jump in requests within web compliance because of where we’re actually at within our overhaul. We’re still very early in the redesign of our websites and our new content management system, so we can see that this is going to be a growing number, as well as the compliance for DE accommodations and so on and so forth. And it’ll help give us an idea of where that compliance breakdown versus that accommodation is going to fall into play.

So if we go to the next slide, please. So part two– we’re actually trying to do another breakdown of the different schools and colleges. We have learned that there’s no such thing as too much data, so the more we can break down, the more information it’s going to give us, whether that is going to be for our costing, our captioning, our marketing, whatever it might be. The more information, the better. So always look at that. If you’re starting this captioning process or if you’re already knee-deep into it, start thinking about how you can start saving numbers and grabbing data, because it will always help you in the long run.

But with that said, the video breakdown for the academic units– so we’re looking here at a couple different schools. One is engineering, and another is CHSS. It’s our social sciences. So these are two of our larger units that are actually starting to use our captioning services.

So if we go to the next slide, please. This is why. So the top three units, schools, or colleges are our Volgenau School of Engineering and College of Science. These two are fully online courses, so that’s why we’ve had such a jump within those, because they are our distance education. Now, our School of Public Policy– their jump is due to the web overhaul. They had a spike for the simple fact that they had a lot of websites with videos, and they were in the early portion of the website overhaul itself. So that’s caused a jump within that aspect. So reasons for request were up within distance education courses. We are slightly down for face-to-face courses, but that could also be because some of the students who are deaf and needed accommodations have graduated. So that’s always going to be a fluctuation.

We have our web compliance, which is up 4.2%, which is part of that web overhaul that I was talking about. And then the disability accommodation themselves is also down 4.6%. So part of that is some of that face-to-face portion as well.

So if we go to the next slide, please. So one of the other areas that we’re starting to look at is libraries. How is it affecting our captioning? And we ask that for the simple fact of the library purchases media, and they purchase databases. And now a lot of these databases are becoming video databases, so have you worked with your library unit to have something in regard to the Copyright Office or something in regard to your purchasing within the library? We actually worked with our library, and they were able to provide an accessibility coordinator/instructional designer position, and that individual is a liaison between our office and library staff.

So now we’re able to work directly with the library, and the library is a large animal all in and of itself. So look to see where their procurement is. Look to see if you have any information within that procurement aspect to ensure that when they purchase databases, and it’s a video database, is there anything in there for that database to actually have to provide captioning, or is it going to fall back on the university? If it falls back on the university, you might be putting out double money– one for the captioning and one for the library database. So these are just some things to think about and look to see in regard to your library resources.

So if you go to the next slide, please. The breakdown of what that shows is, out of requests, we have Kaltura, our video management system. So basically, it’s George Mason’s own YouTube. That’s really what it is, and it’s just our brand. But you can see that the numbers within Kaltura are certainly rising versus YouTube– these are the numbers from 2012 through now. So YouTube has still been higher, because Kaltura has only been implemented within the last couple of years, but it’s quickly catching up to YouTube.

We also have email requests. Those are usually going to be the one-time requests of, who are you? I heard you do free captioning. Can we get this done? Something along those lines. But we have seen 26 library databases, so it’s a very small number, minute compared to everything else, but that’s 26 databases where, if we could work captioning language into the procurement contract with a vendor– if they could provide captioning within 24 to 48 hours’ notice of needing it– then perhaps that captioning cost could fall to the database vendor, versus the 26 videos that we’ve had to pay to caption on top of the library also paying for the database. So this is why we’re doing the breakdown in regard to media file delivery.

So if you go to the next slide, please. So our next steps– our overall access workflow. There are never too many stakeholders that you can work with and reach out to. Our main ones are distance education, our online office, and our library. Also, again, tracking media, seeing where they’re at. Do you have a TV station for the university? We have a GMU TV here, so we try to work with them in regard to some of the new videos that they are recording and developing, versus sometimes the Distance Education groups. We’ve worked with the instructional designers as stakeholders to ensure that the instructional designers let all of the faculty know we provide free captioning.

Anything really is on the table when it comes to marketing and improving– there’s never too much marketing. We do some targeted marketing based on our semiannual mailings. We mail out a couple of times a year just to let people know, just a reminder, in case you’re new or in case you forgot, this is what we provide for free. We also do monthly training for our faculty and staff. This includes library accessibility, document accessibility, video, web, whatever it might be, but even though a person is there for one specific training, we always combine everything into it a little bit, because it really is unwebbing the web of accessibility and everything that falls into it.

Otherwise, we also do DE course reviews. That’s part of working specifically within our Distance Education. So for any new course that has been proposed and accepted into the Distance Education program, we do a review of that course. And then we work with the faculty member to ensure the videos are going to be captioned and the video players are accessible. And we also look at department champions– there are allies out there who keep pushing for you. Look for those folks. Make friends with those folks. Let them help you.

And then everything located in one place– when I was talking about our website, ati.gmu.edu. We’ll also post the PowerPoint that we have here on our website, but that is our one-stop shop for anything when you’re looking at requesting services, regardless of whether it’s for video captioning or web testing.

And finally, improve your costs and your timelines.

The RFP was a great way for us to reduce pricing per minute, but outsourcing all of our requests has actually allowed more time for our accessible media coordinator, Courtney Shewak, to work hands-on with various faculty and departments, so that way, we can slowly pull in different folks who may not have necessarily worked with us to begin with. So it allows her to go and mold a new process altogether and bring in new customers.

So I think that might be it for me. Lily?

LILY BOND: Thanks, Kara. As always, very valuable insight into your process. So I know that a lot of you are doing in-house captioning. So as we move into our last two types of workflows, I’m going to go through some DIY processes and then look at some API workflows. And then we’ll get to questions.

So before I talk about how to do DIY captioning, I want to talk about what is good enough for captioning. It’s a big question. People worry about accuracy rates a lot, so all of the laws really emphasize that an equivalent alternative must be provided for video content. And so when you think about an equivalent, what does that mean? This chart is one that I love, personally, because it looks at word-to-word accuracy. And then it shows what those percentages look like when you put a lot of words together.

So you have to multiply the chance for every word in the sentence, so for an eight-word sentence with 85% accuracy for each word, you end up with just a 27% likelihood of accuracy for that sentence, or for a 10-word sentence, you’d end up with 20% accuracy.
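That compounding effect can be checked with a couple of lines of Python (a quick sketch, assuming each word is transcribed correctly with the same independent 85% probability):

```python
# Probability that an entire sentence is transcribed perfectly when
# each word is independently correct with the same per-word accuracy.
def sentence_accuracy(per_word_accuracy: float, num_words: int) -> float:
    return per_word_accuracy ** num_words

print(round(sentence_accuracy(0.85, 8), 2))   # 8-word sentence -> 0.27
print(round(sentence_accuracy(0.85, 10), 2))  # 10-word sentence -> 0.2
```

So even a seemingly decent per-word rate collapses quickly at the sentence level, which is the level at which viewers actually read.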

So when you’re looking at creating captions or outsourcing to vendors, looking at the accuracy rate is actually really, really important, because it can go downhill really quickly. And as an example of what 85% accuracy looks like, an eight-word sentence, as I said, can be about 27% accurate, and a 10-word sentence can be as bad as 20% accurate. And this screenshot is from a YouTube auto-captioned video. It’s a nine-word sentence that’s about 22% accurate, so two words are correct. And given the chart, that’s 85% accuracy word-to-word rate.

So the text on the screen says, “Plaques double dealing allowing double the minnow for them and.” Well, what was really spoken is “Flax, double the vanilla. Always double the vanilla– cinnamon.” So you have to be really careful when you look at accuracy and when you’re creating caption files, because that accuracy rate does really go downhill.

So the first step when you are looking at creating caption files for your videos is to create a transcript for your video. Transcription takes about five to six times real time, so you need to make sure you leave time for this process if you’re doing it in-house. And it’s really important to follow standards for captioning and make sure that your transcripts are consistent across files. I’ll go through a few standards for transcription and captioning in a minute.

But once you have a transcript for your video, you can create your own caption file by setting the timings. So on the screen here is an example of a WebVTT file and an SRT file. These are both fairly simple formats to create. You can see that the WebVTT file has a beginning timecode and an ending timecode, allows for some alignment settings, and then the text. And an SRT file has the caption frame number, the timecodes, and then the text. So these are pretty simple to create on your own, and they’re easily digestible. And you can use these types of files for platforms like YouTube, Brightcove, Wistia, HTML5 video, and JW Player.
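As a sketch of how simple these two formats are, the snippet below renders one caption cue in each style. The timestamp helpers capture the formats’ main surface difference: SRT uses a comma before the milliseconds, WebVTT uses a period (the cue text here is made up for illustration):

```python
def srt_timestamp(seconds: float) -> str:
    """SRT timecodes look like HH:MM:SS,mmm (comma before milliseconds)."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def vtt_timestamp(seconds: float) -> str:
    """WebVTT timecodes look like HH:MM:SS.mmm (period before milliseconds)."""
    return srt_timestamp(seconds).replace(",", ".")

# One cue, rendered in each format.
index, start, end, text = 1, 2.5, 5.0, "Welcome, everyone, and thank you for joining."

srt_frame = f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
vtt_frame = f"WEBVTT\n\n{vtt_timestamp(start)} --> {vtt_timestamp(end)}\n{text}\n"
print(srt_frame)
print(vtt_frame)
```

Note the SRT cue starts with a frame number and the WebVTT file starts with a `WEBVTT` header; otherwise the structure is nearly identical.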

But my recommendation is to use YouTube for captioning. YouTube provides a really great starting point, because while they create inaccurate automatic captions, they do create a starting point for you. So one option is to create your transcript from scratch, and then use YouTube to set the timings. And that will help you create the caption file without having to worry about typing in the beginning timecode, the ending timecode. YouTube can do that for you.

And then you can actually download the SRT file from YouTube, and use that to create other types of caption formats. Another way would be to download the automatic transcript file, and edit that, and then reupload the accurate file. And you could also edit the automatic captions within the platform itself. So there are some tools in YouTube that help you do that a little bit more easily, and then you can publish those edits.

A couple of notes– thanks to Kara and some of her experience. She’s noticed that the timings have been off on some of her YouTube videos– when she creates an accurate transcript file and uploads it to set the timings, the text is accurate, but the timings YouTube generates in the platform are not. And she’s also noticed that it can take a really long time for YouTube to create the automatic captions when she uploads videos. So those are a couple of things to look out for. If you have an urgent request, you might need to allow a little extra time for the automatic captions to populate.

A few standards for caption quality– these are the FCC standards. They were implemented recently. They only legally affect the entertainment industry, but because there are so few standards in the captioning world, they’re a really great starting point for all industries.

So their four points are that the captions should match the spoken words to the fullest extent possible. They should coincide to the greatest extent possible with the spoken words and sounds. They should run from the beginning to the end of the program, and they should not obscure any relevant visual content on the screen. So if you’re watching a documentary, and a text bubble shows up in the lower third indicating the name of the speaker, you should move the caption file to the top of the screen so that it doesn’t block that content.

And then, as I mentioned before, if you are using an in-house method, it’s extremely important that your transcriptionists are trained in best practices and standards so that you end up creating accurate and consistent files. So some of the standards that are important to consider are that you should always use proper spelling and grammar.

You want to make sure that you use capital and lowercase letters. People are used to reading accurate grammar, and you shouldn’t give that up just to create a caption file. You should include speaker identification, and you can set your own standards for that, but it should be consistent. You should include all relevant sound effects, so to determine if something is relevant, it should be important to the plot or narrative of the story.

So if there are keys jangling behind the door in a horror movie, that is important to the plot. There is someone trying to get in, and you should include the sound effect, keys jangling. But if someone’s walking down the street, and keys are jangling in their pocket, then you would not need to include that, because that’s not important to the plot.

You should use punctuation to make the speaker’s intent clear, so you can say, hi, exclamation point, and get across the same emotion as someone shouting, “Hi!” And it’s easier for the viewer to read.

And then, you also want to look at doing verbatim transcription. This is really important for things like scripted TV shows, where every “um” is intentional. But in other types of content, those can be left out if they obscure the speaker’s meaning.

And then some caption frame standards– a caption frame should stay on the screen for a minimum of one second. The caption should not obscure other visual information, as I mentioned in the FCC standards. You shouldn’t allow a caption frame to hang on the screen through silence. So if there’s 15 seconds of silence, then you don’t want whatever was spoken last to be on the screen throughout that entire silent time.

A caption frame should not exceed 32 characters per line or more than three lines of text at a time. You should use a sans-serif font like Helvetica Medium, because it’s easier to read. And then caption frames should be precisely time synchronized to the audio, so those timings are really important to convey meaning.
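These frame standards are easy to check mechanically. Below is a minimal sketch in Python, assuming your captions are already parsed into simple (start, end, text) cues; the thresholds come straight from the standards above, and the `check_cue` helper is just an illustration, not part of any real tool.

```python
# Minimal caption-frame checker: minimum one second on screen,
# at most 32 characters per line, and at most three lines per frame.
MIN_DURATION = 1.0   # seconds a frame must stay on screen
MAX_CHARS = 32       # characters per line
MAX_LINES = 3        # lines per caption frame

def check_cue(start, end, text):
    """Return a list of problems found in one caption cue."""
    problems = []
    if end - start < MIN_DURATION:
        problems.append("on screen for less than one second")
    lines = text.splitlines()
    if len(lines) > MAX_LINES:
        problems.append(f"{len(lines)} lines (max {MAX_LINES})")
    for line in lines:
        if len(line) > MAX_CHARS:
            problems.append(f"line exceeds {MAX_CHARS} characters: {line!r}")
    return problems

# Example: a two-line cue shown for only 0.8 seconds fails the duration check.
print(check_cue(10.0, 10.8, "LILY BOND: Welcome,\neveryone."))
```

A script like this can run over a whole caption file and flag frames for a human editor to fix, which is a useful quality gate for in-house workflows.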

And then here is a list of common caption formats and where you may need to use them. On the top right is an SRT file, which I’ve already shown you an example of. And on the bottom right is an SCC file, which uses hex frames. And it’s obviously more difficult to understand and create from scratch. My recommendation is that if you are doing in-house captioning, you start with an SRT or WebVTT file, because those are easier to create and gauge the accuracy of, and then use a free caption converter to create any of these other formats. There are a bunch out there. We have a free caption converter on our website that you’re welcome to use to create more complicated formats.
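To illustrate why SRT and WebVTT are the easy starting points, the two formats are nearly identical. Here is a minimal Python sketch of the conversion; real converters handle cue settings, styling, and edge cases that this deliberately skips.

```python
def srt_to_vtt(srt_text):
    """Minimal SRT -> WebVTT conversion: add the WEBVTT header and swap
    the comma decimal separator in timestamps for a period."""
    lines = []
    for line in srt_text.splitlines():
        if "-->" in line:
            # Only timestamp lines are touched, so commas in the
            # caption text itself are left alone.
            line = line.replace(",", ".")
        lines.append(line)
    return "WEBVTT\n\n" + "\n".join(lines)

srt = """1
00:00:01,000 --> 00:00:03,500
LILY BOND: Welcome, everyone.
"""
print(srt_to_vtt(srt))
```

Going the other way– to a frame-based binary-style format like SCC– is far more involved, which is why starting from SRT or WebVTT and running it through a converter is the practical route.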

And then I’m going to move on to using an API. If you are using a captioning vendor, this is a really great way to automate and simplify the process if you are not using a major video platform or player. So what is an API? An API is an application programming interface, and it’s basically computers interacting with each other without the user interface. So a lot of us are used to going into an account system and clicking a button to submit a file, and then clicking a button to download. An API does that all just using computers without any human interaction.

Integrations are also built using APIs, so all of the automation that happens there on the back end is using APIs. And APIs can be great for designing workflows that suit your custom needs. They automate repetitive manual tasks, particularly at scale, which reduces cost, the hours of labor you need to put in, and the complexity of your workflow.

So a few things that you can do with an API and captioning– you can set up commands that allow you to manage captioning, translation, and alignment. You can view any information about your media files. You can request captioning, translation, or alignment, and then download finished captions and transcripts, as well as add interactive transcripts to your videos.
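As a concrete illustration, those commands usually boil down to authenticated HTTP calls. Everything in this sketch is hypothetical– the base URL, endpoint paths, and parameter names are invented for the example and will differ for any real vendor’s API.

```python
from urllib.parse import urlencode

# Hypothetical base URL for a captioning vendor's API (invented for this sketch).
API_BASE = "https://api.example-captions.com/v1"

def build_request(endpoint, api_key, **params):
    """Build the URL for a hypothetical captioning API call."""
    params["apikey"] = api_key
    return f"{API_BASE}/{endpoint}?{urlencode(params)}"

# Request captioning for an uploaded media file...
print(build_request("files/123/caption", "MY_KEY", turnaround="standard"))
# ...and later download the finished captions in a chosen format.
print(build_request("files/123/output", "MY_KEY", format="srt"))
```

The value of wrapping calls like these in scripts is that submitting, checking, and downloading can then run on a schedule with no one clicking buttons.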

So as an example of a great API custom workflow, Penn State College of Arts and Architecture eLearning Institute has developed a customized captioning workflow using our API, and they host all of their media assets for online courses on the ELMS Learning Network, where they have almost 2,000 videos. And it took Bryan Ollendyke, who is an instructional technologist at Penn State, and his team less than a week to develop and test the API integration. And for them, it was really a matter of scale, and he said that it saved them from hiring two to three other people to be a dedicated part of the media staff to pretty much just click buttons.

So that’s really saved them a lot of cost, and it has automated the process. And so the workflow for them is that anyone can upload a video and set the transcription status to Needs Transcription. And then it goes to either Bryan or one of his team members to approve the request and set the status to Send to 3Play. And that’s basically it on their end. So the servers run on an interval to check for new send requests and new completed files. And so every night, they batch and ship off all Send to 3Play requests, and they check on any files that are processing. And then they pull in any completed caption and transcript files in whatever specified formats they need.
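That interval job can be pictured as a small status-driven loop. This is only a sketch of the workflow as described, with placeholder functions standing in for the real upload, polling, and download API calls.

```python
def nightly_batch(videos, submit, check_done, download):
    """One pass of the nightly interval job: ship approved requests,
    poll in-flight jobs, and pull finished caption files."""
    for video in videos:
        if video["status"] == "Send to 3Play":
            video["job_id"] = submit(video)      # batch and ship the request
            video["status"] = "Processing"
        elif video["status"] == "Processing" and check_done(video["job_id"]):
            # Pull the completed caption file in whatever format is needed.
            video["captions"] = download(video["job_id"])
            video["status"] = "Captioned"
    return videos

# Placeholder transport functions, standing in for real vendor API calls.
queue = [{"status": "Send to 3Play"}, {"status": "Processing", "job_id": 7}]
nightly_batch(queue,
              submit=lambda video: 1,
              check_done=lambda job_id: True,
              download=lambda job_id: "WEBVTT\n\n...")
```

The status names mirror the ones mentioned in the workflow; the point is that each nightly pass only moves each video one step forward, so the loop stays simple and restartable.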

So when they’re done, the captions automatically download and post back to the video, and the staff doesn’t need to know HTML or understand caption formats to get captioning on their videos. And they’ve automated the process and reduced cost and been able to transcribe all of their videos at scale.

And with that, we’re going to move on to Q&A. As we compile some of these questions– and please keep asking them– I just wanted to mention a few upcoming webinars that we have. In May, we have “The Future of Closed Captioning in Higher Education,” and then “The Anatomy of an IT Accessibility Coordinator,” which Kara will be presenting again. And I’m sure you’ve all enjoyed her great speaking ability here, and she has a lot of great information for you in that webinar as well. So I urge you all to sign up for that.

And great. OK. We have lots of questions.

Kara, I’m going to start with you. There’s a lot of questions about the funding for GMU’s captioning. Did the funding come out of one central account, or were there multiple accounts? And how did you take care of billing multiple accounts, if that was the case?

KARA ZIRKLE: So I’ve tried to go through and answer a lot of things while you were talking, so just in case we don’t get to everything, I want to give that caveat. We actually put in a proposal to our higher-ups and received that as an addition to our budget, which allowed us to charge for captioning. We exceeded that every single year due to the amount of captioning requests we had.

So it finally became a smack on the hand– sorry, we went over that budget again. And they finally just said, OK, we’re going to put you in overhead funding. And so now we have the overhead funding. That’s where it’s coming from. There’s no multiple budgets or anything else like that. The only multiple we have is the multiple vendors we use and submit charges out from that.

LILY BOND: Great. Thanks, Kara. A question about how accessibility laws affect audio-only recordings. I can answer that. So for video files, a timed text caption file is required to comply with all of the laws, but for audio-only recordings, a transcript is sufficient. However, one thing to keep in mind is that a recorded PowerPoint presentation is considered a timed media presentation, and that would require captioning.

Another question for Kara– did you apply for grants from the federal government?

KARA ZIRKLE: No, we did not. We don’t have any one within our office that has grant experience, so I don’t know that we would do very well in any of that.

LILY BOND: Great. Thank you. Another question here is, how do you handle third-party videos on sites like YouTube that may not be properly captioned? So I believe that’s a question of copyright, wondering about videos that you do not own and whether or not you can caption those. Captioning educational videos that you do not own is likely a fair use of that content. And we would recommend using something like a captions plug-in, where you could embed the YouTube video, and then embed an accurate caption file below that video, so you don’t have to worry about copyright law at all. But in the case of copyright law versus captioning law, captioning is most likely to prevail.

KARA ZIRKLE: Basically, the way we look at it is which law trumps which, and which one would cost more money in the end by not following?

LILY BOND: That’s a good way to look at it.

KARA ZIRKLE: We also do a lot of pulling videos down from YouTube, and we will then have those captioned. And then we put the captioned video onto an ATI-specific channel that is a private or unlisted channel, where the user requesting it has that link and everything. So then they have the captioned version, but we’re never really making any change to the actual YouTube video that might be seen and questioned in regard to copyright.

LILY BOND: Thanks, Kara. Another question for you– who performed the GMU web audit?

KARA ZIRKLE: So I did remember seeing that one. If you’re talking about our overall university large audit, shoot me an email, please, because we’re in Phase II, and we’ve used a couple different vendors, neither of which our office was directly involved in. But if you’re talking about a web accessibility audit, then shoot me an email as well, because that’s everything that we did out of our office.

LILY BOND: Thanks, Kara. A question here– what is the legal accuracy rate for captions that institutions need to meet? So unfortunately, none of the laws state an actual number in terms of accuracy rate, but the general assumed necessity is 99% accuracy. Most schools will try to hit that compliance level, and most vendors will also try to do that.

Another question here– does Vimeo do any captioning? Vimeo allows you to add captions. We are working on an integration to automate the captioning process, but if that’s asking whether Vimeo does automatic captioning like YouTube does, I believe the answer is no. I’m pretty sure YouTube and soon, Facebook, are the only ones actually producing automatic captions for video.

KARA ZIRKLE: Correct. And Vimeo only started allowing captions to be added within the last year and a half, maybe. Before that, they didn’t allow captions to be included at all. From a university perspective, Vimeo gives a lot more back-end data for tracking purposes and is sometimes more preferred, but at the same time, YouTube has always at least allowed a caption file to be uploaded. So that’s why YouTube has usually been the one that we’ve mentioned and used over Vimeo in our conversations as well.

LILY BOND: Yeah, that’s definitely true that it’s recent with Vimeo. Kara, another question for you. Since you’re using multiple vendors, how does George Mason consolidate the data to track how many hours of video has been captioned, what the cost was, what the requests were, et cetera?

KARA ZIRKLE: That’s a detailed question, and the full answer can come from Courtney Shewak, who’s our accessible media coordinator. But when we do our charging, we’re charged based upon our minutes, so it’s a little easier to pick out that information, and we also keep everything in an internal database within our office. So we can track all of that information and keep a record of things as well.

LILY BOND: Thanks, Kara. Another question here– where do you see the future of captioning and accessible media demands heading in the future? Are the laws heading towards a mandate that everything must be captioned? That’s a great question, and it’s something that a lot of people are wondering about. The laws are definitely heading towards more captioning. There’s no doubt about it.

The Department of Justice has come out on the side of captioning requirements numerous times in the National Association of the Deaf’s lawsuits. And the ADA is being expanded to include that requirement. Section 508 has a refresh that includes WCAG 2.0, which is the Web Content Accessibility Guidelines. So there will be more standards and requirements for captioning in the future. Kara, do you want to talk about the trends that you’re seeing?

KARA ZIRKLE: I mean, we’re certainly seeing– just like today, we had a town hall meeting on push for more distance education courses, and that’s where a lot of the efforts at our university are looking at going. The more that that’s being pushed, it automatically, as a cause and effect, is going to cause a rise in captioning, and where some of those requirements are going to be, and everything else due to the heightened lawsuits that we’ve seen in education. But really, the question is, the difference of are we looking at a crystal ball versus a genie in the bottle kind of thing as to what we want and hope for? I guess it’s too soon to tell.

LILY BOND: Thanks, Kara. So someone else is asking, if the dialogue in a video contains incorrect grammar, do you amend the grammar in the captioning, or caption verbatim aside from ums? So that’s a great question. The quality of the captioning file is really about finding a medium between what was actually said and what is readable to the viewer. Our recommendation would be to use verbatim as much as possible, unless it interferes with the readability of the caption file. Kara, do you have any separate ideas about that?

KARA ZIRKLE: No. Again, when it comes to the nitty-gritty stuff and everything, I’ve been hands-off for a little while. So again, that would probably be best to go to Courtney.

LILY BOND: Someone else is asking, do you have any experience with transcribing content into multiple language options for your captions? If so, are there any recommended services for this? So that’s talking about subtitling. Once you have an accurate English transcript or a caption file, it’s really easy to translate that into other languages.

We offer translation as an option once you have the English captions with our service, but there are other great tools like Amara, which is a crowdsourced subtitling platform, where you could submit your English caption files on Amara and request languages. And then they would crowdsource people to complete them. YouTube also just implemented crowdsourced translation and subtitling, as well as crowdsourced captioning on their player. And so those are some great free options, if you know that you have a pool of translators that would be interested in watching your video and translating it into other languages.

Kara, has GMU done any translation?

KARA ZIRKLE: Not that I know of. A lot of times we do have a lot of videos coming in with very thick accents and everything, so we have a good partnership with some of our foreign language departments. But I don’t know that we’ve had anything for our captioning requests.

LILY BOND: Thank you. A question here– have there been any legal rulings that have sided with the video provider and not the National Association of the Deaf or Department of Justice? There was one ruling, but it was in the lower courts, and it was not to be taken as precedent, whereas the cases against Netflix and Harvard and MIT are in a different circuit that allows precedent to be applied. I would have to do a little bit more research to make sure that there are no others, but that’s the only one that I know of offhand.

Another question here– does GMU provide captioning when you stream live events? Can you talk briefly about the process of obtaining real-time captioning if you do so?

KARA ZIRKLE: So that actually comes from– our Assistive Technology Initiative Office is housed under our Compliance, Diversity, and Ethics, which has our ADA coordinator within that office. So if it’s any public event, those services are managed through the ADA coordinator, and they’re actually considered CART services, not captioning. So those are coming from two different budgets, and if it’s anything CART service-related for students for in class, then that would come from our disability services, also a different budget area. So as separate as we are, we work together for different things to where we actually are the technical support sometimes for those things, but they’re very different for what we have to work with.

LILY BOND: Thanks. Someone is saying, I’d like to hear more about what was just mentioned– subtitles that can be displayed in a separate window or box from the video program. So I was referring to– we provide a captions plug-in, which is just a simple embed code that holds the caption file from your account, and that can be associated with a video player and played together with an embedded video file. So in that case, you wouldn’t have to download the video, caption it, and then republish that video file if you’re worried about copyright. You could just embed the existing video file and associate a captions plug-in with it.

Kara, another question for you. Have you experienced the need for sign language embedded in a video instead of captioning as an accommodation?

KARA ZIRKLE: No, not as an accommodation, since not all individuals who are deaf know American Sign Language. Captioning is usually the best alternative, since it’s a form that anyone is able to access and read. And ultimately, it would probably go back to your disability services and the question of how to define equal access. It could be a can of worms.

LILY BOND: Yes. So I think we have time for just a couple more questions. Someone is asking, what kind of judgment should be used for accurate captioning when the speaker is intentionally using an accent or dialect like the character in a play? You should definitely indicate that accent or dialect, and then it’s a choice between whether you phonetically type it out or whether you use brackets saying, speaking with a French accent– whichever is more consistent with your standards.

And then for Kara, one final question. What was the process of requiring instructors to submit videos for captioning? We’re having trouble with compliance and need ideas for strongly encouraging instructors to either self-caption or submit videos to be captioned.

KARA ZIRKLE: So we worked a lot with our instructional designers on that, because they are the front-facing groups that work with the faculty member. Now granted, that’s only going to be for those who work with the instructional designers. We also have an accessibility committee overall that we’ve worked very hard to have a point of contact in all of the academic units, to where we also send some of our marketing information that we had talked about earlier to those individuals to push out the fact that we do offer free captioning. And then we make it an easy streamlined service of a request for someone to just submit the information.

So it’s really more about getting people aware that we’re here, and this is what we do for free, versus making it difficult for faculty to actually do it. Now that they know that they don’t have to do anything other than submit, it’s become a little bit easier. But definitely use any of the funnels you can, such as instructional designers or your Center for Teaching and Excellence or whatever else you might have within your university.

LILY BOND: Thanks, Kara. And I think that’s it for today. Kara, thank you so much for joining us. George Mason has such a great example of a captioning workflow and a move to a proactive captioning implementation, so thank you for sharing that with everyone.

KARA ZIRKLE: Absolutely. Happy to be here.

LILY BOND: And thank you to everyone who joined us. We will be in touch shortly with a link to view the recording and the slide deck. I hope everyone has a great rest of the day.