
Video Captioning for Accessibility: University of Florida and Regis University Case Studies – Transcript

TOLE KHESIN: Thanks everyone for joining us today, and thanks to Kaltura for organizing this event. This session is titled “Video Captioning for Accessibility.” My name is Tole Khesin with 3Play Media. We are based in Cambridge, Massachusetts, and for the last five years, we’ve been providing captioning and transcription services to our customers in higher ed, government, and enterprise.

We’re joined today by Nicole Croy from Regis University. Nicole is an e-learning technologist in the Department of Learning Design at the College of Professional Studies. Nicole has been with Regis University for 11 years, and she is web accessibility certified. We’re also joined by Jason Neely from the University of Florida College of Education Office of Distance Learning. While pursuing his Ph.D., Jason works at the multimedia services group, where he maintains the Kaltura and 3Play Media systems.

We have about an hour for this session. For the first part of the presentation, I’d like to talk through some of the captioning basics, how to create captions and workflows, recent and upcoming accessibility legislation impacting captioning, value propositions and benefits, and some of the latest technologies that take captions and transcripts to the next level. Then, I will hand things off to Nicole from Regis University, who will talk about their university accessibility policies and the technologies they’ve implemented. And then subsequently, Jason Neely from the University of Florida will take over, and he’ll talk about the solutions they’ve implemented to meet their accessibility policies. And then finally, we’ll open it up to Q&A and an open discussion.

And on that note, if you have any questions or comments while we’re going through this, please feel free to type them in the questions window, and we will address them at the end. Also, if you think of anything after the session ends, please visit the 3Play Media virtual booth, and we’ll be happy to continue the discussion. And you’ll also find captioning and accessibility resources that you might find helpful.

So let’s take it from the very beginning. What are captions? Captions are text that has been time synchronized with the media so that it can be read while watching the video. Captions assume that the viewer can’t hear the audio at all, so the objective is to convey not only the spoken text, but also sound effects, speaker identification, and other non-speech elements. So just to make sure that everybody is on the same page, I’d like to go through a little bit of captioning and transcription terminology.

So first of all, captioning versus transcription. The difference here is that a transcript doesn’t have any time sync information. It’s just the text. You can print it out in a Microsoft Word document, for example. In contrast, captions are time coded so they can be displayed at the right time and so that people can follow along with the text while they’re watching the video. From an accessibility point of view, transcripts are sufficient for audio-only content, such as podcasts.

But whenever you introduce a video component, you need to have time-synchronized transcripts and/or captions. And one thing to note is that the video doesn’t have to necessarily be a moving picture. For example, it could be a slideshow presentation with an audio track. And in that case, you would still have to have captions.

When we talk about captioning versus subtitling, the difference there is that captions assume that the audience can’t hear any of the audio. And so that’s why captions will include the non-speech elements– the sound effects, any kind of background noises, speaker identification. In contrast, subtitles assume that the audience can hear the audio, but they just don’t understand what’s being said. So subtitles usually will not contain those non-speech elements, and usually subtitles are associated more with translation to a foreign language.

Closed versus open captioning. The difference there is that closed captions can be turned on and off by the viewer, whereas open captions are actually burned into the video, and they can’t be turned on and off. With online media, publishers have really moved away from open captioning. Closed captions are pretty standard these days, and there are many different advantages in terms of workflow. But also from a user experience point of view, closed captions are much easier to use because you can turn them off if they’re obstructing something on the screen.

Post-production versus real-time. This is really a question of when the captioning is done. Post-production means that it’s after the event is recorded. Real-time means that you’re scheduling stenographers to transcribe the content live while it’s being spoken. There are advantages and disadvantages to each type of process.

How are captions used? Captions originated in the early 1980s as a result of an FCC mandate specifically for broadcast television. But now with the proliferation of online video, the need for web captions has expanded greatly. And as a result, captions are being applied across many different types of media and devices, especially as people become aware of the benefits and as accessibility laws become more stringent.

So with that, I’d like to talk a little bit about the accessibility laws that impact captions. Sections 508 and 504 are from the Rehabilitation Act. Section 508 is a fairly broad law that requires that all federal electronic communications and information technology be made accessible to employees and the public. Section 504 has a bit of a different angle. It’s basically an anti-discrimination law that requires that people with disabilities have equal access to electronic communications. Both of these laws apply to all government agencies, public educational institutions, and really any organization that receives federal funding. I should also note that many states have enacted similar legislation that mirrors the federal laws.

Next is the 21st Century Communications and Video Accessibility Act, often referred to as the CVAA. So this law was passed in October of 2010 and requires that closed captions be added to any kind of media that has aired on television. And it’s actually being phased in over the next few years. The first phase took effect September 30 of this year, and it requires that all media that airs on television and in parallel is broadcast on an internet website have closed captions.

But the stipulation there is that it only applies to media content that has not been edited for internet distribution. So if you have a movie or a show that airs on TV and then you duplicate that, essentially, on a website, then you have to have captions. So this would apply to sites like Netflix and Hulu. But the stipulation is that if you take that video and you edit it somehow– for example, you create clips from it– then that does not have to be captioned yet.

The next phase takes effect March 30 of 2013, and it will require closed captions for real-time and near real-time content. So that will apply to news and sports content that are broadcast on TV and simultaneously on websites. On September 30 of next year, the requirements will extend to all content that is being published on a website, even if it’s been edited. So this basically broadens the phase one requirements. So now if you have a show or a movie and you create clips from it, or you take out commercials, or you do anything to it, you still have to have captions if you’re publishing it on a website.

And then, the next phase is actually in March of 2014, and this will require captions for a much broader swath of content, including the archival programming. So things that aired on TV a long, long time ago and are now somewhere on the internet, they will have to have captions as well.

So we’ll talk a little bit about the value propositions and benefits of captioning. Although accessibility is usually the primary driver for adding captions to media, there are many other benefits that people are becoming aware of. We have about 400 customers, and many of them have added captions to their content in cases where accessibility was not the primary driver. They did it for one of these other reasons. So I’d like to just kind of go through these. I think it’s really interesting.

So accessibility, I think most people know about that. There are close to 50 million people in the US who are deaf or hard of hearing, so that’s a very large contingent. But most people who use captions are neither deaf nor hard of hearing. They turn them on for any number of reasons. A very common reason is that they know English as a second language, and having captions, especially in an education environment, just really helps them follow along. Especially in the case where maybe a professor has an accent or is difficult to hear, captions are really, really helpful in that respect.

Another advantage is the flexibility to view the content anywhere. So let’s say you’re in a library or at a workplace where you can’t turn on the sound. Turning on the captions is a great way of consuming that content, whereas otherwise you’d have no access to the audio.

Another huge part of all this is the ability to search through the video. Now unless you transcribe video, it’s really, really difficult to search through it and find what you’re looking for. It may be easy to do without a transcript if you have a five-minute video. But let’s say you have hours and hours of lectures, and now you’re searching for something specific. Unless you have that transcript, it’s just impossible to find what you’re looking for. So search is a huge, huge advantage that comes out of the transcription process.

Another benefit is reusability, and this is sort of coupled with search. If you can find what you’re looking for, you can reuse those pieces of video in other applications. Some of our customers tell us that professors, after a semester where the lectures have been recorded, get a copy of all those transcripts. And they can use those transcripts to write a textbook or to write an article. There are many different use cases. When you think about it, people speak at about 150 words per minute, so a typical 60-minute lecture will have about 9,000 words in it. So if you have 20 or 30 lectures in a semester, that’s a lot of content to draw from and to repurpose.

Another benefit is navigation. Text is actually a great way to navigate through video, especially through interactive video plug-ins, and I’ll do a very quick demo in a minute to better explain how text can be used to navigate through video. Another advantage is SEO, or Search Engine Optimization, and discoverability. So this really applies in cases where you are trying to maximize the viewership of the video. And the interesting thing there is that unless you transcribe that video, search engines don’t really know anything about it, and they’re unable to index it properly. Whereas if you take all of that text content and you allow search engines to crawl it and index it, it makes that video infinitely more discoverable.

And then finally, because many videos that people are producing are intended for a global audience, and videos are starting to get translated into other languages, transcription is really sort of the source. You really need to transcribe a video before you can translate it, so transcription is what makes that capability possible.

So we’ll talk a little bit about the captioning process, and we’ll move into different workflows. So this is the way that we do it. There are different ways of captioning video. But basically, the process with our company is that you take the video, and you upload it to our service. We process it and create the captions and transcripts. And then, you can download the captions and publish them, depending on which player or platform you’re using.

Uploading media files to the 3Play Media system is actually very simple. There are a number of different ways to do that. You can upload directly from your desktop. If your videos are on a public server, you can just paste in links, and our system will fetch them. You can upload over FTP. You can do it over the API.

Then, after we process the media files, you can download captions and transcripts, depending on which format you need. And I’ll talk about that in just one second. And then finally, you can publish those captions. And there are different ways of publishing captions, again, depending on the player or platform that you’re using.

I’ll talk a little bit about caption formats. So there are many different captions formats, and the one that you need depends on the type of video player or platform that you’re using. In the top right corner of this slide, what you’ll see is an example of what captions look like. This is actually the SRT captions format. It’s probably the simplest captions format. You can see three caption frames there. And basically, the information in each caption frame is the text and the in point and the out point, so when those captions become visible and when they stop being visible.
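For reference, a minimal SRT file looks like the following. Each caption frame has a sequence number, an in point and an out point separated by an arrow, and the caption text; the timings and text here are just illustrative.

    1
    00:00:01,000 --> 00:00:03,500
    TOLE KHESIN: Thanks everyone
    for joining us today.

    2
    00:00:03,500 --> 00:00:06,200
    This session is titled "Video
    Captioning for Accessibility."

    3
    00:00:06,200 --> 00:00:08,000
    [APPLAUSE]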

But there are many other captions formats as well. We actually provide all of these different formats, depending on what you want to do. And it’s also worth noting the emergence of the WebVTT format, which is being developed in conjunction with HTML5. Although there hasn’t really been wide adoption yet for WebVTT, the vision with HTML5 is that we’ll have an open, universal standard that allows publishers to publish video without using any third party plug-ins like Flash. This will also greatly simplify captioning, and it will work across all browsers and devices. And the way that it’ll work is you’ll use the track element, which will just point to the WebVTT captions file. But that’s a little ways out.
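As a rough sketch of how that will eventually work, a WebVTT file is plain text that starts with a WEBVTT header and uses periods rather than commas in the timecodes; the file names here are illustrative.

    WEBVTT

    00:00:01.000 --> 00:00:03.500
    Thanks everyone for joining us today.

And the HTML5 markup would point the video at that file with a track element:

    <video controls>
      <source src="lecture.mp4" type="video/mp4">
      <track kind="captions" src="lecture.vtt" srclang="en" label="English">
    </video>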

As you can see, the process of captioning and transcription can be a little cumbersome. There are a lot of elements that have to come into play, and our objective is really to try and simplify the captioning and transcription process as much as possible. And to that end, we’ve developed integrations with many different video players, video platforms, and lecture capture systems, including Kaltura. And with Kaltura, we have a really great bi-directional integration. So you really don’t even have to worry about a lot of these things that we’ve just been talking about, because everything happens automatically.

In Kaltura, you just select which files you want to have captioned. You basically press a button, and that’s it. Everything else is done automatically. What ends up happening behind the scenes is that Kaltura will send us the video files. We will process them and then send the captions back to Kaltura, and they’ll get reassociated with the video file. So it’s a very, very simple process.

The captions plug-in is another example of the tools that we’ve built to try and simplify the captioning process. So the way that this works is that it’s actually a JavaScript plug-in that can be embedded on any web page just with a few lines of code. And the way that it works is that it communicates with the video player, and it just displays the captions. And this will work with almost any video player.

The interesting thing about it is it’ll even work with video players that don’t support captions. So this is actually a screenshot from Penn State University. They’re using a Vimeo player, and Vimeo does not natively support captions. There’s no way to get captions with Vimeo. But by adding this plug-in, it’s very simple. It’s just a few lines of code. It’ll automatically communicate with the video player, and it’ll display captions. It also supports multiple languages, and it’s actually searchable. You can click on the magnifying glass and search for keywords and then jump to that exact point in the video.
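To give a sense of what those few lines of code look like, here is a rough sketch of the general shape of such an embed. Everything in it, the script URL, the CaptionsPlugin name, and the parameters, is an illustrative placeholder rather than the actual 3Play Media plug-in API; the real snippet comes from the 3Play account system.

    <div id="captions-container"></div>
    <script src="https://example.com/captions-plugin.js"></script>
    <script>
      // Hypothetical initialization: the plug-in locates the player on the
      // page, polls its playback time, and renders the matching caption frame.
      CaptionsPlugin.init({
        playerId: "vimeo-player",                        // player to sync with
        captionsUrl: "https://example.com/captions.srt", // time-coded captions file
        languages: ["en", "es"]                          // optional language tracks
      });
    </script>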

In addition to captions, we also create video plug-ins that take advantage of the timed text data that we’ve already created for the purpose of the captions. We use that timed text data to make the video searchable, more engaging, and SEO-friendly. So what you’re seeing here are screenshots of an interactive transcript, which I’ll actually demo right after this. But what it does is it highlights words as they’re being spoken. You can search through the video using the transcript. You can click on any word to jump to that exact point in the video. You can even create clips of the video using the text.

So there’s a lot of really interesting features available with these tools. One of the advantages of these tools is that there’s really no cost to them, because you’ve already paid to have the content captioned and transcribed, and these tools are just using that data. So in most cases, there isn’t even any cost to add these to your web page.

So with that, I want to show you a quick demo, and then I will hand things over to Nicole from Regis University. This is an example of a website that has hundreds of hours of recorded interviews. So if we click on one of these thumbnails– so what we see here is the video rolling in the top left corner– this is a Kaltura player– and below the video is an interactive transcript. So what it’s doing is it’s highlighting words as they’re being spoken. You can click on any word to jump to that exact point in the video. You can search through the transcript in that video.

On the right hand side, we have another plug-in called Archive Search which allows you to search across the entire video library, to search across hundreds of hours of videos. So for example, if we search for “linguistics,” it’ll show you where that word was spoken in each video. So each one of these rows is a timeline for each of the video interviews. And if I click on one of these sections, it’ll expand that section of transcript, show you where the word “linguistics” was spoken. And then if I hit Play, it will switch out videos and jump to that exact point in the video.

So this is really an example of video plug-ins that make the entire video library more accessible. And as I said before, the beautiful thing about these plug-ins is that they’re really easy to install. They automatically communicate with the video player. And there’s really no additional cost, because you’ve already paid to have that content transcribed, and it’s really just feeding off that timed text data.

So with that, I would like to hand things off to Nicole Croy from Regis University, who will talk about the solutions that they’ve implemented to meet their university accessibility policy.

NICOLE CROY: Thank you, Tole. As he said, I’m here today to share Regis’ journey to a more streamlined, accessible video process. I’m hoping that you gain some helpful insights to also help you in your journey. First, let me give you a brief overview of Regis and my department. The Department of Learning Design collaborates with course authors to design and develop online and blended courses. Currently, we have a little under 500 courses online, and of those, roughly 60% contain video.

I’m sure many of you have discovered when you were researching web accessible video that there’s a lot of gray area. There’s conflicting information out there, and a lot is left up to interpretation. At Regis, we believe that we have a duty to make all required course content accessible to all learners. To help us with that duty, just last year we developed a college-level accessibility policy. We based our policy on the Web Content Accessibility Guidelines (WCAG) 2.0. I would highly recommend that if you’re looking to start your own accessibility policy, you refer to the WCAG on W3C’s website.

Also, Web Accessibility In Mind, also referred to as WebAIM, which is based at the Center for Persons with Disabilities at Utah State University, has a lot of great references. And I’ve provided links here to the specific areas of their website that speak to video specifically. Many interpret the WCAG to state that an external transcript alone is sufficient. We tend to disagree. Section 508, as Tole explained previously, states “an equivalent experience for all users.” We take that to mean that the user should be able to follow the dialog while they are visually watching the video. So our policy states all multimedia files have synchronized captioning.

We’ve closed captioned all of our online videos. I’m sure many of you out there who have worked with video captioning are thinking, wow, what a task to caption all of those videos. Well, before utilizing Kaltura and 3Play Media, this was a very daunting task. Our previous process relied on a proprietary player. While this player and video format were able to deliver closed captioning, it was a very cumbersome process, and it ended up not being very compatible with all users’ environments. We had several, several support ticket issues with students who were no longer able to access the video. A lot of that came down to them needing the latest version of this player.

In addition to the tech support issues, we also had the issue that the process to actually get those captions into the online course was very cumbersome. Our original caption process looked like this. We would upload the video to our streaming server. We would then upload the video to our transcription company’s server. We would submit a request for transcription. Around three to four days later, we would receive an email stating the transcription had been done. We would log into our transcript provider’s website, download the transcript and caption files, upload those files to our streaming server, code the HTML player, and finally place the captioned video into our course.

As tech support issues continued to rise, we decided to start researching alternative options. We started out with YouTube and their automatic voice-to-text transcription service. While captions were provided, we found that the accuracy level was not acceptable, and it required manual cleanup because of the poor accuracy. This again increased our workload time. So based on the recommendation from our CIO, we started researching Kaltura as a video platform.

With the integration of 3Play Media, our closed captioning process now looks like this. We upload the video file to Kaltura’s server. In the metadata, we place a search term of “3Play.” This automatically generates a transcription request from 3Play’s side. We select a pre-designed caption player, download the embed code, copy it into the course. In as little as eight hours or as much as two days, the transcription file is completed, matched back up to the file in Kaltura, and we have a captioned video. As you can see, much more streamlined and almost half the effort of our previous process.

In addition to streamlining our captioning workflow, we also gained some unanticipated benefits. With Kaltura’s multitude of flavors and the auto-detect feature, we can now deliver customized video based on our user’s device, meaning all of our videos are now mobile ready. Another benefit that we realized was through the use of 3Play Media’s interactive transcript. Learners who have a learning disability or English as a second language, or students who simply learn better by receiving content in multiple sensory modes, benefit from the transcript.

And very similar to what Tole demonstrated, I’m going to show you how the interactive transcript is being used by a student. So here, we have a video from one of our Master of Science in Management courses. The instructor is demonstrating using a tree diagram to help make ideas and possibilities happen. With the interactive transcript, the student is able to watch through the process and gain the knowledge.

But then after reviewing, they may want to go back to a specific idea that was discussed. Instead of having to re-watch the video, they can simply scroll to the first reference of that content, click, and be taken specifically to that point in the video. So they’re only reviewing the content that they are in need of. And it’s a very nice feature, and we’ve gotten a lot of really great feedback from it.

So to conclude, with 3Play Media and Kaltura we’ve not only substantially streamlined our workflow for providing closed-captioned videos to students. We’ve also increased our delivery and enhanced usability for all students, not just those with disabilities. I’d like to thank you for your time. I look forward to answering any questions. And I would now like to bring up Jason Neely with the University of Florida.

JASON NEELY: Thank you, Nicole. Today, I’m going to be presenting on how we make our videos accessible in the University of Florida’s College of Education. So I’m going to give you a little background on our office. The College of Ed Distance Learning Office grew from three full-time employees and a GA– which is yours truly– to just over 15 full-time employees and five GAs. Our office staff consists of instructional design and support. We have a few developers, a graphic designer, and a marketing and recruitment team.

The growth that we’ve experienced with our staff is in conjunction with the growth that we’ve experienced in some of our online courses. In the ’07-’08 academic year, we offered about 70 courses with just over 1,700 students. And then the latest numbers that I was able to get access to, which is the 2010-2011 academic year, we had 136 courses with just over 2,900 students. We’ve also expanded with outside projects that we have going on, and some examples of this are we have worked with other entities in the development of learning environments for teacher professional development. We actually provide some support for online courses with the University of Florida K through 12 Research School. And actually a project that’s near and dear to my heart, because I am a Ph.D. candidate studying marriage and family therapy, is that our office is working with the UF Counseling and Wellness Center in developing an online therapy program for anxiety.

Something else that I’m going to point out here with this last bullet is the College of Ed Distance Learning Office is separate from the rest of the university. As I’m sure many of you know, University of Florida is a gigantic university with right around 50,000 students. And because we are separate from the university, this allows us to be a little more agile and, I think, better serve our customers’ needs.

I’m going to briefly talk about the University of Florida’s accessibility policy. UF is actually in the process of updating its current policy. And similar to some of the links that Nicole provided, it is based on the W3C Web Accessibility Initiative, and you can see the link is included here. Something else that I wanted to point out is that the essence of the UF accessibility policy is that it’s demand oriented. So what this means is that when a student with special needs comes to the Disability Resource Center, that kind of gets the ball rolling as far as making courses and programs accessible to meet this student’s particular need.

I’ve also included a link here to wrightslaw.com, and that is a website that is actually very rich with resources for not only K through 12 folks, but higher ed as well. And it’s useful for resources for teachers, parents, students themselves, and policymakers. It’s not the most attractive website, but like I said, it’s actually a very rich resource for anything that you might need in relation to accessibility. And in fact, it actually has an international directory for people that are outside the US and might need some resources and support in dealing with accessibility issues.

So next, I’m going to talk about video accessibility in our courses. A lot of the instructors in our department use videos to enhance their instruction. Videos come from all kinds of sources, including DVD clips. And in fact, we actually have a lot of instructors who have old VHS tapes that we are able to digitize and then post in the courses. And our policy is we’ll pretty much post anything if we can digitize it.

After a few years of trying to keep our heads above water with the growth in our office and the demand for our services, we encountered our first student who needed special accommodations that went beyond the extra time allowance for an exam. And as I’m sure you can probably guess from this presentation, that special accommodation was that the student was deaf, and this student needed captioning and transcripts for video and audio content. So in conjunction with the demand-oriented policy, we began a search to find the most streamlined and efficient way of getting the content posted for this student.

And just one other example here. Another accommodation that we made– it’s actually with this same student– is that in the course, the students had live presentations with video conferencing software. And so we actually had to bring in an ASL or American Sign Language interpreter to interpret the presentations for this student so that he could access them.

This is a screenshot of our course. The LMS that we use is Moodle. And we use Kaltura for our video services, as you can see here over on the left. And then we do use 3Play for our captioning and transcription services. So now, what I’m going to do is show a live demonstration of another feature that we use, the web cam capture. The screen that you see here is one of our courses. This is actually the Impact of Disabilities course. It’s an undergraduate course.

And what we’ve actually had a lot of instructors do is capture web cam video to enhance instruction in their course, kind of give some live feedback. If a topic or an issue comes up during the course, the instructor is able to use their web cam to address it. And one of the nice things about this feature that a lot of our instructors like is that it gives an extra visual presence in the course and allows a face to be attached to a name.

So the way this works in Moodle is that I would turn editing on. And when I turn editing on, this allows us to add an activity; in this case, we want a video assignment. So I click Video. We’ll give it a brief name. We’ll scroll down here to add video, and we move over here to click Web Cam. So hello. You can see me in here.

Then, the instructor would just simply click Record and begin to record this message. Hello. This is a demonstration of the Kaltura video web cam capture. So they would simply stop the recording, then click Next. Again, give it a brief title. This title here is actually the title that goes into the Kaltura management console. You can add any tags you use there. And then, if you have categories set up, you can add a category.

So then, the instructor would click Next. And at this point, the video that was just captured is being processed and sent to the Kaltura server. This should take just a few moments. And then, we’ll add the finishing touches.

You get a quick thumbnail of the video. You can change the player design. I actually prefer the darker player. And you can see the other customization features here.

Then, you simply click OK. Your video shows up, and then scroll down, and then we’ll save it. And there’s the video immediately posted in the course, and students can access this at any time.

Another way that we’ve actually used this is that when instructors have posted videos to their courses, we can now easily, through the integration of Kaltura and 3Play, upload the video to 3Play. And then, they will process it, transcribe it, and post the captions back to it within as little as 24 hours. And then, you’ve got a fully accessible video web cam capture.

This concludes my presentation. I hope you found it informative. I look forward to answering any questions that you have. I will now turn it back over to Tole.

TOLE KHESIN: That was really great. Thank you, Jason and Nicole, for those presentations and demos. This completes our presentation. At this time, we’d like to begin the Q&A and open discussion. So please type your questions and comments into the questions window, and we will start answering those questions. And thanks again to everybody.

NIRA SAPORTA: All right. Thank you so much Tole, Nicole, and Jason for a wonderful presentation, a well-synchronized one too. This is Nira from Kaltura, and we’re going to be starting the Q&A session right now. Thank you all for attending and joining and asking a lot of really great questions. We’ll be addressing them right now. I don’t think it will happen, but if we happen to run out of time and don’t get to your question, we promise to get back to you after the summit is done and provide answers to all of your questions.

We do plan to end the session before the end of the hour to allow everyone time to refresh before the next sessions that are lined up, as well as to visit our exhibit hall and our networking lounge. And just to mention, 3Play also has a booth in our exhibit hall, so if you have very specific questions, that would be the perfect opportunity to ask those. All right. So let me start with the first question from the audience, and that is for you, Tole. How does the captioning and transcription process work, and how do you handle technical or specialized vocabulary?

TOLE KHESIN: Thanks, Nira. So the way the process works on our end is that we start out by taking the video when we receive it from Kaltura or from whatever other source, and we put it through an automatic speech recognition step. And that gets it to the point where it’s about 70% or 75% accurate, which is definitely not usable by itself, but it provides a great starting point for us. And then, the second pass is that we have a professional transcriptionist who will go through every single word and clean up the mistakes left behind by the computer. And subsequently, we have a third step where QA will go through and research difficult words and make sure the grammar and punctuation are correct. And so at the end of the process, it’s pretty much a flawless transcript that’s at least 99% accurate.

Often in education, there’s a lot of content with vocabulary and specialized words. And what a lot of our customers do is, through the account system, upload vocabulary and acronyms that are relevant to a file, or those words could be uploaded to cover an entire folder or even the entire account. And then, that helps our transcriptionists to decipher the words when they hear them, so that improves the quality even further.

And then, another thing that we do is that we actually have quite a few transcriptionists on staff. We have actually about 400 right now, and we tag them based on their expertise. So often, for example, if you upload a math file, we can route that to a person who is familiar with that type of content.

And then finally, no matter how hard we try, in rare cases there will still be occasional errors. And we have built an editing interface in the account system where you can go in and pull up any file that has already been processed. And you can view the video beside it, and you can change the text in the transcript. And the software will automatically account for the time synchronization. And those edits, when you click Save, will propagate to all the captions and output files, so you don’t have to reprocess anything. So that capability’s available as well.

NIRA SAPORTA: All right, great. Thank you. I want to move to you, Nicole, and to ask a more general question that came from the audience, which is, do you feel you got your return on investment for implementing Kaltura and 3Play’s captioning solution?

NICOLE CROY: Thank you, Nira. Absolutely. I think as I demonstrated in the presentation, just the streamlining of our workflow and the decrease in the workload of our e-learning technologists alone has been worth the cost. Also, we’ve realized a substantial decrease in tech support issues with the Kaltura player, so that’s helped alleviate some of the tech support desk staff workload. And then finally, there’s the increase in usability that students are experiencing with the interactive transcripts and being able to view videos on a mobile device. That alone has made the whole process very worthwhile to us. We were paying for transcription and captioning prior, and the cost that we now pay with 3Play is very comparable to that cost. And so it hasn’t been an additional cost, but we’ve realized a lot of additional benefits because of it.

NIRA SAPORTA: All right. Fantastic, Nicole. This is really great to hear. And I think there were a number of people in the audience who had questions about cost, how much it costs and how much you pay. Tole, you want to quickly address that? And I’m sure that specific people can visit your booth and ask for a specific quote.

TOLE KHESIN: Yeah, absolutely. So, yeah, as Nira pointed out, if you come to our booth on the exhibit floor, there are some resources there. But the pricing is pretty straightforward. It’s based on the exact duration of each file, and it’s on a per-minute basis. And there are volume discounts.

A lot of our customers, universities, will pre-purchase a bucket. For example, you can buy 100 hours of content, and then every time you use the service, it just debits against the balance, and then it locks in a discount. But it’s all based on the duration. There’s no charge for the workflow integration with Kaltura or for the usage of the plug-ins, the captions plug-in or the interactive transcript. Those are all free tools that we provide just as an additional benefit of transcribing and captioning the content.

NIRA SAPORTA: All right, perfect. I’m hoping this answers a bunch of questions about cost. But again, feel free to contact Tole directly in the booth and ask all those questions. There’s one question that came in from the audience about how that is integrated into the Moodle Kaltura plug-in. Jason, do you want to maybe take that? I know you also use Moodle, and you’ve implemented the solution, so maybe you can talk a little bit about that.

JASON NEELY: Yes. So just generally speaking, the way that we use Kaltura, 3Play, and Moodle is we have the Kaltura plug-in. So we create basically an HTML page within Moodle, and then we go into Kaltura. And we pull the embed code, paste that into the page in Moodle. And then we actually take the HTML code from 3Play, and then we paste that into the HTML web page in Moodle.

So then, you save that HTML page, and you refresh it. And then, you have the Kaltura video and then the 3Play transcript. We actually have some HTML code so that the Kaltura video and the 3Play interactive transcript are side by side. So that’s kind of the basic process of how we get Kaltura and 3Play into the Moodle environment.
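As a sketch of what that HTML page might contain, with the two embed codes left as placeholders, the markup could look something like this; the layout details are illustrative, not the exact code UF uses.

    <!-- Moodle HTML page: Kaltura player and 3Play transcript side by side -->
    <div style="overflow: auto;">
      <div style="float: left; width: 60%;">
        <!-- Kaltura player embed code pasted here -->
      </div>
      <div style="float: left; width: 40%;">
        <!-- 3Play interactive transcript embed code pasted here -->
      </div>
    </div>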

TOLE KHESIN: Yeah. And just to add to that. With regard to providing the ability for faculty to initiate captioning requests, that’s something that we have been collaborating on with Kaltura. But it’s still sort of in the exploratory stage, and we’re working on that. So that capability doesn’t exist right now.

NIRA SAPORTA: OK, thanks. Yeah, this is definitely something that we’re aware of, that we need to extend more control to end users. And it’s definitely on our road map to handle that at some point. I think another question that came in is for you, Nicole. Someone asked if the Regis campus web accessibility policy would be available to the public. They basically said it sounds like you put a lot of work into it, and I guess they were wondering if that’s something that they could leverage.

NICOLE CROY: Yeah, we actually did put quite a few hours of work into that policy. And I am not exactly sure if that’s public information, if I’m able to share that. But I would recommend if anyone is interested in getting details on that, just send me an email. It’s ncroy@regis.edu. And if I get an answer from the dean of my college that yes, we can share it, I’ll be more than happy to distribute it. So sorry, but I don’t know right here and now. But definitely reach out to me, and I will let you know.

NIRA SAPORTA: All right, great. A question for you, Jason. For the web cam video that you create, someone asks whether you can specify directly in Moodle that you want Kaltura to send it to 3Play for captioning.

JASON NEELY: For the user in the teacher role who would be uploading the web cam video, I’m pretty sure that there actually is not a way. You have to go into the 3Play management console and let 3Play know that you want this video that’s in your Kaltura repository transcribed. And actually, 3Play has just upgraded some of their features. So once you go into 3Play and let them know you want XYZ video transcribed, they’ll go ahead and start the process and then post the transcript back into the Kaltura video.

So it’s not automatic. There’s a couple of intermediary steps, but it is pretty seamless. Or I should say it’s a lot more seamless, a lot more efficient now than when we first started, which I think was about a year ago.

NIRA SAPORTA: All right. Yeah, that makes sense. It’s a work in progress. And as I said, we are working towards putting more control in the hands of the end user and just trying to keep improving this process as we go on. Another question for you, Jason, specifically about the digitization of the VHS videos that you mentioned. They were asking whether those videos were mainly commercial or originally produced on campus.

JASON NEELY: It’s a combination of both. As you can imagine, VHS is a much older technology. So there are some old videos that a lot of our instructors like. There are certain segments from these videos that they want us to crop and post in their course. There are also other videos that have been produced on campus. So we have a combination of both of those videos that we stick directly in a professor’s course.

NIRA SAPORTA: OK. Another question that came from the audience, and I think that could go to both of you, Nicole and Jason. They’re asking if you’ve worked with the folks at the Described and Captioned Media Program at the National Association of the Deaf.

JASON NEELY: I can say I have not.

NICOLE CROY: And I have not either.

NIRA SAPORTA: All right, OK. But it does sound like if there’s a follow-up question in there, we can certainly take care of that. And another question that came from the audience to both of you, Nicole and Jason, is, based on your experience so far, what kind of feedback have you been getting from people that have used the solution? Or basically comparing before and after, what kind of feedback are you getting from your users?

NICOLE CROY: Nicole here. And I can say every day I continue to hear new ways that our students and faculty are using it, specifically the interactive transcript. I met with a faculty member in our nursing program for the College of Health Care, and she had created some videos for clinics that nursing students were taking. And these were basic skill videos, like showing how to put an IV in a patient’s hand.

And she had recorded those videos, and then provided the interactive transcript to the students. And she said the students were coming to class, and they had printed out the transcript, and they were using it as a step-by-step checklist of going through the steps for putting in the IV. So I’d say it’s wide open as far as uses. I keep hearing about different use cases continually, and so it’s a really positive feature. We are so happy to have brought it to campus, and we’re just really glad for it.

JASON NEELY: Yeah. This is Jason. For us at UF, I think as I said in my presentation, we’re in the College of Education. So we’re a little bit separate, a little more agile and flexible in terms of what we can do. Because we’ve started using 3Play, the captioning system, a lot of the other departments and decision makers in the broader university have really taken a look at what we’re doing and have been looking into providing a contract for other departments university wide. So I think they like the fact that it is a very streamlined and, in my opinion, cost-efficient process for such a large university, compared to what we were using or what was being done before.

Kind of on the flip side, I think there’s been some questions from the audience about cons, issues that we’ve had. I will say one of the issues that we’ve had– and this is more kind of a user education issue– is that folks who do not need the interactive transcript sometimes get frustrated with it being right there and kind of distracting as they’re playing a video and watching a little bouncing ball go through the video. And that’s actually an easy remedy in that you can just click a button, and it’ll close the interactive transcript. And the issue is– and I was actually just talking with our instructional designer yesterday– we in our office need to do a better job of educating our users on the features of 3Play, how to use certain things, and how to turn the features off. So that’s kind of a pro and a con, some of the things that we’ve been experiencing at the University of Florida.

NIRA SAPORTA: Yeah, wonderful. Yeah, there’s always considerations after you implement a solution. Things come up, and it’s great to hear that you’ve come up with creative solutions and also working with us to keep enhancing this very important aspect of teaching and learning.

There was one question that I’m going to answer about where people can download the slides of the presentation. The web recordings of all the sessions are going to be available after the summit is over, so you’re going to have access to the slides through those. And then, we are about 10 minutes before the hour. We have a couple more quick questions to take, and then we’re going to have to wrap up. So one thing is, does 3Play Media work with live video feeds?

TOLE KHESIN: So we do not. We process files that have already been recorded. And so as part of our process, you have to actually upload the file. So we do not do live captioning. It’s a separate service.

NIRA SAPORTA: Right, OK. And then, another question is for you, Nicole or Jason. They’re saying you talked about faculty using this. Is this also available for videos that are being uploaded by students?

NICOLE CROY: Here at Regis, in CPS we just recently piloted a technical communication speech online course, and we utilized Kaltura MediaSpace to allow students to capture themselves giving the speeches and then upload them. And then we use D2L as our LMS, so they were able to embed those speeches into the discussion forums. So we just started working this last term with students in utilizing that. But so far, it’s gone really great, and we’re getting a lot of good feedback.

JASON NEELY: In short, yes, with a few caveats. Because even though we’ve been using it for about a year, it’s still relatively new. We pretty much screen everything that comes through, and that’s primarily me. So whether it’s a faculty member or even a student, if there’s some presentation that they’re having to do, all the videos and stuff will come through me first.

And kind of to answer the question more directly, yes, students can. But it generally comes through us first, because we want to try to keep track of videos. Because if we were to open everything up to all students and faculty, things would get pretty unwieldy really, really quickly. And they actually are getting that way even now, even though we’re screening a lot of things. So we’re in the process of trying to come up with a clean process for how faculty and students can, on their own, upload videos, and we can keep track of things and make sure that inappropriate material isn’t getting posted and that sort of thing. So we’re kind of in the process of working on some of those issues now.

NIRA SAPORTA: Yeah, yeah. It’s a wonderful tool. You just need to be able to control it and moderate it. There’s a lot of moderation that’s built into Kaltura specifically, also outside the context of captions, and also, obviously, with this in mind. So again, thanks everyone for joining and for being very involved and asking all these great questions.

There are a few questions that we didn’t get a chance to answer, and we will get back to you in person to address those. And please feel free to visit our exhibit hall and our networking lounge and, obviously, our next sessions that are coming up. Tole, Nicole, and Jason, thank you so much again for coming, for presenting, for putting this together. And we look forward to the rest of the day.

TOLE KHESIN: Thank you.

JASON NEELY: Thank you.

NICOLE CROY: Thank you.
