
How to Implement Accessible Lecture Capture [TRANSCRIPT]

LILY BOND: Welcome, everyone, and thank you so much for joining this webinar entitled How to Implement Accessible Lecture Capture. I also wanted to say Happy International Day of Persons with Disabilities. I’m Lily Bond from 3Play Media, and I’m joined today by Christopher Soran, who’s the Interim eLearning Director at Tacoma Community College, and by Ari Bixhorn, the VP of marketing at Panopto. We’re going to talk to you about integrating closed captioning with lecture capture.

For the agenda today, I’m going to start by going through some of the benefits of captioning, followed by legal requirements and lawsuits. And then Christopher is going to go over the closed captioning workflows at Tacoma Community College and how they budget and prioritize content for captioning. And then Ari will go through some of the trends in lecture capture and accessibility and how to go about building accessibility into lecture capture. And then we’ll go through Q&A.

To begin, just to go through some of the benefits of closed captioning, the primary purpose of captions and transcripts is to provide accessibility for people who are deaf or hard of hearing. There are 48 million Americans who experience hearing loss, and closed captions are the best way to make that media content accessible to them. Outside of accessibility, though, people have discovered a number of other benefits to closed captioning.

Closed captions provide better comprehension to everyone. The Office of Communications in the UK conducted a study where they found that 80% of people who were using closed captions were not actually deaf or hard of hearing. And the closed captions really provide increased comprehension in cases where the speaker has an accent, if the content is difficult to understand, if there’s background noise, or if the viewer knows English as a second language. And captions also provide flexibility to view videos in noise-sensitive environments like offices, libraries, and gyms.

Captions also provide a really strong basis for video search. People are used to being able to search for a term and go directly to that point, and that’s what our interactive transcripts let viewers do within a video. And for people who are interested in SEO, or Search Engine Optimization, closed captions provide a text alternative for spoken content. And because search engines like Google can’t watch a video, this text is really the only way for them to correctly index your videos. Discovery Digital Networks did a study on their YouTube videos and actually found that adding captions to those videos increased their views by 7.3%.

Another benefit of captions and transcripts is their reusability. The University of Wisconsin found that 50% of their students were actually repurposing video transcripts as study guides, so they made a lot of sense for education. Of course, once you have an English caption file, you can translate that into foreign languages to create multilingual subtitles.

And finally, captions may be required by law, and I’m going to dive into some of the federal accessibility laws right now. The first big accessibility law in the US was the Rehabilitation Act of 1973. And in particular, the parts that apply to captioning are sections 508 and 504.

Section 508 is a fairly broad law that requires federal communications and information technology to be accessible for government employees and the public. So this is where closed captioning requirements come in. And section 508 applies only to federal programs. However, any states receiving funding through the Assistive Technology Act are required to comply with section 508. So often, that law will extend to state-funded organizations like colleges and universities.

The ADA is the Americans with Disabilities Act of 1990, and that’s a fairly broad law that comprises five titles. Titles II and III of the ADA are the ones that pertain to video accessibility and captioning. Title III requires equal access for places of public accommodation.

And the gray area here is what constitutes a place of public accommodation. In the past, this was really applied to physical structures like requiring wheelchair ramps. But recently, that definition has been tested against online businesses, and I’m going to go over a couple of those lawsuits in a second.

And then finally, the 21st Century Communications and Video Accessibility Act is the most recent accessibility act, and it was passed in October of 2010. It requires captioning for all online video that previously aired on television. So the first lawsuit that applies to captioning that I’m going to go through is the National Association of the Deaf versus Netflix. The NAD sued Netflix in 2012 for failing to provide closed captions for most of its Watch Instantly movies and television shows that streamed on the internet.

And this was the first time that Title III of the ADA, a place of public accommodation, had been applied to an internet-only business. Before, it had only been applied to physical structures like wheelchair ramps. And this was a really landmark lawsuit. Netflix argued that they don’t qualify as a place of public accommodation under the ADA, but the NAD’s lawyers made the case that the internet was not contemplated in 1990 when the law was written and that activities listed in Title III now take place on the internet.

So the court ruled in favor of the National Association of the Deaf. And they said that the legislative history of the ADA makes clear that Congress intended the ADA to adapt to changes in technology. Netflix ended up settling and agreed to caption 100% of its streaming content, but this case sets a really profound precedent for companies streaming video content across industries including entertainment, education, and health care.

A more recent lawsuit is the National Association of the Deaf versus Harvard & MIT. Harvard and MIT were sued this past February for failing to provide accessible video content. And the language that they used here was that Harvard and MIT were either providing inaccessible video content that was not captioned or that was inaccurately or unintelligibly captioned. So this was the first time outside of the entertainment industry that accuracy had been considered in the legal ramifications of closed captioning.

And so the argument there was that educational online videos are a public accommodation, regardless of whether or not the ADA originally applied to physical structures. And one of the lawyers for the National Association of the Deaf said, “If you are a hearing person, you are welcomed into a world of lifelong learning through access to a community offering videos on virtually any topic imaginable, from climate change to world history or the arts. No captions is like no ramps for people in wheelchairs or signs stating ‘people with disabilities are not welcome.’”

So in June, the Department of Justice submitted a statement of interest supporting the National Association of the Deaf saying that Harvard and MIT’s free online courses and lectures discriminate against the deaf and hard of hearing. And the final argument for that case was held in September. We’re still waiting on a decision on that, but the final outcome will have huge implications for higher education.

Christopher is going to mention our integrations, but before I hand it off to him, I wanted to give you a brief overview of how they work so that you have some context. On this slide are a couple of images of our Panopto integration. You set it up by linking your 3Play Media and Panopto accounts, which just requires entering two credentials from your 3Play account into your Panopto account. And then you can select a single file or a whole folder for captioning.

And when the captions are completed, they’ll just automatically post back to your Panopto video. So that makes it really easy to implement captioning if you’re using one of our integrations. So now I’m going to hand it off to Christopher, who’s going to talk to you about Tacoma Community College’s operational considerations and workflows.

CHRISTOPHER SORAN: Hi, everyone. So at Tacoma Community College, we have about 10,000 students, and we have a very diverse population that we support, which is fantastic. So I’m going to get into our campus accessibility policies a little bit. Obviously, we fulfill all accommodation requests, but we don’t want to just build ramps to nowhere, putting down a ramp and calling it done. We also strive to build a framework for proactive rather than reactive accommodation support.

And so obviously, we follow all the laws and regulations that were just spoken about. But I’d also like to approach it with a message about the benefit to students, and not just that it’s against the law. So I like to take a very positive approach about all the benefits that accessibility measures such as captioning bring to the students and the faculty and the staff on campus. I’m also an individual with a physical disability, so I’m passionate about accommodations in a personal way, in addition to my role on the campus.

So I’m going to talk a little bit about our captioning workflow. It starts with our support site. Students, faculty, and staff can go to our support site, search our knowledge base, and submit support requests, whether it’s an eLearning support request, an IT support request, or a captioning request. So we have this centralized support location where they can submit a request for captioning.

And then our staff are able to see it on the staff side. We’re able to intake all those captioning requests, and then we can process them. The first part of our process is obviously getting it to 3Play so that the videos can be captioned. So you can see here a listing of some of our recent captioned videos.

You’ll see the folder structure on the left. We organize that based on projects, and we map each captioning request to its project ID in our project management system.

And then we move on to tracking it. A key piece, for the reporting side as well as the billing side of things, is: what’s the need? How much are we captioning? What’s been captioned? What classes have been captioned?

So we track all this in this spreadsheet-type thing here. I can kind of show a little bit more. So we have different categories for different types of things. We also have a big push here on campus for open educational resources, and so we like to mark those and get those shared out for our captioning pieces.

And so one great story that I like to share is that over the summer we captioned two biology courses that usually run about five sections a course, and it was dozens of videos. We were able to use the 10-day turnaround, which costs slightly less for the captioning, and we spent all summer getting those ready, just working through them.

And then when fall quarter came around just a couple months ago, the instructor had two letters of accommodation requests for captioning of videos in the class. And she was able to just say, that’s fantastic. The videos are already captioned.

So it was just a really great story about how our proactive approach really paid off for this instructor. And it [? ended up ?] saving the students so that they were able to see the stuff right away, as well as save us some money so we didn’t have to do rush captioning. So I think everybody benefited from that situation, and we were able to reach a lot of students because it’s such a high enrolled [? sections. ?]

The approach we take to captioning– I like to encourage reusable content that gets captioned. Obviously, it’s important if there’s any specific accommodation requests around any other things. But the day-to-day lecture capture can be a lot of video. And we’d love to have an unlimited captioning budget, but there are some limitations and some priorities.

So one of the priorities I like to set beyond some of the accommodation requests is to have the reusable content. So work with faculty, say, are you going to use this from quarter to quarter? Well, that’s fantastic. Let’s get that all captioned so that the students can get the best benefit out of it. We get the best bang for our buck around getting those captions.

So obviously, funding for captioning is a big thing. Where do you get it? Is it a grant? Is there local funding available? In our particular case, we’re lucky to have the support of our administration, and our eLearning department is able to fund the captioning. In your case, it might come out of your disability support services office, or wherever that funding lives. But if you’re looking to get started, my recommendation would be to see if you can obtain $10,000 as a starter piece.

So what you can do with that is develop a workflow. We were able to develop that workflow: how do the requests come in? How do we track them? And what is the demand?

Then, once that $10,000 is spent, you can say, we have demand for hundreds more hours of video. You can gauge those pieces and track them to make additional funding requests beyond it. So that would be my recommendation for getting started with a captioning project.

The next goal in mind is to caption the top 100 most enrolled courses. We fulfill any requests that come in, whether accommodation requests or faculty bringing us requests for specific videos. But now, in addition to other proactive measures, we’re going to work on a more proactive approach: getting all the videos in the top 100 enrolled classes at the college captioned. We’re working through that right now, and I think that will bring the benefit to the broadest range of students, not just the ones we’ve reached so far.

One of the other things I wanted to mention was why we ended up using 3Play. So we have the Panopto lecture capture system. We currently have the Limelight Content Delivery Network, and we’ll soon be transitioning over to Kaltura.

And so we have these systems, and 3Play integrates with all of the systems. And so that really helps lessen the workload and the workflow piece, so we don’t have to manually move captioning files. It’s all integrated. So that’s one of the benefits that we have working with 3Play. That’s all I have. Thank you.

LILY BOND: Thank you, Christopher. I’m just going to hand it over to Ari, who is going to talk to you about accessible lecture capture.

ARI BIXHORN: Great, thanks, Lily. Thanks, Christopher. Support for accessibility in lecture capture has become really important, specifically in recent years. And it’s important to the point where it’s now impacting which lecture capture tools institutions adopt and also, as a result, how lecture capture vendors like Panopto are building their products. So for the next few minutes, that’s what we’ll talk about. We’ll talk about how lecture capture vendors are thinking about accessibility as really a core part of the user experience.

What’s interesting to note is that one of the original goals of lecture capture was to create a more accessible learning environment. So if students had to miss a class due to a personal, medical, or really any other reason, they could still attend the lecture virtually as though they were in the class, seeing the professor and any material that the professor was presenting, either on their screen, on a document camera, on a whiteboard, et cetera. And at Carnegie Mellon University, where Panopto was first developed, one of our initial use cases was explicitly focused on accessible learning.

There were two students in the computer science department at CMU, and they had a physical disability that frequently meant that they couldn’t attend class. So as Panopto was originally being developed, part of the original design was to help these students get an online learning experience that mirrored the in-class experience as best as possible.

Now, over the past few years, lecture capture has sort of transitioned from being a nice-to-have technology to a critical utility that students expect as part of their learning experience. In fact, in March of this year, the Wainhouse analyst group fielded a survey of a few hundred academic institutions. And the survey asked about the types of technology that they currently use and what they planned to use in the future. 80% of higher ed institutions that were part of that survey had some use of lecture capture. It was either mainstream, deployed across campus, or it was at least used in some departments across the institution.

Now, for those four in five universities that had been using lecture capture, the challenges that they face have really shifted over the past few years. When lecture capture was a relatively new technology, the big challenge was simply about scale. In other words, how can we record all of these lectures that are taking place in classrooms and lecture halls across campus?

But now that lecture capture has gone more mainstream, the big challenge is in the management and the delivery of those lectures to the students. And we work with universities of various sizes who are recording thousands or even tens of thousands of hours of lecture video each school year. So accessibility has become a key part of that management and delivery focus. And that’s happened particularly in recent years and in light of some of the recent lawsuits that Lily was just talking about.

Now, this focus on accessible lecture capture I think is highlighted by two statistics. The first is from that same Wainhouse survey that we were just talking about. In that survey, Wainhouse asked a question about which capabilities in lecture capture solutions drove the buying decision. In other words, which capabilities led an institution to choose one lecture capture provider over another?

And what was fascinating to see is that one out of four respondents listed accessibility as one of those key capabilities. So this is to say that accessibility isn’t just an important function in lecture capture, but rather the specific accessibility features available in a lecture capture system could sway an institution from one vendor to another. And I think that’s a really powerful statement about the importance of accessibility in these tools.

The second stat is one that we’ve seen here in our own system at Panopto. Year over year, the number of requests for captions that we’re getting is on the rise. So in the chart on the right, you can see that between the years of 2013 and 2014, the number of caption requests increased by 33%. And as we’re wrapping up 2015, we don’t have final statistics yet. But we are definitely on track for another year of substantial growth in the number of caption requests that come through our system.

So when I look at these statistics, there’s really no question that accessibility has become a critical and really an expected part of the lecture capture experience. So then the next question is, how do we build a lecture capture tool that addresses the needs of accessibility? And at Panopto we thought about this through three specific questions. First, how can we economically create captions?

Number two, how do we simplify the workflow of generating those captions from the recorded lectures? And number three, outside of captioning, how can we ensure that the media playback experience is accessible? A lot of the discussion on lecture capture accessibility focuses on captions, but it’s, of course, much broader than that. It’s really about designing a product that is inclusive so that people with various disabilities can navigate and interact with their online course.

This first question about economically creating captions– early in the development of Panopto, we realized that this is a distinct area of expertise, and it’s one that we should capitalize on through partnerships. So specifically, we work with folks like 3Play, whose focus is on creating high-quality, cost-effective captions at incredibly high scale. So I’m not going to spend a lot of time talking about this first question. However, questions two and three sit squarely in the domain of the lecture capture provider. So let’s talk about those in a bit more detail.

When we talk about simplifying the captioning workflow, this comes down to three key steps. The first is in supporting one-click requests for captions on any given recording. So within a lecture capture environment, the owner of the video, whether it’s a professor or a TA or an admin of the system, they should be able to easily request captioning in just a couple of clicks. Typically, as you see onscreen here, there are multiple turnaround time options for how quickly you want to get the captions. And as you would expect, the faster you need the captions, the more they cost.

Once that request is made, everything else should happen behind the scenes from the perspective of the end user. The professor or the lecture capture admin shouldn’t have to take any manual steps after that point. For example, say on the left you have your lecture capture platform, and on the right you have your captioning provider. Once that request is made to caption a video, the lecture capture platform should automatically send that request to the captioning provider, where the captions get generated.

And then, through an integration with the lecture capture platform, those captions are inserted directly within the video. So there’s no need for the administrator or the professor to manually upload the captions that are generated by that captioning provider. And so within a day or a few days, those captions simply appear as part of the playback experience.

Now, for some courses or sections of courses like the biology course that Christopher was just talking about, you may want to caption all of the videos in that particular section or that course itself. So for that, you don’t want the owner of the video or the admin to have to manually request captions for each video individually. Instead, you want to provide an automated captioning service for the entire folder or the entire course.

And so in this case, every video that gets uploaded into a particular course folder automatically kicks off a caption request to the captioning provider. And at that point, the workflow that we just talked about kicks off, where the captioning provider generates the captions and then automatically inserts them back into the appropriate video within the lecture capture system. Now, in order to accomplish these two things, the lecture capture software has to have a tight integration with the captioning provider. And we’ve done a lot of work with 3Play and with other vendors to make that integration seamless.
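The folder-level round trip described above can be sketched in a few lines of Python. This is purely illustrative: every name here (upload_video, captions_completed, the request IDs) is hypothetical and does not reflect Panopto’s or 3Play’s actual APIs, which handle this through their own integration endpoints.

```python
# Illustrative model of the automated folder-captioning workflow:
# uploading into a watched course folder kicks off a caption request,
# and the vendor's completion callback attaches captions to the video.

captioned_folders = {"BIOL-101"}   # courses opted into auto-captioning
pending_requests = {}              # request_id -> video_id awaiting captions
videos = {}                        # video_id -> metadata


def upload_video(video_id, folder):
    """Register an upload; watched folders trigger a caption request."""
    videos[video_id] = {"folder": folder, "captions": None}
    if folder in captioned_folders:
        request_id = f"req-{len(pending_requests) + 1}"
        pending_requests[request_id] = video_id  # handed to the caption vendor
        return request_id
    return None


def captions_completed(request_id, caption_file):
    """Vendor callback: insert finished captions into the right video."""
    video_id = pending_requests.pop(request_id)
    videos[video_id]["captions"] = caption_file


req = upload_video("lecture-01", "BIOL-101")
captions_completed(req, "lecture-01.srt")
print(videos["lecture-01"]["captions"])  # lecture-01.srt
```

The key design point from the talk is that neither the professor nor the admin appears anywhere after the upload: the request and the insertion both happen machine-to-machine.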

The third step of simplifying the captioning workflow is to ensure that once the videos have been captioned, they can easily be accessed from within the learning management system. Since most students at universities are accessing their recorded lectures from an LMS, we want to make sure that the captions can be made available directly from within that familiar environment. So for a lecture capture vendor like Panopto, that translates into a requirement for tight integration into all the popular LMSes like Moodle, Canvas, Blackboard, Brightspace, et cetera.

Now, outside of captioning, the next question is, how do we take a more holistic approach to building accessible lecture capture playback experiences? And we have a list of seven of the things we look at at Panopto to address this. The first is keyboard accessibility of the user interface.

So for example, when we look at the media player for our lecture capture system, does the media player require a mouse to navigate, or can it be controlled entirely by the keyboard? So using a keyboard– and in most cases, it’s the Tab key– the user should be able to navigate and control things like playing the video, pausing it, fast forwarding, adjusting volume, toggling into full-screen mode, and other functionality that’s available in that media player.

And as part of that, when the keyboard is being used, it needs to provide on-screen indications of the current area of focus. So as I tab over to the Play button, there should be an indication of some kind– typically, a highlight or a box around the Play button– indicating that that is the key area of focus.
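The Tab-to-navigate, Enter-to-activate behavior described above can be modeled abstractly. This sketch is not any real player’s interface; the control names and two-key model are assumptions chosen only to show focus cycling and activation as separate steps.

```python
# Minimal model of keyboard navigation in a media player UI:
# Tab cycles the area of focus; Enter activates the focused control.

CONTROLS = ["play", "pause", "rewind", "fast_forward", "volume", "fullscreen"]


class KeyboardNavigablePlayer:
    def __init__(self):
        self.focus_index = 0      # current area of focus (starts on "play")
        self.activated = []       # controls triggered so far

    @property
    def focused(self):
        return CONTROLS[self.focus_index]

    def press(self, key):
        if key == "Tab":          # move focus to the next control, wrapping
            self.focus_index = (self.focus_index + 1) % len(CONTROLS)
        elif key == "Enter":      # activate whatever currently has focus
            self.activated.append(self.focused)


player = KeyboardNavigablePlayer()
player.press("Tab")               # focus moves from "play" to "pause"
player.press("Enter")             # activates the focused control
print(player.focused, player.activated)  # pause ['pause']
```

In a real web player the focus index corresponds to the browser’s tab order, and the on-screen highlight Ari mentions is the visible rendering of that same focus state.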

The second element is that any image within the user interface should also be available in a text form. And with a web-based user interface, this typically translates into alt tags for image elements. These alt tags should describe the functionality of the control. And ideally, it should also identify its current state. So for example, if I’ve tabbed over to a volume icon in a media player, that alt tag should have an indication that it is the volume icon and that, let’s say, the volume is muted. So it identifies the control and the state that that control is in.
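The "identify the control and its state" rule can be reduced to a tiny labeling helper. The wording pattern below is an assumption for illustration, not a standard alt-text format.

```python
# Sketch of state-aware accessible labels: the text names both the
# control and its current state, e.g. "Volume, muted" rather than "Volume".

def alt_text(control, state):
    """Build an accessible label combining a control name and its state."""
    return f"{control.capitalize()}, {state}"


print(alt_text("volume", "muted"))   # Volume, muted
print(alt_text("play", "paused"))    # Play, paused
```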

The third element is about obeying user-configured accessibility settings. So the examples I have on the slide here are contrast and size. If a user has selected a high-contrast color scheme in Windows, let’s say, the lecture capture environment and the media player should respect those settings. Similarly, if the user has configured their text size to be substantially larger for improved readability, that shouldn’t affect the usability or the access to any controls in the lecture capture environment.

Number four is screen reader support. So screen readers like JAWS and Window-Eyes should be supported for people who have visual impairments. A screen reader, as we know, uses synthetic speech to tell the user what is on the monitor. And again here, the elements in the lecture capture interface need to have metadata that describes their function and their state. So for example, a screen reader should be able to read that the user has tabbed onto a volume icon and that the volume is, let’s say, at 0%.

Number five is variable speed playback. And variable speed playback, or VSP, really provides benefits to a variety of students. So students who are studying in a non-native language benefit from the ability to slow down the audio and replay it as needed. Students with cognitive disabilities can also control their playback environment using VSP.

And what’s been interesting to see is that the ability to control the playback environment is actually changing the way that universities support students with learning disabilities. Traditionally, universities would often send notetakers to class for these students. And in some cases, the ability to have access to recorded lectures that can be controlled by the students is reducing the need to make use of those notetakers.

Number six is making sure that you provide a version of the recording that doesn’t require user vision. And specifically with lecture capture, that means creating an audio-only podcast of the recording and making sure that you create that audio podcast automatically as part of every recording that gets uploaded into the video library.

And then last but not least is a broad guideline for adhering to the Web Content Accessibility Guidelines. I believe there are about 14 guidelines and about 60 checkpoints. And these ensure that web content is easily perceivable, that it could be easily operated, and that it can be easily understood.

And so a couple of examples of that: one part of the guidelines focuses on contrast, ensuring that the text on a web page has enough contrast from the background to ensure readability. Another example is ensuring that all users can understand information that is conveyed by color differences, even if they have a color deficiency, such as colorblindness. So as you evaluate lecture capture solutions on the basis of how they support accessibility, these are some of the key elements to look for: the workflow of generating captions, and the broader support for things like keyboard accessibility, screen reader support, and the other items shown on this slide.
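The contrast guideline mentioned here is not subjective; WCAG 2.x defines it as a concrete formula, relative luminance per color, then a ratio between the lighter and darker luminance. A minimal implementation of that published definition:

```python
# WCAG 2.x contrast ratio: linearize each sRGB channel, compute relative
# luminance, then take (L_lighter + 0.05) / (L_darker + 0.05).

def channel(c):
    """Linearize one sRGB channel value (0-255) per the WCAG definition."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4


def luminance(rgb):
    """Relative luminance of an (R, G, B) color."""
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)


# Black text on a white background gives the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

WCAG 2.x requires at least 4.5:1 for normal body text, which is why tools that audit lecture capture interfaces compute exactly this ratio.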

So with that, let me hand things back over to Lily. And I think we will switch over to Q&A.

LILY BOND: Perfect. Thank you, Ari. That was a great presentation, and a lot of questions have been coming in. We’ll do about 15 minutes of Q&A. There are some resources on the screen for anyone looking for more information, but let’s just jump right into Q&A.

Again, feel free to continue to ask questions while we’re going through this. We’ll be answering them as they come in. And a reminder to everyone who has been asking: this presentation has been recorded, and we will send out a link in an email tomorrow to view the recording; the slide deck and the Q&A will be included as well.

So the first question here is for you, Christopher. How did you choose your lecture capture and captioning systems?

CHRISTOPHER SORAN: So our lecture capture system– I’m part of the Washington State Board for Community & Technical Colleges [INAUDIBLE] RFP. And so they chose for the state the Panopto lecture capture system. So there was a whole process on the state level involved around that.

And on the captioning side, we locally ran through a similar process testing out different vendors. And 3Play had all the feature sets we were looking for and the best ease of use, including all the integrations and things like that. So RFPs.

LILY BOND: Great. Thanks, Christopher. Ari, a question for you here. Do you find that customers are using multiple lecture capture systems? And if so, how does migration work between them?

ARI BIXHORN: Yeah. A number of our customers are using two and sometimes three lecture capture systems. And the most common scenario here is that different departments are using different lecture capture systems. And what often happens is that after a period of time, the departments get together and there’s a decision to go campus-wide with the lecture capture deployment.

And it’s typically at that point that the migration from one system to another ends up happening. In fact, with Tacoma Community College and the Washington State Board for Community & Technical Colleges, that’s exactly what happened. We built a conversion tool that would take all of the multimedia from the schools’ existing systems, as well as all the metadata, including captioning, export it from the existing lecture capture tool, and then import it into Panopto. And that’s typically the approach that we take.

LILY BOND: Great. Thanks, Ari. Christopher, a question for you here. Someone is asking, you advised allocating at least $10,000 to develop a workflow. Can you break down how that money was used? For example, was a certain percentage used to have work-study students caption, or was a certain amount allocated for videos over a certain length of time?

CHRISTOPHER SORAN: Sure. So in our particular case, it was all dedicated to paying 3Play to provide the captioning for our videos. We’ve certainly had some projects where we had work-study staff go through and caption some videos. The eLearning department happens to manage a help desk, so we have a large number of work-study students available to us, since we’re open most of the day, six days a week. So we have quite a team that we can work with. But we also need to be a help desk.

And so it was kind of a balance between the other work for our staff. So there certainly could be other ideas and opportunities for hiring staff out of that to do the lecture capture pieces, although you generally want to keep the staff around. And so you want to look for a more sustainable way, possibly, to keep them going. So the way for us was just to pay 3Play for the lecture captioning.

LILY BOND: Great. Thanks, Christopher. Ari, another question for you here. Once you have a captioned video in your lecture capture system, what’s the workflow for publishing it to YouTube? And do the captions continue once you publish it?

ARI BIXHORN: Yeah. Currently what we allow is an export of both the video and the captions, as well as other metadata, like the table of contents. That can then be uploaded to YouTube. What we’re working on is a one-click solution that will allow you to, in one click from Panopto, export everything and automatically syndicate it to a YouTube channel.

LILY BOND: Great. Thanks. Christopher, there’s a question for you here. How are faculty trained to use Panopto and the other captioning and video systems that you use?

CHRISTOPHER SORAN: Sure. On the Panopto side of things, we provide a lot of one-on-one training. So we’ll meet one-on-one with faculty. We’ll give them a Bluetooth headset so that when they go back to their classrooms, desktops, or laptops, they have better audio. As part of that, we provide one-on-one training, so we’ll sit down with them for 30 minutes, an hour, two hours, whatever it takes for them to get up to speed with the different uses. And we leverage our support team to provide that training.

As far as the captioning side of it, we leverage that centralized support system, so faculty can just say, I want these videos captioned. And then our media production team will go and make sure that those videos get captioned. And then it’s just a matter of following up with the faculty member who made the request, letting them know that it’s been done.

LILY BOND: Great. Thanks, Christopher. Someone is asking, has any research been done to prove correlation between adding captions to YouTube videos and views? Could it be that channels with many subscribers and views are more likely to add captions?

I’ll answer that one. So I mentioned a study with Discovery Digital Networks. That was actually a pretty comprehensive study about the impact of closed captions on views. They studied over 300 videos across eight of their YouTube channels, captioning 125 of those videos over the course of a year and leaving 200 of them uncaptioned.

And then they collected data on the views and how quickly views increased. The data was tied to the date that the captions were turned on, which controlled for the difference between views on recently published videos and older videos. And they found that in the first 14 days, there was a 13.5% increase in views. And then over the lifetime of the videos, that evened out to a 7.5% increase in views on their captioned videos versus their uncaptioned videos.

So another question here is for Ari. Someone is asking, are closed captions supported in Panopto content on mobile devices as well as on web versions of the content?

ARI BIXHORN: Yeah. We provide captions in three different experiences. One is the full web browser experience, where you can view multiple feeds of video simultaneously, and you can see the captions there. The second is what we call our embedded player. Our embedded player is more typically what you would see on a site like YouTube, where it is a single feed of video with closed captions overlaid on top. And then in our mobile apps, we provide the ability to show captions in our iPad, iPhone, and Android apps.

LILY BOND: Great. Thanks, Ari. Christopher, I don’t know if you have this, but someone was wondering, of that $10,000, how much money did you pay for captioning in just one academic year?

CHRISTOPHER SORAN: Sure. So that was kind of the starter piece for getting our process flowing, and it got us through two of the four quarters of the year. So a more sustainable budget would be a little bit bigger, probably more in the $30,000 range annually. It depends on the number of requests coming in and how proactive you’re going to be about it.

LILY BOND: Great. Another question for you, Christopher, kind of along the same lines. Someone’s asking about how you developed a proactive approach to captioning.

CHRISTOPHER SORAN: Well, it’s still in development. We first started by having a process for requesting captions and making faculty aware of that process so they can make those requests, and by working out with our support team and our media production team how that’s all going to flow and how we’re going to track it all.

So once we had that workflow, we were able to expand upon it. And we’re really just getting started on that top 100 piece that I was mentioning. We have some scripted emails, and we’re going to start contacting individual instructors: Hey, we noticed you have some of these videos in your course. Which of these are you going to be reusing from quarter to quarter?

So we’re engaging them more on an individual level about the videos in their courses, and having our staff spend time working directly with the faculty. Those are our next steps for making that process proactive and hitting those most-enrolled courses to get the biggest benefit initially.

LILY BOND: Great. Thanks, Christopher. So I think we have time for one more question here. Ari, someone is asking, can you include chaptering, metadata terms, or word search in your Panopto lecture capture?

ARI BIXHORN: Yeah. In addition to captions, we allow several kinds of metadata. One is chaptering information. One is timestamped notes during the playback. So a student can take notes that are private notes to them. Those notes are timestamped and saved along with the recording so that they can access them after the fact. And what was the third one?

LILY BOND: Chaptering, metadata terms, or word search.

ARI BIXHORN: Gotcha. Word search is also included by default. And the word search is done– if captioning is available, the word search will include the captions. So you can search for a term, find it in the captions, and then click on a link that will take you to that precise moment in the video.

For videos that aren’t captioned, we do run every video through a speech recognition system and a text recognition system. And that allows people to search for terms that are mentioned by the presenter or shown anywhere on the presenter’s screen. And similarly, the search results will provide a timestamped link that they can click on and jump right to that point in the video.

LILY BOND: Awesome. Thanks, Ari. Great answer. Christopher and Ari, thank you so much for just really well-thought-out and informative presentations. They were really appreciated by us and by all of the attendees. So thank you so much for being here.

ARI BIXHORN: Thanks for the opportunity.

LILY BOND: And thank you so much to everyone who attended. I hope everyone has a great rest of the day.