
Presentation at Accessing Higher Ground (AHEAD) 2012 – Webinar Transcript

Thanks everyone for joining us, and thanks to Howard Kramer and the folks who organized this event. This session is titled “Quick Start to Captioning.” My name is Tole Khesin with 3Play Media. We are based in Cambridge, Massachusetts, and for the last five years, we’ve been providing captioning and transcription services to our customers in higher ed, government, and enterprise.

We have about an hour for the session today. And what I'd like to do for the first part of the session is to go through the captioning basics, the process of how to create captions, some of the recent and upcoming accessibility legislation that impacts captions, and the value propositions and benefits of captioning.

And then for the second part of the presentation, I’d like to show you how people are implementing captions and transcripts. We have about 400 customers that we work with, and they’ve implemented captions in very different and interesting ways. And I think you’ll find it interesting as well. And then finally, we’ll open it up to an open discussion.

And on that note, if you have any questions along the way, please feel free to interject. It’d be great to keep this as interactive as possible. And then also if you have any questions or think of anything after the session, please come visit our booth out there or send us an email later on. So, let’s take things right from the beginning. So, what are captions?

So, captions are time-synchronized text that has been aligned with the media so that people can view and read the captions while the media's playing. Captions convey all of the spoken content, as well as non-speech elements and sound effects, which is actually a distinction from subtitles, which we'll cover in a second.

Captions originated in the '80s as a result of an FCC mandate that was specifically applied to broadcast television. So real quick, let's talk about the terminology just so that we're all on the same page. When we're talking about captioning versus transcription, the difference is that a transcript has no time code information. It's not synchronized with the media.

You could basically just print a transcript in a Word document or a text file. In contrast, captions have time codes associated with them so that they're aligned with the media. And for the purposes of a lot of accessibility legislation, I should also point out that a transcript is sufficient in the case where you're only streaming or providing an audio file.

But as soon as you introduce a video component, then you need to have captions, that is, a time-synchronized transcript, in place. And by video, it doesn't necessarily have to be a moving picture. It can be, for example, an audio track that's aligned with a PowerPoint presentation. That would require captions as well, because people need to be able to follow along.

Captioning versus subtitling. So, the difference here is that captioning assumes that the audience can’t hear. And so it will include sound effects and non-speech elements, whereas subtitling assumes that the audience can hear but they just don’t understand the language. So subtitling does not include any of that extra information. And usually subtitling is associated with translation to foreign languages.

Closed versus open captioning. So the difference here is that closed captioning usually means that the closed caption file is a separate file that the video points to. And what that allows you to do with the video player is to turn the captions on and off. Open captioning means that the captions are actually burned into the video. So there's no way to turn them on or off.

Especially with the proliferation of web video, people have really moved away from open captions. Closed captions are really the standard now, and there are a number of reasons for that. For one thing, it gives the audience the ability to turn them off in case they're obscuring something in the picture, and from a workflow point of view, it's just much easier to deal with than actually burning the captions into the video.

Post production versus real time. So, real time means that you're actually scheduling live stenographers to come to an event, or maybe they do it remotely, but they're basically typing along in real time as someone is speaking. Post production, or sometimes it's referred to as offline captioning, means that it's done after the event. So usually that means that you already have the recorded file in place.

So, how are captions used? Oh, yes?

It’s a question from the web audience.

Yeah, absolutely.

This may be coming up a little bit later, but what if it's just a talking head, meaning that we have some videos that are nothing but a person talking into the camera? It might as well be just audio. Could we just provide transcripts instead of captions?

Yeah, so the standard there is to have captions. Basically, the distinction is whether there is or is not a video component. And if there's a video component, even if it is someone just speaking, even if it's a talking head, then captions are still required.

So, captions originated about 30 years ago, and they were really intended specifically for broadcast television. But now, with the proliferation of online video, the need for web captions has expanded greatly as well. And so as a result, captions are being applied to many different devices and media, especially as people become more aware of the benefits and as accessibility laws become more stringent.

So with that, we'll talk a little bit about the accessibility laws. So we have Sections 508 and 504, which are both from the Rehabilitation Act. So, Section 508 really is just a mandate that requires that any kind of electronic communications be made accessible to federal employees and the public.

Section 504 sort of has similar consequences, but it's basically an anti-discrimination law. It basically says that people that have disabilities, including hearing disabilities, should have the same type of access. And these laws apply specifically to federal agencies, and also any other organizations that take federal subsidies. So often that includes, for example, public universities.

The next slide, the 21st Century Communications and Video Accessibility Act, often abbreviated as CVAA. That was signed into law in October of 2010. And basically, the CVAA requires that any programming that aired on television and is also being distributed in parallel on a website has to have captions on the web as well.

And it is actually being phased in, so let me quickly talk about the timeline. September 30 of this year, a couple months ago, the first phase kicked in. And what this did is it actually covered content that aired on TV and is now on a website, but that was not edited for the website. So it's a very narrow specification.

So in other words, if you take a show and you cut it up into clips, or maybe you take out the commercials, or maybe you trim it, then you don't have to caption it at this time. Then phase two kicks in in March of next year. And that applies to live and near-live programming. So this might be a sports show or a newscast, which would have to have captions when it's published on the website.

The third phase is pre-recorded programming that is edited. So this is the case where you take a video file, maybe a sitcom or a movie, and you cut it up into clips. Whatever edits you make, if you put it on a website, you'll still have to add closed captions to it. And then the last phase, which is going to be in March of 2014, applies to any kind of archival programming.

So there are actually more phases beyond this, but these are the near term ones that are coming into play. And again, this law really impacts only content that aired or is airing on TV. So if it’s not airing on TV, this really doesn’t apply. So for example, a lot of corporate content would not fall under this. A lot of educational content would not fall under this. Netflix and Hulu certainly fall under this.

Netflix does or does not?

It absolutely does, yeah. So let's talk a little bit about the value propositions of captioning. Obviously this is an accessibility conference and everybody understands that captioning is critical for people that are deaf or hard of hearing. In the US alone, there are close to 50 million, five-zero, 50 million people that are deaf or hard of hearing.

So it’s a very large contingent. Obviously very important. But there are also many, many other benefits. And these are not side benefits. These are primary benefits. And the reason why it’s important to talk about them is that, at least from our point of view, the more reasons that we can give to publishers to add captions, especially reasons that benefit everybody, the more captions we’re going to have.

So I think it’s really important to understand that captions are really valuable. So we’ll just skim through these a little bit. So what we’ve seen is a lot of people that use captions aren’t deaf at all or don’t have any hearing issues whatsoever. For example, they may be students that know English as a second language and they find it much easier to read the captions or transcript.

There’s the flexibility to view the content. And if you go to a library at a University, for example, you can turn on your volume. So you’d have no way to really hear or understand what’s being spoken in a video unless you have a text captions or transcript alongside it. Search is a really important benefit as well. Online video is just proliferating very rapidly.

I mean, universities, and companies, and governmental organizations are amassing terabytes and terabytes of video. But the only way to really be able to search through video is if you have text metadata alongside it. So unless you can search through it, it's kind of useless. You can never find what you're looking for.

And that sort of goes hand in hand with reusability. Just the ability to find a specific video clip very quickly and to be able to re-purpose it for something else is very important. And in the case of education, we have customers, for example, that are captioning their lectures. And then often, the professor will then take the transcripts from the entire course and maybe use that to help write a textbook or to publish a paper.

It’s a tremendous amount of content, right? People speak at about 150 words per minute. So if you think about that, over the course of an hour you have 9,000 words. And let’s say you have 20 classes or 30 classes. There’s a lot of content to draw from and to re-purpose that. And navigation, I’ll actually show you how you can really improve your navigation using text.

Another big one is SEO, which is search engine optimization. The point there is, unless you transcribe a video and put it into text, Google and search engines really have no idea what's in that video, and they have no way to help people find that video. So that's really important. I was kind of surprised to learn this, but we have many customers that have transcribed their content and added captions who actually aren't thinking about accessibility at all.

They do it for one of these other reasons. SEO is actually a big one. And then finally, in the case where you want to translate video into another language, you need to transcribe it first. So that's sort of a precursor for that step. Let's talk a little bit about the captioning process. Oh, by the way, are there any questions so far? OK, great.

So, a little bit about the captioning process of how we do it. There are certainly different ways to do it, but the way it works with our company is that the first step is you upload the video to us. And there are many different ways of uploading it. You can upload it from your desktop, you can import it from a video platform that you're using, you can put in links. So many different ways of doing that.

We process it, and then you download captions. There are a variety of formats depending on what device and media player you're using. And then you publish it, again, depending on how you're set up to do that. So for the upload, what people often do is upload from the desktop, or you can FTP files in. And for example, you can put in the links from YouTube pages and our site will just fetch those.

And then after the files are processed, you can download any of a variety of formats. Actually, I won't get into these now because I have another slide to talk about formats. And then finally, publish. And this is actually a YouTube player, but there are many different players and platforms where you can publish videos and their respective captions.

And each one is done a little bit differently. Sometimes the captions file is a separate file that lives separately from the video and the player just points to that file. Other times, the captions file is actually encoded with the video. And that depends on whether you're using YouTube, or iTunes, or one of the video platforms, and so on. Here we go.

Caption formats. Unfortunately, because there's no universal standard, there are many different players, and as a result, there are all these different formats to choose from. And we actually create all of these formats to just make the process as simple as possible. On the right hand side of the screen, this is an example of what the SRT format looks like.

This is a very common format. It's a very simple format. And there you see there are actually three caption frames, and each caption frame contains the spoken text in that frame, the in point, and the out point.
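Just to make that concrete, here's a hypothetical SRT snippet with three caption frames. The timings and text are invented for illustration:

```
1
00:00:01,000 --> 00:00:04,200
Thanks everyone for joining us.

2
00:00:04,200 --> 00:00:07,500
This session is titled
"Quick Start to Captioning."

3
00:00:07,500 --> 00:00:09,000
[APPLAUSE]
```

Each frame is just a sequence number, an in point and out point in hours:minutes:seconds,milliseconds, and the text to display during that interval.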

And that’s much more complicated. You wouldn’t even understand what’s happening in it, because it uses hexadecimal values instead of text. A lot of these use XML. So some are a lot more sophisticated. One format that’s actually emerging is called the WebVTT format. So this is something that’s coming out in conjunction with HTML5.

And there hasn’t really been critical mass in terms of adoption, but I think everybody’s moving in that direction. And the neat thing about WebVTT and HTML5 is that ideally, the visions is that it’ll be a universal standard. So you’ll be able to actually play a video on a website without using any other third party technology. So you won’t need to use another player.

And let me just finish the thought. With WebVTT, the idea is that in your HTML, you'll just put in a tag called track and reference that WebVTT file, and then that's it, you're done. I think we're still a few years out from getting enough adoption to the point where it becomes the standard, but that's the direction that we're moving in.
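For a rough sketch of what that looks like (the file names here are invented), a WebVTT file is nearly identical to SRT, except for the header and the periods in the timecodes:

```
WEBVTT

00:00:01.000 --> 00:00:04.200
Thanks everyone for joining us.
```

And the HTML5 markup references it with that track tag:

```html
<video controls>
  <source src="lecture.mp4" type="video/mp4">
  <!-- The browser renders these captions natively, no third-party player -->
  <track kind="captions" src="lecture.vtt" srclang="en" label="English" default>
</video>
```

Oh, there's a virtual question.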

The University of South Florida asked, does WebVTT only work with HTML5?

That’s a good question. I think the idea is, WebVTT is really geared towards HTML5, but I think others will start adopting it as well once it gains adoption through the web browsers. Yes?

I just want to say, I think that's what we saw at the luncheon before we came here. There was a long video played through the Chrome browser using HTML5, and I'm sure it used WebVTT.

Yeah, excellent. Great. Yes?

If you don’t own the YouTube video that your faculty is using, can you put captions on it with someone without interfering [INAUDIBLE]?

Yeah, so that’s a good question. So there are a couple of interesting issues with that.

Say the question again.

Oh yes, sorry. So the question is, if you're not the owner of the YouTube channel and you need to add captions to it, how do you go about doing that, and is it possible to do? There are a couple of interesting issues with that. One is the question of copyright. Can you even take that video and create captions? I don't really want to comment on that. That's sort of a grey area.

But in terms of technically doing it, it’s actually pretty straightforward. As far as from our point of view, you could just put in any YouTube link and we’ll fetch that file and we’ll create the captions. And we actually have a universal captions plugin which I’ll demo a little bit later that will allow you to overlay captions on any video, regardless of whether you own it or not.

Could you do it on a DVD too?

So the captions plugin is actually a JavaScript embed overlay, so it's designed for a web page. What we've done in order to simplify the captioning and transcription processes is we've built integrations with a number of video platforms, and players, and lecture capture systems. And in a lot of these cases, for example with Mediasite, or with Echo360, or with Kaltura, or Tegrity, any of those platforms, you actually don't even need to upload the video or download anything.

Basically the way it works is within your Mediasite account for example, you just specify which presentations you want to have captioned, you press a button, and everything else happens automatically behind the scenes. So Mediasite will send the media file to us, we’ll process it, and we’ll send the captions file back, it gets reassociated with the original files, and then it just appears.

And the same is true for a lot of these other platforms. And we've built a lot of these integrations just to simplify the process, because with all the different formats, the workflow can get pretty messy. This is the captions plugin that I brought up earlier. So the idea here– and actually I'll demo this in a second. I don't want to spend too much time on this slide.

But the idea is that this plugin, which you can see below the video player, is just a few lines of JavaScript embed code. It can either sit underneath the video or be overlaid on top of it. And it will stream in the captions as the video is playing. And the interesting thing about it is that this is actually a Vimeo player here, and Vimeo does not natively support captions.

There’s no way to get captions on Vimeo. But, using this plugin, it’s actually very simple to get captions. And then in addition to captions, we’ve also built a lot of different technology and video plugins to take captioning and transcripts to a new level. This is a screen shot of an interactive transcript.

And I’ll demo this in a minute, but the idea is that you can use that time text data that you’ve already created for the purpose of the caption. You can use that data to make the video viewing experience a lot more engaging and searchable. So this interactive transcript, you can click on any word to jump to that exact point in the video, for example.

You can create clips just by highlighting. You highlight a section of the text and it'll give you a unique URL so that you can share it with your colleagues or on Twitter. And then when other people click through on that link, it will bring them back to that specific video and play just the section of text that you highlighted. There's a lot of really interesting stuff that I'll show in a second.
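To give a sense of the mechanics, here's a minimal generic sketch of a word-level interactive transcript using the standard HTML5 video API. This is not our plugin's actual code; the markup and timings are invented:

```html
<video id="player" src="lecture.mp4" controls></video>
<p id="transcript">
  <span data-start="12.4">Thanks</span>
  <span data-start="12.7">everyone</span>
  <span data-start="13.1">for</span>
  <span data-start="13.2">joining</span>
</p>
<script>
  // Clicking any word seeks the video to that word's timestamp.
  var video = document.getElementById("player");
  document.querySelectorAll("#transcript span").forEach(function (word) {
    word.addEventListener("click", function () {
      video.currentTime = parseFloat(word.dataset.start);
      video.play();
    });
  });
  // Highlighting words as they're spoken is the reverse: listen for the
  // video's timeupdate event and mark the span whose time range contains
  // video.currentTime.
</script>
```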

And this goes back to the question of value proposition to the publishers. The more reasons and the more value we can show publishers to transcribe and caption their content, the more captions are going to be out there. OK, so that's the first part of the presentation. Before we go on, are there any other questions before we dive into the demos? Yes.

Will you be speaking about time a little bit? Not necessarily the time to have it done commercially, but the actual physical time it takes for something to get captioned?

OK, yeah. So the question is about turnaround time, if I understand correctly. Typically, if you're doing the captioning yourself, the standard process is first to transcribe it and then to synchronize it with the media. And in total it can take seven to eight times real time. That's pretty typical if you're doing it yourself.

We’ve actually develop a process which cuts that down a lot. So we have a three step process where when we receive the video, we first put it through speech recognition which gets it to about 75% accurate. But key thing about that is that it inserts the time codes with every word. And then we have professional transcriptionists which will go in and clean up the mistakes left behind by the computer.

And subsequently, we even have a QA person who'll go through and research difficult words, and make sure the punctuation and grammar are in place. And because the computer is doing 3/4 of the work, that process averages probably about 1 to 2x real time. So it's much faster for us than if you were to do it yourself, but that's approximately the turnaround. Yes?

Someone in the web audience asked whether you can only have access to these overlays and plugins if 3Play Media does the captioning, or whether they're available for captioning done in-house.

That’s a good question. So, currently, these plugins are only available if the content is processed through us. However, we are working towards coming out with some plugins that are publicly available. Actually, on the turnaround question, I should just point out that for our company, this standard turnaround is anywhere from, you can specify the time of upload.

So it can be one day, two days, or four days, depending on the urgency. And we’re actually getting ready to roll out a same-day service, which means that your captions and transcripts will be ready within eight hours after upload.

Cost?

Yeah. So, the cost for our company is pretty straightforward. It’s $150 per recorded hour. And it’s prorated to the exact duration of each file. So let’s say you have a one hour lecture, that would be $150. If you have a two minute video, it’s $5. And there are no minimums. And then actually what a lot of universities or organizations do is they’ll buy a bucket of time.

You’ll buy, for example, 100 hours. And then different departments will use the service. And then every time they use the service, it just debits against that balance. And then it locks in a volume discount as well, so that price comes down from there. So this part is really interesting for me.

As I mentioned before, we have hundreds of customers, and the way that they implement transcripts and captions in terms of workflow and the way they’re published varies a lot, which makes sense because everybody has their own objectives and different resources at their disposal. But I want to show you just a handful of some of the interesting case studies and the way people have implemented captions.

Some are very simple, others are a lot more sophisticated. So, starting with this first one. What we’re looking at here, this is a screen shot from Netflix, and this is actually pretty straightforward. They basically just take a captions file, and they ingest it into their process, and it just shows up on the video player. Very straightforward. Here’s another straightforward case.

This is Khan Academy. Again, just captions. They're actually using a YouTube player, and it's very, very straightforward there.

I have a question on that.

Yes?

I’ve watched a couple of random ones on Khan Academy, and they appeared for a very short period of time as far as being able to attend to what was being written as the algebra problem and what was being written as the caption. Do you have control over that and how long it stays on the screen?

Yeah, that’s interesting. I’d love to see an example there. Typically, the caption frame stays on the screen during the time that it’s being spoken. So yeah, it’s interesting. I’m not sure why that would be happening. OK, so here is an example with Mediasite. This is actually Colorado State University. And the way that this works is there’s a bidirectional integration in place with Mediasite.

So Colorado State University basically just specifies which presentations they want to have captioned, they automatically come to us, we caption them, send the captions back, and then they just appear. They actually just appear right here below the video, and it’s synchronized with the slides and the video right there. Pretty straightforward.

The same is true with Echo360. The captions appear right here in this little draggable box. This is just a quick screen shot. Georgia Tech uses a system called Tegrity, which is similar to these other systems. And it’s the same workflow there. The captions appear right there underneath the video and along with the slides that the professor is using.

This is an example. This is actually with Regis University. They use Kaltura. And so we have, again, the same kind of bidirectional workflow in place with Kaltura. And you can see the captions appear right here. They’re also using the interactive transcripts in some cases. This is an interactive transcript right here. It just provides a little more engagement.

I’ll show you a better example of that here in a second. This is interesting. So, this is with Penn State University. Penn State– this is the College of Arts and Architecture– they have built their own media management system called Elimedia for their E-learning Institute. And it allows instructional designers to basically take all of their assets, their video, images, all that stuff, and put it together and create course content.

And what they’ve done is they have built a custom work flow. I’ll show you this brief video here of how it works. So this custom workflow– I’m not getting much of a signal here. Here it goes. Basically the way this works is instructional designers, as soon as they import a video, then it automatically comes to us. We process it, we create the transcripts and captions, and we send it back to their media management system, and then it just appears.

And unfortunately it’s freezing up here. But here we go. This is actually all built in Drupal. So, the neat thing about the system that they built is that once an instructional designer or whoever’s creating the course elects to have a video captioned, you no longer have to worry about it. Once you check off that box, you can take that video and post it somewhere else, in some other course, and the captions will just show up. They’re tied with it. So you never really have to worry about it again.

This is an example of a site at MIT that has hundreds of hours of video content and hundreds of different interviews here. And they’ve built this really interesting site using our interactive plugins. And hopefully I’ll be able to show this. Here we go. So, the way that this works is above the video is this interactive transcript, and what it’s doing is it’s highlighting words as they’re being spoken.

And the user can also search through that video by searching through the transcript. And you can click on any word to jump to that exact point in the video. Then, on the right side of the video is another plugin that we just call an archive search plugin. This is really, really neat, because it actually lets you search across the entire video library. So let’s see if this is going to–

So, if I search for something like linguistics, there we go. So, it’ll show you where that word was spoken within all of these different videos. So, each one of those horizontal lines, that’s actually a timeline of a different interview. And if I click on one of these sections, it’ll actually expand that section of transcript, show you where the word linguistics was spoken.

Then if I click play, it’ll switch out videos and jump to that exact point. So it really just takes this library which has 300 hours of content and makes it immediately accessible to everybody. This is with Penn State, and this is actually an example of that captions plugin that we were talking about earlier. So this is a Vimeo player, and here’s the plugin here.

And as you can see, it’s just streaming in the captions as the video’s playing. Makes it very simple to use with really any player. And actually there was a question earlier about what to do in the case where it’s a YouTube video and you don’t own the content. Well, this would be very easy to do because you can just embed that video even if you don’t own it, and then just add this captions plugin to it.

It has some additional features too. You could even, for example, search. If I search for something, it'll show me where that word was spoken within that video, and then it can jump to that exact point. Something that ordinary captions don't do, obviously. So, let's see here. This is MIT OpenCourseWare. MIT OpenCourseWare was actually our first customer five years ago, and we've been captioning all of their content ever since.

And this year, we actually started building this new UI for them. And this really takes it to a new level. So, what you can see here, we’ve got the video here, we’ve got the interactive transcript here. It has the same functionality that we talked about before. And then the right hand side is the ability to search. So, this is actually a math course with 38 lectures in it.

And if I search for, let’s say I search for array, it’ll show me which lectures have that, where that word was spoken. And if I click on one of these lectures, it will switch out videos and actually point me to where that word exists in that lecture. And unfortunately, the internet connection is freezing up here. But it’s similar to what you saw in the MIT150 site.

So, here is another really interesting example. This is with Al Jazeera. They transcribed the presidential and VP debates. And they did some really interesting stuff. So let me show you how this works. We've got the interactive transcript below. Again, the same functionality. You can click on any word and jump to that point, and you can search through it. But the other neat thing here is that you can search across the transcript.

Now, if I search for something like Iran, it’ll show you in this little pie graph who said it more frequently. So you can see that Romney said it 17 times. And then over here, it’ll actually plot where that word was spoken along the timeline and who actually said it. So if I click on this link, it’ll actually jump to that point in the video where Obama is talking about Iran, which is pretty interesting.

So this is really some really neat stuff that you can do with the transcripts in addition to captions. And I should also point out, the thing that’s really neat about all this stuff is that there’s actually no cost to implementing any of these plugins. They’re really easy to add to a website, and there’s no cost because we’ve already transcribed the content. We already have the text data.

We’re really just re-purposing it for these other plugins, and we actually don’t even charge for it. So it’s really a neat thing that you can do down the road. And actually what a lot of customers do is they’ll start out by captioning content, and then maybe a year later, or in the case of MIT OpenCourseWare, five years later, they go back and they say, OK. Let’s put in some of these interactive features that make the content more engaging.

So that’s all possible. Here’s another interesting case. This is actually through our partner KnowledgeVision. So you can see the interactive transcript below, but what you can do here is that it’s synchronized not only with the video, but also with the slides. You can change the aspect ratio of all these things.

Down below, there are chapters that you can jump to, and related materials. And even the related materials are synchronized as well. It will show different materials when you’re in different chapters. So that’s pretty neat. And let’s see here. So that’s what I wanted to show for the demos. And so now I wanted to open it up to Q&A and open discussion. OK, question right here.

I have a question about the formats that you’ll produce it in. So, if somebody’s buying your services, they can request any or all?

Yeah. That’s a good question. They’re all available any time. The way our process works is we will create a core word to word time synchronized transcript. So there’ll be a transcript with a time code behind every word. And from that, we create all these derivative formats, all these different caption formats, timestamped transcripts, all of these interactive formats.

But at the core is just this timestamped word-level transcript. And we store all of those indefinitely. So you could come back years later and do something different if you're publishing with a different player. It's all available. OK, so there's a–

A question from the University of Iowa. Are we able to get links to some of those examples, or where can we find those?

That’s a good question. So actually, a lot of these on our website, there’s a section there on case studies. And a lot of these, or some of these, at least, are there. But if you send us an email, I’d be happy to send links to all of these. Yes?

You said word-for-word time synchronization?

Yeah.

That’s a good concept for me, because that precise– How’s that done? How’s that possible?

It’s interesting. So that’s an outcome of the process that we’ve built. So each one of these words, for example, has a time code behind it. So I can click on any word, and it’ll jump to its respective point in the video. And it’s just an outcome of the process. When we put it through our speech recognition process, the draft that comes out of that already has a time code associated with each word.

So then, when our transcriptionists go to clean up that draft, it already accounts for the time codes that are already in place. And so actually, when we create captions files, those are actually less precise. We have to break them up using natural language processing. Yes?

I have a question, but first I want to say, I love your tool, your SBV to SRT converter. I go to your website and I use it almost every day. So that's really a great thing. And we're interested in outsourcing some more technical videos in our nursing program and things like that. What if, when we get the caption file back, some editing needs to be done past that point? How does that work?

That’s an excellent question. In most cases, we find that the transcript is very, very high quality and the accuracy is very high, that editing isn’t really required. But in rare cases, you’re right. Sometimes you’ll need to maybe change the spelling of someone’s name if the transcriptionist wasn’t able to research that. Or you might actually want to redact part of the transcript, or you might want to block out somebody’s name.

So for that purpose, we built a captions text editor that’s built into the account system. I have a–

Could you give a quick summary of the question?

Yeah. So the question was, what to do in the case where an edit needs to be made to captions after they’ve been processed. So this is a screenshot from the captions text editor. So in the account system, you open up that file and you say edit, and what it’ll do is it’ll bring you to the screen where on one side you’ll see your video, and then on the other side you’ll see the transcript.

And you’ll actually be able to freely make changes to that text, and it’ll automatically account for the time synchronization. And the neat thing about it is that each word is linked to the video. So let’s say you want to listen to a word over and over again, you just click on it so you can hear it better and it’ll play from that point. And then as soon as you click Save Changes, that will propagate through to all the output files.

Do you have a standard for quality of the audio that you’ll accept? What do you do when you get a video from a campus that’s really, really poor quality?

Yeah, that’s a really good question. So, most of the video that we get in education is actually–

Can you repeat the question?

Yeah, sorry. So the question is what to do in the case where we receive video where the audio quality is very poor. Most of the time, especially in education, it's not an issue. People usually produce and upload very high quality audio. It's really more just a question of miking correctly.

Once in a while we get content where either there was a lot of background noise, or maybe many people talking over each other, or music playing, or it wasn't miked properly. In that case, what we usually do is we have to assess a nominal difficulty surcharge, but it's pretty rare. Any other questions?

You did mention Echo360, I think, as one of the other vendors that you work with?

Yes. So the question is whether we have an integration with Echo360, and yes we do. So we have a complete bidirectional integration with Echo360. You don’t really have to do anything. You just sort of specify which files you want to have captioned, and then that’s it. Everything else is done behind the scenes.

Do you have a minimum of buckets that you have to buy? You said a lot of people buy a bucket.

So the question was, in regards to pricing, whether there’s a minimum amount that you need to purchase. And the answer’s no. There are no minimums. You can open up an account and just upload a few files if you wanted to. There are no minimums at all. The purpose of buying those buckets is to secure a volume discount, but there’s really no minimum requirement. Yes?

The captions plugin that you described that currently is available when you use 3Play services– [INAUDIBLE]. In order to play that back by bringing in video from one source and then linking it to your captions, say from YouTube, and then you’re combining the captions, where does the interface have to take place? Could that happen on Echo360? Does that have to happen on your own server that you host?

Yeah, so that’s interesting. So, first of all, with the captions plugin and also the interactive transcripts from the other plugins, there are– Yeah, I keep forgetting to do that. So the question is, if you’re going to be installing the captions plugin, how does it communicate with the video player and where does the interface take place?

And I just want to back up and say that for that plugin, as well as the interactive transcript and the other plugins, the standard way of installing them is that we host the transcript files. And so basically, you would install one by embedding just a few lines of JavaScript code, something like the hypothetical sketch below.
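Just to give a flavor of what an embed like that might look like. This is purely illustrative; it is not 3Play's actual embed code, and the script URL, function name, and options are all invented:

```html
<!-- The video player, embedded as usual (e.g. a YouTube iframe) -->
<iframe id="yt-player" width="560" height="315"
        src="https://www.youtube.com/embed/VIDEO_ID"></iframe>

<!-- Hypothetical plugin embed: a container, a script, and a small config -->
<div id="captions-plugin"></div>
<script src="https://example.com/plugins/captions.js"></script>
<script>
  // Illustrative only: tell the plugin which player to track and which
  // hosted transcript to stream in as the video plays.
  CaptionsPlugin.init({
    target: "captions-plugin",
    player: "yt-player",
    transcriptId: "abc123"
  });
</script>
```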

So the same way that you embed, for example, a YouTube player on a website, you would then put in the embed code for the captions plugin, which would then reference the player, and then it would just communicate automatically. But there’s also another way of installing these plugins, which is that you can actually host them yourself.

So you can actually host the transcripts and these plugins on your own servers and have them streaming from your location. In that case, you would have complete control over it. There are some advantages and disadvantages to hosting it yourself versus us hosting it. Does that answer your question?

It does, 90-something percent of what I was wondering. But there’s one more piece of that, which is, can it take place inside a content management system, or does it have to be on a [INAUDIBLE]?

So the question is, can this take place inside a content management system. And definitely a lot of these plugins are actually in a content management system, in Drupal, WordPress, pretty much any kind of CMS, or even if you’re using an LMS. As long as you have access to a page where you’re putting in HTML or you’re able to embed a video, then you can just embed this plugin there as well. OK, great. Well, again, thanks very much for attending.

[APPLAUSE]
