

Presentation to C-AHEAD [Transcript]

PATRICIA TESAR: Hello, everyone. Can you hear me? My voice is fading away, but hopefully, it will last until the end of this. OK, I think we’re ready to begin now. I think most of you know who I am by now– my name is Patricia Tesar, and I’m the director of the Office for Students with Disabilities here at Gallaudet University. And I’m also the president-elect of C-AHEAD.

But today, I have the duty of leading this group because Emily Lucio, our C-AHEAD president, is on maternity leave, so I’m delighted to have this opportunity to really meet everyone today. I welcome you to this event, and I hope that it will be beneficial to all of you as you work with students in [INAUDIBLE] services. I’ve spoken to many of you through email and on the phone– including some of those that were not able to come today– and I feel like I’ve begun to get to know who you are. But I’m the kind of person who likes to match the face with the name, and you already know who I am. So I’m going to ask if you can all go around one-by-one and just tell me who you are, where you’re from, and what kind of work you do in serving students.

So we have interpreters available. The presentation is also captioned. So I’d like you to introduce yourself, tell me where you work, and tell me what kind of work you do.

BARRY WHITE: I am Barry White, and I work here at Gallaudet in Video Production Services. I’m a senior video producer.

PATRICIA TESAR: Thanks, Barry. Good to see you again.

STEPHEN SIMCOE: Hi my name is Stephen Simcoe. I work at the Community College of Baltimore County. I’m an instructional designer. And in terms of accessibility and student services, I work with Blackboard. I teach a class, and part of that class is to make faculty aware of what the college’s policy is in terms of accessibility, how to incorporate it into their syllabus so that students that require accommodation can contact the faculty as well as the Office of Disability Support to get the services they need. Thank you.

PATRICIA TESAR: Thank you and welcome.

MICHELLE SHENK: Hi, I’m Michelle Shenk, and I’m from Shenandoah University in Winchester–


CAMERON: All right, Josh. OK, well, Josh, I’m going to let you take over, and would you like the Q&A to wait until you’re done with your presentation, or can people ask you questions throughout the presentation?

JOSH MILLER: It might be easier, especially if we’re going to be passing mics around, just to wait until the end. And I am definitely happy to answer any questions. I definitely want to make sure there’s time so that we have a good conversation at the end of this. I don’t think it’ll take too long to go through the presentation.

MALE SPEAKER: All right, Josh, I’m going to hand it over to you. Everyone’s ready to hear your presentation.

JOSH MILLER: Great. Thanks a lot, Cameron. And thanks everyone for inviting me and taking the time today. It’s not the nicest of weather out there. It’s certainly not any nicer up here in Boston. So it sounds like it’s a great crowd there and a lot of good interest in accessibility and online videos, so that’s great.

So as Cameron mentioned, I’m one of the co-founders of 3Play Media. I’m going to talk a little bit about some of the basics of closed captioning, some of the applicable legislation, and our approach– just kind of how we look at this, how we tackle this challenge of making video content and audio content more accessible, and also how we really think of this as, in some ways, an opportunity for students as well.

So my contact info’s on here. Definitely feel free to reach out. If you have questions that we don’t get to today, I’m happy to continue the conversation offline.

So just a little bit about us as a company. The inspiration for 3Play Media started when we were doing some work in the Spoken Language Systems group at CSAIL, which is the computer science lab at MIT. We were approached by MIT OpenCourseWare to apply speech technology from a project we were working on to captioning, for a more cost-effective solution.

We quickly recognized, though, that speech recognition alone would not suffice and that, at the same time, it did provide an interesting starting point that we really wanted to explore. So we developed an innovative transcription process that does use speech technology but also uses humans in the process. And that way, we end up with a really high quality transcript and closed caption file, and even text with time synchronization.

So we’re constantly developing new products and ways to use these transcripts, largely with the input of our customers. So we really value input from our customers, and a lot of our customers are in the education area. So a lot of what you’ll see really does have student learning in mind. So as I mentioned, we’ll talk a little bit about some of the accessibility trends, captioning basics, the laws, benefits of captioning, which we really don’t want to overlook, and then our approach and process.

This is a bit of data from the 2011 WHO Report on disability. It states that more than 1 billion people in the world today have a disability, and nearly one in five Americans age 12 and older experience hearing loss severe enough to interfere with day-to-day communication. So the other interesting conclusion is that the number of people requiring accessibility accommodations is rapidly on the rise relative to population growth. So of course, we want to think about why is this happening.

Well, one is just medical advances. Things like a premature birth, a car accident, aging– these are all things that even just a couple of decades ago could be a serious threat to someone’s life, but now we’re able to save someone’s life in some of these situations. Unfortunately, there may be some consequences of that. There might be some by-products, such as hearing loss, that come about. We’ve also been at war for more than a decade now, so again, thanks to modern medicine, many casualties that would once have been fatal are not, but we are still dealing with the injuries and the side effects of what happens when thousands of people are at war.

So what it comes down to is that accessibility and hearing loss are real, critical issues that are just going to continue to be prevalent in the years ahead. Is this going to be the case for our student population, which is a young population? Not necessarily in the short term, but it’s something that we have to pay attention to.

So let’s talk a little bit about captioning basics. Closed captioning refers to the process of taking an audio track, transcribing it to text, and then synchronizing that text with the media. Closed captions are typically located underneath a video or overlaid on top. In addition to spoken words, captions convey all the meaning, including sound effects that are relevant to the content. And this is a key difference between captions and subtitles that we’ll talk about. Closed captioning originated in the early 1980s with an FCC mandate that applied to broadcast television. And now that online video is obviously becoming a huge medium, captioning laws and best practices are proliferating to online media as well.

So first let’s talk about some common terms– captioning versus transcription. So a transcript is usually a text document without any time information, meaning that the text doesn’t necessarily coincide with any point in the video visually. On the other hand, captions are fully time-synchronized with the media. So you can make captions from a transcript by breaking up the text of that transcript into smaller segments called caption frames, and synchronizing those frames with the media, and that way each caption frame is displayed at the right time.
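That frame-splitting step can be sketched in a few lines. This is purely illustrative– the data, function name, and the 40-character cap are assumptions, not 3Play’s actual pipeline:

```python
# Sketch: splitting a time-synchronized transcript into caption frames.
# Each word carries start/end times in seconds; frames are capped by
# character length so each fits on screen. All values are illustrative.

def build_caption_frames(words, max_chars=40):
    """Group (text, start, end) word tuples into caption frames."""
    frames = []
    current, frame_start = [], None
    for text, start, end in words:
        if frame_start is None:
            frame_start = start
        # Close the current frame if adding this word would overflow it.
        if current and len(" ".join(current)) + 1 + len(text) > max_chars:
            frames.append((" ".join(current), frame_start, prev_end))
            current, frame_start = [], start
        current.append(text)
        prev_end = end
    if current:
        frames.append((" ".join(current), frame_start, prev_end))
    return frames

words = [("Closed", 0.0, 0.4), ("captions", 0.4, 0.9),
         ("are", 0.9, 1.0), ("time-synchronized", 1.0, 1.8),
         ("with", 1.8, 2.0), ("the", 2.0, 2.1), ("media.", 2.1, 2.6)]
print(build_caption_frames(words, max_chars=25))
# → [('Closed captions are', 0.0, 1.0),
#    ('time-synchronized with', 1.0, 2.0),
#    ('the media.', 2.0, 2.6)]
```

Each resulting tuple is one caption frame: the text plus the start and end times that tell the player when to display it.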

And as I mentioned, captions and subtitles are definitely different. The main difference is that subtitles are intended for viewers who do not have a hearing impairment but may not understand the language. So it’s much more about language versus inability to hear. Subtitles capture the spoken content but not the sound effects, because the assumption is that someone can hear those sound effects that are relevant. So for web video, it’s absolutely possible to create closed captions or subtitles and have both tracks playing on the media player, or at least make them both available to the viewer. That’s absolutely possible with many media players.

Then closed versus open captioning– I think the standard that we often see is closed captions, but sometimes you hear people talking about open captions. The main difference here is that closed captions can be turned on or off by the viewer. The viewer has complete control, whereas open captions are always on the screen and cannot be turned off. And that’s a key difference. As I said, most media players on the web enable closed captioning.

And then post-production versus real-time. So you may see that there are live captions for this presentation going on right now. That would be an example of real-time captioning. Post-production means that the captioning process itself occurs offline, usually after the recording is complete, and could even take a few days to be fully processed. Real-time captioning, obviously, is done by a live captioner who’s getting the feed of the audio and is able to type quite quickly to show you the text of what’s being said. And as you might imagine, there are advantages and disadvantages to both processes depending on what your needs are.

To talk a little bit about how captions are used: they are applied across many different types of media, and especially as people become more and more aware of the benefits within the internet environment, the ways captions can be used are actually much more expansive than just broadcast television.

There are many different caption formats, and this is where things can start to get a little complicated. Different media players online basically require different types of caption formats. Some are certainly more common than others, though. The good news is that as you work with more and more web video, some of these caption formats will start to become quite familiar, because they get used over and over again.

So the image that you see on the right is what a typical SRT– SubRip file format– looks like. It’s basically pretty straightforward: the time code of the start of the frame, the time code of the end of the frame, and then the text of that frame. It literally can be constructed within a text file, and that’s what YouTube and a number of other media players can read to display the captions appropriately.

So there are a number of video platforms that operate this way, where you take a video, create a caption file, and associate it with the video. And the technology knows how to associate those caption files with the media player so that it all looks like one seamless viewing experience. But most times with web video, those two files, the video and the captions, do remain separate files.

So we’ll talk a little bit about some of the accessibility laws that often come into play. So section 508 is a fairly broad law that requires all federal electronic and information technology to be accessible to people with disabilities, including employees and the public. For video, this means that captions must be added. And for podcasts or audio files, a transcript is sufficient because the visual isn’t, obviously, as important.

Section 504 entitles people with disabilities to equal access to any program or activity that receives federal subsidy. Web-based communications for educational institutions and government agencies are covered by this as well. Section 504 and Section 508 are both from the Rehabilitation Act of 1973, although Section 508 wasn’t added until the mid-’80s. It’s actually important to note, before I go on to the ADA, that a number of states have enacted legislation that mimics Section 504 and 508– it’s called something different, but the content of the policy is very, very similar.

So the Americans with Disabilities Act of 1990 covers federal, state, and local jurisdictions. It applies to a range of domains including employment, public entities, telecommunications, and places of public accommodation. The Americans with Disabilities Amendments Act of 2008 broadened the definition of disability and made it the same definition as in Section 504. And that means that more people were actually covered by the ADA.

Now the ADA is interesting because it’s the ADA that was cited in the lawsuit brought by the NAD– the National Association of the Deaf– against Netflix. The NAD argued that Netflix is essentially a place of public accommodation. Netflix argued that the ADA applies only to physical places. So Netflix said, no, we’re not– we’re not a physical place, and therefore it shouldn’t apply to us.

But the judge actually ruled the ADA does, in fact, apply to online content and that Netflix does qualify as a place of public accommodation. So the ruling doesn’t get too specific. But the idea is that Netflix is so easily accessible to so many people, in terms of just reach, that it should count as a place of public accommodation.

So the ruling has profound implications for anyone publishing content online that has a broad reach. Whether it be education, enterprise, or entertainment, this is something that now has to be considered. The court did not get into exactly what the details are that provide reasonable accommodation, or what the cutoff is to be considered a place of public accommodation, but it certainly has some interesting implications going forward.

And then the CVAA, which is the 21st Century Communications and Video Accessibility Act, was signed into law in October of 2010. This expands the closed captioning requirements for online video that previously aired on television. So this is the law that most people may have heard of more recently, because it’s gotten a lot more press thanks to the fact that the entertainment industry just has quite a bit of recognition. And this has brought up quite a few issues for them.

So the law is often referred to as the CVAA and, basically, it really is pretty straightforward: if the content aired on television with captions at any point, when it goes online it also has to have captions. And the rule specifically says that the captions it goes online with must be as good as or better than the captioning experience on television.

We’ll talk a little bit about the benefits of captions, and this is something that we really take seriously, because there are obvious reasons for captions. The accessibility for the deaf and hard of hearing– there’s no question that that is very serious, and everyone should be able to consume content. But there are a number of other ways that captions can get used and can be valuable that sometimes get overlooked.

For example, ESL viewers– so people who don’t speak English as their first language often have trouble listening to the speaker and following at the same pace, whereas it’s easier to read the text and follow a little bit faster that way. Certainly, if you’re really new to the language, it makes a huge difference. Noise-sensitive environments whether it be a gym, library, an office where you’re not supposed to have sound on, all of a sudden captions let you consume the content. We actually work with a large wireless carrier that puts content out to all of their employees in their retail stores and everything, and all the computers in their retail stores don’t have speakers. So they caption everything, because that’s the only way their employees will actually know what’s going on in the video.

Search becomes incredibly valuable. Navigation, the user experience in general, and I’ll talk a little bit about some of the tools we built, but once you have text and the time data associated with that text that’s linked to the media, all of a sudden that text becomes a navigation tool. So you can search for words that are linked to a specific point in a video, and that can be very, very valuable. The other part is that the internet, in general, is a text-based entity.

So if you’re trying to search for something that someone said in a lecture or a video, it’s pretty much impossible if you don’t have the text equivalent. So that’s another reason why having a transcript or captions really can become a valuable tool– especially as the amount of video content is just rising and rising and rising, it’s harder to find what you’re actually looking for. SEO– search engine optimization, for people looking at this from a more marketing standpoint– is the same idea. Google can’t read what someone is saying within a video. But if you have a transcript on that page that corresponds to the video, well, now Google can absolutely read what’s going on.

Just a quick overview of what we actually provide. Our focus is really a premium-quality transcription and captioning service that can be offered at scale. So you can use the service whenever you want, as much as you want, and that shouldn’t make any difference in the quality that you get back. We also translate– so once we’ve created those captions and transcripts, we can translate them into other languages. And then we offer a number of interactive tools, and this gets into the whole idea of search and navigation and making the user experience better with the same transcription and captioning process.

And then the other part that we really take pride in is workflow simplicity. So we integrate with a number of video platforms and capture systems to make the process way, way easier. We also have our own API, but then again, if you only have a video or two, it’s absolutely possible to just upload it, have it processed, get captions, and then show that video with captions. So we really try to cater to whatever needs might be there.

Again, on the simplicity, we really try to whittle this down to three easy steps– upload a file, download the captions, publish the video with captions. And we really want to do everything we can to make it that simple. So we have a number of upload options. It can be as simple as you can upload right from your computer into this web-based account that we give you.

We have this idea of linked accounts where you can– and I’ll talk a bit more about this– but you can actually tie into YouTube, or Echo360, or Mediasite and transfer files back and forth very easily. If you have content on a server, you can actually just give us a list of links to those videos and get it going, or certainly FTP. So we really try to make the process as easy as possible.

And then, kind of on the other side, as I mentioned, all these different media players use different caption formats. So we give you all those different formats. In fact, this image here is pretty outdated. I think there are nearly twice as many closed caption formats now that we offer. But it’s basically you download whatever format you need whenever you want it. And that’s really important, especially with internet video, because it’s really common now to put the video up on one place and then maybe put it up on YouTube later, or put it up on Vimeo, or distribute it on multiple channels– iTunes U. So there are a lot of different places to distribute content and each one of those places might have different caption requirements, so we definitely want to make it easy to caption the video, no matter where it goes.

And then the idea of this platform integration or linked accounts that I mentioned, these are pretty much ready to go out-of-the-box with any 3Play Media account. And what that means is that with a very quick setup process, usually 5 or 10 minutes, you then have enabled the ability to make captioning requests for videos right from your Kaltura account or your Echo account or your Mediasite account. Those videos then get sent to us, we caption them, and then as soon as they’re finished, the captions automatically post back to the proper account, and they’re viewable by all the viewers. So it’s very little work involved in terms of actual workflow and publishing process for you.

So the accuracy and quality is something that we take really, really seriously, especially with academic content. It’s something where you really have to do everything you can to get it right. So we use a pretty unique process because, as I mentioned, we came at this thinking about how we can use speech technology in a way that still gets high-quality output. So we use a multi-step review process that does deliver more than 99% accuracy, even in cases of poor audio quality, multiple speakers, difficult content, or accents.

Basically, if you were to think about how much work gets done by a machine, usually the accuracy rate is about 60% to 75%. So one way to say it is maybe 2/3 of the work is getting done by the machine. But then the rest is all done by trained transcriptionists– we actually call them editors. The process is actually made much more efficient this way, because it affords our transcriptionists the flexibility to spend more time on some of these finer details, because they’re already starting from a draft.

So it means they can research difficult words, names, places. Just the fact that they’re editing rather than transcribing from scratch is just an easier thing to do cognitively, so that makes a big difference. And we really put a lot of care in to make sure that the correct grammar and punctuation is in place. We’ve done a lot of work on the operational side of the business as well, so that means we can match transcriptionist expertise with certain types of content, and we’re doing more and more around that as well, because that’s something that’s really interesting to us.

We actually have about 500 transcriptionists on staff, and they cover a broad range of disciplines. So we have lots of people who are experienced doing math content, even at this point, chemistry and biology. We’re not going to have too many biology majors doing this work, but we definitely are going to have people who are pretty comfortable with it.

So all the work is done by someone who’s been trained on our system. Our software, the editing interface, is very different from say a Word document. It is a very specialized process for this. And all of these people, all these transcriptionists are in the United States, they are native English speakers, and that’s something that we think is really important for quality. Everyone is also under NDA confidentiality agreements, so we do everything we can to make sure the content is being worked on in a secure way and that there’s as much consistency as possible.

So real quickly about some of the plugins that we offer, the captions plugin, it’s a one-line script that can be added to a Web page with your media player that brings a caption track into the page. It can be put on top of the player. It can be put below the player. It gives you a little bit more flexibility about the way the captions show up. It also incorporates search into the caption experience. So you can click on the magnifying glass, and you can type a word in and it will actually show you where that appears in the video and you can jump to that part of the video.

So this is what I was talking about how having captions can be extremely valuable for the navigation. It supports multiple languages, so if you do have subtitle files, it will work. The other part that’s interesting, so up until a few days ago, Vimeo, for example, did not support captions at all. So this would work with a Vimeo player. It works with a YouTube player, or many others.

Another interesting application is that if you want to show a YouTube video in class or online but it’s not your YouTube video and therefore you can’t really add captions to it, this is actually a way that you could add captions to it. So we can actually pull that video in from YouTube and then give you access to this captions plugin and you could embed the YouTube video onto a page and add this caption track. So you’re not officially adding captions to the YouTube video on YouTube, but you’re creating a captioned video experience for the students using this plugin.

And then we have the interactive transcript, which basically expands that window, so it looks more like a normal transcript. And each word will highlight as it’s being spoken. You can click on a word, jump to that part of the video, search within the video. So again, this is a great tool for long-form content.

If someone knows what they want to review or look for, it enables them to just to do exactly that. They can search for it, scan the transcript even, jump to that part of the video. So it’s quite popular with, as I said, long-form content, lectures, webinars, so it can be quite valuable for those purposes. And it’s also, again, very easy to embed. It’s a one-line script, and it works with over a dozen different media players.

So I’ll quickly show you some examples of how captions end up being published with some of these systems, and then we’ll have more of an open discussion. So this is an example of Mediasite with captions. Mediasite enables the searchability of captions, as you can see more on the left. So there’s actually a Search icon, and it does indeed search on the captions automatically. As soon as you add captions to a presentation, it makes it searchable. And you can turn the captions on and off. So it really takes advantage of having text synchronized with the video.

Here’s an example of eLearning content. These are tutorial videos that have the interactive transcript on the right. So it allows people to, if they want to quickly review certain sections, they can do that very easily and this is a Brightcove video player on the left. This is something that Al Jazeera English did with the debates. It’s a customized Brightcove player, a customized interactive transcript.

And what they’ve done is use the fact that there are speaker identifications in the transcript to associate keywords with each speaker. So they can actually identify when President Obama or Mitt Romney said a certain word and where on the timeline. That’s what you see on the bottom here– the timeline, and then who said what word how many times in those segments of video. And you can actually skip around using that tool. So this is something that they built on top of what we already offer, which is pretty cool, because it really shows the power of that synchronized text and all that data in the transcript.

This is an experience we built with MIT for their 150th anniversary, where they had over 200 MIT alumni, professors, and faculty talk about their experience with MIT and some of the things that have happened over the last several decades. And what you see on the left is the video with the interactive transcript below. And on the right is what we call our archive search. So you can actually search across all 200 videos and then jump to exact points in those videos.

And that’s where you see the red kind of notches on the timeline. So it’s a visual timeline, each horizontal bar, and then the red notches basically tell you where on the timeline your search result has a hit. And then you get the option to expand it and play that video.

This is all from the same transcript and caption data that we’re producing by default. It’s basically a UI layered on top that we offer. Again, it’s very easily embeddable onto a page. It’s compatible with a number of different media players, but the idea is, again, this is the same core data you’re using. It’s the captions and transcripts.

This is something that Infobase Learning is doing, Films on Demand. They put captions and interactive transcripts on their content. You can turn the transcript off if you want to, but this is something they now do for all of their videos.

And then, here’s one more example of that archive search. This is one course from MIT OpenCourseWare. And the slight difference here is that if you search on the right, it’ll show you which lectures have the search term, and when you jump to that lecture, it’ll populate the search in the interactive transcript.

And we’ve actually done a few studies on the value of this interactive text and the search and everything, and this is where I think it’s really important to recognize that captions and transcripts have value for way more than just people with hearing impairments. This is really valuable for lots of students. It’s a learning tool. It’s a way to consume content. It’s a way to navigate the content. And the feedback was overwhelmingly positive about using these tools within the learning environment.

All right, so that is the presentation. I’m happy to answer any questions and talk through what might be on your mind here.

CAMERON: Hey Josh, I’m going to hand it over to Isaac, and Isaac will take the questions from the audience.


ISAAC: Okey-doke, anybody have questions for Josh? In the back here.

JOSH MILLER: And I should say that I have some time to go beyond 2 o’clock. If you need to stop at 2:00, I totally understand, but I’ll let you dictate how long we go.

ISAAC: The question is what’s the turnaround time for these documents?

JOSH MILLER: Sure, so we actually offer several different options that you can specify when you upload files to us. Our standard turnaround’s four business days. But then we offer two- and one-business-day options as well as a same-day option. So really, we kind of put it in your hands to control whatever you need.

ISAAC: What is the cost?

JOSH MILLER: So the standard cost is about $2.50 per minute of content. We do volume discounts. The full discount schedule and turn-around schedule and everything is on our website, so you can definitely download the whole schedule. That’s basically the starting point.

ISAAC: The plug-in that you mentioned, is it a stand-alone or is it part of a bigger package?

JOSH MILLER: So the interactive transcript and the captions plugin we basically give you with the service. There’s no extra fee for that. The only dependency is that they work with, I think, probably 12 or 15 different media players, so there is a bit of a limitation just in terms of how you get it working. We don’t provide our own video player– we’re just giving you the ability to pull the text in alongside it, basically. But it does work with all the major streaming video players: YouTube, Vimeo, Brightcove, Ooyala, Kaltura, SoundCloud, JW Player, Flowplayer, and a whole bunch of others.

AUDIENCE: I was interested in knowing, can you get a transcript with this as well?

JOSH MILLER: Yeah, absolutely. When we process any file, we’re giving you captions, transcripts, time-synchronized transcripts. We don’t really differentiate between captions or transcripts at different levels of service. We basically just give you everything.

AUDIENCE: And then can we share that with not only the student? Because sometimes if it’s a format that’s been changed maybe for large print or something like that, we can only share it with the student, but often times the professors or departments are paying for the cost of this, and they would like to have the material themselves.

JOSH MILLER: Oh, absolutely. I mean, it’s even written in our terms that once we’ve processed the file, the output of that, the transcript and the caption files themselves, are yours. You own them. You can download them as many times as you want and do whatever you want with them. They’re

GREG: Hi, Josh, this is Greg, and I’ve got three quick questions. One is I’m unclear about the plugin. Is that something that the end user has to download into their browser as well?

JOSH MILLER: So it’s something that whoever’s in charge of publishing the content would publish on the web page along with the video. And then it’s basically there for any viewer who goes to that page. So we do have some documentation on our site that might help make it a little more clear. But basically, the viewer shouldn’t have to do anything to see it if it’s been published.

GREG: Right, so the end user is not downloading a plugin?

JOSH MILLER: Right, they don’t have to download it. We call it a plugin, but really it’s this little widget that ties into the media player.

GREG: And related to that, how well are you able to keep up with the various iterations of browsers, particularly the nightmarish versions of Explorer that are continually rolling out?

JOSH MILLER: Well put. So we do everything we can. That’s something that we’re constantly keeping an eye out for. It’s not always easy, so I can’t say we’re perfect at it, but we do pay attention to it. I believe we’re currently supporting IE 8 and higher. I think that’s where we are right now.

GREG: Great. And do you think HTML5 is going to make some of this simpler in terms of the multiple iterations of transcript files that are out there, or do you see it going in that direction?

JOSH MILLER: We would love that. It’s a little out of our hands, unfortunately. It’s really going to depend on adoption: whether people stick to the HTML5 standard, or whether platforms like YouTube and Kaltura and Brightcove are the ones that dominate, in which case they’re the ones who are going to dictate what happens.

So our argument is certainly standardization is better. Whether we’re doing the captioning, whether you’re doing the captioning, whether another service is doing the captioning, standardization is just better. It means that the viewing experience is more standardized from one video to another, and that’s good. So we definitely are big proponents of that concept, but it’s really hard to say what’s going to happen.
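[For context: the HTML5 standard Josh refers to defines a native way to attach captions, the `<track>` element pointing at a WebVTT file. A minimal sketch, with placeholder file names:]

```html
<video controls width="640">
  <source src="lecture.mp4" type="video/mp4">
  <!-- kind="captions" marks this track as captions (dialogue plus
       sound cues); the browser renders it with no extra plugin -->
  <track src="lecture.vtt" kind="captions" srclang="en"
         label="English" default>
</video>
```

[This is the kind of standardization he describes: any browser-based player that honors the spec can display the same caption file the same way.]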

GREG: OK, and last one I guess I had, any chance you’re going to be working with Panopto?

JOSH MILLER: Yes. That is going live, we’re hoping, this quarter.

GREG: Great. OK thanks, Josh.


CAMERON: This is me, Cameron. This is a question regarding copyright and the legal aspect of it. If somebody brings you a film that is dated, an older film from many, many years back, or something online– YouTube, for example– that’s done by an individual person, do you have to be concerned about the copyright when captioning media that are either dated or somebody’s own personal creation–

JOSH MILLER: Yeah, it’s a great question, especially, obviously, in the academic setting. And we will say something about it to the organization when this happens. Our stance is, look, if you’re doing this as an accommodation and that’s the only way you’re going to use it, we have no problem captioning it and helping you figure out a way to show it as a captioned video.

The challenge with copyright has to do with distribution. So if you were to all of a sudden say, hey, we’ve got a captioned version of this video, let’s go distribute it all over to a whole bunch of other schools, we would say that’s a problem. But also, we would say it’s your problem because all we’re doing is providing the captions. We’re not doing any of the distribution. We’re never going to do any distribution for you.

So that’s really what it comes down to. If it’s for an accommodation, I believe there may be some legislation that actually says that’s OK. Maybe it’s the terms of YouTube, actually, specifically that I’m thinking of. But it really does come down to ownership, attribution, and distribution.

CAMERON: All right, thank you.

GREG: Hi, Josh, Greg again. I’m sorry. I just keep thinking of things because I actually have to make these things at my college.

JOSH MILLER: No, that’s great.

GREG: We’re a Blackboard school. And of course, when you try to put things in Blackboard, sometimes they get interesting. Is your plugin generally happy in there?

JOSH MILLER: Yes, it is. It does work. We definitely have people using Blackboard and the plugins. It’s something that can be very quickly tested, certainly. I think the bigger dependency is going to be the media player you’re using in Blackboard than the plugin.

AUDIENCE: Do you have any information about the requirements for internet content to be captioned? Because I know there was supposed to be a ruling– I guess a notice of proposed rulemaking went out– and they were supposed to have something by a few months ago, but I haven’t seen anything. And yet a lot of teachers are using information on the web, and it’s not captioned, so I’m just curious about that.

JOSH MILLER: So are you referring to the CVAA? That law has had several milestones and changes to the actual requirements over the last year or so. Is that the one you’re thinking of?

AUDIENCE: I’m actually not sure. I could look and see, but offhand I don’t know if it’s exactly that.

JOSH MILLER: Sure. So that particular law doesn’t currently apply to educational content. This is the one that’s probably gotten the most press recently because it applies to very high-profile content. And there have been different milestones that have come and passed, and a few more upcoming. That’s the one that really applies to any television content that’s going up online.

In terms of academic content, the laws that probably apply the most are going to be Section 504, Section 508, and maybe the ADA. Those are the ones that I would probably pay the most attention to. But that being said, the 21st Century Communications and Video Accessibility Act, the CVAA, is likely going to get expanded at some point. When it was first proposed in the House, I think the way it was termed was that all professionally produced content must have captions. And that, obviously, is very vague. Where do you draw the line between what’s professional and what’s not? And I think you could argue that some people probably wanted it to be vague, whereas other people wanted a clear line drawn that they would not be included, because of the expense.

So that was the way it was originally proposed, but obviously it’s been cut down quite a bit to only apply right now to content that aired on television. So that means any webisodes or anything that goes straight to the web does not actually count within that law. And that wouldn’t apply right now.

But I should also add, I’m not a lawyer. So if you are concerned, I would highly recommend checking with the legal team at your school or organization, because they will give you the answer that is truly safe for you as opposed to what I can try to interpret.

CAMERON: All right, thank you, Josh. I think that’s all the questions we have. Josh, thank you for your time and we really appreciate it. I guess I see your information on there. That’s the best email and phone number to reach you at?

JOSH MILLER: Yeah, that’s great. Definitely feel free to reach out. We’ll also put up a recorded version of this in case anyone wants it sent out, and it’ll be free to take a look at. So thanks, everyone. We really appreciate the time and the questions.

CAMERON: OK, can you give a moment? Somebody has a question.


AUDIENCE: We can’t really read the email. Will you tell us what it is?

CAMERON: I can read that. It’s Josh, so J-O-S-H, at 3, the number 3, Play Media dot com– Josh@3PlayMedia.com. And his phone number is 1-617-764-5189, extension 102. Is that helpful? All right, thank you again, Josh. And if we have any other questions–

JOSH MILLER: Great, thanks so much.

CAMERON: Oh, just a second.

AUDIENCE: I have an evaluation form for each of you to fill out. It’s very brief. There are eight check boxes and three places to write things, like “this was wonderful food” or “I want to join C-AHEAD.”