
Quick Start to Captioning [Transcript]

EMILY GRIFFIN: All right. Welcome, everyone, and thank you for joining this webinar, entitled “Quick Start to Captioning.” I’m Emily Griffin from 3Play Media, and I’ll be presenting today. We have about 20 minutes for this presentation, followed by 10 minutes for Q&A with myself and my colleague Lily Bond.

For an agenda today, I’m going to go through some captioning basics, followed by the benefits of closed captioning. I’m going to talk about the accessibility laws and pertinent lawsuits, go through captioning services and tools, and then again I’ll save 10 minutes at the end for questions and polls.

We do have a quick poll for you to get started, actually, before we move on to the rest of this webinar. You’ll see a question pop up shortly asking, how do you foresee your captioning needs changing in the next year? And you can select increasing significantly, increasing moderately, staying the same, or decreasing. I’ll give you a few seconds to answer that, and then we’ll see the results.

Great. Thanks, everyone. So as you can see, the majority of folks expect their captioning needs to increase either significantly or moderately in the next year. And we find that that’s on trend with what we’ve seen in this industry in general.

All right, so moving on. Let’s just start with the basics. Let’s answer the question, what are captions? Captions are text that have been time-synchronized with the media so that they can be read while you’re watching the video. Captions assume the viewer can’t hear the audio, so captions need to convey not only speech, but also sound effects, speaker identification, and other nonspeech elements.

An example of this would be if there are keys jingling offscreen behind a door, and you want to add the sound effect “keys jingling” because it’s relevant to the plot. But if there are keys jingling in someone’s pocket walking down the street, then it’s not relevant, and you wouldn’t need to include that in captions.

In the US, closed captions originated as an FCC mandate for broadcast media in the 1980s. But the rulings for captions have really expanded with the proliferation of online video. So today you’ll find captioned video in all kinds of industries and on different devices.

Now let’s just explain some terminology. While captions are time-synchronized with the media, a transcript is just a plain text version of what has been spoken. So transcripts are sufficient for audio-only media, but you would need captions for any media with timed visuals like a video or a recorded PowerPoint presentation.

Captions versus subtitles– so captions assume that the viewer cannot hear the audio, whereas subtitles assume that the viewer can’t understand the language being spoken. So subtitles are really more about translating the content, and captions are more about making the content accessible to people who are deaf or hard of hearing.

You’ve probably heard of closed captions, but is there such a thing as open captions? And the answer is yes. So closed captions have much more flexibility with how you display them on screen. You can toggle them on or off. You might be able to customize their size, font, color, or general appearance, depending on your media player, whereas open captions are burned directly into the video file and can’t be turned off or customized.

Prerecorded versus live captioning refers to the timing of when the captions are created. Prerecorded captioning is made in postproduction, whereas live captioning happens in real time as the video is being streamed. For that, you would need a stenographer transcribing the words as they’re spoken, as in a newscast.

There are many different kinds of caption formats. This lists the more common caption formats, along with their use cases.

Now, on the top right is an example of an SRT caption file. So that would be used for something like YouTube or Facebook videos. Just from looking at it, you can pretty much figure out what the file is saying. It’s the number of the caption frame in the sequence followed by the time codes of when the caption frame should begin and end, followed by the text that should be displayed on screen. So that’s a pretty user-friendly caption format.
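To make that concrete, here is a minimal Python sketch of building an SRT file by hand. The cue text and timestamps are made up for illustration, but the structure (sequence number, time range, text) follows the format described above.

```python
# A minimal sketch of writing SRT caption frames by hand.
# Cue text here is hypothetical; timestamps use SRT's HH:MM:SS,mmm format.
def format_srt_cue(index, start, end, text):
    """Build one SRT caption frame: sequence number, time range, text."""
    return f"{index}\n{start} --> {end}\n{text}\n"

cues = [
    (1, "00:00:01,000", "00:00:03,500", "Welcome, everyone."),
    (2, "00:00:03,600", "00:00:06,200", "[KEYS JINGLING]"),
]

# Frames are separated by a blank line, which is how players tell
# one caption frame from the next.
srt_content = "\n".join(format_srt_cue(*cue) for cue in cues)
print(srt_content)
```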

Compare that to the example right underneath it, which is in the SCC format. This format uses hex frames and is, as you can see, much more difficult to understand and to create from scratch.

Once you have your caption file in the format you need, you can then associate it with the video. And there are a few different ways to do that. The first and most common way is to upload captions as a sidecar file. And a sidecar file is a file that your video references as it’s playing. Depending on your video player, you usually just upload your caption file onto your video, and you’re all set.

Another option is caption encoding, which sort of bundles the data from your caption file into your video file so it’s all contained in a single file. You would need to do this for something like iTunes, which requires closed captioning on their videos, but doesn’t support sidecar files.

And then the third option is to do open captions, which are burned in and can’t be turned off. If your captioning workflow involves an integration with your video player or your VMS, then this whole step becomes trivial because it will be automated.

There are lots of benefits to captioning video. The first, obviously, is accessibility for the deaf and hard of hearing. There are over 48 million Americans living with hearing loss, which is about 20% of the American population. And this number is growing due to medical advancements which allow people to live longer and due to a decade of war, which can be very damaging to veterans’ hearing.

On a more positive note, captions provide better comprehension for everyone. A study by the Office of Communications in the UK found that 80% of people who use closed captions are not actually deaf or hard of hearing at all. Captions can improve comprehension in situations where the speaker has an accent, if the content is difficult to understand, if there is background noise obscuring the speech, or if the viewer knows English as a second language.

And captions also provide flexibility to view a video in sound-sensitive environments such as at the office, in a gym, at a library, or on a noisy train. And this is very important when it comes to mobile video viewing, especially on social media. This example from BuzzFeed Video shows a captioned video on Facebook performing very well. And in a recent study, Facebook found that captions improved video watch time by 12%. When you think about how many people are watching videos on their phones during their commute, that makes a lot of sense.

Another benefit of captioning is that it provides a great basis for video search. MIT surveyed their students and found that 97% of them had a better user experience when they watched video with a searchable interactive transcript. Video search basically allows you to search within a video and jump directly to the word you’re looking for. People are already used to searching for what they want with a search engine and finding it immediately, and this makes that possible within a video.

Another benefit of captioning is that it improves your SEO, or Search Engine Optimization. Google can’t watch a video, so adding a transcript or caption file to your video provides a lot more context for Google to be able to classify your content correctly. So it leads to more inbound traffic.

Discovery Digital Networks did a study on their YouTube channel where they captioned half of their library and left the other half uncaptioned. And they found that the videos that they captioned had a 7.3% increase in views.

Captions also make your content reusable. The University of Washington found that 50% of their students were repurposing transcripts of lectures as study guides. From a content marketing standpoint, video transcripts can be used to create case studies, blog posts, white papers, or infographics.

Video transcription is the first step in creating subtitles in other languages. So you can translate your English transcript to make your video accessible to an international audience.

And finally, captions may be required by law. So let’s take a look at some of the US accessibility laws now.

The first major accessibility law in the US was the Rehabilitation Act of 1973. There are two sections of the Rehabilitation Act that apply to accessibility, and specifically to captioning. Section 504 is a broad antidiscrimination law that requires equal access for individuals with disabilities. And then Section 508 was introduced in 1998 to require Federal Communications and IT be made accessible. So closed captioning requirements are written directly into Section 508 and are often applied to Section 504.

Section 504 applies both to federal and federally-funded programming, and Section 508 only applies to federal programs. But any state governments that receive funding from the Assistive Technology Act are required to comply with Section 508. So that law will extend to state-funded organizations like public colleges and universities.

Next is the Americans with Disabilities Act of 1990, or the ADA. The ADA has five sections, and Title II and Title III apply to closed captioning. Title II applies to state and local government entities. And Title III applies to places of public accommodation, which can be publicly or privately owned.

So the big question with Title III of the ADA is what constitutes a place of public accommodation. In the past, this was applied to only physical places, such as the requirement to add wheelchair ramps and elevators to buildings. But more and more it’s been tested against online businesses. And more courts are ruling that the ADA applies to private companies who operate online.

And the last US accessibility law that we’ll cover is the 21st Century Communications and Video Accessibility Act, or CVAA, which was enacted in 2010. This explicitly covers online video, specifically online video that previously aired on US television with captions. As of this year, video clips from these programs posted online are also covered. The FCC recently clarified that video producers are responsible for supplying accurate captions for their content, and video distributors are responsible for ensuring those captions are delivered and displayed correctly online.

In 2014, the FCC set caption quality requirements for broadcasters. And these are really the only legal standard for caption quality and accuracy that exist in US law. There are other standards like WCAG, but those aren’t legally binding in the US currently.

So the four caption quality standards that the FCC laid out are– caption accuracy, meaning that the captions must match the spoken words to the fullest extent possible and include nonverbal information. And this does allow some leniency for live captioning. Captions must be synchronized with their spoken words and sounds to the greatest extent possible. Program completeness requires that captions run from the beginning to the end of the program. And finally, captions should not block other important visual content. For example, if you’re watching a documentary and the bottom third of the screen has a banner with the name of the speaker, then you would need to move the captions to the top of the screen to avoid obscuring that information.

Now let’s review a few key lawsuits that affect closed captioning. Netflix was sued by the National Association of the Deaf in 2012 for failing to provide closed captions for most of its movies and television shows that were streaming online. This was the first time that Title III of the ADA had been applied to internet-only businesses. Netflix argued that they don’t qualify as a place of public accommodation under the ADA, but the plaintiffs’ lawyers, some of whom were involved in drafting the ADA, argued that the ADA was designed to apply to new technologies as they emerged.

And the court ended up ruling in favor of the NAD, saying that excluding online-only businesses would betray the spirit of the ADA. In the settlement, Netflix agreed to caption 100% of its streaming content. And this case set a precedent for companies that were streaming video across industries, including entertainment, education, health care, and corporate online training content. On that last note, FedEx Ground was actually sued recently for not providing closed captions on their employee training videos.

Just last year, Harvard and MIT were sued by the NAD for providing inaccessible web video that was either not captioned or was inadequately captioned. So this is the first case outside of the entertainment industry in which caption accuracy has been considered in determining legal ramifications. Harvard and MIT were using YouTube’s automatic captioning services on some of their videos, and this case really reinforces that automatic captioning is not sufficient. Automatic captions can’t provide an equivalent alternative for deaf and hard of hearing viewers, so they are not ADA-compliant, generally.

Arlene B. Mayerson, who was a lawyer for the NAD and one of the people who helped write the ADA, said that the ADA was meant to grow and expand and not to deny or limit the accommodations available at the time. So the argument here was that educational online videos are a public accommodation regardless of whether or not the ADA originally applied to just physical structures. Arlene said, quote, “If you are a hearing person, you are welcomed into a world of lifelong learning through access to a community offering videos on virtually any topic imaginable, from climate change to world history or the arts. No captions is like no ramp for people in wheelchairs, or signs stating that people with disabilities are not welcome,” end quote.

In June of last year, the Department of Justice submitted a statement of interest supporting the plaintiffs’ position that Harvard and MIT’s free online courses and lectures discriminate against the deaf and hard of hearing by failing to provide equal access in the form of captions. The final argument was held in September, and we’re still waiting on the decision. But the outcome will have huge implications for captioning in higher education.

Just a little bit about us, 3Play Media. We are a captioning, transcription, and translation company based in Boston. And our goal is really to simplify the whole process of captioning. We have over 1,600 customers in education, media, and entertainment, corporate markets, and the government. And some of the ways that we make it easier for you to caption your video include a cloud-based account system to manage your files, flexible turnaround options, including automated workflows, over 50 different caption file formats, the ability to import and manage existing captions, video search tools, and Spanish captioning. And I’m going to go through some of those tools right now.

First of all, accuracy and quality are extremely important to us. We comply with all the FCC’s quality standards and best practices, and we guarantee at least 99% accuracy on our captions. And we tend to average higher than that, about 99.6% accuracy.

We have a three-step process for captioning. We first put the content through automatic speech recognition, which gives us a rough transcript that is timecoded to the word but obviously not adequate in accuracy.

So from there, it gets reviewed by one of our 1,000-plus professionally certified transcriptionists who are all US-based. And they go through and clean up the automatic transcript. They research difficult terms, proper names. They can flag anything they’re not sure about.

And then the final step is quality control: a QA person reviews the editor’s work and researches any of the flags that the editor made to make sure you end up with a flawless transcript. All of our transcriptionists and QA personnel go through a rigorous certification program before they ever touch a file.

And we actually now have algorithms that match editors with expertise to corresponding content. So if one of our transcriptionists was a former developer, we could match them to STEM content. A former nurse might review a medical video. And this not only makes the editors happier, it cuts down on review time and ensures greater accuracy.

You can also upload cheat sheets or vocabulary lists for specific terminology in your file or to your account in general. So that way, transcriptionists can quickly access any specific words that might be difficult for them to verify. And again, that really helps to ensure accuracy.

We offer flexible upload and turnaround options. On our secure online account system, you can upload your videos from your computer, submit videos via links, FTP, or with an API, or use one of our integrations. Our account system is all web-based, so there’s nothing to install. And you can access it from any device connected to the internet.

Our standard turnaround time is four business days, but we have options for more urgent or more extended deadlines. And we now offer a two-hour turnaround, which is the fastest option in the industry. If you’re not in a rush, you can select the extended turnaround for a discount.

We integrate with leading platforms for video, video players, lecture capture systems– all the ones listed on your screen right now– to make the captioning process as automated as possible. So companies like Brightcove, Kaltura, YouTube, Mediasite, or Panopto allow you to tag video files directly from their platform. And that submits the videos to 3Play for transcription. We’ll then send the completed caption files back directly to your video, so they’ll just show up automatically on your video platform. That really automates the process so you don’t even have to think about captioning.

We offer over 50 different output formats, as I mentioned before. Some are listed here. When your captions are ready, you’ll receive an email alert. And from there, you have unlimited downloads of your captions in whichever format you need.

And once the captions are completed, you can easily make edits to them yourself. And when you finalize, the edits will propagate to all outputs and all plugins without having to reprocess your captions. And this is really great for any quick fixes that you need to make yourself.

You can import captions into our account system so that you can securely manage all of your assets in one place. And then you’ll have access to all of our tools, plugins, output formats, and integrations, as well as the ability to translate those existing captions into other languages.

Our automated transcript alignment service is kind of the opposite of caption import. If you already have a transcript for your video, you can upload your video and transcript to our system, and we’ll automatically timecode that transcript to create caption files for you. And again, you will have access to all of our tools, plugins, output formats, et cetera.
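To illustrate the idea (this is not 3Play’s actual method, which relies on speech recognition to place each word precisely), here is a toy Python sketch that turns a plain transcript plus a known video duration into evenly spaced, timecoded cues:

```python
# Toy sketch of transcript alignment: given a plain transcript and the
# video duration, assign evenly spaced timecodes to fixed-size chunks of
# words. A real alignment service anchors each word to where it is
# actually spoken; this only illustrates the transformation from plain
# text to timed caption cues.
def naive_align(transcript, duration_seconds, words_per_cue=8):
    words = transcript.split()
    chunks = [words[i:i + words_per_cue]
              for i in range(0, len(words), words_per_cue)]
    cue_length = duration_seconds / len(chunks)
    cues = []
    for i, chunk in enumerate(chunks):
        start = round(i * cue_length, 2)
        end = round((i + 1) * cue_length, 2)
        cues.append((start, end, " ".join(chunk)))
    return cues

cues = naive_align("thank you for joining this webinar on captioning "
                   "we have twenty minutes for the presentation", 10.0)
for start, end, text in cues:
    print(f"{start:>5.2f} -> {end:>5.2f}  {text}")
```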

This is one of my favorite tools, an interactive transcript. It displays along with your video and highlights the words as they’re spoken. It’s a really great way to keep a viewer engaged as they watch your video. And what’s even cooler is that when you click on a word in the transcript, your video jumps to that point. It makes it really easy to skim a video or navigate to a certain part.

Just to show you an example of how the interactive transcript can be used, let’s take a look at a real-life example. We’re going to take a look at the MIT Infinite History website. This is a library of oral history interviews that MIT transcribed and has cataloged. And they’ve made excellent use of our interactive transcripts and our archive search feature.

So if you can see my screen here, MIT has a library of videos. I could start playing this one right now. And I’ll scroll down just a little bit so you can see the interactive transcript that displays below. As it’s playing, it’s highlighting each word. If I wanted to, I could scroll ahead, click on a word, and it would take us right to that point in the video.

And MIT has also customized how this displays. You can customize the font and the color and all that. So this is very customizable.

Now scrolling back up a bit, I want to show you my other favorite plugin, Archive Search. So to the right here is the Archive Search bar where I could search for a keyword and find anywhere that it appears in this entire video library. So let’s look at technology. I bet there’s a lot of technology in this. And there is.

So the results show all the videos where that word appears in the transcript. It even shows the number of times it appears. And then I can look at it in further detail to see exactly when and where it appears. Please forgive my internet if it’s a little bit laggy. But this will soon display all the different instances where it appears, and I can click and go to that point in the video. There we go. So that’s a look at some of our interactive plugins.

And then finally, I just wanted to mention that our customers are extremely important to us. And a lot of our success as a company has been based on the fact that we give all of our customers a lot of attention. And the word cloud on the right is a compilation of the words our customers used to describe us in a customer satisfaction survey we ran. And as you can see, support really stands out as one of the key aspects that our customers are happy with.

So with that, we’ll move on to Q&A. And as we’re compiling the first questions, I just have one more poll question for you. The poll should show on your screen shortly. And it will read, what is your greatest barrier to implementing captions? You can select cost or budget, resource time, technical challenges, or not sure I need to. So I’ll just give you a few seconds to fill that out.

OK, so the results should show on your screen. No surprises here. Cost and budget are usually the biggest concern. But resources and time are also a consideration, especially if you are considering doing any captioning yourself. You’ll quickly discover that it can be quite time-consuming.

So with that, let’s jump into Q&A. So the first question we’ll start with is, do we need both captions and a transcript? So legally– I’m not sure who you are, but if you were an entertainment entity, you would definitely need captions, according to the FCC. If your organization needs to comply with Section 508, 504, or the ADA as I mentioned before, you absolutely need captions for your video content.

I would definitely recommend that if you’re going to go through the trouble of having your video transcribed and captioned, why not make both available? There are people that prefer to print out a transcript and read it separately. There are people that prefer to watch videos with captions. So if you’re going to do it in the first place, why not? That’s what I say.

Up next, what is the cost of your captioning service? So the quick answer to that is that for just pure transcription and captioning services, we start at $2.50 per minute and go down from there if you order in bulk. That’s with our pro account. And that price does include unlimited downloads of your video transcript and captions in whatever format you need.

If you want to learn more about our pricing and discounts as well as the price for our faster turnaround options, translation, or using any of our plugins, I would encourage you to go to info.3playmedia.com/pricing, and you’ll download our pricing form from there. You could also speak with our sales representative. They would be happy to talk to you more about it. Again, that’s info.3playmedia.com/pricing.

Up next, how do I know if I legally have to caption my videos? Good question. I would definitely advise you to consult your lawyer or your organization’s legal team. They should definitely know about that. If your organization has any sort of compliance office, they should be able to answer that question. But if you don’t have those resources or just want to be more educated about it yourself, I would encourage you to read up on the laws. Our website actually has a page devoted to accessibility laws and specifically how they apply to captioning at different institutions in the US and also in Canada, the UK, Australia, and New Zealand. So those are free. You can go to our website and download those, and hopefully that can help you determine where you stand as well.

Next question. If an instructor is using a YouTube video in their Moodle course, can it be captioned by 3Play Media if it was created by someone else? The answer is yes. What I would not recommend is any sort of illegal downloading of the YouTube video. I know that a lot of colleges do this, and that can get very hairy with respect to copyright law.

The best recommendation we have for captioning your videos safely is to use a captions plugin, which we offer. Basically what it would do is allow you to embed the original YouTube video that the instructor wants to share. And along with that embed code, you would embed this captions plugin. And the plugin would display captions for that video. It would just display it timed correctly as the video plays. So you’re not altering the original video file. You’re not tampering with that in any way. You’re just making captions available to the student.

Interesting– do legal requirements apply retroactively, like to our existing videos? This really depends on the context of who you are and where you’re publishing your videos. But in many instances, yes, the requirements do apply retroactively. For example, if you’re in the entertainment industry, the CVAA does cover archival video content. So even content that aired years ago, if it’s posted online and was originally broadcast on US television with captions, it still needs captions online. If you’re an educational institution, the age of the video shouldn’t matter: as long as the video is in use and you’re covered by the ADA or Section 508 or 504, your content should be captioned.

Up next, once captioned or translated into English, can students in different countries choose the language in which the captions appear? Yes, though this does depend on your media player. Just about any good media player should offer controls for the user to select a given language if there are multiple language or caption tracks on the video.

And you’ve probably encountered this yourself. Some YouTube videos, if you click on the caption settings, there are options to watch the videos in other languages. You could definitely see that on TED.com videos, because they actually crowdsource a lot of their subtitling and try to offer their videos in different languages. So you can see that in action there.

Next question, I’m having challenges trying to convince my faculty members that captioning is critical for students. Faculty claim that since their students are not deaf, they don’t need to do this. How might I convince them that captioning is required by law on their videos and screencasts?

This is a very important question, and one that a lot of educators have, and unfortunately struggle with. The best advice that I’ve found is that the buy-in and the messaging should really come from the top. The leadership in your organization should craft and adopt an accessibility policy and have a clear and public accessibility statement. And it really should all trickle down from there. I mean, it’s hard for faculty to argue with the higher-ups, like the dean or the president, or with an accessibility policy that’s plastered everywhere on your site. They really have no standing to fight that. So that would be my recommendation. And of course you can also send them our white papers, which show that their organization might be liable.

Up next, does transcript alignment cost less than full closed captioning? Yes, it does. It usually costs about half as much. So that answers your question.

How do you handle speaker identification? This really depends. You can choose to have speakers identified just by a hash mark. Our default is to, whenever possible, supply a full name in all caps. So for example, the transcript for this webinar will probably read EMILY in all caps, followed by a colon, and then what I’m saying. You can supply specific names with your content for how you want us to refer to your speaker IDs, or you can always change that yourself.

Next question is how we handle transcribing and captioning STEM content. So we definitely do transcribe a lot of STEM content. And as with anything else, we guarantee that 99%-plus accuracy rate. With this type of content, again, you might really benefit from uploading a glossary for any difficult vocabulary or proper names. We do have a special setting for transcribing mathematical content and for how to caption equations that are written out in text.

And also as I mentioned briefly before, we do try to match our transcript editors with subject matter that they would have expertise in. So we would try to match folks with a background in STEM with your STEM content to increase accuracy in those cases.

How do you know what format you need for the caption file? That will vary depending on where you need your video to end up. Every video management system, program, or player should have information in their support documents about which caption formats work for them. And on our website, we actually have a whole section of how-tos– how to add captions to YouTube, how to add captions to Facebook, and so on for various programs. So you might see if the program you’re using is listed there. Those pages have easy-to-follow instructions for the whole process and guidance on which caption format to use.
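That kind of lookup can be captured in a simple table. The entries below are illustrative examples only, not an authoritative list; always confirm against the platform’s own support documentation.

```python
# Hypothetical lookup table mapping a few common platforms to caption
# formats they are commonly said to accept. Example values only; check
# each platform's support docs before relying on this.
PLATFORM_FORMATS = {
    "youtube": ["srt", "vtt", "sbv"],
    "facebook": ["srt"],
    "itunes": ["scc"],       # requires encoding, not a sidecar file
    "html5_video": ["vtt"],
}

def formats_for(platform):
    """Return the caption formats listed for a platform, or [] if unknown."""
    return PLATFORM_FORMATS.get(platform.lower(), [])

print(formats_for("YouTube"))  # prints ['srt', 'vtt', 'sbv']
```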

And that is all we have time for. So just want to remind you that you will receive an email tomorrow with a link to a recording of this webinar, as well as a transcript and slide deck. And with that, I’d like to wish you all a great rest of your day.