Quick Start to Captioning [Transcript]
LILY BOND: OK, so welcome, everyone, and thank you for attending this webinar on closed captioning. My name is Lily Bond. We have about 30 minutes to cover the basics of captioning. I’ll try to make the presentation about 20 minutes and leave the rest of the time for your questions.
So the best way to ask questions is to type them in the Questions box at the bottom right corner of your Control Panel. We’ll keep track of them and address them all at the end. And you can always feel free to email or call us any time after the webinar.
So first of all, what are captions? We’ll take it from the very beginning. Captions are text that has been time-synchronized with the media. Captions assume that the viewer cannot hear the audio at all, so the objective is not only to convey the spoken content, but also the sound effects, speaker identification, and other non-speech elements. Basically, the objective is to convey any sound that is not visually apparent but is integral to the plot.
So captions originated in the 1980s as a result of an FCC mandate that was specifically for broadcast television. But now, as online media becomes more and more an everyday part of our lives, the need for web captions has expanded and continues to expand greatly. And as a result, captions are being applied across many different types of devices and media, especially as people become more aware of the benefits and as laws become increasingly more stringent.
So just to make sure that we’re all on the same page, let’s take a little look at some captioning terminology here. So the difference between captions and a transcript is that the transcript is not synchronized with the media. On the other hand, captions are timecoded so that they can be displayed at the right time while watching a video.
For online media, transcripts are sufficient for audio-only content, but captions are required any time there’s a video component, though that doesn’t necessarily mean there has to be a moving picture. For example, a slide show presentation with an audio track would require captions.
Captions versus subtitles. The distinction here is that captions should assume that the viewer cannot hear, whereas subtitles are intended for viewers who can hear but cannot understand the language. For that reason, captions include all relevant sound effects, and subtitles are really more about translating the content.
Closed versus open captions. The difference here is that closed captions allow the end user to turn the captions on and off. In contrast, open captions are burned into the video and cannot be turned off. With online video, people have really moved away from open captions for a lot of different reasons. The workflow is more complicated. The captions can obstruct the video content. And it sometimes requires multiple versions of the video.
So post-production versus live captions– this refers to the timing of when the captioning is done. Real-time captioning is done by live stenographers, whereas post-production captioning is done offline and usually takes a few days. So there are advantages and disadvantages to both.
As for the benefits of captioning, the primary purpose for captions and transcripts is to provide an accommodation for people with hearing disabilities, which is definitely critical. But people have discovered that there are many other benefits as well, so let’s take a look at some of those.
First of all, SEO is search engine optimization. So Google and other search engines can’t watch your video, but search engines do index caption files and transcripts, which really increases your keyword diversity and density and helps your videos be found. So we did a study with Discovery Digital Networks where they captioned videos across eight channels, and they compared them to uncaptioned videos. And as you can see, it showed a really interesting increase in views on YouTube, a larger increase over the first 14 days, and an overall increase of 7.32%.
So another benefit of captioning is better comprehension. When the audio quality isn’t great, or if the speaker has an accent, or if the content is kind of complex or esoteric, people find captions helpful for understanding the spoken word. People who know English as a second language often find captions helpful for comprehension. And then, of course, sound-sensitive environments, like libraries, or gyms, or offices, allow you to view the video without the audio.
So tying into this, the UK Office of Communications did a study across 7 and 1/2 million people in the UK, and they found that 80% of people who use closed captions were not deaf or hard of hearing. So that just shows that captions really do make video accessible to a lot more people than you would think.
So with captions, there are some search and navigation benefits that you can get. We have some plug-ins that allow you to search within the video and go directly to that point. So we’ll talk about more of those plug-ins at the end of the presentation.
Of course, another reason to caption is that captions might be required by law. The ADA, the Rehabilitation Act, the CVAA, and the FCC all have regulations. And we’ll go over that in more detail in a bit.
So you can also repurpose captions and transcripts to create more content. For instance, a transcript from a webinar could be used to create case studies, support docs, blogs, infographics. And then in terms of education, one of the schools that we work with found that 50% of students who would view the content would download the transcripts and use them for study guides.
So finally, captions are a great basis for translation. Once you have captions, you can translate those into any language and add multilingual subtitles to your videos. So that makes your video more accessible on a global scale.
All right, so let’s get into some of the accessibility laws. Section 508 and 504 are both from the Rehabilitation Act of 1973. Section 508 is a fairly broad law that requires federal communications and information technology be accessible to employees and the public. So for video, this means having closed captions. For podcasts or audio-only content, transcripts are sufficient.
And then Section 504 is basically an anti-discrimination law that requires equal access for disabled people with respect to electronic communications. Both of these laws apply to all government agencies and certain public colleges and universities that receive federal funding, such as through the Assistive Technology Act. And many states have enacted their own laws that mirror these federal laws.
The Americans with Disabilities Act is a very broad law that is made up of five titles. It was enacted in 1990, but the ADA Amendments Act of 2008 expanded and broadened the definition of disability. Title II and Title III are the ones that pertain to video accessibility and captioning. Title II is for public entities, and Title III is for commercial entities.
And so this is the area that has the most legal activity. Title III requires equal access for places of public accommodation. The grey area here is what constitutes a place of public accommodation. In the past, this was usually applied to physical structures, like requiring wheelchair ramps for accessibility purposes.
But recently, the definition has been broadened and tested against online businesses. So one case was National Association of the Deaf v. Netflix. And the court there ruled that Netflix did, in fact, qualify as a place of public accommodation.
The CVAA is the most recent accessibility law. It was passed in October of 2010, and it requires captioning for all online video that previously aired on television. So this would apply to publishers like Netflix or Hulu. And they’re thinking of expanding the legislation here to move beyond network television.
So in February of this year, the FCC came out with specific quality standards for captions. There were four parts to the ruling. In terms of caption accuracy, they said that the captions must match the spoken words to the fullest extent possible. That includes proper spelling, punctuation, and grammar. And they also said that they should convey the tone and intent of the content. There’s some leniency for live captioning there.
For caption synchronization, the captions must coincide with the spoken words. For program completeness, there were people complaining that things like the tags at the end of programs weren’t captioned. So this really requires the captions run from the beginning to the end of the program.
And then finally, on-screen caption placement says that captions should not block other important visual content. So something like in a documentary, text on the bottom of the screen should not be obscured by captions. We have a vertical caption placement that automatically detects that and would move the captions to the top of the screen.
There are many different caption formats. A lot of them are used with specific media players. The image at the top right there shows what a typical SRT caption file looks like. That’s something you would need, for instance, on YouTube. You have three caption frames, and you can see that each caption frame has a start time, an end time, and then the text within that.
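To make that frame structure concrete, here is a minimal sketch of reading an SRT file in Python. This is an illustration of the standard SRT layout (index line, timecode line, text), not 3Play Media’s actual tooling, and a production parser would handle more edge cases.

```python
import re

def parse_srt(srt_text):
    """Parse SRT caption text into a list of (start, end, text) frames.

    Timecodes are kept as strings here; a fuller implementation
    would convert them to milliseconds.
    """
    frames = []
    # Frames are separated by blank lines: index line, then a
    # "start --> end" timecode line, then one or more text lines.
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.splitlines()
        if len(lines) < 2:
            continue
        match = re.match(
            r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})",
            lines[1],
        )
        if match:
            frames.append((match.group(1), match.group(2), " ".join(lines[2:])))
    return frames

sample = """\
1
00:00:01,000 --> 00:00:04,000
Welcome to the webinar.

2
00:00:04,500 --> 00:00:07,250
[APPLAUSE]
"""

print(parse_srt(sample))
```

Note that the sound effect in the second frame is part of the caption text itself, which is how non-speech elements reach the viewer.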
Once a caption file is created, it needs to be associated with the corresponding video file. The way to do that depends on the type of media and the video platform that you’re using. So the first kind would be a sidecar file. This is something you would need for a site like YouTube, where you have to upload the caption file for each video. You could also encode your captions. So iTunes is an example of when you would actually need to encode the caption file onto the video. And then open captions are burned into the video and can’t be taken off.
So a little bit about us– we’re based in Cambridge, Massachusetts, and for the last seven years, we’ve been providing captioning and transcription services to over 1,000 customers in higher ed, government, enterprise, and media and entertainment. We’re going to go over some of our products and services. You can see we do captioning, subtitling, and transcription, but we have a few other products, as well.
So in terms of accuracy and quality, we use a multi-step review process that delivers more than 99% accuracy, even in cases of poor audio quality, multiple speakers, difficult content, or accents. So typically, 2/3 of the work is done by a computer and the rest by transcriptionists. This makes our process more efficient than other vendors. And more importantly, it really affords our transcriptionists the flexibility to spend more time on the finer details. For example, we diligently research difficult words, names, and places, and we can put more care into ensuring correct grammar and punctuation.
We’ve also done a lot of work on the operational side of the business, such as making it possible to match transcriptionist expertise to certain types of content. So we have about 700 transcriptionists on staff, and they cover a broad range of disciplines. For example, if you send us tax-related content, we can match that content with a transcriptionist who has a financial background, so that really helps ensure the accuracy.
And without exception, all of our work is done by professionally trained transcriptionists in the USA. Each transcriptionist goes through a rigorous training program before they touch a real file. So they also go through a background check and enter into a confidentiality agreement.
One thing we’ve found is that no matter how hard we try, there are just certain proper nouns and vocabulary that can be difficult to get exactly right. So we built the ability for you to make changes on the fly. So if a name is misspelled or there’s something else that you want to change, you can make that change, press Save, and your changes will immediately propagate through all of your output files, and there’s no need to reprocess anything.
So we have a lot of flexible upload and turnaround options. Once your account is set up, the next step is to upload your video content to us. There are a lot of different ways to do that. You can upload through our secure web uploader, via FTP, or through our API. And we’ve also built integrations with a lot of leading online media and video platforms and lecture capture systems, such as Brightcove, Mediasite, Kaltura, Ooyala. So if you’re using one of those platforms, then the process is even easier.
We really aim to make the captioning workflow as unobtrusive as possible. So we give you the ability to automate much of the workflow, and our captions and tools are compatible with most video players. I should also note that our account is all web-based, so there’s no software to install.
So once you’ve uploaded your content, it’ll go into processing. Standard turnaround is four business days, but we offer faster turnaround options for more urgent work. And when your files are complete, you’ll receive an email alert. You can log into your account and download your files in as many different formats as you want. And you have unlimited downloads. You can download at any time.
Of course, if you want us to delete your files after processing, we can do that, as well. You can also access the editing platform from there and make changes to your transcripts and captions. And there are a lot of other features in the account system that we’d be happy to talk about separately.
One of our plug-ins is the interactive transcript. So I mentioned this earlier. It’s available to you and included in the cost of captioning. Basically, it’s a time-synchronized transcript that would go either below or to the side of your video. And it allows you to click anywhere in the transcript and jump directly to that point in the video, or to search within the video and see time markers of everywhere that that word comes up, and you can jump directly to any of those points. So that really makes video a lot more engaging and popular for a lot of viewers.
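The search side of an interactive transcript can be sketched very simply: because every caption frame carries a start time, finding a word is just a matter of returning the times of the frames that contain it. This is a hypothetical illustration, not the actual plug-in code.

```python
def search_transcript(frames, term):
    """Return the start times of every caption frame containing `term`.

    `frames` is a list of (start_seconds, text) pairs, i.e. a
    time-synchronized transcript. Matching is case-insensitive.
    """
    term = term.lower()
    return [start for start, text in frames if term in text.lower()]

transcript = [
    (0.0, "Captions are text synchronized with the media."),
    (4.2, "Search engines index caption files."),
    (9.8, "Captions also help comprehension."),
]

print(search_transcript(transcript, "captions"))
```

A player plug-in would then render those start times as clickable markers that seek the video to each point.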
So while we’ve built a lot of tools that are self-service or automated, a lot of our success as a company is based on the fact that we give all of our customers a lot of attention. We expect to walk people through the account tools and we enjoy building relationships with people, so our customer support has been really successful and popular.
So at this point, we can open it up for questions. Again, if you type your questions into the bottom right corner, we can take a look at those. OK, we have some questions coming in over here. So first of all, someone asked if this presentation is captioned. It’s being recorded, and we will send out an email with a link to the captioned video tomorrow.
Another question is, how is this different from what Google now provides? So I assume that you’re talking about YouTube’s automatic captions. Basically, YouTube’s automatic captions are just automatic speech recognition. And so our process really takes that as a basis and then improves upon it.
Because we use the human component of all of our trained transcriptionists, we can take the automatic transcript– which usually is fairly inaccurate, if you’ve ever looked at those closely– and we make it pretty much perfect. So our transcriptionists go through a really rigorous training process. And so once they get the transcripts, they go through and they edit it. And then we have the third level of review, where people research difficult terms and provide you pretty much a perfect transcript.
So I see another question about any demo movies. We have a bunch of demos on our site. I think you’re asking particularly about how we caption to capture tone, and about using some of the plug-ins that we talked about. So all of our transcriptionists are trained to take the FCC’s standards into account, so all of the videos that we caption would convey tone. And if you’re looking for any of the plug-ins, you can go to our Interactive Transcript and Plug-In Gallery and see live examples of those.
So someone’s asking to explain how you can change the text of the closed captions, if needed. Within your account system, when you open up your file, you’ll see a little pencil icon in the upper right-hand corner. You can click on that to edit your transcripts.
So it’s a really good interface, really user-friendly. You can see that you can take any word and bold it, italicize it, change the spelling, add in a word if something was missed, something like that. And then you can save it, and then when you finalize the file, all of those edits will propagate. So you don’t have to re-upload any files.
So another question about, from an accessibility standpoint, the difference between open and closed captioning, and whether one is better than the other. So open captioning, from an accessibility standpoint, basically just means the captions can’t be turned off, so they would always be there. But closed captioning is usually preferred because it makes the video accessible to everyone: people who don’t need the captions can turn them off, and people who do need them can turn them on.
And closed captions usually provide options for on-screen placement and user control options to make it easier to view the captions the way you want to. So that would include options for the font, the size, the color, all of that kind of thing. So closed captions are definitely preferred.
So there’s another question about how we handle industry terms and specific spellings. A couple of things– all of our transcriptionists come from different backgrounds, so we really have the ability to handle difficult and very specific content, because we can route each file to the transcriptionists best equipped to handle it.
So then in terms of specific spellings and terms, what you can do is when you upload your video, you can upload a cheat sheet, basically, or a glossary, to help your transcriptionist with those terms that might be a little bit less common. And then they can use those to really give you pretty much a perfect transcript.
So there’s a question here about how you would upload with Mediasite. So we have a seamless round-trip integration with Mediasite. From within Mediasite, you would select which presentations need captions. Mediasite would send us those videos, and then we would send the captions back to Mediasite, and they would just show up automatically.
And so that’s a really great question. That’s how a lot of our integrations work. So for most of our integrations, that just really simplifies the process and allows automatic post-back, so you don’t have to worry about downloading and uploading caption files.
So I see a question about the vertical alignment that I mentioned. It asks, will we make the captions and then automatically have them jump to the top of the frame if a lower-third graphic appears? The answer is yes.
So we have vertical placement, vertical caption placement. It’s basically an algorithm that detects pixels in the lower third that the captions would obscure. And whenever we’ve detected that, we’ll move the captions to the top of the screen and then move them back once we’ve detected that the content in the bottom of the screen is gone.
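The core idea behind that detection can be sketched as follows. This is a toy illustration, not the actual algorithm: the 5% threshold and the boolean "busy pixel" representation are assumptions made for the example.

```python
def place_captions(frame_pixels, lower_third_rows, threshold=0.05):
    """Choose a caption position for one video frame.

    If more than `threshold` of the pixels in the bottom
    `lower_third_rows` rows are "busy" (likely on-screen text or
    graphics), move the captions to the top of the screen;
    otherwise keep them at the bottom.

    `frame_pixels` is a 2D list of booleans, True = busy pixel.
    """
    rows = frame_pixels[-lower_third_rows:]
    total = sum(len(row) for row in rows)
    busy = sum(sum(row) for row in rows)
    return "top" if busy / total > threshold else "bottom"

# A clean lower third keeps captions at the bottom; a lower-third
# graphic pushes them to the top until it disappears.
clean = [[False] * 8 for _ in range(9)]
graphic = [[False] * 8 for _ in range(6)] + [[True] * 8 for _ in range(3)]
print(place_captions(clean, 3), place_captions(graphic, 3))
```

Run per frame (or per caption frame) over the video, this produces exactly the "move up, then move back down" behavior described above.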
So I see a question about how you determine what file format you want. If you’re using any of our integrations, you don’t have to worry about that. That would just automatically send them back to your video in the correct format. We have a lot of resources on our site, in terms of what type of format you might need for different platforms and for different video players. I’m not going to run through all of them because there’s over 50 formats, but that information is really accessible on our website.
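As an example of how close many of these formats are to one another, SRT and WebVTT (the format used by HTML5 video players) differ mainly in a header line and the millisecond separator. Here is a minimal, illustrative converter; a real one would also handle cue settings and styling.

```python
import re

def srt_to_vtt(srt_text):
    """Convert SRT caption text to WebVTT (a minimal sketch).

    The main differences: WebVTT starts with a 'WEBVTT' header,
    and timecodes use '.' instead of ',' before the milliseconds.
    """
    body = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text.strip())
    return "WEBVTT\n\n" + body + "\n"

sample = """\
1
00:00:01,000 --> 00:00:04,000
Welcome to the webinar.
"""

print(srt_to_vtt(sample))
```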
And then another question about– are our captions pop-up or roll-on? They’re pop-up. So that just means that they will pop onto the screen and then be replaced by the next caption frame when that’s ready.
I have another question about whether you insert speaker names into transcripts or captions. So that’s one of the settings that you can set from within your account system. You can do that on a project basis or across all of your videos. So you can set your preferences for how speaker identification would show up there.
So there’s a question– if your content is uploaded through Mediasite or through another video platform, are downloads for captions still available in other formats, if desired? Yes. You can always log in to your account system in 3Play Media, and you can download any format you might need, as many times as you want, indefinitely.
So there’s a question about– do we provide any additional services related to transcription and captioning? So we do offer translation and subtitles. So once you’ve submitted your file and gotten the captions, you can order translations, and then you could download subtitle files for that, to provide multilingual subtitles. We also have tools like Clipmaker. So you can basically highlight text within your interactive transcript and create video clips that way. And we have an archive search functionality that allows you to search across an entire video library and see the exact places, across all videos, where a term might show up.
We have some other resources on the screen here that might help you with any other questions you have. We have some links to other webinars and white papers, some how-to guides and video tutorials, case studies, plans and pricing, and customer testimonials.
So I wanted to thank everyone for joining us today. As I said, a recording of this webinar with captions will be available tomorrow, and you’ll receive an email with a link to watch it. So thanks again, and hope you all have a great day.