
Quick Start to Captioning – Webinar Transcript

JOSH MILLER: Hey, everyone. Hopefully we can try this again. Sorry for the challenge there. We’ll start over, and we’ll go through quickly. Thanks to everyone for sticking around and bearing with us through that. All right. So we’ll go from the top.

My name is Josh Miller. I’m one of the co-founders of 3Play Media. We have about 30 minutes. We’re going to try to go through everything.

I wanted to spend about 15 minutes on the presentation and leave the rest of the time for your questions. The best way to ask questions is by typing them into the questions window in the bottom right corner of your control panel. We’ll keep track of them and address them all at the end.

For anyone following on Twitter, the hashtag is shown on the screen, which is #3PlayCaptioning.

So we are going to talk through an overview of closed captioning for web video. We’re going to talk about some of the applicable legislation, and we’ll talk about some of the services that we provide, including some of the process and workflow steps that are relevant.

So first, what are closed captions? Just so we’re all on the same page, captioning refers to the process of taking an audio track, transcribing it to text, and synchronizing that text with the media. Closed captions are typically located underneath a video or overlaid on top.

In addition to spoken words, captions convey all meaning, including sound effects. And this is a key difference from subtitles, with which they are often confused.

Closed captions originated in the early 1980s with an FCC mandate that applied to broadcast television. And now that online video is rapidly becoming the dominant medium, captioning laws and practices are proliferating there as well.

Some basic terminology that I think is worth going over. A transcript versus captions. A transcript is usually a text document without any time information. On the other hand, captions are time synchronized with the media. You can make captions from a transcript by breaking the text into smaller segments called caption frames and synchronizing those segments with the media such that each frame is displayed at the right time.

Captioning versus subtitling. The difference between captions and subtitles is that subtitles are intended for viewers who can hear the audio but may not understand the language. So subtitles capture all the spoken content, but not necessarily the sound effects. For web video, it’s possible to create multilingual subtitles and have multiple language tracks for a particular video. And this is a pretty important distinction from captions.

Closed versus open captioning, which you sometimes hear. The difference is that closed captions can be turned on or off by the viewer, whereas open captions are burned into the video and cannot be turned off, meaning they’re always shown. Most web video players support closed captioning. That’s what you normally see, and that’s why you’ll see a CC button, oftentimes, to turn the captions on and off.

Post production versus real time. Post production means the captioning process occurs offline and usually takes a few days to complete, so it’s after the content has been created. Whereas real time captioning is done by live captioners while the content is going live for the first time. And there are certainly advantages and disadvantages of each process, depending on the type of content.

So although captions originated with broadcast television, nowadays captions are being applied across many different types of media, especially as people become more aware of the benefits. And certainly the laws are expanding to apply to web content as well, and really becoming more stringent.

So every video player and software application actually handles captions a bit differently. So we’ve created a number of how-to guides, which you can find on our website under the How It Works section. I’d definitely encourage people to check those out because they’re a good resource for figuring out how to get captions working on a particular type of media player.

There are a few laws that are relevant to captioning requirements online. The first is Section 508, which is a fairly broad law that requires all federal electronic and information technology to be accessible to people with disabilities, including employees and the public. For video, this means that captions must be added, whereas for podcasts or audio files, a transcript is all that you really need.

Section 504 entitles people with disabilities to equal access to any program or activity that receives federal subsidy. Web-based communications for educational institutions and government agencies are covered by this as well. Sections 504 and 508 are both part of the Rehabilitation Act of 1973, although Section 508 wasn’t added until 1986. Many states have also enacted legislation similar to Sections 504 and 508, oftentimes using pretty much the same language.

The Americans with Disabilities Act of 1990 covers federal, state, and local jurisdictions. It applies to a range of domains, including employment, public entities, telecommunications, and places of public accommodation. In 2008, the law was broadened to include a definition of disability much more in line with Section 504, which resulted in more people being covered by the ADA.

The ADA is especially interesting because that’s the law that was cited in the recent Netflix lawsuit, where the NAD, the National Association of the Deaf, sued Netflix for a lack of captions on their content. Netflix argued that the ADA applies only to physical places and that the places of public accommodation clause couldn’t be applied to them, but the judge ruled that Netflix did qualify as a place of public accommodation.

So the ruling has some pretty interesting implications for anyone publishing content online, because it is a bit vague. It’s a little unclear what constitutes a place of public accommodation when it comes to web video. The best way to think about it is that it probably comes down to ease of access. But it’s definitely something to pay attention to as well.

Recently, the 21st Century Communications and Video Accessibility Act was signed into law, and more recently its requirements have really started to be put into place. It was signed into law in October of 2010, and it expands the closed caption requirements to online video that previously aired on television. This law is often referred to as the CVAA, and it basically extends the legislation that applied to broadcast television. And even more legislation in that area is being discussed right now, beyond just network television.

So as of right now, a number of milestones have already been phased in for the CVAA. The one that is coming up has to do with content edited specifically for internet distribution. So already, any full episode or full show that aired on television also has to have captions if it ends up going up online.

This upcoming milestone has to do with edited content, meaning clips. So if you go to a site like Hulu, you’ll see clips online of shows that have just aired. Basically, what this is saying is that those clips will also have to have captions.

So accessibility is clearly a growing concern. It wouldn’t be talked about with web content if it weren’t. There are a number of interesting statistics here that are worth noting. More than a billion people worldwide have a disability, and 48 million people in the US, which is about 20%, have some sort of hearing loss. So certainly, it makes sense to ask the question, why is this happening? Why is this relevant?

There are a couple of interesting storylines that come with this, which means this issue is not going away. One is just medical advances. Nowadays, we have an aging population. People can survive accidents better than they ever could before. Babies who were born premature are more likely to survive than they were before. And what this means is you’ve got a growing population that may have some form of disability and is still with us, which is great, but it means that we need to accommodate some of those needs.

The next part is that we’ve had quite a bit of war over the last 10 to 15 years. And as we get better at keeping people alive through difficult situations, the reality is that more people survive their injuries with some lasting disability. So all these things play together to expand the number of people who actually need some of these accommodations.

So next we’ll talk about some of the benefits beyond the obvious because that’s actually something that’s pretty important. So clearly, people with hearing disabilities need captions. That’s critical. That’s the first step.

Beyond that, captions can improve general comprehension of content. They can certainly remove language barriers for people who speak English as a second language. Captions can also compensate for poor audio quality or a noisy background, and they allow the media to be used in sound-sensitive environments, like an office or even a library.

From a search engine optimization point of view, captions make your video a lot more discoverable because search engines are able to actually index what’s being said. And then once your video has been found, captions allow it to be searched and reused more easily.

So this is especially important with long form video. For example, if you’re looking for something in a one-hour lecture, you can quickly search through the text instead of having to watch the whole thing. We actually have tools specifically designed for text-based search and navigation, where you can jump to a point in the video based on your search results. And then finally, transcription is essentially the first step to translating into other languages. So if you are interested in putting subtitles on your content to reach a broader audience, a transcript is actually the first step.

Very quickly, each media player on the web tends to have a slightly different caption format requirement. This is an example of an SRT file, which can be used with YouTube and a number of other web players. It’s just something to consider.

But what you see here is pretty straightforward. You’ve got time information and text information that create each caption frame. And in some form or another, that’s what each caption format will contain.
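
To make that concrete, here is a minimal, made-up SRT snippet (the timecodes and wording are purely illustrative): each caption frame gets a sequence number, a start and end time, and a line or two of caption text.

1
00:00:01,000 --> 00:00:04,200
JOSH MILLER: Hey, everyone. Hopefully
we can try this again.

2
00:00:04,700 --> 00:00:07,500
[AUDIENCE LAUGHS]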

So just a little bit about 3Play Media, and then we’ll open things up for questions. The inspiration for 3Play Media started when we were doing some work in the spoken language lab at CSAIL, which is the Computer Science and Artificial Intelligence Laboratory at MIT. We were approached by MIT OpenCourseWare with the idea of applying speech technology to captioning for a more cost-effective solution.

We quickly recognized that speech recognition alone would not suffice, but it did provide an interesting starting point. From there, we developed an innovative transcription process that uses both technology and humans, and yields high quality transcripts with time data.

So we’re constantly developing new products and new ways to use this time synchronized text. We really value the input of our customers to figure out how else we can make this output useful beyond just closed captions. Our focus is to provide premium quality transcription and captioning services. We also provide translation services.

And we really push hard on making captions more valuable. So we have a number of interactive tools that use the time synchronized transcripts to enhance search and navigation, as I mentioned. We actually have a separate webinar where we talk a little bit more about that.

And then the other part is just making this process easier. So we have a number of ways to automate the workflows. We integrate with lecture capture systems and other video platforms.

We have a multi-step review process that delivers more than 99% accuracy, even in cases of poor audio quality, multiple speakers, difficult content, or accents. Typically, about 2/3 of the work is done by computer, and the rest is cleaned up by our transcriptionists. So this makes the process more efficient, and it also affords our transcriptionists the flexibility to spend more time on the finer details, just because they’re editing rather than transcribing from scratch. So we’ll actually research difficult words, names, places, and really make sure that the proper care is put into getting the correct grammar and punctuation in place.

One thing we’ve found is that no matter how hard we try, certain proper nouns or vocabulary can be quite difficult to get exactly right. So we’ve built the ability for you to actually make changes on the fly. So if a name is misspelled or if you decide you want to even redact an entire paragraph, you can make that change with a few clicks, press Save, and the changes immediately propagate through all of the different transcript and closed caption files that we offer you. So you don’t have to actually reprocess anything. It’s something that can be done very quickly.

We’ve built a number of tools that are meant to be self-service, automated, or super simple to use whenever possible. But the reality is that we give our customers lots of attention. We expect to build relationships with everyone because that’s where we get ideas for new tools to make things even easier. So we really do value feedback, and we take it very, very seriously.

So real quick. In terms of how the process works, everything is based in a web-based account. We can handle credit card payments and invoicing. We’re very flexible in that regard. It’s a secure system, and you can control user access by inviting users with different levels of permissions and roles. So you have full control over who has access and what type of access they have.

We have a number of ways to upload content into our system. You can upload from your desktop, and we offer FTP access and an API, both unique to each account. And then certainly, we can integrate with some of the major video platforms and lecture capture systems to automate the entire workflow.

So it can be as simple as literally clicking a button to request captions, and then we can post those captions back for you automatically when they’re done. You don’t have to install any software. It’s just off the shelf and ready to go.

Another service that we offer is transcript alignment. So if you have a transcript, you can submit the transcript with the video, and we will synchronize the text to the media to create the captions and time synchronized transcripts from that. And that’s an option no matter how you upload to us.

And then once the content is ready to go, you actually will have the ability to download in a number of different caption and transcript formats. We’re actually constantly expanding this list as new standards emerge. So we’re always paying attention to what the proper formats are for different media players.

So even if you’re using, say, Mediasite or Echo360 for lecture capture, and let’s say you put that content up on YouTube as well, you’ll have all the different caption formats at your disposal. You don’t have to reprocess anything. They’re all there for you.

Translation is fully built into the system now. So once the file is complete, you can select which language you want to translate into. And there are different service levels basically relating to the certification of the translator and the quality that you’ll get back. So there are different options there, which also correspond to different price levels.

So if you’re able to do your own review, that might be a way to save some money on the process. And in this, we also have an editing interface for the translation. So you can edit any translated subtitles as you need to as well.

This is a quick little blurb about our captions plug-in, which is an embeddable plug-in that will work with a number of media players, including some media players that don’t support captions at all. So Vimeo, for example, has no captioning support. With this plug-in, you can actually embed a Vimeo player on a page, and then add this captions plug-in, so that it actually will have captions with your Vimeo player. And this is included when we provide the captioning service. It’s very easy to install, and works very, very well with a number of media players.

We recently did a soft launch of this site. We’re officially launching in a few weeks, so it’s in beta right now. youtubecaptions.com is now pretty much the easiest way to add captions to YouTube videos. It’s completely self-service.

You can sign in with your YouTube account, pick which videos you want to add captions to, pay by credit card, and then once the captions are done, we’ll post them to YouTube for you, so you don’t have to do anything else. So it’s very, very easy to use. And it can absolutely be tested now, and then we’ll be officially live in a couple weeks.

So there are a couple resources here if you want to take a look. We’re going to spend about 10 seconds pulling some questions together. Please do feel free to reach out if you do have other questions. Happy to answer anything that comes up.

And I should note that this webinar is being recorded. We will be posting it on our website. So you’ll receive an email with a link to that recorded version. So definitely look out for that. And that recorded version will have captions and the transcript with it as well.

An interesting question about the use of captions with YouTube and how that works. The question basically centered on copyright concerns with YouTube if you don’t own the YouTube video. So if you do not own the YouTube video, technically, you do not have access to add captions to that video on YouTube.

Technically, you could use our captions plug-in. So there are ways to download the video from YouTube. We could create captions, and you could embed that YouTube video with our captions plug-in to provide accommodation.

It does depend on how you’re using that video. It’s the same copyright laws that apply, really, with any YouTube video. You have the right to embed it on a page if you’re using it the right way. Obviously, if you’re trying to redistribute it and try to profit from it, that would be an issue. Also, I should say, trying to get search engine optimization credit for that video would not be looked upon well.

However, if you’re adding captions purely for accommodation purposes– so for example, if you’re using that YouTube video in a class, and you wanted to make sure it was accessible– that would be absolutely fine. So it’s basically the same terms that really apply to any YouTube video when it comes to captions. It’s really not all that different. It’s a great question that does come up.

Question about the lecture capture systems that we support. So we are currently integrated with Echo360, Tegrity, Mediasite. And we are pending integration with Panopto as well, so that’ll be live soon. So those are the ones that we currently integrate with. And our captions also work with a number of other capture systems, such as Collaborate, Camtasia, and a few others. So there definitely are ways to add captions to some of the others as well.

One bit of clarification, based on a couple questions. We actually don’t offer any live captioning. Everything we do is based on recorded content. There is a pretty big technology difference in the way that’s supported. And that really does depend on the media player you’re using, and what capabilities it has.

Question about the youtube.com pricing in relation to 3Play pricing. As of right now, it is exactly the same. So every turnaround option is also there. So you could get it as fast as same day, eight hour turnaround if you wanted. But it’s all built in. And just to clarify, it’s youtubecaptions.com.

There were actually a couple of questions about new video files with captions encoded into them. The reason that is important is for a number of mobile devices, or if you need to show the captions offline. There is an option within our account system to encode your captions into the media file that you’ve supplied. And there are even a number of different dimension or resolution settings that you can pick from.

So for iPhones, for example, the captions actually have to be encoded into the video. Otherwise, the captions won’t show up, because it defaults to a full screen QuickTime player. It basically operates unlike any web media player. So that is an option that we provide.
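
To give a rough sense of what encoding captions into a video file involves, outside of our own system, an open-source tool like ffmpeg can multiplex an SRT file into an MP4 as an embedded caption track that iOS players can display. The file names here are just placeholders:

# keep the existing video and audio streams and add the SRT as an embedded mov_text caption track
ffmpeg -i lecture.mp4 -i lecture.srt -map 0:v -map 0:a -map 1:s \
  -c:v copy -c:a copy -c:s mov_text lecture_captioned.mp4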

But the most common method for publishing captions, at least with web media, is that the captions remain as a separate file, which is often called a sidecar file. And then the media player knows to reference those captions when it plays the video.
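
As an illustration of the sidecar approach, an HTML5 video player references the caption file alongside the video roughly like this, assuming a WebVTT caption file and placeholder file names:

<video controls width="640">
  <source src="lecture.mp4" type="video/mp4">
  <!-- the player loads this sidecar caption file and offers a CC toggle -->
  <track kind="captions" src="lecture.vtt" srclang="en" label="English" default>
</video>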

There were a couple of questions about access to transcripts and captions with our service. We give you access to the captions and the transcripts that we’ve created essentially indefinitely. So as long as you’re using the service, you will have access to all of the output files that we’ve created for you.

A couple of questions about payment, and how that works with schools. We have a few different options. We can take a PO, or you can pay by credit card right in the account system. There are options to pay as you go, or you can purchase a bucket of, say, 100 or 200 hours of captioning and get a discount. Because you prepurchased it, you get a lower rate.

So there are actually a number of different ways to pay for the service. We’re pretty flexible in that regard, and we’ve actually probably worked with every possible set-up at this point. And that’s not a problem.

There were a couple of questions about screen reader accessibility with media players. That’s important because if you do need to have a screen reader active, the media player really has to be accessible for the captions to work as well. Every media player is a little bit different. YouTube, for example, is one of the more friendly media players when it comes to accessibility. JW Player is also a good one when it comes to accessibility.

Some of the big video platforms have an accessible version of their players. But it’s definitely something that’s worth checking out. The two that we’ve seen the most as being the easiest to get up and running in terms of screen reader accessibility tend to be YouTube and JW.

There were a number of questions about pricing. The pricing’s actually on our website. So if you just go to 3playmedia.com, there is a link for the pricing. So I’d definitely encourage you to check that out. There’s a full schedule of volume discounts, as well as the translation pricing. So it’s definitely worth checking out there.

With that, I’m going to actually stop, since we ran over time a little bit. Thanks, everyone, for joining us. If you do have other questions, please do feel free to reach out. We’re happy to speak more. But really appreciate everyone’s time and questions today.

And like I said, this will be recorded and posted online shortly. Also, if you did post any questions just now that weren’t answered, we will reach out to you and be in touch about that. Thanks very much.
