Closed Captioning Legal Requirements, Best Practices, and Workflows for Media and Entertainment [Transcript]

LILY BOND: Welcome everyone, and thank you for attending this webinar entitled “Closed Captioning Legal Requirements, Best Practices and Workflows for Media and Entertainment.” I’m Lily Bond from 3Play Media, and our presenters for this webinar will be Tim Sale, the Director of Technical Sales from thePlatform, Josh Miller, the VP of Business Development at 3Play Media, and Mo Zhu, a software engineer at 3Play Media. So I’m briefly going to go over the agenda.

After this intro, I’m just going to hand it over to Josh, who’s going to go over closed captioning legal requirements and best practices. Then Tim is going to talk about closed captioning with thePlatform. And finally, Mo will take us through a demo of thePlatform captioning workflow. And that should take us through about half an hour, and then we’ll have about 15 minutes for questions.

So the best way to ask questions is to type them directly into the box at the bottom right of your Control Panel. And we encourage you to submit your questions throughout the webinar, and then we'll go through all of them at the end. And before we get into it, I just want to mention that a recording of this webinar, with captions, will be available tomorrow, and you'll get an email with a link to that. So I'm going to hand it over to Josh.

JOSH MILLER: Thanks, Lily. So just a quick bit of background about who we are. We provide captioning and subtitling services with a focus on simplifying the process, and making what has been an extremely labor-intensive service more like a web service for our customers. The inspiration for 3Play Media came when we were doing some work in the Spoken Language Lab at CSAIL, the Computer Science and Artificial Intelligence Laboratory at MIT, back in 2007. We now have over 1,000 customers in media, entertainment, education, enterprise, and government.

So just real quick, to make sure we're all on the same page when we talk about closed captions– captioning refers to the process of taking an audio track, transcribing it to text, and synchronizing it with the media. Closed captions are typically located underneath a video or overlaid on top. In addition to spoken words, captions should convey all meaning and include sound effects.

This actually is a key difference from subtitles, which are more for language purposes. Closed captions originated in the early 1980s with an FCC mandate that applied to broadcast television. Now that online video is rapidly becoming the dominant medium, captioning laws and practices are being applied to web content as well.

There are many different caption formats that get used, depending on the actual workflow or media player. The call-outs on the right give a glimpse of how differently caption files can be built. An SRT file is a very basic web format that just has a start and end time code, along with the text for a given frame. The time codes are based on the media file run time.
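For reference, here's what a minimal SRT cue looks like– a hypothetical two-frame example with the cue number, the start and end run times, and the caption text:

    1
    00:00:01,000 --> 00:00:03,500
    Welcome everyone, and thank you
    for attending this webinar.

    2
    00:00:03,600 --> 00:00:06,000
    [APPLAUSE]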

And then you see the SCC file below that, which uses a SMPTE time code. SCC files also support additional styling parameters, including caption frame placement. Most web delivery methods take a caption file as a separate sidecar file, as opposed to having the captions encoded into the video file itself.
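For comparison, here's the general shape of an SCC file– a hypothetical fragment where the time codes are SMPTE (hours:minutes:seconds;frames) and the hex byte pairs carry control codes (load, position, display, erase) plus the character data. The byte pairs below are placeholders showing the shape of the format, not a decodable caption:

    Scenarist_SCC V1.0

    00:00:01;00  9420 9452 97a1 c8e5 6c6c 942f
    00:00:03;15  942c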

With cases like our integration with thePlatform, you wouldn’t have to worry about these details at all. You wouldn’t have to be concerned with which file format to use. All that gets taken care of in the background for you.

So I quickly want to talk about some of the legal requirements when it comes to closed captioning, because they've definitely changed quite a bit over the last couple years with regards to web content. There are a couple applicable laws that have brought accessibility into the news quite a bit lately. The first is the 21st Century Communications and Video Accessibility Act, which is often referred to as the CVAA. That was signed into law in October of 2010.

The CVAA expands closed caption requirements for all online video that previously aired on television, and this is basically expanding the original FCC broadcast law. As of right now, full-length titles that aired on television with captions also need to have captions online. That law will soon cover clips, and there are also a number of exemptions for the online captioning requirement, which we’ll talk about.

The ADA is the Americans with Disabilities Act of 1990, which covers state and local jurisdictions. This applies to a range of domains. The two that are most applicable to the online video space are public entities and commercial entities, including places of public accommodation. The ADA is the law that's been cited in a number of recent lawsuits, including ones involving Netflix, CNN, and FedEx.

The case law suggests that large video repositories that are readily accessed by the public can be considered places of public accommodation, and therefore must be accessible to everyone. So the flip side– the FedEx case is a little bit different, because it’s work-related content. It’s being distributed to employees for work purposes. Therefore, that must also be accessible. Certainly either way, this could have some really interesting implications for other video sites and enterprises across the internet.

The CVAA has been broken out into a number of milestones. So this timeline is for content owners to implement processes to adhere to the new captioning rules. The milestone that just passed made it such that any content that had aired on television with captions must also have captions within 45 days of being posted online. That lag time is scheduled to shrink over the next couple years, meaning content that gets posted online will have to have captions added pretty quickly in order to be compliant.

There are some new regulations that will be going into effect, or already are in effect, that maybe aren't talked about quite as much, the first one having to do with video clips. Clips will be included within the corpus of content that's required to be captioned online, starting January of 2016. "Clips" are defined as smaller segments of content that either came from a larger show or film, or went straight to the web.

So this first milestone addresses straight lift clips, meaning a single contiguous segment of a larger show. And then in 2017, montages, or mash-ups, will be included as well. None of these clip milestones have to do with user-generated content.

Then Spanish and bilingual programming is also covered by the CVAA. Spanish and bilingual content are now being treated the exact same way as English content. Previously there were some, almost, exemptions, in terms of how much content had to be captioned. But at this point, everything is exactly in line with English content. The interesting thing is that content that is neither Spanish nor English does not fall under the CVAA, and does not need to have captions.

So the FCC has released a number of rulings on caption quality for video programming, the most recent one in February. That was the big one. The CVAA text states that online video that previously aired on television must have captioning of at least the same quality as when such programs are shown on television. So this means that television captions are setting the baseline for what is acceptable.

For the first part of this, text accuracy is basically exactly what you'd expect: the captions must match the spoken words of the dialogue, in the original language in which they are spoken. The point here is that paraphrasing would not be allowed. The captions must also coincide with what's being spoken as much as possible. So they need to show up at a speed that can be read by viewers.
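To make that readability point concrete, here's a minimal sketch, in Python, of the kind of checks a captioning pipeline might run on each caption frame. The thresholds– 32 characters per line, two lines per frame, about 180 words per minute– are common industry rules of thumb, not values mandated by the FCC:

    # Hypothetical readability checks for a single caption frame.
    # Thresholds are rules of thumb, not FCC-mandated values.

    MAX_CHARS_PER_LINE = 32      # traditional CEA-608 line width
    MAX_LINES_PER_FRAME = 2
    MAX_WORDS_PER_MINUTE = 180   # above this, captions get hard to read

    def check_cue(text, start_sec, end_sec):
        """Return a list of readability problems for one caption frame."""
        problems = []
        lines = text.splitlines()
        if len(lines) > MAX_LINES_PER_FRAME:
            problems.append("too many lines in one frame")
        if any(len(line) > MAX_CHARS_PER_LINE for line in lines):
            problems.append("line exceeds 32 characters")
        minutes = (end_sec - start_sec) / 60.0
        if minutes > 0 and len(text.split()) / minutes > MAX_WORDS_PER_MINUTE:
            problems.append("reading speed above 180 wpm")
        return problems

    # Example: a two-line cue displayed for 2.5 seconds flags reading speed.
    print(check_cue("Welcome everyone, and thank you\nfor attending this webinar.",
                    1.0, 3.5))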

For content being edited for rebroadcast, captions would also have to be edited for accurate synchronization. Captions should cover the entirety of the program or film, but they should not block any important visual content on the screen, or other information that is essential to the understanding of the program's content. For example, a news segment might feature an interview that displays a graphic with the name of the person speaking. The captions, therefore, should be moved– most likely up– to avoid overlapping with that graphic.

With all the new requirements, it does make sense to have a few exemptions. There are a number of reasons, many of them financial, why it may not make sense to force an organization to caption its content– for example, the company is still new to programming, or its revenues are under $3 million per year.

Until recently, churches and religious broadcasters were exempt from the more general closed captioning requirements, but that exemption actually was taken out of effect in 2011 by the FCC. So that means that faith-based organizations are subject to the exact same requirements as everyone else. The same exemptions apply to them as well, though.

So just a few best practices when we think about transcription and captioning, in terms of putting good captions out there. This is a list of some best practices that we think about when it comes to the actual transcription portion of the process.

We talked about accuracy already. Speaker labels should not give away plot points. For example, in a thriller, a mysterious voice on the phone should simply say, MYSTERIOUS VOICE, rather than reveal more critical details, like the killer’s name.

Sound effects can also be a bit subjective, but it’s important to capture the important ones. For example, keys that are jingling on the screen may not have as much significance as keys jingling on the other side of a door that can’t be seen. A random phone ringing in the background of a police station bullpen probably isn’t necessary to capture, as it doesn’t really suggest anything unusual.

In terms of just the actual captions, these points all come back to making for the most readable experience for the viewer. It covers everything from font styles to the structure and orientation of each caption frame that gets displayed. For example, captions should remain on screen long enough to be read, but also should not contain so much text that a frame becomes unreadable. So with that, I'm going to turn things over to Tim from thePlatform.

TIM SALE: Hello, everybody. Thanks, Josh. I'm going to just share a few slides real quick, and talk you through thePlatform. So if you're not familiar with thePlatform, we are a video content management system for managing video, and helping customers syndicate that video, manage that video, and distribute it to their end users.

So we’re about a 14-year-old company here in Seattle. And back in 2006, we were purchased by Comcast. We’re one of the family of Comcast companies, like NBC Universal, Comcast Wholesale, or Strata. And we’re a global company, so we’re based all over the world.

You've probably seen a number of our players out in the wild there, because we support a number of large content providers, like A&E, the NBC family of companies, CBS, Scripps Networks, as well as different pay TV operators throughout the industry– so here in the US, with Comcast Xfinity and Time Warner Cable, but also in Canada with Rogers, British Telecom in England, and Liberty Global in the Netherlands. We're helping people power both online video and full IP cable operator services, going to set-top boxes and different devices like that.

And the main, sort of, high-level overview of the company is based around mpx as our cloud-based service for managing the content. And really what we do is sort of end-to-end management of both the content creation from the linear stream and the metadata inclusion with that content– so getting metadata from our content providers' sources, but also from people like TMS and Rovi, who have listings data and other metadata, like titles, descriptions, and images, surrounding movies, cast, and whatnot. And we want our customers to take that content, advertise around the content, monetize the content through subscription and transaction-based commerce models, and then render this content in very fast, beautiful players.

And what we’re known for is, of course, enterprise class performance. So we’re doing this for content providers where video is their primary business, in most cases. And so you’ve probably heard of a number of companies like us, like Brightcove, Ooyala, and whatnot. And a lot of times, people try to group those kinds of companies together.

But we really think of ourselves as more of a video content management system, with a bunch of differentiating features, like our mpx replay service, which we announced this week, which allows you to automatically cut assets out of the linear stream and create catch-up resources, with, maybe, the C3 VCR, sort of the Nielsen ratings indicators in the content. And then also automate that content into a VOD asset– so complete, automated, end-to-end asset creation.

What’s nice about that, too, is once the content comes in, we can push this content over to 3Play for subtitling and closed captioning requirements. And we get a lot of these requirements from our content providers, because a lot of them– and most of them– fall under these FCC requirements that Josh was just going through.

We can take those assets, then, and also link them to all that, sort of, rich metadata that you expect from your content provider. So not just the title and description of a television show or video, but its relationship to an episode, or a season of content, or a series of content, so that people can navigate in sort of a grid-based layout that they’re used to, as well as these experiences that you would expect with somebody like iTunes. So you can navigate what series are available on your iPad. You can jump into the seasons that you might want to purchase very easily, not just video-by-video.

And then as I mentioned earlier, you can automate the premium content monetization with windows. So you might have content that comes in that’s ad supported or TV Everywhere authenticated against when it first airs, but then automatically enable a subscription or a transactional way to purchase the content in previous seasons, or as it gets older. So you can monetize your whole back catalog of content.

And then generally, you want to render this playback experience in a really user-engaging way. So we have an HTML5-driven video player that loads very fast, which is really important for keeping users on your site as we look to get those first video frames rendered within the first second or two. And then also, you want to make it very easy for customers and users to share that content with their friends through social media integrations and whatnot. And of course, the integrated closed captioning in here.

So thePlatform mpx and our player support a number of closed captioning formats, like SMPTE-TT, SRT, and DFXP. And these are the formats that 3Play can create– Mo is going to walk you through how that works. But before he gets there, I thought I'd kind of show you just a quick little overview of how we manage this content in the system.

So I'm logged in as an administrator that has full access to the services. And what I've got here is, there's a number of videos in my archive. And so what I can do is, sort of, navigate to the content that came in. Maybe this content was automatically created from the linear stream, or it was uploaded into the system. And I can quickly go in and edit metadata if I want to, and maybe enhance it. Maybe the metadata I got– from somebody like TMS or Rovi– was not completely accurate, so I can adjust that, and watch a preview of the content if I want.

Now, a clip like The Dark Knight Rises is going to have a number of different format and bit rate renditions. It's going to have lots of images, and of course, the subtitling information, or closed captioning files themselves. So these are buried underneath this asset, to kind of make it easier to navigate through the different assets. And you can see here all the different variations of this file, as well as a SMPTE-TT format. Now, I just uploaded some SMPTE-TT from Twilight in here, so that's why there's a different name.

But what happens is, when we render the playback experience inside mpx, we’ll see that there’s a closed caption file there, and we’ll automatically display the closed captioning buttons in the player. So how do you set that up as a player? We have a nice little designer here for building players where you can choose from different layouts, and you choose the clips that you want to display in your player, as well as different skins that you might want to choose for design.

Now, these are just pre-defined ones that we built in the system that all our customers get, but actually, all of our customers can take these skins and adjust them, and change the color schemes, and whatnot, and build their own layouts here. I just put in sort of a simple one here. And once you save that, you can preview it here.

So you’ll see, as I go back to play this video, it sees the closed captioning files available, automatically displays this closed captioning button. I can select that. And we’ll see, as the closed captioning comes in– again, it’s not going to align directly with my Batman clip, but we’ll get our closed caption rendering right over the frame there.
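As an aside, for readers curious what sidecar caption rendering looks like at the markup level, here's a generic HTML5 example– purely illustrative, with hypothetical file names, and not thePlatform's actual player markup, which mpx generates for you. Browsers expose a captions toggle automatically when a captions track is present:

    <video controls width="640">
      <source src="batman-clip.mp4" type="video/mp4">
      <!-- Sidecar caption file; the player renders cues over the frame
           and offers a CC button when this track is present. -->
      <track kind="captions" src="batman-clip.vtt"
             srclang="en" label="English" default>
    </video>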

Of course, all of this comes with reporting as well. So you can view the engagement of your content with your users as well. And that is all I have, so from there, I’m going to go right to Mo to show you how you integrate thePlatform with 3Play, and get those closed captioning files built automatically.

MO ZHU: Thank you, Tim. So I’m going to run through a quick demo of 3Play’s integration with thePlatform. And so I just wanted to, kind of, show what we have set up here in thePlatform.

These are our media files, which we got through Creative Commons. And I've also created a new user, 3Play Webinar Demo, who is an editor. And we're going to have that be our channel into thePlatform. We do recommend that customers using both 3Play and thePlatform create a new, sort of, more generic user to plug in, in case certain things change. You don't want it attached to a specific person.

So going to the 3Play account, after you log in, you’ll be taken to this screen, and you go to Upload, followed by Linked Account. That is how we refer to our integrations. And you can create a new linked account by clicking there, and selecting thePlatform.

So now here we have a place that's going to ask you for information– your credentials for thePlatform. So I'm just going to give it 3Play Integration Webinar. That's the name. Your PID can be seen on the users page. It's either mps or mpx under the User ID column. And you can find that column, if you don't see it, by clicking Customize Columns at the bottom there.

For this, we have it as an mpx user, and the username is 3PlayWebinarDemo@3playmedia.com. Great. And I'm going to type in the password here. And then your account number can be found by going to the top right corner of the screen, and clicking the little i. It is at the end of that URL there. So I'm copying and pasting that and putting that in. The last two settings there, Postback and Tag Based Auto Upload, are special features that we have that can enhance your workflow– make it much, much simpler. I'm going to get into that later.

So once you have that information in there, you can click Create Account. And it will take you to a screen which lists all of the resources that you have uploaded into thePlatform. Here you can select– for example, if I want to get the Fresh 20 Cookbook video and the Stockholm Furniture Fair captioned, I will select those. And I can click Upload, and go through this workflow, and hit Submit.

I’m not going to do that, because then it will actually submit it, but that’s how you would do it. And once it is done, you will get an email notification that will tell you that your captions are ready, and you can download them. And this takes us to the two settings that we’re talking about.

The Tag Based Auto Upload is a way for you to automatically upload media into 3Play Media for captioning, based on categories and tags. So that way, you don't even have to log in to 3Play Media to get 3Play to ingest your media files. You can simply add a tag that is 3Play Rush or 3Play Standard, depending on the turnaround speed that you want for the captions. And our system will automatically detect that for you. So you can turn that on or off here, at the Tag Based Auto Upload feature setting.

For Postback, after we have completed your captions, we will automatically post the caption file back into thePlatform and associate it with the appropriate media file, so that you don't have to log into the system, download the file, and then re-upload it into thePlatform. The system will automatically do that.

So these are two features that can further simplify your workload, if that’s desirable. So that’s about it for our integration. And I’m going to pass it back to Lily.

LILY BOND: Great. Thanks, Mo. So at this point, we are going to open it up for questions. So first of all, we have a question about whether or not the slides and the recording will be available. Yeah, we'll make both of those available tomorrow, with captions, and we'll have a slide deck as well, and we'll send you an email when that's ready.

Great. So Josh, maybe you want to take the first question. It looks like a question about whether automatic speech recognition is good enough to satisfy legal requirements.

JOSH MILLER: Sure. So based on the accuracy requirements of the FCC laws, speech recognition almost always will not suffice. The reality is that, at this point, even with all the research that’s gone into speech recognition, the ceiling really is around 80% accuracy, which means one out of five words is still wrong. And then if you think about feature content, like films and TV shows, that accuracy number is going to decrease pretty dramatically, based on the sound effects and music that will really throw off a speech recognition engine.

We use speech technology as part of our process, but we think of it as a means to an end. It’s only part of the process. We still believe very much that humans are required, and we have humans in our process as well.
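As a side note on how accuracy figures like that are typically measured: word error rate (WER) compares a hypothesis transcript against a reference transcript using word-level edit distance, and accuracy is roughly 1 minus WER. Here's a minimal, illustrative Python sketch– not 3Play's internal tooling:

    # Minimal word error rate (WER) sketch: edit distance over words.
    # Illustrative only -- not 3Play's internal measurement tooling.

    def wer(reference, hypothesis):
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edits to turn the first i reference words
        # into the first j hypothesis words
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + sub)  # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    # 80% accuracy is roughly 20% WER: one word in five wrong.
    print(wer("the quick brown fox jumps", "the quick brown box jumps"))  # 0.2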

LILY BOND: Great. Thanks, Josh. So next there’s a question about the video players in thePlatform, and whether any video player you create will support closed captions. Tim, since you demoed that, do you want to talk a little bit about that?

TIM SALE: Yeah, that's right. It doesn't matter whether it's an HTML-based environment or a Flash-based environment for playback of the content. The closed captions will automatically appear as an option in the player by default, if there's closed captioning available. So once it goes through the 3Play closed captioning process, it'll automatically update with that button.

LILY BOND: Great. Thanks. So I see another question here about something within thePlatform account. So in the demo, Mo created an editor role. Is that required? Tim, do you want to talk about that a little bit, and maybe Mo can weigh in also?

TIM SALE: Yeah. It is required to have a user for the integration, because it’s basically an API integration between the two systems. So we use that role to give the right permissions to the API calls that are made against the system. And that’s a nice role for having a limited amount of access to the API without giving admin access, for example.

And as Mo mentioned, we recommend that you create a separate user for these calls, because you don’t want a real editor changing their password, for example, and then breaking that integration. So you ideally create that user, give it a separate password, and that’s one that you don’t change, unless you do so purposely.

LILY BOND: Great. Thanks. So Josh, a question for you. If we caption content on thePlatform, but then need the captions in other formats, is that possible?

JOSH MILLER: Yeah. I mean, that flexibility of workflow definitely is a big part of what we're trying to offer. So we can post captions, basically, automatically back to thePlatform for you, and then if, for any reason– let's say you're putting content up on YouTube as well– we'd have whatever format you need for YouTube as well. So that's something that we make available pretty much at all times through our account system. So you could download the captions in whatever other format you need, at any time.

LILY BOND: Perfect. Thank you. So Tim, another question for you. How would you deal with live closed captioning for the web?

TIM SALE: Yeah. So generally for live captioning, we’ll get the captioning stream from the live encoder itself. So it needs to come into the encoder that way. And then the players, the native players, will pick that up.

So there’s a plug-in for our player to process– I think it’s referred to as 708 captions. So that’s when the content is actually embedded in the stream. And that will get rendered just like any other text-based format in there. And it also works for, like, native players, like on iPhone, for example, where a user goes and, in their settings, chooses to turn on subtitling. That will also get displayed there.
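To illustrate what "embedded in the stream" means, here's a hypothetical HLS master playlist excerpt, the sort of thing a native player like the iPhone's reads. The captions travel inside the video segments themselves as CEA-608/708 data; the manifest just declares that they're there so the player can offer the toggle. The file names are made up:

    #EXTM3U
    #EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="cc",NAME="English",LANGUAGE="en",AUTOSELECT=YES,INSTREAM-ID="CC1"
    #EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720,CLOSED-CAPTIONS="cc"
    video_720p.m3u8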

LILY BOND: Great. Thank you. So Josh, for you. Do programs that have a lifespan of less than 24 hours require captioning? And if the same program is placed online for a week, does that affect closed captioning requirements?

JOSH MILLER: It’s a good question. The baseline is really, did the content air on broadcast with captions? So that’s the first question. If it was on television with captions, and then goes online, then the answer is basically yes. It needs to be captioned.

The same exemptions that apply in general would apply for online content. Let's say you're a major news station and your content gets put up online. If it's the full-length program, then the answer is yes. If we're talking about clips that get put up online, then that requirement has not gone into effect yet, so you would not be required to caption those clips from the full-length file.

Again, keep in mind that the baseline for quality is also what was aired on television. So if you’ve got a live program, and you’ve got your live captions, technically all you’d have to do is re-sync that captioned file, and it would be absolutely OK. You wouldn’t necessarily have to fix the mistakes from a live caption feed.

LILY BOND: Great. Thanks. So another question for you, Josh. Is it possible to import captions automatically into thePlatform?

JOSH MILLER: So I’ll take one part of this, and then, Tim, you should probably weigh in as well. So we have the ability to import existing captioned files in a number of different formats, and then reformat them, and basically post them wherever you might be– wherever you have different workflows in action. So the answer would be yes, as long as it’s not a totally esoteric format.
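To give a flavor of what reformatting a caption file means mechanically, here's a minimal Python sketch that converts an SRT file to WebVTT, one of the simpler conversions– real services handle styling, placement, and many more formats and edge cases:

    # Minimal SRT -> WebVTT conversion sketch. Real converters handle
    # styling, placement, and many more formats and edge cases.

    def srt_to_vtt(srt_text):
        out = ["WEBVTT", ""]
        for line in srt_text.splitlines():
            if "-->" in line:
                # WebVTT uses '.' instead of ',' before the milliseconds.
                line = line.replace(",", ".")
            out.append(line)
        return "\n".join(out)

    srt = "1\n00:00:01,000 --> 00:00:03,500\nWelcome everyone, and thank you\nfor attending this webinar.\n"
    print(srt_to_vtt(srt))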

The point of this is that, when we import captioned files that you already have, a lot of times it’s because you need them in other formats. And so we’d be able to treat them just like any other file we’ve created, making all the different output formats that we normally would offer available, regardless of what you had to start with. So Tim, do you want to talk about, maybe, some of the caption import functionality within thePlatform as well?

TIM SALE: Yeah. I mean, generally our customers are automating the content creation in our system. We don't have a lot of customers that are manually adding and uploading videos that way. So generally, we're monitoring some type of service that's [INAUDIBLE] the big video files, or we're cutting them directly out of the stream or something.

And sometimes our customers will have captioning formats that are created, or maybe they have them in an archive or something. So those can come in like any other image or video into the system, and get properly labeled with, maybe, the proper language, or what format they're in, things like that. So once that happens, basically, they can be treated like any other asset, and they can be pushed over to 3Play. Say you only had DFXP or something, and you wanted to be FCC compliant by going to SMPTE-TT– you could push that over as a resource for that.

ThePlatform’s mpx is really this, kind of, archiving system for all of your video assets. So we tend to, kind of, collect these types of formats, and images, and videos, over time. So it’s not uncommon to see different image sizes, different format types, for closed captioning.

LILY BOND: Thank you. Mo, a question for you about whether the integration with 3Play supports subtitling, and how you would add translations through 3Play.

MO ZHU: To add translations through 3Play, that goes through a couple of our third-party partners. You would simply request them through the interface, and then they can also be posted to thePlatform.

LILY BOND: Great. Thank you. So Josh, a question just about the captioning process, and how you guarantee accuracy, and whether the accuracy is compliant with the FCC standards.

JOSH MILLER: Yeah. So we have kind of a unique process, which is possibly why this question came up. And I mentioned that speech recognition is part of our process as well. Like I said before, we're trying to make this more like a web service. We're trying to make everything more efficient.

So what we do is, we take a file, we put it through speech recognition first, which provides a draft, a starting point. And then we’ve really built an editing platform to clean up that draft. And we have a human who goes through every second of the file, cleaning up mistakes, adding punctuation in the right place, adding the speaker IDs in the right place– all of the things you’d expect to see from a really clean captioned file.

It then goes to another QA process, which is also human driven, before it’s finally what we would call a finished captioned file. At that point, we basically guarantee over 99% accuracy. We have a measured accuracy of 99.6%. What we’re doing is, we’re actually putting in quite a number of quality control mechanisms to make sure that that doesn’t change.

And the way we think about it is, that process and quality should not be any different whether we process one file, 10 files, or 10,000 files. It's really important that that quality stays the same, and stays really consistent. So we have a number of different auditing mechanisms to make sure that that takes place. And it's something that we really put a lot of effort into, to make sure that it works. To us, if we're slipping in quality, we're not doing our jobs.

LILY BOND: Thanks, Josh. So that’s about all we have time for. You can feel free to email us with any more questions you might have. And there are some resources up on the screen here. There are some white papers from 3Play, as well as some information about our integration with thePlatform, and some resources from thePlatform as well. And again, this will be online with captions tomorrow. Thanks for attending.
