Section 508 and 504 Video Captioning Requirements, Workflows, and Best Practices for Government and Federally Funded Programs [TRANSCRIPT]
SAMANTHA HOLLAND: Good afternoon. Carahsoft Technology would like to welcome you to our 3Play Media Webinar, Section 508 and 504, Video Captioning Requirements, Workflows, and Best Practices. Before we get started, I’d like to go over a few housekeeping items for today’s session. You’re able to hear the webinar through your computer speakers. All lines have been muted to reduce any background noise.
If you do have any questions throughout the presentation, we encourage you to use the Q&A pod on the left side of your screen, and we will do our best to answer your questions at the end of the presentation. If for some reason we do not get to your questions, our team will certainly follow up with you offline.
To tell you a little bit about Carahsoft, we are a trusted government IT solutions provider delivering software and support solutions to federal, state, and local government agencies. Carahsoft maintains dedicated teams to support sales and marketing for all its vendors. Our contact information will be at the end of the presentation, so please don’t hesitate to call or email us. This session is also being recorded, and a copy will be emailed to you shortly afterwards.
I would like to introduce our speakers for today, Josh Miller, vice president of business development at 3Play Media, as well as one of the co-founders of the company, and Lily Bond, marketing manager at 3Play Media. At this time, I’m going to hand the presentation over. Lily, the floor is all yours.
LILY BOND: Hi, everyone. I’m Lily Bond. And as Sam said, I’m the marketing manager at 3Play Media. To start things off, we have a poll that we would love for you to answer, which should read, are you currently captioning your videos? And you can select all of our videos have captions, some have captions, we don’t add any captions to our videos, or I don’t know. And I’ll just give you a minute to answer that.
Great. Thanks everyone. That’s really great to know, as Josh and I go through our presentation. It looks like people are kind of all over the board, so we’ll make sure to cover all bases.
So to start, I’m going to briefly go over an agenda. I’ll start out with an introduction to captioning, go over the benefits of captioning, and then dive into the accessibility laws, focusing on Section 508 and 504. And then I’ll hand it over to Josh, who will go over our captioning products and services and give a live demo of some of our video search tools. And then we’ll finish up with some questions. And again, you can feel free to ask questions throughout, and we’ll go over them at the end.
I’d like to start off with a bit of global and national data from the World Health Organization and the US Census Bureau, because there are some trends here that I think you’ll find interesting. First of all, more than 1 billion people in the world today have a disability. And in the US alone, 56.7 million people have a disability, and 11% of higher ed students have a disability as well. And as you can see, 45% of the 1.6 million veterans returning from recent wars have sought disability benefits, and 177,000 of those claims were related to hearing loss.
So one of the most interesting conclusions is that the number of disabled people is increasing rapidly and disproportionately with the population growth. If you’re wondering why that’s happening, there are actually a couple of big reasons. The first is medical and technological advancements.
One of the examples here is the survival rate for premature babies is increasing, which is obviously great, but the side effect is that more babies are being born with disabilities. And another reason is that we’re coming out of a decade of war. And with modern armor, soldiers are 10 times more likely to survive an injury than in previous wars. And again, this is obviously a very good thing, but it means that they’re more likely to sustain an injury, such as hearing loss. All of this points to the fact that accessibility is a critical issue that will become even more prevalent in the years ahead. And captioning is obviously an important part of this, which is why we’re talking about it today.
So what are captions? We’ll take it from the very beginning. Captions are text that has been time synchronized with the media so that it can be read while watching the video.
And captions assume that the viewer can’t hear the audio at all, so the objective is not only to convey the spoken content, but also sound effects, speaker identification, and other non-speech elements. Basically, the objective is to convey any sound that’s not visually apparent but integral to the plot. An example of that is that you would definitely want to include the sound effect, keys jangling, if you hear the sound behind a locked door, because it’s important to the plot development that someone is trying to get in. But you would not include it, if it’s the sound of keys jangling in someone’s pocket walking down the street.
Captions originated in the 1980s as a result of an FCC mandate specifically for broadcast television. But now, as online video becomes more and more an everyday part of our lives, the need for web captions has expanded and continues to expand greatly. So as a result, captions are being applied across many different types of devices and media, especially as people are becoming more aware of the benefits and the laws become increasingly more stringent, all of which we’re going to talk about in just a second.
But first, let’s go over some terminology just to make sure that we’re all on the same page. First, the difference between captions and a transcript is that a transcript is not synchronized with the media. On the other hand, captions are time coded so that they can be displayed at the right time while watching a video. For online media, transcripts are sufficient for audio-only content, but captions are required any time there’s a video component.
The distinction between captions and subtitles is that captions assume that the viewer cannot hear, whereas, subtitles assume the viewer can hear, but cannot understand the language. So that’s why captions include all relevant sound effects, but subtitles are really more about translating the content into a language that the viewer can understand. The difference between closed and open captions is that closed captions allow the end user to turn the captions on and off. And in contrast, open captions are burned into the video and cannot be turned off. With online video, you’ll mainly see closed captions.
And finally, post production versus real-time captioning refers to the timing of when the captioning is actually done. So real-time captioning is done by live stenographers, whereas, post production captioning is done offline and takes a few days. And there are advantages and disadvantages to both of those.
Looking at caption formats, there are many different caption formats that are used with specific media players. For one example, the image at the top right shows what a typical SRT caption file looks like. And that’s the type of caption file you would want to use for a YouTube player, for example.
You can see that it has three caption frames there. And each caption frame has a start time and an end time, followed by the text that appears in that time frame. And at the bottom is an SCC file, which is a little bit different format, a little bit more complex.
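To make that SRT structure concrete, here is a minimal parsing sketch in Python. The caption frames below are invented sample content (hypothetical timecodes, a speaker ID, and a sound effect like the ones Lily described earlier), not text from an actual 3Play caption file.

```python
# A minimal SRT parsing sketch. Each frame is a sequence number, a
# start/end timecode line, and the caption text for that time frame.
import re

SAMPLE_SRT = """\
1
00:00:01,200 --> 00:00:04,000
LILY BOND: Hi, everyone.

2
00:00:04,100 --> 00:00:06,500
[KEYS JANGLING]
"""

TIME_RE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def to_seconds(timecode):
    """Convert an SRT timecode (HH:MM:SS,mmm) to seconds."""
    h, m, s, ms = map(int, TIME_RE.match(timecode).groups())
    return h * 3600 + m * 60 + s + ms / 1000.0

def parse_srt(text):
    """Return a list of (start_seconds, end_seconds, caption_text) frames."""
    frames = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = (t.strip() for t in lines[1].split("-->"))
        frames.append((to_seconds(start), to_seconds(end), "\n".join(lines[2:])))
    return frames

print(parse_srt(SAMPLE_SRT)[0])
```

Parsing out the start and end times like this is what lets a player display each frame's text during the right window of the video.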
So once a caption file is created, it needs to be associated with the corresponding video file. So the way to do that depends on the type of media and the type of video platform that you’re using. For sites like YouTube, all you have to do is upload the caption file with the video file, and we call that a sidecar file.
In other cases, you actually need to encode the caption file onto the video. iTunes is an example of that. And another way to associate captions with the video is with open captions, which I just mentioned. And again, those are burned directly into the video and cannot be turned off. If you’re using one of the popular video platforms or one of the ones that we’re partnered with– such as Brightcove, Mediasite, Kaltura, Ooyala, there are a lot of them– then this step becomes trivial because most of this all happens automatically.
So the primary purpose of captions and transcripts is to provide accessibility for people who are hard of hearing or deaf. In the US, 48 million Americans experience hearing loss, and closed captions are the best way to make media content accessible to them. Outside of accessibility, though, many people have discovered a number of other benefits to closed captioning, and I’m going to go through some of those right now.
Closed captions provide better comprehension to everyone. The Office of Communications in the UK conducted a study where they found that 80% of people who were using closed captions were actually not deaf or hard of hearing at all. So closed captions really provide increased comprehension in cases where the speaker has an accent, if the content is difficult to understand, if there’s background noise, or if the viewer knows English as a second language. And captions also provide the flexibility to view videos in noise-sensitive environments, such as offices, libraries, or gyms.
Captions also provide a really strong ground for video search. And there are certain plug-ins that we offer that can make your videos searchable. People are used to being able to search for a term and go directly to that point, and that’s what our interactive transcripts let viewers do within a video. And Josh is going to demo that in a little bit.
For people who are interested in SEO, or search engine optimization, closed captions provide a text alternative for spoken content. Search engines like Google can’t watch a video, and this text is the only way for the search engines to correctly index your videos. Discovery Digital Networks did a study to see the impact of closed captions on their SEO, and they found that adding captions to their YouTube videos increased their views by 7.3%.
Another benefit of captions and transcripts is their reusability. The University of Wisconsin found that 50% of their students were repurposing video transcripts as study guides. And you can also take the transcript from a video and use it to quickly create infographics, white papers, case studies, and a lot of other docs that can be useful for other things. Of course, once you have a caption file in English, you can translate that into foreign languages to create subtitles, which I talked about a little bit earlier. And that makes your videos accessible on a much more global scale.
So finally, captions may be required by law, which is probably why most of you are here today. And I’m going to dive into the Federal Accessibility Laws right now. So the first big accessibility law in the US was the Rehabilitation Act of 1973. And in particular, the parts that apply to captioning are Sections 508 and 504.
Section 508 is a fairly broad law that requires federal communications and information technology to be accessible to employees and to the public. Section 504 is basically an anti-discrimination law that requires equal access for disabled people with respect to electronic communications. Both of these sections apply to federal programs. And I’ll go into more detail on this law and about who is specifically implicated in just a second.
The Americans with Disabilities Act is a very broad law that is comprised of five sections. It was enacted in 1990, but the ADA Amendments Act of 2008 expanded and broadened the definition of disability. Because the ADA basically expands Section 504 of the Rehabilitation Act beyond federally funded programs, this law doesn’t necessarily pertain to government programs, but it has a lot of really similar requirements.
Title II and Title III of the ADA are the ones that pertain to video accessibility and captioning. Title II is for public entities, and Title III is for commercial entities, which is the area that has the most legal activity. Title III requires equal access for places of public accommodation. And the gray area here is what constitutes a place of public accommodation.
So in the past, this was always applied to physical structures. For example, requiring wheelchair ramps at buildings. But recently, the definition has been tested against online businesses. So one of the landmark lawsuits that happened a couple of years ago was the National Association of the Deaf versus Netflix. The National Association of the Deaf sued Netflix on the grounds that a lot of their streaming movies did not have captions, and they cited Title III of the ADA.
One of Netflix’s arguments was that they do not qualify as a place of public accommodation. But the courts ended up ruling that Netflix does qualify, and they ended up settling. And now, Netflix has captioning on close to 100%, if not 100%, of all of their content at this point. So the interesting thing to come out of that case is that Netflix is considered a place of public accommodation, which is a really profound precedent for the ADA’s application to the web and to other online content.
There are a couple of other ADA cases that haven’t had decisions yet. Time Warner was sued by the Greater Los Angeles Agency of Deafness for not providing captions on CNN’s web videos. And FedEx was sued by the US Equal Employment Opportunity Commission for discriminating against deaf and hard of hearing workers. And the decisions on these will further shape the scope of the ADA.
So the CVAA is the most recent accessibility act. It stands for the 21st Century Communications and Video Accessibility Act, and it was passed in October of 2010. It requires captioning for all online video that previously aired on television. So for example, this applies to publishers like Netflix and Hulu, or to any network websites that stream previously aired episodes online.
There are a lot of upcoming FCC updates to this law, but the biggest one is that, starting in 2016, clips from television programs must be captioned when they go online. So for example, a two-minute excerpt from a show that you can view online would need captions. And by 2017, that will be expanded to montages. So then things like trailers and previews for an upcoming show would need to be captioned. And with the CVAA, the copyright owner bears the responsibility for captioning.
So I’m going to go into more depth on Section 508 and 504 of the Rehabilitation Act, which applies specifically to federal and to federally funded programming. When it was enacted in 1973, Section 504 was actually the last sentence of the Rehabilitation Act. And it was expanded in 1977 to really be the first statute in the US to declare civil rights for individuals with disabilities. It essentially affords individuals with disabilities the same rights as groups protected by the Civil Rights Act of 1964.
And Section 504 very clearly applies to both federal and federally funded programs. So not only do all government programs need to follow Section 504, but federally funded programs like airports, higher education, primary and secondary schools, libraries, and federally assisted housing all have to follow the anti-discrimination statutes of Section 504. And as I mentioned earlier, in 1990, when the ADA was enacted, Section 504’s regulations were extended to the public sector.
Section 508, meanwhile, is an amendment to the Rehabilitation Act that was signed in 1998. And it applies to federal programs in regards to electronic and information technology. Basically, this means that federal programs have to provide accessible web content and accessible technology to individuals with disabilities.
To go into more depth on Section 508’s web accessibility requirements, let’s talk about who exactly is implicated by the law. Honestly, there’s a lot of debate over Section 508’s direct influence. Because it’s not specifically written into the law that Section 508 applies to federally funded programs, many people believe that they are not implicated by Section 508. However, other people believe that, because the Rehabilitation Act applies to federally funded programs, Section 508 should extend to such organizations because it is a part of the Rehabilitation Act.
Regardless, there are often other mandates in place that extend Section 508’s influence to more organizations than just federal programs. I’m going to go over some of those right now. For example, the Assistive Technology Act provides funding to states to help provide assistive technology to individuals with disabilities.
And the Assistive Technology Act will not provide funding to states unless they guarantee that all programs receiving that funding, and this includes colleges and universities, comply with Section 508. And keep in mind that most states do receive funding from the Assistive Technology Act, so this is actually a pretty far-reaching stipulation.
Another way that non-federal programs are implicated by Section 508 is through state law. Many states have imposed laws that directly reference Section 508 in requiring state funded programs to provide accessible electronic and information technology. There are a number of states with web accessibility laws, and I’m not going to mention them all right now. But just to name a few, California, New York, and Minnesota all have fairly comprehensive web accessibility laws at the state level that directly reference Section 508.
And in terms of the requirements for closed captioning that are laid out by Section 508, anyone implicated must provide closed captions for video programming. And all television and computer displays must be able to decode and to display closed captions, so that’s pretty cut and dried. And again, programs that are required to comply with Section 508 through the Assistive Technology Act or through state laws would also have these same requirements.
It’s important to keep in mind that Section 508 was enacted in 1998, when most web pages were static HTML without dynamic media. Now websites are much more complex and are a much more integral part of our everyday lives. So in February of 2014, the US Access Board submitted a proposed rule that would refresh Section 508 standards. And the Access Board’s proposal directly references WCAG 2.0 Level A and AA success criteria, which are the World Wide Web Consortium’s collection of web content accessibility guidelines.
WCAG 2.0 is really the international standard for web accessibility. But as of now, it has no legal backing in the US, so it’s just a suggested standard at this time. The Section 508 refresh would be the first legal backing of WCAG 2.0 in the US and would require much more comprehensive web accessibility measures for those implicated by Section 508, moving forward. WCAG 2.0 works under the assumption that websites should be designed to make all content perceivable, operable, understandable, and robust for all people, and it has three levels of compliance. Most likely, the Section 508 refresh would begin by requiring Level A compliance, which is the minimal level, with a proposal to phase in Level AA compliance.
Before we move on, I’m going to go over some of the Section 508 lawsuits that have taken place over the last few years. All of these are lawsuits filed by the National Federation of the Blind, or NFB. The first two listed here are against the US Department of Education, where the NFB filed a complaint that one of their websites was inaccessible to blind people who were using screen readers or Braille. And the other complaint said that the Direct Loan Program denied a request for monthly student loan statements in an accessible format. The settlement reached was in favor of the National Federation of the Blind in the case against the Direct Loan Program.
Similarly, the NFB sued the Social Security Administration and the Small Business Administration for having inaccessible websites. And an agreement was reached in the case against the Small Business Administration, also in favor of the NFB. So you can see here that the court cases have all directly referenced web accessibility. And the agreements and settlements have all required the defendant to provide accessible alternatives. So with that, I’m going to hand it over to Josh.
JOSH MILLER: Great. Thanks, Lily. So I’m just going to talk a little about who 3Play Media is and what our approach is to this captioning challenge. So we provide a more efficient and cost-effective solution for closed captioning, subtitling, transcription. We started with a project at MIT in the Spoken Language Lab at CSAIL, which is the computer science department there. And we built a solution that aims to take the laborious and expensive captioning process and really make it more like a web service. And we now have over 1,000 customers spanning government, education, corporate, and media.
Our core services are focused on transcription, captioning, and subtitling, as I said. And then we’ve built tools to make the workflow much, much easier, such as our API and our video platform integrations. And we also have tools to make video content completely searchable, as Lily was talking about. The same time-coded text that is used to create the captions is used to create a searchable experience.
We use what we call a multi-step review process that delivers more than 99% accuracy, even in cases of poor audio quality, multiple speakers, difficult content, or accents. So typically, 2/3 of the work, in terms of the overall accuracy grade, is done by a computer. And then the rest of it is done by trained transcriptionists. So this makes our process more efficient than other options. More importantly, it affords our transcriptionists the flexibility to spend a little more time on the finer details.
For example, we’ll diligently research difficult words, names, or places. So we put a lot of care in to ensure that the correct grammar and punctuation is there, in addition to all the potentially difficult vocabulary. And we’ve also done a lot of work on the operational side of the business, such as making it possible to match transcriptionist expertise to certain types of content.
And we have close to 1,000 transcriptionists now, so we really do cover a broad range of disciplines. And without exception, all the work is done by certified transcriptionists here in the United States. Every transcriptionist goes through a certification process before they even touch a real file. And they’re continuously scored, as well.
We’ll talk a little bit about the process itself. There are several ways to actually upload content to us, including a secure web uploader, FTP, or API. We’ve also built a number of integrations with the leading online video platforms and lecture capture systems.
So regardless of the method you choose, you also have the flexibility to select the turnaround that you want with each upload. So for example, you could decide that a certain file isn’t as urgent as another and put it through what we call our standard turnaround. Whereas with the next upload, you might realize you need that file back the next day, and you’d have that option when you upload the file.
And so I’ve mentioned the platform integrations. We’ve built out a number of out-of-the-box integrations with the media platforms and lecture capture systems. They include systems like Brightcove, Ooyala, Kaltura, YouTube platform, Mediasite, and many more that you see here.
The benefit of these integrations is that the workflow is completely automated for you. So you can select a few files to have captioned with just a couple of clicks. Those files get sent to us. We’ll process them, and then we automatically post the captions back to the system you’re using when they’re done. And then the captions just display wherever you’ve published that video. The setup for these supported integrations really only takes a few minutes, as well. So it adds a lot of value in terms of just making your life easier.
We offer over 50 different caption and transcript output formats to choose from, meaning just about any web media player or publishing scenario, or even if you’re going to broadcast, all of them are covered. Our goal is to make everything as easy as possible. The reality is that many different players use different types of caption files.
So if you’re using one of the platform integrations, the correct format will automatically be selected and posted back for you. You don’t have to worry about that. But you’ll also have access to all these other formats, should you need them, since we actually store the output files for you.
And we also stay on top of the latest standards. So if a standard is changing and a new format is created, we’d be able to create that new format and create a template around it for that standard. And we’ll actually be able to retroactively apply the template to all the files that have already been processed, so they’re actually available right away for you. You don’t have to reprocess files to get this new format that you might need. Or if you needed to switch to a different media player or a different platform that requires a different caption format, it’s very easy to pull out whatever you need.
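As a rough illustration of why switching players or formats is mechanical once the time-coded text exists, here is a simplified sketch that converts an SRT cue into WebVTT, the sidecar format that HTML5 `<track>` players expect. This is not 3Play's actual template code, and the sample cue is hypothetical; real converters also handle styling, positioning, and many edge cases.

```python
# Sketch of SRT-to-WebVTT conversion: WebVTT needs a "WEBVTT" header
# and uses '.' instead of ',' before the milliseconds in timecodes.
import re

TIMELINE = re.compile(
    r"^(\d{2}:\d{2}:\d{2}),(\d{3}) --> (\d{2}:\d{2}:\d{2}),(\d{3})$")

def srt_to_vtt(srt_text):
    """Return a WebVTT rendering of an SRT caption file."""
    out = ["WEBVTT", ""]
    for line in srt_text.splitlines():
        m = TIMELINE.match(line.strip())
        if m:
            out.append(f"{m.group(1)}.{m.group(2)} --> {m.group(3)}.{m.group(4)}")
        else:
            out.append(line)
    return "\n".join(out)

sample = "1\n00:00:01,200 --> 00:00:04,000\nLILY BOND: Hi, everyone.\n"
print(srt_to_vtt(sample))
```

Because the underlying cue times and text never change, the same source data can be re-rendered into any of the dozens of formats Josh mentions without reprocessing the media.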
One thing that we’ve found, no matter how hard we try, certain proper nouns or vocabulary can be difficult to get exactly right, so we’ve built the ability for you to make changes on-the-fly. So if you know that a name happens to be misspelled, you can actually just go in, quickly make a change, and that’s that. You’re done. Similarly, you could redact an entire paragraph, if you wanted to.
So once you’ve saved those changes, the changes get immediately applied to the file, and then they propagate through all of those output files we just showed you. So you don’t have to reprocess anything. Everything just gets made available with the updates.
One thing I should mention is that, with content that is difficult in terms of vocabulary, proper nouns, or acronyms, which can be very common with government content, you have the ability to actually give us a list of terms or additional information about the content. So you could give us a glossary, or what we call a cheat sheet. It’s very easy to add that information either to a single file or even a whole batch of files. And that’s information that goes right to the transcriptionist, who can use it as a reference point.
So Lily started to mention the interactive transcript. This is a free tool that we offer for files that we’ve transcribed or imported into the system that adds a layer of interactivity for the viewer while also making video entirely searchable by the spoken word. So the text will highlight as it’s spoken, and each word can be clicked to jump to that exact point in the video.
There’s also a search function based on everything that’s being spoken, so it’s a full-text search. The plug-in can be embedded on a web page and is compatible with a number of different media players. So you don’t have to change a whole lot of what you’re doing in terms of publishing video; this is something that gets added on to what you already have.
So I’m actually going to show you a demo of what this looks like in action, real quick. OK, so what you see here is a page from the Harvard School of Public Health. And here’s the YouTube player on the left. And underneath is this interactive transcript. So the whole idea is that it’s displaying more of the text than just a closed caption track.
What you also see here are a number of options around the interactive transcript, such as download the transcript. So if I click this, I’ll actually download the file. Another option is to learn more about the event. So this has been customized a little bit by Harvard School of Public Health, so this is additional content that they’ve added here.
Here is the search bar, so I can search within the transcript. And as you see, if I scroll, each word actually highlights as I hover over it, and I can click on it. So if I click, it’s going to actually jump to that point in the video. And then it’ll play along. And you can see, very subtly, it highlights as it’s being spoken.
So I’m going to stop this, in case it’s a little choppy over the connection. And what you see on the right is what we call archive search. This is an additional tool. It doesn’t have to be used with the interactive transcript. So this, right here, just requires a single script that gets added to the HTML.
On the right, this archive search is another script. What this allows you to do is actually search across an entire library of videos. So again, just using the same time-text data that we use for the closed captions, we’re able to use that for our search experience.
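Conceptually, the archive search Josh is demonstrating is a full-text lookup over each video's time-coded cues. Here is a minimal sketch of that idea; the library data and the `search_library` function are hypothetical and invented for illustration, since the real plugin queries indexed caption data on a server.

```python
# A conceptual sketch of archive search across a library of videos,
# using the same time-text data that drives the closed captions.
def search_library(library, query):
    """library maps video title -> list of (start_seconds, cue_text).
    Returns every (title, start_seconds, cue_text) containing the query."""
    q = query.lower()
    hits = []
    for title, cues in library.items():
        for start, text in cues:
            if q in text.lower():
                hits.append((title, start, text))
    return hits

# Hypothetical sample library of time-coded transcript cues.
library = {
    "Public Health Forum": [(12.0, "Our policy recommendations are threefold."),
                            (95.5, "Thank you all for coming.")],
    "Intro Lecture": [(3.2, "Today we discuss health policy.")],
}

for title, start, text in search_library(library, "policy"):
    print(f"{title} @ {start:.1f}s: {text}")
```

Each hit carries a timestamp, which is what lets the plugin show a visual timeline of where a word appears and jump playback to that exact point.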
So let’s say I want to search for the word “policy” in this library. It’s going to show me a visual timeline of all the videos that have the word and where the word appears. So here, in this video, “policy” comes up quite a lot.
And when I open the segment, it gives me the context and the ability to actually play the clip from that point. So if I click this, it’s actually going to switch the video and jump to that point in the video. So this is just another tool that can be used. It’s all using, basically, an accessibility tool as the starting point.
All right, so one last thing about support. We’ve built a lot of tools– as you may have gotten wind of already– that are really self-service, or automated. We really try to make everything as easy as possible to use and really start to reduce the reliance on humans in the workflow.
But that being said, we base a lot of our success in the company on the fact that we give our customers a lot of human attention. And we expect to walk people through the tools. We expect to walk people through the setup process. And we really enjoy building those relationships because, through those relationships and those conversations, we learn a lot about what other features could be really useful.
And so we’re constantly thinking about how do we make the captioning workflow, the captioning process, easier, unobtrusive? So we really, really take the feedback seriously. And we want to hear more about what tools could be better, what have we not addressed yet? So that’s something that we really take very seriously. So with that, I’m actually going to turn it back to Lily.
LILY BOND: Great. Thanks, Josh. So at this point, I think we’re ready to open it up for questions. Again, feel free to ask questions directly in the Q&A box. And we’ll start getting to those in one second. Really quickly, though, I want to go over some resources that we have on the screen.
First of all, we have a white paper on Sections 508 and 504 that goes into a lot of detail about the closed captioning and web accessibility requirements. We have a few other resources on the screen. And we also got a question about our integrations and wondering if there’s a list of platforms that we’re integrated with. And we have a bunch of how-to guides on our website, 3playmedia.com, and you can find that full list there.
So I think we’ll get right into questions, then. So Josh, there’s a question for you here. I know you mentioned accents or technical content, but there’s a question here asking if you could go into more detail about how exactly we handle accents and technical vocabulary.
JOSH MILLER: Sure. So one way is that we definitely ask for people who know they’re going to have difficult content to give us those lists, those vocabulary lists, the glossaries in advance, because that’s something we can definitely use. That’s a great resource and a great way to ensure that things are done right the first time. So that’s one method.
The other is that, just based on the number of transcriptionists we have, we actually know that certain people are very good at certain types of content. Certain people are actually really comfortable with accents, some people are not. So we’re able to help guide content towards the right people, based on their expertise as well. So that process is happening all the time, and that’s something that makes everything go a lot smoother.
LILY BOND: Great. There’s another question here. Will the captioning requirements change if the Section 508 refresh gets passed? So just to talk a little bit about that, if the Section 508 refresh gets passed, the main difference is just going to be that the web accessibility requirements are a lot more comprehensive.
But because closed captioning was already listed in the Section 508 technical requirements, that specifically is not going to change that much. However, the strictness of the closed captioning requirements will change, depending on the level of WCAG 2.0 compliance that is recommended by the Section 508 refresh.
So just to look at something like closed captions with WCAG, there are three levels of compliance: Level A, Level AA, and Level AAA. And so with Level A, the requirement for captioning is that captions be provided for all pre-recorded content and that they be synchronized with the media. So again, transcripts alone would not be sufficient.
And then with Level AA, captions must also be provided for live content in synchronized media. And with Level AAA, sign language interpretation would need to be provided for pre-recorded content. So you can see how the requirements get stricter and stricter as the levels of compliance increase. But again, Section 508 already requires captioning, so the basic changes there would not impact captioning quite as much.
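To make the distinction between a transcript and synchronized captions concrete, here is a minimal caption file in the WebVTT format (this example is our own illustration, not from the webinar): each caption cue carries a start and end timestamp, which a plain transcript lacks.

```
WEBVTT

1
00:00:01.000 --> 00:00:03.500
Good afternoon, and welcome
to today's webinar.

2
00:00:03.500 --> 00:00:06.000
Let's go over a few
housekeeping items.
```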
Josh, another question for you. What if I already have captions for my videos? Can I use them in 3Play? Can I use the tools and whatnot?
JOSH MILLER: Yeah. That’s actually a great question. We recently launched a service specifically designed for that. We call it Caption Import Service.
So you can actually import captions that have been created elsewhere into the 3Play system. You’d have the option to use all the tools that you would get if we created the caption, so everything from the editor, to the interactive transcripts, to all the different output formats. We’d actually convert your file into a standardized version so that we can spit out all the different formats you saw.
So absolutely. And it’s a lot less expensive in terms of getting files into our system. It’s a pretty low-cost subscription model where you can basically import up to X number of files in any given month.
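As a rough sketch of the kind of format conversion just described (our own illustration, not 3Play’s actual implementation), converting an SRT caption file into WebVTT mostly means adding the `WEBVTT` header and changing the comma before the milliseconds in each timestamp to a period:

```python
import re

def srt_to_vtt(srt_text: str) -> str:
    """Convert SubRip (SRT) captions to WebVTT.

    SRT timestamps use a comma before milliseconds (00:00:01,000);
    WebVTT uses a period (00:00:01.000) and requires a WEBVTT header.
    The SRT cue numbers are kept, since WebVTT allows cue identifiers.
    """
    # Swap the comma decimal separator only where it appears
    # inside an HH:MM:SS,mmm timestamp.
    vtt_body = re.sub(
        r"(\d{2}:\d{2}:\d{2}),(\d{3})",
        r"\1.\2",
        srt_text,
    )
    return "WEBVTT\n\n" + vtt_body

example_srt = """1
00:00:01,000 --> 00:00:03,500
Good afternoon, and welcome.
"""
print(srt_to_vtt(example_srt))
```

Real conversion tools also normalize styling tags and positioning cues, but the timestamp and header changes above are the core of the mapping.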
LILY BOND: Great. Thanks, Josh. There’s another question here. You showed that your closed captions can be translated into other languages. Someone’s wondering how that process is done. Do you want to take that one?
JOSH MILLER: Sure. So that workflow is also built into the 3Play interface. So we talked a little bit about glossaries for the captioning. Translation also really benefits from some kind of style guide or guidance in terms of how you want the translation to take place. So we have what we call a Translation Profile where you can explain who your audience is, what kind of tone you want, because translation ends up being a bit more subjective. So that’s something that we definitely encourage people to fill out.
And that gets submitted with any translation request that gets made to the translator. And so you can actually click on a file in the 3Play system to select it. And then there’s a button that literally just says, Order Translation, and you can pick what language you want to have it translated into. And it gets sent off to the translator.
When it comes back, it comes back and is made available in all the same output flavors so that you can use them. And again, we have an editor. If you ever want to make changes, there’s an editor there.
And it’s synced up to the English, frame by frame. So everything is synchronized. You don’t have to worry about timing. Everything’s ready to go as a subtitle file.
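A hypothetical sketch of what “synced up to the English” means in practice (the names and structure here are our own illustration): the translated text is slotted into the same cue timings as the source captions, one cue at a time, so no retiming is needed.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    start: str  # e.g. "00:00:01.000"
    end: str    # e.g. "00:00:03.500"
    text: str

def retime_translation(source_cues, translated_lines):
    """Reuse the source captions' timings for the translated text,
    cue for cue, so the subtitles stay synchronized with the media."""
    if len(source_cues) != len(translated_lines):
        raise ValueError("translation must have one line per source cue")
    return [Cue(c.start, c.end, t)
            for c, t in zip(source_cues, translated_lines)]

english = [Cue("00:00:01.000", "00:00:03.500", "Good afternoon.")]
spanish = retime_translation(english, ["Buenas tardes."])
print(spanish[0].start, spanish[0].text)
```

This is why the translated file comes back ready to use as a subtitle file: the timing work was already done for the source language.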
LILY BOND: Thanks, Josh. So someone’s asking, in addition to YouTube, my department also uses DVDs. Can 3Play Media help with DVD captioning? If so, how?
So in terms of DVDs, yes, we can help with that. It kind of depends on what authoring software you use for your DVDs. And we have a lot of how-to guides on captioning for different authoring tools. And in terms of YouTube, there are a few questions here about our integrations in general. So maybe I’ll just walk through how our YouTube integration works, as an example of what it would be like for other platforms as well.
So for YouTube, you would just upload your video to YouTube. And then there’s a place in your 3Play Media account where you can actually link your 3Play Media account with your YouTube account. And once you’ve done that– and that’s a one-step process, really simple, it just has you log in with your Google account– all of your videos that you have on YouTube will appear directly in your 3Play Media account.
You can just click on them, and send them in for captioning, and select a time frame that you want the turnaround for. And when the captions are done, they’ll just appear automatically on your YouTube videos. So you don’t have to worry about downloading captions, or uploading them, and what format to use, any of that.
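Behind the scenes, an integration like this typically pushes the finished caption file to the platform’s captions API. As a hedged sketch only (this is not 3Play’s code; it assumes the YouTube Data API v3 with `google-api-python-client` and OAuth credentials, which are not shown), the request metadata and upload might look roughly like:

```python
def caption_request_body(video_id: str, language: str, track_name: str) -> dict:
    """Build the metadata body for a YouTube Data API captions.insert call."""
    return {
        "snippet": {
            "videoId": video_id,
            "language": language,
            "name": track_name,
        }
    }

# With an authorized API client, the upload itself would look roughly like:
# youtube.captions().insert(
#     part="snippet",
#     body=caption_request_body("abc123", "en", "English captions"),
#     media_body="captions.vtt",
# ).execute()

body = caption_request_body("abc123", "en", "English captions")
print(body["snippet"]["videoId"])
```

The point of the integration is that none of this is visible to the user: linking the accounts once authorizes the captioning vendor to attach caption tracks directly.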
So that’s similar to how most of our integrations work. And it can really clean up the workflow for a lot of people. So Josh, another question for you here. Do I need to install any software to use 3Play?
JOSH MILLER: No. Everything is web based. So everything is done through a secure online account system. So you shouldn’t have to install anything at all.
LILY BOND: Great. And then another question here is if I can talk a little bit more about who is implicated by Section 508. Sure. It’s definitely confusing. There are a lot of conflicting opinions about it.
What’s written directly into Section 508 is that federal programs are required to follow Section 508. Nothing is written specifically into Section 508 about federally funded programs. So again, it’s best to check to see if your state is receiving funding from the Assistive Technology Act and/or if your state has specific Section 508 requirements.
The resources to check that are actually directly in our Section 508 white paper, which is listed on the Resources page on the screen here. And so if you want to look into those, you can just go to the links directly from there.
So I think that’s about it for what we have time for. So thank you, everyone, for being here. And thanks, Josh. And I’m going to hand it back to Sam.
SAMANTHA HOLLAND: Thank you, Lily. I want to thank all of our participants for joining us today, and especially our presenters, Josh and Lily. We hope this webinar has been helpful for you and your organization.
As we mentioned before, if you have any further questions or would like to request more information, our team would be happy to assist you. Please feel free to contact Kacey Cawley at Carahsoft. Her contact information is currently being displayed on your screen. Thanks again for your attendance, and have a great day.