[Infographic] Your Ultimate Guide to Captioning & Describing Online Video
Updated: January 18, 2018
The recent Section 508 refresh will require audio description for many organizations beginning in January 2018.
In a webinar entitled The Nuts & Bolts of Captioning & Describing Online Video, we explored everything you need to know to create, publish, and talk about captioning and audio description. Below, we’ve condensed that information to help you become your organization’s CC and AD guru.
What are captions?
Captions are time-synchronized text that can be read while watching a video. Usually, you can tell a video has captions by searching for the CC icon on the menu bar.
Captions originated as an accommodation for individuals who have difficulty hearing. In addition to spoken dialogue, they convey relevant sound effects, speaker identification, and other non-speech elements.
What is audio description?
Audio description is a narration describing the important visual details that cannot be understood from the soundtrack alone. Usually, you can tell a video has description by searching for the AD icon on the menu bar.
Audio description originated as an accommodation for individuals who are blind or have low vision.
What are the legal requirements for captioning and audio description?
According to the 2017 State of Captioning study, 67% of respondents said they understand the legal requirements for captioning, yet only 23% have a clear policy for captioning compliance.
Though captioning and audio description laws vary depending on the type of organization, they are increasingly becoming a civil rights requirement for any organization that disseminates video content.
Over the years, captioning and audio description laws have changed drastically to reflect the prevalence of online video. CC and AD gurus should be familiar with the three main laws: the Rehabilitation Act of 1973, the Americans with Disabilities Act of 1990 (ADA), and the 21st Century Communications and Video Accessibility Act (CVAA).
- The first major accessibility law in the United States was the Rehabilitation Act of 1973. Within the law, two sections apply to video content: Section 504 and Section 508.
- Section 504 states that all Federal and Federally funded programs must provide equal access for individuals with disabilities. Section 504 applies to entities like airports, police stations, universities, and state houses.
- Section 508 states Federal communications and information technology must be accessible. Under the refreshed Section 508 of the Rehabilitation Act of 1973, all covered organizations must comply with WCAG 2.0 Level AA standards.
- The second major accessibility law is the Americans with Disabilities Act of 1990. Title II and Title III apply to online video.
- Title II states public entities must provide equal opportunities for individuals with disabilities. The ADA also reaches employers: in 2014, FedEx was sued for failing to provide the necessary accommodations for its deaf and hard-of-hearing employees.
- Title III states places of public accommodation must provide equal access to people with disabilities. Though the title was written before the emergence of the internet, it has since been extended to the online sector. In 2011, Netflix was sued by the National Association of the Deaf for failing to caption its content. The settlement concluded that Netflix is considered a place of public accommodation and is therefore required to caption all of its content. Netflix has also started adding audio description. Other streaming sites like Hulu and Amazon have also been required to caption their content.
- The final major accessibility law in the US is the 21st Century Communications and Video Accessibility Act, or CVAA. The act states that all online video that has previously appeared on television must be captioned. This includes full-length videos, clips, and montages. It also states that by 2020 all television content must include audio description.
The Web Content Accessibility Guidelines, or WCAG 2.0, are international standards for web accessibility. There are three levels to WCAG 2.0.
- Level A states prerecorded video must include captions and a transcript or audio description.
- Level AA states prerecorded video must include captions and audio description.
- Level AAA states prerecorded video must include captions, audio description, and an extra video with a sign language translation.
What are the benefits to captioning?
The main benefit to captioning is greater accessibility. There are around 48 million individuals living with hearing loss in America. Captions give them access to content they otherwise could not enjoy.
Captions provide better comprehension for everyone. They also give the viewer flexibility on when, where, and how they decide to consume video content.
If you are part of a company looking to improve SEO (search engine optimization), captions can help. Search engine bots can’t watch your video, so transcripts help Google understand the content for better ranking and indexing. A study by Discovery Digital Networks found that adding captions to videos increased views by 7.3%.
Captions also make your content reusable. For example, the University of Wisconsin found 50% of students repurposed their transcripts as study guides. Marketing departments can use transcripts to create blogs, infographics, or other marketing material. Parents can use captions to teach their children to read. You can truly get creative when you have access to captions.
What are the benefits to description?
As with captioning, the main benefit to audio description is greater accessibility. Audio description helps eliminate the barrier of access to online video for the 22 million Americans who have some degree of vision loss.
But like captions, audio description also benefits a wider population. In education, audio description has been used to benefit people with autism and other learning disabilities, since it makes explicit the emotional and social cues a viewer might otherwise miss. Audio description can also aid language development, since listening is a key part of how language is learned.
Audio description also gives viewers flexibility, allowing them to enjoy videos on the go or in vision-sensitive environments.
How can you create captions yourself?
The first step to creating captions is to transcribe the audio of the video. You need to include things like relevant sound effects, speaker identification, and other non-speech elements.
Next, you’ll need to set time codes, ensuring that your captions align with the audio of the video.
Depending on the platform you are publishing your captions to, you’ll need to convert your caption file to the correct format. You can use our free caption format converter.
If transcribing and time-coding your file word for word and second by second seems daunting, you can also upload your video to YouTube and use its automatic captioning platform. Then edit the caption file and convert it to the format you need.
Editing your YouTube caption file is imperative, since automatic speech recognition technology is not yet accurate enough to rely on by itself.
Other things to keep in mind when creating captions include:
- Limit captions to three lines per frame
- Limit each line to 32 characters
- Use a sans-serif font
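Putting those guidelines together, a short excerpt of a caption file in the common SRT format might look like this (the dialogue here is invented for illustration):

```
1
00:00:01,000 --> 00:00:04,000
[upbeat music]

2
00:00:04,200 --> 00:00:07,500
>> HOST: Welcome back. Today
we're talking about captions.
```

Each frame has a sequence number, a start and end time code, and no more than a few short lines of text, with sound effects in brackets and speakers identified.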
With DIY captioning, there is certainly a lot to keep in mind. Alternatively, you can hire a professional captioning vendor.
When choosing a vendor, it can be tempting to go for the cheapest service, but a cheaper vendor often means sacrificing accuracy and quality. Use the guides below to help with your decision making.
- How to Select the Right Closed Captioning Vendor – 10 Crucial Questions to Ask
- Why Process Matters When Choosing a Video Transcription Company
- How a Vendor’s Captioning and Transcription Process Determines Their Rates
- How Accurate is your Transcription and Subtitling Service?
- Why You Should Only Use U.S. Captioning Companies
At 3Play Media, we guarantee 99% accuracy, a range of turnaround options, and competitive pricing.
How are descriptions created?
Since there are fewer platforms and systems for creating audio description, we have three suggestions that can help streamline the process.
- Include descriptions in the video production process, so that all important visual content is captured in a separate audio track.
- Write a separate script aligned with the completed video. If you’ve already posted your video and want to add descriptions afterward, fit the descriptions into the pauses in the video’s audio, making sure they don’t overlap the program audio.
- Create a separate track with additional time added for description.
Now, you may be wondering, “What should I describe?” Currently, neither WCAG 2.0, the FCC, nor the CVAA provides specific standards for what audio description should contain.
But there are guidelines.
Most companies will have their own internal stylistic guides, but if you’re a DIY describer the DCMP’s description key is a great guide for creating quality descriptions.
Where do I publish captions?
If you’ve created and uploaded a video, most platforms will allow you to add captions. There are certain FCC requirements for captions, such as color, size, and placement, but most video platforms allow you to change the display of your captions.
In terms of caption formats, many players, like YouTube and Wistia, use an SRT file, which contains caption frames with beginning and end time codes. SCC is another format, used mainly in broadcast. SCC encodes captions as hexadecimal codes, so it is harder to read and create by hand. That’s why we recommend converting between caption formats.
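To make the format differences concrete, here is a minimal sketch of converting SRT text to WebVTT (another widely supported web caption format). This is a hypothetical helper for illustration, not 3Play Media’s converter: the two formats differ mainly in the header line and the decimal separator in time codes.

```python
import re

def srt_to_vtt(srt_text: str) -> str:
    """Convert SRT caption text to WebVTT (simplified sketch)."""
    # SRT separates seconds and milliseconds with a comma (00:00:01,000);
    # WebVTT uses a period (00:00:01.000).
    vtt_body = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text)
    # WebVTT files must start with a "WEBVTT" header line.
    return "WEBVTT\n\n" + vtt_body

sample = """1
00:00:01,000 --> 00:00:03,500
[upbeat music]
>> NARRATOR: Welcome to the show."""

print(srt_to_vtt(sample).splitlines()[0])  # prints the WEBVTT header line
```

A production converter would also handle cue settings, positioning, and styling, which is where a dedicated conversion tool earns its keep.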
For step-by-step instructions, find the video platform you are using and refer to our how-to guides to learn how to upload captions manually or how to upload captions using your 3Play Media account.
Where do I publish audio descriptions?
Currently, few video players support second audio tracks for description. But, there are options.
One way is to publish a second track or video that includes descriptions, like this Frozen trailer on YouTube.
Another way is to publish a text track. A text track is like a transcript, but also includes the descriptions so that a screen reader can read it.
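For players built on the HTML5 video element, a description text track can be attached with a `<track>` element whose `kind` is set to `descriptions`; screen readers and compatible players can then voice the cues. The file names below are placeholders:

```html
<video controls src="lecture.mp4">
  <!-- Captions for deaf and hard-of-hearing viewers -->
  <track kind="captions" src="lecture-captions.vtt" srclang="en" label="English">
  <!-- Text descriptions of visual content, read aloud by assistive technology -->
  <track kind="descriptions" src="lecture-descriptions.vtt" srclang="en" label="English descriptions">
</video>
```

Support for `kind="descriptions"` varies by browser and player, so test with the assistive technologies your audience actually uses.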
Lastly, you can use an audio description plugin, which adds the description as a supplement to an existing platform like YouTube.
You can view examples of published videos with audio description here.
And now you can call yourself a guru!
Congratulations! By now you should be feeling like a captioning and audio description guru. Don’t worry if you haven’t memorized everything, though. Download our captioning and audio description infographics for a full summary of the content you can easily refer to!
And if you are hungry to learn more, watch the webinar The Nuts & Bolts of Captioning & Describing Online Video.