How Non-Profit Organizations Can Create Accessible Video [TRANSCRIPT]
SOFIA LEIVA: Well, welcome, everyone, and thank you for joining How Nonprofit Organizations Can Create Accessible Video. My name is Sofia Leiva, and I’ll be presenting today. So, let’s begin.
Today, we’re going to talk about the basics of accessible video and how it applies to nonprofits, how to create accessible video, why you should create accessible video, how nonprofits are using accessible video, quick tips for getting buy-in and budgeting, and who is 3Play Media?
So I’m just curious about how everyone here is using video. So if you want to, type in your chat window how you’re currently using video, or how you plan to use video in 2020. I’ll give you a couple of seconds.
So, some nonprofits are using videos for their trainings. Others are recording presentations and publishing them in their digital libraries. I’ve seen many do promo videos, and also record conferences and things like that.
Now, let’s start first by talking about, what is accessibility? So in order for something to be accessible, it must offer an equivalent experience to everyone, including those with a disability. This can refer to physical locations, but in the context of online accessibility, it refers to a disabled user’s access to electronic information.
The content and the design must provide the most convenient and all-encompassing experience possible to prevent any level of exclusion. And A11y, as you can see on the screen, is another term for accessibility. The 11 stands for the 11 letters between the A and the Y in “accessibility.” And it also stands for being an ally.
So, what is the formula for creating accessible videos? An accessible video is going to be closed captioned, have audio description, and also include transcripts.
Let’s dive into, what are captions? So, captions are time-synchronized text that can be read while watching a video, and are usually noted with a CC icon. Captions originated as an FCC mandate in the 1980s, but the use has expanded to online video and internet applications. Captions assume the viewer can’t hear, so they include the relevant sound effects, speaker identifications, and other nonspeech elements to make it easier for the viewer to understand who’s speaking.
Now, it’s important to distinguish between captions, subtitles, and transcripts. Captions, like I said before, assume the viewer can’t hear the audio. So they’re time-synchronized and they include the relevant sound effects. You can spot if a video has captions when you see a CC icon.
Subtitles, on the other hand, assume the viewer can hear but can’t understand the audio. The purpose is to translate the audio. Like captions, they’re also time synchronized. And transcripts are a plain text version of the audio. It’s not time synchronized, and it’s good for audio-only content.
In the US, the distinction between captions and subtitles is important. But in other parts of the world, like Europe, these terms are used synonymously. So, quick poll question. Can anyone guess what the first television show to air with closed captions was? And you can type your guesses in the chat window.
So someone’s saying The Dick Van Dyke Show, OK. Oh, someone’s saying Sesame Street. These are all great guesses. But the first show ever to be captioned was actually Julia Child’s The French Chef.
So, how do you create captions? There are many ways to create captions, and here are just a few that we’re going to talk about today. The first is to do it yourself. One way to do this is to use automatic speech recognition software, which will take your video and translate all the audio content into text. Then, you would go back and edit that yourself. In a couple of seconds, I’ll show you one free tool that you can use to do that.
Another way is to transcribe the video yourself. So this is listening to the audio and manually typing each individual word– which, as you can imagine, can be very time consuming.
On top of that, then, you have to set the timings. So in order to make sure that the captions and the audio line up, you would have to use software or set the individual timings manually. Lastly, you would then convert that file into a caption format that is accepted by the video player you’re using.
Now, all of this can sound very tedious and like a lot of work. But thanks to a software or video player that you may all know called YouTube, it makes captioning by doing it yourself a lot easier. With YouTube, you can upload your video and ask them to put automatic captions on it. Then, you want to always make sure to double-check those captions, because as we’ll see in a little bit, automatic captions can be notoriously incorrect.
So you would edit those captions right there in their own software. Then, you wouldn’t need to set the timings manually because YouTube does that automatically. And lastly, through YouTube you would be able to download the caption file as an SRT, VTT, or SBV file– which are just different caption file formats, and which one you need depends on the video player you’re using.
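To make the file-format point concrete: SRT and WebVTT captions look almost identical, and the difference is mostly a header line and the millisecond separator in the timestamps. This is an illustrative sketch only (not 3Play’s or YouTube’s tooling), and it ignores styling and positioning cues that real caption files can contain:

```python
# Minimal SRT -> WebVTT conversion sketch. Real-world converters also
# handle cue settings, styling, and edge cases this version skips.

def srt_to_vtt(srt_text: str) -> str:
    """Add the WEBVTT header and swap the ',' millisecond separator
    for '.' on timestamp lines (the lines containing '-->')."""
    lines = ["WEBVTT", ""]
    for line in srt_text.strip().splitlines():
        if "-->" in line:
            line = line.replace(",", ".")
        lines.append(line)
    return "\n".join(lines)

srt = """1
00:00:01,000 --> 00:00:03,500
Hello, and welcome.

2
00:00:03,600 --> 00:00:06,000
[APPLAUSE]"""

print(srt_to_vtt(srt))
```

Running this prints the same cues with a `WEBVTT` header and dot-separated timestamps, which is roughly what you get when you download the two formats from YouTube.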
Our process works in a similar way. We start with automatic speech recognition software, and then we have two rounds of human editing on top of it.
Now, let’s talk about the caption quality. Caption quality really matters, and there’s a couple of best practices to follow. The industry standard for spelling is a 99% accuracy.
A 99% accuracy rate, though close to perfection, still leaves room for a 1% error rate. In a 10-minute file of 1,500 words, that leniency allows for 15 errors total. Now, if your video is scripted content, then you’ll want to ensure your captions are verbatim. So for broadcast, you want to include the ums and ahs because it’s scripted. But for lectures, you want a clean read, because the filler words can be very distracting to the viewer.
Now, each caption frame should be around one to three lines, with 32 characters per line. And the best font to use is a sans serif font. You should also ensure they are time-synchronized and last a minimum of one second on the screen, so viewers have enough time to read.
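Those numbers are easy to check mechanically. As a rough sketch (the function name and cue data here are my own, not part of any captioning tool), you could lint a caption cue against the best practices just mentioned:

```python
# Illustrative checker for the caption best practices above:
# 1-3 lines per frame, <= 32 characters per line, >= 1 second on screen.
# At 99% accuracy, a 1,500-word file allows 1,500 * 0.01 = 15 errors.

def check_cue(text: str, start_sec: float, end_sec: float) -> list:
    problems = []
    lines = text.split("\n")
    if not 1 <= len(lines) <= 3:
        problems.append(f"{len(lines)} lines (want 1-3)")
    for line in lines:
        if len(line) > 32:
            problems.append(f"line over 32 chars: {line!r}")
    if end_sec - start_sec < 1.0:
        problems.append("on screen for under 1 second")
    return problems

# This cue is 34 characters long and shown for only half a second,
# so it fails two of the checks.
print(check_cue("Hello, and welcome to the webinar.", 1.0, 1.5))
```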
Another key thing to keep in mind is caption placement. Typically, captions are placed in the lower center part of the screen, but they should be moved when they’re in the way of other important text or elements in the video. And as for silences, make sure your captions go away when there is a pause so that they don’t confuse the viewer or hang on for too long.
The DCMP, the FCC, and WCAG– these all have standards for caption quality. The DCMP is the Described and Captioned Media Program, and its captioning manual has standards for how you should write captions, what you should caption, how to include sound effects, and things like that.
The FCC is going to be more for broadcast. These are standards that broadcast and television programs need to follow. And WCAG, as we’ll discuss later, are international guidelines. They mention captioning, but don’t go into specifics the way the DCMP does.
Now, let’s quickly do an example of what automatic speech recognition software produces and why you always need to edit it. So, I’m going to play some audio now. And on the screen, we’ll have a transcript that has been created by automatic speech recognition software. And what I’d like you to do is listen to the audio and, in conjunction, read the transcript and see if you can find any errors.
– One of the most challenging aspects of choosing a career is simply determining where our interests lie. Now, one common characteristic we saw in the majority of people we interviewed was a powerful connection with a childhood interest.
– For me, part of the reason why I work here is when I was five years old growing up in Boston, I went to the New England Aquarium. And I picked up a horseshoe crab and I touched a horseshoe crab. I still remember that. I love those types of engaging experiences that really register with you and stick with you. As a child, my grandfather was a forester from my childhood–
SOFIA LEIVA: OK, so in the chat window, does anyone want to put which errors you found that were different from what you were hearing and what was in the transcript? Great. So someone is saying you need quotes for different speakers. That’s exactly correct. And that’s something that I noticed too.
Other things I noticed were some punctuation errors. The hesitation words weren’t removed. No speaker identifications– as someone has said– and also some acoustic errors. So, for example, “New England Aquarium” was “new wing of the Koran.” And “forester” was “four story.” So as a reader, these can be really confusing and sort of change the meaning of the context of your video.
So, how do you publish captions? There are several ways to publish captions. The one that most people will be familiar with is through a sidecar file. So this would be when you upload your caption file to a video player like YouTube.
You can also encode captions. This is where the captions are embedded into the video file, and you can turn them off or on. And this is good for offline video– so if you’re creating any DVDs, or for kiosks.
Open captions are going to be burnt into the video and they can’t be changed. So if you’ve been on social media and you’ve seen any videos with captions on them and you can’t turn them off or on, that’s what open captions are.
And then integrations are just going to be a way for your caption files to be automatically posted back to your videos. And this is something that you can get with a lot of captioning vendors.
So, why should you caption? There are many reasons to caption that we’ll cover today. First, let’s start with discovery, engagement, and user experience. So, Facebook uncovered that 85% of Facebook videos are watched with the sound off. So if your video relies heavily on sound, a lot of people are probably scrolling past it.
Video accessibility has tremendous benefits for improving SEO, the user experience, your reach, and your brand as well. A study by Liveclicker found that pages with transcripts earned an average of 16% more revenue than they did before transcripts were added. And according to a Facebook study, videos with captions saw 135% greater organic reach.
41% of videos are also incomprehensible without sound or captions, which means that if someone doesn’t have headphones, they won’t be able to watch your videos. And adding captions to it can really be beneficial to avoid this, and enhance the user experience. And then– as many of you mentioned– you’re doing trainings, you’re doing online courses. So, captions actually really help to enhance that learning experience.
Many societies and nonprofit organizations provide educational videos and resources, and would benefit from captions. In a study by OSU– Oregon State University– they uncovered that 98.6% of students found captions helpful. And the biggest reason students were using them was that captions helped them focus. They also found that 75% of students who said they use captions use them as a learning aid.
Captions also help to improve your brand recall, verbal memory, and behavioral intent. They also make your content searchable. If you’re part of a nonprofit or a society, you’re probably creating a lot of really valuable resources for your members. And so, adding searchability through captions really helps to engage your viewers more and allows them to look for the videos and topics they find interesting.
So here on the screen, I have an example of what’s called an interactive transcript. And basically, you can search within a video for a keyword. So, for example, physics. And then the transcript would highlight where that word was. And you could click on that word. And the video would jump to that spot in the video.
You can also make a playlist search– so, where you would compile all your videos that are transcribed and captioned. And the user could search for a keyword, and it would show you all the videos where that keyword is shown. And then, they could jump to that spot.
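The search behavior just described can be sketched in a few lines. This is a sketch under my own assumptions about how the timed transcript data is laid out (a list of start-time/text pairs), not 3Play’s actual plugin:

```python
# Illustrative keyword search over a timed transcript. Each cue is
# (start_seconds, text). The function returns the timestamps where the
# keyword appears, which a video player could then seek to.

def find_keyword(cues, keyword):
    keyword = keyword.lower()
    return [start for start, text in cues if keyword in text.lower()]

cues = [
    (0.0, "Welcome to the lecture."),
    (12.5, "Today we'll talk about physics."),
    (47.0, "Physics shows up everywhere."),
]

print(find_keyword(cues, "physics"))  # -> [12.5, 47.0]
```

A playlist search is the same idea run across many videos: search each video’s cues and collect the matches per video.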
Captions also help with interactivity. So like I mentioned before, the interactive transcript makes your videos immediately interactive and more engaging to the viewer. The transcript is playing right on the side. They can follow along. They can take notes. It’s really, really valuable.
Now let’s dive into, what is audio description? Because this is another element of video accessibility. So, there are 245 million people that have some kind of vision loss around the world. And so, what are we doing for them? This is where audio description comes in. So before I dive into what audio description is, I want to play you a short clip– two short clips. And what I want you to do is close your eyes and just listen to the video.
– [LAUGHS] Oh, Hello! [SIGHS] [HICCUPS] [SNEEZES]
SOFIA LEIVA: Great. Now, I’m going to play a second version. And what I want you to do after listening to it is just, in the box, first tell me what you thought the video was. And then, second, tell me what you now think it is.
– From the creators of Tangled and Wreck-It Ralph, Disney. A carrot-nosed snowman shuffles up to a purple flower peeping out of deep snow.
– Hello. [LAUGHS]
– He takes a deep sniff.
– Ah. [HICCUPS] [SNEEZE].
– His nose slams on a frozen pond. A reindeer looks out and pants like a dog.
SOFIA LEIVA: All right. So if you know what the video was for, you can put it in the chat window. And you can also let me know any discrepancies that you saw between the videos and which one was much better to listen to with your eyes closed.
So, now that we’ve heard– yes, so someone said the second video was much better. And that’s because it included audio description. So now that we’ve heard it, what is audio description? Audio description narrates the relevant visuals in a video as an accommodation for blind and low vision users. It’s often compared to an announcer narrating a baseball game over the radio. And it’s represented with an AD icon.
There are two types of audio description– standard and extended. What you experienced with the Frozen trailer example was the standard audio description. And standard audio description just means that the description was inserted within the natural pauses of the original Frozen trailer.
So what happens when the video you’d like to describe doesn’t have any pauses? That’s when extended audio description would be necessary. Extended description allows the pauses to be added to the source video in order to make room for more description instead of being constrained to the natural pauses.
Extended audio description is really useful for videos with complicated content like educational lectures, where there are very few pauses but a lot of information to explain. And you can see several examples of videos with standard and extended audio description on our website. And I’ll provide that link in the email.
How do you create audio description? AD– audio description– is typically and traditionally time-consuming and costly. And there are a lot of things that go into creating it including production time, recording and paying a voice actor, creating the descriptions, and writing the descriptions into time codes. These are difficult to do on your own, but there are a few ways which we’ll talk about in a moment.
So, the first way is to narrate the visuals at the time of recording. So if you’re recording a lecture, you can narrate what you’re showing in the background or in your PowerPoint, so that you don’t have to go back and add in the descriptions later. You can also create a text description or a WebVTT file, which basically means you write out all of the descriptions happening in the video, with their timings, in a text file that accompanies the video’s transcript.
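A WebVTT descriptions track is just timed text. As a sketch (the helper names and the description lines are hypothetical, echoing the Frozen example from earlier), generating one could look like this:

```python
# Build a WebVTT descriptions track: timed text that a compatible
# player can read aloud or expose to a screen reader.

def to_timestamp(seconds: float) -> str:
    """Format seconds as an HH:MM:SS.mmm WebVTT timestamp."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    ms = int(round((seconds - int(seconds)) * 1000))
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def descriptions_to_vtt(descriptions) -> str:
    """descriptions: list of (start_sec, end_sec, text) tuples."""
    parts = ["WEBVTT", ""]
    for start, end, text in descriptions:
        parts.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        parts.append(text)
        parts.append("")
    return "\n".join(parts)

vtt = descriptions_to_vtt([
    (0.0, 3.0, "A snowman shuffles up to a flower in deep snow."),
    (5.0, 7.5, "He takes a deep sniff."),
])
print(vtt)
```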
You can also record voiced descriptions and then merge them with the source audio. So you would record your descriptions as one audio track, and then merge it with the original audio. Or, you could work within the original audio and record the descriptions directly into it. Or, you could outsource it to a professional vendor.
Now, professional or traditional vendors can charge anywhere from $15 to $75 per minute, which is very costly. At 3Play, we’re doing something a little bit different. We’ve developed a unique process for audio description that uses a combination of humans and technology. And this creates really high quality and brings down the cost.
And we also use synthesized speech. There are pros and cons to this. The pros– it allows for faster processing, and it gives users the ability to manipulate the speed of the voice. And then, one of the cons is that you lose some of that cinematic detail. But it really depends on the type of content that you’re trying to create.
So, how do you publish audio description? Unlike captions, description is not supported by most video players. Some of the video players that do support audio description are Able Player, AVPlayer, Brightcove, JW Player, Kaltura, and Wistia.
Since most video players don’t support audio description, there are different ways that you can do it. So for example, you can publish a second video that has a description in it. So, sort of like that video I showed earlier where it was the Frozen trailer. We had the original without the description, and then the second one with the description.
You can also create a secondary audio track that you link to from your video. Or, if your video player allows it, you can upload the description track there directly. You can also do a WebVTT track, which means uploading the descriptions you created to a video player that supports WebVTT description tracks.
Or you can do a text-only merged transcript, which is where you take the audio transcript, include the descriptions within it, and publish that. That allows a screen reader to read it. Or, at 3Play we have something called the 3Play Plugin, which makes it easier to publish audio description on all the video players that you’re using. And it’s a free tool if you’re using us.
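The merged-transcript idea amounts to interleaving dialogue and description cues by start time. This is a sketch with hypothetical data (3Play’s actual merge process is more involved); bracketing the descriptions lets a screen reader user tell them apart from speech:

```python
# Merge dialogue and description cues into one text-only transcript,
# ordered by start time, with descriptions set off in brackets.

def merge_transcript(dialogue, descriptions):
    """Both inputs: lists of (start_sec, text). Returns merged text."""
    described = [(t, f"[DESCRIPTION: {text}]") for t, text in descriptions]
    merged = sorted(dialogue + described, key=lambda cue: cue[0])
    return "\n".join(text for _, text in merged)

dialogue = [(2.0, "Hello!"), (8.0, "Ah. [HICCUPS]")]
descriptions = [(0.0, "A snowman shuffles up to a flower."),
                (6.0, "He takes a deep sniff.")]

print(merge_transcript(dialogue, descriptions))
```

The output alternates description and dialogue in time order, which is the reading experience a text-only merged transcript provides.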
So, the benefits of audio description. Now that you’re familiar with audio description, what exactly are the benefits? The first benefit is flexibility. Audio description allows the viewer to experience videos in an eyes-free environment. So while you’re cooking or while you’re in the car– it’s sort of like an audiobook.
The second benefit is for individuals on the autism spectrum, who find audio description helps them better understand emotional and social cues that are only demonstrated through actions and facial expressions. Language development is also a major benefit, as listening is a key step in learning language and associating words with appropriate actions and behaviors.
There’s also research on how the brain processes information. And it reveals that there are two key channels, auditory and visual. So the visual component of the video combined with the audio narration can help with learning. And it’s not listed here, but sometimes audio description is required by law. And in that case, the benefit is compliance.
So let’s talk about the accessibility laws for captioning and audio description. For many nonprofits, the Americans with Disabilities Act is a major law in the United States that applies to them. And in this particular law, there are two acts that talk about and impact video accessibility– Title II, which applies to public entities, and Title III, which applies to places of public accommodation. This includes private organizations that provide a public accommodation– like a doctor’s office, a library, a hotel, a restaurant, and many more places.
Now, although the ADA doesn’t provide clear web accessibility regulations for online-only organizations, that doesn’t mean a society or a nonprofit can’t be sued for inaccessible video. In fact, uncertainty in the law leaves space for interpretation by judges. And in recent years, the law has been extended to online accommodations through case law.
Because societies and nonprofits supply resources that are open to both private members and members of the public, they should assume that they fall under the category of place of public accommodation. Because of the ways the courts have ruled on past digital accessibility cases, these organizations should work towards accurately captioning all video content to avoid potential accessibility lawsuits.
And then, as I mentioned earlier, there is WCAG, which stands for Web Content Accessibility Guidelines. And while this is a guideline, not a law, many laws and lawsuit settlements mention WCAG as a standard to meet for accessibility.
There are three current versions of the guidelines, but the most widely used is WCAG 2.0. And within WCAG, there are three levels of compliance. Level A is the most basic to attain. And level AAA is the most comprehensive and the highest accessibility standard.
Most laws and lawsuits mention WCAG 2.0 compliance. So for now, that’s what’s legally required. Only if a law explicitly states that web developers have to adopt the newest WCAG version would you need to make your content WCAG 2.1 compliant– which is the version that just came out.
Now, I’m going to speed through the last ones because I realize we’re almost out of time. How are nonprofits using accessible video? So one huge example is SPIE which is the International Society for Optics and Photonics. And this is a scientific and engineering organization that focuses on light-based research.
Essentially, they host a lot of conferences throughout the year, which generate a lot of content. And they have recently started recording those presentations and posting them on their website. They decided to start captioning this content in order to make it accessible. And in the long run, they also found it improved the user experience.
They use a tool called the interactive transcript, which allows members to search for relevant terms within the video. And SPIE has said that their users have found a lot of value in it. It’s been a big improvement because users can search within a video and find the keywords that are most relevant to them.
Now, some quick tips for getting buy-in. There are many ways to get buy-in, and this is an issue that many organizations face when it comes to video accessibility. One way is to apply for grants. The US Department of Education provides several federally funded grants. And you can also check the Federal Register.
You can also find funds in your existing budget. So you can look through other budgets for leftover funds and create your own captioning budget from those. You can also create an accessibility fund, where you raise money within your organization from different income sources and apply that to a captioning budget. For example– while this is a university– North Carolina State University includes a small fee in their students’ tuition to create a captioning budget. A similar approach could be applied to member fees.
Another tip is just to prioritize the most important videos first. So, the ones that are getting the most views or the ones that are most popular– prioritize those first for captioning. And then if you have extra funds, begin captioning the rest.
To get the most out of your budget, remember that quality matters. Don’t cut corners– a low-cost solution might sound appealing, but you could end up spending more time fixing all the errors. Always make sure to plan ahead when you’re creating your videos. Build captioning and accessibility into that process so that you’re able to raise the funds for it and avoid high fees for rush turnarounds. And then caption shorter videos in-house and outsource longer videos to a captioning vendor. That can be another way to make the most of your budget.
And another way to convince higher-ups that captioning is a valuable thing to do is to start a pilot project, where you set a small budget for captioning, pick a couple of videos to caption, and then, after captioning, measure the success. If you saw an impact from captioning those videos, then you have a really strong case for prioritizing making your videos accessible.
All right, quickly I’ll dive into, who is 3Play Media? But if you have any questions, I encourage you to begin typing them in the Q&A or in the chat window, and I’ll try to get to as many as possible.
So, 3Play Media– who are we? We work with over 2,500 customers spanning higher education, media, e-commerce, fitness, and societies. And really, our goal is just to make the whole captioning process a lot easier. We offer a full-service video accessibility solution for closed captioning, live captioning, subtitling, translation, and audio description.
Our goal is really just to make accessibility easier. And so we have an easy-to-use online account system. We offer a range of turnaround solutions. And we also offer automated processes. We have video tools like the audio description plugin. And we also offer audio description, which we talked about earlier.
One huge thing that we do on the marketing team is provide a lot of content. So we have weekly blogs, free white papers, how-to’s and checklists, research studies, and webinars like this one on why accessibility matters. And you can check those all out for free on our website under Blogs and under Resources, and you can begin downloading those. Our webinars are also listed on our website under Upcoming Webinars. These are often taught by us or by accessibility experts in the industry, and they range from topics like what WCAG is, and which laws apply to you, to how to make your content accessible.
And something really exciting that we’re very close to releasing is our online video accessibility certification. And this is going to be a free certification where you can become video accessibility certified. And it’ll go deeper into what is accessibility, why accessibility matters and give you a ton of resources to make this year your best video accessibility year yet.
All right, so I’ll do a couple of questions that came in. But if you do need to leave, this is recorded. So you can download it tomorrow. And I’ll also send some follow-up resources. And please feel free to type questions in the Q&A or chat window, and we can also answer those offline.
So, someone is asking about 3Play Media’s live captioning service, which I briefly mentioned earlier. So, 3Play’s live automatic captioning service is a new service that we recently released. It uses automatic speech recognition software, so it’s not going to be as accurate as using a live stenographer. And you can get more information about it at firstname.lastname@example.org.
Great. So, any other questions we can answer offline– don’t be afraid to type them in. Thank you so much for joining me today.