
The Impact of Recent Lawsuits on Video Accessibility Requirements [TRANSCRIPT]

LILY BOND: Welcome, everyone, and thank you for joining this webinar entitled The Impact of Recent Lawsuits on Video Accessibility Requirements. I’m Lily Bond from 3Play Media. And I will be presenting today alongside Owen Edwards, who is a senior accessibility consultant at SSB BART Group, which provides accessibility consulting services and testing products. And with that, I’m going to start out with an agenda.

So Owen and I are planning to go back and forth, covering these topics both from a captioning perspective and then from a description perspective. So we’re going to start with an intro to captions and description, talk about the benefits of accessible video from a captioning and a description standpoint, go through the accessibility laws, and then we’ll jump into closed captioning lawsuits, followed by blind and low vision lawsuits. And then we’ll go through the quality standards and creation of captions, and the quality standards and creation of description. And then Owen will talk about some other video accessibility requirements, implications, and existing accessible media players. And then, of course, we’ll have time for a Q&A at the end.

So to make sure everyone is on the same page, I’m going to start out with some basics about captioning. Captions are text that has been time synchronized with the media, so that they can be read while watching the video. They assume that the viewer can’t hear, so they convey all sound effects, speaker identification, and other non-speech elements. For example, keys jangling in someone’s pocket would be a relevant sound effect that you would want to caption.

Captions originated as an FCC mandate for broadcast television in the 1980s, but the requirements have expanded with the proliferation of captions and video to the internet. And now captions are being applied across dozens of devices and media. The difference between captions and a transcript is that captions are time synchronized with the media, whereas transcripts are just a plain text version of the audio.

Captions versus subtitles– captions assume that the viewer can’t hear, whereas subtitles assume that the viewer can’t understand the language. So subtitles are really about translating the content into another language.

And then closed versus open captions– closed captions allow the end user to turn the captions on and off, whereas open captions are burned directly into the video. So that’s just some terminology that I’ll be using today, that I wanted to make sure everyone understood clearly.

So Owen, I will hand it off to you to cover what is audio description.

OWEN EDWARDS: Great. Thanks, Lily. Audio description is less well-known than captioning, which is quite widely known among the general public. Audio description is a method to make video accessible to people who are blind or low vision, who can’t see the visual content. It’s typically narration added to the soundtrack to describe important visual details that can’t be understood from the main soundtrack alone.

The distinction of what’s important is a difficult area. But in general, as Lily mentioned, there are parts where keys jangling would be important. And there are parts in the visual scene, particularly text on a screen, which is important. And there are parts which aren’t important.

Just in terms of naming, we see audio description called a number of different things, and that causes some of the confusion. It’s sometimes called video description or narrative description, or sometimes just description. It’s been around for some time, and it’s very similar to a concept that people do know about, which is the director’s commentary that’s often added on DVDs– the idea of somebody extra talking over the main video and providing a little extra information. It’s becoming more widely available on TV via the secondary audio program as a result of some laws that have changed in the last few years. And it’s now available on a number of online services. I’ve particularly highlighted Netflix here.

And we’re just going to play a short example– a short YouTube video– which shows how, for people who can’t see the screen, there’s a need for additional narration.

ANNOUNCER: The following clip is intended to simulate the experience of a student who is blind or visually impaired.

MAN IN VIDEO CLIP: Good morning.

ANNOUNCER: The following is the same clip, but description has been provided to describe visuals, actions, and settings not conveyed in the existing narration.

NARRATOR: A yellow Beetle pulls up and Lisa glances up momentarily before looking down. Then the car door opens and the driver’s foot appears, clad in a clean white loafer and an argyle sock. Lisa looks up again and does a double-take. Her mouth drops open and she stares toward the car.

MAN IN VIDEO CLIP: Good morning.

NARRATOR: She gazes fixedly as the figure passes her.

OWEN EDWARDS: Right. So that gives an idea of how difficult videos can be to understand without that additional description, and the reasons that description is needed, particularly as we’re seeing video being used more and more in educational and training settings.

LILY BOND: So just to cover some of the benefits of captioning, since we talked so much about accessibility– the number-one benefit of captioning is obviously to make video content accessible to people who are deaf or hard of hearing. There are 48 million Americans living with hearing loss, and that’s about 20% of the population. But captions also provide better comprehension in instances where the speaker has an accent, if the content is difficult to understand, if there is background noise, if you know English as a second language. And captions also provide the flexibility to view videos in sound-sensitive environments, like the office, library, or gym.

And also, more and more people are posting videos on social media. And those videos often autoplay without sound, so captions allow you to understand that content and make those videos accessible for all users. Captions provide a great basis for interactive video search, and also for search engine optimization. So Google can’t watch a video, but they can read the text of your transcript. And that really helps with having people find your video content.

Captions are also really reusable. You can use them to create infographics, white papers, case studies, other documents, and course material. They provide a great basis for translation. So you can translate an English transcript into other languages to make your video accessible on a more global scale. And of course, they may be required by law. And Owen and I are going to cover that thoroughly, shortly.

OWEN EDWARDS: Great. And similarly for description. As I mentioned, the main benefit is accessibility for blind and low vision users. There are about 21 million Americans living with vision loss. So that’s a large population for whom video accessibility is a big concern as video becomes more widespread. We’re also seeing advantages for people with cognitive or learning disabilities, where all of what’s being presented to them may be unclear. And some extra narration which explains what they’re watching adds to the experience– benefits those people. Again, there’s a benefit for people who, for any reason, are unable to give the video their full attention, to have some kind of audio version of the video that conveys the essential meaning of it– and then again, as we’ll go into, required– and increasingly so– by laws that are coming into play.

LILY BOND: Great. So to cover the federal accessibility laws, I’m going to start with the Rehabilitation Act of 1973, which has two sections that really impact video accessibility. Section 504 is a broad anti-discrimination law that requires equal access for individuals with disabilities. It’s very similar to the Civil Rights Act.

And then Section 508 was introduced later, in 1998, to require that federal communications and information technology be accessible. And video accessibility requirements are written directly into Section 508, and are often applied to the broader Section 504. And Section 508 requirements are often extended to non-governmental organizations, through the Assistive Technology Act. And I wanted to mention that there is a Section 508 refresh coming, which will update the standards to reflect the dramatic changes in technology since 1998. And that should be introduced in October.

The next major accessibility law in the US is the Americans with Disabilities Act, or the ADA. There are five titles in the ADA, and Titles II and III impact video accessibility. So Title II affects public entities at the federal, state, and local level– like schools, courts, police departments, public libraries, and public universities. And Title III impacts places of public accommodation, like hotels, libraries, museums, airports, movie theaters, etc. And to be a public accommodation, the entity must be operated by a private entity. It must affect commerce, and it must fall within one of the ADA’s categories, which include education. So this is often applied to higher ed.

And the big question that’s been in the legal sphere recently is, what constitutes a place of public accommodation? So when the ADA was written in 1990, the internet was only beginning to become as powerful as it is today. And the law was written to affect physical entities– for example, providing an accommodation of a wheelchair ramp. But more and more, the ADA has been tested against online businesses. And I will get into that shortly.

The Twenty-First Century Communications and Video Accessibility Act is the most recent federal accessibility law. It was enacted in 2010. And this really affects the entertainment industry most. It applies to online video that previously appeared on television with captioning. And it has been extended to include video clips of larger programming that appeared on television with captioning.

OWEN EDWARDS: OK. And then, as far as WCAG goes– the Web Content Accessibility Guidelines– these are a more recent standard for web content. In fact, they aren’t specifically a law, but a set of guidelines and success criteria that spell out the specifics of how accessibility should be implemented for online content. They’ve also been expanded to cover mobile applications, and they’re the basis of the 508 refresh that Lily mentioned. They really give more specific guidance on how accessibility should be implemented for web content and content on mobile digital devices.

There are three different levels that are laid out in WCAG 2.0 in terms of compliance. Level A is a basic entry level. Level AA is a more complicated level. And Level AAA requires a lot of complexity for compliance. What we’re seeing broadly is that lawsuits and settlements are generally focused on Level AA, and in some cases with some specific requirements either dropped out from AA or added from AAA, depending on how the settlement is agreed.

There are some specific guidelines for video– and actually, also for audio– within WCAG 2.0. And they’re a little confusing. But to break it down, captions are required at Level A for any pre-recorded video, and at Level AA for any live video– live broadcast video– which provides accessibility for people with hearing impairments.

For people with vision impairments, at Level A there’s an option for either a transcript– as Lily mentioned, something which isn’t synchronized with the video– or audio description, which is. One or the other is required at Level A. But at Level AA, audio description is required. A transcript alone isn’t sufficient, because it doesn’t track with the video as it plays– it isn’t really a sufficient accommodation while that video is playing. Then at Level AAA, you get into some much more complex requirements: both audio description and a transcript are required, and also, potentially, a sign language video in parallel with the video, which allows people who understand sign language to keep their attention on that.

LILY BOND: Great. So now that we’ve covered the major laws and the WCAG standards, we’re going to jump into some of the recent lawsuits that impact these requirements. To begin with, I’m going to talk about a closed captioning lawsuit, which is the National Association of the Deaf versus Netflix. Netflix was sued by the National Association of the Deaf in 2012 for failing to provide closed captions for most of its watch-instantly movies and television shows that were streamed on the internet.

So as I mentioned before, this is a case of public accommodation. And it’s the first time that Title III of the ADA was applied to an online-only business. And this was a landmark lawsuit. So Netflix argued that they don’t qualify as a place of public accommodation in accordance with the ADA. But the NAD’s lawyers argued that the ADA was meant to grow, to expand accommodations as the world changed.

And the court ruled in favor of the National Association of the Deaf, saying that, quote, “The legislative history of the ADA makes clear that Congress intended the ADA to adapt to changes in technology, and excluding businesses that sell services through the internet from the ADA would run afoul of the purposes of the ADA.” So Netflix ended up settling this case and agreed to caption 100% of its streaming content. And this case set a profound precedent for companies that were streaming video online across industries, including entertainment, education, health care, and corporate training content. And actually, FedEx was sued for not captioning their training videos recently.

The next lawsuit that I’m going to talk about is more recent. In February of 2015, the National Association of the Deaf sued Harvard and MIT for providing inaccessible video content that was either not captioned or was inaccurately and unintelligibly captioned. So this is the first time– outside of the entertainment industry– that the accuracy of the captioning has been considered in the legal ramifications. And the NAD is saying that automatic captions cannot guarantee an equivalent alternative for deaf and hard of hearing viewers, so they’re not ADA-compliant.

Arlene Mayerson, who was the lawyer for the National Association of the Deaf in this case and was involved in writing the ADA, again stated that the ADA was meant to grow and expand to include new technology, and not to deny or limit the accommodations available. And Arlene had a pretty powerful quote. And she said, “If you are a hearing person, you are welcomed into a world of lifelong learning through access to a community offering videos on virtually any topic imaginable, from climate change to world history or the arts. No captions is like no ramp for people in wheelchairs, or signs stating people with disabilities are not welcome.”

In June the Department of Justice submitted a statement of interest supporting the NAD’s position that Harvard and MIT’s free online courses discriminate against deaf and hard of hearing individuals and said, “The ADA applies to websites of public accommodations, and the ADA regulation should be interpreted to keep pace with developing technologies.” The final argument for this case was held in September, and we’re still waiting on a decision for that. But in February, the judge denied Harvard and MIT’s motion to dismiss the lawsuit. So the outcome of this will have huge implications for higher education.

I wanted to mention, briefly, that the OCR– the Office for Civil Rights– and the DOJ– the Department of Justice– have taken a vested interest in web accessibility in higher education, recently. And they have led investigations into inaccessible IT at dozens of schools. And at least 15 of those have lawsuits or resolution agreements in place. A couple of these had video-specific complaints, including the University of Montana, who had videos without captions, which was resolved in 2014. And then South Carolina Technical College system– the video-specific complaints there were videos without captions and an inaccessible media player. And that complaint was resolved in 2013.

And then I wanted to bring to your attention three Dear Colleague letters from the Office for Civil Rights to educational institutions, regarding inaccessible IT. So while the laws don’t provide specific requirements or standards for online video, the OCR is taking a stance on the issue through their investigations, and their Dear Colleague letters provide really valuable insight on the standards, requirements, and solutions that they believe educational institutions should be following. And these three letters document particularly interesting trends.

So the Dear Colleague letter to the University of Cincinnati is interesting for a couple of reasons. First of all, it lays out the legal standards and applies them to concrete examples, and it gives examples of students who can’t access those services. So this really humanizes the inaccessibility of the University of Cincinnati’s IT, and it also identifies the breadth of online services that the OCR will cover in one of their investigations.

The University of Phoenix– this investigation was generated by a student complaint, and it was settled before findings were issued. But the OCR’s letter to the University of Phoenix is important because it actually lays out WCAG 2.0 standards that they should be following, which, as Owen said, are not written into any current laws in the US. And it also defines remedies for students who may have been harmed in the past by inaccessible IT, which is interesting because it’s saying that schools need to consider the retroactive damage of their inaccessibility.

And finally, the OCR letter to the Michigan Department of Education is important in a couple of ways. First of all, it lays out requirements for making a video accessible, including streaming versus hosting a video, captioning those videos, and making sure the player is keyboard accessible. And they used absolute language– so they wrote, “Videos must have captioning synchronized with the audio and must be verbatim of spoken words. Ensuring access to the control panel of a video is also critical.”

And it also uses the word violations instead of complaints. So calling their findings violations presumes that even though the law does not currently specify IT accessibility, the OCR is interpreting it as a requirement. So I just wanted to share these because, as you can see, the OCR Dear Colleague letters can provide a lot of insight and direction for schools that are hoping to improve their IT. And I definitely suggest looking to those for guidance.

OWEN EDWARDS: Great. And then, as far as low vision issues go, a lawsuit brought by the National Federation of the Blind against Penn State University was a landmark case in 2010, given that Title II of the ADA covers public entities, such as universities. The National Federation of the Blind brought a very broad complaint about the accessibility of a lot of different areas of the university’s digital presence. It covered library and departmental websites, the course-management system, and the smart podium used in their classrooms– which is clearly looking as much at lecturers and people presenting information as at the students receiving it. It also covered a bank the university had partnered with so that students could get particular benefits from using their student ID card there. That partnership meant the bank’s digital presence– both its websites and its on-campus ATMs– was within the scope of the NFB complaint.

The Department of Education’s Office of Civil Rights became involved in this complaint. And it resulted in a voluntary resolution agreement through their early complaint resolution process, which allows them to establish ways to address this complaint, and particularly focused on WCAG compliance for websites, for course-management systems, for setting up a whole structure of compliance management within the university– an office that handles compliance issues and complaints, that looks at existing systems, digital systems, and how new systems are developed.

And then another, more recent lawsuit was filed in 2012 by the Lighthouse for the Blind and Visually Impaired and a number of individuals, relating to access to video– specifically from Redbox. Redbox had a number of touchscreen kiosks that allow the rental of DVDs. And the Lighthouse for the Blind represented a number of individuals who brought a complaint under the ADA, as well as under California civil rights and disabled persons laws, about the inaccessibility of these touchscreen kiosks.

And we’re seeing that that really dispelled any assumption that blind people aren’t affected by these kinds of issues related to video, and really reinforced the fact that there are many people with blindness or low vision who want access to video, need access to video– whatever method they’re going to consume it by. So that was settled in 2014, and really affected that concept of access to video by people with vision issues.

And then another specific one that narrows that focus even more, down to audio description, was only filed in February of this year. And that specifically relates to movie theaters. The California Council of the Blind found that a lot of movie theaters claimed to provide description for people who are blind and low vision– equipment that allows people to listen to an extra description track while they’re watching a movie. It turned out that a lot of that equipment either didn’t work or staff weren’t trained on it. So that really raises the profile of description as a remedy for access to video content, and the expectation that it will be included. And we think that will start to affect how people look at whether description is needed for video in a broad setting, as well as the CVAA requirements that Lily touched on. But audio description is getting a much higher profile in both lawsuits and technology.

LILY BOND: Great. So now that we’ve covered the laws and the lawsuits, we’re going to talk some about what is good enough for these accommodations. So to begin with captioning, the ADA, Section 508, Section 504, CVAA, WCAG, and FCC all state that an equivalent alternative must be provided for video content, but they don’t give any specifics about percentages. But I wanted to share this chart because I think it’s really powerful in showing what an accuracy percentage really means.

So basically, you can see how word-to-word accuracy rates compound as you string more and more words together. You have to look at a 90% chance of the first word being right, times a 90% chance of the second word being right, and so on. And what you get is a pretty dim picture of accuracy. And that’s why really high accuracy rates are truly important for creating an equivalent experience.

And that being said, I just wanted to talk about automatic captioning, quickly. So on this screen is an example of YouTube automatic captions. And the caption reads, “Plaques double dealing allowing double the Minot for them and.” And what was really spoken was, “Flax, double the vanilla– always double the vanilla– cinnamon.” So even if you assume that YouTube automatic captions reach 80% accuracy– and I would say that usually ASR is more like 60% to 70%, and that differs based on the audio quality– 80% accuracy means that one in five words is incorrect. So an eight-word sentence will be about 17% accurate, and a 10-word sentence will be about 11% accurate.

So this is a 10-word sentence. The automatic captions have 10 words, and only two of them are right. So that’s a pretty dim picture of what automatic captions can be. And I think it’s pretty clear to everyone that those accuracy rates are really not good enough for an accurate caption file.
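[To make the compounding arithmetic concrete, here is a short sketch– a hypothetical illustration, not something shown in the webinar– assuming each word is transcribed correctly independently of the others:]

```python
# Probability that an entire phrase is transcribed correctly, assuming each
# word independently has the same per-word accuracy rate.
def phrase_accuracy(per_word_accuracy, num_words):
    return per_word_accuracy ** num_words

# At 90% per-word accuracy, a 10-word phrase is fully correct only ~35% of the time.
print(round(phrase_accuracy(0.90, 10) * 100))  # 35
# At 80% per-word accuracy (optimistic for automatic captions), it drops further:
print(round(phrase_accuracy(0.80, 8) * 100))   # 17 -- the 8-word sentence
print(round(phrase_accuracy(0.80, 10) * 100))  # 11 -- the 10-word sentence
```

[The independence assumption is a simplification, but it reproduces the 17% and 11% figures cited above.]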

I wanted to talk about the FCC standards for caption quality because in 2014 the FCC released four standards that they said are required to create a high-quality caption file. So this is a requirement for anyone who is covered under the FCC or the CVAA, although these standards do not extend to other industries. But they’re a really good basis to follow. So they say that captions should be accurate. They should match the spoken words to the fullest extent possible and include non-verbal information. So those are the important non-speech elements– the speaker IDs and that kind of thing. And they do allow some leniency for live captioning.

Caption synchronization– the captions must coincide with the spoken words and sounds to the greatest extent possible. Program completeness– the captions must run from the beginning to the end of the program. There were some issues with shows where the tags after the credits weren’t captioned, and that would be considered an inaccessible caption file by the FCC. And then finally, onscreen caption placement– the captions should not obscure other visually important and relevant content. So in a documentary, for instance, the text at the bottom of the screen that says who the speaker is– you wouldn’t want to obscure that with the caption file. You would want to move those captions to the top of the screen.

And just to talk about some best practices for captioning quality– these are just some of them. There are a lot, and there are other resources out there– the DCMP Captioning Key is a great resource for more standards. But the basics– for transcription standards, you should use proper spelling and grammar. You should include a speaker identification tag. And you should include relevant sound effects. Relevant refers to something like the keys jangling example from the beginning: you would want to include it if the keys are jangling behind a locked door in a horror movie. That’s a relevant sound effect. If it’s someone walking down the street with keys jangling in his pocket, and it has no effect on the rest of the content, then that’s not as important to include.

You can and should use punctuation to make the speaker’s intent clear. So instead of using a tag like “(SHOUTING) Hi,” you can say, “Hi!” exclamation point.

And finally, using verbatim transcription is really important, particularly when you’re looking at a TV show. If the speaker says um, it’s intentional– it’s scripted– and it should be captured. In other types of content, the transcript should be cleaned up so that it’s readable.

For caption frame standards, you want to make sure that there is a minimum duration of one second. Again, the captions should not obscure other visual information. You should not exceed 32 characters per line, nor more than three lines of text at a time. You don’t want to allow the last caption frame to hang on the screen through 15 seconds of silence. The font should be sans-serif, like Helvetica Medium. And caption frames should be precisely time-synchronized to the audio.
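[The frame standards above are mechanical enough to check automatically. As a rough sketch– the function name and structure here are illustrative, not an official tool:]

```python
MAX_CHARS_PER_LINE = 32
MAX_LINES = 3
MIN_DURATION_SECONDS = 1.0

def check_caption_frame(lines, start_sec, end_sec):
    """Return a list of violations of the caption frame standards."""
    problems = []
    if end_sec - start_sec < MIN_DURATION_SECONDS:
        problems.append("frame displayed for less than one second")
    if len(lines) > MAX_LINES:
        problems.append("more than three lines of text")
    for line in lines:
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append("line exceeds 32 characters: " + repr(line))
    return problems

# A well-formed frame passes cleanly:
print(check_caption_frame(["LILY BOND: Welcome, everyone."], 0.0, 2.5))  # []
```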

And then briefly, how to create captions yourself– you can start by transcribing the video, which usually takes five to six times real time. And you should include non-speech elements in that transcription. And then my recommendation is to use YouTube as a tool. You can transcribe and set timings by pasting in your video transcript and then setting the timings. And those are fairly good. And then another option would be to edit YouTube’s auto captions. So you could download that file and edit them, and re-upload.

And then just to talk quickly about caption formats– you need the right format based on the use case that you have. So on the left is a chart of common caption formats and the use cases for those. And then, just to use the SRT caption file as an example, that’s in the top right. This is an easy file to create yourself, or one of the easiest caption files to create yourself– I wouldn’t say any are particularly easy. But this is very readable. So you can see that it starts with the number of the caption frame, followed by the beginning and ending time codes for that frame, followed by the text that should be displayed.
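[Because the SRT layout is just a frame number, a time range, and the text, it can be produced with a few lines of code. This is a hypothetical sketch of that structure, not a tool mentioned in the webinar:]

```python
def format_srt_frame(index, start_sec, end_sec, text):
    """Build one SRT caption frame: frame number, time codes, then the text."""
    def timestamp(seconds):
        hours, rem = divmod(int(seconds), 3600)
        minutes, secs = divmod(rem, 60)
        millis = int(round((seconds - int(seconds)) * 1000))
        return "%02d:%02d:%02d,%03d" % (hours, minutes, secs, millis)
    return "%d\n%s --> %s\n%s\n" % (index, timestamp(start_sec), timestamp(end_sec), text)

print(format_srt_frame(1, 0.0, 2.5, "LILY BOND: Welcome, everyone."))
# 1
# 00:00:00,000 --> 00:00:02,500
# LILY BOND: Welcome, everyone.
```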

And then in the bottom right is an SCC file, which uses hexadecimal codes, which are a lot more difficult to understand and create from scratch, since they are just a sequence of numbers and letters. And just to mention– WebVTT and SRT files are used in a lot of web-based media players, so those are likely the ones that you will see a lot. And those work in things like YouTube, Brightcove, Wistia, HTML5, and JW Player.

OWEN EDWARDS: Excellent. So in the realm of audio description, this idea of good enough is a lot more nebulous. It’s not an area that is very well defined. There are no specific standards from WCAG, from the FCC, or within the CVAA. There are guidelines. And again, the DCMP– the Described and Captioned Media Program– has a Description Key, which gives some guidelines on how descriptions should be created, or what should be described. And a number of existing description companies have their own internal best practices or standards for what should be described– and particularly, what shouldn’t be described. One of the biggest decisions that needs to be made around audio description is what content can be described in the available time– in the available pauses in the dialogue of the original movie or video content.

As the CVAA requires more description to be created for broadcast video, we’re seeing that this may lead to standards development or lawsuits around quality. As I said, there are a number of companies that do description, and their quality is very good. But the question is whether an equivalent of something like auto captions– the automatic speech recognition that Lily touched on– would come into play, and whether it would be considered good enough. We haven’t seen this yet, but it’s an area that we are very much keeping an eye on, because there’s such an explosion of video that needs description. And in order to fill that requirement, there needs to be some idea of what quality will be acceptable. And we haven’t seen that played out in the courts yet.

In terms of how description’s implemented, the list here has a number of different options, and really goes from the lowest cost to highest cost. The best way to include description within video is to include it at the production stage, so that a separate track isn’t even needed. We see a number of videos being created where a lot of thought is put into– particularly for educational videos– including different viewers who have different disabilities, and making sure that the content is provided in an accessible way for different methods, whether you’re just listening to it, whether you’re just seeing the visual.

If that’s not an option– which is clearly the case for preexisting video content– then a number of different methods are available. One is a user-selectable audio track that includes the descriptions. And another one is an entirely separate video that has those audio descriptions already included– a little bit like open captions, but the description’s already included in the video. And typically, that’s what we’re seeing for things like YouTube– that people will upload a second version of a video, which has the description in it.

In a number of cases, we’re seeing that people will actually go one step further, recognizing that it’s hard to give the quality of description that is required in the available time in a video. A separate cut of the video will be created to allow more time for descriptions. That’s obviously a more expensive process, and is less common. The second and third options are more common. And really, the difference between them is the way that the video player or video platform provides the audio description– whether it supports a mechanism to play back a separate audio track.

There’s a fifth option that hasn’t really been widely investigated. It’s a possibility for the future: the idea that something like a captions track– a text track in one of those formats that we just touched on– could be used by people who are using something like a screen reader, which reads out text on the screen, so that it could also read out the text track and thereby create an audio description. We haven’t seen that widely used. Only very few players support it, and it doesn’t seem like a viable solution at this point. But we’re certainly keeping an eye on whether that will become a mechanism that can be used for description.
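The text-track idea Owen describes here can be sketched as follows: a hidden descriptions text track fires cuechange events, and a script copies each active cue into an ARIA live region so the screen reader announces it. Again, a sketch under assumptions– the element IDs are made up, the live region is assumed to exist in the page markup, and, as noted, few players ship this pattern natively.

```typescript
// Sketch: expose a WebVTT "descriptions" text track to screen reader users
// by copying active cues into an aria-live region. IDs are illustrative.

// Pure helper: build the text to announce from the currently active cues.
function announcementFor(cueTexts: string[]): string {
  // Join simultaneous cues; trim so empty cues announce nothing.
  return cueTexts
    .map((t) => t.trim())
    .filter((t) => t.length > 0)
    .join(" ");
}

if (typeof document !== "undefined") {
  const video = document.getElementById("main-video") as HTMLVideoElement;
  // A visually hidden element with aria-live="polite" in the page markup.
  const liveRegion = document.getElementById("description-live-region")!;

  for (const track of Array.from(video.textTracks)) {
    if (track.kind !== "descriptions") continue;
    track.mode = "hidden"; // fire cue events without rendering the cues
    track.addEventListener("cuechange", () => {
      const cues = Array.from(track.activeCues ?? []) as VTTCue[];
      const text = announcementFor(cues.map((c) => c.text));
      if (text) liveRegion.textContent = text; // screen reader speaks this
    });
  }
}
```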

As I mentioned, a number of vendors exist– Audio Description Associates; WGBH has a good presence in this and the captioning space; Narrative Television Network; Bridge Multimedia; DCMP; CaptionMax; Audio Eyes; DICAPTA. And indeed, DCMP has a much more exhaustive list of people who provide description.

So there’s a good number of reputable vendors out there who know about quality, have a lot of experience with what is good enough, and know the mechanisms to create description and provide it to end users. What we’re seeing is that there really isn’t a good in-house solution– an equivalent of what Lily described for captioning– and that presents a high risk for organizations. If organizations try to create their own in-house, post-production description, that potentially exposes them to liability similar to what Lily described with the online educational content, where there was a lawsuit about the quality of automatic captioning.

So moving on, one of the things that [INAUDIBLE] starting to consider video description brings up is that, beyond captioning, once you look into the needs of people with other disabilities who need access to video, you then have implications for the video player itself– particularly for people using screen readers, but also for people who can only use the keyboard, who for some physical reason can’t use a mouse; people who have low vision rather than being fully blind; and people who use voice control– something like Dragon NaturallySpeaking– to control their computer, and therefore the video player.

And then the platform or video player also, obviously, needs to have one or another of those mechanisms that supports description playback. And a big thing that causes problems– one we need to highlight, and that we see a lot in online video– is auto playback. Landing on a page and having the video automatically play is a huge issue for screen reader users, because it talks over their screen reader, their mechanism for controlling the page. It’s very hard for them to even pause the video if it plays automatically.
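A defensive sketch of that point, assuming you control the page: find anything set to autoplay and stop it, so nothing talks over the screen reader until the user explicitly presses play. The selector and the stop-everything policy are illustrative choices, not a standard.

```typescript
// Sketch: pause any autoplaying media so it doesn't talk over a screen
// reader. Assumes you control the page; the selector is illustrative.

// Pure helper: decide whether a media element should be stopped on load.
function shouldStopOnLoad(opts: { autoplay: boolean; muted: boolean }): boolean {
  // Even muted autoplay can be disorienting; this policy stops anything
  // that starts on its own and waits for an explicit play action.
  return opts.autoplay;
}

if (typeof document !== "undefined") {
  document.addEventListener("DOMContentLoaded", () => {
    const media = document.querySelectorAll<HTMLMediaElement>("video, audio");
    media.forEach((el) => {
      if (shouldStopOnLoad({ autoplay: el.autoplay, muted: el.muted })) {
        el.pause();
        el.autoplay = false; // keep it from restarting on source changes
        el.currentTime = 0;
      }
    });
  });
}
```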

So just touching on a number of video players and platforms that provide accessibility– there are several out there that specifically focus on it. The Able Player that I showed on the previous slide, the OzPlayer that’s on this page, and PayPal’s HTML5 accessible video player are specifically focused on accessibility and provide some features that aren’t present in other players. But they may not be the best platform or player for organizations that have a large amount of content or an existing relationship with a vendor.

And so there are a number of other vendors. Lily mentioned some of them– Kaltura, YouTube, obviously, JW Player, Brightcove. What we’re seeing at this point is that people who have content and are going through the process of selecting a platform or player need to increasingly consider the accessibility of that player, alongside the accessibility of the video they’re going to put on it. And we’re working with clients to identify which players best fit their needs, in terms of supporting their content and also providing the accessibility that’s required.

LILY BOND: Great. So at this point, we are ready for Q&A. I wanted to mention that we have several upcoming webinars that may be of interest. In particular, at the end of September, Lainey Feingold, who is probably the most recognized disability rights lawyer in the US, will be presenting on a legal update for IT digital accessibility cases. You can register for those on our website.

So let’s see– a lot of questions are coming in. Again, I want to encourage people to keep typing those in, as we go through those. So Owen, there’s a question for you regarding audio description for recorded lectures used in online courses. How do you recommend using audio description for lecture slides and static images?

OWEN EDWARDS: Well, again, that’s an area where it’s about whether the images contain information that isn’t being described verbally. I’d like to think that in the presentation we did, we covered verbally everything that was available visually, and the visual presentation was just there to back it up. Similarly, in lecture slides, the intent of the lecturer– the intent of the presentation– should be to include all of that visual content. And an area to really think about is whether there are important images– whether there is graphical content included in the presentation– that really need a description beyond their visual presentation.

LILY BOND: Thanks, Owen. Another question here is, what is the source for the quote that 80% of people using captions are not deaf? That’s a great question, and I’m sorry I did not speak to that at the time. Ofcom, the UK’s Office of Communications, published a study that looked at over 7 million users in the UK. And they found that 80% of the people who were using closed captions were not deaf or hard of hearing. So that’s where that number comes from.

Another question here, Owen, for you, is are there copyright issues with creating separate audio description videos?

OWEN EDWARDS: That’s a very interesting question. And there has been some debate about where this falls within copyright law. Typically, when you’re working with an outside vendor to create description, they create the description as a separate entity, which is then put back into the video content. And the video content is somehow owned by the entity that’s adding the description. But there are discussions about the implications of including a description, which is essentially derived content. A number of the blind advocacy organizations– including NFB and AFB– have been looking into the legal impact of this. It’s not an answered question yet.

LILY BOND: Thanks, Owen. Another question here is, is it a best practice to caption silence or leave it blank? That’s a great question. From an accessibility standpoint, imagine a deaf or hard of hearing viewer watching a video: they would have no idea that the video is silent if it’s not tagged as such. So to create that equivalent experience, you do want to say silence or no speech, so that they know there’s not something they’re missing.
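As a concrete illustration, a WebVTT caption file might tag a silent stretch with an explicit cue– here with made-up timestamps, alongside a non-speech sound effect like the keys-jangling example from earlier:

```text
WEBVTT

00:00:12.000 --> 00:00:15.000
[NO AUDIO]

00:00:15.000 --> 00:00:18.500
[keys jangling]
```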

Another question here– and Owen, maybe you can speak to this, since I think you were starting to cover that a little bit– is it possible to create one description or caption file that is adequate to be viewed or heard by all audiences, like blind, low vision, and deaf users?

OWEN EDWARDS: That’s a very interesting question– a single file which contains both the captions and descriptions? We haven’t seen that widely used. There are people who have considered that in terms of a transcript, where there isn’t necessarily time synchronization with the video. That would meet the WCAG Level A requirement of providing a transcript, but wouldn’t meet the caption requirement. So there isn’t an easy way for that to become a description track.

LILY BOND: Yeah, it’s a complicated question. And I think you were starting to speak to the potential for a time-coded– like a time-synchronized description file that would run like a caption file. And I would imagine that there is a world in which those could be combined. But mainly, it would be complicated for making sure that all users know which is which, and differentiating between the audio, the caption file, and the description file. But it’s an interesting idea.

OWEN EDWARDS: And I think one of the issues it brings up is that with description, in general there’s far more information presented by a video than can be described. And so adding captions would need to be done in a way that people who didn’t need the captions weren’t presented with those as well– because as voice, they would just add to the overload.

LILY BOND: Yeah, I would love to see separate caption tracks– like a caption track and then a time-synchronized text description track. I think that would be a really great way to do that.

OWEN EDWARDS: Right, right. And that’s a very interesting area that there’s certainly research going on around. And we just haven’t seen it broadly implemented.

LILY BOND: Yeah. Someone is asking– you mentioned cases related to higher education, but are there differences for K-12 institutions? The legal requirements differ slightly, although public schools are required to comply with the ADA. And I will say that the OCR recently came out with, I believe, 13 settlements that cover K-12, and I will provide a link to those in the follow-up email. Owen, I don’t know if you have any other insight on that.

OWEN EDWARDS: Yeah, I don’t know how that would be different for K-12. As you say, it’s an ADA issue. And as the DOJ has started to include online content– and therefore WCAG compliance– within the ADA, that would be the obvious way it’s impacted. So it would seem to be the same. I’m not aware of anything specific that breaks that out.

LILY BOND: Yeah. Thanks, Owen. You might want to cover this one. Can you elaborate on the changes coming in October to Section 508? Where might I find more information on this, and are there discussions about which WCAG standards will be written into the Section?

OWEN EDWARDS: Yes, there’s definitely a lot of information online. The SSB BART Group blog has some articles about what’s changing with the Section 508 refresh. It’s really about aligning the existing 508 standards– which were devised at a time when digital online content wasn’t in the form it is now– with WCAG 2.0, typically Level AA. So it gives some specific requirements in terms of access to websites– whether color is an issue, whether keyboard accessibility is possible– and adds granularity, being more specific about video accessibility by bringing in some of those requirements from WCAG 2.0.

LILY BOND: Thanks, Owen. Someone is asking a question similar to the copyright question about audio description. If a video– a YouTube video, for example– is used in a course and has no captions, and the owner will not caption it, will accessibility law trump copyright law? There are a lot of arguments that education is a fair use. There’s also a lot to be said for how seriously the courts are pursuing accessibility violations. That being said, you should definitely take it up with your legal counsel, who may be able to provide you with more information.

We have a very useful resource by Blake Reid, who is both an accessibility lawyer and a copyright lawyer. He has presented very articulately about the intersection of accessibility law and copyright law, and where you should stand on that. And I will also provide that resource.

We do provide a tool called the Captions Plugin, which allows you to publish captions on a video without republishing the YouTube video. You could just embed the YouTube video and then embed the caption file below it, without having to republish and take views away from the YouTube user. So that’s one way around the copyright issue.

Someone else is saying, what languages are mandated for captioning– English only? That’s a great question. The FCC and the CVAA– all of their requirements apply to English content and Spanish content, as well as mixed English and Spanish content. But as of now, there are no requirements beyond English and Spanish captioning.

Another question here is, WCAG is a little confusing at times. If we are striving to meet Level AA, do we have to provide transcripts if captions are present in the prerecorded video? What is the rule about one or the other, in terms of AA compliance? That’s a great question. For a video file, Level A requires captions, because they are time synchronized. It is always better to provide a text transcript as well, but a transcript is definitely required for audio-only content. And I think there is some confusion, as you said, about whether both a transcript and a caption file are required– I believe that both are. Owen, do you have any other insight about that, for the AA standards?

OWEN EDWARDS: Right, exactly. It does cause confusion; it’s an area that hasn’t been very clear. For Level A, a caption file is required, plus either a transcript or audio description. But moving to Level AA, audio description is required. And so in most cases, it makes sense to start with audio description at Level A, because it will be required at AA.

LILY BOND: Thanks, Owen. Another question here– how would you suggest universities respond to private law firms who send letters claiming that they have tested web accessibility of your websites– especially video content– and found them lacking, with the implicit threat of a Harvard/MIT-style lawsuit to follow? It’s a very complicated question. I will say that it’s clear the OCR and DOJ are taking these complaints seriously. It is always safer to be accessible.

It’s really important to get your legal counsel involved at that point and determine what you should do from there. These laws are meant to improve accommodations, not to define ways you can get out of them. You always want to err on the side of accessibility. But it’s something you should be cautious about. There have been a lot of complaints, a lot of resolution agreements, a lot of lawsuits. It’s something to take seriously, and definitely something to bring your legal counsel into.

OWEN EDWARDS: Right, Lily. And just to piggyback on that– absolutely bring in legal counsel. We at SSB BART Group work with a lot of legal counsel in that kind of setting, where there’s been some implicit or explicit threat of a lawsuit. And typically, the first step a legal counsel will take is to bring in an outside company that can do an audit– an investigation of what the exposure is.

LILY BOND: Thanks, Owen. Someone else is asking, is there an easy way to find copies of the Dear Colleague letters from OCR to the universities discussed in the presentation? Yes, all of the OCR letters are publicly available online. A simple Google search should do it. But I will be happy to follow up with those links as well, if you would like them.

And let’s see– it’s just about 3:00 right now. So I think we’re sadly out of time. I apologize to everyone who did not get their questions answered. But feel free to reach out to us about those offline. Owen, thank you so much for presenting with me. It’s always a pleasure to have your expertise on the line.

OWEN EDWARDS: Thanks, Lily.

LILY BOND: And thank you to everyone who attended. A reminder that you will receive a recording of this presentation, as well as the slide deck, tomorrow. And I hope that everyone has a great rest of the day.