Tegrity Captioning: Strategies for Deploying Accessible Lecture Capture Video: IPFW and McGraw-Hill Education Case Studies [TRANSCRIPT]
TOLE KHESIN: So we’ll get started. Thanks, everyone, for joining us. And thank you to Tegrity for organizing this event. This session is titled “Strategies for Deploying Accessible Video.”
My name is Tole Khesin with 3Play Media. We are based right here, actually, across the river in Cambridge. And for the last five years, we’ve been providing captioning and transcription services to our customers in higher ed, government, and enterprise. We have a partnership and integration with Tegrity that makes the captioning process much simpler. And we’ll talk a little bit more about that later.
We are joined today by Mike Phillips from IPFW. That stands for Indiana University-Purdue University, Fort Wayne. Mike is a former Army Ranger and a veteran of corporate media production and management. Currently, he is the multimedia technologist at IPFW, which basically means that he is the go-to person for all media delivery and management. And he is a native of Fort Wayne, Indiana. So welcome him to Boston.
We’re also joined by Neil Kahn from McGraw-Hill Education. Until about a year ago, Neil focused for several years on user experience, which means driving usability and design for McGraw-Hill software products.
Recently, he has moved to a new role as the digital product analyst at McGraw-Hill. So basically, in that capacity he is helping to transform McGraw-Hill from a traditional publishing company into a purely digital future. And Neil is a New York native who lives on the Upper West Side of Manhattan, so welcome him as well.
So as for the agenda, we’ll start out with some highlights from recent accessibility data from the World Health Organization and the US Census. We’ll talk a little bit about the laws impacting captioning and accessibility, and recent and upcoming accessibility requirements. I’ll talk a little bit about the value propositions of captioning and the Tegrity Automated Captioning Workflow. Then I’m going to hand things off to Mike from IPFW, who will do a presentation about their accessibility strategies and the critical issues that they’re thinking about. And then, Neil Kahn will talk about accessibility at McGraw-Hill.
So we have about 45 minutes for this session. We hope to leave about 10 minutes or so for an open discussion and Q&A. And some of the critical issues that the speakers will touch on as they’re talking are accessibility strategy and the laws that people are paying attention to, what types of content have the greatest need to be captioned and made accessible, proactive versus reactive approaches, prioritization and how budget factors into the equation, and how new technologies have been helpful.

Disability on the Rise
So to begin at a very high level, this is a 2011 report from the World Health Organization, which states that there are a billion people in the world with a disability. On a national level, there are about 56 million people in the US with a disability. 48 million people, so about 20%, have some hearing loss. The quote I isolated from that report: nearly 1 in 5 Americans aged 12 or older experience hearing loss severe enough to interfere with day-to-day communication.
11% of post-secondary students report having some disability. And there’s also been a rise in veterans claiming disabilities. So of the 1.6 million veterans that have sought disability, 177,000 of them claimed a hearing loss.
And what’s actually really interesting in this report is it says that all of these numbers have been rapidly on the rise– very disproportionately to population growth. And it’s interesting to look at why disability is on the rise. There are quite a few different reasons why that’s happening.
One of the biggest contributors is medical advances. So for example, premature births or stroke victims are much more likely to survive now. But they’re also more likely to have a disability as a result.
Society is also aging.
Recession probably has a lot to do with it, with people more likely to report and claim a disability.
And then of course, a decade of war, which sort of ties in with medical advancements, just technological advancements in general. For example, with modern armor, soldiers are 10 times more likely to survive an injury today than they were in previous wars. But as a result, they may sustain some sort of injury, like a hearing loss. So all of this basically points to the fact that accessibility is a critical issue that will become even more prevalent in the coming years.

Relevant Accessibility Laws
So a little bit about the relevant accessibility laws. Sections 504 and 508 are both part of the Rehabilitation Act of 1973, which requires equal access to federal programs for people with disabilities. And really, any program that has a federal subsidy.
Section 508 is a fairly broad law that requires equal access to federal communications and information technology. For video, that means that you have to have closed captions. For audio or audio podcasts, a transcript is sufficient, so you don’t necessarily need to have the time codes.
But the video doesn’t necessarily need to be a moving picture. It could be a slide show presentation with the recorded audio track. So that would require closed captions as well because a deaf person would need to follow along at the right time and read those captions at the right time.
So a more recent law is the 21st Century Communications and Video Accessibility Act, often abbreviated as CVAA, which applies basically to online video that airs on the internet. But also in parallel, airs or has aired on television. So this really applies to broadcast and cable companies. So companies like Netflix and Hulu are affected by that.
There are a number of milestones associated with that law, which was passed in October of 2010. Two of them have already been phased in.
So right now, pretty much any kind of prerecorded programming that airs on TV and also simultaneously airs online has to have captions. So if it’s unedited, if a show is on TV and you also put it online, it has to have captions.
Any live and near-live programming such as news or sports events now need to have captions as well.
There’s another deadline that will trigger in September of this year that will apply not only to television programming that’s on the internet, but also for anything edited. So if you take a show and you create clips from it, or you edit it in any capacity, then you need to have captions as well. That’s actually a pretty significant phase.
And then finally, in March of next year, that will broaden to include all archival programming. So for example, a TV show that aired on TV 30 years ago, and a clip of it is somewhere online. That will have to have captions as well.
Actually, one other thing not on the slide, but it’s worth mentioning. The Americans with Disabilities Act has had a lot of attention recently with the lawsuit against Netflix. The thing about the ADA and the way that it applies to accessibility is that the question courts ask is whether something is a place of public accommodation.
And in the past, what constitutes a place of public accommodation has been open to interpretation and has been vague. So it wasn’t really clear how it applied. But last year, in a case against Netflix, the courts pretty clearly ruled that Netflix was, in fact, a place of public accommodation and had to have captions for their movies under the Americans with Disabilities Act. And that actually has pretty profound implications in other areas, including corporate and education.
So it’s still early. Courts didn’t get into the details of what constitutes reasonable accommodation. But things are definitely happening, and we expect to see a lot more in that area.

The Benefits of Captioning Tegrity Lectures
So a little bit about the value propositions of captioning. Obviously, the primary reason why content is captioned is as an accommodation for people with hearing disabilities. And that’s critical.
But it’s also worth noting that there are many other reasons. We have more than 500 customers. And it’s interesting that many of them actually caption their content not for an accessibility reason, but for one of these other reasons.
Just to kind of go through some of these, like the flexibility to view the content anywhere. In the workplace, speakers are often turned off and sound is not allowed. And there are other sound-sensitive environments, like a library in a university.
The other interesting thing is that most people that use captions are actually not deaf. They don’t have any hearing disabilities. They use it because it helps them to comprehend the content. So students that know English as a second language are actually the most common users. Especially if a professor maybe has an accent or is hard to hear. Captions really add a lot of value, especially in education.
Some other reasons why people do it: once you transcribe or caption content, it becomes searchable. And that’s really important as universities are amassing these archives of video content. Unless it’s transcribed, you can never really find anything. It’s impossible to find video content and reuse it.
And that’s also important not only for internal search, but also for discoverability in external search, if you’re searching for something in Google, for example. If you transcribe your video content, people are much more likely to find what they’re looking for.

3Play & Tegrity Captions Workflow
Before I hand things off to Mike, a little bit about the captioning workflow that Tegrity has built. So the traditional captioning workflow is a little bit more complicated. There’s more involved to it. So it really involves figuring out what player or platform you’re using.
And so what you do is you would take your video and you would upload it to a service, like ours. We would create the captions. Then you would figure out what captions format you need. There are many different– these are a few of the different types of captions formats. You’d figure out which one you need for your video player or platform. You would download it, and then you would upload it to your video platform. Then you would publish it. So there are a lot of steps involved.
And what we have done with Tegrity is that we basically automated that captioning workflow. So now, the way it works is you have your Tegrity account and you have your 3Play Media account. You create a linkage. It’s a one-time process that really just takes a few minutes to set up.
And then, faculty or anyone can initiate a captioning request. You just select which classes you want to have captioned. And there’s an admin approval step where the admin person has to approve that job.
And then, that media file is automatically sent to us. We process it and create the captions and send the captions back to Tegrity. And they get re-associated with your media files. And so basically, all of this happens behind the scenes. All you really need to do is just select which files you want to have captioned. So everything is taken care of on the back end.
And then you can monitor the status, the live status. Either from Tegrity or from the 3Play Media account system.
The standard turnaround is four days, but you can actually expedite that. We actually, recently rolled out a same-day service. You can have it back, literally within a matter of hours.
This is an example from a screenshot from Georgia Tech. So what you can see here is the captions show up right below the video player right there. And they can be toggled on or off. They’re searchable. And they post back automatically.
And then, last thing is that once you caption your Tegrity content, it opens the doors to a lot of other options.
Let’s say for example, you wanted to co-publish on YouTube, or on some other platform. That’s very easy to do because in 3Play Media, we actually store all of the captions formats indefinitely in your account.
So you can login, download whatever format is required for YouTube or whatever platform you want to do, and you can use that for co-publishing.
You can also install– this is what you’re looking at here in the bottom-left corner. That is an interactive transcript, which uses the existing transcript and captions data to make the video searchable, interactive, and more engaging. So it sort of takes that video to the next level using the work that’s already been done. There’s actually no additional cost to that. So these are additional options that you can do beyond Tegrity. So now I will pass things off to Mike from IPFW.

IPFW Approach to Captioning University Classes
MIKE PHILLIPS: [INAUDIBLE] than we had last year. Last year, this room was standing room only. So you guys have taken the pressure off. This is good.
A little bit about IPFW. We are a unique entity in the university system in Indiana. We are an entity of both Indiana University and Purdue University. There is also IUPUI in Indianapolis.
The unique part about that is that IU governs IUPUI and Purdue governs IPFW. So my paycheck comes from Purdue.
IPFW was founded in 1969. It was an effort of the state legislature to expand both Indiana University and Purdue University to the two largest cities in Indiana.
Our students can earn degrees from Indiana University, Purdue University, or IPFW. And we are the fifth largest campus in the state of Indiana.
When it comes to captioning, our approach at IPFW came out of need. We had already been dialoguing with the Purdue attorneys about what we need to look for and pay attention to in captioning.
And as many of you know, the costs can be quite great. And so we took a posture of accommodation. So as a student with a disability approaches us, we then accommodate that need. And strangely enough, as we were dialoguing about this, the next semester we had several students approach us with this need.

Universities & Accessibility Law
What the Purdue attorneys told us to be watchful of– and it’s their interpretation of the law. All I can do is relay to you what I was told, as the technologist, to watch out for.
And that was, first of all, we have to have a plan in place. And apparently, the way the law is written– 508, you have to have a plan in place. And secondly, we had to be able to react– react being a keyword– within 11 days.

Creating a Captioning Workflow at IPFW
Well, at IPFW, we have several systems that we felt needed to work within our captioning process. Number one, it has to work with Tegrity. Number two, it has to work with Kaltura. Is there anybody not familiar with Kaltura?
Kaltura is a cloud-based media storage and accessibility platform, that when you look at it and work with it, you might immediately think YouTube. But it sits behind our LDAPS and is fully secure for our university. And it’s great.
You can edit within the environment. You can create playlists within the environment. It’s fully integrated with Blackboard. It’s a great tool.
We also have Echo360 suites. And I’ll probably get hit with lightning bolts since I’m at a Tegrity conference. But nonetheless, we have 10 Echo360 studios at our campus.
There was a time when our campus was very siloed. Distance learning kind of did their thing with Echo and IT. We serviced the open classrooms with Tegrity. And then we also had a Mediasite room, where the college of business for the master’s program went out on their own and built this Mediasite room. And we’ve since eliminated the Mediasite room because it just didn’t work well for us at all.
Well, when you look at closed captioning companies, pretty quickly there are two big players in that industry. One being 3Play Media and the other being Automatic Sync Technologies. And we actually sent projects to both companies for evaluation so we could see how the companies were to work with, how willing they were to accommodate our needs as opposed to just working within their box. And obviously, we chose 3Play Media. And we chose 3Play Media for three main reasons: service, support, and value.
Every time that we challenged them with a project, they not only met but exceeded our expectations each and every time. And they were very willing to work with us when we would challenge them with our budgetary constraints.
Our first project with them went flawlessly. This was actually an Echo360 presentation that was archived, that we played back through Kaltura. We brand it, and we use a special captioning player. And the really beautiful thing for us is that there’s zero latency. And it’s virtually right-on accurate. And we’ve had nothing but success.

The Cost of Captioning for Universities & Institutions of Higher Learning
Let’s talk about the 800-pound gorilla in the room– the cost. Well, as we were looking at all of it, as you all know, the cost can be pretty high. Typically, about $150 an hour. And we were looking at doing entire semesters and class sessions that run– what? 70, 80 minutes a piece.
Well, when you add that up, it adds up real fast. And so in talking to Tole and 3Play– and I’m beating the poor guy up, trying to beat him up on price. And what they came back to us and proposed is, why don’t we pre-purchase a bucket of hours?
And within that bucket of hours, our costs were– we weren’t paying rate card. And for us, when we stood back and took that 10,000-foot view, and then referred back to what the attorneys told us we needed to be cognizant of– A, it started to fulfill our need to have that plan in place. But it also fulfilled the need for us to accommodate that reaction time.
Because I don’t know about you at your universities, but part of our process is when we get a ticket in that somebody needs something closed captioned, it comes to me. And then I, in turn, send that for approval to the director of students with disabilities. So he’s going to give it a thumbs up or a thumbs down. If it’s just an instructor wanting to caption for the sake of captioning, it’s not going to happen.
But if it’s a student with a hearing impairment, it’s definitely going to happen. So if Eric’s traveling, or he’s out of the office or he’s in meetings, I don’t let grass grow under my feet. I start trying to locate that media right away.
If it’s in Tegrity, if it’s in Kaltura, I can find it in minutes. If it’s in Echo, that’s a different story because we archive a lot of things in Echo over in the library. And then, there’s also this terrible mess within the Echo servers that it’s not very well-managed workflow. So if it’s in Echo, I know I’m going to have a challenge finding it.
But then, what we start doing is we compile that media and we start getting everything in place. So when we get that thumbs up from the director of students with disabilities, we fire it off to 3Play Media. It’s like Tole says, everything happens in the background. And over the following days, as you’re watching your content, things start populating. And that gives you the opportunity, as the administrator, to go in and start checking things.
Well, moreover, with the budgetary constraints, we had to pony up that money for the bucket of hours. And our vice chancellor of academic affairs ponied up that initial money. And now, policy is in place at our university that says, if you’re teaching a history class and you have a student that needs this accommodation, the bill’s coming to you.
And then we do a chargeback system against that department to pay the vice chancellor’s account back when we need– because we want to replenish his account.
The final step is if we find we have time in our bucket of hours left over, then we start looking at video that’s on the public side of our website. And then we start captioning that. So that’s kind of our plan for use with our bucket of hours. But the bucket puts our plan in place and it greatly helps us beat that reaction time. And from the time we get a request, it may be one day. It may be three days. It may be four days before all the work is done.

University Captioning Turnaround Time
And we’re talking about 29 classes, an hour and 20 minutes apiece. And that’s a pretty amazing turnaround time. And we owe a lot of that to 3Play Media in helping us come up with this plan.
The one thing that I can say about 3Play Media is I have three main vendors, three main go-to vendors that I really count on heavily. And 3Play Media is one of them, because every time we challenge them with a project, it comes off flawlessly. With that, if you have any questions of me, I’ve left business cards up here. You can grab a business card and give me a shout. And I’d be more than happy to talk to you about any challenges you have at your university.
And with that, I’m going to pass it off to Neil.

McGraw-Hill Education Digital Publishing
NEIL KAHN: Thanks, Mike. Good morning, everybody. [INAUDIBLE]. Hold on a second here. So as Tole mentioned, I am a digital product analyst for McGraw-Hill. McGraw-Hill is the publisher of Tegrity as you well know.
Very quick background on McGraw-Hill. McGraw-Hill was started in– let me tell you it’s [INAUDIBLE]. McGraw-Hill was started in 1888 by a teacher in Upstate New York named James McGraw. He bought a publication called the Journal of Railway Appliances.
At the time, antitrust legislation was very much becoming an important issue. And he first saw the need for accurate information for industry. And so he bought this publication, which had all these kind of cool products for the railway industry and pretty much every other heavy industry at the time. So that was 1888.
And so we’re talking about, what? 125 years. And since then, McGraw-Hill has become one of the most recognized brands around the world, particularly in educational publishing. Everyone seems to remember this little red square on textbooks from when they were kids.
Today, we employ about 6,000 people in 44 countries. We publish in more than 60 languages.
And just a few weeks ago, McGraw-Hill actually split into two separate companies. So there are now two companies, McGraw-Hill Financial and McGraw-Hill Education. Two entirely separate entities. So that’s hot off the presses.
So we are now a digital learning company, and we partner with educators and institutions to improve education and results for students and professionals around the world. We’re doing this by combining more than 100 years of trusted content and pedagogy with technology that’s been proven to deliver results. So we’re going to talk a little bit about making your products accessible. You just heard a lot about Tegrity. I’m actually going to talk about accessibility a little more generally, as opposed to specifically Tegrity.

Making McGraw-Hill Education More Accessible
I’m going to talk about some of our efforts at MHE to make some of our products more accessible. Hopefully, this will give some of you who are new to accessibility an idea of where to start.
So where do you start? There’s obviously a lot of disabilities out there. I actually heard the other day that one of the leading disabilities in the United States now is ADHD. And I know some of you– I don’t know how many instructors we have in the room, but I know a lot of people are using– sorry, I’m gradually losing my voice this week.
A lot of you are using Tegrity to deal with ADHD issues because students are less inclined to read than they are to watch video. And so I thought that was a really interesting thing I don’t think that many people think about when they think about this. But for the most part, educational institutions today are starting with dealing with accessibility in regard to hearing impairments and visual impairments.
So when we talk about accessibility, what can we make accessible? There are basically two things. We can make the platform more accessible. We can make the content more accessible.
In terms of the platform, I’m just going to mention this quickly. But things like using proper semantic tagging when you’re building a site, and allowing students to navigate through the site by keyboard.

How to Make Educational Video Accessible
But today, I’m going to focus on content. And specifically, video content.

Video Transcripts
So in the past, one of the best ways to make video accessible was to offer transcripts to students. As you see here, there’s just a little link here that says View Transcript. You click that and this window pops up and shows you a transcript of the video.
We’ve been doing this at McGraw-Hill for years because it was relatively simple and it didn’t require a lot of technology in terms of the viewer.
Some advantages: the user can hide and display it, it can be read all at once, and it’s easy to copy, paste, and print.
The disadvantages are that it’s not synced with the video, so you have to read it separately when you’re watching the video.
It requires more screen space. You often have to figure out where you’re going to put it so it doesn’t block the video. That was more of an issue when screens were a little smaller, but still something of an issue.
And of course, there are more pages to code and manage. That little window has to be a separate page that has to be built.

Open Captions
So the next thing that was popular was open captions. So we could talk about open captions and closed captions. Do people here know the difference between open and closed? Show of hands?
OK. So open captions are when the captions are actually part of the video. So this little area that you see here, this is actually burned into the video. You can’t turn it on. You can’t turn it off. Kind of what you might imagine when you see subtitles in a movie. Those are actually in the film itself.
So there are some advantages to that. It doesn’t require any technology. The captions always travel with the video. There are no separate files to manage or anything like that. It tends to be good for things like DVD or projection, where you’re not actually sure what the setup is going to be when you’re presenting the material.
The disadvantages are that you can’t turn it off. It’s always there. It can be very expensive to change, as you can imagine. If you need to make some kind of edit, or let’s say you wanted to do it in a different language, you would literally have to produce that video all over again. You’d have to go back to the video production house, or whoever, to create new titles. And last but not least, it can’t be indexed or searched. It’s essentially dumb text.

Closed Captions
That brings us to closed captions. So closed captions are– they look very similar to open captions. They appear at the bottom of the screen, usually. But they are not actually part of the video itself. They’re kept in a separate file.
So the advantages here– and we’re all used to seeing this on television or a lot of websites now. And that’s primarily what we’re going to talk about today. So the advantage is you can turn it on and off. Either the user can or an administrator can. Administrator can say, I want this little button here to be available to my students, or I don’t want it to be available. I don’t know why you wouldn’t want it to be. But let’s say they did, they could do that.
The material can be searched and indexed. I think both Tole and Mike mentioned, this is a really important thing. Video– it’s very difficult to index the content in video. And using the closed caption data is a workaround. It allows the search engines basically to understand what the content is in the video. And that’s really, really important.
Very simple to edit to localize if you had captions and you needed to create them in another language. It’s a very simple change to the data file.
And of course, it can be read by screen readers. So if you are visually impaired and you’re using a screen reader, that the material can be read out loud to you.
The disadvantage is that it requires a modern browser. Some of the older browsers don’t quite know what to do with closed captions. That’s becoming less and less of an issue as things like IE6 and [INAUDIBLE] sort of fade into the background.
And in terms of captioning, at MHE in terms of closed versus open, we’ve decided to use closed. And I think that that’s more or less the trend that everybody is following now.
So just to give you a very quick idea of how captioning works– I don’t want to get technical, but there’s basically a file. There’s something called an SRT file that’s being used.

Caption Formats
I think in Tole’s slide, he showed that there were about a dozen different formats for captions. There’s a Windows program called SubRip that became popular a few years ago. It was used to pull captions off of DVDs. It’s a free program. And over the years, the format that it uses, called SRT, gradually became the de facto standard. So almost every video player out there supports an SRT file.
And if you look at what an SRT file is, it’s really, really simple. It’s just a text file with an .srt extension at the end. And these are just time codes. So from four seconds to six seconds in a video, this is what’s displayed at the bottom of the screen. Very, very simple.
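To make what Neil is describing concrete, here is a minimal sketch of an SRT cue and a toy parser for it. The timestamps and caption text are made up for illustration; a real parser would also need to handle malformed cues, byte-order marks, and formatting tags.

```python
import re

def parse_srt(text):
    """Parse SRT caption text into a list of (start, end, caption) tuples.

    Minimal illustrative parser, not a full SRT implementation: it assumes
    well-formed cues separated by blank lines, with HH:MM:SS,mmm timestamps,
    and it ignores the cue sequence number.
    """
    cues = []
    # Cues are blocks of lines separated by blank lines.
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 2:
            continue
        # Line 0 is the cue number; line 1 holds "start --> end".
        match = re.match(
            r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})", lines[1]
        )
        if not match:
            continue
        # Everything after the timing line is the caption text.
        cues.append((match.group(1), match.group(2), "\n".join(lines[2:])))
    return cues

sample = """\
1
00:00:04,000 --> 00:00:06,000
Thanks, everyone, for joining us.

2
00:00:06,000 --> 00:00:09,500
This session is titled
"Strategies for Deploying Accessible Video."
"""

for start, end, caption in parse_srt(sample):
    print(start, "->", end, "|", caption.replace("\n", " / "))
```

The key point of the format is exactly what Neil says: it is just plain text and time codes, which is why caption data is so easy to edit, localize, and index for search.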
And it’s very, very similar to a style sheet, in that the web page where the video appears would simply call this file as a reference. The web page would need to know where the file sits in your directory. So technically, very, very simple.
There is another web format coming up that I want to mention that’s going to be very important, called WebVTT. It’s not really implemented yet. It stands for Web Video Text Tracks. And this is being proposed for the new HTML5 specification.
SRT files are very, very simple. There’s very little you can do with them. You can make things bold. You can make things italic. You can change the font. That’s about it.
With this new VTT standard that’s coming, you can do a lot of things. It’s basically anything you can do with CSS, you can do to a VTT file. And this is going to become the standard over the next few years.
Now, it’s currently still only a specification, or being proposed for the specification. And there are currently no browsers– I think possibly with the exception of Chrome, latest version of Chrome, that support it. But over the next year or two or three years, this is going to, I believe, become the de facto standard.
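For comparison, here is a sketch of what a small WebVTT file might look like. The cue settings and inline tags shown are illustrative only; exact positioning and styling support varies by browser. Note that WebVTT uses periods rather than commas in its timestamps, and adds per-cue settings and markup that SRT lacks.

```
WEBVTT

1
00:00:04.000 --> 00:00:06.000 line:90% align:center
Thanks, everyone, for joining us.

2
00:00:06.000 --> 00:00:09.500
This session is titled
<i>Strategies for Deploying Accessible Video</i>.
```

Cues can also carry class tags that a page's CSS can target, which is where the richer styling Neil mentions comes from.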
And you can do cool things with it. You can even style your captions karaoke style, where it sort of paints over the words as the person’s saying them. There’s really a lot of things you can do.

How to Caption Educational Content
So people often ask me, where do you start when you have content that you need to caption?
I’m going to say that you really don’t want to spend money more than once. So I always suggest looking at the video that your content teams are creating now. The last thing you want to do is be retrofitting your older content while your content creators are pumping out new stuff that you’re then going to have to go back later and spend money on again to retrofit.
So the first thing you want to do is draw up guidelines for your content creators so they can distribute them to your internal teams, external vendors, whoever’s creating new content for you.
And then after you do that, you can start creating a plan for retrofitting your existing or legacy content.

How McGraw-Hill Education Prioritizes Captioning
So prioritizing. Obviously, you can’t just flip a switch and make all your content accessible. So you have to prioritize. What we’re doing right now is we’re currently prioritizing our most popular titles and products. That gives us the biggest bang for the buck.
And then we’re going to our newer titles. These are actually all at the same level; it’s not really first priority, second priority.
Our newer titles, these are the ones that are going to have the longest shelf life. And again, it’s easier to make something accessible when you’re first creating it, rather than going back later. While the content’s sort of fresh, everybody’s working with it. Everybody knows where it is. You should really only be spending that money once.
And then finally, requests. Any time we get a request from an instructor or a student who has a specific need, we'll always make that content accessible and deliver it directly to them, so that they can access it.

9 Steps to Make Educational Content Accessible
So let me just give you a quick nine steps toward making your content accessible.
Do we have enough time?
OK, so the first thing– and this is all based on what we did at McGraw-Hill, sort of stuff we did right and the stuff we did wrong. This is where we ended up.
So first thing, coordinate your efforts. Establish a steering committee and make sure all the different constituencies in your organization are represented. And have regular meetings.
Second, create tiers of compliance. You want to give your teams options, because you may have different funding options. So when you're meeting, you might want to decide: level one is light compliance in terms of accessibility, then medium compliance, heavy compliance, or full compliance, which would obviously be the most expensive. When it comes to meeting with management and you have to figure out what's going to get paid for, it's always a good idea to give them some options.
Third, don't reinvent the wheel. Look to industry standards bodies that have already created tiers of compliance, and try to model your tiers after theirs. It will make things much easier. It's also much easier to market: it's much easier to say, "We conform to WCAG 2.0 Level AA," and have everyone understand what that means, as opposed to having an entirely home-brewed collection of criteria.
For those of you who– well, I’ll talk a little bit more about that. Does everyone know what the WCAG is?
OK, so this is the Web Content Accessibility Guidelines, created by the W3C, which is the main standards body for the web. They're the ones that create the standards for HTML4, HTML5, and everything else everybody uses. And the Web Content Accessibility Guidelines, or WCAG (the latest version is referred to as WCAG 2.0), has the full explanation of the different levels of compliance.
And most organizations that are starting to think about this or work on this are moving more and more towards the WCAG. It’s obviously not mandatory. Nobody– I don’t believe yet– is legislating that this is the standard that you have to follow. But this is where everybody on the web is going right now.
Four, establish a list of a few approved vendors for each category. So I wrote here transcription, tagging, et cetera. Tagging is not really relevant to what we’re talking about now for video, but certainly in terms of transcription, find yourself a few vendors that you like.
3Play Media– excellent choice. We’ve been using them. And that will allow your teams to get competitive bids and have a few options.
Five, get your feet wet. Run some pilots. Mike talked a little bit about this. So once you get your vendors and once you get your plan in place, use those vendors. Pick a few pieces of content or topics and actually do it. It will give you a chance to figure out what's involved, and who needs to be involved, both in your organization and on the vendor side. How long is this going to take? How much is it going to cost? How complicated is it? The only way to figure this stuff out is really to do it yourself.
Six, confer with management and stakeholders, using the data collected from your pilots to lay out a roadmap and funding for your different levels of compliance, the one, two, three that we set up earlier. Kind of self-explanatory.
Seven, draft a roadmap and decide what your milestones are and when you're going to hit them.
Eight, draft and distribute guidelines to content creators. That’s the thing I talked about earlier. The first thing you want to do is get to the people who are creating new content. Do a little training with them if necessary.
And then finally, get instructions to your team for retrofitting.

3 Accessibility Tips for Educational Technologists
So I’ll give you three tips for technology implementers. I know we have a lot of technology people here. So just three things you should be thinking about.
Decide on open or closed captioning. Like I said, at MHE, I think this is more or less a no-brainer– everyone’s going closed caption. So not too much to think about there. But always good to know the difference between the two.
Look at HTML5 media players with Flash fallback. This is a huge thing now. So most video in the past was played using Flash Player. Still, a lot of it out there is in Flash. Everyone is now moving to HTML5. It’s mostly being driven by the fact that iOS, iPads and iPhones, don’t support Flash. They only support HTML5 video. So everyone’s getting on the HTML5 bandwagon.
There are now media players that make things so much easier: players that will try to play HTML5 video, and if the device doesn't support it, they will automatically play a Flash version instead. So you do have to have both versions of your video file on your server, but this makes it really easy in terms of trying for the optimal and then falling back to the sub-optimal.
And we've been playing around with JW Player, which is one of the more popular players. And there are a bunch of them out there; you can look them up.
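The HTML5-first, Flash-fallback pattern Neil describes can be sketched in markup along these lines. The file names and the player SWF are placeholders; the nested `object` element is rendered only by browsers that do not recognize the `video` element at all.

```html
<video controls width="640" height="360">
  <source src="lecture.mp4" type="video/mp4">
  <!-- Captions delivered as a separate timed-text track -->
  <track kind="captions" src="lecture.vtt" srclang="en" label="English">
  <!-- Browsers without HTML5 video ignore the tags above
       and render this Flash fallback instead -->
  <object type="application/x-shockwave-flash" data="player.swf"
          width="640" height="360">
    <param name="movie" value="player.swf">
    <param name="flashvars" value="file=lecture.mp4&amp;captions=lecture.srt">
  </object>
</video>
```

Players like JW Player generate this kind of structure for you, which is why they make the fallback dance so much easier.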
And then finally, figure out what captioning format you need. So I mentioned SRT; that's probably what you're going to go with now. It seems to be the de facto standard. But in the future, it looks very strongly like it's going to be VTT, or WebVTT. So just keep an eye on that and monitor progress.

Accessibility Resources
And then finally, I just wanted to put up a few links. The first one is the US government's Section 508 compliance site, at section508.gov. Great site. It has a ton of really great information.
In terms of web accessibility at other educational institutions, you can look at the second link specifically. It's a page on the 508 site that lists a dozen or two dozen different schools and other educational organizations that are pushing the envelope. You can see what they're doing, contact their teams, and collaborate a little bit.
And then finally, if you want to learn more about the WCAG 2.0 specification, there are a million documents out there on the web. A lot of them are really confusing because this is a specification. So it’s a very, very big, very, very technical document. Write down this link in particular. This page is probably the easiest one to understand of all the ones I saw.
And you can actually turn different parts of the document on and off in the spec, which makes it much easier to read.
And that’s all for me. Thank you very much.
TOLE KHESIN: Thanks, Neil. I know we’re a little over on time. Do we have a few minutes for questions? Or should we wrap up?
OK. So we’re out of time. But we’re happy to chat afterwards or outside. But I wanted to thank you all for joining us. I wanted to especially thank Mike and Neil for sharing their knowledge and their experiences. And yeah, thank you.