Pennsylvania State System of Higher Education (PASSHE) Virtual Conference [TRANSCRIPT]
JOSEPH ZISK: Hi, everybody. This is Joe Zisk. I’m the moderator of this session. And welcome to Transforming the Teaching and Learning Environment, the 2013 PASSHE Virtual Conference. The session today is one of the more than 60 hour-long sessions that we have, which will run until tomorrow, February 22.
For those of you who may be new to Blackboard Collaborate, please make sure to mute your mic. That means don’t push the Talk button until you’re ready to speak. And then when you’re done speaking, please turn it off. You can also use the text chat window at any time to add comments or to ask questions. You can also click on your Raise The Hand button to indicate that you would like to speak; you’ll be recognized in the order that you raise your hand.
As a reminder, all sessions are closed captioned. To turn on the captioning, just click on the Closed Caption icon above the video window. And you can do the same thing to turn the captioning off. In fact, today’s session, Video Captioning for Accessibility– Penn State Demos Its Solution– will be starting in just a few moments. And we have three presenters. I’ll start off by handing it off to Josh Miller. He’s from 3Play Media. And then he’ll introduce the other folks and begin the presentation. Josh, can you please begin?
JOSH MILLER: Great. Thanks, Joe. So my name is Josh Miller, and I am one of the founders of 3Play Media, where we focus on making video content more accessible through transcription and closed captioning. So I’m going to start off by going through a quick overview of what closed captions are and some of the relevant legislation. And then I’m going to turn it over to Keith Bailey from Penn State, who’s going to actually demonstrate the solution they’ve built around creating accessible media.
So what are closed captions? From the very beginning here, captioning refers to the process of taking an audio track, transcribing it into text, and then synchronizing that text with the media. Closed captions are typically located underneath a video or overlaid on top. In addition to spoken words, captions convey all meaningful audio, including sound effects.
And this is a key difference from subtitles; the two are often confused with each other. Closed captions originated in the early 1980s with an FCC mandate that applied to broadcast television. And now that online video is rapidly becoming the dominant medium, captioning laws and practices are proliferating there as well.
So some basic terminology– captioning versus transcription. A transcript is usually a text document without any time information. On the other hand, captions are time synchronized with the media. You can make captions from a transcript by breaking the text up into small segments called caption frames and then synchronizing them with the media, such that each caption frame is displayed at the right time.
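As an illustration, the framing step can be sketched in a few lines of Python. This is a toy sketch, not any vendor’s production code; it assumes word-level timings from the synchronization step are already available as (word, start, end) tuples, and it emits the widely used SRT caption format.

```python
def to_srt_time(seconds):
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def build_caption_frames(timed_words, max_chars=40):
    """Group (word, start, end) tuples into caption frames of limited width."""
    frames = []
    current = []        # words in the frame being built
    start = end = None  # time bounds of the current frame
    for word, w_start, w_end in timed_words:
        if current and len(" ".join(current + [word])) > max_chars:
            frames.append((start, end, " ".join(current)))
            current, start = [], None
        if start is None:
            start = w_start
        current.append(word)
        end = w_end
    if current:
        frames.append((start, end, " ".join(current)))
    return frames

def frames_to_srt(frames):
    """Render (start, end, text) caption frames as an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(frames, 1):
        blocks.append(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n")
    return "\n".join(blocks)
```

Each caption frame then appears exactly while its words are spoken; a larger `max_chars` simply yields fewer, wider frames.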
Captioning versus subtitling. The difference between captions and subtitles is that subtitles are intended for viewers who do not have a hearing impairment but may not understand the language in which the content is being presented. Subtitles capture the spoken content, but not necessarily the sound effects. So for web video, it’s possible to create multilingual subtitles and display them with your video.
The difference between closed captioning and open captioning is that closed captions can be turned on or off by the viewer, while open captions are burned into the video and cannot be turned off. Most web video allows for closed captions, which makes for a better viewer experience.
Post-production means that the captioning process occurs offline and usually takes a few days to complete, whereas real-time captioning– as you’re seeing here in this session– is done by live captioners. And there are certainly advantages and disadvantages to each process, depending on what it is you’re doing.

How are Captions Used?
So how are captions used? With online media, there are actually quite a few applications that go well beyond just the obvious hearing impairment requirements. One thing to handle is the fact that with web media, every web player handles captions differently. So we have a number of guides on our website– you can find them on the how it works page– that explain how to add captions to different types of media players.

Understanding Accessibility Laws
So quick overview of some of the accessibility laws that are in place now. Section 508 is a fairly broad law that requires all federal electronic and information technology to be accessible to people with disabilities, including employees and the public. For video, this means that captions must be added. For podcasts and audio files, a transcript is usually sufficient.
Section 504 entitles people with disabilities to equal access to any program or activity that receives federal subsidy. Web-based communications for educational institutions and government agencies are covered by this as well. Sections 504 and 508 are both from the Rehabilitation Act. Many states have also enacted legislation similar to Sections 504 and 508, oftentimes using even the exact language.
And then the 21st Century Communications and Video Accessibility Act– which is often referred to as the CVAA– was signed into law in October of 2010. This expands closed captioning requirements to online video that previously aired on television. And expanded legislation that will eventually move beyond just network television is certainly being discussed right now as well. Some of the milestones have already begun. So there are already requirements in place for network television providers to actually get captions on their video once it’s up online.

The Benefits of Captioning
So a little bit about the benefits of captioning. And with the internet being the internet, there really are more benefits than what you’d be used to with just traditional television. The most obvious is it makes video content accessible for deaf and hard of hearing people. This is really important and should not be overlooked.
The next thing is, as I was alluding to, there are a number of benefits for people who can hear as well. Specifically, captions improve comprehension and remove language barriers for people who know English as a second language. Captions also compensate for poor audio quality or a noisy background, or allow the media to be consumed in sound-sensitive environments like a workplace or a library. So if someone’s not supposed to put the sound on, they can still follow along.
Then there’s search engine optimization– certainly a big thing for a number of organizations– but also search in general. The idea of having synchronized text with your video really makes it possible to use the text as a navigation tool and actually go to a part of your video based on that timed text. And then certainly, once that video has been found, it allows the content to be reused and found more efficiently.
It’s a huge learning tool that way. For example, if you’re looking for something in a one-hour lecture, you can actually jump to the exact point in the video if the captions have been applied or published in a certain way. We also have a number of tools, which we call an interactive transcript, that allow you to use that text as the searching tool.
And then finally, transcription and captioning is a really important piece if you want to translate. So if you do want to reach a global audience, or an audience that speaks a different language, captioning is the first step. Adding subtitles is really enabled by having captions in place.
What we do at 3Play is really try to make this process easier. Our whole focus is high quality captions at a reasonable rate that can be done without recreating a huge workflow. And so we really try to make it as simple as upload, download, publish. Now a little bit about caption formats– and this gets into the fact that making it simple means being flexible.

Captions & Transcript Formats
So we actually produce about 20 different caption and transcript formats. And here’s a list of some of the common ones. The reality of web captioning is that each player does things a little bit differently, as I mentioned earlier. And so it’s important to make sure you’re getting the right caption format and applying it to your video as easily as possible. These are just some of the examples.

Simple & Flexible Captioning Workflow
Another part of making the process simple is integrating with existing platforms. So we have out of the box integrations with a number of different video players and platforms and lecture capture systems. So that once this linkage is created between your accounts– which literally will take a matter of minutes– it’s very easy to request a file to be captioned and then have the captions be sent automatically to the right place for publishing with very, very little effort. So again, it’s all about making this process easy, so you’re not relearning systems or even implementing new systems that take more than just a few minutes to get going.
The other thing– and you’re going to see this with Keith’s presentation– is that we have an API that can be used to build your own workflow. So again, really enabling people to do things in the most flexible way possible, so that captioning isn’t something that has to be thought through really carefully. It can just be done, so that everyone who needs captions can get them.

Video Captions Plugins
A couple things we’ve done is create embeddable plug-ins. This one is what we call the captions plug-in. So for video players that maybe don’t support captions very well, such as Vimeo, you can still add captions to that video. Or maybe there’s a YouTube video you want to show, but it’s not yours. With this, you can actually wrap the caption track around the video when you publish it on a page. And it’s done very, very easily.

Translation & Multilanguage Subtitles
One of the nice things about this captions plug-in that we built is that it supports search, as well as multiple languages if you do go through the process of creating subtitles. So in this case, you could actually use the caption text as search criteria. You can search by keyword and jump to a point in the video, all off of the captions.
Another service we actually recently launched is the ability to take an existing transcript and add the time codes to create closed captions. That way– say you have a process where a script already exists for a video– you could submit that script with your video to us. We’ll add the timing, create the closed captions, and you’re ready to go.
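A crude way to picture that timing step is to spread the script’s words evenly across the video’s duration. Real services align each word against the audio itself (forced alignment), so the even spread below is only a stand-in to show the data flow from plain script to timed text.

```python
def naive_align(script_words, duration_seconds):
    """Assign evenly spaced (word, start, end) timings to a script.

    A deliberately naive stand-in for forced alignment: a real aligner
    matches each word against the audio track rather than assuming a
    uniform speaking rate.
    """
    step = duration_seconds / len(script_words)
    return [(word, i * step, (i + 1) * step)
            for i, word in enumerate(script_words)]
```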
And then I mentioned translations. We’ve actually fully integrated a translation process with our system, with multiple pricing tiers, so that depending on the quality you require, you can match that up with your budget. Easily select any language you want, and then even edit the output afterwards. We have a whole editing interface for the subtitles, so if you want to make changes, you can do that on the fly.

Interactive Transcripts & The User Experience
As I mentioned, we do offer a number of interactive, embeddable widgets that will tie into a video player on a page. So that the text really comes alive and becomes a navigation tool as well as an accessibility tool. So you can now click on a word, jump to that part of the video, search within a single video by keyword, or even search across many videos at the same time and jump to the exact point based on your search results.
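The search-and-jump behavior is easy to picture once captions are time-stamped: finding a keyword reduces to scanning the caption text and returning the start times a player would seek to. Here is a toy sketch (not the actual plugin code), assuming caption frames are available as (start, end, text) tuples:

```python
def search_captions(frames, keyword):
    """Return (start_time, text) for every caption frame containing keyword.

    `frames` is a list of (start_seconds, end_seconds, text) tuples, such
    as ones parsed from a caption file. On a result click, a video player
    would simply seek to start_seconds.
    """
    kw = keyword.lower()
    return [(start, text) for start, _end, text in frames if kw in text.lower()]
```

Searching across many videos at once is the same scan applied to each video’s frame list, with results tagged by video ID.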
So with that, I’m actually going to turn it over to Keith Bailey. Keith is the assistant dean of online learning at the College of Arts and Architecture at Penn State University. And he’s going to walk through the captioning solution that they’ve built.

The Penn State Captioning Solution
KEITH BAILEY: Thank you, Josh. Thank you for that introduction and the background on captioning. I’m going to start off by giving you the context by which this all came about. So I’m going to give you a little bit of information about Penn State University– who we are, how we’re structured, and how online education has grown to a point where we needed to think about how transcription actually impacts us on a daily basis. So many of you may know this.

About Penn State
We are a 24-campus system with one virtual campus spread throughout the state. We are a land grant university, so we do fit under that federal regulation of making things accessible for students. It’s not only a good thing to do, but it is a requirement for us as well. So we have taken this on very passionately, and are doing what we can to streamline our efforts throughout our growth in online education.
Now a little bit about myself. I’m the assistant dean for online learning in the College of Arts and Architecture. I run a unit called the e-Learning Institute, which is focused on one of the 13 colleges at the main campus at University Park. And my oversight is in the disciplines that are shown on the screen here– architecture, art history, visual arts, music, theater. So we are a very visual culture and rely a lot on media. Transcription, as you can imagine, plays a very integral role in a lot of what we produce and how we deliver our materials in an online format.

Penn State Delivery Options for Online Courses
So just to look at the delivery mechanisms that we have at the university, given the reach out past any single campus, we have a couple different ways of delivering our online courses. You’re probably aware of the World Campus, which is actually our 25th campus. If a student is not enrolled at any one of the 24 physical campuses and is taking courses online, they are part of the World Campus.
And then we have something called the e-Learning Cooperative, which allows us to deliver any one of our courses to any one of the campuses, as long as they are fully online. And there’s a mechanism to do that and allow students to enroll in those courses. And then of course, each of the campuses can independently offer its own online courses.
Our portfolio– just to start to demonstrate the need again– you can see what we produce in the way of online education. We have 55 courses online, covering each one of our disciplines. We are very heavy in the general education, general arts area, which is where a lot of our needs for accessibility fit in. Every student at the university has to take six credits of general education, general arts, in order to graduate.
Thus, our portfolio is 31 courses in the general arts, general education arena. We deliver these courses at University Park, through the World Campus, and through the e-Learning Cooperative. So we reuse them, and that is part of our strategic mission in online education here. We also have a Master of Professional Studies in Art Education.
We have a Digital Arts Certificate that is delivered fully through the World Campus, as is the MPS in Art Ed. We have music education, where we just have two online courses that are offered at University Park. And then we have a new degree coming out– a wholly online degree in geodesign, which is going to be housed out of landscape architecture. So this is our portfolio. It doesn’t represent the university, but it represents what we have to deal with– what I have to deal with on a daily basis as part of the College of Arts and Architecture.

e-Learning Enrollments at Penn State
So then we can look at the increasing need for accommodation in online education. If we look at the university as a whole, in the 2011/12 fall and spring semesters, we had over 524,000 enrollments, either at the World Campus– which is the WD designation– or at University Park. So we have an awful lot of enrollments that we manage.
9% of those enrollments were actually what we consider a web course– a fully online course offered at the World Campus or University Park. So roughly 10% of our enrollments come from purely online courses. Of those enrollments, you can see the breakdown is about 50-50 between what we offer at the World Campus and what we offer at University Park.
Our portfolio here within the college– we have 12,000 enrollments just within our own college every academic year in an online environment. So students are taking these courses from their dorm rooms; they’re taking them from wherever they are. And we can see that this trend is going to continue to go up. Our students are expecting it, and we are helping to meet those demands by creating more online education for them.

Fulfilling University Accommodation Requests
So then in looking at accommodation requests, we have ODS, which is our Office of Disability Services. In the 2011/12 academic year, you can see we had 1,140 registered needs through the Office of Disability Services. So our accommodations are pretty high, and we need to react pretty quickly to them once they come about. So the idea of streamlining a lot of it and developing our online portfolios so that they accommodate these needs up front is of critical importance to us. And that’s part of our role here within the college, to support that for our faculty internally.
In that academic year, we had six students at the university who needed either captioning or interpretation done in the classroom. That went up from four the previous year, and that number continues to rise. And I can tell you that just this semester, we’ve had three needs for captioning in three different courses. And we were given about two to three days to be able to accommodate those requests.
In addition, we have five visually disabled students, some of whom have varying levels of blindness, if you will. Some of them are completely blind. Others have some form of vision, and require larger image sizes and larger text formats, or captioning, in order for them to be able to view materials in a timely fashion.
So then if we look at the accommodations at the World Campus, you can start to look at the numbers from fall of ’11 to spring of 2012– the number of students that had accommodation requests come through, and the number of unique courses that actually affects. So in spring of ’12, we had 37 students enrolled at the World Campus who had requests for some form of need in any of their courses. But that also impacted 82 courses.
So it’s not as simple as a single student coming in and needing a course. We understand that once that student matriculates into the system, we need to figure out how to accommodate them as they move through their portfolio of courses toward graduation. So it is important for us to keep on top of this and understand proactively where the student sits and what they’re going to need as they move forward. Being proactive is a very important piece of this.
We had three blind students enrolled in the World Campus– six courses being taken. And we know that if an undergraduate student comes in with a need for visual accommodation, we are going to have to make that accommodation for at least two of our courses. So the question is which two, and how are we going to manage that?
Deaf and hard of hearing as well– nine students were enrolled in 27 unique courses.
So our accommodation process– as I mentioned, we have the Office of Disability Services. There’s a very defined process, which is very helpful to all of us: the student has to disclose to the Office of Disability Services. We’ve had students actually come in and claim that they need an accommodation without ever talking to ODS. And we have to refer them back to ODS, so that ODS can figure out what an appropriate accommodation is.
So the Office of Disability Services basically works with that student, documents the need, identifies it, and then communicates that to the faculty. So my learning design staff never really knows who the student is, unless that’s part of the accommodation. But they are told specifically what needs to be accommodated. And then they have to go in and start to make those accommodations based on those needs.
So the learning designer plays a very critical role in how they implement and coordinate all of this. It seems like a very simple, smooth process, but once the accommodations come in, it becomes very different. Because you make reasonable accommodations, and sometimes that becomes a negotiation with the Office of Disability Services as to how to accommodate that need.

Penn State Learning Design Accessibility Initiative
So now let’s look at Penn State’s learning design and how we have structured the way we deliver our content to help with this. And some of our processes and policies that are in place, and how they impact our need to accommodate. So first and foremost, the university has actually established these 12 quality assurance design standards for all courses. And one of those– actually point number seven– is accessibility.
So we realize the importance of it, and we feel that we need to keep on top of it. As we’re designing courses and matching them to these learning design standards, we realize that accessibility is one of them. So as we’re picking media, as we’re creating learning design strategies– or whatever we’re including in the course experience– we have to keep in the back of our minds what a reasonable accommodation is for all the different types of disabilities out there.
So that becomes very difficult. It’s something that we have to explain to our faculty and work through with them. And it’s become much smoother over time as scenarios have come about.
In addition, we’ve put together a task force. It’s a quality initiative looking at accessibility specific to online courseware. Really, what we’re trying to do is develop a process by which our instructional designers and our faculty can identify best practices for implementing online courseware– looking at this and proactively documenting how accessible our courses are before the courses actually start. And we’re always striving to be better. So we always want to tackle the things that we can tackle up front and make the accommodations that are needed as they come up.
We also have a university policy on web accessibility, called AD-69. And as Josh was saying, there’s Section 508 compliance. The university used to follow 508, but we’ve recently moved to the WCAG, which is a different accessibility standard. And from what we’re hearing, 508 is actually going to move in the direction of the WCAG at some point.
So we’ve moved to a compliance level of AA– there are three levels to it. We are not exactly sure of the rationale behind why they chose AA, but it is a very grueling process, and I forget how many different standards there are. We have to figure out how to make those accommodations. And the university policy says pages less than two years old must comply. So any courses that we’re creating now need to comply with this standard, and we need to train faculty to be able to do that.
Older pages must be made accessible by a determined date, or by request on accommodation. So that’s the reactive approach, if you will, but our unit is taking a very active role in trying to accommodate beforehand. Because once an accommodation request comes about, there’s a large scramble to make that accommodation happen, depending on the need.
We’ve also identified some key blockers. I won’t go through the list of them, but you will notice that one of the key blockers to be fixed– that we are saying has to be put into place for all courses– is video captioning and transcription. So that became a critical piece of all this as we started to move forward.

Penn State e-Learning Institute’s Learning Design Approach
So then let’s look at our learning design approach and how we have used it to not only enhance the quality of our courseware, but to help with these accommodations. I’m going to take you through a little bit about our approach and how we’ve used the design to help influence accommodations up front and make things more accessible for our students– through screen readers and/or transcription and captioning.

Content Separate from Communication
So one of the approaches that we’ve taken– we like to call what we do content separate from communication. We’re keeping our content, our course material, outside of ELMS. And this provides us with a lot of different opportunities. As you can see on this screen right now, we have a visual that goes along with each of these courses, which becomes very important, especially in a college like Arts and Architecture.

Content Separate from Design
But then we use the communication tools to do what they do really well. And that’s the management of your class roster, your drop boxes, your grading, and things like that. So then if we take that one step further, we manage things in a content management system. We can keep our content separate from the interface, create the interface in a certain way, and then mash them together to publish our content.

Media Separate from Content
And then we take that one step further. Not only is the content and the design separated from one another, but we keep our media separate as well. And as I walk through the process, you will see why it is so important for us to keep our media outside of the content– so we can make quick accessibility changes and have them replicate across the board as we implement media.

Penn State Accessibility Solution
So our solution here really involves three different pieces. ELMS is the E-Learning Management System– that’s what the acronym stands for. And we’ve developed a content management approach, a media management approach, [INAUDIBLE] and then an online studio to support the art side of things.

ELMS: Content
If we look at the content management, it’s an open content management system based off of Drupal. We’ve used community modules and developed our own modules, pulling it all together to manage all of our course content. This is all of the content that sits outside of ELMS. We also develop our own visuals in the form of themes, and/or we can use public themes.

Accessible Design
And one of the powerful things about that is we can develop these themes to be accessible. So when we look at our theme, we not only develop it to be visually appealing, but we develop it to be accessible up front, so that a screen reader can move through it and get you to the content. We can do skip links. We can jump around as needed. But we can ensure it, so a faculty member doesn’t have to manage that aspect of the visual look.

Accessible Content
In addition, the content editors– we have the ability to either add or take away controls based on those needs and those style sheets to make the content more accessible for the student as well. And screen readers can go through this very easily. And we can modify and install modules to make this available to our students.
But the big debate always is how much power you take away from faculty by limiting features, versus making sure that things are accommodated up front. And we get into the debate all the time about whether design is an academic freedom, or whether accessibility trumps the need for design freedom. We could debate that for hours upon hours, but we won’t do that today.
This is an open distribution. We have a very open philosophy, so everything we develop is open and given back to the community. It’s an ongoing development effort. So you can actually download the distribution off of drupal.org.

Assessing Video Management
So now we get into the media. And this is where the video, the audio, and all of those assets are managed. And we built this basically to eliminate the duplication of media and allow for reuse across curriculum. We know that as we’re offering this across the World Campus, through the campuses, and then around campus, the same assets are being used. And there’s no need to duplicate, especially when we’re talking about video.
We wanted to simplify the workflow of copyright and transcription– the more critical things among our 12 quality design standards. And we needed to make sure that we were compliant with copyright requirements. Whether we’re using the TEACH Act or Fair Use– however we’re using material, we need to make sure we’re complying. And this ELIMedia system helps us manage that.
The system itself is built off of a Flash Media Server with a Drupal front end. It manages the copyright of all of our digital assets. You can put up images, video, audio files, PDFs, Word documents– we can put basically anything up there. It also manages our transcription and our caption files associated with each of these assets.

Embedding Media into University Courses
So now that I’ve given you the background, let’s look at how we go about embedding one of these pieces of media from the media system into one of the courses. I have a bunch of screenshots here to demonstrate how that works. So if we look at this– sorry, this is the ELIMedia system, not the content management system.
This is a piece of video lecture that was produced and captured already. You can see the embed code down at the bottom, and all a learning designer or a faculty member has to do is copy that embed code, paste it into the WYSIWYG editor, and hit Save. It publishes and drops the media right in. If there’s a transcription file or a caption file with it, it will automatically appear within the system and overlay on top of the video. If one does not exist yet, and we go produce the transcript or the caption file and bring it back in, it will automatically appear through that same embed code. So that’s part of what we have built into the system.

Benefits of ELIMedia
The benefits to us are that we feel we are much more in compliance with the TEACH Act, Fair Use, and Creative Commons. We can quantify all our digital assets. We are much more accessible. We’re in a position where we can now caption all of our media and know reliably that students have access to the materials.
We’ve removed the need for learning designers and faculty to manage the whole process– actually having to go upload the material and overlay the closed captioning on top of the video became a very daunting process for many individuals. We also allow for tagging, so we can easily retrieve assets. In the media system, we can see exactly where an asset is being used and in what courses. So we know what the impact is if we change something or move something around.
Again, this is an open distribution, giving back to the community and higher education at large. So feel free to have a look at it and check it out.

Penn State’s Transcription Process: The Evolution

Manual Transcription Method
So I’m going to go quickly over the transcription process– the evolution of what got us to where we are today. Back in 2006, we had this very manual, cumbersome method. We had a lecture that a faculty member created– 39 Flash files– and we had a student with an auditory disability who needed all of it transcribed.
Well, we had to burn all of those files onto a disc and physically mail them to a company. It was 26 hours of lecture. It took two weeks for this to come back, and it came back as open captions– not even closed captions. So everybody had the captions, and there was no way of toggling them.
So we looked at that and said, this works, but it was very costly, the turnaround was way too long, and it wasn’t a very effective approach for us. But we made the accommodation, and we did what we needed to do at that point in time. So then we said, we want to find a better solution. How are we going to do that?
We needed something accurate and reliable, with quick turnaround, that was quite affordable. We needed to handle volume quickly. We needed options for multiple formats, so we could do transcription, closed captioning, or open captioning– we had the power to choose what we wanted to do.
We also wanted to integrate with ELIMedia. We’re already managing our assets there– how do we wrap this into it? So [INAUDIBLE] one day, our instructional technologist, our manager of instructional design, and myself were at a conference, and we bumped into Josh and Tole of 3Play Media. And we started talking about how we could do this and make it more efficient. So we’ve been working down that path for the last several years, trying to figure out how to make this better and better for us. And I believe it’s a win-win for both of us at this point. I could give you many scenarios just within the last semester of how this has saved us.

Partially Automated Transcription Process
So going back to our evolution of where we are, the second phase was our partially automated system. We took our core of ELIMedia, where all of our media assets are managed. Now how do we manage the captions and associate them?
So basically, the process was: we would take the video and upload it to ELIMedia. We would then upload it, in addition, to 3Play Media to get the file turned around. We would watch and check to see when it was completed. It was typically a two to three day turnaround, which was great– we could rely on going back to the site in two to three days. We would then have to go download it, pick our export format, and upload it back into the media system.
And again, this was a daunting process, but it saved us an awful lot of time as well. Going back to that embed code: the caption file is automatically layered over top of the video, so for any video that existed, once we uploaded the caption file, it automatically went with that video. And we could still reliably see that that media had been transcribed or captioned at that point.
At that point in time, though, we didn’t do anything with the transcription file. We didn’t download it. We didn’t upload it. We didn’t make it available. Downloading two things versus one, uploading things– it just wasn’t what we needed to do at the time. So of course we wanted to improve that. So we fully automated the process. And here’s how we do that.

Fully Automated Transcription Process
So going back to the APIs and all of the wonderful tool sets that have come along with the 3Play Media system, we utilized those APIs to automate the whole process from top to bottom. So now our media specialist uploads a piece of media into the ELIMedia system. They choose the course that it’s associated with.
They can now flag that it needs transcription. That triggers an event for approval to our instructional designer or instructional designers: media is approved for transcription. Learning designer, are you OK with this? Is the faculty member ready? So we don’t have to reproduce work and waste a lot of time.

Video Transcription through an API
Once these approvals have occurred, we can submit it– hit Execute to send it to 3Play Media’s system for transcription and captioning. Once that happens, it automatically gets submitted. A cron job runs every night; it looks at the system, sees which files have been queued for submission, and pushes them all up into 3Play Media’s queue.
We don’t have to see this screen, but it shows that after we have submitted everything, it is all in progress. Nightly, then, our system checks 3Play Media’s system to see if any of these are complete. If they are, they get downloaded immediately and associated directly with that file. So we’ve eliminated the whole manual process. What we have done is streamline the approval process, so that we ensure we are captioning what we need to caption, and that it’s done in a more efficient way.
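The nightly loop Keith describes (push up approved files, pull back any finished captions) can be sketched roughly as follows. This is a minimal illustration, not 3Play Media’s actual API: the vendor client class, its method names, and the data shapes are all assumptions made for the sketch.

```python
# Hypothetical sketch of the nightly caption-sync cron job.
# FakeVendorClient stands in for the captioning vendor's API; the real
# 3Play Media API has different methods and endpoints.

class FakeVendorClient:
    """Stand-in vendor API: submit jobs, poll for completion, download captions."""
    def __init__(self):
        self.jobs = {}        # job_id -> state ("in_progress" or "complete")
        self._next_id = 1

    def submit(self, media_url):
        job_id = self._next_id
        self._next_id += 1
        self.jobs[job_id] = "in_progress"
        return job_id

    def completed_jobs(self):
        return [j for j, state in self.jobs.items() if state == "complete"]

    def download_captions(self, job_id):
        # A real client would return the caption file (e.g. SRT) for the job.
        return "1\n00:00:00,000 --> 00:00:02,500\nHello, everybody.\n"

def nightly_sync(queued_media, pending, captions, client):
    """One cron run: push queued media up, pull finished captions down.

    queued_media: list of (media_id, url) flagged 'approved for transcription'
    pending:      dict media_id -> vendor job_id, still awaiting captions
    captions:     dict media_id -> caption text, picked up by the embed code
    """
    # Step 1: submit every approved file to the vendor's queue.
    for media_id, url in queued_media:
        pending[media_id] = client.submit(url)
    queued_media.clear()
    # Step 2: harvest any jobs the vendor finished since the last run.
    for media_id, job_id in list(pending.items()):
        if job_id in client.completed_jobs():
            captions[media_id] = client.download_captions(job_id)
            del pending[media_id]
```

Run once and the approved file moves into the pending queue; run again after the vendor finishes and the caption lands next to its media record, with no human in the loop.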
Just this semester, one of our history courses had a need for captioning. The course had not been captioned– I forget how many files. We had to upload several hundred files, and I think we spent about $6,000 on all the lecture files that were needed. But we were able to turn it all around in less than a week. So the value added was incredible.

3Play Media’s Proven Results
Proven results. Thus far, everything we’ve gone through and looked at has been 100% reliable. We selected 10 of our more difficult video files to demonstrate that. And you can see we’ve had well over 1,000 videos that we’ve pushed out. We’ve had 571 of those files transcribed at the three day turnaround, at a cost of about $13,000. Some questioned me on that cost: boy, isn’t that expensive?
But that was over a two year period of time. And if I were to hire a graduate student– which many people have said is what we should do– it would cost more than that just in assistantship funds, with less reliability, longer turnaround time, and so forth.

Additional Benefits to Penn State Students
The other thing is that we’ve been able to associate not only caption files but also transcripts, which makes the material more meaningful for English as a second language students. That is probably one of the biggest areas of benefit: we have seen students come back and say, boy, it was really helpful to have that transcript or those caption files.
We project a savings of about 15 minutes per file on a 12 hour course. So we have seen a huge benefit in how we have managed it.
So I am going to stop at this point and I’m going to turn it back over to Josh, so he can demonstrate some of their tools– some of the cooler features that they’re doing.
JOSH MILLER: Great. Thanks, Keith. I’ll just wait for my screen to pop back up.
So I’m going to quickly walk through a few implementations, talk a little bit about some of those automated workflows I mentioned before, and show you what some of these more interactive, searchable libraries might look like.

Captions for Lecture Capture: Echo360
So this is just an example with Echo360. We have a number of integrations with the lecture capture systems that are commonly used. Basically, once you set up that integration, you can very quickly request any Echo recording to be captioned. The way it works is the media file gets sent to us. We transcribe it, create the captions, and post them back automatically for you. So as soon as it’s ready, it shows up with the presentation.

Captions for Lecture Capture: Mediasite
Same idea here with Mediasite. This is an example. You see the captions on the left there, right below the video, and you can turn them on and off. The concept is almost exactly the same. [INAUDIBLE] So you have the option from your Mediasite interface, just like with Echo from the Echo interface, to request captions on the fly.

Captions for Lecture Capture: Tegrity
And then finally, Tegrity– again, same idea here. You can see the Tegrity captions are just slightly higher in this case, right below the video, combined there in that window. So again, very similar idea: one-time setup, then request whenever needed.

Interactive Transcripts with Video Archive Search
Getting into some of the more interactive tools that we have, this is what we’ve done recently with MIT OpenCourseWare, which incorporates both the concept of the interactive transcript and what we call archive search. So you can actually search across an entire course load of videos and lectures by keyword, find which lectures contain that keyword and exactly where, and then play from that point.
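The archive search described here rests on word-level time synchronization: if each transcript is stored as (time, word) pairs, a keyword lookup can report both which lectures contain the word and the exact playback offsets to jump to. A minimal sketch, assuming that data layout (3Play Media’s actual output formats differ):

```python
# Minimal sketch of keyword search across a library of timed transcripts.
# Each transcript is assumed to be a list of (seconds, word) pairs.

def search_archive(library, keyword):
    """Return {lecture_title: [playback offsets in seconds]} for every hit."""
    keyword = keyword.lower().strip(".,?!")
    hits = {}
    for title, transcript in library.items():
        offsets = [t for t, word in transcript
                   if word.lower().strip(".,?!") == keyword]
        if offsets:
            hits[title] = offsets
    return hits

# Toy library with two short "lectures".
library = {
    "Lecture 1": [(0.0, "Welcome"), (0.6, "to"), (0.9, "thermodynamics.")],
    "Lecture 2": [(0.0, "Recall"), (0.5, "that"), (0.8, "entropy"),
                  (1.3, "never"), (1.7, "decreases.")],
}
```

A player front end would then seek the video to each returned offset, which is exactly the "click and play from that point" behavior described above.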
So you can see the professor in the video is speaking. In this case, it’s a YouTube video wrapped with a JW Player, and this will work with either one or both combined. You can see the word he’s speaking at the time is actually highlighted. There’s a search window there on the left– here, I’ll scroll in– so you can search within this video. And then on the right, if you want to search across the entire course load, you can search in here and the results will be displayed.

Interactive, Searchable Video Libraries
This is another example, the MIT Infinite History Project. There are about 200 hours of content here from a number of MIT luminaries. This is actually a more visual example of what happens when you search across all the interviews. You can see they’re all pretty long interviews, and it will actually highlight where that word appears. So you can see where in the timeline your results are, and then you can click and jump to that part of the video. And here’s that interactive transcript, and that will switch as well.
One thing I should note about the way we do things: this is basically an included output option with our services. So there’s no additional cost to implement this. It’s all plug and play, and the formats are all included with any captioning we do.

Customizable Transcript Interface
This is an example of what Al Jazeera recently did with one of the Presidential debates. What you see here is actually an interface that they’ve customized using our time synchronized transcripts. So it’s the same concept of that interactive transcript. It’s just styled a little bit differently. And so they’ve actually customized all the styling of the transcripts. So each word, again, will be highlighted as it’s spoken.
The other part that they’ve done is pretty cool. They’ve customized the entire page around this idea of timed text. So in this search here, you can search for a word. It’ll tell you who said it using the speaker labels, and how often it was said. And then here it even shows you the timeline down below of who said it based on segments of video, and how many times within a certain segment it was mentioned. So you can see there’s a lot of talk about the Middle East right in here.
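That kind of analysis falls out naturally once the transcript carries speaker labels alongside word-level timestamps. A minimal sketch, assuming a (time, speaker, word) layout invented for illustration rather than any actual 3Play Media format:

```python
# Sketch of the timed-text analytics behind the debate page: who said a
# word, how often, and where in the timeline. Data layout is assumed.
from collections import Counter

def keyword_stats(transcript, keyword, segment_len=60.0):
    """Return (mention counts per speaker, mention counts per timeline segment)."""
    keyword = keyword.lower().strip(".,?!")
    by_speaker, by_segment = Counter(), Counter()
    for t, speaker, word in transcript:
        if word.lower().strip(".,?!") == keyword:
            by_speaker[speaker] += 1
            by_segment[int(t // segment_len)] += 1  # which minute of the video
    return by_speaker, by_segment

# Toy speaker-labeled transcript fragment.
debate = [
    (12.0, "OBAMA", "the"), (12.3, "OBAMA", "Middle"), (12.6, "OBAMA", "East"),
    (75.0, "ROMNEY", "Middle"), (75.4, "ROMNEY", "East,"),
    (80.1, "ROMNEY", "Middle"), (80.5, "ROMNEY", "East."),
]
```

The per-segment counts are what drive a timeline heat map like the one described: each bucket becomes a bar showing how often the term was mentioned in that stretch of video.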
So this is just an example to show how flexible that output is. You can use it however you like to create these interactive, searchable, accessible experiences once you’ve created timed text.
So I wanted to put this up. We’ll answer some questions. Certainly feel free to be in touch with us. And Joe, I’ll turn it back over to you.

Webinar Q & A
JOSEPH ZISK: Thank you. That was a great presentation. I certainly learned a lot. Do we have any questions? You can either put them in the chat window, or you can just start speaking by clicking on the talk button.

Does 3Play Media offer captions for live broadcasts?
JOSH MILLER: So there is one question about live media. I just wanted to quickly address– everything we do right now is based on recorded content. So we don’t actually do any live captioning ourselves. It’s something that we’ve been talking with a couple of organizations about possibly trying to offer in the future and then incorporating into our workflow. But we don’t really have a timeline for that right now. So everything you’re seeing that we’re showing you in terms of workflows and interactive publication and captioning is all based on recorded content.
JOSEPH ZISK: So it does seem like, through this integration between 3Play Media and the videos being produced at Penn State, it really saves a lot of time in getting the transcripts put up there. And I like the fact that you can search through the transcripts to find keywords and click on that part to start the video from there. So I think that would be very useful for all students.

What are the benefits of transcription for students?
KEITH BAILEY: Yeah, as I mentioned, we’ve found a huge benefit well beyond the ability for people with auditory disabilities to use it. English as a second language is probably the biggest benefit out there. And the opportunity to not only have the captioning on top of the video, but also download a transcript, take notes on it, and then watch the video again, over and over, is a huge benefit.
And then we do run into scenarios where we have to accommodate someone but legally can’t transcribe it for everybody. Say it’s a piece of digital material that we’re using under Fair Use– a clip from, let’s say, Star Wars. If we transcribe that and make the transcript widely available, that is an absolute no-no, because we’ve created a transformative work in the form of a script made broadly available to everyone.
So the accommodation, as it comes up, is that the student with the disability has access to the transcript, and everybody else has the ability to see the captioning. This type of system allows us the flexibility to build that in and create that accommodation as needed.

How easily can an instructor embed a video and add captions?
JOSEPH ZISK: And also, once the closed captioned video is made and that code is generated, does the instructor get that code? Or is that being done by the instructional designer? Or how easy is it for the instructor to get that code?
KEITH BAILEY: So the embed code itself that’s in the system, that’s auto-generated when the image is uploaded into the system. Then there’s just an expanding field where you open that up. And you click on that embed code. You just copy it and then paste it over into the content side. Now, you can’t just paste that anywhere. You can’t go out to an open space and paste it in and hope that it moves through. It is a protected conduit between the two systems right now, so we can maintain that privacy and the copyright compliance, if you will.
But once that embed code is there– right now our instructional designers do it; the faculty don’t really see it that often, but they could– everything else happens along with it. We also theme it, so we create styles that bring the copyright information along with the image and display it appropriately to stay compliant. And then it will bring the transcription, once it’s done, along with that code as well.

How does the Penn State faculty collaborate for e-Learning?
JOSEPH ZISK: I just wanted to see how that was done. I guess at Penn State, the instructors can’t really change a course design too much without it being collaborated with the instructional designer and maybe other people on the team? Is that how it works at Penn State?
KEITH BAILEY: I won’t say broadly at Penn State– that’s how we are doing it within our own college right now. There are other solutions that people are using; normally they don’t have the automated transcription pieces with them. But we do have a fair amount of faculty who will put things out on YouTube or Vimeo and need to transcribe them. And then they need some of those other services– those overlay services, if you will.
But it varies unit to unit, based on who has access and who wants to be able to modify things on their own. In our case, we have our instructional designers do it for the faculty member. But we could also open up access when a faculty member wants to do it themselves.
JOSEPH ZISK: Very good. Thank you. Are there any other questions before we come to a close? Their contact information is up on the screen. I may be changing screens in a few seconds, so jot something down if you need to. Of course, this is all archived, so you can always come back to it.
I’d like to thank our sponsors for helping make this conference possible: CourseSmart, Mediasite, and Blackboard Collaborate. And here we go– here’s the slide.
They will be having some sessions throughout the conference. I know we only have one more day, but you might want to check the schedule for dates and times. I’d like to thank everyone for attending this session. And again, I want to thank all of the presenters– you did a great job.
KEITH BAILEY: Thank you very much.
JOSEPH ZISK: And I just wanted to say, when you do leave this session, please make sure you exit out. All right, folks. Thank you all for attending, and hope to see you at the next session.
KEITH BAILEY: Great. Thank you.