Penn State Demonstrates Its Award-Winning Captioning Workflow

Introduction

TOLE KHESIN: Welcome. And thanks again for attending this webinar on Penn State’s demonstration of its award-winning closed captioning workflow. Before we begin we’d like to do a quick sound check. If you wouldn’t mind raising your hand if you can hear me OK. Very good.

So my name is Tole Khesin and I’m one of the principals of 3Play Media. Also, I would like to introduce the other presenters.

Dr. Keith Bailey is the Assistant Dean of Online Learning and Education Technology at the College of Arts and Architecture at Penn State University. Keith has been involved in workforce training, instructional technology, and distance education for over 15 years. One of his areas of focus is to enhance learning through the integration of technology-based solutions.

Bryan Ollendyke is the Lead Developer and Information Architect at Penn State University’s College of Arts and Architecture. Bryan focuses on the e-Learning management system and the Drupal community at Penn State.

And then Josh Miller is one of the founders of 3Play Media.

This webinar will last about an hour. Keith will begin with an overview of ELIMedia, which is the e-Learning Institute’s media asset management system. Next, Bryan Ollendyke will do a live demonstration of the system. Finally, Josh will talk about the closed captioning workflow and 3Play Media’s involvement. That should take us about half an hour and we’ll save the rest of the time for your questions.

The best way to ask questions is by typing them in the questions window in the bottom right corner of your control panel. Please feel free to type your questions any time during the presentation. We’ll keep track of them and address as many as possible at the end.

Also please note that this webinar will not have live captions. However, it is being recorded, and everyone will receive an email within 48 hours with a link to view the recorded version, which will have closed captions as well as a searchable, interactive transcript. Also, if you want to follow along, the hashtag for this webinar is #PSU3Play. That’s PSU, the number three, and then play. Now I’ll hand things off to Keith Bailey from Penn State.

Presentation by Dr. Keith Bailey from Penn State University

KEITH BAILEY: OK. Well, thank you for that. Please let me know if I need to turn that up any more. But we are here today really to talk about the innovative strategy that we put together to manage digital assets in online courseware.

So to start, to give you a sense of who we are: as an e-Learning institute within the College of Arts and Architecture, our primary goal is to support the design and development of e-learning courses. The college is actually made up of seven disciplines, and our primary role is to help support the design, development, and implementation of those courses for the college.

Also, one of the primary goals is to stimulate instructional technology innovation within the college itself. Given the arts discipline, there are a lot of needs that seem to be fairly unique to us. We’ve taken that challenge head on and have developed some innovative tools to help facilitate teaching and learning within the college.

Now, to give you a sense of where we stand as a college, currently we have about 39 online courses. And you can see the various disciplines listed here: the School of Visual Arts (including fine art), landscape architecture, music, theater, art history, integrative arts, and architecture.

We have approximately thirty-some faculty who have helped develop and/or teach courses. And our portfolio includes well over 12,000 enrollments annually across the college and across the University. We do serve a general education/general arts population within the college, and that accounts for a large majority of the enrollments at this point in time.

So when we came about with the technology, one of the key goals we found is that we really need to establish an infrastructure that is driven by an instructional philosophy. We found that it is very important to have this instructional philosophy and then tie the infrastructure to it, and not just throw technology at what we would perceive as a problem. So really the goal is to empower faculty and designers by providing a stable, reliable, scalable, reusable technical infrastructure that ultimately improves course design, delivery, and the reuse of those materials.

One of the primary goals was to keep our course content separate from the course communication– maintaining the content in a content management system, letting content management systems do what they do very well: manage the content, the delivery, and the design of the courses. And then also letting the communication platforms– more the LMSs of the world– do what they do well. So we’ve done that, and that’s been part of our philosophy: to really keep the content separate from the communication.

The second goal, then, was to separate the content from the design itself. Given the fact that we are in the arts college, there was a very strong need to keep the visual design of our materials separate from the content, and to have unique looks for each of our course materials. As you can see here, this is one of our courses– a film music course. You can kind of see the theme that we put into place. We then layered the content into that using the content management system, and what the student sees is actually down at the bottom of the screen there.

So then, third, the goal of keeping the media separate from the content. Again, a very important aspect of this, given the fact that we use public pieces of media– things on YouTube, things on Flickr, and other sources like Vimeo. We also use and embed a lot of purchased video. Since we are an arts college, we are very visually based and we have a lot of media needs. So we follow each one of these three areas: public, purchased, and then private.

The private side is what we’re actually going to be talking about today. And that helps us finalize goal three of keeping media separate from content, allowing us to store that media in a private location that then gets embedded into the content, which gets wrapped into the course content interface.

So if we start to look at some of the needs that came about as a result of this philosophy, we wanted to eliminate the duplication of media assets as courses are duplicated. We offer 39 courses and many sections of those courses. And with current content management, the way we would duplicate is to duplicate an entire course– and all of that media would go right along with it. We would end up duplicating media, and it would become an asset management nightmare, really. So we wanted to eliminate the need for duplication. Store once, use many.

Being able to allow for the reuse of these digital assets across courses: many of the courses can utilize the same pieces of media. Say there are film clips from one movie that can be used across different courses.

Then, also, to provide a mechanism to embed the media into the courses with minimal effort. We have learning designers and faculty members developing the courses, and it became very cumbersome for them to embed the media while also avoiding duplication.

In addition, simplifying the workflow of copyright and transcription. Accessibility and compliance in these areas became very critical to us. We wanted to make sure that we were compliant with copyright laws, and also that we could easily embed transcription code along with any audio and video file as we developed it. So ensuring that online courses meet copyright and compliance requirements for media became a critical management need.

And then, finally, to facilitate communication around media production. We have a media development staff, an instructional design staff, and a technologist, and we needed to communicate the needs of these media pieces across all of these groups– and make sure things are getting done efficiently and effectively, and that things aren’t being missed. So we wanted the system to be able to accomplish that as well.

So the solution was really to build what we’re calling the ELIMedia Server. And this is really just an asset management system that is used to store images, video, audio files, Flash files– basically anything that we would want to embed in a course that is not content.

We also added in the feature of being able to manage copyright. So as things are uploaded, you have to put in the copyright information and classify it per the terms of use of that piece of media. And that then gets embedded directly into the courses. There’s also a quick and easy mechanism for associating transcript files with the appropriate media assets, so the system will help merge those two things together.
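Conceptually, the required-fields rule Keith describes maps to a small data model. Here is a minimal sketch in Python; the class and field names are hypothetical, since the transcript doesn’t show ELIMedia’s actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class CopyrightClass(Enum):
    # classifications mentioned in the talk: TEACH Act and Creative Commons,
    # plus placeholders for purchased and original works
    TEACH_ACT = "teach_act"
    CREATIVE_COMMONS = "creative_commons"
    PURCHASED = "purchased"
    ORIGINAL = "original"

@dataclass
class MediaAsset:
    title: str
    course: str
    copyright_class: CopyrightClass
    citation: str
    tags: List[str] = field(default_factory=list)
    caption_file: Optional[str] = None  # associated after transcription comes back

    def validate_for_upload(self) -> None:
        # mirrors the rule that media can't enter the system without
        # copyright information and tags
        if not self.citation:
            raise ValueError("copyright citation is required")
        if not self.tags:
            raise ValueError("at least one tag is required")
```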

We use a Drupal front-end, and the back-end is a Flash Media Server. So the ELIMedia Server is a combination of the Drupal front-end and the Flash Media back-end.

So some of the benefits we’ve seen of this system thus far: we are able to maintain compliance very quickly and easily now, by requiring copyright information to be associated with the digital assets.

Also, we can classify them as to how we wish to use them. Are we classifying them under the TEACH Act? Or under Creative Commons? If we’ve created it, we can classify it as a Creative Commons item. Or we can record whatever Creative Commons license came along with that asset. And that all gets stored right along with it.

The other thing– and this is where 3Play Media really comes into play– is the accessibility side of closed captioning. We now have a very streamlined workflow for taking the media files, uploading them, getting the transcribed files back, linking them to the associated media files, and then having the closed captioning work right alongside the media within the course itself.

We’ve also removed the need for learning designers and faculty to manage a lot of the aspects of copyright and transcription within the course. By allowing them to associate that information in the media server, they no longer have to think about it in the course content itself.

We’ve also added a tagging system so that there is easy search and retrieval of all of the media assets. And we have the ability to go through and look at each of the courses and how many media assets are associated with each one.

And then finally, providing a lightweight project management tool through what we’re calling a virtual media request form, where an instructional designer will request a piece of media to be created. That information then gets transferred over to the media department, they fulfill the request and create the media asset, and it comes back to the learning designer for review. That has streamlined the process.

So if we look at the workflow of ELIMedia right now: we upload a file to ELIMedia– you can see the orange blocks there on the left-hand side– adding copyright information and adding tags. Those are required pieces of information in order to even upload it. And that helps us ensure compliance in those areas, so that a piece of media can’t be sitting there without that information. And then once those pieces of information have been put together, we assemble them into playlists or galleries, or put themes around them, that then get embedded into the course content. [AUDIO CONNECTION INTERRUPTED]

So why did we pick 3Play Media? Well, first of all, we had 778 videos and 184 audio files. To date we’ve transcribed 203 of those files, which equals about 42 hours of video. They have a high percentage of reliability– over 99%– and we did a test ourselves. We randomly tested 10 different files that were transcribed, manually going through to check the reliability of the transcription. And it came back 100% reliable– we did not find a single error in any of them. So that has really supported everything that we’ve been doing.

In addition, the APIs that come along with 3Play Media not only allow us, right now, to streamline what we’re doing, but in the future will let us automate this– with a future goal of being able to upload a file and have it automatically go up to 3Play Media, and once it’s transcribed, have it automatically come back down with no manual intervention. So that’s going to become a very powerful aspect of this system. And as accessibility becomes more important to higher education, this will help ensure it in a much quicker and easier manner.

Then finally, there is an interactive transcript aspect that we aren’t utilizing to the extent that we can, and we plan to use it more. But there’s the ability to take a video and get the transcribed file that sits alongside of it, and you can click on each of the individual pieces of text in the transcribed file to jump to certain points within the video. That becomes a very, very powerful tool, especially when we get into English as a second language. We’ve found that while we’re meeting needs of accessibility, we’re also meeting needs for alternative forms of content delivery. So video now gets a transcription, and people can work through things in a different way.

And on the right hand side you can kind of see the number of media assets we have per category.

So with that, I am going to now hand it over to Bryan so he can actually give you a demo of the system.

Presentation by Bryan Ollendyke from Penn State University

BRYAN OLLENDYKE: So hello everyone. Thanks for joining in on this webinar. I’m going to demonstrate what’s called the ELIMedia System. We’re transitioning this to a project called ELMS: Media, in case you are looking for that. But ELIMedia, you should be able to find it.

All the code that you’ll see here today you can download from drupal.psu.edu. And if you want to check out more about ELMS in general– because Keith mentioned we have a content management system component– you can go to elms.psu.edu to get that information and see the roadmap.

So the first screen that we get when we come to ELIMedia is an overview of all the media in the system. You can see there’s filtering, so I can say, all right, I’d like to select only these types of media. So I’ve already filtered it to video and Theatre 105, as an example.

We can further go and drill down into lessons if we want to. This is where we get into the tagging structure and things. If you want to tag it to make it easier to find after the fact you can do that.

So I’ll go and click through it. And you’ll see there’s a video for Theatre 105– it says World Theatre. What you’ll see on the page, once you have one completed, is who submitted it– so this is our Media Production Specialist– and what the resulting output is. It has player settings, so if I need to adjust this for where I’m going to embed it in a course, I can do that here, real quick, on the fly.

And then the embed code. So one of the major goals I had in creating this system was to eliminate the need for instructional designers, faculty staff, or even frankly, myself, to have to understand what object code is.

In fact, just to see what this would generate: if I click this, we can see what you originally would have had to write to make this video appear on the screen right here. And this embed code is the exact same thing. Now, we’re working on simplifying it even further in the future. But this is at least somewhat readable by a human being, quite frankly.

So let’s see what we would actually do to make that show up in the system. So I can go to upload media. You see I have all these different categories of media. Let’s do video. You’ll see we have required fields of title, course. So if I start to type in a course, you’ll see I have autocomplete. I have Art10 here. Lessons. I can populate this. This is optional.

Say it’s for lesson 10. We have on-demand or buffered streaming types. On-demand streaming is technically more secure because the user is never actually downloading the file, so we typically stick to that. It’s an on-demand stream.

You also have the video itself. This also supports multiple compressions of video. So if you have a low-bandwidth situation and you upload a lower-quality version of the video alongside the HD version, it will automatically switch based on whether or not the user has the bandwidth.
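Conceptually, that switching amounts to picking the best rendition the viewer’s connection can sustain. A rough sketch of the idea– the bitrates and the 0.8 safety factor are illustrative, not Flash Media Server’s actual algorithm:

```python
def pick_rendition(available_kbps: list, measured_kbps: float) -> int:
    """Return the highest bitrate that fits comfortably in the measured bandwidth."""
    usable = [r for r in sorted(available_kbps) if r <= measured_kbps * 0.8]
    return usable[-1] if usable else min(available_kbps)

# e.g. a low-bandwidth encode and an HD encode of the same video
print(pick_rendition([400, 1500], measured_kbps=900))   # -> 400
print(pick_rendition([400, 1500], measured_kbps=2400))  # -> 1500
```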

You can see that we support MP4, FLV, F4V, and movie files. We also manually grab a thumbnail of the video to put in there. And then the caption file is what we upload here. So this form kind of mashes everything together.

You can do caption files here, too. Normally you’d have to add them manually to the JW Player, which is what we use– or any streaming media player– and it’s very confusing.
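The recording doesn’t show which caption format gets uploaded here, but SRT is one common format players of this era accept. A small sketch of what a caption file amounts to– timed cues rendered in SRT syntax:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,500."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues) -> str:
    """cues: iterable of (start_sec, end_sec, text) tuples."""
    lines = []
    for i, (start, end, text) in enumerate(cues, 1):
        lines += [str(i), f"{srt_timestamp(start)} --> {srt_timestamp(end)}", text, ""]
    return "\n".join(lines)

print(to_srt([(0.0, 2.5, "Welcome to Theatre 105."),
              (2.5, 6.0, "Today we look at World Theatre.")]))
```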

Copyright information. So again, these are somewhat for categorization in-house, but you can see we also have the ability to put in whether or not it’s Creative Commons. And we have the various forms of Creative Commons licensing. You place the citation here.

And then there’s an additional privacy setting, which is more or less to protect the instructional designer or instructor from themselves. So if I have a video that is part of a larger work– you know, it’s five minutes from Indiana Jones, for example– I don’t want that to ever even potentially be released to the public. So by saying this is protected to the courses– and you can read the message here– it’s effectively locking it to our domain, so that the little embed code would never actually work anywhere else. You can also protect it to a specific course, too.
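One plausible way a “protect to our domain” setting could be enforced is a referrer check when the embed is served. This sketch assumes that mechanism and a hypothetical allow-list; it is not taken from ELIMedia’s code:

```python
from urllib.parse import urlparse

ALLOWED_DOMAIN = "psu.edu"  # hypothetical allow-list for domain-locked assets

def embed_allowed(referrer: str) -> bool:
    """Serve a protected asset only when the embedding page is on our domain."""
    host = urlparse(referrer).hostname or ""
    return host == ALLOWED_DOMAIN or host.endswith("." + ALLOWED_DOMAIN)

print(embed_allowed("https://courses.psu.edu/art10/lesson01"))  # True
print(embed_allowed("https://example.com/reposted-clip"))       # False
```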

We have some additional details. Fields and things. Sometimes we keep track of this. And then usage I’m going to demonstrate now. So let’s go back and we’ll actually search for a piece of media.

So let’s go into test. Alright. You’ll see I have a video here. So if I wanted to embed this, I could go through the steps here. I see the video. I can verify it’s working.

And we can also verify the closed caption aspect, too. We can jump around and it will pick up right where the closed captioning should, and we can turn that off after the fact. You’ll see you can change the volume, make it full screen. But let’s actually see what this would look like in a course.

So this is a sample page from our Art 10 course. You’ll see there’s the video there. And I click edit to see what I would have had to do to generate that. So this is all that’s actually on the page. And if you don’t believe me, I can hit disable rich text and you’ll see it’s basically the exact same thing.

So all you have to do is add in this little short code, which will then ask ELIMedia how it should handle the file and generate it. If you need to embed it or you want to embed more media on this page, there’s a little editor plugin.
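The demo doesn’t show the short code’s exact syntax, so the tag format below is hypothetical; the sketch just illustrates the filter pattern Bryan describes, where a small tag in the page body is swapped for generated player markup at render time:

```python
import re

# hypothetical tag syntax standing in for ELIMedia's real short code
SHORTCODE = re.compile(r"\[elimedia:(\d+)\]")

def render_asset(asset_id: str) -> str:
    # the real system would look up the asset and emit full player markup here
    return f'<div class="elimedia-player" data-asset="{asset_id}"></div>'

def expand_shortcodes(page_body: str) -> str:
    """Replace every short code in a page body with generated embed markup."""
    return SHORTCODE.sub(lambda m: render_asset(m.group(1)), page_body)

print(expand_shortcodes("Watch this first: [elimedia:42]"))
```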

And so we can jump to and say, search for video. And we’ll search in Art 10 in this instance because this is Art 10. So now I just have the Art 10 videos. And I can select one.

You’ll see that the embed code floats here, so I can then copy that. And then I come back to my documents. And we paste it in to save. So I have some additional options on our content editing interface. You can just kind of ignore it because I’m an administrator of the system.

We also– while that’s saving– have the ability to do dynamic playlist building. So you can actually create a playlist of videos mashing up YouTube videos, audio pieces. If you want to record from a webcam, you can put a webcam video in there.

If you look at any of the work that I posted on drupal.psu.edu or elearning.psu.edu, I actually use ELIMedia System to do all of our media. So it’s also a mechanism that I can use to broadcast to the world what’s going on here.

So there’s the file that I added. We can jump through that file. There’s the original one that I had up there.

There’s also the ability to do artwork galleries. So this handles much more than video. We’re going to focus on the video today, obviously, and transcription. But it has the ability to autorotate images, perform transformations to them. You can assemble image galleries in here. It’s a pretty robust, flexible system and we’re really looking to get more people excited locally and globally to try and help build this out even further.

This is a 100% free project. Keith mentioned it is built on top of Flash Media Interactive Server, but we are investigating how we can move that to Wowza or Red5 in the future– Red5 being a free option.

But just very quickly, what happens with an image. So I would upload an image– I just have the image in its original state. You can see here– so this is the original image– and then we can select various treatments to perform on the image, which can be defined in the system. So in this case, what it would look like is an old picture.

So instead of our media staff wasting time, quite frankly– as far as I’m concerned– going through and processing each one of these images to make sure it renders a certain way– putting in a drop shadow, for example– we can have the system generate that for us, and generate it in a consistent size and shape. We can then embed it, and because it’s by reference– say we figure out there’s a problem with the media in this course, that we actually embedded the wrong image or associated it with the wrong name– you can come back, hit edit on this image, remove it, and add a new one in. Which I won’t do; the instructor of the acting course might get mad about that. But I could remove this, record the new image, and that would propagate to all the places that that image is referenced.

I also mentioned the usage tab, which I’ll end on. Media is also somewhat self-aware. So I can look real quickly and see that this piece of media has actually been embedded on this page. So I can go straight to that page in the content and see, if I make changes to this, how it’s going to influence the content it’s presented on.

Now I’m going to hand it over to Josh, since we’ve pretty much reached the end of this demonstration.

Presentation by Josh Miller from 3Play Media

JOSH MILLER: I’m Josh Miller, one of the founders of 3Play Media. I’m going to give a little bit of an overview of what we do and how we fit into this whole system.

So we originally started with research being conducted in the Spoken Language Lab at MIT. We started with a project to help MIT OpenCourseWare add closed captions to their lecture content.

Our focus from the beginning was to build a system to achieve high levels of quality and accuracy for the transcripts themselves as well as the precise synchronization of the text to the media. Furthermore, we aim to provide a scalable, cost-effective solution with easier workflows, really than any other option. And that’s what you’re seeing here today. And finally, we’ve developed a number of interactive tools to enhance the experience and provide greater user control and interaction with the synchronized text and media.

One of our number one concerns is always quality. We use a multi-step review process that delivers 99% accuracy, even in cases of poor audio quality, multiple speakers, difficult content, or accents. What you see here is that there is an automated process, and then a complete, very rigorous human cleanup process. So it’s a unique approach to this transcription problem. And without exception, all of the work is done by professionally trained transcriptionists in the USA who have been screened and trained on our system.

So our goal is to make the workflow extremely flexible and ultimately unobtrusive. There are a number of ways to initiate the process. Everything from a direct web upload to various platform integrations.

We’ve actually set up a number of out-of-the-box capabilities to link a 3Play account to the most popular video platforms and lecture capture systems being used today. That makes it so that captioning files just takes a couple of quick clicks, as opposed to a number of complicated steps. So this ultimately will simplify not just the publishing, but also the file transfer process. Those are the two pieces we’re building integrations for.

So as I just mentioned, we offer a number of ways to upload content into the system. And as you’ve heard from what’s been described from Keith and Bryan at Penn State, one of the methods is our API in addition to FTP or other methods.

With the API, publishers can actually design a custom workflow to conform to their specific requirements, as you’re seeing with Penn State. And there are a number of different ways to use that.

This works not just for file input into our system for processing, but also for file output out of the system and back to an appropriate place for publishing. You can actually pull any transcript or caption format over our API into wherever it needs to go on your server, so that you can publish it pretty much automatically. So we’re essentially becoming more of a processing engine, so that the publisher doesn’t have to worry about some of these other complicated steps.
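As a sketch of that input/output loop: the endpoint paths, parameter names, and status values below are placeholders, not 3Play Media’s documented API. The point is the shape of the automation– push a file in, poll until it’s done, pull captions back out for publishing:

```python
import time
import requests

API = "https://api.example.com"  # placeholder base URL, not 3Play's real endpoint
API_KEY = "your-key-here"

def upload(media_path: str) -> str:
    """Send a media file in for transcription and return its id."""
    with open(media_path, "rb") as f:
        r = requests.post(f"{API}/files", params={"apikey": API_KEY}, files={"file": f})
    r.raise_for_status()
    return r.json()["file_id"]

def fetch_captions(file_id: str, fmt: str = "srt", poll_secs: int = 300) -> str:
    """Poll until processing finishes, then pull the caption file for publishing."""
    while True:
        status = requests.get(f"{API}/files/{file_id}", params={"apikey": API_KEY}).json()
        if status.get("state") == "complete":
            break
        time.sleep(poll_secs)
    out = requests.get(f"{API}/files/{file_id}/captions.{fmt}", params={"apikey": API_KEY})
    out.raise_for_status()
    return out.text
```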

And finally, our interactive tools are also built on our API. We can host your interactive transcripts and video search capabilities, or you can host them yourself. And that can be tied into the custom workflow quite easily as well, so that all these pieces fit together quite nicely.

And another tool that can be run over the API, which we’ll talk about in a second, is the ability to edit transcripts in real time. In case you ever need to make a change to a transcript or a caption file, that can be done very, very quickly and updated immediately.

So one thing we’ve found is that no matter how hard we try, certain proper nouns or vocabulary can be a bit difficult to get exactly right. So we built the ability for a publisher to make changes on the fly. And by default this interface lives within the online account that we provide.

But when you think through an implementation like what we see with the ELIMedia Server, this editing interface can also be built into a more customized workflow– so that, say, professors who have uploaded their content into the system can also log in and make edits on their own from a simpler interface, such as the actual ELIMedia workspace. And any changes that are made immediately propagate through to all of the output files. So there’s no need to actually reprocess anything; everything gets updated immediately.

So we’ve also built a number of more interactive tools that can be used with the time-synchronized text that we create. These are in addition to closed captions and are simply another option as opposed to a replacement. And this is completely a choice for the publisher depending on what it is you’re trying to achieve. So it’s not an either/or scenario by any means.

Like I said, these tools use our API. And so that makes it very easy to build into an automated workflow and into a pretty consistent, easy, publishing process.

So with an interactive transcript the text is precisely synchronized with the media and it’s actionable as well. So it’s a little bit different from closed captions in that way. Each word leads to an exact point in the media so that users have the ability to truly follow at their own pace. You can actually click on a word and jump to that exact point in the video file.

So this is extremely useful for anyone with hearing impairments, but also for people who have any issues following difficult content, or who speak a language other than English. It really makes the process of reviewing content and finding pieces to go through again much, much easier.

So finally, the other part of this that’s really interesting is that the video now becomes searchable by spoken word. So you can actually search by keyword and then jump to precise segments based on your search results.
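Underneath, both click-to-seek and keyword search rely on every word carrying its own timestamp. A toy illustration of the data shape (not 3Play’s actual output format):

```python
# each transcript word carries the time at which it is spoken
words = [
    {"text": "welcome", "start": 0.4},
    {"text": "to", "start": 0.9},
    {"text": "world", "start": 1.1},
    {"text": "theatre", "start": 1.5},
]

def search(keyword: str) -> list:
    """Return the timestamps where a keyword is spoken, for seeking the player."""
    return [w["start"] for w in words if w["text"] == keyword.lower()]

print(search("theatre"))  # -> [1.5]; seek the video player to 1.5 seconds
```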

One thing I’ll mention about the interactive transcript is that it’s built to be compatible with a number of different video players– basically all of the video platforms that we showed before, as well as platforms such as YouTube, Vimeo, and blip.tv. So it’s a very flexible tool that automatically recognizes the type of video player when it’s published properly, and it can very easily be synced up onto a page.

So we’re going to take about one minute to aggregate some of the questions that have come in. And then we’re going to go through those questions together. It’ll be us as well as Bryan and Keith. So feel free to ask any more questions you might have in the window right now and we’ll be back in about one minute.

Q&A

TOLE KHESIN: OK. Let’s begin with some of these questions that have come in. There have been a number of questions around cost. Maybe you guys can talk a little bit about the costs involved in doing the transcription and captioning.

KEITH BAILEY: Yeah. Thank you. The cost for us has been absolutely worth it– I believe it’s $2.50 per minute of transcription. And the value we’ve seen in that is– compared to the cost of actually hiring someone to sit here and transcribe, which probably would not be at the same level of quality– with that level of reliability, the value has been absolutely tremendous for us.

And I think to add to that is the turnaround time. We’ll upload a video file and within two days we’ll have that back and transcribed. So one of the requirements with ADA is that a student will come in and make a special request for the need for accessibility in a course. And the turnaround for us becomes very critical at that point because the clock starts ticking as soon as that request gets put in.

So our ability to then just push those files up and realize that this is going to happen within the next two or three days, we can get that back and turn it around and make it accessible to the student. And we don’t have to go out and find a resource to go in and manually do this, and then edit it, and then check the quality assurance of it.

So I believe– you guys can answer this a little bit better– the exact costing of it, but from what I’ve seen it’s about $2.50 per minute. And the value of that has far exceeded the other possible solutions for us.

JOSH MILLER: So just on the $2.50 a minute. There have been some questions about that. That’s actually per video minute or audio minute. So it’s based on the duration of the content, not how long it takes us to transcribe or caption. So it’s all completely based on the content itself.
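For context, a quick back-of-the-envelope using the figures mentioned earlier in the webinar (roughly 42 hours transcribed, at $2.50 per recorded minute):

```python
hours_transcribed = 42       # Keith's earlier figure
rate_per_minute = 2.50       # dollars per minute of recorded media
print(hours_transcribed * 60 * rate_per_minute)  # -> 6300.0, i.e. roughly $6,300
```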

We’re waiting for Keith and Bryan’s connection to come back– their audio connection– to talk about who’s covering cost in terms of centralized IT versus various departments. So as soon as we get them back we’ll have them answer that question. We’ll continue with some other questions in the meantime.

TOLE KHESIN: OK. There’s a question here about how the transcription process works with respect to speech recognition.

JOSH MILLER: So what we do is we have a unique process that involves both an automated step as well as humans. And so content will go through speech recognition first, which gives us a draft. And we then take that draft and completely clean it up.

Everything goes through a very rigorous editing process by humans so that the quality is top notch. And, in fact, because it’s more of an editing process, the people who are doing this process are able to actually spend a little bit more time and thought on what is being said. So that in most cases the quality is actually higher than manual transcription. We are able to catch a lot more of the difficult words and some of the nuances that wouldn’t ultimately be caught with just plain transcription from scratch.

TOLE KHESIN: There’s another question here. What media formats are supported?

BRYAN OLLENDYKE: We support JPEG, PNG, MP3, MP4, FLV, MOV. There isn’t any compression going on on the server. You can add in software that just takes any format– almost like YouTube– and converts it to what’s usable by the system, but we’ve chosen not to integrate that at this time.

There’s also the ability to accept documents and the document field can accept basically anything. Obviously it won’t be transcribed, but if you’re just looking at it from a purely asset management perspective you can do it that way.

JOSH MILLER: Can I go back to costs just for a second? There are some questions about how you at Penn State have handled the distribution of cost– whether that’s something that’s covered by you guys as a centralized body, or if it’s being distributed to the actual professors or departments that are requesting the captioning.

KEITH BAILEY: Right. OK. That’s a great question. Currently, the way that we function is that the e-Learning Institute handles the cost of transcription. Trying to be proactive about transcribing all of the pieces of media is something relatively new and something we want to establish. So we’re trying to figure out how we can get those types of fees covered up front to be able to support the needs for transcription.

Right now, if we are reactive, we have the ability to go back to a central unit within the university and say, we have a student with a need or disability, and we can submit documentation to say we needed to do this type of format conversion or transcription. And then we can recover our cost. But that’s a reactive mode right now.

So we would like to try to figure out how we can be more proactive. And our learning designers right now are very proactive in making sure that courses are accessible before the need comes about.

Right now we cover the cost. The instructor does not. It is a service that the e-Learning Institute provides.

TOLE KHESIN: Another question here. In your courses are your faculty your primary content creators? Meaning, are they the ones doing the electronic resources creation? Or does your staff primarily create the content for them?

KEITH BAILEY: The faculty are the primary subject matter experts. In cases where we do a lecture, or a stage production in a shooting studio, we’ll use the media staff to physically do the shoot and/or get the recordings prepared so that post-production becomes much simpler. But the faculty members are the primary source of the content and the information that is being pushed into the audio and video files.

TOLE KHESIN: Does a captioned video add significant burden to bandwidth over an uncaptioned one?

BRYAN OLLENDYKE: I can’t say we’ve noticed a difference. We haven’t done testing at that level, but it shouldn’t. The caption file is, I believe, usually around 46KB or something like that. So we haven’t noticed any issues with it. We use the media streaming side of the system to power literally all of our media, both on our public-facing websites and in our courses, and we haven’t experienced any load balancing issues at all.

TOLE KHESIN: How many programming hours were required to develop this system?

BRYAN OLLENDYKE: I worked for four months to bring this system up and get it to a state that we could give it away to people. So I’m the only programmer on this project. I can’t really quantify hours. It’s usually 40 to 60 hours a week on this type of stuff.

But the beautiful thing with Drupal is the knowledge builds on itself. So we are able to build far more sophisticated systems every time we come out because of the knowledge gained by building the previous one.

KEITH BAILEY: I think about a year ago– a year and a half ago– Bryan kind of sat back and noticed that we were struggling a little bit with how we were embedding the media in the courses. And literally one weekend he went home and constructed a prototype, using Drupal as the front-end, of what the possibilities might be. When he came in the next week and demoed it to us, it was like, wow, you’re really on to something here. And then we worked with a visual designer to help develop the interface, and Bryan did a lot of programming on the back end to make things work.

And we realize that this is still early on– this product is still in its infancy. There are directions we would like to take it, and we would like to get more community involvement in this, realizing that it is an open technology and we want to stay true to that.

But the advantages that it can provide– especially from an instructional standpoint of making sure the transcription occurs, managing copyright, and keeping the media separate from the content itself– are so tremendous that it’s a huge advantage for us. And if we can come up with an open source streaming server solution, maybe using Red5 or something like that, then this is an out-of-the-box type of solution that anybody could take and use, with a framework that can be customized to fit your individual needs.

BRYAN OLLENDYKE: That’s kind of our underlying philosophy, as Keith mentioned– the separation of all these different systems and layers of course material. It’s also why we selected Drupal as the platform to develop all of these tools on: knowledge gained in one aspect, such as this asset management system, can be transferred to our next big project, which is a total overhaul of the ELMS Instructional Content Management System.

So it’s a very sustainable approach we’ve found. As I mentioned, we only have one developer for all the systems that you would ever see us talk about right now.

With this system specifically, I felt that if I invested the time up front, it could save our instructional design team an enormous amount of time on tedious things such as– well, we need to go through and verify that we put alt tags everywhere. That’s not what instructional design is about. That’s not something instructors should ever have to think about, from my perspective.

So by abstracting the media and the images embedded in there– we talked about accessibility, and I didn’t really go over this– but with the images, because you’ve specified, hey, this image is called x, and you put in an embed code rather than writing an IMG or an A tag yourself, I can automatically grab that and say, oh, I know what you want, and put in the things that will make it accessible, such as the title and alt attributes.
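The idea is that markup is generated from stored metadata rather than hand-written, so accessibility attributes can’t be forgotten. A sketch of that generation step; the actual markup ELIMedia emits isn’t shown in the demo:

```python
import html

def accessible_img(src: str, title: str, citation: str) -> str:
    """Emit an <img> with alt/title filled in from the asset's stored metadata."""
    t = html.escape(title)
    return (f'<img src="{html.escape(src)}" alt="{t}" title="{t}" />'
            f'<cite>{html.escape(citation)}</cite>')

print(accessible_img("old-main.jpg", "Sunset over Old Main", "Photo: J. Doe, CC BY 2.0"))
```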

The little fine-grained things that could trip people up are what the system is supposed to be good at catching. And then say we notice something with how we’re implementing video. The content isn’t even tied to the JW Player– it has that written into the tags, but because it’s all centrally referenced, if there’s an accessibility issue with the JW Player, we identify that, we fix it in the ELIMedia system, and that’s automatically propagated to all of our courses. So there’s no more going through with a fine-tooth comb every time you make a little change.

JOSH MILLER: Bryan, that’s great. So you mentioned it took about four months to get the whole system up and running. Do you mind clarifying what some of the heaviest pieces were? Where the video side fit in as a proportion, as well as the captioning aspect– in terms of getting that up and running, how much of a resource requirement were all those pieces?

BRYAN OLLENDYKE: Sure. Sure. Honestly, the hardest thing to do was talking to the people that host the server for us, and figuring out a way of getting the files to upload into Drupal and land securely within the scope of the Flash Media Server. This does say it’s built on top of Flash Media Interactive Server, but I always try to build things in an almost overly abstract form, with the understanding that I’m going to guess in 10 years we’re not going to be using this system– and in 10 years we’re not going to be using Flash Media Interactive Server; that product may not even exist anymore. So it needs to be enough of a contained framework that it just says, I’m going to push a file to x, and then let the streaming server pick it up and worry about it.

So that was probably the hardest component. I believe we went back and forth for about two or three weeks just making sure that when you uploaded a file, it couldn’t be downloaded again by someone who was crafty and figured out what the address was. So it was more the security aspects in that respect.

The actual work of developing the short code format probably took about a week or two. Standing up the Drupal system– as Keith mentioned, for a prototype– that was about a day or two over the weekend.

Drupal is an extremely complex system and it’s really hard to understand up front. But now that I’m about three years into this, I feel I can literally do anything with it.

So the image processing– again, all this stuff just kind of builds– once we had built the scaffolding of the embed codes for video, the scaffolding for the image codes was extremely easy.

So actually the webcam aspect– which I didn’t even mention; this does a live webcam recording thing– I added in just as a, hey, we need to try this. It took two days, I believe. Because at that point in development we had gotten so far ahead of all the problems that we’d had in getting things up and running that now it’s– oh, OK, well, where do I have to push the video? I already know how to do that.

How do I integrate the webcam? That was literally the only question– how do I integrate the webcam to do the recording– because all the rest of the framework had already been laid. Oh, generate it with an embed code, and you’re going to render a webcam on the other end instead of a video. The system really doesn’t care, for the most part, what it is; it just knows it should render whatever it’s told to.

KEITH BAILEY: I think the advantages of building in that type of workflow become incredible. So you designate someone as kind of the gatekeeper for what gets pushed up and what does not get pushed up. So media could get uploaded and put into, quote, a queue that someone can then approve. And through the use of the APIs and the accounts, you could literally just click and say, OK, approve anything that’s up there now, and it automatically goes up.

And then the other side is, from an accounting standpoint, we have seven academic departments. At some point in time I would love to be able to quantify how much we’re spending per department– or per course, if you will– for the transcription of materials. Right now it’s all one big bag. But in the future, through the APIs and creating separate accounts, and automating it, we would have the ability to keep that accounting much cleaner.

BRYAN OLLENDYKE: Tole, is there any way that you could give my screen control for a minute? I’ve had a couple of questions on Twitter about the image processing.

TOLE KHESIN: Yeah. Absolutely.

BRYAN OLLENDYKE: So I had a couple of questions come through on Twitter about what, in this system, is called image treatments. I briefly mentioned you can take an image after the fact and do manipulations to it. So the whole idea is: say I take a picture on my phone. I am really bad at image editing, so I need something quick to change that and make it look cleaned up so it can be presented online reasonably.

So what you can do is use the image treatments functionality to build these kinds of reusable components. And this starts to allow– you can get into actually writing HTML if you want to– but I was asked about how you account for context in the alt tags and things like that. You can actually utilize the information from the image itself. So if I wanted to, I could just write a little snippet here, and then below it say, OK, actually I’d like it to place whatever the citation information is– because that’s a requirement for copyright, yeah?

And then you can say resize options. What do you want to do to the image? So dynamically I would like to scale and crop the image. I know for this course they’re going to be 200 by 200. We’ll do pixel based.

Color manipulation. We have original, grayscale, negative, sepia, or we can even do a color shift, which is a little more radical. Let’s just do grayscale. Maybe this course is about the ’20s and I’d like all the imagery to really be fully grayscale.

Light box. Yes or No. You can turn that on and off there.

Additional [INAUDIBLE] visual effects. We could say hey, I want to add a drop shadow, picture frame, round the corners automatically. Let’s just do drop shadow.

Watermarking. Say that you’re worried– this is a Picasso, and you’ve taken an excessively high-resolution version of it and put it up here. You probably shouldn’t do that, because it might be copyrighted. So you can layer in an image– a lot of times we’ll layer in the ELMS logo– so that it can’t really be used in the same way. You lower the opacity to something like 10, put it right in the middle, and it’s not completely distracting or taking away from the work as a whole. I’m not going to upload an image for this.
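To make the treatment pipeline concrete, here is a rough equivalent of the settings walked through above– 200x200 scale-and-crop, grayscale, a drop shadow, and a low-opacity watermark– using Pillow. It illustrates the operations, not ELIMedia’s implementation:

```python
from typing import Optional

from PIL import Image, ImageFilter, ImageOps

def apply_treatment(path: str, watermark_path: Optional[str] = None) -> Image.Image:
    img = Image.open(path)
    img = ImageOps.fit(img, (200, 200))            # scale and crop, pixel-based
    img = ImageOps.grayscale(img).convert("RGBA")  # color manipulation: grayscale

    if watermark_path:
        mark = Image.open(watermark_path).convert("RGBA").resize(img.size)
        mark.putalpha(mark.getchannel("A").point(lambda a: a // 10))  # ~10% opacity
        img = Image.alpha_composite(img, mark)

    # drop shadow: a blurred dark rectangle offset behind the image
    offset, blur = 6, 4
    canvas = Image.new("RGBA", (200 + offset + blur, 200 + offset + blur), (0, 0, 0, 0))
    shadow = Image.new("RGBA", img.size, (0, 0, 0, 160))
    canvas.paste(shadow, (offset, offset))
    canvas = canvas.filter(ImageFilter.GaussianBlur(blur))
    canvas.paste(img, (0, 0), img)
    return canvas
```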

You can also add in some tagging for these image treatments. So let’s make one called quicktest, or quickchange, in this instance.

Kind of abstract those components. So this is my sample image, right? So I have DrupalCon costume in a meeting. If I click it, it’ll pop it up. So this is actually the image that’s uploaded. And you can see it’s utilizing some of the fields that have been taken into account by that. You can always go back and verify what the settings were to accomplish this through here.

But now that I’ve added that treatment to the system, I can go into search assets and find an image– such as the original image that we showed earlier. And now I have that treatment here. So you could do quickchange– let’s see what this looks like in quickchange. So there we go.

It does take a little programming to get those things in there in the first place, such as the drop shadow. But once you put that in place now you have an even further reusable component.

A similar thing can be done with the image galleries. So I can go and assemble an image gallery– you’ll see I have just a real quick mock-up here. I can add some images to the image gallery. And then, for the gallery style, we have either artist artwork– which was the one I showcased earlier– galleria style, or fancy sliders. This is basically just a test.

So we can make an image gallery dynamically based on the resources that we’ve already put in there. And those image galleries can be configured– more styles can be added if you have a programmer who can do so. But they can be configured to look at and utilize the copyright information, again, to provide context and legality.

So you’ll see you can take the names of these, place them there, and it builds out a list to the side. Again, I have an embed code for that, so I never need to know how much it had to do to make that happen.

TOLE KHESIN: OK. Well we are going to wrap up this webinar here. I’m just going to put on the slide with our contact information here. So I wanted to thank everyone for joining us today. If you have any follow up questions or if we weren’t able to get to your questions, please feel free to email us or Keith or Bryan at Penn State. Thanks again and have a great rest of your week.
