Quick Start to Video Search [Transcript]

LILY BOND: Welcome, everyone, and thank you for joining this webinar entitled “Quick Start to Video Search.” I’m Lily Bond from 3Play Media, and I’ll be moderating today. I’m joined by two of my colleagues, Lanya Butler, who is a product engineer here at 3Play Media, and CJ Johnson, who’s one of the co-founders and the CTO here at 3Play.

As a quick agenda for today, I’m going to go through some of the requirements that make up the basis for video search, followed by the benefits of captions, transcripts, and video search tools. And then CJ is going to go through 3Play Media’s interactive transcript architecture and a basic example of the interactive transcript.

And then Lanya will go through some more advanced customization options. All of these examples are available to view and download, and I will chat them to you as we go through them and include the links in all of the follow-up communication.

To go through some of the components of video search, video search is built off of captioning and transcription. Transcription is the plain-text version of the audio content, and video search is built by making a transcript interactive.

Closed captions, meanwhile, are time-synchronized so that they can be read while watching the video. Closed captions assume that the viewer can’t hear, so they convey all the sound effects, speaker identification, and other non-speech elements.

And the interactive transcripts are time-coded transcripts that use the closed captioning as a basis to highlight the words as they are spoken in a video. And you can search within the video to find a specific term you’re looking for and jump directly to that point.

And video library search– basically it compiles transcripts across an entire library of videos and you can search for something, like linguistics, across the entire library and see every point in every video where that word shows up. And then you can just click and go, just like in the interactive transcript.

So what are interactive transcripts? Today, we’re specifically going to be going over our plugin that we offer, which is an interactive transcript that uses captions to make videos searchable and interactive. Interactive transcripts are not a replacement for captions legally, but they do improve the engagement for all users.

And our interactive transcripts are really easy to set up. They’re a plugin we provide to make your video search easy to implement, and there are lots of customization options which CJ and Lanya will go through shortly. And finally, Jeroen Wijering, who’s the founder of JW Player, said that interactive transcripts are one of the top three innovations in online videos. So they’re a really great thing to add to your video collection.

Some of the benefits of closed captioning and interactive transcripts– obviously, they provide accessibility for people who are deaf or hard of hearing. There are 48 million Americans living with hearing loss, which is about 20% of the population.

They also provide better comprehension to people who may be in a sound-sensitive environment, like an office, library, or gym. And they provide better comprehension in situations where the speaker has an accent, if the content is difficult to understand, if there is background noise, or if the viewer knows English as a second language.

They also provide a great basis for search engine optimization. Google can’t watch a video, so adding a transcript to your video provides more context for Google so that they can properly classify your video. And that will lead to more inbound traffic and views.

They’re also a great starting point to create other types of content. 50% of students at the University of Wisconsin were actually repurposing the transcripts for study guides. But you can also take that content and use it to create case studies, support docs, blogs, white papers, and infographics.

And finally, transcripts– once you have an English caption or transcript file, you can translate that into foreign languages to make your video accessible on a more global scale.

And then just a few statistics about interactive transcripts– MIT OpenCourseWare was using interactive transcripts on their videos. And they did a survey that found that 97% of their students had an enhanced learning experience based on these interactive transcripts.

And so now I’m going to hand it off to CJ, who is going to go through how to implement the interactive transcripts yourselves.

CJ JOHNSON: Thanks a lot, Lily. Just a quick overview on our new release, our latest release of the P3 software. P3 is really just a software library that helps users take video, in a YouTube player for example, and engage it with timed text to make interactive experiences.

Our previous version was really built from the ground up maybe about five years ago, and a lot has happened on the web since then. For one, the advent of responsive design was a huge driver of our latest architecture. For those of you not familiar with responsive design, it’s really about designing a web page so that when the layout dimensions change, from a desktop computer down to an iPhone, the different components on the page react and shrink or grow depending on the window size to make a good experience throughout those different environments.

On top of that, we’ve had a lot of requests from our customers, which we’ve responded to with this new architecture, for the ability to have more flexible color schemes, more flexible layouts on the page, and also to customize in other ways, to build on top of it whatever you can imagine interactive transcripts doing. So we’ve done some cool things like highlighting text to share clips, but also making assignments for students.

So if you have dozens of hours of content and you want to focus in on the key 5 or 10 minutes that really pertain to this week’s lessons, there are ways to do that. If you’re a JavaScript developer or have one on staff, there are ways to engage with the plugin to build custom features on top of it. And we’ll get into that a little bit, but we’re really just going to give you a pretty helpful overview in this webinar.

And finally, for some of our customers who really want to keep assets in-house, we have built in the capability to automatically post on your own site all of the assets required to create the interactive transcript. That’s either providing a link to a transcript from your own back-end server, or even pasting the transcript into the interactive transcript embed, so you get the additional benefit of Google finding the text, like Lily mentioned before, for SEO.

Again, back on the responsive design concept, for those of you used to frameworks like Twitter Bootstrap or Foundation, you’ll see a lot of familiar themes in terms of how you lay out the interactive transcript with HTML elements and use class names that are reserved for special functionality in the transcript.

So first, before Lanya gets into some of the cooler stuff that we can do, I’ll just give a really quick example of what it looks like to embed an interactive transcript. Our previous version, for those of you who used it, was really JavaScript-based. So in one instance, you could take a pre-made template that we have with our skins and put a JavaScript embed tag onto your web page. But all the customization really came through JavaScript code.

And basically, that presented a few issues around having to have someone on staff who knew JavaScript. But also, LMS’s tend to scrub out JavaScript, so some of those content management systems you might be using made it really tough to customize everything. So we’re really focused on making HTML the key driver of the interactive transcript. And I’ll just jump over to an example and show you how that works.

I should mention that we’re putting each of these examples up on our site, interactive.3playmedia.com, under the P3SDK folder. So this first one, the basic HTML example, will be live. All these examples will be live going forward. So you can come back to them, check out the source, really dig in, and use that as a foundation for your own work. But feel free to reach out to us with questions.

So reviewing the basic functionality here, I have a YouTube player with a really basic interactive transcript. So I’ll click Play. I’ll keep the sound off. But I just want to show you that the words track along as the speaker is going. In this case, I’ve used CSS to make the active word red. I can still click a word to jump to that part of the video. And this is sort of the foundation of the interactive transcript.

You notice each word has a time stamp behind it. We do transcribe at a very granular level in terms of the timing, so this word-level timing is there for anything that you process through us here at 3Play.

Jumping into the code really quickly, and this will be the only code we’re looking at. We don’t want to put everyone to sleep here on St. Patrick’s Day. But just how this works: there are really three main components here on the web page. The first is the P3SDK library. This is the JavaScript library that drives the interaction of the text with the media.

You’ll want to use this URL when you’re building out your own versions. This is hosted on our CDN, so it’s very scalable. So if you have a lot of traffic, don’t worry about it. Feel free to use this link.

Each release of the library comes with a version. So right now, we’re on Version 1.2. It’s named Kraken, the sea creature; that’s the name of this P3SDK release for now. It’s [INAUDIBLE]. The latest version is also always available under p3sdk.current.js.
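
For reference, a minimal sketch of what the library include might look like on your page. The host shown here is a placeholder, not the real CDN address; the p3sdk.current.js file name is the one CJ mentions, and the exact URL should come from 3Play’s support docs.

    <!-- Load the P3SDK JavaScript library. The host below is a placeholder;
         use the CDN URL from 3Play's support docs. p3sdk.current.js always
         points at the latest release (currently Version 1.2, "Kraken"). -->
    <script src="https://EXAMPLE-3PLAY-CDN/p3sdk.current.js"></script>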

Next, again, some basic CSS. So very simply, we have the named class: when a transcript renders, p3sdk-current-word will always be whatever the current active word is in the transcript as the video is playing.

So you can do what you want here: change the background color to red like we did, or change the color to green, or make it a really big font. You can do whatever you want to do with it. And then there’s just some really basic styling of the transcript window itself, in terms of its dimensions and how it looks.
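
A short sketch of the kind of CSS being described, assuming the class names as spoken; the hyphenated spelling of the transcript content class is an assumption, and the window styling values are just illustrative.

    /* Highlight the word currently being spoken (class name as described above). */
    .p3sdk-current-word {
        background-color: red;
    }

    /* Hypothetical styling of the transcript window itself: dimensions and look.
       The content class spelling here is an assumption. */
    .p3sdk-interactive-transcript-content {
        height: 320px;
        overflow-y: scroll;
        font-family: sans-serif;
    }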

So as I mentioned before, the real meat of the new version here with the SDK is in the HTML. Anywhere on your page that you want an interactive transcript with your video player, you want a div around all the components that make up that environment. And so we use a class name, P3SDK container, along with what type of player you’re using and what the player ID is.

So in this case, it’s YouTube. And we use the player ID youtube_handle. What this does is, when the P3SDK library loads, it looks for every P3SDK container on the page and engages the video with the text so your experience comes alive. This also means that you can have multiple videos with interactive transcripts on one page, which is something that we’ve run into quite a bit over the last few years.

The next component is just a video embed. So this is really close to a standard YouTube embed you get. Right now, we’re running iframe embeds with a couple little tweaks here.

So first, we include their iframe JavaScript API and enable it for the player, so we have access to it. And then the ID I mentioned before just needs to match up with what’s in the container. And that’s what gives us access to talk to the player.

Then to actually create the interactive transcript, we define it by creating a div with the class P3SDK interactive transcript. This just says, this is where the module will be built. We configure it through attributes on the HTML.

So in my 3Play account, in my 3Play account system, these are my project ID and my file ID, so I know which content to grab over the API. And I can do other configuration options here; for example, I can set up what the scrolling behavior is like.

And the final piece that we have with the interactive transcript itself is just the target. So P3SDK interactive transcript content is where, when that content is pulled over the API, it’ll actually go and be rendered on the page. There are a lot of other components that you can build in, and I’ll let Lanya take you through those.
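
Pulling the pieces just described into one hedged sketch: the hyphenated class spellings, the data attributes for the player type and the project and file IDs, and the placeholder video ID are all assumptions based on the spoken names, so check the live example at interactive.3playmedia.com for the exact markup. The enablejsapi=1 parameter is the standard way to enable the YouTube iframe JavaScript API.

    <!-- Container that ties the player and the transcript together.
         Player type and player ID are declared so the SDK can find the video. -->
    <div class="p3sdk-container" data-player-type="youtube" data-player-id="youtube_handle">

      <!-- Standard YouTube iframe embed, with the JavaScript API enabled and
           an id that matches the container's player ID. -->
      <iframe id="youtube_handle" width="560" height="315"
              src="https://www.youtube.com/embed/VIDEO_ID?enablejsapi=1"
              frameborder="0" allowfullscreen></iframe>

      <!-- The interactive transcript module, configured through attributes.
           Project and file IDs tell the SDK which content to fetch over the API. -->
      <div class="p3sdk-interactive-transcript"
           data-project-id="YOUR_PROJECT_ID"
           data-file-id="YOUR_FILE_ID">
        <!-- Target where the fetched transcript text is rendered. -->
        <div class="p3sdk-interactive-transcript-content"></div>
      </div>

    </div>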

LANYA BUTLER: Thanks, CJ. With this example, I’d like to illustrate some of the core features we have available for development and also some examples of how to customize the look and feel of the plugin. So first, you will notice that the functionality CJ described in the basic version is available here. But we’ve changed the red coloring of the P3SDK current word to blue, and that’s just using CSS.

I’ve also added some components. So the progress bar, the blue and gray bar under the video, tracks along as the video plays, so you can see how far into the video you are at a given time. If you type in the search bar, search results will light up in pink. And you can see that the results are marked along the progress bar according to the time that they occur. You can click the results, and it’ll jump to that part of the video.

Next we have a group of icons that add a couple of other basic features. The Download Transcript link is pretty self-explanatory. I’ve included the HTML markup on the right showing how this was created. P3SDK interactive transcript download is the magic class that makes the link download a transcript when I click it. In this case, I decided to use PDF as the format, so it’ll download a PDF. You could also set it to download a TXT file.
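
A hedged sketch of what that download link might look like; the hyphenated class spelling and the format attribute name are assumptions based on the description, so treat the exact names in the support docs as authoritative.

    <!-- Download link: the class activates the SDK's download behavior;
         the format attribute (name assumed) selects PDF instead of TXT. -->
    <a href="#" class="p3sdk-interactive-transcript-download" data-format="pdf">
      Download Transcript
    </a>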

The next feature is the toggle keywords feature. So when I click the Key icon, keywords and phrases in the transcript will blow up according to their importance in the video timeline. So I can quickly look through these, and see what words are important, and see what the video is about.

The Keywords button is similar to the Download Transcript button with a special class name to activate it. And in this case, we’ve also used Font Awesome to give the button a key icon instead of just that plain text.

The Lock button that is next to the Keywords button is similar to how the Keywords button is set up. And that just modifies whether auto-scrolling is enabled or not in the transcript.
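
A sketch of the keywords and lock buttons in the same pattern: the Font Awesome icon classes (fa-key, fa-lock) are real, but the p3sdk class names here are hypothetical placeholders for the reserved classes described above.

    <!-- Toggle-keywords button: icon from Font Awesome, behavior from a reserved
         p3sdk class (exact class name assumed; see the support docs). -->
    <a href="#" class="p3sdk-interactive-transcript-keywords">
      <i class="fa fa-key"></i>
    </a>

    <!-- Lock button: toggles auto-scrolling on and off (class name assumed). -->
    <a href="#" class="p3sdk-interactive-transcript-lock">
      <i class="fa fa-lock"></i>
    </a>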

So I’d like you to imagine that all these components can be changed. You could throw the progress bar on the bottom of the page. You could make the download link red, or an image. That’s completely up to you.

You can find out more about these features and more on our support documents. And we’ll share a link at the end of the presentation. Next, I’d like to show you an example of the P3SDK using responsive design.

So the major driver for building the P3SDK really was the emergence of responsive design, for flexible presentation across desktop and mobile devices. Since the layout is now in your hands, you can use any existing responsive design framework, like the Bootstrap and Foundation we mentioned earlier, to plug the interactive transcript components right into your web page.

In this example, we are using Bootstrap. For the desktop experience, we have a wide presentation with the search, download, progress bar, and keywords features enabled.

We run into a few problems when we look at the transcript on an iPhone. For one, this wide layout is just way too large for an iPhone screen. And some of the features we have on desktop just don’t make sense on an iPhone. You can’t really download a PDF to the desktop of your iPhone. And since on an iPhone the video takes over the entire screen, the progress bar doesn’t make a whole lot of sense outside of the video.

So when you shrink the screen, you’ll notice that the components have collapsed into a logical column. The download and progress bar features have disappeared for a more simplified experience. On an iPhone, the interactive transcript then serves more as a browse and preview tool. The user can read the contents of the video, search for a keyword, or look at keywords using the toggle keywords feature, and decide whether they want to watch the video and take the time to load the video.
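
A minimal sketch of that idea using Bootstrap 3’s grid and responsive utility classes (col-xs-12, col-md-6, hidden-xs are standard Bootstrap 3; the p3sdk class and attribute names remain the assumptions noted earlier). The video and transcript sit side by side on a desktop and stack into a column on a phone, and the download link is hidden at phone widths.

    <div class="p3sdk-container" data-player-type="youtube" data-player-id="youtube_handle">
      <div class="row">
        <!-- Video column: full width on phones, half width on desktops. -->
        <div class="col-xs-12 col-md-6">
          <iframe id="youtube_handle"
                  src="https://www.youtube.com/embed/VIDEO_ID?enablejsapi=1"
                  frameborder="0" allowfullscreen></iframe>
        </div>
        <!-- Transcript column, with the desktop-only download link hidden on
             phones via Bootstrap 3's hidden-xs utility class. -->
        <div class="col-xs-12 col-md-6">
          <a href="#" class="p3sdk-interactive-transcript-download hidden-xs"
             data-format="pdf">Download Transcript</a>
          <div class="p3sdk-interactive-transcript"
               data-project-id="YOUR_PROJECT_ID" data-file-id="YOUR_FILE_ID">
            <div class="p3sdk-interactive-transcript-content"></div>
          </div>
        </div>
      </div>
    </div>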

So let me now take you through a more advanced example, because what we’ve shown so far just scratches the surface of what’s possible. With the ability to customize your own layout, styling, and user event handling, you can make some really cool stuff. For today’s webinar, we decided to build an interactive children’s audiobook using Peter Rabbit. For this example, we wanted to pull together the SDK with some other cool tools around the web, and we thought that an audiobook would be a really cool way to use timed text.

So demonstrating this– you can click on the timeline to flip open the book. And you can click pretty much anywhere in the interactive transcript below or in the book to sync the audio and the text. To build this out, we used the P3SDK with a LibriVox recording that’s playing on an HTML5 audio player. And then we used turn.js for the book visualization.

You can see in this example that the layout and CSS are both pretty intensive. But to get the book flipping at the appropriate time, we used custom event handlers thrown by the P3SDK interactive transcript to get the timing right.

So once the page is loaded, the P3SDK library is listening for a bunch of different things that are happening on the page. If a user clicks a word or searches in the transcript, we can build on one of those events and use the data to do really interesting things. In this case, each time the current word is highlighted in the transcript, the P3SDK lets us know which word got highlighted and the timing of the word in that audio track. So we can use that to figure out when we should be turning the pages and what page to turn to.
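
A hedged JavaScript sketch of that pattern, assuming jQuery and an already-initialized turn.js book. The event name, the detail fields, and the pageForTime helper are hypothetical placeholders; the real event names are documented on 3Play’s support site. The turn.js call to jump to a page is its standard API.

    // Hypothetical listener for the SDK's "current word" notification.
    // Event name and detail fields are placeholders; substitute the real ones
    // from the P3SDK custom-events documentation.
    document.addEventListener('p3sdk.current_word', function (event) {
        var word = event.detail.word;   // the word just highlighted
        var time = event.detail.time;   // its timing within the audio track
        // Map the timing to a page and ask turn.js to flip there.
        var page = pageForTime(time);   // hypothetical helper: seconds -> page number
        $('#book').turn('page', page);  // turn.js: jump to that page (assumes #book was set up with .turn({...}))
    });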

I’m happy to take any questions about how any of this works during Q&A. And more information about custom JavaScript events is available on our support site. All of these pages will stay up after the webinar. So feel free to check them out and look at the source code. Thanks for watching, and let us know what questions you have at this time.

LILY BOND: Thank you, Lanya and CJ– great examples. And as we prepare for Q&A, I want to remind everyone to please type your questions into the questions window of your control panel.

While we’re compiling those, I just wanted to mention that we have a few upcoming webinars– “A Legal Year in Review” for digital accessibility cases, “The Road to Sustainable Corporate Accessibility,” and “The Future of Captioning in Higher Education.” And there’s also a link there to see the basic example that CJ demonstrated. And all of those examples are also in the chat window.

So with that, I’m going to get into questions. And feel free to continue to ask those as we start to go through Q&A.

So the first question here is, do these plugins work within an LMS?

CJ JOHNSON: So I’m happy to take that one, Lily. They do work within LMS’s. Part of the design, I think I mentioned before, was really centered around using HTML as the layout part as opposed to pure JavaScript. So we kind of had LMS’s in mind as one of the major beneficiaries there.

That being said, I know there are a lot of LMS’s out there. We’ve been doing this for quite a long time now, and we still hear about new content management systems and learning management systems that are coming out. So the best thing is really just to reach out to your account manager at 3Play to see if we have some experience with the LMS you’re working with, or to our support team if you’re getting really close but maybe have one last little bug you’re trying to get through.

LILY BOND: Thanks, CJ. The next question here is, how do we use the P3SDK interactive transcript with Brightcove-hosted videos?

CJ JOHNSON: So that works very similarly to what you saw with how we set it up with the YouTube player. If you check out our support docs, which I think Lily sent a link out to, you can see that Brightcove videos are pretty similar, in that you can use one of their embed methods with maybe just a small tweak to the embed to enable the JavaScript API.

And this will work both with Brightcove Legacy, the previous system, and also with their new studio that came out about six months ago. So both versions will work.

LILY BOND: Thanks, CJ. Another question here– does the interactive transcript operate based on the timings in the SRT caption file?

CJ JOHNSON: This is a great question. The basis for it, or the default operating mode, is to use our JSON format, which includes timings for every single word. But it is possible to run the interactive transcript off of an SRT file as well.

So if you’ve done some work with us, but you have maybe a few videos where you have previous SRT files, you can link to those as content type SRT and load the interactive transcript that way. It won’t be word-to-word timing; it’ll be more phrase-to-phrase. But that will definitely work.
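
A hedged sketch of the SRT case: the attribute names for the remote source and the content type are assumptions based on how CJ describes it here and later in the Q&A, so check the support docs for the exact spelling.

    <!-- Load a previously created SRT caption file from your own server.
         Attribute names here are assumed; highlighting will be phrase-level
         rather than word-level. -->
    <div class="p3sdk-interactive-transcript"
         data-remote-source="https://example.com/captions/my-video.srt"
         data-content-type="srt">
        <div class="p3sdk-interactive-transcript-content"></div>
    </div>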

LILY BOND: Thanks, CJ. The next question here is, how much does this cost?

CJ JOHNSON: So for the interactive transcript, it’s part of doing business with 3Play. So you get access to this library for free. And the formats are made for you as part of the captioning process. So it just comes with it– not much else to it than that.


LILY BOND: Thank you. The next question here is, how is the interactive transcript text actually created?

CJ JOHNSON: So the word-to-word timings you see there are kind of a byproduct of how we do transcription and captioning here at 3Play. So basically, timing is a part of the process from beginning to end. It’s kind of hard to get into exactly how that works, but I’d be happy to talk offline if someone really wants to geek out about some CS topics, if you have a couple of hours.

But basically, our process retains all the timing information behind the scenes. And the captioning is actually sort of a watered-down version of those timings, where it turns into caption frames and things like that for the accessibility use cases. Like I said, obviously our format is pretty hard to replicate, so that’s why we wanted to make sure it was also operable with SRT.

LILY BOND: Another question here– was the Peter Rabbit text transcribed automatically, or was there a manual step employed to clean up the transcript?

CJ JOHNSON: So it’s a little of both. Our process is a three-step process. The first step is computer recognition, where our computer does its best to get as much done as it can. And then the second step is an editing process to make sure that it is accurate. So the same 3Play process we use for everyone out there is what we used for this example to create that product.

LILY BOND: Someone is asking, if they have an existing caption file, can that be used for an interactive transcript?

CJ JOHNSON: It can. So basically, if you have an SRT file, I think there’s a good case on the support docs that shows how you basically provide a link to that file. So if you put that on your server somewhere and host it, saying, my remote source is my SRT file and its content type is SRT, the transcript plugin will vacuum that in and create a block of text there.

I think I see another related question asking about paragraph breaks there. It is true, there are no paragraph breaks in an SRT file. But if you sort of check out one of our JSON formats, you could replicate that conceivably to put a couple of paragraph breaks in, or use one of the custom event handlers that Lanya mentioned to make those paragraphs.

I just remembered, actually, we did build an auto-paragrapher. So you can throw in an auto-paragraph configuration that will basically look for ends of sentences in SRT files so you get some paragraphs. They might be a little bit weird, so you might want to put that data in yourself. But that is available.

LILY BOND: Thanks, CJ. Another question here is– there are just a few people asking about compatibility with various different players. Do you mind going over the basics of what the interactive transcript is compatible with?

CJ JOHNSON: Yeah, so you can definitely get a full list on our support docs. But basically, any player out there, Flash or HTML5, that has a JavaScript API, we are able to be compatible with. The ones we’ve actually become compatible with have been the ones that our customers use. So that covers a pretty broad spectrum across YouTube, Vimeo, Wistia, Brightcove, Kaltura, Ooyala, Video.js, the basic HTML5 video tag, and several others out there.

So you can definitely look for the full list on support docs. And if you have one that you’re using that we haven’t listed, if there is compatibility there, we can always build them pretty easily.

LILY BOND: Great, thanks. Another question here is, will the current plugin still be supported?

CJ JOHNSON: Short answer– yes. So if you’re using an older version of the plugins, we’re still hosting them. We’re still serving them. There will be a point where we modify our account system to use the new infrastructure for templated versions of the SDK to plug in to your site. But the old ones will still remain intact.

So the rug’s not going to be pulled out from under you. They’re going to keep working. In terms of supporting customization or new features for the previous version, though, we’re not going to be doing that going forward.

LILY BOND: Thanks, CJ. Another question here is, do you have examples of the library search plugin?

CJ JOHNSON: Oh, yes. So on our support site, you can see a couple examples there. We didn’t cover that because we really just want to focus on the transcript plugin. But that’s actually going to be made quite a bit easier as well to use.

So if you’re a 3Play customer and you have a linked account– so you’ve linked your YouTube account or a Brightcove or a Kaltura account– that’s something that will automatically be available to you, where it basically can list and make the files searchable for your account. Contact your account manager for sort of how that works. But the support docs will cover how to make that happen with a few examples.

LILY BOND: Thanks. That looks like about all of the questions that are coming in. So thank you so much, CJ and Lanya– great examples and great presentation. And I will send out an email tomorrow with a link to view the recording along with the slide deck and all of the examples you saw today. And I hope everyone has a great rest of the day.