Introducing Video Clip Captioner: the Easiest Way to Comply with the New FCC Captioning Rules
With FCC deadlines on the horizon due to the CVAA, media producers and broadcasters are preparing to comply with new rules for closed captioning online video clips. Two major deadlines are coming up:
- January 1, 2016 – single-excerpt online video clips must have captions, if their source footage originally aired on television with captions
- January 1, 2017 – online video montages must have captions, if their source footage originally aired on television with captions
Introducing: Video Clip Captioner
To help media companies adapt to the FCC’s new captioning rules, 3Play Media is releasing a new tool to caption online video clips and montages: Video Clip Captioner.
Video Clip Captioner is a state-of-the-art solution to the problem of captioning video clips. It uses an advanced algorithm to locate where a clip, or ‘child’ video file, matches within its source, or ‘parent’ video file, and then copies the corresponding caption text and timecodes to the clip. The process is automated, which makes it far quicker and cheaper than alternatives like re-transcribing clips or manually searching the caption file for a match.
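As a rough illustration of the copy step (a simplified sketch, not 3Play's actual algorithm), once the clip's position inside the parent has been located, producing the child's captions amounts to selecting the parent cues that overlap the clip's window and shifting their timecodes so the clip starts at zero:

```python
# Simplified sketch of parent/child caption matching (hypothetical,
# not 3Play's actual algorithm): given the clip's start and end
# offsets within the parent, copy the overlapping caption cues and
# re-zero their timecodes.

def captions_for_clip(parent_captions, clip_start, clip_end):
    """parent_captions: list of (start_sec, end_sec, text) tuples."""
    clip_captions = []
    for start, end, text in parent_captions:
        # keep only cues that overlap the clip's window in the parent
        if end > clip_start and start < clip_end:
            clip_captions.append((max(start, clip_start) - clip_start,
                                  min(end, clip_end) - clip_start,
                                  text))
    return clip_captions

parent = [(0.0, 2.0, "Hello."),
          (2.5, 5.0, "Welcome back."),
          (6.0, 8.0, "Goodbye.")]
# A clip cut from seconds 2.0–7.0 of the parent:
print(captions_for_clip(parent, 2.0, 7.0))
```

Cues that straddle the clip boundary are trimmed to the boundary, which is one reason review matters for heavily edited clips.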
Video Clip Captioner was unveiled at the webinar Closed Captioning Online Video Clips for FCC Compliance, where 3Play engineer Andrew Schwartz explains how Video Clip Captioner works, outlines its benefits and limitations, and shares best practices for getting the most accurate captions for video clips.
Watch the full recording of the webinar here:
Webinar Q&A Highlights
How long does it take for Video Clip Captioner to caption a video clip?
ANDREW SCHWARTZ: The algorithm actually runs rather quickly– roughly one-tenth of the duration of the video file itself. That will obviously depend on queueing time and how many files we’re processing in our system at the moment.
But once the file is picked up by our system, it’s going to be about one-tenth of the duration of the file, so longer parent files take longer. But once you’ve done this for a single parent– let’s say you’ve ordered one clip from a show– any subsequent clip you order from that same parent will be very quick, because we recycle that same resource and don’t have to redo the computation. So it’s really quick.
How much does the Video Clip Captioner cost?
JOSH MILLER: The pricing is based on usage, so it’s all based on the duration of the clips that we’re creating captions for. The full description is on our Pricing page, but it basically comes down to about $1 a minute. There are a few other stipulations, but basically, that’s what it comes down to. So it’s pretty inexpensive.
If the algorithm is automatic, how do we know that the resulting captions are accurate?
ANDREW SCHWARTZ: There will be a report that can be delivered with each file, and you’ll have access to that as long as the file is in our system. The report is going to include an overall score for the file, as well as a couple of breakout sub-components of what went into that score. So that will all be available to you.
We’re working toward supporting workflows where we can notify you when files fall below a given score. That can happen if, for example, you accidentally attach the wrong parent file, or if the file was heavily edited and you didn’t realize it. We have good testing to indicate that we can detect those conditions and alert you to them. Beyond that, we’ll give you the score so you can see it, and we’ll have a review process for checking caption accuracy.
If a source file is improperly synced, can you manually make changes to the captioning file to edit the in and out times?
JOSH MILLER: We don’t provide tools for someone to make their own edits to closed captions, but we do have services for this. We have an alignment service, and depending on the type of content, it might work really well– it’s another automated, very inexpensive service. So that would be an option. Otherwise, fixing any sync issues would be up to you.
Who is required to caption their online video clips?
JOSH MILLER: We actually have a webinar on this that covers the requirements and exemptions under the CVAA. Most broadcasters need to caption all of the content that airs on television, and that requirement follows the content online. Basically, if it aired on television with captions, it needs to have captions online.
There are certain exemptions, though, for when content may not need captions when broadcast on television– for example, when the network is new (less than two years old) or isn’t generating enough revenue.
At this point, most stations are required to add captioning.
Are captions required for live clips?
JOSH MILLER: The way the FCC and CVAA have talked about it is that the captions need to be as good or better than what was shown live on television. That even has been modified slightly to basically say it needs to be properly in sync.
So the expectation is going to be that the live captions actually get synced up better for the on-demand version when shown online. If it’s a straight-to-web live event, that’s different and does not fall under the CVAA.
Starting in July of 2017, live and near-live video clips will also have to be captioned– that is, again, clips of content that was live when broadcast on television.
Would Video Clip Captioner work for music videos?
ANDREW SCHWARTZ: Yes. The algorithm will work on music videos as well– as long as the audio in the clip matches the audio in the parent, it should work just as well as with any other kind of content. So music videos should not be a problem.
How does Video Clip Captioner handle mixed language clips? Will it do the translation or caption in the mixed languages?
ANDREW SCHWARTZ: This service will not do the translation by itself. Basically, what you give us as input is what we’re going to give you back as output. So if the input captions are entirely in Spanish, we’ll give you a Spanish file as output. If there are mixed languages in the input captions, we’ll give back whatever sections appeared in the clip in the same combination of languages. If you need a translation, that’s a separate service, which you can order.
How clean are the closed captions between scenes in a montage clip?
ANDREW SCHWARTZ: If there are going to be errors, it’s generally going to happen between scenes, and that’s definitely something that the review process is going to try to catch. That’s also something that gets caught in our automated scoring algorithm.
So if there are captions that spill over the edge between scenes, they will get caught, and you should generally be notified about that. But in general, in the tests that we’ve seen, the results are all pretty good, and the captions tend to land on the right frames. Obviously, this depends on the captions being timed correctly in the input as well.
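The kind of boundary check described above can be sketched as follows (a hypothetical illustration, not 3Play's scoring algorithm): given the cut points between scenes in a montage, the cues most likely to need review are the ones that straddle a cut.

```python
# Hypothetical sketch of flagging caption cues that straddle a cut
# point between montage scenes, the spot where errors are most likely.

def cues_crossing_cuts(cues, cut_points):
    """cues: list of (start_sec, end_sec, text); cut_points: scene-change times."""
    flagged = []
    for start, end, text in cues:
        # a cue straddles a cut if any cut point falls strictly inside it
        if any(start < cut < end for cut in cut_points):
            flagged.append((start, end, text))
    return flagged

cues = [(0.0, 2.0, "Scene one line."),
        (2.5, 4.5, "Straddles the cut."),
        (5.0, 6.0, "Scene two line.")]
print(cues_crossing_cuts(cues, [3.0]))
```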
What types of formats can you take, and what types of formats can you produce?
JOSH MILLER: We can import most major caption formats: certainly SCC, CAP, WebVTT, SRT, DFXP, and STL. Learn more about supported formats.
In terms of output options, we actually support over 50 different caption output options. And one thing that’s worth noting is regardless of whether we create the captions ourselves or import the captions from elsewhere, we make all of those output options available, and they’re always available for download.
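To make the format point concrete, here is a toy converter for one of the simplest pairs, SRT to WebVTT, which differ mainly in the file header and the timestamp decimal separator (illustrative only; production converters like 3Play's handle far more format detail):

```python
# Minimal SRT -> WebVTT conversion sketch (illustrative only).
# WebVTT requires a "WEBVTT" header and uses "." instead of ","
# as the milliseconds separator in cue timings.

def srt_to_vtt(srt_text):
    lines = ["WEBVTT", ""]
    for line in srt_text.splitlines():
        if "-->" in line:
            # swap the comma decimal separator in timestamps for a period
            line = line.replace(",", ".")
        lines.append(line)
    return "\n".join(lines)

srt = "1\n00:00:01,000 --> 00:00:02,500\nHello."
print(srt_to_vtt(srt))
```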
Learn more about the pricing of Video Clip Captioner.