

Accessible Video Captioning for Blended Learning and Lecture Capture – Transcript

TOLE KHESIN: All right. We’ll get started. Thanks, everyone, for joining us. Thanks to the Sloan Consortium for organizing this event. This session is titled “Accessible Video Captioning for Blended Learning and Lecture Capture.”

My name is Tole Khesin with 3Play Media. We are based in Cambridge, Massachusetts, where for the last five years we’ve been providing captioning and transcription services to our customers, who are mostly in higher ed, but also in enterprise and government.

I’m also joined by Dusty Smith here from UW-Madison. That’s the University of Wisconsin-Madison. Dusty is the Digital Media Manager at the College of Engineering. He is responsible for classroom recording systems and media servers. His goal is basically to provide faculty and staff with one-stop shopping for teaching and learning services. He’s been involved with technological planning, design, and maintenance of instructional spaces for the last 20 years.

So for the agenda, we have about 50 minutes for this session. During the first part of the presentation, we’ll go through some of the captioning basics, some recent and upcoming legislation impacting captions, and how captions benefit online and blended learning. I’ll then hand things off to Dusty, who will discuss UW’s accessibility policy and how it pertains to video captions.

He’ll talk about how they prioritize captioning, how budget factors into the equation, and also the technologies and workflows that they’ve developed. We’ll leave the remaining time for Q&A and a general discussion. And since we have a pretty small group, if you have any questions or comments, feel free to interject at any time.

So I wanted to start off with some recent accessibility data from the 2010 World Health Organization report and the US Census report. There are a couple of interesting high-level findings. One is that there are over a billion people in the world who have some sort of disability, which is a very large number. In the US, it’s about 56 million people, of whom 48 million– that’s 1 in 5 adults above the age of 12– have some hearing impairment.

But the other interesting finding from these reports is that the number of people who identify themselves as disabled is really skyrocketing, increasing disproportionately with population growth. You might ask, why is that happening?

And there are a number of reasons, but the main ones really have to do with medical and technological advancements. So for example, premature babies are much more likely to survive these days, which is great. But as a result, they might have some disability.

Also, we’re coming out of a decade of wars, and a lot of veterans are returning with injuries. Due to technological advancements such as modern body armor, soldiers are 10 times more likely to survive an injury than in wars of the ’70s. Again, that’s great, but many of these soldiers come out of it with disabilities such as hearing impairments.

So really, what all of that points to is that accessibility is going to be even more prevalent in the years ahead. And obviously, captioning is a big part of that, which is the reason why we’re talking about it. So I wanted to spend just a few minutes getting on the same page in terms of what are captions and some core terminology.

So captions are text that has been time-synchronized with the media, so that a viewer can read the text while watching the video. Captions assume that the viewer can’t hear anything at all. So the objective is not only to convey the spoken text, but also the non-spoken information– sound effects, speaker identification, basically any kind of information that a viewer would otherwise obtain from hearing.

So captions originated in the early 1980s as a result of an FCC mandate for broadcast television. And since then, they’ve expanded into other areas. But that was how they got started originally.

So some basic terminology: captioning versus transcription. The difference here is that a transcript is just the spoken text, without any time information. You could take a transcript and print it out on a piece of paper.

Captions, on the other hand, have embedded time information, because the text needs to be synchronized with the video. Usually, what happens is you take a transcript and chunk it up into what we call caption frames. Each caption frame is displayed for a certain period of time, while the corresponding speech is happening in the video.
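To make the idea of caption frames concrete, here is a minimal Python sketch that chunks a transcript into frames in the common SRT format. The fixed words-per-frame and seconds-per-word values are illustrative assumptions; real captioning tools align each frame to the actual audio rather than assuming a constant pace.

```python
# A minimal sketch of chunking a transcript into SRT caption frames.
# The pacing values below are hypothetical; real tools align to the audio.

def to_srt_time(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def chunk_to_srt(words, words_per_frame=8, seconds_per_word=0.4):
    """Group words into caption frames and assign naive timings."""
    frames = []
    for i in range(0, len(words), words_per_frame):
        chunk = words[i:i + words_per_frame]
        start = i * seconds_per_word
        end = (i + len(chunk)) * seconds_per_word
        frames.append(
            f"{len(frames) + 1}\n"
            f"{to_srt_time(start)} --> {to_srt_time(end)}\n"
            f"{' '.join(chunk)}\n"
        )
    return "\n".join(frames)

transcript = "Captions are text that has been time-synchronized with the media"
print(chunk_to_srt(transcript.split()))
```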

From an accessibility point of view, captions are required any time you have a video with moving images. For audio-only content such as a podcast, a transcript is really sufficient, because synchronization is not important in that case. You could just read the transcript.

And I should also point out that when we say video with moving images, it doesn’t necessarily have to be a movie. It can be a PowerPoint presentation, for example, with an audio track. Any time the viewer needs to read the content at the same time as the video is playing, that’s when captions are required.

So captions versus subtitles. Although these terms are sometimes used interchangeably, especially in other countries, in the US at least, there’s a pretty significant difference. Captions assume that the viewer can’t hear the contents, so that’s why captions have the non-speech elements such as sound effects and speaker IDs. Subtitles, on the other hand, assume that the viewer can hear everything but can’t understand the language. So usually, subtitles are associated with translation to non-English.

So closed versus open captions. Open captions are burned into the video, and they’re on all the time. Closed captions usually live in a sidecar file that’s overlaid on the video and can be toggled on or off by the user. Especially with the proliferation of web video, everybody’s really moving towards closed captioning.

There are a number of reasons. The workflow is easier. It doesn’t obstruct critical content on the screen. It can be turned off. But there’s still quite a bit of open captioning out there. It’s sort of dying down, though.
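As an illustration of the sidecar idea, here is a short Python sketch that writes an HTML5 page pointing a video at a separate caption file. The file names are hypothetical; the HTML5 `<track>` element is the standard mechanism that lets a viewer toggle closed captions on or off, in contrast to open captions burned into the video frames themselves.

```python
# A sketch of how a closed-caption sidecar file attaches to web video.
# "lecture.mp4" and "lecture.vtt" are hypothetical file names.

html = """<video controls>
  <source src="lecture.mp4" type="video/mp4">
  <track kind="captions" src="lecture.vtt" srclang="en" label="English" default>
</video>
"""

# The caption track stays a separate file: swapping or removing it never
# requires re-encoding the video, which is part of why closed captioning
# has a simpler workflow than open captioning.
with open("player.html", "w") as f:
    f.write(html)
```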

Post-production versus real-time relates to when the captioning process is done. Post-production means that the event already happened, and the captioning is done after the fact, whereas real-time captioning involves engaging a live stenographer to type as the event is happening. So they’re two different processes, and each has its own advantages and disadvantages.

So how are captions used? So as I mentioned before, captions originated about 30 years ago specifically for broadcast television. But now, with the proliferation of online video pretty much everywhere, especially in education, the need for web captions has expanded greatly. And as a result, captions are being applied across many different types of devices and media, especially as people become more aware of the benefits and as accessibility laws become more stringent.

So I’ll talk a little bit about recent developments with accessibility laws and, in general, what people are looking towards. Sections 508 and 504 are both part of the Rehabilitation Act of 1973. Basically, the Rehabilitation Act requires equal access to any kind of content produced by a federal government agency or by any program subsidized by federal funding.

Section 508 is a very broad law that applies specifically to electronic and information technology and requires equal access. Section 504 basically has the same result, but it takes a bit of a different tack. It’s an anti-discrimination law: it basically says that if you have a disability, you’re entitled to the same access as someone who doesn’t.

But again, those two laws really apply to any federally funded program. A lot of public and state universities are subject to them as well because, for example, they accept Pell grants, which are a federal subsidy. I should also point out that many states have enacted similar legislation that mirrors Sections 508 and 504.

So the next one is the ADA, Americans with Disabilities Act. It was originally enacted in 1990. There was an amendment in 2008, which actually expanded the definition of what it means to be disabled. And so the ADA is actually a very broad law. When it was originally enacted, it really didn’t have anything to do with electronic communications. But since then, it has been interpreted that way through a lot of case law.

Title II, which is for public entities, and Title III, which is for commercial entities, those are the two sections of the ADA that are pertinent specifically to captioning. And in particular, Title III has had a lot of activity with case law recently.

So one case, in particular, was the NAD– National Association of the Deaf– v Netflix. So what happened in that case is that NAD sued Netflix on the grounds that their streaming service– their movies that we all know– are inadequately captioned. In fact, most of the movies at the time didn’t have captions at all. And the basis for the lawsuit was that– well, actually, I should back up.

With ADA Title III, in order to be subject to that law as a commercial entity, you have to be deemed a place of public accommodation. The NAD said that Netflix was essentially a place of public accommodation. And Netflix said, well, we really aren’t– we’re just a commercial entity providing a streaming service for people who want to subscribe to it.

And so the case went on for a while, and in the end, Netflix conceded. The judge, in fact, ruled that Netflix did qualify as a place of public accommodation. That has some pretty profound implications which, if you extrapolate out, really impact a lot of different areas. If Netflix is a place of public accommodation and has to provide accommodations for people who are deaf, then I think it’s definitely feasible that that precedent will extend to other commercial entities, and certainly to education and government as well. So that’s really interesting.

And then the last law is the 21st Century Communications and Video Accessibility Act. It’s a mouthful; the abbreviation for it is CVAA. It was enacted in November of 2010, and it applies specifically to video or audio content that aired at some point, or currently airs, on television and is also posted on a website somewhere.

So, for example, this applies to companies like Netflix and Hulu. It doesn’t really apply as much to educational institutions, because most of that video content is not on TV. It never airs on TV.

There have been several milestones put in place with the CVAA, and a couple of them have already gone live. So currently, the law is that any content that aired on TV and is now on a website unedited has to be 100% captioned. That also applies to live and near-live programming, like sports and news shows.

But the big one is coming up at the end of September this year. So with the previous phases, a lot of broadcasters were getting out of it because they were saying, yes, this content aired on TV, and now it’s on a website, but it was edited. So either it was cut into clips or commercials were inserted or taken out. And so previously, that was their out.

But come the end of September of this year, it makes no difference. Any content that aired on TV– no matter what you do with it, whether you cut it up or put in commercials– still has to have captions. So a lot of video publishers and broadcasters are scrambling right now, and have been for the last several months, to get 100% of their content captioned. It’s really a big thing.

So before I hand things off to Dusty, a little bit about the value propositions and benefits of captions. In higher ed, the biggest motivation for captioning is obviously accessibility laws, or multimedia departments are driven by an accessibility policy their institution has put in place. And that’s fantastic. I think that’s great.

But the thing that’s really interesting for me is that we have many, many customers that caption their content for a reason other than accessibility. Accessibility might be the second or third reason. The primary reason is one of these other things. And I just want to go through them quickly, just to show the breadth of value here.

So one thing that’s interesting that we keep hearing over and again is that the people that consume captions are actually not deaf at all. I mean, deaf people do use captions. But they are, by far, the minority.

The majority of people who actually use captions in higher ed are students who speak English as a second language. They use captions because it really helps them understand what the professor is saying: they don’t have to rely on the audio only, they can read the content as well, which makes things much easier, especially if the content is complicated and there’s a lot of terminology. Those challenges are really compounded if a professor has an accent. So captions are really useful for students who speak English as a second language, and they use captions more than anyone else.

The other benefit is that students use captions and transcripts in places where they can’t access the audio. So, for example, in a library, you can’t turn on your speakers, or maybe at a workplace, you can’t turn on the audio, so having captions enables you to watch the video, essentially.

Another value that comes out of captioning and transcribing of content is search. This is a really big driver because educational institutions and companies are amassing terabytes and terabytes of video content. And the challenge is that that content is not searchable, because unless you transcribe video, it can’t be found.

The only thing that gets indexed is usually just the title of the video, which is just insufficient for an hour-long lecture, for example, that has 10,000 words in it. You’ll never, ever be able to find what was spoken in that lecture. Whereas, if you transcribe that video content, all of a sudden it becomes accessible and searchable to everybody. And that’s a really big advantage that sometimes people overlook.

The other thing is that if that content is searchable, it also becomes reusable. So, for example, a professor might be looking for a video clip, and it would just be much easier to find that clip that has already been done if you can search by keyword.
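To sketch why transcripts make video searchable, here is a minimal Python example that scans timed caption frames for a keyword and returns the offsets where it was spoken. The frame data is invented for illustration; in practice it would come from a parsed caption file such as SRT or WebVTT.

```python
# A minimal sketch of keyword search over a timed transcript.
# The frames below are made up for illustration.

frames = [
    (0.0, "Today we will derive the Navier-Stokes equations"),
    (5.2, "starting from conservation of momentum"),
    (11.8, "The Navier-Stokes equations describe viscous flow"),
]

def search(frames, keyword):
    """Return (start_time, text) for every frame mentioning the keyword."""
    keyword = keyword.lower()
    return [(t, text) for t, text in frames if keyword in text.lower()]

for t, text in search(frames, "navier-stokes"):
    print(f"{t:6.1f}s  {text}")   # jump points into the video
```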

Also, we’re finding that faculty are reusing their transcripts to create alternate types of content. So, for example, you have a lecture that’s an hour long, and that’s about 10,000 words. And if you take an entire semester of lectures and you take all those transcripts, professors are starting to use those as a basis for writing a textbook or writing papers or journals. It’s a lot of content which can be repurposed into other formats. So those are really the main things that apply to education.

Navigation is another benefit: the time-synchronized text gives you the ability to jump to specific points in the video. So at this point, I’ll hand things off to Dusty, and he’ll talk more about what they’re doing with captioning and accessibility at UW.

DUSTY SMITH: All right. Thank you. So I’m from the University of Wisconsin in Madison. We’re a public land-grant institution, established in 1848. We have approximately 43,000 students at the moment, our faculty and staff number about 21,000, we have a $2.8 billion budget, and we have a beautiful lakefront campus. So you’re all invited over there any time. It’s just a drive to the west.

So I’m specifically from the College of Engineering, and I guess that’s where most of my knowledge comes from: handling the faculty and staff there. We have approximately 4,000 students, 1,500 graduate students, and 11,000 professional engineering education students, which is pretty much where our video history comes from. I’m from Media Services, and since the 1980s, we’ve been taping classes for the professional engineering students. We did it on VHS and moved online as that technology came along.

So as of now, we have at least 5,500 hours’ worth of video, and that’s just what I know about. I know there are other videos out there from other departments and from people who have put stuff up on YouTube and departmental servers. So that’s where that’s coming from.

The UW has a web accessibility policy. The policy specifies that every non-text element on the web needs to have an equivalent text alternative, synchronized with the presentation in the case of multimedia. That applies to podcasts, audio files, and– the big thing now– videos. There’s a link right there if you actually want to read the whole thing; it goes into a lot of depth about what’s required.

The policy does have an exemption process. As of last year, I think there were three departments in the university that had exemptions: athletics, health sciences, and us. Those are the big three on campus with a lot of videos. You just write in asking if you can get out of it, and if they find that you financially cannot caption all your videos, they pretty much grant the exemption. The caveat is that if anybody asks for a video to be captioned, you agree to caption that video.

Does the UW policy have teeth? That’s a good question. I don’t think anybody really enforces it; it’s more just a policy that’s there. It also applies to our websites– our websites have to be accessible.

I would say I’ve never heard of anybody that’s ever been contacted by the university saying that they need to come into compliance. I suppose if, at some point, somebody complained, a student complained, then they would roll down the chain, and we would get talked to.

UW pays attention to the laws that we saw earlier. It’s pretty much all the national and state laws. And I don’t know a lot about the rest of the UW system, but I assume they’re pretty much the same, because we’re all following the same guidelines.

One of the things that was established at UW-Madison: a group of us got together and put out a bid for captioning contracts. We evaluated a group of captioners– they submitted their products, and we tested them on editability, accuracy of content, and ease of access. And we came up with a couple of vendors: 3Play Media and Automatic Sync were the two we chose.

And we set up the contracts so that faculty and staff actually have someplace to go to get captioning done– they don’t have to manually caption everything. They can set up an account with these vendors, upload the video, and get it back in a reasonable time.

So I guess prioritizing what gets captioned. These are the three things that get discussed every time I hear it: who makes the decision, how budget plays into it, and who has the final say on what gets captioned.

In the engineering college, we prioritize it basically on the intended use and the permanence. We have a lot of classes where the professor will record something, and then a year later, he’ll re-record the same class. So we’ve decided that it really isn’t an effective use of resources to caption those every year, because they’ll change.

We try to caption all of our promotional videos and things that are going to have some permanence– one-time affairs, something that’s going to stick around that we’re not really going to be changing. But a lot of our classes aren’t captioned unless there’s a student or someone who requests it. And basically, anybody can make the request. On our website, we have a little notice at the bottom that says if you have needs for captioning, just let us know.

There are some problems, though, and I guess price is a big one. Also, knowledge of the laws: most people don’t realize that they actually have to caption the videos. I’m in charge of one of the big servers, so I know what goes up there.

But there are a lot of people just putting things on YouTube. There are departmental websites, there are course websites, and I have no idea what’s on there– a lot of people don’t tell you. And I would say probably 90% of the professors don’t even realize that some of that stuff officially needs to be captioned.

And if they do know that it has to be captioned, they really don’t have any idea how to get their content captioned– that’s what our department is for. We’re here to help them. And if they go the do-it-yourself route, where they try to do the transcriptions themselves, that costs a lot less, but it involves many other problems: somebody actually has to sit down, transcribe the content, and then time it out.

So getting back to who pays. This is probably the biggest factor in what gets captioned, at least at the College of Engineering. We try to caption what we can, but you’re probably looking at about $100 for an hour of video. So if you have 5,500 hours’ worth of video– roughly $550,000 at that rate– and it’s constantly changing, it’s probably never all going to be captioned.

For accessibility, there’s no central resource that handles it. Every department is in charge of its own budget and its own accessibility. So there are people who can help the faculty and staff, but there’s really no one pushing them to get everything captioned or to follow the laws.

We do have the McBurney Disability Resource Center, a campus center that is in charge of helping disabled students get the resources they need. Students will contact the McBurney Center and show them their class schedule, and then the center will contact the professors, and the professors will probably get in touch with us.

There’s also a contingency funding guarantee, which is put in place by the university. It basically says that if captioning is going to severely impact your budget, you can get reimbursed by the university for the cost. So you’re not going to decimate your departmental budget if you have to comply with the law. That’s what it’s there for.

Some tips and tricks. The DCMP Captioning Key is a pretty interesting website, especially if you’re doing things on your own and not sending out for transcriptions. It’s basically the proper way to caption: where to put the words, where to move things around, how to position them on the screen. If you’re doing it yourself, that’s a good resource.

I haven’t used this in a while, so I’m not sure if it’s still there. But you used to be able to upload a video to YouTube, and then turn on automatic captioning. And it’s not very good. It’s actually pretty funny if you listen to some of the things because it’s not close. But you can get a transcription back and download the text. And at least you have a place to start where you can go in and clean it up a little bit.
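The workflow Dusty describes went through YouTube’s own interface. As a minimal sketch of one way to retrieve that automatic track programmatically, a tool such as yt-dlp can fetch it; this assumes the yt-dlp package is installed, the video owner has left automatic captioning enabled, and the URL below is a placeholder.

```python
# A sketch of pulling YouTube's automatic captions as a starting draft.
# Assumes the yt-dlp package is installed; the URL is a placeholder.

import yt_dlp

options = {
    "writeautomaticsub": True,   # fetch the auto-generated track
    "subtitleslangs": ["en"],
    "skip_download": True,       # captions only, not the video itself
}

with yt_dlp.YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])

# The downloaded caption file is rough, as Dusty notes, but it is an
# editable starting point rather than a blank page.
```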

And one of the things Tole was talking about is search. Captions are extremely useful for search, and I think in the future that’s probably going to be the biggest use of captioning: you’ll be able to pinpoint exactly where you want to go in a video by searching for the actual words spoken in it. And as he mentioned, English as a second language is also a key consideration when you get to captioning.

A little bit about what we use in engineering. We have a classroom recording system set up that’s a mix between Mediasite recorders and Ncast recorders. So we have those out in some of our rooms, and they record our classes, and those get uploaded to a centralized Mediasite server.

We also do some desktop recording with professors that use the Mediasite and also a software product called Camtasia. And there’s a lot of stuff that’s put up to YouTube on individual channels, but the way that works is everybody has access to that on their own, and there’s no centralized system. We do have a college web page, but it has to get funneled through a different department. And then, we use the captioning providers, 3Play and AST.

And Mediasite is actually an interesting system, because they have automated the workflow for captioning. Per presentation or for a whole folder, you can set it up so that anything that goes into that folder– recorded and uploaded to the server– automatically gets sent out to one of the captioning providers, who produce a transcript and captions and send them back. The captions are then just placed into the file, and you really don’t have to do any work. That’s made things a lot easier. Then it’s just a matter of paying for the transcriptions, because everything’s automated, and it’s just up and back down in a couple of days.
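Mediasite’s folder automation is proprietary, but the general shape of the workflow is easy to sketch: watch a folder, send anything new to a captioning provider, and drop the returned caption file next to the video. Below is a generic Python sketch with a hypothetical submit_for_captioning() stub standing in for a vendor’s API; it is not Mediasite’s actual implementation.

```python
# A generic sketch of a watch-folder captioning workflow, not Mediasite's
# actual implementation. submit_for_captioning() is a hypothetical stub.

import time
from pathlib import Path

WATCH_DIR = Path("recordings")   # hypothetical folder new lectures land in
WATCH_DIR.mkdir(exist_ok=True)
seen = set()

def submit_for_captioning(video: Path) -> None:
    # A real system would upload the file via the vendor's API here.
    print(f"would upload {video} to the captioning provider")

while True:
    for video in WATCH_DIR.glob("*.mp4"):
        if video not in seen:
            seen.add(video)
            submit_for_captioning(video)
    time.sleep(60)   # poll once a minute; real systems use event hooks
```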

And then, these are some of the file types that Mediasite supports. There’s a whole slew of different file types for captioning, and they’re all pretty much the same– the syntax is just a little bit different on each.
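As an example of how close these formats are, the same caption frame in SRT and WebVTT differs mainly in a header line and the decimal separator in the timestamps. The conversion below is a simplified sketch that handles only that difference; real converters also deal with styling and positioning cues.

```python
# A simplified illustration of how close two common caption formats are.
# Real converters also handle styling cues, which this skips.

import re

srt = """1
00:00:01,000 --> 00:00:04,000
Thanks, everyone, for joining us.
"""

def srt_to_vtt(srt_text: str) -> str:
    """Convert SRT to WebVTT: add the header, swap comma for period in times."""
    body = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text)
    return "WEBVTT\n\n" + body

print(srt_to_vtt(srt))
```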

And I just wanted to show you a picture here of Mediasite. You can’t really tell, but you just go in there, click, and specify who your service provider is. You can even set up separate accounts, so different departments can use the same system while the funding goes to the right department. So it’s fairly easy to get things running.

So does anybody have any questions for either one of us? Yeah.

AUDIENCE: You were just talking about some automated workflows. What if we don’t use [? CVSIs ?] or have an automated workflow? Any recommendations?

DUSTY SMITH: The automated workflow there is based on the server software. I think there are a couple of other video servers that are starting to implement the same thing– you might know a bit more. So it’s all based on your server: you just dump video into the server, and then, based upon the requirements you set up, it sends the video out for captioning.

TOLE KHESIN: Yeah. Just to add to that, we’ve actually built out a number of workflows with a variety of other video players and platforms. For example, even if you’re just using YouTube– we consider that a video platform– we have the same kind of automated workflow in place. And if you’re using standalone or homegrown video players– JW Player and Flowplayer are pretty commonly used– then we have processes in place to simplify that workflow as well.

AUDIENCE: Do you share that, those workflows on your website? Would you be willing to share?

TOLE KHESIN: Yeah, absolutely. Yeah, so there’s a bunch of information on our website.

AUDIENCE: Thank you.

AUDIENCE: When you talk about the law, can you explain that a little more? Is it just that Madison has a standard saying you must provide captioning or transcription, or is there an actual legal requirement? And is it the same kind of thing where students have to ask for it to be done?

DUSTY SMITH: Well, I think the actual law is based on Sections 508 and 504– the federal laws that state that you need to caption and provide access. Madison has a policy in place that says you should follow the law. And I don’t know how well it’s enforced– I think enforcement’s the big thing. Officially, everybody is supposed to be doing it, even other universities. But are they? I don’t know.

AUDIENCE: I can think of all the teachers, the professors, who are flipping their classrooms now, and I’ve never seen anyone whose videos get captioned or transcribed anywhere. It’s just kind of a missing piece.

DUSTY SMITH: Right. And especially if it’s behind passwords, who’s going to even tell besides the student? So I think a lot of the enforcement is going to be on the students or the people who actually need it.

Although I think as automatic transcriptions become more common, it’s going to maybe be easier. Instead of having somebody actually transcribe it, just having software do it. I think it will start appearing because searches are huge. If you can get your transcriptions into a search engine, people are going to start wanting to do that just for that reason alone. But yeah, for right now, I think it’s just hit or miss.

TOLE KHESIN: Yeah, just to add to that, we work with a number of customers in education. And I think the long and short of it is that it’s just very much a gray area at this point. There are very specific laws, but there are also a lot of exemptions, like for having an economic hardship. So I think a lot of accessibility laws that apply to higher ed right now, they don’t really have a lot of teeth, as Dusty is pointing out. But I think that is actually changing pretty quickly.

AUDIENCE: I’m wondering if you could give an example of doing a search where there’s no closed captioning, and then one where there is. I don’t know if you have access to the web from this room. But I’d like to see whether, if you search for a YouTube video that isn’t closed captioned, the video comes up but nothing specific about its content does. And then I’d like to see what happens with a YouTube video when it is closed captioned– how is that search information different? You know what I mean?


TOLE KHESIN: Yeah, absolutely.

DUSTY SMITH: I don’t know if YouTube searches captions. I’m not exactly sure.

AUDIENCE: Or one from the College of Engineering– I don’t care which, but I’d like to see a comparison of the two, to get a better sense than an abstract description or definition.

DUSTY SMITH: Maybe afterwards we can give you a little demo.

AUDIENCE: That would be great.

DUSTY SMITH: I’ll just find something that would actually work that way.

TOLE KHESIN: Yeah. And also, if you get a chance to come to our booth, we have a bunch of search examples we can show you.

AUDIENCE: OK. Thank you.


MODERATOR: I have a question from the virtual attendees. How much money is in the contingency fund at Wisconsin-Madison? Do individual faculty often have to pay for their own videos? Is it just either the individual or the department? How does that–

DUSTY SMITH: I actually don’t have any idea how much money is in the contingency fund, or if there truly is a dedicated fund– maybe they just move money around. And faculty and staff do have to pay for their videos; I guess it depends on what they’re doing.

A lot of our videos are done for our professional development. And so we get some funding from them. But if somebody actually needs it captioned for accessibility, we just find it in the budget somewhere. We have extra money for that kind of thing, and we don’t use it a lot. But if it’s needed, we can get it done.

AUDIENCE: And so if an individual faculty member had a need, do you think it’s very frequently that they would go and pay for it out of their own pocket? Or would they most likely come to you?

DUSTY SMITH: I think at the UW-Madison, they’re not really going to have a need unless they’re contacted by the McBurney Center. And then the McBurney Center also has some grants available. So I think that that’s probably where their funding’s going to come from. It usually doesn’t come out of the department, at least not now.

Maybe in the future, some things like that would. But at the present time, we don’t really put that into the cost. Unless it’s a special one-time production for a department or something, we will bill them for that. But for classes, we usually don’t make the departments pay. At least at this point.

MODERATOR: We have another question from online. Are there any automated transcription apps or software that you find that work well?

TOLE KHESIN: No. We actually use speech recognition as part of our process– it’s the first step. A computer goes through the audio and gets it to a point where it’s about 70% accurate. But then we have professional transcriptionists who go in and take it from 70% to over 99%. Unfortunately, the efficacy of speech recognition with this type of content is often overstated.

Speech recognition works really well in cases where you can train the engine to a specific speaker, on utterances of that person’s voice. But with this kind of content, who the speaker is is unpredictable, and the environment changes. Speech recognition also works really well where you can restrict the vocabulary to a specific domain– for example, if you’re asking the user to speak a number or a department name, that works pretty well.

But in the case where the domain is completely unrestricted, and there’s a lot of esoteric vocabulary and terminology, such as the case with higher ed, speech recognition is really very difficult to use by itself. It creates a great starting place for our technology, and it brings the cost down. But in terms of that on its own, it’s insufficient.
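A rough way to quantify what “70% accurate” means is word error rate: the number of edits needed to turn the machine draft into the correct transcript, divided by the number of words actually spoken. Here is a minimal Python sketch using a standard word-level edit distance; the two sample strings are invented for illustration.

```python
# A minimal word error rate (WER) sketch: the edit distance between the
# recognized words and the reference transcript, divided by reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

reference = "captions are required any time you have a video"
hypothesis = "captions are required any time you half a video"
print(f"WER: {wer(reference, hypothesis):.0%}")  # one substitution in nine words
```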

DUSTY SMITH: I’d just like to point out one of the things we learned when we were doing our contracts for the captioning: if you don’t get into the high nineties for accuracy, it really becomes unintelligible. If it drops down to, say, 96%, you think, oh, that’s fine– but that’s still 400 wrong words in a 10,000-word lecture. Words start to appear that shouldn’t be there, and it throws whole different meanings into something that you don’t want.

TOLE KHESIN: Yeah. It’s interesting. Speech recognition is kind of– the thing about it is that when it’s wrong, it’s wrong spectacularly. It completely throws you off course.

Great. So I guess we’ll wrap things up. Thanks very much for taking the time.
