
The Complexities of Live Captioning with Derek Throldahl and Josh Summers

March 24, 2023



Welcome to 3Play Media’s Allied Podcast, a show on all things accessibility. This month, we’re excited to share an episode with Derek Throldahl, Senior Director of Realtime Services at 3Play Media, and Josh Summers, Senior Manager, Training, Development, and Technology at 3Play Media Canada, about the complexities of live captioning.

Derek Throldahl entered the media accessibility industry in 2007 as a live closed captioner for news, sports, education, and corporate events. He now heads the live captioning department at 3Play Media Minneapolis, where he oversees workflows, meets with existing and prospective clients, and pushes industry advancement to provide high-quality captioning and language accessibility for live events.

Josh Summers trained as a live and offline closed captioner in 2003, working in the broadcast television industry. He helped implement and develop 3Play Media Canada’s voice-captioning department and managed its team for several years before moving into a multi-pronged, customer-facing role focusing on live and offline workflows, production efficiencies, and best practices.


Want to get in touch? Email us at [email protected]. We’d love to hear from you.

Episode transcript

ELISA LEWIS: Welcome to Allied, the podcast for everything you need to know about web and video accessibility. I’m your host, Elisa Lewis, and I sit down with an accessibility expert each month to learn about their work. Every episode has a transcript published with it, which can be viewed by accessing the episode on the 3Play Media website.

If you like what you hear on Allied, please subscribe or leave a review. Allied is brought to you by 3Play Media, your video accessibility partner. Visit us at www.3playmedia.com to learn why thousands of customers trust us to make their video and media accessible.

[MUSIC PLAYING]

Today we’re joined by Josh Summers, Senior Manager, Training, Development, and Technology at 3Play Media Canada, and Derek Throldahl, Senior Director of Realtime Services at 3Play Media. Josh trained as a live and offline closed captioner in 2003, working in the broadcast television industry. He helped implement and develop 3Play Media Canada’s voice captioning department and managed its team for several years before moving into a multi-pronged customer-facing role with a focus on live and offline workflows, production efficiencies, and best practices.

Derek entered the media accessibility industry in 2007 as a live closed captioner for news, sports, education, and corporate events. He now heads the live captioning department at 3Play Media Minneapolis, where he oversees workflows, meets with existing and prospective clients, and pushes industry advancement to provide high-quality captioning and language accessibility for live events. The pair have a great deal of experience in the live captioning space, and we’re thrilled to have them both join us on Allied today to discuss the complexities of live captioning in the US and Canada.

Thank you so much, Josh and Derek, for joining me on Allied today. I’m really excited to have you both here. I want to get started just kind of getting to know you both, getting our audience a little bit familiar with who is on the episode today. So to kick us off, I’d love if you could both share something about yourselves that is not in your bio. Derek, do you want to kick us off?

DEREK THROLDAHL: Sure. Yeah, I’ve got a couple of things I could go to, but the one I’ll share here is something I don’t put in my bio, because at a technology company it doesn’t compare to the level of the coworkers we have here. But I’ve taught myself programming. And so with my kids at home, I build apps for them. We play simple little HTML games. It’s something I like to do in my spare time.

ELISA LEWIS: Awesome. Thank you. Josh, how about you?

JOSH SUMMERS: Yeah. I mean, call this interesting or not, but when I was a kid, my dad was in the US Navy. And so me and my sisters were Navy brats, and we traveled the world somewhat. I’ve lived in Japan, I’ve lived in the States, and obviously in the UK. That’s where I’m from. So yeah, I’ve been able to see a lot of the world by virtue of being a Navy brat.

ELISA LEWIS: Thank you so much for sharing that.

So moving on, continuing to get to know you, but shifting into a little bit more about our topic today and talking about live captioning, I’d love to learn how you both got into live captioning. Can you each summarize the path that led you to live captioning from the beginning until 3Play, and then your current roles at 3Play Media, including your respective locations?

JOSH SUMMERS: I can jump in there. Yeah, so I joined a captioning company in the UK in London in 2003, so getting on for 20 years now, where I trained as a live and an offline captioner. That was my intro into this industry.

Believe it or not, at that time live captioners used QWERTY keyboards to write live captions. This was around the point that voice captioning was taking off as a technology, as a production method. And while I was there, I volunteered to join a pilot re-speaking, or voice captioning, team. The company was just feeling its way into that space.

I joined another, larger captioning company a couple of years later, worked there for a few years, and eventually moved into a freelance position. I reached out to National Captioning Canada, now 3Play Media Canada, around 2015. And they invited me to move out to Canada in 2016 with a remit to help them build out a new voice captioning department.

So I joined the company as that department’s manager. I was in that seat for about five years, and then a couple of years ago I moved into my current seat of Senior Manager of Training and Development. So that’s my current role at 3Play. My primary responsibility is the training and development of our production staff, live and offline captioners.

Like most staff at 3Play Canada, I wear different hats. I also work in the operations side of live and offline captioning, technical support. I look at production workflows, looking for efficiencies in the ways that we use our software, and of course now collaborating with friends in the two other locations at 3Play, just generally helping to integrate the Canadian operation into the wider organization.

DEREK THROLDAHL: It’s always so fun hearing Josh’s story because our backgrounds parallel each other so well. It’s just funny coming from different backgrounds, coming together and meeting each other, and then telling our stories and finding how similar they are. As for me, like a lot of people at that time who probably had similar experiences, I didn’t know closed captioning was actually a profession until somebody in the industry told me about it.

And so for me, it was actually a college recruiter for a school in Des Moines, Iowa. And they told me about the program. They said that this is closed captioning, where people caption television and sports programs from home. And at the time I was thinking, that sounds amazing. Like, I’m going to be at home watching baseball games and everything anyway. So if I can get paid to do that, that sounds like the perfect job for me.

And so I went to the school. I initially started as a stenographer, using a steno machine very similar to what a lot of court reporters use. A lot of captioners do the same thing. And I loved writing and feeling that flow of hearing words and, through my fingers, creating that content.

But as I was going through the program, they introduced a voice captioning pilot program, similar to what Josh had done as well. And so my passion for just technology in general had me pursue that. And so I graduated with a closed captioning degree specializing in speech recognition. And then I spent the next eight years fulfilling that fantasy of working as a remote captioner with a company where I got exposed to all different really cool broadcasts.

I became a fan of the Tour de France. I watched the sport of MMA evolve. I got a daily dose of business news and Los Angeles local news, even though I was living in different parts of the country. I was able to take my job from Iowa to Florida, and then I moved from Florida back to Minnesota. And it was when I got back to Minnesota that I connected with a company called Captionmax.

And so in 2015 I joined. And my responsibility at that time was to build out a real-time captioner training program and grow the team of employees on site in our Minneapolis office. Over time my role evolved, and I began supporting the broader needs of our live captioning department as the director. And then in February of 2022, so a year ago now, 3Play Media acquired Captionmax. My role since then has remained relatively the same, but with some fresh new faces and exciting new technologies that we can work with, and I’m excited to see where we can take this next as the industry evolves.

ELISA LEWIS: Great. Thank you both. Derek, like you mentioned, your paths really are quite parallel. Both have so much experience in this space. So with that deep experience that you both bring to the table in live captioning, particularly for broadcast TV, I think live captioning can feel a bit abstract to many people. And it’s really quite sophisticated from a production side. Can you tell us about some of the unique factors that come into play in live captioning for broadcast TV?

JOSH SUMMERS: Yeah. I mean, the first thing I would say on that is that live captioning of all content types is challenging for similar reasons. When I think about broadcast captioning, I think about– I mean, particularly the work that we do at 3Play Canada, I think about live news broadcasts and sports broadcasts, as well as live performances in the arts and music and film events and things of that nature.

Fundamentally, broadcast television can be difficult to prepare for. That’s one of the biggest challenges for our teams. Shows in the live space are being compiled right up until broadcast time, so it’s not always clear what the content is going to be that the captioner has to caption. And obviously, the role of the captioner there is to accurately transcribe the dialogue, including spellings, for example, of people’s names and organization names and things like that.

And a lot of the time, the resources that we have at our disposal to prepare for live broadcasts are thin on the ground or missing altogether. And we’re using resources that anybody, any member of the public could pull from for themselves. So captioners having a really solid understanding of global and national and local current affairs, and even the history behind current affairs can be super important.

I think about components like the decoupling of audio and video, which is– not to get too deep into the technical side here– typically our captioners are listening to what I’d call live live audio. The video is on a delay, at least on the audience side. So the captioner is relying on audio only a lot of the time to identify different speakers.

And in broadcast programming, particularly news and sports, there can be multiple speakers that you’re listening to and trying to identify just by sound, often each with their own specific requirement for the way that the identifiers are formatted. There are commercial breaks– that’s obviously a feature of broadcast programming– which may, at first glance, look like an opportunity to rest. And that’s true; resting the voice and the brain periodically is important.

But typically, they’re research windows, so that the captioner can start to think about the programming that is about to air after the commercial break. And so they’re looking up the spellings of people’s names. You have things like house style to think about, which is the way that the captions are supposed to look, the way that the customer wants them to be formatted. And that can vary between customers.

You’ve got things like crosstalk, which is typically less prevalent than it is in virtual platform meetings. So if you think about breakfast news, for instance, where you have a number of speakers kind of talking over each other. And then you have caption placement as well, so being mindful of graphical information that’s being used on the screen and ensuring that your captions aren’t obscuring any important onscreen information. And that could be graphics, it could be people’s lips, it could be other kinds of significant action.
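To make the placement point concrete, here’s a toy sketch of the kind of decision involved. CEA-608 captions can sit on one of 15 rows, and one common tactic is to move captions to the top of the frame while a lower-third graphic is on air; the row numbers and the single on/off flag below are illustrative assumptions, not a production algorithm.

```python
# Toy model of live caption placement (row numbers and logic are
# illustrative, not a production algorithm). CEA-608 captions occupy
# rows 1-15; moving to the top avoids covering a lower-third graphic.
TOP_ROWS = (1, 2, 3)         # rows near the top of the frame
BOTTOM_ROWS = (13, 14, 15)   # default placement near the bottom

def choose_caption_rows(lower_third_on_air: bool) -> tuple:
    """Keep captions clear of graphics, speakers' lips, and key action."""
    return TOP_ROWS if lower_third_on_air else BOTTOM_ROWS

print(choose_caption_rows(lower_third_on_air=True))   # -> (1, 2, 3)
print(choose_caption_rows(lower_third_on_air=False))  # -> (13, 14, 15)
```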

DEREK THROLDAHL: Yeah, I think what Josh is really touching on here is that live captioning is hard. I mean, when you’re trying to caption at the speed of somebody talking, there’s so much that goes into it. And so the captioner has to be really attentive to details and formatting, make sure they’re still connected to those encoders, and be mindful of the audio. Sometimes audio changes. We have to change the source of our audio.

And so it’s just all about trying to maintain accuracy at speed and not getting caught up on an error. It’s easy for an early captioner to see a mistake and just gravitate toward that and almost get so hyperfocused that they lose the train of thought for all the stuff that keeps coming. And with live captioning, it doesn’t pause for you. It just keeps on going.

And so one of the things that I tend to put into our job postings is that we’re seeking somebody for whom simultaneous multitasking is a skill, which sounds very much like a job posting buzzword. But when I think about live captioning, you really have to be able to simultaneously multitask. That’s listening to the audio. That’s writing with your fingers, either stenography or keyboarding, speaking if you’re a voice captioner, note taking, reading your transcript, reading messages from the coordinators, and watching your captions to make sure that they’re still working. All of these things are happening simultaneously. And that just takes the right person, the right mindset, to be able to do that.

ELISA LEWIS: I think that’s a really interesting point that you touch on. It’s not just being able to have multiple tabs open on a computer and completing multiple tasks. It’s really about, in action in the moment, how you can execute and function so quickly, taking in multiple things and putting them back out there. But I also think you touch on a really interesting point about being able to just move on, just being able to keep going. It doesn’t stop.

I’m curious if you could talk a little bit more– I guess, backtracking for a second. I think a lot of people, when they’re viewing live captions, and maybe they don’t understand the process behind it– you know, obviously, lag and delays are something that people comment on quite frequently and are an important consideration of live captioning. But to help our audience better understand that, I’m wondering if you can touch on the different types of live captioning methods a bit.

I know you mentioned steno earlier on. That’s what you did earlier on in your career. But can you talk a little bit about steno and voice captioners, and the differences and benefits or advantages of each?

DEREK THROLDAHL: Yeah, so the different styles of captioning really come down to what the input method is. And so the stenographers are using a steno machine. They’re using shorthand, where they create briefs to write the word, usually by phonetics, or they’ll make entries that create an entire phrase with a single combination press of their machine. It’s very similar to playing an instrument, like a piano. I’ve never played a piano, but I felt like I was as I was doing some of the stenography early on in my career.

Another method is voice recognition, where they will re-speak what they’re hearing into a speech-to-text engine, and they will do things to control the accuracy. So it’s not the same as talking into your cell phone and having Alexa or Siri give you the text in a text message. They’re actually saying different words to control the homonyms. They will use what we call trigger words to trigger the different spellings of things. The trigger may not sound anything like what I said, but because I said it in that way, I know it’s going to be reliable and give me the text that I want to create for the captions.

And so again, there are ways to control it. It is different than just regular audio-to-text that you might see on software. But they have a way to manipulate it. And then there’s keyboarding, where you hand type it or do shortcuts with a typical QWERTY keyboard, like Josh mentioned from his experience early on. And there are ways to take an automated captioning solution and then edit as it’s creating the auto text.
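To make the trigger-word idea concrete, here’s a minimal sketch of a re-speaker’s dictionary lookup; all of the trigger phrases and outputs are hypothetical examples, not entries from any real captioning engine.

```python
# Minimal sketch of a re-speaker's trigger-word dictionary (all entries are
# hypothetical). Spoken phrases that mean nothing on their own map to
# reliable output text, letting the captioner force spellings and control
# homonyms instead of trusting the engine's best guess.
TRIGGER_DICTIONARY = {
    "threeplay brand": "3Play Media",   # forces the stylized spelling
    "throldahl surname": "Throldahl",   # a name the engine would never guess
    "their possessive": "their",        # homonym control: their/there/they're
    "there location": "there",
}

def expand(respoken_phrase: str) -> str:
    """Return the dictionary output for a trigger, or pass the phrase through."""
    return TRIGGER_DICTIONARY.get(respoken_phrase.lower(), respoken_phrase)

print(expand("threeplay brand"))  # -> 3Play Media
print(expand("good morning"))     # -> good morning (no trigger; passes through)
```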

So there’s a lot of different inputs. And I think they all have different values to them. So a voice captioner has access to their hands where they can research terms as it’s happening. Steno is much more deliberate in the key presses and what they may create. So it’s just a matter of the theory that creates the content at speed for them and what they’ve learned.

JOSH SUMMERS: And I definitely wouldn’t recommend live QWERTY keyboarding as a production method. I don’t imagine there are too many, if any, providers that are doing that anymore. Very difficult to write quickly enough to be able to keep pace with most speakers, which is one of the reasons why when I first started doing this on a QWERTY keyboard, we worked in tandem. So you had two people kind of trading sentences off of each other.

And then yeah, just on the steno voice point– and this may be simplifying things a little bit– with a steno machine and the theory that you’re using, you press a combination of keys and you can expect to get the same result each time. Whereas with voice recognition technology, there is always that kind of element of unexpected text generation, which, as Derek said, we can do a lot to mitigate. But the results can be a little bit more variable.

DEREK THROLDAHL: And I think, Elisa, to touch back on your question about latency, one of the things there is that live captioning is inherently going to have a delay, because we can’t write the words until we hear them. And if it’s truly live, we have no idea what’s going to be said. And so because of that, and then because of the transmission of that text, there’s going to be somewhere between, you could call it, four and seven seconds, a window there, which is elastic, because at times we will write a phrase and catch up quickly. Other times, we’re going to sit back to understand what the proper homonym is, or what direction the speaker is going.

And so there’s always a delay there. But when you start to experience delays that are 20 or 30 seconds long, those are always a technology-related issue. It could be that the audio the captioner is listening to is itself maybe 15 seconds delayed. The captioner doesn’t know that, because they don’t know what true zero means. They’re just going off of the audio they hear, which seems to them instant.

But if they’re hearing something that does have a buffer to it, that’s only going to add 15 seconds to the 5 that they’re writing. So there’s always a delay. But it should only be within maybe 5 seconds.
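Derek’s arithmetic can be written out directly. A minimal sketch, using the illustrative numbers from his example:

```python
# Back-of-the-envelope model of Derek's numbers (illustrative only).
# Captioning itself adds a few seconds, because words can't be written
# until they're heard. Any hidden buffer on the captioner's audio feed
# stacks on top, invisible to the captioner, who treats the feed as zero.
captioning_delay = 5     # seconds to hear, write, and transmit (typically ~4-7)
audio_feed_buffer = 15   # seconds of hidden delay on the captioner's feed

viewer_perceived_lag = audio_feed_buffer + captioning_delay
print(viewer_perceived_lag)  # -> 20, even though the captioner feels "live"
```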

ELISA LEWIS: What is typical– if there even is a typical– what does the communication between live captioners and the event producers look like on some of these really big, high-profile events? Do they typically have a lot of preparation and information up front? Is that not typical? Do you have any sense of what the best practices are for that?

DEREK THROLDAHL: Yeah. And so from an accessibility stance, the ideal situation would be that the event would be recorded, and all the accessibility services would be added in a way that allows them to edit and review for accuracy and make it perfect. But with all the excitement and urgency of live events, that’s not possible.

And so there is this collaboration with the stations and the broadcasters of giving us awareness so we can prepare as best as possible. So it could be a verbatim script. It could just be a rundown of what’s going to happen. Or it could just be a list of who’s going to be appearing, and we can make our own inferences from there of what they might be performing or what they might say, or those kind of things.

But it’s really about what the station and the broadcasters are willing to collaborate on and what they’re willing to expose. We’re in a world where there’s a lot of communication, and there are worries about leaks, about revealing things at the proper timing and the proper moment and not letting them out to the internet early. And so for those reasons, we often don’t know anything more than what somebody online would know just from looking at the websites and social media.

And so we’ll use as much as we can predict. But ideally what would happen is– and this does happen with a lot of these high-profile events– we’ll join for the rehearsal, so we can kind of listen in and get a test run for our captions to see what works, what doesn’t work, and start to prepare some of our own rough scripts. Or again, they’ll provide us content in advance.

JOSH SUMMERS: Yeah. And just to tack on to that, it’s illustrative of what can go wrong, what can happen if a broadcaster or a network cannot or is unwilling to provide the level of prep material that helps caption providers mitigate these sorts of issues.

DEREK THROLDAHL: And something else we’re looking at now is, when we do know that there’s going to be a combination of languages– so if we know there’s going to be a Spanish performance, for example, or if there’s a high likelihood that there could be some acceptance speeches in other languages– we will actually book multiple captioners so we can have an English captioner and a Spanish captioner, and they’ll share the connection. And when I say they share the connection, they’re both connected to where the data is being embedded into the stream. And then at the appropriate moments, each captioner will take over and actually write for their portion.

So during the performance, the Spanish captioner can take over and start to caption those lyrics in full. And then as the performance finishes, and they go back to the English portion of the event, the English captioner retakes control, and they then insert their captions.

It’s more difficult logistically. There’s a lot more planning involved. You have to have some foresight that you’re going to need other languages and what those languages would be. But as we evolve with the industry and languages become more globalized, this is something that we’re exploring and that we’re working with some broadcasters to solve.
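As a rough sketch of that shared-connection handoff, here is a simplified session where both captioners stay connected but only the active language is embedded; the class and method names are invented for illustration, not an actual encoder API.

```python
# Hedged sketch of the multi-language handoff Derek describes: two captioners
# stay connected to the same caption encoder, but only the "active" one's
# text makes it into the broadcast stream. Illustrative only.
class SharedEncoderSession:
    def __init__(self):
        self.active_language = "en"

    def hand_off(self, language: str):
        """Transfer write control, e.g. to 'es' for a Spanish performance."""
        self.active_language = language

    def write(self, language: str, text: str):
        # Both captioners can call write(); only the active captioner's
        # text is embedded into the stream.
        if language == self.active_language:
            print(f"[{language}] {text}")  # stand-in for embedding into the stream

session = SharedEncoderSession()
session.write("en", "Please welcome our next performer.")
session.hand_off("es")                         # Spanish captioner takes over
session.write("es", "Letra de la canción...")
session.hand_off("en")                         # English captioner retakes control
session.write("en", "What a performance!")
```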

ELISA LEWIS: Yeah, I think there’s a lot of great information that you both shared. Josh, I think you made a particularly interesting point that as you both have talked about, the live captioners are using this experience as a learning opportunity to think like, OK, how can we change our processes and be more prepared for something like this in the future? But it’s also important for the broadcasters, the event producers to think about from their end, is there a way to get more information to the captioners ahead of time? Are there different things that they can do or pieces of information that can be provided to help alleviate some of these challenges that are inevitable?

I’m curious, because I think it does give some really great insight into the process of live captioning– there’s another controversy that erupted late last year over an Elton John concert on Disney Plus where the captions displayed the words “Donald Trump” at a few points in the live stream. This was later reported to have been a technical glitch. But during the event, one tweet joked that someone would be fired for it. Can either of you touch on how something like this might happen and what kind of technical components would play into it?

JOSH SUMMERS: It’s difficult to say obviously without having the context behind the production method. But certainly, from the couple of reports or articles that I’ve read, it looked as though there was perhaps a hybrid AI caption and human captioner or editor system that was deployed for the event, which is unusual. It’s not a prevalent production method in our space. I can see it being a challenging production method.

But I mean, if this were the case, then there may have been AI captions generated to create a base transcript that a human editor was monitoring and fixing, correcting errors that the AI made in real time. So one thought is maybe that the editor simply failed to catch the words “Donald Trump” in time, assuming there was a limited buffer time for editing.

Or maybe there was a trigger, like a keyboard shortcut that the words “Donald Trump” had been assigned to that the editor was punching out, and the editor didn’t realize. So either AI or human, or both potentially.

DEREK THROLDAHL: Yeah, and we see that occasionally with live captioning, because you are trying to create ways to reuse different shortcut commands– trigger words if you’re a voice captioner, or keyboard strokes or steno strokes. They have a technique they call briefs in steno. And for voice writing, we call them triggers, where you write something or say something a specific way that by itself means nothing. But depending on your dictionary entry and what you have it correlated to, you can then turn it into something that’s utilized for that program.

That’s how we build out rosters. That’s how we build out special words. So they could have had DT as “Derek Throldahl,” “Donald Trump,” “dog trainer.” They could reuse that same combination a number of times. And I don’t know what they intended to write here, but it could have been, again, a DT phrase that they thought was going to come out as one thing, and instead, in this program, it came out as “Donald Trump.”
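A minimal sketch of that reuse problem, with entirely hypothetical dictionary entries: the same “DT” shortcut expands differently depending on which program dictionary is loaded.

```python
# Sketch of the shortcut-reuse risk (every entry here is hypothetical).
# The same brief or trigger, "DT", is assigned different expansions in
# different program dictionaries, so the wrong loaded dictionary turns an
# intended phrase into something else entirely.
PROGRAM_DICTIONARIES = {
    "politics_show": {"DT": "Donald Trump"},
    "pet_show":      {"DT": "dog trainer"},
    "3play_webinar": {"DT": "Derek Throldahl"},
}

def expand_brief(program: str, brief: str) -> str:
    """Expand a shortcut using whichever program dictionary is loaded."""
    return PROGRAM_DICTIONARIES[program].get(brief, brief)

# The captioner punches the identical "DT" stroke in two different contexts:
print(expand_brief("pet_show", "DT"))       # -> dog trainer (intended)
print(expand_brief("politics_show", "DT"))  # -> Donald Trump (same keystroke)
```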

We saw this. And it’s really interesting how sometimes it can be really offensive, or it can spark a lot of controversy, even when the word or phrase itself is not independently controversial but in the context it’s used, it really can be. And so one that always comes to mind for me is– it was around the controversies with Colin Kaepernick, where there was a lot of tension around whether or not athletes should be taking a knee during the National Anthem, or whether they should be standing, or some actually went to the locker room.

And that year during the MLB All-Star game, there was the National Anthem. It was broadcasted on TV. Everybody’s watching it. It’s a big event. And at the end of the song– those who know the National Anthem, of course, “o’er the land of the free and the home of the brave,” well, “free” was written as the word “knee.” And so it came out as “o’er the land of the knee.”

And contextually, in the time of the controversies around kneeling during the National Anthem, to have the word “knee” come out instead of “free” during the National Anthem blew up on social media, with people thinking that this was intentionally done. And the only person who knows if it was intentional is the captioner. But I give them the benefit of the doubt, because it’s such a simple error to make with a steno machine. It’s a slight misposition of the right or the left index finger to change the F to an R, and the entire word changes from “free” to “knee.” And with voice writing, those words sound very similar. So as Josh mentioned, you might say it accurately, but if the speech engine’s algorithm chooses a different likely word, suddenly it’s going to give you something different than what you intended.
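As a toy illustration of that last point, here’s a crude stand-in for a speech engine choosing between near-homophones by context; real engines use far more sophisticated language models, and every score and word list below is made up.

```python
# Toy illustration of the homophone risk Derek describes: a speech engine
# scores candidate words by context, and recent context loaded with one
# topic (here, coverage of kneeling protests) can tip a near-homophone
# the wrong way. Vocabulary and scoring are invented for illustration.
def pick_word(candidates, recent_context):
    topical = {
        "knee": {"kneel", "kneeling", "anthem", "protest"},
        "free": {"land", "home", "brave", "anthem"},
    }
    def score(word):
        # crude "language model": count topical words seen recently
        return sum(1 for w in recent_context if w in topical[word])
    return max(candidates, key=score)

context = ["players", "kneeling", "during", "the", "anthem", "protest"]
print(pick_word(["free", "knee"], context))  # -> "knee", not the intended "free"
```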

And so again, I give the benefit of the doubt to the captioner there, but that didn’t lessen the controversy. It went out and blew up, even though the word “knee” by itself would never be a word I would censor, because on its own it’s not controversial.

JOSH SUMMERS: Yeah. And again, it speaks to the complexity of the live captioning job, this multitasking that we’ve been talking about. Again, like Derek, I would be reluctant to point at anything in particular. I’d certainly be reluctant to criticize the captioner, because captioners are doing so many different things at the same time that it can be difficult to spot and rectify every mistake. And if you are leveraging ASR in a workflow, it’s even more difficult, depending on the production method anyway.

DEREK THROLDAHL: So they described it as a technical glitch. I think it probably wasn’t a technical glitch but more of an unintentional technical result, just based on how everything was layered together.

JOSH SUMMERS: Yeah.

ELISA LEWIS: That makes a lot of sense. And I think, again, people commenting on these things may not understand all that goes into it and just how easy it is to have something like that happen without really any intention behind it at all.

I think most of these instances where captions or caption issues go viral or gain a lot of social media attention actually have a positive effect on at least the visibility of accessibility services. I think some of these scenarios end up in more laughs than anger. But certainly, the trend of calling for the firing of a captioner any time that there is a transcription snafu is concerning. Are these controversies or the conversations happening around it changing the way that live captioners operate, and what are the takeaways here? How do we move through this going forward?

JOSH SUMMERS: Yeah. Sorry. Go ahead, Derek.

DEREK THROLDAHL: Yeah, I think it’s a reminder that people are paying attention. So captioners take a lot of pride in their work. I mean, they want to do a good job. And it’s rewarding to see people passionate about the quality of those captions. It’s what promotes good quality captions.

And the fact that there is an outcry for things to be accurate is wonderful because it’s going to provide the influence that broadcasters need to make sure that they’re also doing their due diligence to do things like provide prep to the captioners and to be collaborative, to make sure the captioners can write as accurately as possible. But it can also be disheartening when one mistake gets all the attention, even after you do such a good job. It’s usually that one moment where you messed up that ends up getting the spotlight.

JOSH SUMMERS: Yeah. And captioning bloopers have existed for as long as voice captioning has existed. And it’s those more significant snafus that are always at the back of the captioner’s mind, I think. Developments in the software that captioners use have progressed and allowed captioners to handle the limitations of speech recognition software in many more ways, which leads to a more accurate end product. And I think captioners are acutely aware of the limitations of speech recognition software. They know how to mitigate the vast majority of those errors– the errors that they can mitigate, at least; some are just out of their hands.

ELISA LEWIS: I think it’s really critical for viewers to understand the nuance involved with live captioning. And as we wrap up our conversation, I’d love to close with each of you touching on a couple of pieces of advice for our audience. So as kind of a prompt, what advice would you give broadcasters and networks who are live captioning their televised events?

JOSH SUMMERS: I think that most broadcasters already understand the challenges of live captioning and how to support providers. There is regulation in place that compels broadcasters to provide live captioning.

Broadcasters that are entirely new to the live captioning space, though, should vet providers carefully. They should learn the requirements. They should understand the closed captioning regulation that exists in their geography. They should ask, I think, to see examples of work. They can inquire around things like internal training and quality assurance standards and talk about the provider’s ability to support the customer.

And yeah, I mean, as we’ve been saying, prep materials– understanding that the more prep, show rundowns, and scripts they can provide, the better the quality of the captions will ultimately be.

DEREK THROLDAHL: Yeah. Josh made a really good point here: a lot of the broadcasters in the traditional space are familiar with best practices and what the captioning companies need to do a good job. But the industry is shifting, and we’re doing a lot of work that’s not necessarily broadcast anymore.

We’re doing things like corporate events and online sessions and webcasts and all different kinds of things. And so we do get a lot more people coming to us without that background of understanding what the best practices are. And the key things we tell them are: schedule with as much notice as possible. Allow us to get the resources lined up to prep as best we can and to make sure that the event is going to be successful.

Do a test of the connections. Make sure that everything is set up to go, because the events are going to go with or without the captioning, right? This is live broadcast, live events. The timing is so critical to live events. And so making sure that everything is working in advance is part of those best practices.

And then absolutely, whatever they can provide, either direct script material or even just awareness about what the event is going to be and who is going to be speaking. Even knowing the content broadly will help the captioner get in the right mindset to do a good job.

ELISA LEWIS: Thank you. I think that’s a really great way to wrap this up with some really actionable pieces of advice. When we’re dealing with technology, there’s always a possibility of things going wrong. But preparing as best as you can, practicing, making sure that you’re setting things up for as much success as possible is really great advice.

Thank you both so much for sharing your insight into how some of these issues happen and the complexities that come along with live captioning, as well as how you can really prepare and have the most successful live captioning possible. So thank you so much for joining us on Allied, and it’s been really great chatting with you both.

DEREK THROLDAHL: Thanks for having us.

JOSH SUMMERS: Yeah, thank you.

[MUSIC PLAYING]

ELISA LEWIS: Thanks for listening to Allied. If you enjoyed this episode and you’d like to help support the podcast, please share it with others, post about it on social media, or leave us a rating and review. To catch all the latest on accessibility, visit www.3playmedia.com/alliedpodcast. Thanks again, and I’ll see you next time.


Contact Us

 

Thank you for listening to Allied! For show information and updates, visit our website. To get in touch, email us at [email protected].

Follow us on social media! We can be found on Facebook and Twitter.