Inclusive Design in XR and Beyond [TRANSCRIPT]

JACLYN LEDUC: So hello, everyone. Thank you for joining us today for the presentation entitled Inclusive Design in XR and Beyond. I’m Jaclyn from 3Play Media and I’ll be moderating this webinar today. I’m joined today by Jamie and Lion, an autistic duo who focus on adventure and play in the digital product space.

They have 20 years’ experience building digital products for web, mobile, game platforms, and beyond. They center their approach around understanding and removing barriers, using their expertise and experience in digital accessibility, inclusive design, and neurodiversity. They support organizations large and small to build better products and services.

They’ve worked with leading brands, such as BBC, Apple, HSBC, Sky, and many more. And with that, I will hand it over to Jamie, who has a wonderful presentation prepared for all of you. Jamie, take it away.

JAMIE KNIGHT: Thank you very much. Hello, everybody. Howdy. So I’ve got a slide showing up. Thank you for the introduction there. Yeah, it’s good to be here. It’s a little bit late in the UK– it’s about 6 PM, so this is right at the end of the day for me. So as a little disclaimer, my ADHD meds have worn off and I don’t have an internal filter, so I can’t predict how this is going to go. This should be quite fun.

So today, I’m going to be talking about inclusive design within XR and beyond– once these slides start changing. So you’ve already had a great introduction to me, so I don’t really have much more to add. I’m Jamie, this is Lion. He’s fluffy, I’m not. His head is made of fluff. Mine’s kind of made of fluff, too, but a different type of fluff.

We do digital accessibility things. I’m a proper nerd. I’m just finishing up 10 years at the BBC. I’m actually going to just modify my screen slightly, because I can see myself whilst I’m talking, which turns out to be hugely distracting. So I’m just finishing up 10 years at the BBC– almost 11 years– where I worked in engineering roles and on the digital accessibility team, which I went on to lead for the last year.

And I’m now doing what I call my summer of adventure– so Jamie and Lion, Limited, that’s my new company. And I’m kind of structuring my life around adventure and play. So going out and doing events, riding my bike, doing some code stuff, making things– kind of having a hell of a summer. And then I’ll see what happens at the end of the summer– maybe find another job or continue what I’m doing if I’m having fun.

So that’s kind of where I am right now. It’s kind of an interesting transition actually, to leave the corporate world and go back out into the real world, so to speak. And on that note, I need to start this with a thank you. Rhys Fowler is one of the heads of design– possibly a creative director, can’t remember– at the BBC who runs all of the user research teams.

The BBC has given me permission to talk about the bits of this project that relate to my time at the BBC. There’s a lot of stuff in here that isn’t BBC. But a big thank you to Rhys, because he didn’t have to do that. And I really appreciate it. It certainly means that I’ve got some interesting things to talk about.

And before I get too far in– and I know, it’s lots and lots of slides before I actually get started– this is possibly one of the most important slides of this whole talk, which is my disclaimer. So the context of this is I’m sharing what I think and why. This isn’t the opinion of my employers.

This isn’t science. I’m sharing opinion and perspective. And the plural of anecdote isn’t data. Some of the stuff in here is research, but it’s not necessarily rigorous, really strong data research. It’s research based on exploratory stuff, which we’ll get to when we get to it.

And then finally, I’m just me. I’m speaking for myself, not all autistic people. Hell, I don’t even think I speak for the Lion. I think if I say something too rude, you’ll just see him nibbling me. So yeah, just me. This isn’t science. And I’m telling you what I think and why, and how I think about things and why I think about it that way.

So the actual topic today– Inclusive Design in XR and Beyond– which is a lovely title. And I was scratching my head, thinking about what I wanted to talk about. And there are so many different areas I could talk about. All of these things on this screen here– performance, standards, switch control, safety, applications, social, hardware– oh, there are so many things.

But when it really comes down to it, we have kind of one question to ask ourselves when it comes to inclusive design, especially in the context of XR. Which is, who do we want to disable with it? So the question we have to ask ourselves is, who do XR and VR disable?

Now that might seem a little bit weird. You might be looking at the word “disabled,” and say, hang on. Isn’t accessibility for disabled people? And I kind of go, well, there’s a few different lenses on that. And one of the lenses that I really like is known as the social model of disability. Because there are lots of different ways to define disability.

I think most of us are familiar with the medical model, where you have something. You have a broken leg or you have a disorder. But there’s another model, which is the social model. The idea of the social model is that it’s the mismatch between a person’s body and the environment that results in the experience of disability. So we refer to it as the impairment and the environment.

So for example, I have what I would call a “crossing the road” impairment. I get hit by cars kind of a little bit too often. So that’s my impairment, is crossing roads. And then the environment is the roads. But it’s also society. It’s also the attitudes and expectations of each other. And it’s the mismatch between the two which results in me becoming disabled– when the roads aren’t designed in a way that’s safe for me, or don’t have enough crossings.

Or another example, thinking of Lion– I’m British. So naturally, cups of tea are a thing I do. I don’t actually drink that much tea, but I’m British, so there’s an expectation of tea. Also, I’m former BBC, so I need to mention Doctor Who. So I’ve now mentioned Doctor Who and Britishness, so I think I’m covered for that for now.

If I got up in the morning to make myself a cup of tea, and in the night, Lion had got hungry and nibbled off my legs, then I’d turn up in the kitchen and I wouldn’t be able to reach the countertops, because he’s nibbled off the bottom half of my legs. The countertops would be too high for me. At that point, my impairment and my environment have a mismatch and I’m experiencing disability.

Or maybe, Lion’s gone to the hardware store– gone to B&Q– and he’s bought these really big, blue wood screws. And overnight, he’s moved all of the kitchen counters. So he’s moved them all three feet higher. The following morning, he hasn’t nibbled my legs off, but I wander into the kitchen, go to make my cup of tea and I still can’t reach the counter. Because again, I’ll be disabled by the environment.

So it’s the mismatch between the two that matters, not the absolute values of either. And this is kind of an important thing when we think about these things, when we think about the language. We have the impairment, which is the factual, measurable thing, and the environment, which as I said is kind of the built environment, and also attitudes and expectations.

And then you have the barriers in between. The barriers are those mismatches. Now for me, as an autistic person who has a motor impairment these days, as well, this was a really important thing. Autism doesn’t disable me. I’m disabled by the barriers and the environment. And it’s a lot easier to change the environment than it is to change me.

My needs aren’t special or somehow unreasonable. I just have different needs from the environment. And it’s essentially a design flaw within the environment that causes me to experience a lot of disability. Now that’s not quite the same as identifying as disabled– that’s a slightly different discussion. But in this situation, it’s not the autism that disables me. It’s the barriers that are put in place by others.

And we can then turn this lens to XR or VR, where XR is the creation of new environments. In fact, it’s the most complete environment replacement we’ve ever had. If you put on a virtual reality headset, there’s very little of the original environment left– smell and sound for some of them, but even they can be masked.

So with that creation of new environments comes the creation of new barriers. And in a nutshell, that’s what I want to talk about today. Barriers are in the middle of everything when we think about accessibility and inclusive design.

So I’m going to talk about three areas. I’m going to talk about the lenses– about what a barrier is, anyway. I’m going to talk about the big picture, which is where the barriers can be found. And I’m going to talk about the small picture– OK, let’s say you’re getting started with inclusive design within XR or VR today, or indeed inclusive design in lots of different places– how to begin, where to focus your energy and why. So that’s kind of the structure for today. You’re going to see this slide quite a lot as we go through things.

Let’s start with that thing about lenses. So one of the questions we may ask ourselves is where do barriers come from? Well, we’ve already established they’re within the environment. But how do they get into the environment? They don’t pop up out of nowhere.

Well, the idea that I have, or the way that I think about it, is it’s the assumptions that we make that essentially become the barriers within the environment. When we’re designing and making things as designers and developers, we are encoding our assumptions into the environment as part of the design.

If you don’t know about something like color contrast, when you make a thing, you add the color contrast barriers, because you don’t realize you’ve assumed that everybody can see what you can see– if that makes sense. And there’s this process where the assumptions become part of the barriers. So the assumptions lead to the barriers, then the barriers ultimately lead to exclusion.
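
[SKETCH: a minimal C++ illustration of the WCAG 2.x contrast-ratio check behind that color contrast example– not from the talk. The formula is the standard WCAG one; the colors and function names are made up for the example.]

    #include <algorithm>
    #include <cmath>
    #include <iostream>

    // WCAG 2.x: convert an 8-bit sRGB channel to linear light.
    double linearize(double c8) {
        double c = c8 / 255.0;
        return (c <= 0.03928) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
    }

    // Relative luminance of an sRGB color.
    double luminance(double r, double g, double b) {
        return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
    }

    int main() {
        double text = luminance(119, 119, 119);  // mid-grey text (#777777)
        double bg   = luminance(255, 255, 255);  // white background
        double hi = std::max(text, bg), lo = std::min(text, bg);
        double ratio = (hi + 0.05) / (lo + 0.05);
        // Prints ~4.48: just under the 4.5:1 WCAG AA minimum for body text.
        std::cout << "contrast ratio: " << ratio << "\n";
    }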

So if we want to shape who we are excluding, we need to change our assumptions. And we need to think about the bigger picture, all centered around this concept of barriers. Well, to put it another way, if we start in the middle with barriers, we can move forward to solutions. But we can also move backwards to knowledge.

And this is really key, because the more knowledge somebody has, the more aware of their assumptions they are, the fewer barriers they put in the environment in the first place, and the fewer clever solutions you need, which is really good. Because ultimately, what we’re aiming for is not a dependency on really clever solutions, but a dependency on good design that doesn’t contain the barriers in the first place.

So I’ve said that barriers arise from assumptions. But there’s a bit more to it than that. Where do the assumptions come from? How do we group assumptions? Well, principally, assumptions come from two areas. The first area where assumptions come from are the assumptions we make about the user’s body and ability.

If you’ve seen accessibility stuff before, you’re probably aware of this model. It’s often known as the User Empowerment Model. And generally speaking, we group users into four categories. There is also communication and stuff like that, but I’ll nestle them under cognitive for now.

We have motor impairment– that’s to do with your movement. So that might be strength, dexterity, precision, that sort of thing. I have a motor impairment. I have a spinal cord impairment, a spinal cord injury, which makes me quite wobbly.

Or as I sometimes put it, I kind of got a complicated relationship with gravity. Always pushing me over, bloody gravity. It’s very rude. So that’s my motor impairment. But it also affects my hands. And I get numb hands and have trouble typing, and things like that.

We then have cognitive impairments. The cognitive accessibility model is all about how you receive, process, and act on information– how you pull information out of the environment, understand it, comprehend it, think about it, and then go and do a thing.

We have vision impairment, which is to do with how you perceive things with your vision. Not part of my spinal cord injury, but part of what else is going on with me is that I’m having some very strange vision loss and some interesting vision experiences.

So I’m starting to experience some changes in my vision there. That’s to do with being able to see things. And that might be that you can’t see anything at all. But it might also be that you’re very sensitive to light, or that you need lots of brightness or lots of contrast.

And then the fourth group would be hearing. So the hearing group can be people who have just hearing loss, which is kind of starting to lose your hearing. We have hard of hearing, which is kind of a different identity within that. And then we have deaf with the lowercase d and Deaf with an uppercase D.

So you could say that a deaf person with a lowercase d is somebody who can’t hear. And the Deaf person with an uppercase D is somebody who is part of the Deaf community, which has its own culture, its own norms. It’s a fully established community.

As a quick word on language, you’ll hear me use identity-first language– things like Autistic, Deaf, Blind. That’s because they’re describing communities. For example, I see myself– I am autistic, but I don’t have autism. I have a Lion, but I don’t have autism. It’s just part of me. The same way that I don’t have shortness, I am just short.

I’m not that short. But shortish? Shortish-ness? Don’t know how I’d say that. So those are the four groups that we tend to think about. So when we’re thinking about the barriers that people experience, we make assumptions about users’ bodies. And those assumptions can trip us up.

The next one is assumptions about the user’s ability. So we tend to think that everybody lives on this bell curve of ability to do something, whether it is drive a car, drive a website, fly an aircraft. We tend to think that people sit on this bell curve. And that’s true to a point. But people kind of move up and down this bell curve, especially over the course of the day.

One of my favorite examples of this is in the UK, we have an app for doing directions on local transport within London. I’ve forgotten its name at the moment. And on a Friday night, they make all the buttons bigger because drunk people aren’t accurate. So they make everything a bit bigger and easier to use. So people’s ability changes over time.

So I have a model for this, which I refer to as ability, capability, and capacity. And this is something that I use within my autistic life to describe what’s happening for me. So to give it a bit of an explanation, my ability or my abilities is the list of everything I’m capable of doing, which for me, includes flying small aircraft. That’s one of my abilities. It is on the list.

My capability is the list of things that I can do today. I probably can’t fly a small aircraft today. I’m too tired and too sore. And I don’t have a small aircraft at hand. Like, I don’t have a runway in my garden. I don’t have a garden or a plane. But if I had a plane, a garden, and a runway, and it was another day, I might be capable of flying my aircraft. But that’s not today. So the capability is that shorter list of what I’m capable of doing today.

And then capacity is how many of those things can I actually do? So for example, I might have capacity for one big thing or two small things. Or maybe I have two mediums, one big, or three smalls, or two smalls and a medium. This bit about capacity, you might sometimes hear it referred to as “energy accounting,” “pacing” or “spoons.”

So for example, I measure my energy in spoons. I’m currently starting most days with three or four spoons, and coming to the end of my day, I’m probably down to about one and a half spoons here. So I’m going towards the end of my day. So that’s the area where we make assumptions about the user’s body and ability, and the things that can trip us up.
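
[SKETCH: the ability/capability/capacity model above, written out as a toy energy budget– an illustration only. The task names and spoon costs are invented.]

    #include <iostream>
    #include <string>
    #include <vector>

    // A task and what it costs in "spoons" (units of energy).
    struct Task { std::string name; double spoons; };

    int main() {
        double capacity = 3.0;                // spoons available today
        std::vector<Task> capability = {      // the shorter list: possible *today*
            {"answer email", 0.5}, {"physio", 1.0}, {"give a webinar", 2.0}};
        for (const Task& t : capability) {
            if (t.spoons <= capacity) {
                capacity -= t.spoons;         // spend the energy
                std::cout << "did: " << t.name << "\n";
            } else {
                std::cout << "deferred: " << t.name << "\n";  // over budget
            }
        }
        std::cout << "spoons left: " << capacity << "\n";
    }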

But we also make assumptions about the user’s environment and situation. So for example, the user may be on a train where they’re moving around, trying to use their phone one-handed. The user may be in an emergency situation where their house is full of smoke. Or they’re on the London Underground and the train is broken, or there’s been a terrorist attack or something, and they’re desperately trying to get information.

That’s an example of an emergency. I think it was Gareth who once said to me that during an emergency, everybody becomes quite autistic in how they process things. I always found that quite interesting, that when you take away the luxury of energy and time, a lot of people behave in a way that would be considered quite autistic. So there’s a commentary on social life and the power dynamics within medicine for you.

Then you’ve also got things like the bright sun. If you’re using your phone in bright sun and you don’t have sufficient contrast, you won’t be able to read anything– especially if you’ve got a low battery, so you’ve turned the phone’s brightness down. That’s an example of the environment, or assumptions we make about the environment. We tend to build all of our things assuming that users are in an ideal environment.

In an XR situation, this might be that the user is in a shared environment. The dog’s running around, the cat’s running around or falling asleep in the corner. Or they might not have a smooth floor, or they might not have a level floor. These might be assumptions you make about the environment.

And then we also make assumptions about things like locality and social things. We make the assumption that the user isn’t lying next to a sleeping partner, or that they’re not playing VR whilst there’s a baby sleeping in the other room. So barriers are rife in these two areas– assumptions we make about users’ bodies and ability, and barriers that arise from assumptions we make about the user’s environment and situation.

In a nutshell, that’s what I would say the difference is between accessibility and inclusive design. If it’s accessibility, we’re thinking about the assumptions we made about the user’s body and abilities and the barriers that come from that. So for example, if you had a user who had the tremors associated with Parkinson’s, then that’s a barrier. One of the barriers there is click regions and the size of things to press.

On the other hand, if you had a user using something one-handed on a train– exactly the same barrier, that things need to be bigger so you can press them, but it’s coming from an inclusive design perspective because it’s driven by the environment and the situation.

One of the reasons I draw this distinction is that we have legislation and rules around accessibility, but we don’t really have legislation and rules around inclusive design. So in some organizations, it’s important to know the difference because it depends what team will look at it and who makes choices, and stuff like that.

So that’s kind of the lenses. That’s how we look at the situation and how we think about the social model– the sorts of users that are going to be impaired, the sorts of users that are going to face barriers and the conditions they may have or the situations they may find themselves in. So now we understand a bit more about barriers and where they come from and the social model and stuff like that.

Let’s have a look at the big picture for XR– where can we find the barriers within XR, and what areas do we get to play with? Sometimes I might call this context and shape. Also, just as a note– I was really happy with this icon. It’s a big picture!

See? It’s a big picture! No? I was really happy when I came up with that. It’s a big picture, because it’s a picture that’s really big. Anyway, moving on swiftly, because I’m probably the only person who finds that funny.

Earlier, I was talking about the fact that when we’re thinking about barriers, we’ve got two different areas where we can change things. We can remove the barriers by having better knowledge, making better assumptions so the barriers aren’t there in the first place.

And then we also have where we can do solutions, which is where we can change what we’ve built so it has more capability and features, so that users with impairments aren’t as disabled by the barriers. That’s where we’re going to start for this bit.

Let’s have a chat about the solution space. So very quickly, generally speaking, there are three areas within XR and VR where we can modify things to try and introduce solutions. So I generally refer to them as the hardware, the environment, and the content.

So the solution space for hardware deals with things like your headsets and your controllers, but also things like the cables and the drivers. So the headset is around things like weight and style and resolution and cost. They’re all things that affect accessibility.

Controllers are to do with being able to hold things. When we did the BBC XR project, one of the things we very quickly tripped over was that a lot of users can’t hold the controllers. So we had to think about a better way to do that. Then there’s the presence or absence of cables– if somebody is using cables, then you need to think about stopping them turning round and round on the spot, so they don’t trip over the cables.

But if somebody is wireless, then they’re probably running on battery power. So there’s a limited amount of playtime there. So there’s different solutions available in all of these different pieces of hardware when it comes to accessibility barriers.

And then the fourth one is drivers. They’re not technically part of the hardware, they’re part of the software stack. But the hardware sort of dictates which drivers you get to play with. And different drivers have different features, so I’ve included it in hardware stuff. This is one of the places where we can look to implement our accessibility solutions.

The next area is that environment, the thing that we surround the user with. So we have things like perception. So that is to do with making sure that users can actually perceive things in the environment, that there’s sufficient contrast, that things happen slowly enough– stuff like that.

We then have navigation, which is moving around the environment and knowing where you’re going. Information gathering– so being able to pull information out of the environment. So for our project, we built a little VR model of a school library. And one of the questions we asked users is, what time does the library open and close? Because that’s really important information. If you can pull that out of the environment, that’s how you’re going to understand things.

And then finally, affordances. So an affordance is just a posh way of saying knowing what you can do by looking at something. This comes from a classic book called The Design of Everyday Things. And it has a teapot on the cover, hence there’s a teapot on my slides.

Which I think is like a design, UX designer, researcher, like– I don’t know. It’s almost like when a film does the same thing over and over again, it’s not a niche. It is a– one of those– trope! There you go, it’s a design trope. I’ve got a teapot on my slides, therefore I’m a designer. Sure, that’s how that works.

So those are solutions in the hardware layer and solutions in the environment layer. But we can also look at solutions in the content layer. These are the building blocks of the experience.

So we can look at solutions, and we can have barriers, that relate to narrative or the storytelling, or that relate to things like gameplay and the type of gameplay. Is this a simulation or a driving game? Is this an RPG? Is this a fighting game? What sort of game is this?

It’s kind of one of those things that’s quite important, because we can do lots of different things. We can have different patterns for different types of gameplay. We can also do a lot of solutions around things like interactions. So if there’s an interaction that goes wrong, having a button that resets it. Or doing very clever things with how different people can interact with different objects within the VR space.

And then the fourth area I’d say is instruction, which is making sure that users understand what the content needs them to do. What comes up next and why, and all that sort of stuff. So I would say that, at a really high level, that’s your solution space. In a minute, we’re going to dive into some of the specific solutions and some of the specific barriers.

But kind of bear this in mind as we go on through this talk, that we can think about things at the hardware level, where we can change our assumptions or implement new solutions. We can think about our assumptions about the environment, how the environment will be built– assuming that, for example, everybody can tell which way the door is.

Or in my case, we had the school environment. And we had these white hands that were like white robot hands. And we had white walls. So for some of our low vision users, they picked the controllers up and went, where are my hands? It’s like, they should be in front of you. They’re– oh yeah, they’re white on a white background. That won’t work.

Turn 180 degrees, there’s a dark corridor there. Look, there they are. So we made them change to yellow and stuff like that. And then finally, you can do solutions and you can try different things around content itself– so the narrative, the interaction, stuff like that. So that’s the solution side.

The other side where we can do cool things is on that knowledge side– understanding the barriers better so that we make better assumptions, so that we don’t have to depend on clever solutions. This is what the project that I did mostly focused on.

We wanted to really understand the barriers so that we could help designers and developers design them out from the beginning, not make the bad assumptions that lead to the barriers in the first place. It was called the BBC VR Barriers Research, although it ended up getting published under the XR Barriers Research, because it’s a little bit broader than just VR.

Here are some of our wonderful users using different adaptive things that we ended up putting together. Now funnily enough, we didn’t actually intend to build any solutions. But some of the barriers were so huge that we couldn’t learn more about the little barriers because some of the users quite literally couldn’t walk through the door. They couldn’t hold the controllers, they couldn’t take part at all.

So we had to solve some of the big barriers a little bit in order to be able to see the smaller barriers, in order to complete the research. So we built this environment. We took it to a whole bunch of places– schools, care centers, community centers, community groups– all sorts of things.

We collected a whole bunch of data– about 1,700 observations from about 107 participants. And then we took that data set and we took away all the duplication. We did something called a thematic analysis and pulled it all together into the 70 most common barriers.

And then we took those 70 most common barriers and we processed them again to actually understand the barrier groups– how they come together, where they’re similar, where they are different. They’re not duplicate barriers but they all stem from the same assumption, if that makes any sense.
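
[SKETCH: a much-simplified stand-in for the analysis step just described– tag each raw observation with the assumption it violated, then tally the tags so duplicates collapse into common barriers. The real thematic analysis was a manual research process; these observations are paraphrased examples.]

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // (raw observation, violated assumption) pairs.
        std::vector<std::pair<std::string, std::string>> observations = {
            {"dropped controller, couldn't reach the floor", "user can always hold the controller"},
            {"couldn't squeeze the grip trigger",            "user can always hold the controller"},
            {"white hands invisible against white walls",    "everyone can see low-contrast objects"},
        };
        // Tally per assumption: duplicate barriers collapse into one theme.
        std::map<std::string, int> themes;
        for (const auto& obs : observations) themes[obs.second]++;
        for (const auto& t : themes)
            std::cout << t.second << "x assumption: " << t.first << "\n";
    }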

We found about 20 of those. But sadly, we only got to do three of our five groups, because COVID kicked off just as we were starting to work with our blind users and fully deaf users. So we had lots of low vision users and users with hearing loss or hard of hearing users, but we didn’t quite finish those groups.

By the time COVID had blown over, I was leaving the BBC. So we decided to publish what we had. So that’s kind of where this barrier information comes from. If you want to have a look at it, the project basically splits into two. So we have the barriers browser, which is that list of the most common barriers. We published 16 of them in the end because they’re the ones we had good data for.

And then we’ve also got things like deep dives and presentations and other resources. But I’ll leave you to go find that. It’s up on the website and I’ll share the link multiple times over the rest of this talk. But if you just Google BBC XR Barriers Research, it comes up as well.

So that’s where the barriers are. And we have 16 of them. And here they are, here are the barriers, which is great. But that’s a wall of text, which isn’t particularly useful to you, so I’ll very, very quickly run through them for people who can’t see the slide. The motor barriers cover things like controller holding, controller orientation, input press and hold, multiple input, reach and balance.

Cognitive barriers covered things like comprehension, expectation, wayfinding, timing, focus and memory, and sensory differences. And then the low vision barriers were things like color contrast, determining direction, signage, and accessing text.

So that’s the high level of all the barriers that we found. There’s too much for us to deep dive into now, so we’ll come back to this in a tick and pick some specific ones. We also had lots of other research. So we had the deep dives into our switch control system. I built a performance testing robot, so that’s documented as well.

My line manager insisted that nobody could die during the research. So I had to prove that what I built was performant. And I didn’t want to sit there with a headset on going round in circles, so I built a robot to do it for me. So we have a VR testing robot. Because you know, if you’re working in a research and development department, you end up with robots. It just comes with the territory. There’s lots of videos and presentations and stuff like that.

So that’s the big picture. Those are the areas that we can explore solutions around hardware, environment, and content. And we’ve also looked at some of the areas where we can make assumptions. So assumptions around motor impairment, assumptions around cognitive impairment, and assumptions about low vision. Now I’d love to have the time to deep dive into those, but we can’t really do that.

I’d encourage you to go and look at the website, because it’s all documented there. So for the rest of this talk, I want to start with how to get started– like, of all those things we found, where would you start? Because the goal here is to give you, today, better assumptions and knowledge, so that when you go to design and build things, they contain fewer barriers and exclude fewer people.

So where’s the biggest win now? Where should we start if we want to exclude the least number of people in the future? So this is what I mean by getting started. And I quite like the fact that it’s a rocket taking off, as well. Although these days, because technology is amazing, that could also be a rocket landing. Because rockets now land using their engines, which is really sci-fi.

But that’s meant to be a rocket taking off. I’ve just realized, I need to put an up arrow on it or something, just to clarify. So I just showed you this slide with all the different things up. But from this bit onwards, for the getting started, let’s focus on this subset of them. These are the motor barriers, the cognitive barriers, and the low vision barriers that I would start with.

Now this grouping is a terrible way to explain this, so let’s re-tidy it and look at it a different way. So when we think about how to get started building better and more inclusive XR environments, where should we start?

Well, I think we should start by thinking about control, understanding, and navigation; or control, understand, and navigate. They’re the three areas that I encourage people to look at to begin with. So the first area is control, which relates to motor barriers and switch control.

So, kind of surprisingly to us, motor barriers– barriers around the ability to control a body with precision and accuracy, and around fatigue, that sort of thing– were really, really, really common. A lot of users were unable to hold the controller, or they weren’t able to hold the controller for long. Or experiences made the assumption that the user was stood up, when actually the user might have been sat down.

So things like that, they’re all motor barriers. I’d start by thinking about that from the perspective of your game or the thing you’re building, and writing down the list of assumptions that you’re making about the user when it comes to their motor ability.

How long do you think the user can stand up for? How far do you think the user can reach? Can the user hold the controller the right way, upright, for a very long period? And if so, how long do you think they can hold it up for? It can be quite useful to write down those assumptions. And then later on, when you’re coming to test things and coming to experiment with things, you can check against those assumptions as you go.
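
[SKETCH: one way to make that written-down list of assumptions checkable during playtesting– compare each assumed limit against what you actually observe. The assumptions and numbers here are invented for the example.]

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // A written-down design assumption and the limit it quietly imposes.
    struct Assumption {
        std::string description;
        double assumedLimit;
    };

    int main() {
        std::vector<Assumption> registry = {
            {"user can stand up for (seconds)",            600},
            {"user can reach upward to (metres)",          1.8},
            {"user can hold the controller for (seconds)", 900},
        };
        // Worst values observed across a playtest session, matched by index.
        double observed[] = {120, 1.1, 30};
        for (std::size_t i = 0; i < registry.size(); ++i)
            if (observed[i] < registry[i].assumedLimit)
                std::cout << "assumption broken: " << registry[i].description
                          << " (assumed " << registry[i].assumedLimit
                          << ", saw " << observed[i] << ")\n";
    }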

The other thing that kind of surprised me was about switch control. So I would have thought that building virtual realities that could be controlled with switches– well, for those who don’t know, a switch is an alternative input. For this user, she has a little– what we call a ping pong button.

When she presses that, it completes some action in the environment. So it replaces a button on the controller, something like that. I thought it would be really, really hard to make a VR environment work with switch controls. But it actually turned out to be really easy. Because what you end up doing is you remap those.

You do something like you do in this case, where we zip tied the hand-held controller to the user’s headset so they still have the ability to turn the controller. And then we moved all of the actual control schemes and stuff to the buttons that they held. Or in other cases, we had bike tubes or squeeze tubes and stuff like that.

It seems like it would be really hard to do. But there’s a little bit of a cheat, there’s a little bit of a shortcut. Which is the standard Xbox 360 controller: if you can support the standard Xbox 360 controller, then you automatically get support for something called the Xbox Adaptive Controller, which is the thing that all of those switches plug into.

So when we were doing our work, that’s how we did it. We had the Xbox Adaptive Controller appearing to the game engine as just an Xbox 360 controller. And then we plugged in all the different switches. We came up with different control schemes as well, so the users could move in different ways.

But we didn’t really have to do that. That was luxury on top. Because if you can do everything with an Xbox controller, then you can do everything with a switch. And that’s kind of a little bit like keyboard accessibility on the web. If you can get that right, then a huge number of people can get into your experience.

And as a minor dev note, it also makes testing really easy. Because I don’t necessarily want to put the headset on and mess with VR every time I want to test something. Just make it work with an Xbox controller. It also makes automated testing easy, because there’s lots of tools to fake input from Xbox controllers.

So if you can support an Xbox controller– you can support the XAC, the “ZAK,” as some people call it, the Xbox adaptive controller– you can support switch users. So that’s kind of like a ground floor. From a motor disability side, you can include a lot of people simply by supporting the Xbox 360 controller.
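
[SKETCH: the shape of that remapping idea– bind every logical action to gamepad inputs as well as motion-controller inputs, so anything that presents as a standard Xbox controller (including the Xbox Adaptive Controller and its switches) can drive the whole experience. The class and input names are illustrative, not the BBC project’s code.]

    #include <map>
    #include <string>
    #include <vector>

    // Physical inputs. The Xbox Adaptive Controller presents to the game as a
    // standard gamepad, so the gamepad bindings cover switch users too.
    enum class Input { MotionTrigger, GamepadA, GamepadB, StickForward };

    // One logical action, many physical bindings.
    class ActionMap {
        std::map<std::string, std::vector<Input>> bindings;
    public:
        void bind(const std::string& action, Input input) {
            bindings[action].push_back(input);
        }
        bool triggers(const std::string& action, Input pressed) const {
            auto it = bindings.find(action);
            if (it == bindings.end()) return false;
            for (Input i : it->second)
                if (i == pressed) return true;
            return false;
        }
    };

    int main() {
        ActionMap map;
        map.bind("grab", Input::MotionTrigger);  // motion-controller path
        map.bind("grab", Input::GamepadA);       // gamepad / XAC / switch path
        // An automated test can fake GamepadA instead of donning a headset.
        return map.triggers("grab", Input::GamepadA) ? 0 : 1;
    }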

We did lots of different things. So these are just some pages from the deep dive, showing different setups that we tried and different control schemes. I’ll leave you to go look in the deep dive for the details because I can’t go into them here. So that’s the first area to start with, which is control.

So once the user has control of the environment, brilliant. Well now, what do they do with it? Well, they need to understand. They need to know what it is they’re looking at and know what it means and know what’s possible. And this is why my second area for a get started guide would be understanding, or understand.

We need to work with the user, making sure that they comprehend and understand what’s going on, and that we meet their expectations. So these are two other barriers from that big barriers list. There’s a fuller write-up of these on the website, but I’ll give a short version of them here.

So comprehension is kind of subtle and intersectional. It’s that ability to look at a thing and understand what it is– the ability to understand things that are not explained. So we split this into three areas. This is a quote from the research: “This barrier occurs when an experience requires the user to understand unexplained aspects of the environment in order to determine what’s possible, determine what’s interactive, and determine the next action,” or what to do next.

So this is to do with comprehension. See what I mean about the barriers being assumptions made by the designers and developers? The barrier here isn’t that some users won’t understand. The barrier is that the experience requires the user to do a thing– it’s made an assumption about the user. So this is an example of comprehension.

So being able to put on the headset and understand which doors you can go through and which doors you can’t, what objects can you pick up, and also what should you do next? What’s the next thing that you’re going to do? Another part of this is around expectation.

So expectation relates to meeting the user’s expectation of the environment. So one of my favorite combined motor and cognitive barriers that we had a user experience was walking on cobbles. So in our environment, we had a cobbled garden. And one of the users we had who used a stick to help her walk wouldn’t walk into the garden because she knew she couldn’t walk on cobbles.

Now in VR, we don’t simulate the cobbles. But she didn’t understand that. And her expectation from years of being in the real world was that she couldn’t walk on cobbles, so she didn’t walk on cobbles. So it’s things like that– a combination of a little bit of comprehension about what the VR is and isn’t changing, and expectations from reality.

There are several different examples of this, and again, the research has some of the good ones. So users have an expectation to be safe. If you unexpectedly put a user in a situation that isn’t safe in the real world, they’ll tend to react quite badly. We had a 360 video that was amazing. It was about the Crossrail project. And we put people on it with a headset and said, listen, watch this video. It’s brilliant, it’s all about trains.

It starts from underneath a train. A lot of people were quite uncomfortable putting on a headset that made them feel like they’d just got hit by a train. Now of course, me being a nerd, I was going, ooh! Axles, engines, rails, brilliant! It hadn’t occurred to me that other people might have the expectation that they’re never going to see the underside of a train, that they don’t want to see the underside of a train. So, whoops!

And then finally on expectations, skirting board. Skirting board– the little bits of wood that we have around our houses. I hope you know the things I mean. Like, pretty sure there’s some skirting board down there, right? If you build an environment and you don’t include skirting board, people think it’s a box. If you include skirting board, people think it’s a room.

It was amazing. Very, very consistently, if I forgot to put the skirting board in or the skirting board didn’t look right, people wouldn’t buy the illusion that they were stood in a room. They’d say, OK, where do I go in the box? Versus if I put skirting board around the edge, people would go, where do I go in the room? And I just found that fascinating.

I have no idea why. I’d love to repeat it in America or in Europe and see if other people have the same expectation. But it was just really weird. So I always include it in these presentations because it’s great. So yeah, if you’re building a virtual reality environment, don’t forget skirting board.

So we’ve now talked about the getting started. So how do we get users into this, where do we start on removing these barriers? Well we’ve started by looking at controls. We’ve looked at motor barriers and switch control. Now we’ve thought about understanding, how does the user comprehend the environment, how do we meet the user’s expectations?

And then finally, how do they move around, how do they navigate? Or to put it a slightly different way, how do they go about achieving the goal of wayfinding or barriers that relate to wayfinding? So wayfinding kind of comes in two areas. You have navigating, which is kind of moving from place to place at the big scale.

So this might be, for example, teleporting across a room, or knowing how to get from one room to another room, or from one room to the ceiling– that sort of thing. That’s navigating. And then you’ve also got positioning. So positioning is about finding just the right spot.

So you need to stand on this exact spot in order to be able to reach that thing. Or you need to stand on this exact spot in order to be in the right place for this animation or this event. There were a lot of barriers that related to positioning and navigation. When we made the assumption that users could navigate– the assumption that users would know where they were and how to get to other places– we also made assumptions about how users could move around in the real world.

So one of our examples was you could pick up some books from a bookshelf. And it started off as a bug, but I actually quite liked it, so I ended up making it a feature– which is I forgot to turn on gravity for the books. So all of the books that were about space didn’t have any gravity applied. So you could pull them off the shelf and they’d float, which was really cool and kind of funny.

And then the other ones were philosophy books, which were extra heavy. So if you pulled a philosophy book off the shelf, it would actually fall to the ground quicker than gravity would normally pull it. They had special gravity all of their own, because they’re full of philosophy, and that’s a big, heavy topic. Look, I’m autistic. These are the things that happen in my head. They all ended up as Easter eggs in the environment.
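
[SKETCH: how an Easter egg like the book gravity might look in a toy physics step– a per-object gravity multiplier, where 0 floats the space books and a value above 1 makes the philosophy books extra heavy. The numbers are invented; a real engine would expose something equivalent, like a gravity scale.]

    #include <iostream>
    #include <vector>

    struct Book {
        float height;        // metres above the floor
        float velocity;      // m/s, negative is downward
        float gravityScale;  // 0 = floats, 1 = normal, >1 = "philosophy"
    };

    void step(std::vector<Book>& books, float dt) {
        const float g = -9.81f;
        for (Book& b : books) {
            b.velocity += g * b.gravityScale * dt;  // scaled gravity
            b.height   += b.velocity * dt;
            if (b.height < 0.0f) { b.height = 0.0f; b.velocity = 0.0f; }  // floor
        }
    }

    int main() {
        std::vector<Book> shelf = {
            {1.5f, 0.0f, 0.0f},   // space book: floats where released
            {1.5f, 0.0f, 1.0f},   // ordinary book
            {1.5f, 0.0f, 2.5f},   // philosophy book: falls unnaturally fast
        };
        for (int i = 0; i < 60; ++i) step(shelf, 1.0f / 60.0f);  // one second
        for (const Book& b : shelf)
            std::cout << b.height << "\n";  // 1.5, 0, 0: only the space book floats
    }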

But in order to interact with these books, you had to be within arm’s distance. And we had a user, a wheelchair user, who pulled up in front of the bookcase and couldn’t reach the books. And because of their expectation that you can’t drive through a bookcase, they didn’t realize they could drive forward into the bookcase to reach the books.

So that would be an example of positioning, where we positioned something and we made some assumptions about what people could reach and what people couldn’t reach, or what was possible for a location, and if people could even get themselves into that location or not. We also had a user who was using a wheelchair that couldn’t turn on the spot.

So we had a thing where there was a little challenge which was to run around the environment and find all of the mugs. We hid a bunch of mugs. And he moved into a space in one direction and basically couldn’t turn around to come back out of it without going through the environment because of the way his wheelchair moved.

It was impossible for him to get into the spot where we’d hidden the mug with the right alignment and direction, because of the physics of the room he was in– he’d crash into one of the real-world walls first. So that was another example of positioning: making sure that we’re not making assumptions about which direction things are coming from.

So that’s kind of the getting started, and in all, we’ve kind of gone through these three areas. We’ve talked about lenses, about what a barrier is, the groups that we may impact. So we find that barriers arise from assumptions, the assumptions we make about a user and their body. We make assumptions also about the user’s environment and situation.

So the assumptions we make about a user’s body and their ability, we might call accessibility. And the assumptions we make about the user’s environment and situation and culture, they’re what we might call inclusive design. We then talked about the big picture, which is where can the barriers be found within a VR environment or an XR environment.

What are the common areas? How can we look at things like adapting hardware, the environment, and content when we’re looking for solutions? And how can we adapt how we think about users’ experiences when it comes to the assumptions we make about the user’s motor ability, the assumptions we make about the user’s cognitive ability, and the assumptions we make about the user’s vision?

And then finally, we worked through the getting started. We thought through how we can ensure that users have control. Then how we can be sure that users can understand. And then we worked through how we can ensure that users can navigate and move around. And I think they’re the three places where I would start.

So to go all the way back to the start, XR is this creation of new environments and with the creation of new environments, we have the creation of new barriers. Now interestingly enough, I had a spinal cord injury called Cauda Equina Syndrome just as I finished the first 2/3 of this project. Everything I learnt about the motor barriers was directly applicable to my real world home after I had my injury.

It was a bit of a surprise. I sometimes joke, if I had like, 10 years to prepare for my injury, I probably would have just done this research project. It taught me so much and I learnt so much from so many different people that ended up being really useful in the real world.

I don’t know if you’ve ever heard the analogy, but you’ve got virtual reality at one end, where none of the world is real. Then you’ve got augmented reality, where it’s mostly reality with a few digital things, or mixed reality, where it’s mostly virtual with a few real things. And then reality reality, where everything is real.

Now in my mind, that’s the worst mode. The graphics are awful and nothing cool happens in reality reality– actually, lots of cool things happen in reality reality. But yeah, reality reality, that’s one end, and virtual reality is the other, and there’s fun stuff in the middle.

If we can get this right and we can answer that question that I posed right at the start about who we’re disabling and why, we have an opportunity. Because we’re changing so much of the environment, because we’re changing so many things in so many profound ways, we get to reset things. We literally get to reset the world. When we get this right, we’re going to turn potential into reality for literally billions of people.

And it’s one of the reasons why I’m excited about XR and VR and accessibility. This is one of the areas that has such profound consequences on people’s lives, and it would be really sad if we just replicated the same barriers that exist in the real world. In my own life, accessibility is the reason I have a job. It’s the reason I have a home. It’s the reason I have the life I have.

In my case, it was software to help me speak. Or people who could help me cross roads, or assistive technology to help me read my computer screen. I’m dyslexic as well, so things that help me learn to read and write and communicate, and things that fix my broken grammar– stuff like that. It’s technology that enables me to come to the world with all of my impairments, but to not become disabled by them.

So that’s it for today. I hope you have enjoyed that. I think we’ve got about 10 to 15 minutes for questions. Thumbs up? All good?

JACLYN LEDUC: Hi, Jamie. Yes, all good. Thank you so much. Hold on, I’m just going to see if I can spotlight myself.

JAMIE KNIGHT: It’s really, really weird doing presentations online. Because I’m just monologuing to my laptop. And I hope there are other people on the other end. But if they’re not, I wouldn’t know anything about it.

JACLYN LEDUC: Right. [CHUCKLES] Well, no, your presentation was great. It was very interesting. I learned a lot and I imagine our audience did, as well. All right, here we go. So we do have a couple of questions. And I want to remind the audience that as we are doing the Q&A, you can go ahead and continue to type your questions into the Q&A window or the chat window, and we’ll do our best to get to those.

So we have a couple of questions here, Jamie, and I’ll just kick it off with this one. So going back to the barriers research that you were talking about, how do you identify common barriers, and basically, what’s the general process of doing that barriers research? How do you start it? How do you round it out and finish it?

JAMIE KNIGHT: Cool. So our process– we built an environment which we wanted to go and test. In that case, I just built it in the Unreal Engine using one of their templates. So I built a model of the school library. We then went and worked with some users, and observing was our goal.

So we referred to it as flip-flop facilitation. So in the same way that a set of flip-flops or sandals are just enough shoe, we had facilitation that was just enough support. So funnily enough, it was one of the people who does my day-to-day autism support who ended up being amazing at doing virtual reality support, because he’s used to helping people who are not necessarily that aware of the environment, like I’m not.

So that’s a lot like being in VR, so that kind of translates quite nicely. We asked each user to come into the environment and complete a number of things. We asked them to navigate, interact, and pull out information. And then we observed what happened. So we built a little system where you could write notes, and all the notes would get pulled together.

And every time something went wrong, we noted what the barrier was. We kept going back to groups– so different people, but in the same sort of places, so going to users with cognitive impairments, older people in care homes, things like that. We kept going until we stopped seeing new barriers emerge.

That wasn’t because we thought we’d got all of the barriers, but we’d got all of the barriers that this environment, as it was, was going to turn up for us. So we kind of evolved it over time. And then we just started visiting different groups. So I think in total, we did something like 12 or 15 sessions. We were traveling all around the UK working with these different audiences.

So that’s how we did it. And at the end of that, we did that analysis process. So we pulled all the barriers together, did a thematic analysis, and then worked through identifying the common barriers and the way they overlap. So a good example of that would be– the raw barrier might be, “user dropped controller and couldn’t reach floor to pick it up.”

The barrier is that the experience expects the user to be able to consistently hold the controller. And the experience expects the user to be able to reach the floor. So if you design out the assumption that the user is always going to be able to hold a controller, and you design out the assumption that the user can reach the floor, then you also design out that barrier. And it doesn’t matter anymore.

That’s different from say, having a solution where they press a button and everything jumps back on the shelf, which is always going to take developer effort or cleverness to make it work. So we try and identify barriers in a way that we can then design them out by changing our assumptions and learning more about users.

Hopefully that answers the question about the general process we took. If you go on the website, there’s links to talks that do a massive deep dive into our methodology and data set and all sorts. So there’s about four hours of content on that, I think.

JACLYN LEDUC: Wonderful. And on that same train, in that process, what were some of maybe the unexpected barrier groups you identified? Or maybe a better way to ask that is, how many of the identified barriers were expected versus unexpected?

JAMIE KNIGHT: Oh, 50/50? The things around comprehension and expectation– wouldn’t have expected those. A great one being, I use these sippy cups because I can’t feel my hands very well. When I put one down in the real world, it makes a noise.

If your objects in VR don’t make a noise when you put them down, it confuses the hell out of a lot of users, because you’ve broken an expectation they have from the real world. So I wouldn’t have predicted that. Whereas, if you’re in VR and you’ve got a headset and you don’t have feet tracking– we had users who went to try and kick things.

And of course, their feet aren’t being tracked. And then they got really confused when they looked down and they didn’t have any feet. That was one I’d have predicted. But the fact that objects don’t make noises– I wouldn’t have predicted that. And I went back and I added the noises back in. Because once someone pointed it out to me, it started weirding me out as well, that I’d put the mugs down and they wouldn’t make a noise.

And for users who had a hearing impairment, they didn’t know whether the noise was there and they couldn’t hear it or whether there wasn’t a noise there at all. So that’s another example. For some users with cognitive impairment or a motor impairment, they were waiting for the noise to know it was down. So if you didn’t make a noise, they just pushed the object right through the floor or right through the table.
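
[SKETCH: the kind of placement cue being described– when a released object comes to rest, fire a confirmation on more than one channel (sound plus a haptic or visual cue), so users who can’t hear it still get the feedback. The threshold and structure are invented.]

    #include <cmath>
    #include <iostream>

    struct PlacedObject {
        float verticalVelocity = 0.0f;  // m/s
        bool  cueFired = false;
    };

    // Call once per physics tick after the object has been released.
    void confirmPlacement(PlacedObject& o) {
        bool atRest = std::fabs(o.verticalVelocity) < 0.01f;
        if (atRest && !o.cueFired) {
            // Stand-ins for the engine's audio/haptics/VFX calls:
            std::cout << "thunk sound + controller pulse + small dust puff\n";
            o.cueFired = true;          // don't re-trigger while it sits still
        } else if (!atRest) {
            o.cueFired = false;         // re-arm if it gets picked up again
        }
    }

    int main() {
        PlacedObject mug{-1.2f};        // still falling: no cue yet
        confirmPlacement(mug);
        mug.verticalVelocity = 0.0f;    // it lands
        confirmPlacement(mug);          // fires the multi-channel cue once
    }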

So that was a big surprise. What else was a surprise? There’s a few of them. The one about the cobbles, that really surprised me. That “I can’t walk on cobbles,” so she didn’t want to walk in the garden with the cobbles. What else is there that kind of surprised me? Yeah, quite a lot of it surprised us.

But we went into it, we didn’t speculate. And that’s kind of really important. We only documented things we saw. That was the rule. If we didn’t see it, it didn’t go in our data set. Because it’s really easy to speculate about what the barriers might be. But then you go and work with actual users and you find that a lot of those barriers are actually fine.

It’s completely different. So we know it’s not complete. We know that there’s always going to be more situations. We didn’t test multiplayer games. This is really the basics. But it’s enough to get started and enough to understand the big pieces, to then kind of dive deeper in the future, which is what we were hoping to do.

But in the end, COVID and everything else got in the way. I can see there’s a question asking, will the pieces of research that were not completed because of the pandemic be picked up in the future for completion? I’m really hoping so. So the project is still held by the BBC. I don’t work for the BBC anymore, so I’m not going to be doing that.

But I am hoping that somebody will pick it up. There was some discussion within the BBC of doing that. But I’m no longer there, so I don’t know how that– I don’t know how that discussion ended. But I really hope they do. Whilst we missed the users who had extreme vision loss or blindness, we did have some blind users. We just didn’t have enough of a data set to be confident in it, if that makes sense.

It’s the same for hard of hearing and deaf users. We just didn’t have enough data to be confident that we had enough. That we weren’t going to say, here are the barriers, and then six months later go, we’ve got some new data. Actually, they’re all nonsense. We just hadn’t seen it all yet. So I’m hoping that people can pick that up.

It’s only probably five sessions away from being finished. But pandemics make that kind of hard, because you’d be going into the places where all the vulnerable users are. They’re vulnerable. You don’t want to take a pandemic with you. And you can’t really do it remotely very easily. So that’s been a bit tricky. So I hope somebody does pick that up, yeah.

JACLYN LEDUC: Wonderful. Thank you, Jamie. We have a couple more questions. So thinking of user expectations, how and when might designers use trigger warnings to alert users to a potentially uncomfortable or unfamiliar environment?

JAMIE KNIGHT: OK. So this is beyond the scope of the research. The research was very much looking at the mechanics and the “being in the environment” element of where the barriers came from, rather than solutions. And this is more of a solutions question, of where should the warnings be placed, what should the warnings cover? We didn’t look at any of that.

However, I can speculate a little bit, based on what we did as we did it. So one thing that we learnt, which was really important, is introduce the controllers before you introduce the headset. So we worked with a young lady who was very anxious.

She worked with her support staff, and she was looking in the headset and taking it off, and looking in, looking around, taking it off. It was going really, really well. We said to her, everything in there is virtual. Nothing can hurt you. It’s all made up. It’s like TV. And Ollie, who was doing facilitation, then picked up one of the controllers, wandered across the room and handed it to her.

Now, from her perspective, something got up off the floor, floated across the room and bumped into her after we had told her everything is virtual, nothing is real. She freaked out because we had really badly communicated what was going on. So what we started doing when we were user testing was, we would have the environment open on a normal flat screen.

We’d hold the headset up, hold the controller up, pointing the headset at the controller, show the user, these are your hands in VR. This is what they will look like. So they already had that connection. They already knew what the controllers were like before we put the headset on. So that’s kind of like an order of operations thing.

We also found, as we worked through the project, that some people found it quite claustrophobic. So within the project, if you flip through it, there’s a series of posters and health information, where we have a list of conditions where we’d recommend you don’t take part, or tell us about it before you take part. That was a similar sort of thing. We went through that list with all participants before they started.

They went through it once with the screener and again with the facilitator before they got into the experience. We also had the great question of– so one of the things was, if you have a heart condition, please talk to us before taking part. You might not be able to take part. And a man in his 80s said, I had a heart condition. And then they gave me a new heart from someone who was 25, so I don’t have a heart condition anymore.

And I was like, I need to phone my manager and the lawyers because I have no idea! So that was a surprise. So I guess ensuring that when you have an experience, you’re telling people what’s coming up before they go into the experience. And I also think that one of the things that we were very sensitive to was because Ollie works with a lot of autistic people, he’s really good at recognizing first signs of anxiety.

So he would be very gentle with people and help them stay calm if they’re starting to get anxious– making sure that people know that they can stop at any point, that sort of thing. So whilst I don’t have any specific insight from a research perspective, that would be my anecdotal perspective on that.

JACLYN LEDUC: Wonderful, yeah. That’s very insightful. And then the final question we have here for you is, what is your favorite part about designing digital products?

JAMIE KNIGHT: What is my favorite thing about designing digital products? That’s a fab question. I like that. I think it’s the fact that we go from nothing to something by typing. That seems like magic! I remember the first time. So I built this thing in Unreal by clicking and dragging things and adding them to the environment– so just using a computer. And I put a helmet on, and I’m suddenly in a place.

And it’s a place I made that didn’t exist before. And there was no place like it in the universe, and now there is. One of my first projects was building myself sensory rooms, because I’ve always wanted a sensory room at home and I’ve never had the space. So I was like, OK, well what if I built a sensory room that was the size of a cathedral? What would I have in it and why?

So one of my favorite things about working on digital products is until you create it, it doesn’t exist. And then you can create this whole thing. And then you can share it with other people, and they can have their iteration on it and they can do their thing. And when we get it right, when we remove the barriers, we give people these experiences that are truly unique.

One of my favorite examples was we were working with a chap who was very low vision for the– I think it was the Blind Veterans Trust. And we put the headset on. He was walking around. And he basically said, I have better vision here than I do in real life. Because we turned the brightness on the headset all the way up, and he needs as much light as possible.

His ability to focus is right in, close to his eyes. He can’t focus after about 30 centimeters. So we were putting the lenses right by his eyes, incredibly brightly. He was like, this is more accessible to me than my real life. Can I just have my house in VR, mapped 1 to 1? And I could go back to work. I could go back to working in my workshop.

I think he used to make watches. He could imagine himself with good enough technology that he could be physically holding a watch and working on it, using a VR headset, to simulate what he’s looking at in a way that he can see. Because he doesn’t need all the detail. If it was just a gray outline or a yellow outline of what it is that he’s touching, he can feel the rest with his fingers. He just can’t get his fingers close enough. That’s such a cool thing.

Another project I’ve been working on is around virtual reality physio. So rather than me sitting here doing my physio, getting sweaty and uncomfortable doing my exercises, I put on a VR headset and I’m rowing a boat. Go row up an amazing river. Go explore the Amazon, whilst also happening to be doing my very boring physio. So we’ve also got opportunities like that. To me, that’s great. Digital products– you get to create the world. And it’s amazing.

JACLYN LEDUC: Yeah, it’s amazing. Thank you for sharing, Jamie. So I think that’s all– well, that’s all the questions. And also we’re out of time. Thank you so much, Jamie, for this wonderful presentation. And thank you to everyone who joined, as well. We hope you enjoyed it. Thank you again, Jamie. It was great having you.

JAMIE KNIGHT: Bye bye! Bye bye from Lion, as well.

JACLYN LEDUC: Bye, Lion!