Disability Deep Dive Podcast
Disability & Artificial Intelligence - with Lawrence Weru, Ariana Aboulafia, and Jennifer Gray
Thursday, June 06, 2024
Today's episode features disability and artificial intelligence (AI) experts Larry Weru, Ariana Aboulafia, and Jennifer Gray. AI has been around for years, but things changed when OpenAI released ChatGPT. Since then, the news around AI has increased a lot. AI is more than just a chatbot responder; it can be used in many different ways. We talk with the guests about how AI impacts people with disabilities in their healthcare, education, and employment. The guests provide both intriguing and alarming information about the implications for people with disabilities.
Relevant Links:
- Larry Weru’s bio: https://bit.ly/3x6hCUt
- Jennifer Gray’s bio: https://bit.ly/3KvALSQ
- New Disabled South: https://bit.ly/3HzJsdg
- Ariana Aboulafia’s bio: https://bit.ly/3KBv9X7
- Center for Democracy & Technology: https://bit.ly/3RiYyJx
- New Disabled South’s Study on the Benefits and Challenges of Autonomous Workplace Technology on Disability Rights and Labor Rights: https://bit.ly/4bLRA8a
- Center for Democracy & Technology’s Project on Disability Rights in Technology Policy: https://bit.ly/458k7SV
- Automating Ableism (article): https://bit.ly/45bJXW4
- 99% of Fortune 500 companies use AI (article): https://bit.ly/4bIOFx9
- A racist soap dispenser? Critical Theory and the non-neutrality of society (article): https://bit.ly/4c1zkId

Episode Transcript
Keith Casebonne (00:00:00):
You're listening to You First: The Disability Rights Florida Podcast. In this episode, we talk with Larry Weru, Ariana Aboulafia and Jennifer Gray about Disability and AI.
(00:00:28):
Hey, everyone, I'm Keith.
Maddie Crowley (00:00:30):
And I'm Maddie. Today's episode is gonna be super interesting, as we learn from industry experts about how AI impacts people with disabilities. So AI has always been around, but things really changed, kind of within the past year when OpenAI released ChatGPT, and since then, the news around AI has increased quite a lot.
Keith Casebonne (00:00:51):
Oh, yeah, it really has. And, and so much more now than just like a chatbot responder, uh ...
Maddie Crowley (00:00:57):
Mm-hmm.
Keith Casebonne (00:00:57):
... AI is starting to be used in, in so many ways, some that you may have heard of, some you may not have heard of. Uh, I know we both learned a lot in this conversation alone about all the good and bad things that can come out of using AI.
Maddie Crowley (00:01:11):
Mm-hmm. Yeah, and we talk with the guests about how AI specifically impacts people with disabilities, um, and, you know how it impacts their healthcare, education, employment, amongst other things. And while they offer some really intriguing information, they also kind of provide things that are a little frightening about ...
Keith Casebonne (00:01:31):
Mm-hmm. (laughs)
Maddie Crowley (00:01:31):
... this topic. And, and I would say ...
Keith Casebonne (00:01:33):
Yeah, yeah, yeah.
Maddie Crowley (00:01:33):
... before you stop listening before this seems like an elevated concept, um, I'd just say that they did a great job not taking it into super technical terms and coding and all that kind of stuff (laughs) because ...
Keith Casebonne (00:01:46):
Right. (laughs)
Maddie Crowley (00:01:46):
... for myself, who's a non-techie, like, not a science-y person, I was able to follow along and really understand things because they, they put it in the context of the impact of what it does more than like the intricacies of how it works.
Keith Casebonne (00:02:01):
Right.
Maddie Crowley (00:02:01):
So-
Keith Casebonne (00:02:01):
How it affects you and not ...
Maddie Crowley (00:02:03):
Yeah.
Keith Casebonne (00:02:03):
... how it runs, right?
Maddie Crowley (00:02:04):
Right, right. So I hope y'all feel like you can step into this episode and not feel overwhelmed and enjoy this recording we have for you. Hey, y'all, thanks so much for being on the podcast to talk about Disability and AI. I'd love you all to take a moment to introduce yourselves, share how you got into this work, if you wanna share your pronouns and a visual description, yeah, the floor is yours.
Jennifer Gray (00:02:34):
I can go first. Um, hi, uh, my name is Jennifer Gray. I, pronoun she/her. Um, a visual description of myself, I am a youngish white woman. Um, I've got dark auburn hair. I'm wearing cream headphones and I'm wearing a white-collared shirt and in the background is my frankly chaotic bookshelf. (laughs) I am the research manager at New Disabled South, which is a disability justice organization focusing on the 14 states in the Southern United States and I am currently leading a project focused on the intersection of labor, technology in the workplace and disability.
(00:03:21):
How did I get into this work? Well, I actually originally started off in neuroscience. I have my master's in neuroscience from Georgia State University in Atlanta, Georgia and I conducted immunological research and neurodevelopmental research for many years. And I, like many people, became disabled during the COVID-19 pandemic. I have had neurological disabilities my entire life and I identify as neurodivergent. However, I really delved deep into the disability advocacy and disability justice world after being diagnosed with long COVID-induced ME/CFS and I worked on some projects in relation to science writing with New Disabled South and that's how I found them and came into this role.
(00:04:15):
And with this project, it's, we're working with the National Disability Institute and the Ford Foundation, looking at the intersection of labor and tech and a lot of that involves AI. So really excited to be here.
Larry Weru (00:04:29):
Thanks, Jennifer. I'm Larry, pronouns he/him. I am an African American male, youngish also. I have a mustache and a b-b-beard, which I've been growing out this year, but if you look at my older pics, I don't have one. I'm wearing a long-sleeve black sweater, white dress shirt. I'm someone who stutters, so sometimes my arms help me and the, t-t-the b-b-background is my hotel room, which I'm in right now, which has a nice window and some [inaudible 00:05:05] and some artwork. I currently work in the Department of Biomedical Informatics at Harvard Medical School and we're currently researching how to make technology more accessible.
(00:05:16):
In our case, the industry as a whole, biomedical informatics has a lot of tools that researchers might use, and then also if, if you're working in research and also medicine, there's a lot of research tools or clinical tools you might use and a lot of those tools aren't designed t-t-to be accessible and that creates this workforce b-b-barrier that we're trying to just help remove by making sure that we can come up with ways to not only just address novel issues in our field, but also just in general make a lot of technology more accessible in the p-p-process. I got into this work, in the past, I ... Well, I guess I could start way earlier then. When I was 11, I learned how to code, uh, just by reading a book. And in that book, there was one chapter in there that was basically saying, "Everything you make needs to be accessible," but that's arguably the only place that I have seen a strong focus on this in any of my education.
(00:06:30):
So after I've, I learned how to code on my own, I went to college. I studied biology and art, but on the side, I took a lot of classes on web dev and I also d-d-did a lot of client work and my clients were like companies that needed some kind of web technology applied, which oftentimes was custom, but, uh, sometimes was just out of the box. Um, and that was like my foray into working in industry. And then I spent a couple years in industry, working with a lot of different companies, maybe about 5-5-50 total, in all kinds of industries. And of those companies, only one was really interested in making their websites accessible. And, uh, like at that moment, I just grew up, I remember looking back and thinking, "Okay, I've, I've worked with managers, I've worked with designers, I've worked with developers and there's some systemic issue here that results in what was happening with the web not being as, as a-accessible as it could be."
(00:07:38):
So I decided to t-t-take a step [inaudible 00:07:41] and try to just understand what are these social forces, which I, I'm confident are more social than technological that, uh, result in technology not being as accessible as i-it could be. So that's all I got here today.
Ariana Aboulafia (00:07:58):
My name is Ariana Aboulafia. A brief visual description: I'm also a youngish woman. I have curly brown hair that I typically say is shoulder length, but it's grown far past that at this point, got ahead of me. I'm wearing a black-collared buttoned-up shirt and also a black jacket and I have glasses with gray rims. I am the policy counsel for Disability Rights and Technology Policy at the Center for Democracy and Technology, which I'll probably refer to throughout today as CDT. As policy counsel, I lead our Disability Rights Program. And our Disability Rights Program essentially focuses on, um, studying the ways in which technologies, including those used in hiring, benefits determinations, and in the context of education, how they specifically impact people with disabilities and then to advance policy that protects the digital and civil, civil rights of people with disabilities specifically.
(00:08:55):
So that is the program within CDT that I lead. CDT, more generally, is a tech policy organization. My work essentially runs across any area that can impact folks with disabilities. How I got involved in this is I've had disabilities my entire life. I got involved in disability advocacy essentially out of my own need and, and feeling like there were needs that, that weren't being met, mostly on my college campus. I'm an attorney by training. I went to law school because I wanted to be a civil rights lawyer. And while I was there, I took a course with a professor named Mary Anne Franks, who taught me the ways in which algorithms and technological systems can be discriminatory.
(00:09:42):
And, and the, the book that she assigned that really helped me think about technology in a civil rights context and as being part of an overall civil rights agenda was called Algorithms of Oppression by Dr. Safiya Noble and that sort of course and the, the readings that I had there really made me think about technology for the first time as something that could be part of a civil rights agenda or a disability rights or disability justice agenda. And after that, I served as a public defender in Miami-Dade County for just under two years and, and I wanted to do that because I wanted to help people with disabilities that are interacting with systems that are often very unfriendly to them.
(00:10:21):
But in the work that I do now, I think a lot about how technology, when combined with those systems, whether it's criminal legal system, healthcare benefits determination systems impacts folks with disabilities.
Keith Casebonne (00:10:35):
Mm-hmm. Thank you all for giving us that information. Really great backgrounds. So I'm Keith. I'm cohost of the podcast along with Maddie. I'll be the first to say that I am not a youngish person, but I've got brown hair, a salt-and-pepper beard. I'm a white male with a black T-shirt. I'm wearing black headphones and in my office with a boring beige wall behind me. Um, again, I, we're really excited and honored to have all three of you here. You have such diverse backgrounds and skillsets. I, I just, I know this is gonna be a really amazing conversation and really looking forward to, to jumping in.
(00:11:11):
Real quick comment though, uh, Larry, you mentioned about that coding book focusing on accessibility. That's one more coding book than I've ever seen (laughs) that's talked about, uh, accessibility in it as well. So that's, uh, that's amazing. I didn't even know one existed. So that's ...
Larry Weru (00:11:23):
Okay.
Keith Casebonne (00:11:23):
... at least a little refreshing to know (laughs) that there is one out there somewhere that actually talks ...
Larry Weru (00:11:27):
[inaudible 00:11:28].
Keith Casebonne (00:11:27):
... about that way back. Well, at least, that, that takes some time. That's, that's better than nothing, I guess. So before we get too nuanced into the discussion, uh, about sort of the interactions and issues, uh, with, uh, disability and, uh, AI, artificial intelligence, can one or more of you give us a little bit of an overview of what AI is, a brief background, if there's any sort of terminology that would be good to talk about beforehand of the types of AI, how it, how is it an intelligence, like, how does it learn, some of that sort of thing, just get us started, like a little background?
Larry Weru (00:12:03):
Sure. Keith, I can hop in. This is Larry speaking. I think like, historically, you've always had this idea that technology is this thing that we can like design in a way that can mimic humans and what we can do. So, I feel like maybe around the 1970s, but maybe I might have the wrong era, there was this test that was kind of like developed to see, if, what would it take to have a machine kind of like convince someone that they're interacting with a human. And the first kind of instance of that was this like chat interface where somebody could type into, have a conversation with this, with this machine. And if they were convinced that all the responses were human, then it was like, "Yeah, okay, I'm interacting with a human."
(00:12:54):
And, but we never really got to a point where we could have machines mimic everything we can do, but this pursuit of having machines do more things we can do has been known as AI. And depending on the era, it's, it's taken on all kinds of meanings and I'd say like the current era now with the ChatG-G-GPT, a lot of people have this current understanding of AI where it can have a conversation with you, yes, but it can also maybe help you write essays, or something that might be able to do some of your math homework, but not really. But, but even if we go before ChatGPT, there was all this AI development that, that was happening and AI has been here for a while, but it was always more like less, less well known in a way.
(00:13:52):
Like even Siri, like when you talk with Siri and it u-understands what you're saying, that was some kind of AI that kind of took your voice and converted it into, uh, the language which it could understand and then come with a response t-to you and these responses would feel intelligent. But we've also seen AI used in all kinds of spaces like document scanning, which is maybe like an early use of it as well, OCR, like some might say, is an AI, but like nowadays, there's very like s-specialized AI which might be used in certain cases. Like I mentioned Siri, where you can, like talk and it understands you, but there's also other kinds of AIs that like might recognize images and tell you what's in this image.
(00:14:46):
There's some AIs that ... Uh, and, um, i-i-if you ask those AIs to do anything else, they won't be able to because those AIs aren't trained to, to do anything else. But let's say you have this image and it's a scan of some kind of cellular tissue and you can have an AI look at that and it can tell you, "Hmm, I think this might be cancerous," and that's like, a very narrow type of AI. You can't ask it to, like, write an essay for you, but yeah, I think I just wanted to give a g-general landscape and e-everybody else can chime in as well.
Ariana Aboulafia (00:15:21):
Yeah, um, this is Ariana. I'll, I'll, I'll add a few things there. I think what Larry is doing a really good job of, of talking about is, both, that there are different types of AI and that different types of AI can be used for different things. And I also think, Larry, your point that our conception, or a common conception right now of AI, is that generative AIs, like the chatbots, like ChatGPT, are AI. Well, that's probably a pretty common conception just because ChatGPT has taken the, taken the world by storm to a certain extent. Generative AI is not the only kind of AI. So some forms of AI, so there's machine learning, right? Which is when machines use training data to make better predictions. Predictive AI analyzes training data to predict future outcomes. Generative AI creates new content based on training data.
(00:16:13):
So that's, those are just some different types of AI. And one thing that you'll notice in all of those three is that there's, uh, a mention of, of training data, right? And so one of the ways to, to think about this is that AI creates outputs based upon inputs and that the i-inputs can be thought of as the, the training data. And when you think about disability, one of the biggest concerns with potentially why some of these algorithms or algorithmic systems lead to discriminatory or disproportionately negative outcomes for folks with disabilities can be traced back to underrepresented or non-inclusive datasets regarding disability that are used to create the algorithmic inputs, right?
(00:17:02):
And so part of the contribution to addressing algorithmic bias, or "tech-facilitated disability discrimination," which is a term I often use, and the reason I use that term is because folks with disabilities have been facing disability discrimination for quite some time and this, what, what we see with a lot of these systems, it's disability discrimination just facilitated by technology. And in order to combat that, right, that tech-facilitated disability discrimination or algorithmic bias, right, as a broader thing insofar as it affects folks with disabilities, ensuring that there are more representative datasets for folks with disabilities, including disability-related data, is a really important step. And I've been doing quite a bit of work on disability data as of late, but I'll, I'll stop there.
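The dynamic Ariana describes, where a group underrepresented in the training data ends up unsupported by the resulting system, can be sketched with a toy example. Everything here is hypothetical: the sample counts, the threshold, and the "model" are purely illustrative, not any real system.

```python
# Toy illustration (hypothetical data and threshold): a trivial "model"
# that only learns to handle patterns it saw often enough at training time.
from collections import Counter

# Hypothetical training set: 95 samples from typical speakers,
# only 5 from speakers who stutter (underrepresented group).
training = [("typical", True)] * 95 + [("stutter", True)] * 5

MIN_SAMPLES = 10  # patterns seen fewer times are effectively ignored
counts = Counter(pattern for pattern, _ in training)
known_patterns = {p for p, n in counts.items() if n >= MIN_SAMPLES}

def model_handles(pattern: str) -> bool:
    """The model only behaves reliably for patterns it trained on enough."""
    return pattern in known_patterns

print(model_handles("typical"))  # well represented, so supported: True
print(model_handles("stutter"))  # underrepresented, so unsupported: False
```

The point is not the mechanism, which real systems implement very differently, but the input-output relationship: nothing in the code is "against" speakers who stutter, yet the skewed inputs alone produce the disparate outcome.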
Maddie Crowley (00:17:50):
This is Maddie. Thank you both so much for that overview. So as, as you're naming all of these different components of AI and how they can impact people with disabilities, I think it's really intriguing that it's like these datasets are just information for kind of just a broader audience maybe to understand. It's just like it's working off of information that already exists, that has been not representative of the people that exist or of people with disabilities and their real like experiences and, and that, that can be applied to a whole host of things. And, and before we get into those different sectors of people's lives, could you talk a little bit about how AI, especially like facial recognition and, and things like that, how those things can also disregard disability when it comes to a limb or facial difference or a cognitive disability or, yeah, just, could we chat a little bit about that too?
Larry Weru (00:18:54):
Uh, Maddie, this is Larry. I think it's interesting. Like this morning, I needed to use my calendar to add an event. And I was on my MacBook and I was thinking, "Okay, how am I gonna accomplish this?" And normally, in 2024 a lot of folks use Siri. I'll use my example again with Siri. (laughs) In this case, you can kind of like, tell Siri, "Hey, Siri, I have an event." But I've noticed like on my Mac, I have, have it switched to where I can type to Siri because I've noticed like, oftentimes, when I'm trying to talk with Siri, it might not, it, it might assume that I stopped speaking when I was still in the middle of my command. And I think that this is really indicative that, uh, that even when you look at voice-p-powered AI, uh, the data that they're trained on might primarily include folks who have what might be considered a normal p-p-p-p-pattern of talking, but they may not have sampled from enough folks who have a stutter as one example.
(00:20:10):
And then in that case, what ends up happening is the output, in this case, this AI interface d-doesn't quite understand what I wanna do and that's just one example of how I personally do feel like Siri is not meant for me sometimes because I prefer to even s-s-switch it off and type. But, um, there's all kinds of other examples as well and I'd like to let other, other folks talk to, so I'll hop off for, for now. (laughs)
Jennifer Gray (00:20:42):
This is Jennifer and, uh, yeah, I completely agree. Thank you so much for that comment. Um, going into that a little bit, especially technology like voice control interfaces or screen readers or even the image recognition that you mentioned earlier, Larry, in the descriptions that are generated from that, all of this data, this just really goes back to what both of you said and what you touched on, uh, Ariana, is that this training data is so inherently biased. And so this bias shows up at, in so many different ways, in so many different technologies as well. One example that always sticks with me, which was, you know, really haunting, I mean, there's a lot of different examples, but there's this fantastic book by Meredith Broussard, um, and it's on technochauvinism, which is essentially like the overemphasis and importance of technology over everything else and that technology is this incredibly unbiased thing and we obviously today are talking about how that's not true.
(00:21:54):
But she goes into an example in her book about this, um, thing as simple as a soap dispenser, an automatic soap dispenser in a bathroom that was trained on a dataset to recognize a hand going under it. And there was one researcher, I'm forgetting this, so excuse me, but a researcher who was a Black woman and could never get the soap to dispense on her hand. And she was like, "What? Something is going on here." And she ended up actually looking deeper into the technology and into the algorithm behind something as simple as a soap dispenser and found that this dataset was trained on lighter skin tones and couldn't r-recognize darker skin tones.
(00:22:36):
And so it's this bias that shows up in these datasets that when researchers are compiling data or doing data collection, um, like we've mentioned here, they're missing this piece of including multiply marginalized folks that may have different facial features, may have different mobility needs, different speech needs, different skin tones. I mean, it just shows up in so many levels. Um-
Ariana Aboulafia (00:23:05):
Yeah, and this is Ariana and I'll, I'll add something very quickly. So this is a bit of an oversimplification, but to a certain extent, algorithms create outputs based on pattern recognition, right? That's how they make their determinations. And a lot of times, if the, the data has not been inclusive, and again, there are all sorts of reasons why data, disability-related data in particular, may not be, uh, inclusive or representative, but if that data does not include sufficient information about folks with disabilities and the ways in which certain individuals with certain disabilities exist outside of patterns, you're going to get outputs and results that are just as unreflective of the fact that people with disabilities exist and use technology, and even more than use technology, interact with technology when technology is incorporated into systems, that those outputs are gonna be just as unrecognizing of that fact as, as the inputs.
(00:24:06):
And an, an example that I, I give sometimes is, let's say hypothetically, there's a technology that for whatever reason wants to use retinal scans, um, for security purposes or something like that. It's really important that the folks creating that technology or deploying it have thought about the fact that there are some folks who may not have retinas, right? And I don't necessarily think, as far as where we're at right now, that those considerations are occurring, at least to the level in which I think the folks on this conversation would, would want them to, and at the very least, uh, at, at the level I would want them to, but I, I don't wanna speak for my, my colleagues. (laughs)
Keith Casebonne (00:24:51):
This is Keith. No, that's understandable. And as you were saying that, uh, I was thinking about how I personally can't even see a way that we would ever reach the point where something would be useful and understanding of all of us in the world. I mean, there's, what, 7-plus billion people in the world right now, each one of us having something unique about us, uh, how, how in the world would that ever happen? Uh, now, as I say that, I'm sure, next thing you know, tomorrow, there'll be some product announced that (laughs) can do some of those things, but, but it's just, it's, it's fascinating that there's, that things are created to help us all, and yet, they're really shortsighted as to what all of us are, who we are, and how we all are different.
(00:25:38):
And it's, it's, it's almost, I don't know what the word is, it's almost like, uh, a sort of conceited concept to even imagine that such a thing can exist. But anyway, that's, I could digress on that for a while and I, I will stop myself. So let's jump into talking about some of the, the, the meat of this and, and there's a lot of different areas where we're starting to see AI used, right? We're looking at things like healthcare and education, housing, employment, people, I mean, how, how benefits are determined, things like that, which right off the, even as I say that, that just feels scary to me, that an AI is gonna help determine someone's benefit standing.
(00:26:15):
But, so there's a lot of different topics we can touch on through this conversation, so let's jump into, how do governments use AI to assess disability benefits and, in healthcare? What is the intention and, and, and what does the impact on the disability community look like?
Ariana Aboulafia (00:26:32):
This is Ariana. I'm happy to jump in here. So I, I do some work in, in this space and there are tons of, of advocates who do really incredible work in this space. So I, I would say to answer your question, uh, about intent, generally speaking, I would say that states that incorporate algorithmic systems into their benefits determination systems do so out of an intent to reduce administrative burden and also to reduce instances of fraud, right, or potentially waste, fraud and abuse or whatever term of art they would want to use, right? I don't think there are bad intentions, per se, with incorporating these systems into benefits determinations. I think what we're seeing is, is bad outcomes.
(00:27:20):
So to answer your question about how this impacts folks with disabilities, a state may start using an algorithmic system that makes determinations as to how many hours someone may receive and what, what has occurred in certain states is drastic, very significant reductions of benefits eligibility in terms of hours for, for care and services, which significantly impacts a person with a disability's ability to sometimes live in their home, sometimes live in their community, sometimes have their, their needs met as a person with a disability. The, the impact can be very, very significant, but, and I, I will, I will say that because I think it's, it's really important, and e-essentially right, these, these systems just use algorithms to make determinations as to how many hours someone needs when previously those decisions wouldn't necessarily have been made by algorithms, right?
(00:28:27):
And what, what we've seen is, is again the, the impact really being, generally speaking, reduction of hours for folks who really do need a greater number of hours. And so I would say that, for me, I think the incorporation of AI into benefits determinations and the ways in which that impacts folks with disabilities is one of the more or most concerning potential areas, I think, for folks with disabilities, but I'll, I'll turn it over to my colleagues to add more.
Jennifer Gray (00:29:00):
So one thing that is, pertains to the work that I do is the intersection of this technology with the disabled workforce, right? So one big topic of discussion, well, there's many, um, has been algorithmic bias and just general ableism in hiring software and in general kind of HR and hiring practices. So that's something that's been a hot button issue and I believe, recently I was reading, I think like 99% of the Fortune 500 companies in the world use AI in candidate hiring software and there is very little to no human oversight in a lot of the hiring process, which is quite troubling, and we can get into how that is ableist, what does that look like.
(00:29:59):
So AI is being used, right, in hiring practices at multiple different levels and so it's being used, let's say, by LinkedIn, by ZipRecruiter, Job Bot, hiring sites in order to target candidates. And there's been previous research showing that these algorithms will unintentionally, because it's an algorithm, but, right, the input was biased, so the outcome is that they are biased against openly disabled candidates that will list that they are disabled and, uh, will be passed up for jobs even if they have similar or sometimes better qualifications than other candidates. So these algorithms, right, once you apply, they're going to scan your resume for certain keywords and they're going to, let's say, you have on there ...
(00:30:53):
I mean, this is just, this is like a basic example. This doesn't, you wouldn't find this on a, a job application, but let's say, like you were on a sports team, you would be ranked higher and seen as more desirable of a candidate than a candidate who maybe has like a mobility disability and never had that experience. So there's really like troubling uses of AI, especially ... So in a lot of job applications, they ask you to send in a video and so you might, over Zoom, be in an interview such as this. There are a lot of companies that are starting to implement AI that actually is going to analyze and track your tone of voice. It's going to track your facial expressions. It's going to track what you say, the cadence of your speech. And, Larry, you mentioned, this is going to be really ableist for someone who maybe has a stutter. This is gonna be discriminatory against someone who may have partial facial paralysis. This is going to be discriminatory against low-vision individuals or individuals who have autism spectrum disorder and it, it just, in so many different ways, these algorithms are unintentionally, but in a very real way, harming the disability community.
Maddie Crowley (00:32:28):
This is Maddie. I just wanted to hop in and ask a small follow-up question. So when, I know we're gonna get into some other like sectors of life to talk about education and employment and other things. I know we, we just talked a little bit about employment. What is, what are, I know y'all are doing research and, and may not be able to give exact figures as you're still doing research or i-involved in this stuff, but what is the prevalence of ... I know you just said it was like 99% of Fortune 500 companies. What, as far as disability benefits and healthcare, do you have any indication about how many organizations or, or governments around the world or folks in the, in the United States that are using these kinds of tools to assess disability benefits or healthcare benefits?
Jennifer Gray (00:33:26):
This is Jennifer. I, so I don't have any new figures off the top of my head right now, so I don't wanna say anything. (laughs) Speaking to that, I do believe it's around 70% of companies in general in the United States that will use this kind of software and that's at the hiring stage. I actually do research currently on what happens to employees once they've been hired and the experience with technology once, you know, they're in the workforce and it's really troubling. Currently, there is no policy in place in any state, to my knowledge, besides New York, that requires companies to report on the use of certain, let's say, surveillance technologies. So I don't have figures on that. I'm not sure, Ariana, if you have any, um, thoughts on this.
Ariana Aboulafia (00:34:29):
Yeah, so something I'll say about the hiring tools in particular that, that you're mentioning, Jen, is, is we, we don't know how common these tools are because there is no policy that requires transparency or disclosure, right? And because there's not any sort of across-the-board or nationwide or anything like that policy that says, "You have to disclose when these tools are being used," it's really, really difficult to do accurate data gathering, although from estimates that I've seen, it, it seems pretty high. That's what I would feel comfortable saying. And, uh, I'll speak a little bit as I'll put on my, my lawyer hat for a second, right? One of my concerns with the lack of transparency is that, when folks with disabilities don't know that they're coming face to face with a hiring technology, they may not know to even ask for an accommodation, right?
(00:35:33):
And the Americans with Disabilities Act, which provides, uh, the right to accommodations, um, you, you do have to ask for them, but if folks with disabilities don't know that they might need it because they don't know that a tool is being used, a, because, again, they're not told, or even secondarily, and this is a little bit more nuanced, but let's say they're told, "We use a hiring tool," but they don't necessarily know that that tool, let's say, monitors vocal cadence in a way that they specifically would need an accommodation or monitors eye contact in a way that they would specifically need an accommodation, right? Uh, I, I definitely, I, I agree with you. I, I think estimates I've seen put these numbers at, "This stuff is very common," and, uh, I, I think there, I haven't seen evidence that points to the contrary, but insofar as actual hard data, that's really difficult in a context of no required transparency, which is secondarily its own issue for, for disabled workers.
Maddie Crowley (00:36:44):
So since we kind of got into the conversation about employment and hiring processes and, and talking a bit about these AI tools, was there anything else that y'all wanted to highlight in the realm of AI in employment or, or even just, "Yes, there's, there's these negative impacts"? At the end of the conversation, we'll talk a little bit about how AI and disability can work together and how folks with disabilities can use AI-powered tools and different things to live their lives and live independently and all those good things. Is there anything else specifically in the realm of employment that y'all want to discuss before we get into talking a little bit about education?
Larry Weru (00:37:27):
Larry here. The one thing I wanna, uh, hop, hop in on and maybe it's also useful for a later topic is when it comes to like technology and e-employment, even beyond hiring, just like the technology that a, a company might use, for example, and a company might then require its e-employees t-to use as well, I just, I think it's important for companies to just look into the software that they're requiring everybody to use because they might be creating b-b-b-barriers that result in maybe somebody appearing to underperform when it just might be that the software that you're requiring them to use is not accessible to them.
(00:38:17):
Um, this might not necessarily have to, to do with AI specifically, but let's say you have someone who either can't use a mouse or has a hard time using a mouse and would like to use just the k-k-keyboard, and any software tools that you require your e-employees to use, if they're not usable in that way, it can just make things harder. And oftentimes, you might even list a certain software tool as a job requirement and then you say, "If you can't use this thing, we won't hire you," when you could have also just looked at, "What are all the options out there? Is this the best one for us to use or are there other, are there other alternatives as well?"
(00:39:07):
Like I love to toss some examples in. I, I think there was a point during COVID when we all switched to remote work and Zoom didn't have auto captions, which led to a lot of issues for folks who otherwise could understand what was happening in a meeting, but something about Zoom just made it hard to follow along without auto captions, and other things as well, like how accessible is Zoom, as just one example. And for a while, there was no good solution either. Microsoft Teams was not accessible. There was like no industry standard that was, but still, the choice was, "Let's all switch to remote. Let's all use Zoom." We didn't really think about, "Who are we gonna leave out with this as well and how do we accommodate that?" Just like getting companies to think more about the software tools that they're requiring everybody to use because, let's say, if they had looked into it early and seen that there was no tool that's accessible for r-r-remote conferencing, then maybe it wouldn't have taken both Microsoft Teams and Zoom until a year and a half into COVID to finally add just something as simple as auto captions. Maybe that would have been something that was already there and it wouldn't have left out one group early on.
Ariana Aboulafia (00:40:33):
And this is Ariana. This is a really good point, Larry, and, uh, Jen, I think you also, you alluded to this, right? In, in the context of employment, hiring tools are definitely not the only issue, right? There's hiring tools, but then once folks get jobs, there's all sorts of, uh, as Larry mentioned, right, there's the accessibility concerns, but secondarily, there's also concerns about other sorts of tools that employers use, like surveillance tools, and how those can also disproportionately impact workers with disabilities, right? So an example of that could be, let's say, a, a tool that monitors keystrokes or a tool that monitors mouse movement or a, a tool that generally surveils, let's say, a worker's computer. But consider a worker with a disability who may, as an example, need to eat a few smaller meals throughout the day instead of one lunch, or a worker who may have a disability of their gastrointestinal system where they may need to use the bathroom multiple times throughout a day, right?
(00:41:36):
Those sorts of worker surveillance tools may be disproportionately impacting folks with those sorts of, of disabilities when they are then used to contribute to performance evals and, and that sort of thing. Um, and I'll say this as well, right, as I, I mentioned in the, in the benefits context, right, like I think it's really important to ground all of these and talk about, at least momentarily, why it's important, right? With employment, it almost goes without saying, but if an AI tool is a dispositive factor as to why a person with a disability doesn't get a job or can't keep a particular job that they are otherwise qualified to do, it can have a really significant impact on that person's life and financial and personal wellbeing, right?
(00:42:25):
And so I, I think the context is really important because employment is another one of those areas where the incorporation of these algorithmic tools can really have significant impact on people and their lives.
Jennifer Gray (00:42:42):
Yeah, that was exactly, I mean, I was just thinking about surveillance technology. So thank you, Ariana. And just as an example, with my research, we look a lot at the retail shipping logistics supply sector and gig workers as well, and in particular, focus on a certain shipping and supply conglomerate. (laughs) And I mean, there's so many different ways in which surveillance technology is implemented, and some of it is only alleged. And again, this goes back to companies not being required to report on this: many employees and people that we've been talking to within our, uh, research as well are not aware that these surveillance systems are in existence, and if they are, to what extent they're being used.
(00:43:40):
Let's say, for example, in a warehouse. Again, it's not only in office jobs that there's this surveillance technology that happens on the computer. This stuff can happen for more manual labor jobs and retail jobs where your movements are being tracked, whether that be, you know, wearing like a vest or a bracelet that tracks your movements throughout a floor plan, um, or scanning certain boxes at a certain rate and that's measuring your time on or off task. And again, like you mentioned, this can be really discriminatory against folks that may need to use the bathroom. Um, pregnant women are experiencing a lot of issues with this as well. Um, folks with mobility issues, rheumatoid arthritis, which I myself have.
(00:44:24):
So not only in office jobs are we seeing this, but we're seeing this threaded all the way down to gig workers and more m-manual labor jobs. So this is a problem that is relevant to everyone and to all workers, disabled and nondisabled alike.
Keith Casebonne (00:44:45):
Yeah, this is Keith. That, that, it's incredible to hear about all these different aspects of it. I mean, I've known about some, but some of the things you mentioned are, uh, very interesting and things I had not considered or, or, or heard about. So it's fascinating how pervasive the, the AI revolution is in all these areas for (laughs) better and worse. Let's talk about AI and education. So what is student activity monitoring and how does that impact students with disabilities as, as well as teachers and professors that have disabilities as well?
Ariana Aboulafia (00:45:17):
Uh, hi, this is Ariana. I, I can talk a little bit about some of the education work. This is, this is another area that CDT works on quite a bit. We have a team that works on education, um, and education technologies and how they impact students. And I, I try to provide some of the disability, uh, perspective there. So student activity monitoring refers to technologies, and they're, they're often part of or in school-issued laptops or iPads. Um, they're essentially softwares that monitor what students are doing on either those school-issued devices or sometimes, I believe, the school Wi-Fi, but it, it can also be, again, on the device, meaning if the student takes the device home and works on it, that software would still be running.
(00:46:05):
And that software, it uses algorithmic systems, and it will have a list of certain terms to flag for educational administrators. So that could be teachers, but it also could be higher up, like principals or something like that. Uh, and there's a few reasons why these systems were originally put on school devices, right? So one is safety, right? One is safety of students. One is also to help students during the, the COVID pandemic who were suddenly gonna be doing a lot of work removed from the traditional, uh, classroom environment, right? How can it potentially be problematic for students with disabilities? Some colleagues at CDT and I published a, a paper on this, but let's say hypothetically that some of the words that are considered flags for student discipline or to pull a student out of class are words related to depression, right?
(00:47:07):
And let's say that there is a student who has a disability like major depressive disorder and is looking at or searching words or terms related to that disability, but then they're being flagged, again, by the software, which is then getting reported out to the teachers or the administrators for purposes of discipline or, or for purposes of conversation such that it requires removal from the class environment. If this happens repeatedly, it can be an, an interruption to a student's everyday experience in school on the basis of a disability. And, and there's all sorts of like equity things to think about here, right? Because when these softwares are on school-issued devices, it's important to, to think about and to ask the question of, "Which students use school-issued devices at home and which are able to afford, or more accurately, their families are able to afford other devices for them to use at home?" which would reduce the window of time whereby that, that activity monitoring has access to you.
(00:48:14):
Something else that I'll mention, this is a little bit outside the context of disability, but it's important for purposes of multiple marginalization. These softwares, some of them, not all of them, will also flag terms related to LGBTQ+ identity, which can lead to outing of LGBTQ+ students, which is extremely problematic for, I would hope, obvious reasons. And, and this goes back to something that, that, Jen, you were talking about in the context of employment, right, is that these tools hurt disabled folks, but they also hurt folks without disabilities. Uh, and so when I think about how to design or how to create better technologies that maybe don't have these effects, a lot of times in my mind I go back to these initial precepts of inclusive design, right? The idea that you can design things, whether it be technologies or something else, in a way that benefits people with disabilities but also benefits a lot of other people.
(00:49:07):
And I, I think that that's a helpful framework for some of the things that we're talking about here. And, and, Jen, it was really great to hear that from you because it is 100% true that these things are having effects on not only folks who are marginalized in other ways, not only folks who are multiply marginalized disabled people, but also folks who may not be disabled at all and, and who just are interacting with these technologies in ways that are not beneficial and potentially harmful. So I hope that, that helps with some of the information on education and activity monitoring.
Maddie Crowley (00:49:46):
I think that's such a great point, and as, as all of y'all named or referenced, when you're thinking about disability, there's such a huge population around the world, but specifically in the US, it's maybe one in four, maybe more than that, around that number, right? That, that is easily gonna intersect with these other identities that people hold, privileged or marginalized, that will change how they navigate interacting with tech and their access to tech in general. So I think naming issues like internet access and/or not owning a device, and therefore having to use what's provided by the school that may have this information or, like, AI tracking software that Ariana was talking about, I think naming how these things impact one another is really, really important because it's the only way we can move forward through this space to bring a more equitable approach to how these things are implemented, and how we can create policies or some kind of oversight to ensure that there is some recognition of how these things are impacting folks.
(00:51:04):
So with that, I know we talked a little bit about techno chauvinism, which was, I was excited to learn a little bit more about that and I think we'll discuss it a bit more. Could you speak a little bit to that, that is if, if it's the same as techno solutionism? We had a podcast a few months ago on techno ableism, really diving deep into the intersection of tech and disability discrimination. So could you all speak a little bit about that? And since we already covered it a little bit, if we wanna take it a step further, how can collaboration between AI developers and the disability community or AI developers with disabilities make and create more inclusive tools?
Larry Weru (00:51:55):
Sure, I can hop in. So Larry speaking. So I think I'm in an environment that highly values technology, not necessarily just where I work because we are working with how to incorporate technology into how we explore and learn from b-b-b-biology and medicine, but even in the area that I'm in. So I'm currently, I live like maybe a 10-minute walk from MIT and it's this area called K-K-Kendall Square, and there, there's a lot of big companies who have historically done a lot of work with technology. G-Google has headquarters there. There's Akamai and some other older companies as well. But also it's like one of the current centers of this AI revolution as well. You'd take any week out of the past couple months and there's been some kind of AI-related event or conference or something. (laughs)
(00:53:05):
And so like I'm in this space where I, uh, I get to see where the current ... like when people are creating startups, they are usually trying to, they're taught to solve a problem and then create a company that can address that. And I'm seeing that we're currently, I mean, so there's what you should do and what actually happens, (laughs) I guess, is what I'm g-g-getting at. I think like, right now, we're in this moment where we're really starting to see the value out of AI even though it's been around for a while. I think, like once ChatGPT had its moment, a lot of people have been creating AI-related companies or finding out ways to incorporate AI into their technologies.
(00:53:57):
So we're taking this perspective that there's this, this new technology out here and it can address all of the issues that we have, or asking how it can, and we're really trying to sometimes fit like a square peg into a round hole, because I think that we have some problems that do not need AI to be solved. We have problems that arguably don't even need technology at all to, to be solved, but we tend to go towards technology as the solution. I mean, I say this as somebody who learned to code when he was 11, and I have a tech job and I'm using technology to have this conference. So I, I like technology, but I, I like technology when it's actually solving problems. And I like solutions to problems more than I like technology.
(00:54:47):
And I think a good example of this would be, a couple weeks ago, I was at a hackathon. The hackathon was cosponsored by a community organization that did a lot of good work for folks who were blind. And the event was fantastic. The outcomes of the winners of the event, I felt, were meaningful, but only because at the event there were people who were deaf, for example, who were mentoring and even somewhat like checking people as they were coming up with these solutions that weren't really solving any real problems. But the part that is most memorable about that, that, that event was when I had lunch and sat down next to a group. Everybody at the table was blind, except for me and the person sitting across f-from me, and one of them is also someone who is dia-b-b-betic, which is actually a common cooccurrence. For example, if you have glaucoma, you might lose eyesight.
(00:56:06):
So it's fairly common for someone who is blind to maybe use a blood g-g-glucose monitoring app. This person was showing me his app, which he navigates using a screen reader and he was asking me, "Can you tell me what this g-g-graph says on this view?" Because this view of this blood g-g-glucose monitoring app just had one graph, or actually like he said, 20% of the graphs that are on this app aren't usable for him and his s-screen reader. And that is a problem that could have been solved 20 years ago. There's no new technology that needs to exist for that. It just needs people realizing that they have users who, in this case, are blind, and therefore, we should design our software to, to be usable by folks who are blind.
(00:57:05):
But I don't know, like it's frustrating to see that outside of that hackathon, when I attend a lot of AI-related events just to see what are the new startups, there are a lot of startups that could potentially make a meaningful impact if like ... Like, I mean, t-there's s-solutions that are really not caring about this audience. And I think, as long as you do not consider people who are d-disabled as people who are navigating this world and using technology and other p-p-p-parts of this world, even outside of technology, you're not gonna have good technology s-solutions and you're also not gonna come up with solutions that don't even need technology. Like you just might not realize that you're hosting an event without an elevator and you're telling everybody to go to the top floor f-f-for a reception, and like that's not a technology problem. That's just making sure that you choose the right location to include everybody. Uh, I feel like this was long-winded, but just wanted to drop t-t-that in.
Ariana Aboulafia (00:58:26):
Yeah, this is Ariana and I'll, I'll go back to the, the question about techno solutionism and, and techno ableism. One of the ways I, I think about this is that, in a lot of ways, techno ableism is a combination, right? It is exactly what it sounds like. So techno solutionism is the idea that technology can solve any problem, and also secondarily, that technology is the best way to solve every problem, right? That's, that's techno solutionism and, uh, and again, it's a bit oversimplified. Then that is combined with ableism, part of ableism being that disability itself is a problem that needs to be solved. So with one of the ideas of ableism being that disability is a problem that needs to be solved, and one of the main tenets of techno solutionism being that technology can solve every problem and that it is the best way to solve problems, when you have those two things combined, I think that's a good way to think about techno ableism.
(00:59:26):
And I'm sure, I, I think you had the, the creator of that term on your podcast. I'm sure she said much more than that, but that's the way that, that I think about it, right? And to go off of what, what Larry was saying, there are a few ways to go about that in the context of these systems that we're talking about, right? And one would be to say, "Actually, disability is not a problem to be solved," right? That's, that's the first thing: yes, people with disabilities may face certain struggles, but maybe those struggles are more a, a consequence of systems than of disability itself, right? And disability doesn't need to be solved, and, if given the choice, people with disabilities would not necessarily all choose to not have disabilities, right, or to not be disabled.
(01:00:13):
So that, that's, that's counteracting some of the ableism side of things, right? And then there is counteracting some of the techno solutionism side of things. And this is also part of what Larry was saying, right, which is to say, "And also technology is not the best way to solve that particular problem because there are other ways to solve it." So I think that using those sorts of frameworks can be really beneficial when we're thinking through and talking through some of these systemic examples of ways in which AI and technologies impact folks with, with disabilities. But, Jen, I, I wanted to turn it over to you because you were the one who brought up techno chauvinism.
Jennifer Gray (01:00:53):
Yeah. Thank you, Ariana and Larry, for your comments. So techno chauvinism is a kind of similar, you know, umbrella term that again I learned through Meredith Broussard. Techno chauvinism essentially is saying that technology is the best way to go about these things, that technology and computers themselves are inherently better, more efficient, smarter than human solutions. And this is implying that computer science, that math, that these algorithms behind this technology are unbiased and always correct, which, throughout this whole podcast, we have found to not be the case. So I think really, and you've both talked about this, Larry and Ariana, but essentially during the design process, and going back to us talking about these biased training sets that AI technologies use, it is including disabled people in the design process, in the research and development process, having disabled people be in the room leading and contributing to this design. And a common rallying cry in the disability community is, "Nothing about us without us."
(01:02:19):
If this technology is not only going to impact everyone, but is going to impact specifically disabled people, disabled people need to be leading the way on this research, on collecting the data, on designing the tech. There's just such a lack of disability representation at so many levels before the technology is even created that this bias exists. And it's so difficult to retroactively go and change these datasets or try and remove bias. The bias is already ingrained and exists in something like Google's algorithm or ChatGPT. And I, I think you both have touched on this, but so many very simple solutions could have been created and implemented that not only help disabled people, but help nondisabled people, and this can be seen in, like, closed captioning.
(01:03:15):
I know so many people who are not disabled or do not identify as disabled that use closed captioning, and it's super helpful. So I find that if something is beneficial for the larger disability community, it's usually beneficial for everyone. And so, having disabled people within, um, the design process, and I know I've jumped away from talking about techno solutionism and techno ableism, but one, including disabled people at every portion of the, um, creation process. And then two, of course, realizing that, yeah, exactly what Ariana said, disability is not a problem to be fixed. It is really society (laughs) that needs to be fixed, and these barriers that exist. And technology is not always the correct way to solve so many issues. And our overreliance on tech, especially tech that is underdeveloped and underresearched, can be super harmful, and we see that a lot with surveillance technology, for example, um, but I'll stop there.
Keith Casebonne (01:04:35):
This is Keith. Those are really great points, and yeah, it's not that technology is bad, it's the overreliance on technology that can be, that can lead to, to problems. But you did mention a couple of good things like closed captioning and things like that and so we'll use it as a good segue to pivot into another side of things, because we've been talking for the most part about the problems with AI related to people with disabilities, but there are some good things. So let's talk a little bit about how AI can be helpful for people with disabilities. Some of the things that come to mind are generative alt text, some aspects of assistive technology. Let's, if, if y'all could expand on that a little bit, talk about what some of the positive developments have been as far as AI and, and disability.
Jennifer Gray (01:05:15):
I can hop in here really quickly and then turn it over to my colleagues. As I mentioned earlier, I identify as neurodivergent and I have multiple chronic illnesses and, and brain illnesses that I, I find certain technologies to be really helpful with, so one of them being actually AI. So I love having an AI notetaker that will join me on some meetings, especially if I'm having a particularly bad brain fog day. Sometimes it can be difficult for me to take notes as quickly as I need to and something as simple as having an AI notetaker that can record and synthesize information for people with intellectual and developmental disabilities, folks who are neurodivergent or a whole host of other disabilities, these tools can be super helpful and really awesome. That's just one example.
Larry Weru (01:06:10):
This is Larry speaking. Yes, earlier, I was talking about v-v-v-voice recognition software and how I don't feel like it hits the mark for me all the time, but that technology has been something that has really enabled certain folks to work in spaces where the current infrastructure would otherwise not s-s-support their work. Like as one example, we know that software engineering, it's a field where you can make a lot of money, which is important, because earlier, we were talking about e-employment and how employment has an impact on q-q-quality of life. And we know historically, that those who are disabled experience a lot of discrimination in employment, um, and are underemployed.
(01:07:08):
And it's really interesting to me that, in 2024, you can be a software engineer and you can code using v-v-voice recognition, which is really something that can open up a lot of work for somebody who might have a hard time using k-k-k-keyboards or a mouse. And that's one example where like we do need even more improvements of voice recognition technology, because as of today, it's still not perfect. You still have to hassle around with it to get it to understand what you're trying to say. And if it can b-b-better understand what somebody is trying to say, then that's one example of where it can reduce barriers that currently exist in employment f-f-for one group in one role.
Maddie Crowley (01:08:05):
This is Maddie, just as a side note, Jen, I wanna know what notetaking software you use. And also, I don't know if this is out of the scope of the question, but how do you, and like how do people with disabilities, balance something like an AI notetaker with, say, making sure that i-information stays private and safe, and like the intellectual property of an organization? Or how does that information stay secure?
Jennifer Gray (01:08:31):
Yeah, yeah. Thank you so much, Maddie. This is Jen speaking, so I use Otter.ai. It's called an OtterBot and it's, it's quite common, I've actually met other people from other organizations that like it as well. So in terms of how we balance privacy, first and foremost, if I go into a meeting, participants will always know if I am using the software or not, and can say if they don't want to be recorded, because inherently what it does is it, um, voice records and then generates a transcription of the meeting, and then from there, will, um, create notes, AI-generated notes with different sections, um, which I myself will go through and check for accuracy, but generally it does tend to be quite accurate, which is nice.
(01:09:26):
In terms of privacy, it's kept on a secure server, and to my knowledge, OtterBot does not share any of your data or use your data. It's a subscription service, and you can use it as a business-wide subscription where only people associated with your organization, for example, New Disabled South, have access to this tool, and then within that, only you have access to your meetings and your transcriptions. And of course, if there is a meeting that is either gonna have very sensitive information or someone doesn't consent to being recorded, then I will not include the OtterBot notetaking.
Maddie Crowley (01:10:11):
This is Maddie. Thank you so much. Yeah, I, I didn't realize that that could be a really important like nuance of this conversation, and like how we balance people, like the individual with disabilities, like privacy as to what they utilize and how much they wanna share with others about what tech they utilize in the workplace. But also for folks who are listening, maybe they're like, "Hey, I want that. Like that will help me," which is like resonating with me right now. It's funny, I did not plan this, but it segues really well into our next question, which is, how can people do their best to protect themselves from these various issues we've talked about and be informed of their data rights and access-related rights? Which I think we've been seeing a lot in the news lately and I think could be a really great, I don't know, conversation. I'd love to hear from y'all.
Larry Weru (01:11:04):
Oh, Larry here. So I think that this is an interesting question, uh, because there's, there's historically been this issue where companies collect your data and essentially they end up selling your data as part of like how they make money. And then there's other, uh, types of companies which don't directly sell your data, but they leverage it in order to make money. And like in the past, it almost felt like that was something that didn't really get checked a lot and it was nice to see movements in Europe like with G-G-G-GDPR, which essentially was the, the government saying, "Hey, companies, you need to do more to protect the data of our citizens, because ultimately, p-p-privacy is a right that somebody should have," and unfortunately, I haven't seen that same level of like involvement with the g-government in the US and I'd love to see the g-government like really try to protect t-the privacy of the consumers because I feel like, while there are things that individuals can do to try to shield themselves, it's, it almost always feels like a losing t-t-t-t-the ...
(01:12:46):
Sorry, it almost feels like it's something that ultimately won't really change a lot of things if you don't have a systemic change that results in people not having to take the onus of their own self-protection. But I guess outside of that, outside of that like concept, I think there still are things you can do. For example, if you wanna chat with a friend, you don't necessarily have to use iMessage or any other chat tool. You could go out of your way to ensure that you're using something that has end-to-end encryption as, as one example. And when you kind of like send this signal, signal into the market that this is something you want, that's one way that you can also get other companies to start including those features.
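The end-to-end encryption idea Larry mentions can be sketched in a few lines: the two endpoints share a secret key, so any relay in the middle only ever handles ciphertext it cannot read. The toy below uses a one-time pad (XOR with a random key) purely for illustration and is not how real messengers work; production apps use far more elaborate protocols with key exchange and forward secrecy.

```python
# Toy illustration of the end-to-end encryption principle: only the two
# endpoints hold the key, so a relay server sees only unreadable bytes.
# A one-time pad (XOR with a random key at least as long as the message)
# keeps this sketch dependency-free.
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with the matching key byte (one-time pad)."""
    assert len(key) >= len(message), "one-time pad key must cover the message"
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse, so decryption is the same operation

# The two friends share a key out of band; the relay never sees it.
key = secrets.token_bytes(64)
ciphertext = encrypt(b"meet at noon", key)  # all the relay stores/forwards
plaintext = decrypt(ciphertext, key)        # what the recipient recovers

print(plaintext)  # b'meet at noon'
```

The design point is the one Larry raises: because the key never touches the company's servers, the service provider cannot read, sell, or leverage the message contents even if it wanted to.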
(01:13:40):
For example, if you say, "I'm not gonna use your service if you sell my data," and you actually can show companies that that is something that is going to impact like how their tools get used, or if their tools get used, then at scale, that's one way, but, uh, it still requires more than one person or people acting indiv-v-vidually. I think like one thing that somebody can do for sure is advocate for new laws, and those laws are how you can e-ensure that what you are using is safe, because corporations usually won't try to advocate for more than they are required to, unless they're saying, kind of like what Apple did, "Hey, we're going to make sure our phones are secure." I'm not sure if this is still the case in 2024, but unless you're like one of the one or two companies that says, "This is what we do because we think this is what some of our customers will want and this is how we can differentiate ourselves," it's not gonna be something that you'd get from the average company.
Ariana Aboulafia (01:14:46):
Yeah, this is Ariana. I, I just wanna jump on one of Larry's points that I think is really important. I agree with him. I, I think it's extraordinarily difficult for any individual with or without a disability to be able to adequately protect their privacy with the landscape the way that it is right now. The, the thing that I'll add there is that for folks with disabilities, it may be particularly difficult, because they may not necessarily have choices to use all of the products, right? Because some products, like we've talked about earlier, right, some products are more accessible than others. And so for some folks, it, it, it can be a potential tradeoff, accessibility versus privacy, right? And that's not a, a, a choice that is being made per se in my opinion. It's, it's barely a choice at all if, if the decision is between a technology that, as an example, helps someone, um, live their daily life, whether that's go grocery shopping or cross the street or, or, or anything, or work, right, in, in the course of their employment.
(01:15:56):
The, the "choice" between using those, even if they may or may not be privacy protecting, and not using them if a better alternative is not available that is privacy protecting, isn't really a choice and I think it's really important to call that out, right? Both of those things. Both from the jump, as, as, as, as my colleague said, that it's really difficult for anyone, but then secondarily, not just that, that the folks with disabilities may have these particular considerations that make these choices even less, even less of a choice. And, and also I'll say that the type of data that folks with disabilities may be disclosing can be particularly sensitive, right? And, and all of those things combine to make some very specific concerns when it comes to folks with disabilities and privacy that I don't necessarily think can be solved on an individual level.
Keith Casebonne (01:16:58):
Well, that's interesting and I, I think it segues into sort of a wrap up as to, how can people learn a little bit more about AI and, and all of this stuff and just get a little better educated about what they're getting into when they use some of these tools and whatnot and any other closing thoughts y'all may have just about the subject?
Larry Weru (01:17:15):
Mm-hmm. Larry here. Uh, it's interesting, because like right now, AI is having this moment in the media. It's almost in some ways hard not to just hear about it, because I'm walking down the street, I'm in the subway and I'm looking over on somebody's phone and they're using ChatG-G-GPT as just one common example. So like sometimes I, I wonder if, I feel like there, there, there might be a hurdle now that didn't exist before in kind of like teaching people about what AI is because there's only one part of it that's really in the mainstream right now. And also now, if you go to search on the web, there's a lot of marketing around it as well that didn't exist before.
(01:18:09):
Like I remember, 10 or so years ago, I was in this, there's this website called Hacker News or like news.ycombinator.com. It's basically a Reddit for a lot of tech people to talk about current events in tech. And back then, you could really see a lot of discussions that went into all the nuts and bolts of machine learning. You could learn about all these new tools; like TensorFlow, at the time, was one of the first like mainstream tools that, that you could use to get going on learning machine learning. And I'd say even then, it was still very hard to understand what is all of this, because you would have to piece together all of these different things and it's something that was changing all the time.
(01:19:03):
And back then, the disad-v-v-v-vantage was nothing was really mature enough and also there was no common example. Like today, I can just say ChatGPT to a random person and they'll understand what AI, they understand like an aspect of it from that. Um, so I guess what I'm trying to say is, if you want to learn about AI, there are groups you can find online that do get more into the details of what it is. It's something that has been in discussion, even in this current form with generative AI as, as one example. That's something that's been in conversation at least over the past 10 years. So if you want to kind of like catch up with what's been happening, I would start within this 10-year frame. Um, I'm sure there's gonna be somebody who does a YouTube channel on this as well. Um, YouTube is an excellent way to learn.
(01:20:11):
But I guess t-t-t-t-the main concept is it's more than just large language models. There's this current discussion right now as to whether small language models might be useful as well because kind of like t-t-t-the main concept is large language models were created towards this pursuit of generative AI, I'm sorry, general (laughs) intelligence where you can like throw anything at one thing and it will give you the solution which is contrary to more historical AI, which was very trained for one task. So there's this current discussion on small language models, which might be things that don't need a large infrastructure to run. Like maybe they can run locally on your device. Then in that way, maybe they might be more secure.
(01:21:09):
If you can get Otter.ai on your phone without it needing the internet, it could be more secure, um, but I'd say, just look into all the current terms. We've mentioned a couple of terms in this d-d-discussion. I personally tried to avoid using terms because they tend to mean different things to different people, but just understand that it's broad. It's b-b-been around for decades and the current iteration of it is one form and there might be things from a couple decades ago that might be applicable as well a-as you're learning it.
Ariana Aboulafia (01:21:46):
And, and this is Ariana. The one thing I would add to Larry's point is I, I think, if I were talking to folks who are interested in this topic and wanna learn more, particularly folks with lived experience with disability, I would say just to recognize that your experience as a person with a disability is incredibly valuable in thinking about the ways that AI can impact disability, right? You don't need to be an engineer. You don't need to be a coder, right? Like I have no idea how to code. Larry, I'm sure if I, uh, ever encountered you coding, I would be scared because (laughs) it's, it's not a thing that I know how to do.
(01:22:29):
I'm not an engineer, I'm not a computer scientist, I'm an attorney, so I, I have a, a civil rights bent to this work, but more importantly, right, I'm a person with disabilities, multiply marginalized person with disabilities who has a lived experience of engaging with all of the systems that we, we come into contact with, right? And that experience is valuable. You don't necessarily need to be a technologist, a coder to both, a, be interested in, in this, but also secondarily, to really be able to make an impact here. And so I think those would be, that would be a, a message that I think would be important for folks who may be listening and, and it, this piques your interest, but, but you think, "I don't actually understand how these technologies work," that, that you don't need to, right?
(01:23:20):
We definitely need folks, preferably folks with disabilities, who do understand how these technologies work, but it doesn't necessarily have to be you, because you may bring something completely different from your lived experience that can be really helpful. So I, I would probably end it there and turn it over to Jen.
Jennifer Gray (01:23:38):
Yeah. Thank you so much, Ariana. And, Larry, to your point, I think you, you guys covered that beautifully. I mean, honestly, yeah, going back to as someone who did not go to school for this, as I mentioned earlier, I actually studied neuroscience and I have a molecular biology background, technology was something I was just personally interested in, especially AI. And a lot of what I learned has been from YouTube. And also to Ariana's point, there's so much innovation that comes out of necessity and need and gaps in the current technology that disabled individuals are at the forefront of, of using technology in these really creative ways and creating loopholes for themselves and shortcuts to make things in their life easier.
(01:24:31):
So I feel like a lot of what I've learned about so many things, not just technology, but technology especially, has been from just following other disabled creators on social media, on websites, blogs, YouTube, watching where the gaps are in tech and what disabled folks need and then how they're using technology to their advantage. And what I really ultimately want to work with my colleagues on and achieve is this narrative change that, well, yes, technology can have these really negative impacts and be quite ableist, but we, as disabled people, can use technology to our advantage and it can be an incredible and empowering tool. And again, it just goes back to working to dismantle these systemic barriers that exist for disabled and multiply marginalized folks with technology and, yeah, empowering us in technology. Yeah. (laughs)
Keith Casebonne (01:25:38):
Well, well, we really wanna thank all three of you for being our guests today. This was a really interesting discussion, touching on so many different areas, uh, of, of AI that, uh, I'm, I'm sure most of the listeners, I know I myself, we're, we're not aware of a number of these things. So many things out there; I think we've all learned a lot today. Just thanks again. Um, we really appreciate, uh, you taking the time to discuss this topic in depth. It's a very important topic. And again, thank you so much. We're really honored to have all three of you as our guests today and we appreciate you.
Jennifer Gray (01:26:08):
Thank you for having us.
Ariana Aboulafia (01:26:09):
Thank you so much.
Larry Weru (01:26:09):
Thanks. I really enjoyed the convo.
Keith Casebonne (01:26:11):
Hey, this is Keith – before we move to our closing, I just wanted to quickly interject with something rather ironic that happened during the post-production of this episode. So after editing our podcast episodes, I run them through a program that does some final post-production, checking levels, removing noise, that sort of thing. It’s an automated process, and guess what, it uses AI to do those tasks. Well as you just learned, AI can be quite ableist in its actions. You also heard in the episode that Larry, one of our guests, speaks with a stutter.
(01:26:41):
Well … the AI doing the audio processing removed much of Larry’s stutter from the audio, treating it as if it were background noise that it needed to remove. Yeah, that’s right: in the production of an episode about AI being ableist, an AI did an ableist thing and removed spoken audio from a guest with a disability. We couldn’t help but add this little anecdote to the episode; it was just too ironic and “on the nose,” I suppose. Uh, I reprocessed the audio and restored our guest’s natural voice. Needless to say, I also let the vendor know about this in the hopes that maybe the awareness will add some training to their AI tools, but we’ll see. Anyway – thanks for listening, and on to our closing!
(01:27:22):
Thank you so much to Ariana, Larry and Jennifer for being on the podcast. Uh, I truly learned so much and appreciate y'all's time.
Maddie Crowley (01:27:30):
Indeed. Make sure you check out the show notes to learn more about the other things we mentioned in this episode. We'll make sure to link to various things right there for you.
Keith Casebonne (01:27:41):
Yeah, and please take a moment to hit subscribe wherever you're listening to the podcast, so you'll get notifications when new episodes drop. Also feel free to leave comments, ratings, reviews. That's always appreciated. We are on all the major podcast platforms, and you can also listen to or read the transcript of each episode on our website at disabilityrightsflorida.org/podcast.
Maddie Crowley (01:28:04):
Thanks so much for listening, and as always, please email any feedback, questions or ideas about the show to podcast@disabilityrightsflorida.org.
Announcer (01:28:16):
The You First podcast is produced by Disability Rights Florida, a not-for-profit corporation working to protect and advance the rights of Floridians with disabilities through advocacy and education. If you or a family member has a disability and feel that your rights have been violated in any way, please contact Disability Rights Florida. You can learn more about the services we provide, explore a vast array of resources on a variety of disability-related topics and complete an online intake on our website at disabilityrightsflorida.org. You can also call us at 1-800-342-0823. Thank you for listening to You First: The Disability Rights Florida Podcast.