

When AI Can’t Hear You, It’s Not Neutral — It’s Designed That Way


I’ve been thinking a lot about who gets heard by AI—and who doesn’t. We tend to talk about artificial intelligence as if it’s neutral. Objective. Just math and data. But for many autistic people—especially those who are minimally speaking or nonspeaking—AI systems don’t just fail sometimes. They quietly shut people out. That’s what my paper (currently under peer review) is about: something I call engineered exclusion.

What do I mean by “engineered exclusion”?


Engineered exclusion is when technology predictably leaves certain people out—not because of a bug, but because of how the system was designed from the start.
Most AI communication tools assume a very specific kind of user:
  • Speaks fluently
  • Speaks quickly
  • Uses “standard” English
  • Communicates in neat, predictable ways
If that’s not you, the system often decides—without saying it out loud—that your communication doesn’t count. For many minimally speaking autistic people who use AAC (augmentative and alternative communication)—text-to-speech, letterboards, gestures, partial speech—this shows up everywhere:
  • Voice assistants that don’t recognize their speech at all
  • Text-to-speech voices that mispronounce basic words or names
  • Systems that require extra labor just to be understood
  • Interfaces designed more for caregivers than for the user themselves
The exclusion isn’t random. It’s built into the pipeline.

“Nonspeaking” doesn’t mean “no language”


One thing I want to be very clear about: Nonspeaking is not the absence of language. Many nonspeaking and minimally speaking autistic people have rich, complex thoughts and communicate in multiple ways, often depending on:
  • Fatigue
  • Anxiety
  • Sensory overload
  • Motor planning demands
  • Environment and predictability

AI systems, however, tend to flatten all of that variation into a single question: Does this look like typical speech or not? If the answer is no, the system often treats the user as noise.
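
One way to picture that flattening: many recognition pipelines attach a confidence score to each utterance and quietly drop whatever falls below a cutoff. The sketch below is hypothetical (the function, the threshold, and the example scores are assumptions of mine, not drawn from any particular product), but it shows how quickly “not typical” turns into “discarded”:

    # Hypothetical recognition step: utterances the model finds unfamiliar get a
    # low confidence score and are silently dropped instead of being passed on.
    CONFIDENCE_CUTOFF = 0.80  # assumed value, purely for illustration

    def handle_utterance(transcript: str, confidence: float):
        """Return the transcript if the model is 'sure enough', otherwise discard it."""
        if confidence < CONFIDENCE_CUTOFF:
            return None  # from here on, the user is treated as background noise
        return transcript

    # Fluent, expected speech tends to score high; AAC output, partial speech, or
    # atypical prosody tends to score low -- and simply vanishes from the pipeline.
    print(handle_utterance("turn on the lights", 0.95))  # turn on the lights
    print(handle_utterance("li... lights on", 0.41))     # None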

Why this keeps happening


AI systems learn from data—and the data overwhelmingly comes from:
  • Fluent speakers
  • Neurotypical communicators
  • Majority-language users
  • Western norms of “clear” expression

Then we evaluate those systems using benchmarks that reward speed, fluency, and predictability. So when a system fails to understand a nonspeaking autistic user, the problem isn’t labeled exclusion. It’s labeled error. And the burden to fix it gets pushed onto the user—who has to type things phonetically, add extra spaces, reword sentences, or give up altogether. From the system’s perspective, everything looks fine. From the user’s perspective, communication becomes exhausting.
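
To make this concrete, here is a minimal sketch in plain Python. The groups, user counts, and word-error-rate (WER) numbers are invented for illustration, not taken from the paper or from any real system; the point is only that a single averaged score can look healthy while one group of users is effectively not heard:

    # Hypothetical word-error-rate (WER) results for an imaginary speech system.
    # Group labels and numbers are made up purely for illustration.
    results = {
        "fluent_speakers":        {"users": 9500, "wer": 0.06},
        "aac_and_partial_speech": {"users": 500,  "wer": 0.62},
    }

    # The usual benchmark view: one number, weighted by group size.
    total_users = sum(g["users"] for g in results.values())
    average_wer = sum(g["wer"] * g["users"] for g in results.values()) / total_users
    print(f"Average WER: {average_wer:.2f}")  # ~0.09 -- looks fine on paper

    # A view centered on whether each group can actually be heard.
    for name, group in results.items():
        print(f"{name}: WER {group['wer']:.2f}")  # 0.06 vs 0.62 -- one group is shut out

The averaged number rewards the system for serving the majority well; the per-group view is the one that notices who has been left out.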

Designed dignity: a different way forward


The paper doesn’t just critique what’s broken. It proposes a shift in how we think about accessibility. I call this designed dignity. Instead of asking, “How do we retrofit accessibility after the fact?” designed dignity asks, “What if we treated human variation as expected from the start?”
That means:
  • Valuing expressive access as much as input accuracy
  • Designing for communication that changes over time and state
  • Measuring whether people can be heard, not just whether the system performs well on average (see the sketch after this list)
  • Including nonspeaking autistic people (and their families) as co-designers, not edge cases
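
On the measurement point in particular, here is a small, purely hypothetical sketch (the names, numbers, and threshold are mine, not the paper’s) of what it might look like to gate a system on its worst-served group rather than on its average:

    # Sketch of an evaluation gate that asks "can every group be heard?" instead
    # of "how good is the average?". Group names and the floor are assumptions.
    def heard_rates(per_group_wer):
        """Turn per-group word error rates into rough 'can be heard' rates."""
        return {group: 1.0 - wer for group, wer in per_group_wer.items()}

    def passes_dignity_gate(per_group_wer, floor=0.85):
        """Pass only if every group, including the smallest, clears the floor."""
        return min(heard_rates(per_group_wer).values()) >= floor

    print(passes_dignity_gate({"fluent_speakers": 0.06, "aac_users": 0.62}))  # False
    print(passes_dignity_gate({"fluent_speakers": 0.05, "aac_users": 0.12}))  # True

The design choice is small but telling: the minimum, not the mean, decides whether the system is acceptable, so no group can be averaged away.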

Accessibility isn’t a bonus feature. It’s part of whether AI can honestly claim to be fair.

Why I wrote this


AI is rapidly becoming the middleman for how people communicate—at school, at work, in healthcare, and in public life. If we don’t question whose communication counts now, we risk hard-coding old forms of ableism into the infrastructure of the future. This paper is my attempt to slow that down and say: Let’s design systems that don’t just listen—but listen on human terms.

