A map of multisensory illusions—and what they reveal about autism and ADHD

I recently wrote a Psychology Today piece [Why Perception is Not Just What We Sense] about a simple idea: perception isn’t something we receive. It’s something the brain builds. I used a few familiar illusions—the McGurk effect, the stream–bounce illusion, the sound-induced flash illusion, and the parchment-skin illusion—to show what happens when the building process becomes visible.

What I couldn’t fit into that article is the part I think about most: illusions aren’t one category of party trick. They’re a toolkit. Different illusions probe different “decisions” the brain has to make—about timing, about cause, about whether signals belong together, about what counts as part of the body, and about how much certainty is “enough.”

And once you see that, a lot of so-called sensory problems start to look less like raw sensitivity and more like sustained interpretation.

The brain’s job isn’t accuracy. It’s a usable story.

In the PT article, I described how a click sound can make two ambiguous moving dots feel like they bounced rather than streamed through each other. Or how two quick beeps can make a single flash look like two flashes. These aren’t failures. They’re examples of something the brain does constantly: it takes incomplete data and tries to settle on a stable explanation.

Most people never notice the settling. That’s because in everyday life, the senses usually cooperate. The world is redundant. Events line up. The brain can lean on shortcuts.

But when signals are noisy, conflicting, or slightly out of sync, the shortcuts matter more. Some nervous systems rely on those shortcuts heavily. Others apply them more cautiously. Either way, the same problem exists: the brain has to decide what belongs together.

That decision is where the work is.

A quick map: what different illusions are actually testing

If you want to understand multisensory processing—especially in autism and ADHD—it helps to sort illusions by what kind of decision they stress-test.

1) “Where did it happen?” (spatial capture + recalibration)
Classic example: the ventriloquism effect, where vision pulls perceived sound location. With repeated exposure, the brain can even recalibrate (“ventriloquism aftereffect”).
What it tests: cue weighting (which sense do you trust more for location?) and the brain’s willingness to “retune” when cues conflict.

2) “When did it happen?” (temporal binding + temporal recalibration)
This includes the sound-induced flash illusion, and a whole family of tasks measuring how wide or narrow someone’s binding window is.
What it tests: temporal grouping rules—how strict your brain is about what counts as “the same moment.”

3) “What happened?” (causal structure under ambiguity)
The stream/bounce illusion is the cleanest example: same visual data, different interpretation. A click nudges the brain toward “collision.”
What it tests: causal inference—is this one event or two? Did they interact? Did something cause something else?

4) “What am I looking at?” (identity and speech binding)
The McGurk effect lives here. It’s not about where or when—it’s about what the percept is.
What it tests: how strongly the brain fuses cues into a single identity when they disagree.

5) “What is my body?” (body ownership + body schema)
Think rubber hand illusion–type paradigms.
What it tests: which signals win when defining “mine”—vision, touch, proprioception, agency.

6) “What does it feel like?” (material and surface properties)
The parchment-skin illusion is a great example: sound changes perceived texture/dryness.
What it tests: how the brain constructs material qualities—often from cross-sensory priors about what roughness should sound like.

This map matters because it shows something subtle: “sensory issues” aren’t one thing. You can struggle with timing but not localization. Or be great at spatial integration but get wrecked by causal ambiguity. The word “sensitivity” flattens all of that.

Autism, ADHD, and the cost of unresolved decisions

Here’s the extension I wanted to make more explicitly.

A lot of the exhausting moments aren’t “too much input.” They’re moments where the brain can’t quickly settle: Is that voice and that face one event, or two? Is that sound part of this movement, or background? Is this touch coming from me, or from something external? Is this mismatch important, or ignorable? Do these signals belong together enough to fuse?

When those decisions resolve quickly, you get smooth perception: one coherent world.
When they don’t, you get something else: attention sticks. Not because you’re dramatic, but because the brain is still doing the job.

This is where autism and ADHD can look similar—both can involve distractibility, overload, and fatigue—but for different reasons. In broad strokes:

  • Autistic perception is often described (in some lines of research) as more cautious about fusing when cues conflict—less reliance on “automatic unity.” That can preserve fidelity, but it can also make the world feel less forgiving when signals don’t match.

  • ADHD can involve instability of attention and salience, where the system has trouble holding one interpretation steady long enough for it to become background.

Those are not diagnoses in a sentence. They’re just a way of naming what many people recognize: the brain is not only sensing; it’s negotiating. And the negotiation has a metabolic cost.

The real accessibility lever: alignment, not elimination

If perception is “work under uncertainty,” the goal isn’t to remove all stimuli. That’s impossible, and it’s not even always desirable. The lever is simpler:

Reduce unnecessary conflict. Reduce forced decision points.

That can look like:

  • Better audio-video sync (even tiny lags matter).
  • Cleaner acoustics (less masking and fewer competing streams).
  • Predictable rhythms (consistent pacing in speech, predictable transitions).
  • Fewer simultaneous demands (don’t pair complex listening with complex navigation).
  • Environmental design that minimizes “sensory disagreements” (e.g., harsh lighting + echo + crowd movement is a perfect storm).

Sometimes the most supportive change isn’t dimmer lights or quieter rooms. It’s coherence. Less mismatch. Less ambiguity. Less “invent reality just to keep going.”

A different reframe

The point of illusions isn’t that we’re easily fooled. It’s that the brain is always choosing between interpretations, and it usually chooses the one that keeps the world usable.

So when someone says, “I’m sensory,” I increasingly hear: “My brain is doing more interpretation, more often.” And when someone looks “overwhelmed,” I don’t assume weakness. I assume workload.

Sometimes, the illusion isn’t the problem. It’s the clue.

When “Just Try Harder” Isn’t the Problem

We tell students this story early and often: If you work hard enough, you can get there.

That message—usually called growth mindset—has helped a lot of people. It pulls us away from “I’m just not good at this” and toward “I can learn.”


But there’s a quieter question that doesn’t get asked nearly enough: What if I am trying—and the system still doesn’t move? That question is what my new paper is trying to take seriously.


Preprint link: https://doi.org/10.31234/osf.io/x7jru_v1 

Why growth mindset sometimes falls short


Growth mindset focuses on whether abilities can change. That’s important—but it’s only part of the picture. For many disabled and neurodivergent learners (including many autistic students), effort alone doesn’t reliably remove the biggest barriers:


  • Sensory-hostile classrooms
  • Rigid pacing and participation rules
  • Unreliable accommodations
  • Narrow definitions of what “counts” as learning or participation


In those situations, telling someone to “keep trying” can quietly turn into pressure to push through environments that aren’t actually workable. The problem isn’t motivation.
The problem is whether there’s any real path forward in that setting.


Introducing: Possibility mindset

Instead of asking only “Can I get better?”, possibility mindset asks a different question:

Is there room to move here—for someone like me?

Possibility mindset isn’t meant to replace growth mindset. It builds on it. But it adds two missing pieces that matter a lot when constraints are real and persistent. In simple terms, possibility mindset is about whether a future feels realistically open, given three things:

  1. Can I change?
    (Can I develop skills or strategies that matter here?)
  2. Will the environment change?
    (Will this classroom, program, or institution actually adapt in practice?)
  3. Are there legitimate pathways?
    (Are there multiple acceptable ways to succeed—or only one narrow route?)

Motivation depends on how those three beliefs line up.


Why misalignment matters


Here’s a pattern that shows up again and again, especially for autistic and disabled students: Someone genuinely believes they can learn and grow. But they’ve also learned—through experience—that:


  • accommodations are unreliable
  • flexibility exists “on paper” but not in practice
  • only one participation style is treated as legitimate


When that happens, disengaging isn’t a failure of mindset. It can be a rational response to a system that doesn’t bend. Possibility mindset helps explain why someone can believe in growth and still walk away.


This isn’t about blaming the environment (or the person)

A really important point: Possibility mindset is not saying “the environment is always the problem,” or “effort doesn’t matter.” It’s saying that motivation lives at the intersection of:

  • what a person can change
  • what the system will change
  • which paths the system actually recognizes

When those are aligned, persistence makes sense. When they’re not, asking for more grit can backfire—by increasing self-blame without increasing opportunity.


Why neurodivergence makes this visible

Autistic and other neurodivergent learners aren’t a niche case here—they’re a revealing one. When sensory overload, communication differences, health fluctuations, or access friction are part of daily life, the question “Will this system respond?” becomes impossible to ignore.

These contexts make something visible that exists everywhere but is often hidden: motivation isn’t just about belief in yourself. It’s about belief in the path.


What this changes

If we take possibility mindset seriously, it shifts how we interpret “low motivation.”


Instead of asking only:

  • Do they believe they can improve?

We also ask:

  • Do they see any legitimate way forward here?
  • Have they learned that effort pays off in this setting—or not?


And it changes what good support looks like. Not just better messages—but credible, visible flexibility. Not just encouragement—but routes that actually work.


Why I wrote this

Possibility mindset is my attempt to give language to felt experience—and to remind us that sometimes, the most humane question isn’t “Why aren’t you trying harder?” It’s: “Is there room to move here—and if not, what would it take to create it?”

Neurodiversity 2.0: Contemporary Research, Evolving Frameworks, and Practice Implications

Thanks to NIEPID for hosting, and to everyone who joined the conversation today. Lovely to see so many MPhil students joining from all over India. Recording: https://youtu.be/q0ctpgproS4




Breaking the Either/Or Trap: Why Autism Needs Nuance, Not Extremes

Thanks to Chico State for hosting, and to everyone who joined the conversation on nuance in autism. Recording: https://youtu.be/h70I6msB7rA




When AI Can’t Hear You, It’s Not Neutral — It’s Designed That Way

I’ve been thinking a lot about who gets heard by AI—and who doesn’t. We tend to talk about artificial intelligence as if it’s neutral. Objective. Just math and data. But for many autistic people—especially those who are minimally speaking or nonspeaking—AI systems don’t just fail sometimes. They quietly shut people out. That’s what my paper (currently under peer review) is about: something I call engineered exclusion.




What do I mean by “engineered exclusion”?


Engineered exclusion is when technology predictably leaves certain people out—not because of a bug, but because of how the system was designed from the start.
Most AI communication tools assume a very specific kind of user:
  • Speaks fluently
  • Speaks quickly
  • Uses “standard” English
  • Communicates in neat, predictable ways
If that’s not you, the system often decides—without saying it out loud—that your communication doesn’t count. For many minimally speaking autistic people who use AAC (augmentative and alternative communication)—text to speech, letterboards, gestures, partial speech—this shows up everywhere:
  • Voice assistants that don’t recognize their speech at all
  • Text-to-speech voices that mispronounce basic words or names
  • Systems that require extra labor just to be understood
  • Interfaces designed more for caregivers than for the user themselves
The exclusion isn’t random. It’s built into the pipeline.

“Nonspeaking” doesn’t mean “no language”

One thing I want to be very clear about: nonspeaking is not the absence of language. Many nonspeaking and minimally speaking autistic people have rich, complex thoughts and communicate in multiple ways, often depending on:

  • fatigue
  • anxiety
  • sensory overload
  • motor planning demands
  • environment and predictability

AI systems, however, tend to flatten all of that variation into a single question: Does this look like typical speech or not? If the answer is no, the system often treats the user as noise.

Why this keeps happening


AI systems learn from data—and the data overwhelmingly comes from:
  • Fluent speakers
  • Neurotypical communicators
  • Majority-language users
  • Western norms of “clear” expression

Then we evaluate those systems using benchmarks that reward speed, fluency, and predictability. So when a system fails to understand a nonspeaking autistic user, the problem isn’t labeled exclusion. It’s labeled error. And the burden to fix it gets pushed onto the user—who has to type things phonetically, add extra spaces, reword sentences, or give up altogether. From the system’s perspective, everything looks fine. From the user’s perspective, communication becomes exhausting.

Designed dignity: a different way forward


The paper doesn’t just critique what’s broken. It proposes a shift in how we think about accessibility. I call this designed dignity. Instead of asking, “How do we retrofit accessibility after the fact?”, designed dignity asks, “What if we treated human variation as expected from the start?”
That means:
  • Valuing expressive access as much as input accuracy
  • Designing for communication that changes over time and state
  • Measuring whether people can be heard, not just whether the system performs well on average
  • Including nonspeaking autistic people (and their families) as co-designers, not edge cases

Accessibility isn’t a bonus feature. It’s part of whether AI can honestly claim to be fair.

Why I wrote this


AI is rapidly becoming the middleman for how people communicate—at school, at work, in healthcare, and in public life. If we don’t question whose communication counts now, we risk hard-coding old forms of ableism into the infrastructure of the future. This paper is my attempt to slow that down and say: Let’s design systems that don’t just listen—but listen on human terms.

About That Autism Barbie and the Headphones

A few weeks ago, there were a lot of social media posts about something being widely celebrated online: a new Barbie meant to represent autism.

It had noise-canceling headphones. It had an AAC device. It had flexible hands for stimming.


And I felt… conflicted.


That moment is what eventually became my new Psychology Today piece.


Representation can be good—and still incomplete


Let me be clear upfront: AAC matters. Assistive technology matters. Seeing communication differences reflected in a mainstream toy does matter.


But I paused when I saw the headphones. Not because headphones are bad—they’re not. Many autistic people use them, including me at times. But because headphones have quietly become a shorthand for autism itself.


As I wrote in the article, tools meant to support autistic people are increasingly being treated as symbols that define them. That’s where things get tricky.


When autism is visually reduced to one object, it subtly tells a story: this is the fix. Put the headphones on, problem solved.


And that story just doesn’t match reality.


Headphones don’t “fix” sensory processing

One of the biggest myths about sensory differences in autism is that they’re just about loudness. They’re not.


Sensory processing involves:

  • unpredictability,
  • timing,
  • filtering,
  • body awareness,
  • and how the nervous system anticipates what comes next.


Noise-canceling headphones can help with certain kinds of sound, in certain contexts, for certain people. But they don’t:

  • prevent sudden sensory intrusion,
  • resolve auditory–visual mismatches,
  • stop cumulative overload,
  • or regulate the nervous system on their own.


In the article, I put it this way: Headphones can reduce input, but they don’t restore control. That distinction matters—especially for parents, educators, and designers who genuinely want to help.


Sensory experiences aren’t accessories

Another reason I wrote the piece is that autism is so often discussed through what it looks like from the outside. Headphones are visible. AAC devices are visible. Stimming is visible.


But sensory experience itself is mostly invisible.

Two autistic people can wear the same headphones and have completely different experiences:

  • One feels relief.
  • Another still feels overwhelmed.
  • A third finds the pressure uncomfortable.
  • A fourth only benefits in very specific environments.


When representation collapses all of that into a single image, it unintentionally flattens autistic experience. Or, as I wrote: When support tools become symbols, we stop asking who they work for—and when they don’t.


Why the Barbie moment mattered 

It gave me pause because it reflects a broader pattern: good intentions paired with shallow understanding. We’re getting better at saying “autism exists.” We’re still struggling with understanding how autism actually works—especially at the sensory and nervous-system level. That’s why I wanted to write something that didn’t attack representation, but complicated it.


Because real inclusion isn’t about having the right objects on display. It’s about designing environments, expectations, and supports that don’t assume one solution fits everyone.


What I hope readers take away

If there’s one takeaway I hope sticks, it’s this:

Headphones are a tool. AAC is a tool. But neither is autism.

Autism lives in how a nervous system senses, predicts, and responds to the world—often beautifully, sometimes painfully, always uniquely.


If we want better representation, we need to move beyond symbols and toward understanding.