Lonely in a Crowd: When Being There Still Isn’t Belonging

Loneliness is usually imagined as being alone. But many autistic people describe something different—and harder to explain: being surrounded by people and still feeling profoundly lonely. That paradox is what my paper tries to make sense of.


Preprint Link: https://doi.org/10.31234/osf.io/rjeus_v1

The puzzle: lonely, but not isolated


Autistic adults consistently report high levels of loneliness. That part isn’t new.

What is puzzling is this: many of the people reporting loneliness aren’t socially isolated at all. They may be:

  • in classrooms
  • in workplaces
  • in families
  • in social or clinical settings
  • interacting with others every day


And yet, the loneliness doesn’t go away.

Most research has treated loneliness as a side effect of:

  • limited social contact
  • social skills differences
  • anxiety or depression


But those explanations don’t fully explain why loneliness can persist even when social contact is present. So the paper asks a different question: What if loneliness isn’t about absence—but about the conditions of presence?


Presence without belonging


The central idea I propose is called Presence Without Belonging. It names a situation many autistic people recognize immediately: you’re included on paper, but not received in ways that feel livable. You can show up. You can participate. You can comply. But something essential is missing. Belonging, in this framework, requires three things—not just contact.


The three conditions that turn presence into belonging


1. Recognition
Recognition means being understood as a legitimate social subject—not constantly misread, corrected, infantilized, or doubted. Many autistic people describe:

  • having their emotions misinterpreted
  • being spoken over
  • having their self-reports discounted
  • being treated as less credible than others


When recognition is unreliable, interaction becomes fragile. You can be present—but never quite trust that you’re being seen or believed.


2. Access

Access isn’t just about ramps or captions. Social interaction itself has access conditions. For autistic people, access can be blocked by:

  • fast conversational pacing
  • sensory overload
  • reliance on implicit social rules
  • limited tolerance for alternative communication (including AAC)

Even when participation is technically possible, it may require extraordinary effort to keep up. That effort adds up.


3. Sustainability

Sustainability asks a question that inclusion efforts often forget: Can this level of participation be maintained over time without breaking someone down?


Many autistic adults describe the cumulative costs of:

  • masking
  • repairing misunderstandings
  • constantly adjusting
  • repeatedly negotiating accommodations

When participation is only possible by enduring these costs, withdrawal isn’t a failure. It’s self-protection.


Why more social contact doesn’t fix this


This framework helps explain something research has already hinted at: simply increasing social contact doesn’t reliably reduce loneliness. If interaction continues without recognition, access, or sustainability, loneliness can actually deepen. Being present but not received can hurt more than being alone. That’s why reassurance (“You’re not alone!”) or pressure to socialize more often falls flat. It treats loneliness as a personal feeling to manage, rather than a signal that something about the environment isn’t working.


Loneliness isn’t a personal failure

One of the most important shifts this paper argues for is this: Loneliness in autism is often not a failure to connect. It’s a predictable response to systems that allow presence without supporting belonging.


This reframing matters because it:

  • challenges deficit-based explanations
  • explains why socially active autistic people can still be lonely
  • makes sense of why autistic-majority or norm-flexible spaces often feel less lonely—even when social networks are smaller

In those spaces, recognition is more reliable, access is built in, and participation is more sustainable.


Why I wrote this

If we want to reduce loneliness, the question isn’t just:
How do we get people into the room?

It’s: What would it take for being there to actually feel like belonging?

A map of multisensory illusions—and what they reveal about autism and ADHD

I recently wrote a Psychology Today piece [Why Perception is Not Just What We Sense] about a simple idea: perception isn’t something we receive. It’s something the brain builds. I used a few familiar illusions—the McGurk effect, the stream–bounce illusion, the sound-induced flash illusion, and the parchment-skin illusion—to show what happens when the building process becomes visible.

What I couldn’t fit into that article is the part I think about most: illusions aren’t one category of party trick. They’re a toolkit. Different illusions probe different “decisions” the brain has to make—about timing, about cause, about whether signals belong together, about what counts as part of the body, and about how much certainty is “enough.”

And once you see that, a lot of so-called sensory problems start to look less like raw sensitivity and more like sustained interpretation.

The brain’s job isn’t accuracy. It’s a usable story.

In the PT article, I described how a click sound can make two ambiguous moving dots feel like they bounced rather than streamed through each other. Or how two quick beeps can make a single flash look like two flashes. These aren’t failures. They’re examples of something the brain does constantly: it takes incomplete data and tries to settle on a stable explanation.

Most people never notice the settling. That’s because in everyday life, the senses usually cooperate. The world is redundant. Events line up. The brain can lean on shortcuts.

But when signals are noisy, conflicting, or slightly out of sync, the shortcuts matter more. Some nervous systems rely on those shortcuts heavily. Others apply them more cautiously. Either way, the same problem exists: the brain has to decide what belongs together.

That decision is where the work is.

A quick map: what different illusions are actually testing

If you want to understand multisensory processing—especially in autism and ADHD—it helps to sort illusions by what kind of decision they stress-test.

1) “Where did it happen?” (spatial capture + recalibration)
Classic example: the ventriloquism effect, where vision pulls perceived sound location. With repeated exposure, the brain can even recalibrate (“ventriloquism aftereffect”).
What it tests: cue weighting (which sense do you trust more for location?) and the brain’s willingness to “retune” when cues conflict.

2) “When did it happen?” (temporal binding + temporal recalibration)
This includes the sound-induced flash illusion, and a whole family of tasks measuring how wide or narrow someone’s binding window is.
What it tests: temporal grouping rules—how strict your brain is about what counts as “the same moment.”

3) “What happened?” (causal structure under ambiguity)
The stream/bounce illusion is the cleanest example: same visual data, different interpretation. A click nudges the brain toward “collision.”
What it tests: causal inference—is this one event or two? Did they interact? Did something cause something else?

4) “What am I looking at?” (identity and speech binding)
The McGurk effect lives here. It’s not about where or when—it’s about what the percept is.
What it tests: how strongly the brain fuses cues into a single identity when they disagree.

5) “What is my body?” (body ownership + body schema)
Think rubber hand illusion–type paradigms.
What it tests: which signals win when defining “mine”—vision, touch, proprioception, agency.

6) “What does it feel like?” (material and surface properties)
The parchment-skin illusion is a great example: sound changes perceived texture/dryness.
What it tests: how the brain constructs material qualities—often from cross-sensory priors about what roughness should sound like.

This map matters because it shows something subtle: “sensory issues” aren’t one thing. You can struggle with timing but not localization. Or be great at spatial integration but get wrecked by causal ambiguity. The word “sensitivity” flattens all of that.
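For readers who like to see the mechanics, the cue-weighting idea in (1) can be sketched with the standard inverse-variance (“maximum-likelihood”) cue-combination model often used to describe effects like ventriloquist capture. The function name and all numbers below are illustrative, not taken from any dataset or from the paper.

```python
# Minimal sketch of inverse-variance cue combination ("cue weighting").
# Each cue supplies a location estimate plus an uncertainty (variance);
# the fused percept weights each cue by its reliability (1/variance).
# All numbers are illustrative.

def fuse_cues(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian cues."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total  # the fused estimate is more reliable than either cue alone
    return fused, fused_variance

# Ventriloquism-style conflict: vision places the event at 0 degrees,
# audition at +10 degrees. Vision is usually more reliable for location,
# so it gets the larger weight and "captures" the perceived sound source.
location, var = fuse_cues(estimates=[0.0, 10.0], variances=[1.0, 9.0])
print(location)  # lands close to the visual estimate, far from the auditory one
```

In this framing, “which sense do you trust more?” is just the ratio of the variances, and recalibration (the ventriloquism aftereffect) corresponds to the system slowly shifting its estimates after repeated conflict.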

Autism, ADHD, and the cost of unresolved decisions

Here’s the extension I wanted to make more explicitly.

A lot of the exhausting moments aren’t “too much input.” They’re moments where the brain can’t quickly settle: Is that voice and that face one event, or two? Is that sound part of this movement, or background? Is this touch coming from me, or from something external? Is this mismatch important, or ignorable? Do these signals belong together enough to fuse?

When those decisions resolve quickly, you get smooth perception: one coherent world.
When they don’t, you get something else: attention sticks. Not because you’re dramatic, but because the brain is still doing the job.

This is where autism and ADHD can look similar—both can involve distractibility, overload, and fatigue—but for different reasons. In broad strokes:

  • Autistic perception is often described (in some lines of research) as more cautious about fusing when cues conflict—less reliance on “automatic unity.” That can preserve fidelity, but it can also make the world feel less forgiving when signals don’t match.

  • ADHD can involve instability of attention and salience, where the system has trouble holding one interpretation steady long enough for it to become background.

Those are not diagnoses in a sentence. They’re just a way of naming what many people recognize: the brain is not only sensing; it’s negotiating. And the negotiation has a metabolic cost.

The real accessibility lever: alignment, not elimination

If perception is “work under uncertainty,” the goal isn’t to remove all stimuli. That’s impossible, and it’s not even always desirable. The lever is simpler:

Reduce unnecessary conflict. Reduce forced decision points.

That can look like:

  • better audio-video sync (even tiny lags matter)
  • cleaner acoustics (less masking and fewer competing streams)
  • predictable rhythms (consistent pacing in speech, predictable transitions)
  • fewer simultaneous demands (don’t pair complex listening with complex navigation)
  • environmental design that minimizes “sensory disagreements” (e.g., harsh lighting + echo + crowd movement is a perfect storm)

Sometimes the most supportive change isn’t dimmer lights or quieter rooms. It’s coherence. Less mismatch. Less ambiguity. Less “invent reality just to keep going.”

A different reframe

The point of illusions isn’t that we’re easily fooled. It’s that the brain is always choosing between interpretations, and it usually chooses the one that keeps the world usable.

So when someone says, “I’m sensory,” I increasingly hear: “My brain is doing more interpretation, more often.” And when someone looks “overwhelmed,” I don’t assume weakness. I assume workload.

Sometimes, the illusion isn’t the problem. It’s the clue.

When “Just Try Harder” Isn’t the Problem

We tell students this story early and often: If you work hard enough, you can get there.

That message—usually called growth mindset—has helped a lot of people. It pulls us away from “I’m just not good at this” and toward “I can learn.”


But there’s a quieter question that doesn’t get asked nearly enough: What if I am trying—and the system still doesn’t move? That question is what my new paper is trying to take seriously.


Preprint link: https://doi.org/10.31234/osf.io/x7jru_v1 

Why growth mindset sometimes falls short


Growth mindset focuses on whether abilities can change. That’s important—but it’s only part of the picture. For many disabled and neurodivergent learners (including many autistic students), effort alone doesn’t reliably remove the biggest barriers:


  • Sensory-hostile classrooms
  • Rigid pacing and participation rules
  • Unreliable accommodations
  • Narrow definitions of what “counts” as learning or participation


In those situations, telling someone to “keep trying” can quietly turn into pressure to push through environments that aren’t actually workable. The problem isn’t motivation.
The problem is whether there’s any real path forward in that setting.


Introducing: Possibility mindset

Instead of asking only “Can I get better?”, possibility mindset asks a different question:

Is there room to move here—for someone like me?

Possibility mindset isn’t meant to replace growth mindset. It builds on it. But it adds two missing pieces that matter a lot when constraints are real and persistent. In simple terms, possibility mindset is about whether a future feels realistically open, given three things:

  1. Can I change?
    (Can I develop skills or strategies that matter here?)
  2. Will the environment change?
    (Will this classroom, program, or institution actually adapt in practice?)
  3. Are there legitimate pathways?
    (Are there multiple acceptable ways to succeed—or only one narrow route?)

Motivation depends on how those three beliefs line up.


Why misalignment matters


Here’s a pattern that shows up again and again, especially for autistic and disabled students: Someone genuinely believes they can learn and grow. But they’ve also learned—through experience—that:


  • accommodations are unreliable
  • flexibility exists “on paper” but not in practice
  • only one participation style is treated as legitimate


When that happens, disengaging isn’t a failure of mindset. It can be a rational response to a system that doesn’t bend. Possibility mindset helps explain why someone can believe in growth and still walk away.


This isn’t about blaming the environment (or the person)

A really important point: Possibility mindset is not saying “the environment is always the problem,” or “effort doesn’t matter.” It’s saying that motivation lives at the intersection of:

  • what a person can change
  • what the system will change
  • which paths the system actually recognizes

When those are aligned, persistence makes sense. When they’re not, asking for more grit can backfire—by increasing self-blame without increasing opportunity.


Why neurodivergence makes this visible

Autistic and other neurodivergent learners aren’t a niche case here—they’re a revealing one. When sensory overload, communication differences, health fluctuations, or access friction are part of daily life, the question “Will this system respond?” becomes impossible to ignore. These contexts make something visible that exists everywhere but is often hidden: Motivation isn’t just about belief in yourself. It’s about belief in the path.


What this changes

If we take possibility mindset seriously, it shifts how we interpret “low motivation.”


Instead of asking only: 

  • Do they believe they can improve?

We also ask:

  • Do they see any legitimate way forward here?
  • Have they learned that effort pays off in this setting—or not?


And it changes what good support looks like. Not just better messages—but credible, visible flexibility. Not just encouragement—but routes that actually work.


Why I wrote this

Possibility mindset is my attempt to give language to felt experience—and to remind us that sometimes, the most humane question isn’t “Why aren’t you trying harder?” It’s: “Is there room to move here—and if not, what would it take to create it?”

Neurodiversity 2.0: Contemporary Research, Evolving Frameworks, and Practice Implications

Thanks to NIEPID for hosting, and to everyone who joined the conversation today. Lovely to see so many MPhil students joining from all over India. Recording at https://youtu.be/q0ctpgproS4




Breaking the Either/Or Trap: Why Autism Needs Nuance, Not Extremes

Thanks to Chico State for hosting, and to everyone who joined the conversation on nuance in autism. Recording at https://youtu.be/h70I6msB7rA




When AI Can’t Hear You, It’s Not Neutral — It’s Designed That Way

I’ve been thinking a lot about who gets heard by AI—and who doesn’t. We tend to talk about artificial intelligence as if it’s neutral. Objective. Just math and data. But for many autistic people—especially those who are minimally speaking or nonspeaking—AI systems don’t just fail sometimes. They quietly shut people out. That’s what my paper (currently under peer review) is about: something I call engineered exclusion.




What do I mean by “engineered exclusion”?


Engineered exclusion is when technology predictably leaves certain people out—not because of a bug, but because of how the system was designed from the start.

Most AI communication tools assume a very specific kind of user:

  • Speaks fluently
  • Speaks quickly
  • Uses “standard” English
  • Communicates in neat, predictable ways

If that’s not you, the system often decides—without saying it out loud—that your communication doesn’t count. For many minimally speaking autistic people who use AAC (augmentative and alternative communication)—text-to-speech, letterboards, gestures, partial speech—this shows up everywhere:

  • Voice assistants that don’t recognize their speech at all
  • Text-to-speech voices that mispronounce basic words or names
  • Systems that require extra labor just to be understood
  • Interfaces designed more for caregivers than for the users themselves

The exclusion isn’t random. It’s built into the pipeline.

“Nonspeaking” doesn’t mean “no language”

One thing I want to be very clear about: Nonspeaking is not the absence of language. Many nonspeaking and minimally speaking autistic people have rich, complex thoughts and communicate in multiple ways, often depending on:

  • fatigue
  • anxiety
  • sensory overload
  • motor planning demands
  • environment and predictability

AI systems, however, tend to flatten all of that variation into a single question: Does this look like typical speech or not? If the answer is no, the system often treats the user as noise.

Why this keeps happening


AI systems learn from data—and the data overwhelmingly comes from:
  • Fluent speakers
  • Neurotypical communicators
  • Majority-language users
  • Western norms of “clear” expression

Then we evaluate those systems using benchmarks that reward speed, fluency, and predictability. So when a system fails to understand a nonspeaking autistic user, the problem isn’t labeled exclusion. It’s labeled error. And the burden to fix it gets pushed onto the user—who has to type things phonetically, add extra spaces, reword sentences, or give up altogether. From the system’s perspective, everything looks fine. From the user’s perspective, communication becomes exhausting.

Designed dignity: a different way forward


The paper doesn’t just critique what’s broken. It proposes a shift in how we think about accessibility. I call this designed dignity. Instead of asking, “How do we retrofit accessibility after the fact?”, designed dignity asks, “What if we treated human variation as expected from the start?”
That means:
  • Valuing expressive access as much as input accuracy
  • Designing for communication that changes over time and state
  • Measuring whether people can be heard, not just whether the system performs well on average
  • Including nonspeaking autistic people (and their families) as co-designers, not edge cases

Accessibility isn’t a bonus feature. It’s part of whether AI can honestly claim to be fair.
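The “measure whether people can be heard” point can be made concrete with a toy evaluation: an average error rate can look acceptable while hiding a subgroup the system consistently fails. The group names and counts below are hypothetical, chosen only to show the arithmetic.

```python
# Toy illustration: an overall error rate can hide a subgroup the system
# consistently misrecognizes. Groups and numbers are hypothetical.

def error_rate(results):
    """Fraction of utterances the system got wrong (0 = not recognized)."""
    return sum(1 for ok in results if not ok) / len(results)

# 1 = recognized, 0 = not recognized
results_by_group = {
    "fluent speech": [1] * 95 + [0] * 5,         # 5% error, large group
    "AAC / atypical speech": [1] * 4 + [0] * 6,  # 60% error, small group
}

all_results = [r for group in results_by_group.values() for r in group]
print(f"average error: {error_rate(all_results):.0%}")   # looks acceptable
for group, results in results_by_group.items():
    print(f"{group}: {error_rate(results):.0%}")         # reveals the exclusion
```

Reporting only the first number is exactly how “everything looks fine from the system’s perspective”; reporting the per-group numbers is one small piece of what designed dignity would require.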

Why I wrote this


AI is rapidly becoming the middleman for how people communicate—at school, at work, in healthcare, and in public life. If we don’t question whose communication counts now, we risk hard-coding old forms of ableism into the infrastructure of the future. This paper is my attempt to slow that down and say: Let’s design systems that don’t just listen—but listen on human terms.