Featured Post


When AI Can’t Hear You, It’s Not Neutral — It’s Designed That Way

I’ve been thinking a lot about who gets heard by AI—and who doesn’t. We tend to talk about artificial intelligence as if it’s neutral. Objective. Just math and data. But for many autistic people—especially those who are minimally speaking or nonspeaking—AI systems don’t just fail sometimes. They quietly shut people out. That’s what my paper (currently under peer review) is about: something I call engineered exclusion.




What do I mean by “engineered exclusion”?


Engineered exclusion is when technology predictably leaves certain people out—not because of a bug, but because of how the system was designed from the start.
Most AI communication tools assume a very specific kind of user:
  • Speaks fluently
  • Speaks quickly
  • Uses “standard” English
  • Communicates in neat, predictable ways
If that’s not you, the system often decides—without saying it out loud—that your communication doesn’t count. For many minimally speaking autistic people who use AAC (augmentative and alternative communication)—text-to-speech, letterboards, gestures, partial speech—this shows up everywhere:
  • Voice assistants that don’t recognize their speech at all
  • Text-to-speech voices that mispronounce basic words or names
  • Systems that require extra labor just to be understood
  • Interfaces designed more for caregivers than for the user themselves
The exclusion isn’t random. It’s built into the pipeline.

“Nonspeaking” doesn’t mean “no language”

One thing I want to be very clear about: Nonspeaking is not the absence of language. Many nonspeaking and minimally speaking autistic people have rich, complex thoughts and communicate in multiple ways, often depending on fatigue, anxiety, sensory overload, motor planning demands, the environment, and predictability.

AI systems, however, tend to flatten all of that variation into a single question: Does this look like typical speech or not? If the answer is no, the system often treats the user as noise.

Why this keeps happening


AI systems learn from data—and the data overwhelmingly comes from:
  • Fluent speakers
  • Neurotypical communicators
  • Majority-language users
  • Western norms of “clear” expression

Then we evaluate those systems using benchmarks that reward speed, fluency, and predictability. So when a system fails to understand a nonspeaking autistic user, the problem isn’t labeled exclusion. It’s labeled error. And the burden to fix it gets pushed onto the user—who has to type things phonetically, add extra spaces, reword sentences, or give up altogether. From the system’s perspective, everything looks fine. From the user’s perspective, communication becomes exhausting.
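To make that “looks fine on average” problem concrete, here is a toy sketch. The numbers and group labels below are invented purely for illustration (they are not data from the paper); the point is only that an aggregate word error rate can look acceptable even while one group of users is barely understood at all.

```python
# Toy illustration with hypothetical numbers: an aggregate metric can hide
# near-total failure for a smaller group of users.
from statistics import mean

# Hypothetical per-utterance word error rates (0.0 = perfect, 1.0 = nothing understood)
results = {
    "fluent_typical_speech": [0.04, 0.06, 0.05, 0.03] * 25,      # 100 utterances
    "minimally_speaking_aac_user": [0.55, 0.70, 0.62, 0.68] * 3,  # 12 utterances
}

all_utterances = [wer for group in results.values() for wer in group]
print(f"Overall WER: {mean(all_utterances):.2f}")   # ~0.11 -- looks "fine"

for group, wers in results.items():
    print(f"{group}: WER {mean(wers):.2f} over {len(wers)} utterances")
# fluent_typical_speech: ~0.05 -- works well
# minimally_speaking_aac_user: ~0.64 -- mostly not understood
```

Reported as a single average, the system looks usable. Reported per group, the exclusion is visible. That is the gap between calling something an “error” and recognizing it as exclusion.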

Designed dignity: a different way forward


The paper doesn’t just critique what’s broken. It proposes a shift in how we think about accessibility. I call this designed dignity. Instead of asking, “How do we retrofit accessibility after the fact?”, designed dignity asks, “What if we treated human variation as expected from the start?”
That means:
  • Valuing expressive access as much as input accuracy
  • Designing for communication that changes over time and state
  • Measuring whether people can be heard, not just whether the system performs well on average
  • Including nonspeaking autistic people (and their families) as co-designers, not edge cases

Accessibility isn’t a bonus feature. It’s part of whether AI can honestly claim to be fair.

Why I wrote this


AI is rapidly becoming the middleman for how people communicate—at school, at work, in healthcare, and in public life. If we don’t question whose communication counts now, we risk hard-coding old forms of ableism into the infrastructure of the future. This paper is my attempt to slow that down and say: Let’s design systems that don’t just listen—but listen on human terms.

About That Autism Barbie and the Headphones

A few weeks ago, there were a lot of social media posts about something being widely celebrated online: a new Barbie meant to represent autism.

It had noise-canceling headphones. It had an AAC device. It had flexible hands for stimming.


And I felt… conflicted.


That moment is what eventually became my new Psychology Today article.


Representation can be good—and still incomplete


Let me be clear upfront: AAC matters. Assistive technology matters. Seeing communication differences reflected in a mainstream toy does matter.


But I paused when I saw the headphones. Not because headphones are bad—they’re not. Many autistic people use them, including me at times. But because headphones have quietly become a shorthand for autism itself.


As I wrote in the article, tools meant to support autistic people are increasingly being treated as symbols that define them. That’s where things get tricky.


When autism is visually reduced to one object, it subtly tells a story: this is the fix. Put the headphones on, problem solved.


And that story just doesn’t match reality.


Headphones don’t “fix” sensory processing

One of the biggest myths about sensory differences in autism is that they’re just about loudness. They’re not.


Sensory processing involves:

  • unpredictability,
  • timing,
  • filtering,
  • body awareness,
  • and how the nervous system anticipates what comes next.


Noise-canceling headphones can help with certain kinds of sound, in certain contexts, for certain people. But they don’t:

  • prevent sudden sensory intrusion,
  • resolve auditory–visual mismatches,
  • stop cumulative overload,
  • or regulate the nervous system on their own.


In the article, I put it this way: “Headphones can reduce input, but they don’t restore control.” That distinction matters—especially for parents, educators, and designers who genuinely want to help.


Sensory experiences aren’t accessories

Another reason I wrote the piece is that autism is so often discussed through what it looks like from the outside. Headphones are visible. AAC devices are visible. Stimming is visible.


But sensory experience itself is mostly invisible.

Two autistic people can wear the same headphones and have completely different experiences:

  • One feels relief.
  • Another still feels overwhelmed.
  • A third finds the pressure uncomfortable.
  • A fourth only benefits in very specific environments.


When representation collapses all of that into a single image, it unintentionally flattens autistic experience. Or, as I wrote: “When support tools become symbols, we stop asking who they work for—and when they don’t.”


Why the Barbie moment mattered 

It gave me pause because it reflects a broader pattern: good intentions paired with shallow understanding. We’re getting better at saying “autism exists.” We’re still struggling with understanding how autism actually works—especially at the sensory and nervous-system level. That’s why I wanted to write something that didn’t attack representation, but complicated it.


Because real inclusion isn’t about having the right objects on display. It’s about designing environments, expectations, and supports that don’t assume one solution fits everyone.


What I hope readers take away

If there’s one takeaway I hope sticks, it’s this:

Headphones are a tool. AAC is a tool. But neither is autism.

Autism lives in how a nervous system senses, predicts, and responds to the world—often beautifully, sometimes painfully, always uniquely.


If we want better representation, we need to move beyond symbols and toward understanding.

My TEDx Talk

My TEDx talk is titled “Pebbles in the Pond of Change.”

Hari Srinivasan shares a powerful message about how small actions create ever-widening ripples in the pond of change. Drawing from personal experiences and the legacy of disability rights leaders, he redefines progress as a journey that starts with simple, accessible steps. His inspiring message encourages everyone to identify and act on their own "small pebbles" to drive societal transformation.

"Incorporating well-being into daily routines can reduce the dependency on inaccessible therapies." - Hari Srinivasan

Read on... https://www.liebertpub.com/doi/10.1089/aut.2024.38246.pw

Why Sensory Relief Isn’t About Quiet.

Psychology Today published my piece “Why Sensory Relief Isn’t About Quiet.”

It’s about something that has quietly bothered me for years: the assumption that sensory discomfort is mainly a volume problem.

Too loud.
Too bright.
Too busy.

If we could just turn things down, the thinking goes, people—especially autistic people or those with ADHD—would feel better.

But that hasn’t matched my experience. And it hasn’t matched what neuroscience tells us either.

Quiet Isn’t Always Comfortable

Some of the hardest sensory moments I know happen in places that are nearly silent.

Waiting rooms.
Open offices during off-hours.

These spaces aren’t intense. They’re ambiguous.

In the PT article, I open with a waiting room because it captures this perfectly. Nothing is happening—but nothing is resolving either. The nervous system stays on standby, tuned for change. Time stretches. Small sounds take on disproportionate weight.

By contrast, walking down a busy sidewalk can feel easier. There’s noise, movement, and unpredictability, but there’s also direction. Flow. A sense of what’s coming next.

That contrast is the heart of the piece.

The Neuroscience Thread

The article leans on a simple idea from neuroscience, even though it doesn’t use much jargon:

The brain isn’t just reacting to stimulation.
It’s constantly trying to stay oriented in time.

At every moment, it’s asking a quiet question:

What’s happening next?

When environments answer that question—through clear timing, transitions, and structure—perception feels smoother, even if the environment is busy. When they don’t, attention stays suspended, even if the environment is quiet.

This isn’t about preference or personality. It’s about coherence.

Neuroscience gives us language for this—predictive processing, multisensory integration, expectation—but what matters most to me is what those ideas explain in real life.

Predictability ≠ Sameness

One thing I was careful about in this piece was predictability.

Predictability is often misunderstood, especially when autism is involved. It gets flattened into a stereotype: rigidity, sameness, control.

That’s not what I mean.

Predictability doesn’t require repetition. It doesn’t require things to stay the same. It only requires that changes make sense—timing is consistent, signals match their sources, events unfold in context.

In the article, I describe predictability less as a preference and more as a stabilizer. Something that helps the nervous system keep its footing in time and space.

That framing matters. It shifts the conversation away from “why are you so sensitive?” toward “what structure is missing here?”

Why “Just Wear Headphones” Falls Short

Another reason I wrote this piece is frustration with well-meaning but incomplete advice.

“Just wear noise-canceling headphones.”
“Just reduce stimulation.”

Sometimes that helps. Sometimes it doesn’t.

Turning the volume down doesn’t automatically make a situation feel settled. In some cases, it removes cues the brain relies on to stay oriented, making the world quieter but no more legible.

What helps more often are small changes that increase clarity:

  • Clear transitions

  • Consistent timing

  • Advance notice

  • Signals that match what’s happening

These don’t quiet the world. They organize it.

From Accommodation to Design

One subtle shift I wanted to make in this article is how we talk about solutions.

I don’t frame these ideas as accommodations alone. I think of them as design choices—ways of supporting perception so it doesn’t have to stay suspended.

When sensory strain is framed only as a personal limitation, the solution is always to cope more: tolerate longer, adapt faster, endure quietly.

A focus on predictability and coherence asks something different of environments instead.

What I Hope Readers Take Away

If there’s one thing I hope readers notice after reading the PT piece, it’s this:

Pay attention not just to what feels loud or busy—but to what feels unfinished.

Where does perception settle into rhythm?
Where does it stay waiting?

Sometimes what the nervous system needs most isn’t quiet.

It’s coherence.


Neurodiversity 2.0: Contemporary Research, Evolving Frameworks, and Practice Implications


Next month, I’ll be speaking at NIEPID (National Institute for the Empowerment of Persons with Intellectual Disabilities) on a topic I’ve been thinking and writing about for some time: what it means to take neurodiversity seriously without flattening disability.


This is a training-focused talk, aimed at educators, clinicians, and rehabilitation professionals who want research-grounded tools for understanding communication differences and nervous system responses to unpredictability.


Rather than framing autism only through strengths or only through deficits, the session draws on contemporary neuroscience to show how difference, disability, and context interact in real life.


I’ll be offering a plain-language overview of research relevant to education and rehabilitation, including:

  • sensory processing and sensory–motor integration
  • interoception and regulation
  • motor planning and coordination
  • nervous system responses to unpredictability and stress

The goal is not to provide a single “correct” model of autism, but to offer a research-informed lens that helps professionals better understand distress, communication differences, and participation across diverse support needs.


📅 Date: 7th February 2025
🕖 Time: 7:00 PM (Indian Standard Time)
💻 Platform: Google Meet. Link: https://meet.google.com/ocp-mozi-vrf


Talk Abstract: This talk introduces Neurodiversity 2.0 as a way to move beyond polarized debates about autism (medical vs. social, strengths vs. challenges, independence vs. dependence) and focus on a more realistic “both–and” understanding. Alongside this framing, I present a plain-language overview of contemporary neuroscience that is relevant to education and rehabilitation contexts, including sensory processing, interoception and emotion labeling, motor planning, and nervous system responses to unpredictability. The goal is not clinical instruction, but a research-informed lens that can help trainees think more clearly about distress, communication differences, and participation across a wide range of support needs.

New preprint: AI, Autism, and the Architecture of Voice



I’m sharing a new preprint exploring how AI systems shape whose voices are heard, whose are filtered out, and what it would mean to design AI around dignity rather than accommodation after the fact.

The paper examines how current AI architectures—especially those governing speech, communication, and interaction—often reproduce forms of engineered exclusion for autistic and minimal/nonspeaking people. It then proposes a shift toward designed dignity: building voice, agency, and access into systems from the outset rather than retrofitting accessibility later.

📄 Preprint available on SocArXiv
🔗 https://doi.org/10.31235/osf.io/eahjb_v1

This work is intended as a bridge between AI ethics, disability studies, and lived experience.


When the Senses Argue

Why neuroscientists love sensory illusions

The first time most people encounter a sensory illusion, the reaction is laughter—followed quickly by disbelief. Wait, that can’t be right. You rewind the clip. You try again. Your eyes insist on one thing, your ears on another, and your brain calmly delivers a third answer you never asked for.

That moment—when confidence gives way to curiosity—is exactly why neuroscientists keep coming back to sensory illusions. They aren’t parlor tricks. They’re controlled disagreements between the senses, designed to reveal how the brain decides what counts as reality.

Because here’s the uncomfortable truth: perception isn’t a recording. It’s a verdict.

The illusion that makes people argue with their own ears

Take the McGurk effect. You watch a video of a person clearly forming one speech sound while the audio plays a different one. Many people don’t hear either. Instead, they hear a third sound that doesn’t exist in the video or the audio track.

What’s striking isn’t just the illusion—it’s how certain people are about what they hear. Some insist the sound changed. Others swear the speaker must be cheating. A few can switch what they hear simply by shifting attention between the mouth and the sound.

From a neuroscience perspective, this is audiovisual integration under conflict. The brain assumes speech sounds and lip movements belong together, and when they don’t match, it searches for the most plausible compromise. Perception becomes a negotiation, not a receipt.

This illusion made researchers realize that attention, reliability, and prior experience all shape how senses are fused. Hearing isn’t just hearing. Seeing isn’t just seeing. They’re constantly influencing one another.

When vision tells sound where it came from

Then there’s ventriloquism. Not the stage trick—the perceptual effect. If a voice plays while a visible object moves, people tend to locate the sound at the object, even if it’s coming from elsewhere.

What surprises first-time viewers is how automatic this feels. Nobody thinks, I will now assign this sound to that face. It just happens.

Vision tends to dominate spatial judgments, especially when timing lines up. The brain bets that what you see moving is the source of the sound. Over time, repeated exposure can even recalibrate auditory space itself.

This illusion helped establish one of multisensory neuroscience’s core ideas: the brain weights senses differently depending on the question it’s trying to answer. For “where,” vision often wins.

When hearing creates things you swear you saw

Some illusions are subtler—and creepier.

In the double flash illusion, a single flash of light is paired with two quick beeps. Many people report seeing two flashes. They’ll argue for it. They’ll describe it vividly.

Nothing happened in the visual system to justify that experience. Hearing altered vision.

This illusion unsettles people because it challenges a deep assumption: that vision is the most trustworthy sense. It turns out that timing information from sound can override what the eyes deliver, especially when events unfold quickly.

For researchers, this illusion became a clean way to probe temporal binding—how the brain decides which events belong together in time.

The illusion that makes people gasp

No multisensory illusion produces stronger reactions than the rubber hand illusion.

A fake hand is placed on a table in front of you. Your real hand is hidden. Both are stroked at the same time. At first, it feels silly. Then strange. Then, unexpectedly, the rubber hand begins to feel like it’s yours.

People laugh nervously when this happens. Some feel a creeping sense of ownership. Others report a strange displacement, as if their real hand has drifted toward the fake one.

And then comes the hammer.

In many demonstrations, the experimenter suddenly raises a hammer and strikes the rubber hand. Even knowing it’s fake, people flinch. Some gasp. Some pull back. Skin conductance spikes. The body reacts as if you were under threat.

Nothing touched your real hand. But your brain had already rewritten the boundary of the self.

This illusion revealed that body ownership is not fixed. It’s constructed moment by moment by integrating vision, touch, and proprioception. The “self” is multisensory.

Why illusions work at all

What ties these illusions together is not deception, but inference.

The brain assumes that signals close in space and time belong to the same event. It assumes the world is mostly coherent. When cues conflict, it doesn’t freeze—it resolves the disagreement using probability, past experience, and context.

Illusions arise when those assumptions are pushed just far enough to expose the rules underneath.

They show that multisensory integration is nonlinear, adaptive, and learned. The brain isn’t adding signals. It’s choosing interpretations.
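A standard textbook way to formalize this kind of weighting is reliability-weighted (often called maximum-likelihood) cue combination: each sense’s estimate counts in proportion to how reliable it is. The sketch below is a generic illustration of that idea with made-up numbers; it is not the specific model tested in any one of the studies above.

```python
# Generic sketch of reliability-weighted cue combination: noisy estimates of the
# same quantity are fused, each weighted by the inverse of its variance.
def combine(estimates, variances):
    """Fuse noisy estimates of one quantity, weighting each by its reliability."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total  # combined estimate is more reliable than either cue alone
    return fused, fused_variance

# "Where did that sound come from?" Vision is usually the more precise spatial cue,
# so the fused location gets pulled toward the seen source (the ventriloquism effect).
visual_loc, auditory_loc = 0.0, 10.0    # degrees of azimuth (illustrative values)
visual_var, auditory_var = 1.0, 16.0    # vision much more reliable here
print(combine([visual_loc, auditory_loc], [visual_var, auditory_var]))
# -> fused location near 0.6 degrees: the voice is "captured" by what you see
```

On this view, no sense wins by default. Whichever cue is more reliable for the question at hand gets the larger weight, which is consistent with vision dominating “where” and sound dominating fine timing in the illusions described above.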

A note on autism—and why illusions matter here

Toward the end of many multisensory studies, autism enters the discussion—not as a punchline, but as a lens.

Some autistic individuals are less susceptible to certain illusions. Others experience them differently or under narrower conditions. Attention may play a larger role. Timing windows may be tighter. Integration may be more deliberate.

This isn’t about being “fooled” or not fooled. It’s about how coherence is constructed.

Illusions help researchers see whether perception relies more on automatic fusion or on sustained interpretation. They reveal differences in weighting, timing, and flexibility—strategies, not failures.

And that’s why these illusions matter beyond the lab. They remind us that there is more than one way to assemble a world.

The lesson illusions keep teaching us

Every time an illusion works, it tells the same story: perception is not passive. It’s an active synthesis shaped by uncertainty, context, and experience.

We don’t see what’s there.
We see what the brain decides is most likely.

And for a brief moment—when a hammer falls on a rubber hand, or a sound creates a flash that never happened—we get to watch that decision being made.

Masking is Evolution at Work — With a Cost.

Masking is often described as “pretending to be neurotypical,” as if autistic people are performing or being inauthentic.

That framing misses what masking really is.

In my Psychology Today article “Masking as an Evolutionary Advantage,” I approach masking as adaptation — what happens when a nervous system learns that being visibly different carries social risk.



Humans evolved in small, interdependent groups. Belonging meant access to food, protection, shared knowledge, and safety. Being excluded meant vulnerability. In that world, standing out was never neutral. It attracted attention. And attention could mean danger.

For autistic people — whose movements, speech, timing, and sensory responses naturally diverge from social norms — that creates powerful selection pressure. Over time, the brain learns:
If I reduce how different I appear, I am more likely to stay in the group.

That is the evolutionary advantage of masking.
It increases the probability of acceptance, inclusion, and survival — and, in many contexts, reduces the risk of harm.

Masking isn’t just hiding stimming or forcing eye contact. It includes mirroring tone, copying social rhythms, suppressing natural movements, and constantly scanning for signs of disapproval. From the outside, this can look like social fluency. From the inside, it feels more like vigilance — an ongoing effort to stay safe.

This pressure is not evenly distributed.

Autistic women often live inside what researchers describe as a triple bind:
they are expected to be socially attuned, emotionally responsive, and compliant — while also navigating the penalties attached to disability and difference. The cost of not masking is often higher for them: social rejection, misinterpretation, or being labeled difficult, rude, or unstable. Masking becomes a way to survive gendered social expectations layered on top of neurodivergence.

People with higher support needs face a different but equally powerful bind. Their differences are more visible, and visibility increases vulnerability — to punishment, restraint, exclusion, or loss of autonomy. For them, masking is often less about fitting in and more about reducing the likelihood of being harmed.

Evolution doesn’t select for comfort. It selects for what keeps you in the group. Masking, in many environments, does exactly that. It helps autistic people remain in classrooms, workplaces, medical systems, and families that might otherwise push them out.

But survival strategies come with costs.

Maintaining two versions of yourself — who you are and who you must appear to be — consumes enormous energy. Over time, that split leads to exhaustion, anxiety, and autistic burnout. What looks like competence from the outside can feel like never being allowed to rest on the inside.

Seeing masking as an evolutionary response shifts the frame. The issue isn’t that autistic people mask. It’s that so many environments still require it.

When people don’t have to camouflage their nervous system just to stay safe, they don’t burn out trying to survive.

Why Sensory Overload Isn’t About “Too Much”


A neuroscientist's view of sensory effort in Autism & ADHD

I’m starting a new series of articles in Psychology Today focused on demystifying sensorimotor issues in autism and ADHD—translating the science into plain speak without losing what actually matters.

In the first piece [link to article], I take on one of the most common misunderstandings: that sensory overload is about too much sound, light, or movement. What I argue instead is that overload is really about effort—the effort the brain has to put in when the world becomes hard to interpret.

Your brain is constantly trying to answer a simple question: What is happening right now, and does it matter? To do that, it has to stitch together sight, sound, touch, motion, and timing into a single coherent picture. When those signals line up, the brain relaxes. When they don’t, it has to work harder.

As I write in the article, “the brain works harder when information is unclear, and it eases off when things are easy to interpret.”

One of the examples I use is deliberately ordinary. A faint sound by itself is easy to ignore. A tiny flicker in your peripheral vision is easy to ignore. But when they happen together, your brain snaps to attention. “The world doesn’t just suddenly get loud. Instead, uncertainty skyrockets.”
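One way to see why two individually ignorable cues become hard to ignore together is a toy Bayesian calculation. This is my own illustration with invented probabilities, not math from the article; it just shows how weak evidence compounds.

```python
# Toy Bayesian illustration (made-up numbers): two cues that each barely shift
# belief can, together, make "something important is happening" much more likely.
def updated_belief(prior, p_cue_if_event, p_cue_if_nothing):
    """Apply Bayes' rule for one observed cue."""
    p_event = prior * p_cue_if_event
    p_nothing = (1 - prior) * p_cue_if_nothing
    return p_event / (p_event + p_nothing)

prior = 0.01                                         # baseline: probably nothing is happening
after_sound = updated_belief(prior, 0.5, 0.1)        # faint sound alone: ~0.05, easy to dismiss
after_flicker = updated_belief(prior, 0.5, 0.1)      # flicker alone: also ~0.05
after_both = updated_belief(after_sound, 0.5, 0.1)   # both together: ~0.20
print(after_sound, after_flicker, after_both)
```

The world did not get louder, but the joint evidence is much harder to dismiss, so the brain has to keep working on the question instead of letting it go.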

That uncertainty is what creates overload.

What looks like hypersensitivity from the outside is often relentless problem-solving on the inside. The brain is constantly checking: Was that important? Was that a person? A threat? A mistake? In everyday environments—overlapping conversations, out-of-sync audio and visuals, visual clutter, subtle vibrations—those micro-decisions never really stop.

As I put it in the piece, “From the outside, this appears to be oversensitivity. From the inside, it often feels like work that never quite lets up.” For autistics and ADHDers, that work doesn’t fade quickly. It accumulates into fatigue, shutdown, or distress.

That’s why the solution isn’t just “make things quieter” or “reduce stimulation.” What really helps is making environments more predictable, more legible, and easier to parse. When the brain can quickly tell what’s happening and what matters, it doesn’t have to stay in high-alert mode.

Sensory overload isn’t about too much.
It’s about too much uncertainty for too long.