Featured Post

"Incorporating well-being into daily routines can reduce the dependency on inaccessible therapies." - Hari Srinivasan

Read on... https://www.liebertpub.com/doi/10.1089/aut.2024.38246.pw

Why Sensory Relief Isn’t About Quiet

Psychology Today published my piece “Why Sensory Relief Isn’t About Quiet.”

It’s about something that has quietly bothered me for years: the assumption that sensory discomfort is mainly a volume problem.

Too loud.
Too bright.
Too busy.

If we could just turn things down, the thinking goes, people—especially autistic people or those with ADHD—would feel better.

But that hasn’t matched my experience. And it hasn’t matched what neuroscience tells us either.

Quiet Isn’t Always Comfortable

Some of the hardest sensory moments I know happen in places that are nearly silent.

Waiting rooms.
Open offices during off-hours.

These spaces aren’t intense. They’re ambiguous.

In the PT article, I open with a waiting room because it captures this perfectly. Nothing is happening—but nothing is resolving either. The nervous system stays on standby, tuned for change. Time stretches. Small sounds take on disproportionate weight.

By contrast, walking down a busy sidewalk can feel easier. There’s noise, movement, and unpredictability, but there’s also direction. Flow. A sense of what’s coming next.

That contrast is the heart of the piece.

The Neuroscience Thread

The article leans on a simple idea from neuroscience, even though it doesn’t use much jargon:

The brain isn’t just reacting to stimulation.
It’s constantly trying to stay oriented in time.

At every moment, it’s asking a quiet question:

What’s happening next?

When environments answer that question—through clear timing, transitions, and structure—perception feels smoother, even if the environment is busy. When they don’t, attention stays suspended, even if the environment is quiet.

This isn’t about preference or personality. It’s about coherence.

Neuroscience gives us language for this—predictive processing, multisensory integration, expectation—but what matters most to me is what those ideas explain in real life.

Predictability ≠ Sameness

One thing I was careful about in this piece was predictability.

Predictability is often misunderstood, especially when autism is involved. It gets flattened into a stereotype: rigidity, sameness, control.

That’s not what I mean.

Predictability doesn’t require repetition. It doesn’t require things to stay the same. It only requires that changes make sense—timing is consistent, signals match their sources, events unfold in context.

In the article, I describe predictability less as a preference and more as a stabilizer. Something that helps the nervous system keep its footing in time and space.

That framing matters. It shifts the conversation away from “why are you so sensitive?” toward “what structure is missing here?”

Why “Just Wear Headphones” Falls Short

Another reason I wrote this piece is frustration with well-meaning but incomplete advice.

“Just wear noise-canceling headphones.”
“Just reduce stimulation.”

Sometimes that helps. Sometimes it doesn’t.

Turning the volume down doesn’t automatically make a situation feel settled. In some cases, it removes cues the brain relies on to stay oriented, making the world quieter but no more legible.

What helps more often are small changes that increase clarity:

  • Clear transitions

  • Consistent timing

  • Advance notice

  • Signals that match what’s happening

These don’t quiet the world. They organize it.

From Accommodation to Design

One subtle shift I wanted to make in this article is how we talk about solutions.

I don’t frame these ideas as accommodations alone. I think of them as design choices—ways of supporting perception so it doesn’t have to stay suspended.

When sensory strain is framed only as a personal limitation, the solution is always to cope more: tolerate longer, adapt faster, endure quietly.

A focus on predictability and coherence asks something different of environments instead.

What I Hope Readers Take Away

If there’s one thing I hope readers notice after reading the PT piece, it’s this:

Pay attention not just to what feels loud or busy—but to what feels unfinished.

Where does perception settle into rhythm?
Where does it stay waiting?

Sometimes what the nervous system needs most isn’t quiet.

It’s coherence.


Neurodiversity 2.0: Contemporary Research, Evolving Frameworks, and Practice Implications


Next month, I’ll be speaking at NIEPID (National Institute for the Empowerment of Persons with Intellectual Disabilities) on a topic I’ve been thinking and writing about for some time: what it means to take neurodiversity seriously without flattening disability.


This is a training-focused talk, aimed at educators, clinicians, and rehabilitation professionals who want research-grounded tools for understanding communication differences and nervous system responses to unpredictability.


Rather than framing autism only through strengths or only through deficits, the session draws on contemporary neuroscience to show how difference, disability, and context interact in real life.


I’ll be offering a plain-language overview of research relevant to education and rehabilitation, including:

  • sensory processing and sensory–motor integration
  • interoception and regulation
  • motor planning and coordination
  • nervous system responses to unpredictability and stress

The goal is not to provide a single “correct” model of autism, but to offer a research-informed lens that helps professionals better understand distress, communication differences, and participation across diverse support needs.


📅 Date: 7th February 2025
🕖 Time: 7:00 PM (Indian Standard Time)
💻 Platform: Google Meet (link: https://meet.google.com/ocp-mozi-vrf)


Talk Abstract: This talk introduces Neurodiversity 2.0 as a way to move beyond polarized debates about autism (medical vs. social, strengths vs. challenges, independence vs. dependence) and focus on a more realistic “both–and” understanding. Alongside this framing, I present a plain-language overview of contemporary neuroscience that is relevant to education and rehabilitation contexts, including sensory processing, interoception and emotion labeling, motor planning, and nervous system responses to unpredictability. The goal is not clinical instruction, but a research-informed lens that can help trainees think more clearly about distress, communication differences, and participation across a wide range of support needs.

New preprint: AI, Autism, and the Architecture of Voice


I’m sharing a new preprint exploring how AI systems shape whose voices are heard, whose are filtered out, and what it would mean to design AI around dignity rather than accommodation after the fact.

The paper examines how current AI architectures—especially those governing speech, communication, and interaction—often reproduce forms of engineered exclusion for autistic and minimal/nonspeaking people. It then proposes a shift toward designed dignity: building voice, agency, and access into systems from the outset rather than retrofitting accessibility later.

📄 Preprint available on SocArXiv
🔗 https://doi.org/10.31235/osf.io/eahjb_v1

This work is intended as a bridge between AI ethics, disability studies, and lived experience.


When the Senses Argue

Why neuroscientists love sensory illusions

The first time most people encounter a sensory illusion, the reaction is laughter—followed quickly by disbelief. Wait, that can’t be right. You rewind the clip. You try again. Your eyes insist on one thing, your ears on another, and your brain calmly delivers a third answer you never asked for.

That moment—when confidence gives way to curiosity—is exactly why neuroscientists keep coming back to sensory illusions. They aren’t parlor tricks. They’re controlled disagreements between the senses, designed to reveal how the brain decides what counts as reality.

Because here’s the uncomfortable truth: perception isn’t a recording. It’s a verdict.

The illusion that makes people argue with their own ears

Take the McGurk effect. You watch a video of a person clearly forming one speech sound while the audio plays a different one. Many people don’t hear either. Instead, they hear a third sound that doesn’t exist in the video or the audio track.

What’s striking isn’t just the illusion—it’s how certain people feel about it. Some insist the sound changed. Others swear the speaker must be cheating. A few can switch what they hear simply by shifting attention between the mouth and the sound.

From a neuroscience perspective, this is audiovisual integration under conflict. The brain assumes speech sounds and lip movements belong together, and when they don’t match, it searches for the most plausible compromise. Perception becomes a negotiation, not a receipt.

This illusion made researchers realize that attention, reliability, and prior experience all shape how senses are fused. Hearing isn’t just hearing. Seeing isn’t just seeing. They’re constantly influencing one another.

When vision tells sound where it came from

Then there’s ventriloquism. Not the stage trick—the perceptual effect. If a voice plays while a visible object moves, people tend to locate the sound at the object, even if it’s coming from elsewhere.

What surprises first-time viewers is how automatic this feels. Nobody thinks, I will now assign this sound to that face. It just happens.

Vision tends to dominate spatial judgments, especially when timing lines up. The brain bets that what you see moving is the source of the sound. Over time, repeated exposure can even recalibrate auditory space itself.

This illusion helped establish one of multisensory neuroscience’s core ideas: the brain weights senses differently depending on the question it’s trying to answer. For “where,” vision often wins.
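
If you like seeing the logic spelled out, here is a minimal Python sketch of that weighting idea (my illustration, with made-up numbers, not an analysis from any particular study): each sense's location estimate is weighted by its reliability, so the precise visual estimate dominates the fused answer.

    def fuse_location(x_vis, var_vis, x_aud, var_aud):
        """Reliability-weighted cue combination: each estimate is
        weighted by its inverse variance (its precision)."""
        w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
        return w_vis * x_vis + (1 - w_vis) * x_aud

    # Hypothetical numbers: vision localizes precisely (low variance),
    # hearing coarsely (high variance). The fused estimate lands near
    # the seen object -- the ventriloquist effect.
    print(fuse_location(x_vis=0.0, var_vis=1.0, x_aud=10.0, var_aud=25.0))  # ~0.38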

When hearing creates things you swear you saw

Some illusions are subtler—and creepier.

In the double flash illusion, a single flash of light is paired with two quick beeps. Many people report seeing two flashes. They’ll argue for it. They’ll describe it vividly.

Nothing happened in the visual system to justify that experience. Hearing altered vision.

This illusion unsettles people because it challenges a deep assumption: that vision is the most trustworthy sense. It turns out that timing information from sound can override what the eyes deliver, especially when events unfold quickly.

For researchers, this illusion became a clean way to probe temporal binding—how the brain decides which events belong together in time.

The illusion that makes people gasp

No multisensory illusion produces stronger reactions than the rubber hand illusion.

A fake hand is placed on a table in front of you. Your real hand is hidden. Both are stroked at the same time. At first, it feels silly. Then strange. Then, unexpectedly, the rubber hand begins to feel like it’s yours.

People laugh nervously when this happens. Some feel a creeping sense of ownership. Others report a strange displacement, as if their real hand has drifted toward the fake one.

And then comes the hammer.

In many demonstrations, the experimenter suddenly raises a hammer and strikes the rubber hand. Even knowing it’s fake, people flinch. Some gasp. Some pull back. Skin conductance spikes. The body reacts as if you were under threat.

Nothing touched your real hand. But your brain had already rewritten the boundary of the self.

This illusion revealed that body ownership is not fixed. It’s constructed moment by moment by integrating vision, touch, and proprioception. The “self” is multisensory.

Why illusions work at all

What ties these illusions together is not deception, but inference.

The brain assumes that signals close in space and time belong to the same event. It assumes the world is mostly coherent. When cues conflict, it doesn’t freeze—it resolves the disagreement using probability, past experience, and context.

Illusions arise when those assumptions are pushed just far enough to expose the rules underneath.

They show that multisensory integration is nonlinear, adaptive, and learned. The brain isn’t adding signals. It’s choosing interpretations.

A note on autism—and why illusions matter here

Toward the end of many multisensory studies, autism enters the discussion—not as a punchline, but as a lens.

Some autistic individuals are less susceptible to certain illusions. Others experience them differently or under narrower conditions. Attention may play a larger role. Timing windows may be tighter. Integration may be more deliberate.

This isn’t about being “fooled” or not fooled. It’s about how coherence is constructed.

Illusions help researchers see whether perception relies more on automatic fusion or on sustained interpretation. They reveal differences in weighting, timing, and flexibility—strategies, not failures.

And that’s why these illusions matter beyond the lab. They remind us that there is more than one way to assemble a world.

The lesson illusions keep teaching us

Every time an illusion works, it tells the same story: perception is not passive. It’s an active synthesis shaped by uncertainty, context, and experience.

We don’t see what’s there.
We see what the brain decides is most likely.

And for a brief moment—when a hammer falls on a rubber hand, or a sound creates a flash that never happened—we get to watch that decision being made.

Masking is Evolution at Work — With a Cost

Masking is often described as “pretending to be neurotypical,” as if autistic people are performing or being inauthentic.

That framing misses what masking really is.

In my Psychology Today article, “Masking as an Evolutionary Advantage,” I approach masking as adaptation — what happens when a nervous system learns that being visibly different carries social risk.



Humans evolved in small, interdependent groups. Belonging meant access to food, protection, shared knowledge, and safety. Being excluded meant vulnerability. In that world, standing out was never neutral. It attracted attention. And attention could mean danger.

For autistic people — whose movements, speech, timing, and sensory responses naturally diverge from social norms — that creates powerful selection pressure. Over time, the brain learns:
If I reduce how different I appear, I am more likely to stay in the group.

That is the evolutionary advantage of masking.
It increases the probability of acceptance, inclusion, and survival — and, in many contexts, reduces the risk of harm.

Masking isn’t just hiding stimming or forcing eye contact. It includes mirroring tone, copying social rhythms, suppressing natural movements, and constantly scanning for signs of disapproval. From the outside, this can look like social fluency. From the inside, it feels more like vigilance — an ongoing effort to stay safe.

This pressure is not evenly distributed.

Autistic women often live inside what researchers describe as a triple bind:
they are expected to be socially attuned, emotionally responsive, and compliant — while also navigating the penalties attached to disability and difference. The cost of not masking is often higher for them: social rejection, misinterpretation, or being labeled difficult, rude, or unstable. Masking becomes a way to survive gendered social expectations layered on top of neurodivergence.

People with higher support needs face a different but equally powerful bind. Their differences are more visible, and visibility increases vulnerability — to punishment, restraint, exclusion, or loss of autonomy. For them, masking is often less about fitting in and more about reducing the likelihood of being harmed.

Evolution doesn’t select for comfort. It selects for what keeps you in the group. Masking, in many environments, does exactly that. It helps autistic people remain in classrooms, workplaces, medical systems, and families that might otherwise push them out.

But survival strategies come with costs.

Maintaining two versions of yourself — who you are and who you must appear to be — consumes enormous energy. Over time, that split leads to exhaustion, anxiety, and autistic burnout. What looks like competence from the outside can feel like never being allowed to rest on the inside.

Seeing masking as an evolutionary response shifts the frame. The issue isn’t that autistic people mask. It’s that so many environments still require it.

When people don’t have to camouflage their nervous system just to stay safe, they don’t burn out trying to survive.

Why Sensory Overload Isn’t About “Too Much”


A neuroscientist’s view of sensory effort in autism and ADHD

I’m starting a new series of articles in Psychology Today focused on demystifying sensorimotor issues in autism and ADHD—translating the science into plain speak without losing what actually matters.

In the first piece [link to article], I take on one of the most common misunderstandings: that sensory overload is about too much sound, light, or movement. What I argue instead is that overload is really about effort—the effort the brain has to put in when the world becomes hard to interpret.

Your brain is constantly trying to answer a simple question: What is happening right now, and does it matter? To do that, it has to stitch together sight, sound, touch, motion, and timing into a single coherent picture. When those signals line up, the brain relaxes. When they don’t, it has to work harder.

As I write in the article, “the brain works harder when information is unclear, and it eases off when things are easy to interpret.”

One of the examples I use is deliberately ordinary. A faint sound by itself is easy to ignore. A tiny flicker in your peripheral vision is easy to ignore. But when they happen together, your brain snaps to attention. “The world doesn’t just suddenly get loud. Instead, uncertainty skyrockets.”

That uncertainty is what creates overload.

What looks like hypersensitivity from the outside is often relentless problem-solving on the inside. The brain is constantly checking: Was that important? Was that a person? A threat? A mistake? In everyday environments—overlapping conversations, out-of-sync audio and visuals, visual clutter, subtle vibrations—those micro-decisions never really stop.

As I put it in the piece, “From the outside, this appears to be oversensitivity. From the inside, it often feels like work that never quite lets up.” For autistics and ADHDers, that work doesn’t fade quickly. It accumulates into fatigue, shutdown, or distress.

That’s why the solution isn’t just “make things quieter” or “reduce stimulation.” What really helps is making environments more predictable, more legible, and easier to parse. When the brain can quickly tell what’s happening and what matters, it doesn’t have to stay in high-alert mode.

Sensory overload isn’t about too much.
It’s about too much uncertainty for too long.





The Race Model: Two Runners, One Decision

 I want to start with a moment most of us recognize, even if we’ve never named it.

You’re waiting to cross the street. Your eyes are fixed on the signal. Somewhere in the background, there’s a faint beeping sound. You’re not consciously deciding which one to trust. You’re just waiting—and the instant something tells you it’s time, you move.

Now imagine the light changes and the beep happens at the same time. You step forward a little faster than usual.

At first glance, it feels obvious why. Two senses together must be “working better,” right? Vision and hearing combine, reinforce each other, and speed things up.

But neuroscience has a habit of questioning things that feel obvious.

This is where the idea known as the race effect comes in, and it quietly complicates how we think about multisensory processing—especially in autism.

The race effect starts with a surprisingly modest claim. What if your senses aren’t collaborating at all? What if they’re competing?

Instead of vision and hearing merging into a single unified signal, imagine them running in parallel, like two runners heading toward the same finish line. Whichever one gets there first triggers your response. When both are present, you’re faster not because your brain fused them, but because you gave it two chances to succeed.

This isn’t a metaphor neuroscientists use casually. It’s formalized in what’s called the race model, which acts as a kind of skeptic inside multisensory research. It asks whether the benefits of seeing and hearing something together can be explained by simple probability alone. Two independent processes, racing side by side, will naturally produce faster responses some of the time. No communication required.

Why does this matter? Because for years, faster responses to multisensory input were often taken as automatic evidence of integration. The race model forces a pause. It draws a line in the sand and says: up to this point, speed can be explained without the senses ever talking to each other. Only when responses are faster than that line allows do we have strong evidence that the brain is truly integrating information across senses.
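
To make that skeptical logic concrete, here is a small simulation sketch (mine, with assumed reaction-time distributions rather than data from any study): two independent channels race, the faster one triggers the response, and the combined condition gets quicker through probability alone, yet never beats the race-model bound.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Assumed unisensory reaction-time distributions (ms).
    rt_aud = rng.normal(320, 40, n)
    rt_vis = rng.normal(350, 45, n)

    # Pure race: on each multisensory trial, the faster runner wins.
    rt_multi = np.minimum(rng.normal(320, 40, n), rng.normal(350, 45, n))

    t = 300  # probe time (ms)
    p_aud = np.mean(rt_aud <= t)
    p_vis = np.mean(rt_vis <= t)
    p_multi = np.mean(rt_multi <= t)

    # Race-model inequality: without integration,
    # P(multi <= t) can never exceed P(aud <= t) + P(vis <= t).
    bound = min(p_aud + p_vis, 1.0)
    print(f"P(multi) = {p_multi:.3f} vs. bound = {bound:.3f}")
    print("Violation (evidence of true integration)?", p_multi > bound)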

This distinction turns out to be especially important when we talk about autism.

Autistic sensory processing is often described using blunt language. Too sensitive. Not sensitive enough. Overwhelmed. Delayed. But the race effect invites a more careful question: when autistic people respond differently to multisensory input, is that because integration is impaired—or because the brain is doing something else entirely?

In many studies, autistic participants don’t always show strong violations of the race model. Sometimes multisensory cues don’t speed things up as much as expected. Sometimes they help only under specific timing conditions. Sometimes they don’t help at all.

It’s tempting to interpret this as a deficit. But that interpretation assumes that faster is always better, and that automatic integration is always the goal.

What if it isn’t?

If your brain is less inclined to fuse sensory signals automatically, you may rely more on each sense independently. That can mean slower responses in simple lab tasks—but it can also mean greater precision, reduced susceptibility to misleading cues, and more control over when and how information is combined.

From this perspective, autistic sensory processing isn’t broken integration. It’s selective integration.

And selective integration comes with a cost that doesn’t show up neatly in reaction-time graphs: effort.

Many everyday environments are designed around the assumption that multisensory integration happens effortlessly. Classrooms, offices, restaurants, and public spaces bombard us with overlapping sounds, lights, movements, and social signals. If your nervous system doesn’t automatically collapse all of that into a single, coherent stream, you’re left doing continuous sensory arbitration—deciding, moment by moment, what to trust, what to ignore, and what to act on.

The race effect helps explain why this can be exhausting. When senses are racing rather than fusing, the brain stays on high alert. It doesn’t take shortcuts. It doesn’t assume redundancy is helpful. It waits.

Slower responses, in that light, aren’t signs of disengagement. They’re signs of caution.

This reframing matters because it challenges a quiet moral judgment that often sneaks into discussions of autism: that efficiency equals health, and speed equals competence. The race model reminds us that nervous systems are not optimizing for speed alone. They are optimizing for survival in specific contexts.

In uncertain or overwhelming environments, automatic integration can backfire. Ignoring redundant cues, delaying decisions, or keeping sensory channels separate may actually be protective. Sometimes, letting senses race instead of forcing them to merge is the safer strategy.

Autism makes this tradeoff visible. It reveals the hidden labor that most brains perform invisibly—and reminds us that what looks like delay from the outside may reflect careful computation on the inside.

Once you see the race effect this way, the question shifts. It’s no longer “Why don’t autistic people integrate senses automatically?” It becomes “What kinds of environments assume automatic integration—and who do those environments leave behind?”

That’s not just a neuroscience question. It’s a design question. A social question. And, ultimately, an ethical one.

Star Stuff

Carl Sagan's declaration, "We are all made of star stuff," is more than just a poetic observation; it is a statement about our origins and our connection to the cosmos, an idea that invites us to contemplate our place in the universe, our shared humanity, and the intricate web of existence that binds us all.


The Cosmic Connection

At its core, Sagan's statement is rooted in scientific fact. The elements that make up our bodies—carbon, nitrogen, oxygen, and more—were forged in the nuclear furnaces of ancient stars. When these stars exploded in supernovae, they scattered these elements across the universe, eventually coalescing into new stars, planets, and life forms. This means that the atoms in our bodies were once part of distant stars, connecting us to the cosmos in a tangible and intimate way.
 

A Sense of Wonder and Awe

Reflecting on our stellar origins can evoke a profound sense of wonder and awe. It reminds us that we are not merely isolated beings on a small planet but are intrinsically linked to the vast, ever-changing universe. This perspective can inspire a sense of humility and reverence for the natural world, encouraging us to look beyond our immediate surroundings and appreciate the grandeur of the cosmos.
 

Shared Humanity

Sagan's quote also underscores our shared humanity. Regardless of our differences—whether cultural, racial, ideological, or (dis)ability—we all share the same cosmic heritage. We are all made of the same star stuff, which can serve as a powerful reminder of our commonality. This realization can foster a sense of unity and solidarity, encouraging us to transcend divisions and work together for the greater good.


The Fragility and Preciousness of Life

Understanding that we are made of star stuff can also deepen our appreciation for the fragility and preciousness of life. The processes that led to our existence are incredibly complex and improbable, making life a rare and precious gift. This awareness can motivate us to cherish and protect life in all its forms, nurturing a sense of responsibility towards our planet and each other.
 

The Search for Meaning

Sagan's insight invites us to ponder the larger questions of existence: Why are we here? What is our purpose? While science provides us with an understanding of our physical origins, the quest for meaning is a deeply personal and philosophical journey. Recognizing our cosmic roots can serve as a foundation for exploring these questions, prompting us to seek out purpose and significance in our lives and relationships.


Interconnectedness and Interdependence

The notion that we are made of star stuff highlights the interconnectedness and interdependence of all things. Just as the elements in our bodies were forged in the hearts of stars, so too are our lives intertwined with those of others. Our actions have ripple effects that extend far beyond ourselves, impacting the world and the cosmos. This understanding can inspire us to live more mindfully, with greater awareness of our impact on the environment and the well-being of others.
 

In acknowledging our stellar heritage, we are reminded that we are not just inhabitants of Earth but participants in the grand, unfolding story of the universe.

Why Sensory Overload Isn’t About “Too Much”

 


A neuroscientist’s view of sensory effort in autism and ADHD

Key points

  • The brain works harder when sensory information is unclear, and eases off when it’s clear.
  • Sensory overload often reflects sustained effort, not oversensitivity.
  • Autism and ADHD can involve carrying that effort for longer periods of time.
  • Predictability often reduces sensory strain more than reducing stimulation.

When autistic or ADHD people talk about sensory overload, the responses are usually meant to be reassuring.

“Everyone gets overwhelmed sometimes.”
“Try to tune it out.”
“You’ll get used to it.”

What these comments quietly assume is that sensory overload is a problem of quantity. Too much noise. Too much light. Too much stimulation.

As a neuroscientist, I think that framing is incomplete.

What strains the brain is not simply intensity. It’s uncertainty.

The brain is not a passive receiver of sensory input. It doesn’t wait for the world to arrive and then react. Instead, it is constantly combining information from sight, sound, touch, movement, and timing to answer a basic question: What is happening right now, and how important is it?

In neuroscience, this process is called multisensory integration. It refers to how the brain fuses information across the senses into a single interpretation of an event.

You experience it all the time. A voice becomes linked to a face. Footsteps paired with motion feel more urgent. A room feels calm or chaotic before you can explain why. Most of the time, this integration happens smoothly and outside awareness.

Until it doesn’t.

One detail that’s often overlooked is that multisensory integration isn’t guaranteed. The brain doesn’t always fuse information just because it arrives through more than one sense. Integration depends on how trustworthy the signals feel and whether combining them actually reduces uncertainty.

That distinction matters, because it means integration itself can be effortful.

One of the core principles governing multisensory integration is known as inverse effectiveness. Despite the technical name, the idea is intuitive.

When sensory signals are weak, ambiguous, or unreliable, the brain boosts their combination more strongly. When signals are already clear and robust, adding more information helps less.

Neuroscientists describe this using terms like superadditive and subadditive integration.

Superadditive integration means the brain’s response to two signals together is greater than the sum of each signal on its own. Two weak cues can suddenly feel urgent when they occur together. Imagine hearing a faint sound in a quiet house. On its own, you might ignore it. Now imagine that same faint sound paired with a slight movement in your peripheral vision. Neither signal is strong, but together they demand attention. The brain amplifies the combination because it reduces uncertainty: something is happening.

Subadditive integration, by contrast, occurs when signals are already strong and clear. In that case, adding more information doesn’t help much—and can even interfere. If someone is speaking loudly and clearly right in front of you, adding background music or visual clutter doesn’t improve understanding. It makes the experience more effortful, because the brain has to sort out what’s relevant and what isn’t.

These aren’t abstract math concepts. They describe how the brain allocates effort. Superadditive responses reflect a system working hard to extract meaning from uncertainty. Subadditive responses reflect efficiency—the brain already has enough information and doesn’t need to amplify further.
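
As a toy illustration (hypothetical firing rates of my own choosing, not figures from the article), the standard way to quantify this is to compare the combined response with both the strongest single-sense response and the sum of the parts:

    def enhancement(resp_a, resp_v, resp_av):
        """Percent multisensory enhancement relative to the strongest
        single-sense response (the index used in single-neuron studies)."""
        best = max(resp_a, resp_v)
        return 100 * (resp_av - best) / best

    def additivity(resp_a, resp_v, resp_av):
        """Superadditive if the combined response exceeds the sum of parts."""
        total = resp_a + resp_v
        return ("superadditive" if resp_av > total
                else "subadditive" if resp_av < total else "additive")

    # Weak cues (spikes per trial): combining pays off hugely.
    print(enhancement(2, 3, 9), additivity(2, 3, 9))        # 200.0 superadditive
    # Strong cues: combining adds little (inverse effectiveness).
    print(enhancement(20, 25, 30), additivity(20, 25, 30))  # 20.0 subadditive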

This distinction helps explain why sensory experiences can feel so different across people and contexts.

Many everyday environments are not just stimulating; they are informationally messy. Loud, but not meaningful. Busy, but unpredictable. Full of signals that don’t line up cleanly in space or time.

In those conditions, the brain may remain in a more superadditive mode—continually amplifying combined sensory input in an effort to reduce uncertainty. That amplification is adaptive. But it is also costly.

For autistic and ADHD individuals, whose sensory systems often place greater weight on incoming information, that cost can accumulate quickly.

Multisensory integration also depends on expectations about space and time. Signals that come from the same location and occur close together in time are easier for the brain to bind. Neuroscientists refer to these constraints as spatial alignment and temporal alignment.

When these expectations are met, integration tends to be efficient and often subadditive. When they are violated—when sound and sight drift apart, when timing is inconsistent—integration becomes less efficient, and amplification increases.

Modern environments introduce many small misalignments: overlapping conversations, asynchronous audiovisual cues, subtle visual flicker, unpredictable movement. None of these is necessarily overwhelming on its own. Over time, however, they can push the brain toward sustained superadditive processing—constantly boosting signals to maintain coherence.

What looks like “overreaction” from the outside is often ongoing neural problem-solving.

Another important insight from neuroscience is that multisensory integration isn’t just a reflex. Even rapid orienting responses are shaped by top-down influences—meaning the brain’s expectations, goals, and current state help decide when sensory signals should be amplified and when they should be restrained.

In other words, the brain actively regulates multisensory gain. State matters. Fatigue matters. Context matters. The system can dial integration up or down—but doing so requires resources.

This helps explain why sensory tolerance often collapses when someone is tired, stressed, or already carrying a heavy cognitive load.

This is why sensory challenges are not well captured by the idea of “hypersensitivity” alone.

A more precise concept from neuroscience is gain control—essentially the brain’s volume knob. When information is unclear, the brain turns the signal up to extract meaning. The tradeoff is that everything feels louder, including variability and noise.

From this perspective, heightened sensory responses can reflect a nervous system operating in a high-gain, superadditive state for extended periods. The system isn’t malfunctioning. It’s compensating.
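
A toy numerical sketch of that tradeoff (assumed numbers, purely illustrative): turning the gain up makes a weak signal easier to detect, but it scales the noise by exactly the same factor.

    import numpy as np

    rng = np.random.default_rng(1)
    signal = 1.0
    noise = rng.normal(0.0, 0.5, 10_000)

    for gain in (1.0, 3.0):  # low-gain vs. high-gain state
        out = gain * (signal + noise)
        print(f"gain={gain}: mean output = {out.mean():.2f}, "
              f"spread = {out.std():.2f}")
    # The signal grows, but so does the variability: in a high-gain
    # state everything is "louder," including the noise.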

This helps explain patterns many autistic and ADHD people recognize immediately: why unfamiliar environments are harder than familiar ones; why fatigue collapses tolerance; why predictability can be regulating even in stimulating settings; and why recovery often involves restoring coherence rather than eliminating input entirely.

People are often told to “tune out” unwanted sensory input. But multisensory integration happens automatically, at early stages of processing. It is not something one can switch off through effort or intention.

The brain binds information because that is how perception is built. Asking someone to simply ignore conflicting sensory cues is asking their nervous system to suspend a fundamental operation.

What can look like avoidance or rigidity may instead reflect strategic regulation—an attempt to move the system back toward a lower-gain, more subadditive state.

Another detail rarely discussed outside research contexts is that multisensory integration is not fully mature at birth. The brain learns not just how sensory signals relate to one another, but how much weight to give them. This calibration process is shaped by repeated experience.

This learning continues across the lifespan and is shaped by both individual processing styles and repeated experience. Seen this way, sensory patterns in autism and ADHD reflect how people adapt to their environments, rather than fixed characteristics.

Change the context, and the balance between superadditive and subadditive processing can shift—sometimes dramatically.

When sensory challenges are framed solely as personal limitations, responsibility rests entirely on the individual: cope more, adapt faster, tolerate longer.

Looking at sensory overload this way adds nuance without replacing one explanation with another. It shifts attention to the fit between how a person processes information and what their surroundings ask of them—and to how long anyone can reasonably sustain that level of effort.

Some individuals can carry that load with little cost. Others can do so only briefly, or only in certain contexts.

Neither response is a failure.

Rather than treating sensitivity as something to overcome, neuroscience invites us to see it as information—a signal about how a particular nervous system is interacting with a particular set of demands.

Some brains amplify uncertainty more strongly. Some situations generate more uncertainty than others. When those factors align, overload can emerge—not as a sign of weakness, but as a sign that regulatory limits have been reached.

I’m curious how readers recognize this in their own lives.
Are there environments where your sensory system feels efficient and supportive—and others where it feels effortful or draining? What kinds of predictability, timing, or alignment make the biggest difference for you?

I’d welcome your reflections in the comments.

References

Stein, B. E., & Stanford, T. R. (2008). Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews Neuroscience, 9(4), 255–266. https://doi.org/10.1038/nrn2331


How the Brain Fuses the Senses

 


A classic paper that changed how we think about perception

Imagine walking down the street when you hear a dog bark and, at the same moment, see something moving toward you. You don’t experience these as two separate events—sound first, vision second. Instead, your brain delivers a single, urgent message: something is coming—pay attention.

That seamless fusion is so natural we rarely question it. But how the brain actually pulls this off—how it combines sight, sound, touch, and movement into a coherent experience—turns out to be one of the deepest questions in neuroscience.

In 2008, neuroscientists Barry Stein and Terrence Stanford published a paper that fundamentally reshaped how scientists think about this process. Rather than talking about perception in abstract terms, they asked a far more concrete question: what does a single neuron do when it receives information from more than one sense?

The answer changed everything.

Multisensory integration is not just “many senses at once”

At first glance, multisensory integration sounds obvious. We see and hear at the same time, so of course the brain combines those signals. But Stein and Stanford were very precise about what counts as integration.

From a neural perspective, integration only occurs when a neuron responds differently to a combined stimulus than it does to the strongest single stimulus alone. If a neuron fires the same way whether it hears a sound or hears a sound plus sees a flash, nothing special is happening. But if the combined input changes the response—boosting it, suppressing it, or reshaping it—then the brain is doing real computation.

This distinction matters because it shows perception isn’t about stacking sensory channels side by side. It’s about transformation.

Inside the multisensory neuron

Some neurons are wired to respond to more than one sensory modality. A single neuron might fire to a sound, fire to a visual cue, and then respond even more strongly when those cues occur together.

What Stein and Stanford showed is that this extra response follows rules. Sometimes the combined signal produces a dramatic boost. Other times, the neuron actually responds less when multiple senses are involved.

That might seem counterintuitive. Why would the brain ever dampen a response when it has more information? Because integration isn’t about maximizing input—it’s about deciding what matters.

When more becomes less—and why that’s useful

One of the most influential insights from the paper is that multisensory integration can enhance or depress neural responses. A combined sight-and-sound signal might amplify activity, or it might suppress it, depending on context.

This led to a deeper realization: neurons don’t combine signals in a single way. Sometimes the response to two senses is greater than the sum of their parts. Sometimes it’s roughly equal. And sometimes it’s far less than you’d expect.

Out of this came a principle that now appears everywhere in multisensory research: inverse effectiveness. The weaker or noisier the individual signals, the more the brain gains by combining them. When each sense is already clear and strong, integration adds little. But when information is uncertain—dim light, background noise, ambiguity—the benefits of fusion become dramatic.

This helps explain why multisensory processing plays such a powerful role in development, in challenging environments, and in many clinical contexts. Integration is not a luxury. It’s a strategy for dealing with uncertainty.

Space and time set the boundaries

The brain doesn’t integrate signals indiscriminately. Stein and Stanford showed that multisensory neurons obey strict spatial and temporal constraints.

Signals are most likely to be fused when they come from the same place. If a sound originates on the left and a visual cue appears on the right, neurons are far less likely to treat them as part of the same event. This spatial rule reflects a basic assumption built into the nervous system: things that belong together tend to happen together in space.

Timing matters just as much. The brain operates with a temporal binding window—a span of time during which signals can still be linked even if they don’t arrive simultaneously. This window accounts for the fact that sound, light, and touch travel at different speeds and are processed at different rates. Integration works best when neural responses overlap in time, not merely when stimuli occur at the exact same instant.
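
A rough back-of-the-envelope sketch of why such a window is needed (the latencies and window width here are assumptions for illustration, not values from the paper): sound travels at about 343 m/s while light arrives effectively instantly, so the gap between the two neural responses grows with distance until it falls outside the window.

    SPEED_OF_SOUND = 343.0  # m/s; light treated as instantaneous here

    def onset_gap_ms(distance_m, visual_latency_ms=70.0, auditory_latency_ms=40.0):
        """Gap between visual and auditory response onsets,
        using assumed (illustrative) processing latencies."""
        sound_travel_ms = 1000.0 * distance_m / SPEED_OF_SOUND
        return abs(visual_latency_ms - (auditory_latency_ms + sound_travel_ms))

    def binds(distance_m, window_ms=100.0):
        """Fuse only if the two responses overlap within the window."""
        return onset_gap_ms(distance_m) <= window_ms

    for d in (1, 10, 30, 60):
        print(f"{d:>2} m: gap = {onset_gap_ms(d):5.1f} ms -> binds: {binds(d)}")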

Together, these spatial and temporal rules ensure that integration supports coherent perception rather than confusion.

A midbrain structure with outsized influence

Much of Stein and Stanford’s work focused on the superior colliculus, a midbrain structure involved in orienting the eyes and head, shifting attention, and responding quickly to important events.

The superior colliculus turned out to be densely packed with multisensory neurons, making it an ideal place to study integration at the level of individual cells. When integration occurs here, behavior improves: responses are faster, localization is sharper, reactions are more efficient.

But one of the paper’s most striking findings is that the superior colliculus doesn’t work alone.

Integration is not a reflex—it’s a circuit

When researchers temporarily deactivated certain cortical areas, superior colliculus neurons still responded to sights and sounds. But something crucial disappeared. The extra boost from combining senses vanished. So did the behavioral advantages.

This showed that multisensory integration is not a simple bottom-up reflex. It depends on communication between cortex and midbrain. Integration emerges from a distributed circuit, shaped by experience, context, and higher-level processing.

Learning to fuse the senses

Perhaps the most surprising insight in the paper is that multisensory integration is not fully present at birth. Early in development, neurons may respond to multiple senses, but they don’t yet integrate them effectively.

Integration has to be learned.

Animals raised without normal cross-sensory experience—such as visual input paired with sound—fail to develop typical multisensory integration. The brain needs correlated experience to discover which signals belong together.

This makes multisensory integration a powerful example of experience-dependent plasticity. The brain doesn’t just receive the world. It learns how to bind it.

Cortex adds meaning, not just alignment

In higher cortical areas, multisensory integration becomes less about where and when, and more about meaning. Signals are evaluated for context, relevance, and semantic fit.

A voice paired with a matching face enhances neural responses. A mismatch can suppress them. Integration here reflects interpretation, not just detection.

This reveals an important shift: multisensory integration is not one process but many. Each brain region integrates information in ways that serve its goals—action, communication, prediction, understanding.

Rethinking “unisensory” cortex

The paper ends with a question that still unsettles neuroscience. If even early sensory areas receive input from other senses, does it still make sense to talk about purely visual or auditory cortex?

Stein and Stanford don’t argue for abandoning these labels altogether. Instead, they suggest a more nuanced view—one that recognizes gradients, transitional zones, and widespread multisensory influence.

Perception, in this view, is never purely unisensory. It is shaped by context from the start.

Why this paper still matters

Nearly two decades later, this work remains foundational because it demonstrated that multisensory integration is nonlinear, rule-governed, learned, and behaviorally meaningful. It showed that perception is not passive reception but active synthesis—built from circuits that balance signal strength, uncertainty, experience, and purpose.

That insight continues to shape how we think about attention, development, peripersonal space, predictive processing, and sensory differences in autism and ADHD.

In short, the paper taught us that the brain doesn’t simply sense the world. It actively constructs it—one integrated moment at a time.

Reference
Stein, B. E., & Stanford, T. R. (2008). Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews Neuroscience, 9(4), 255–266. https://doi.org/10.1038/nrn2331