A new MIT study uncovers specialized neural circuits for processing materials like water and sand, distinct from those for rigid objects, revealing a sophisticated ‘physics engine’ in our visual cortex.
Imagine a rubber ball bouncing down a flight of stairs. You can almost hear the rhythmic thump-thump-thump as it careens off each step, its path predictable and its form constant. Now, picture a cascade of water flowing down those same stairs. It doesn’t bounce; it splashes, flows, and conforms to the shape of each step, its movement a chaotic and fluid dance. You perceive these two events as fundamentally different, not just in their appearance but in their very nature. Your brain understands, instantly and intuitively, that you can pick up the ball but would need a bucket to contain the water.
This effortless distinction, one we make countless times a day, turns out to be rooted in a profound and previously unknown division of labor within our brains. Neuroscientists at MIT have discovered that our visual system has dedicated, separate neural circuits for processing “things”—rigid or deformable objects—and “stuff”—amorphous substances like liquids or sand. This finding suggests our brain doesn’t just see the world; it actively simulates it, running different physical models depending on what it’s looking at.
For decades, neuroscience has made significant strides in understanding how we perceive solid objects. Early and influential work by researchers like Nancy Kanwisher, a senior author on the new study, helped map out the brain’s visual pathways. These studies identified key regions like the lateral occipital complex (LOC), located in the ventral visual pathway (often called the “what” pathway), which is crucial for recognizing the shapes of 3D objects. Another critical network is the frontoparietal physics network (FPN) in the dorsal visual pathway (the “how” pathway), which analyzes an object’s physical properties, such as its mass and stability.
However, this vast body of research had a significant blind spot: it almost exclusively used solid objects. “Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things,” explains Vivian Paulun, the study’s lead author. These materials behave in ways that defy the physics of solids. They flow, slosh, and pour, requiring entirely different strategies for interaction. This gap in knowledge prompted the MIT team to ask whether the brain, in its efficiency, might have evolved specialized regions to handle the unique challenges posed by “stuff.”
To investigate this, the researchers needed to show the brain materials in motion. Paulun turned to sophisticated simulation software, the kind typically used by visual effects artists, to create over 100 short video clips. These videos depicted a variety of “things” and “stuff” interacting with their environment in realistic ways. A rigid object might be seen bouncing down stairs, while a granular substance like sand would tumble and spread. A liquid would slosh inside a transparent container or flow over another object. This carefully controlled set of stimuli was the key to isolating the brain’s response to each category of material.
While participants watched these videos, their brains were scanned using functional magnetic resonance imaging (fMRI), a technique that measures brain activity by detecting changes in blood flow. The results were striking. The team found that both the object-recognizing LOC and the physics-analyzing FPN contained distinct subregions that showed a clear preference. One set of subregions became more active when participants viewed “things,” while an entirely separate, adjacent set lit up in response to “stuff.”
“Both the ventral and the dorsal visual pathway seem to have this subdivision, with one part responding more strongly to ‘things,’ and the other responding more strongly to ‘stuff,’” Paulun notes. This pattern, known as a double dissociation, is powerful evidence that the brain uses different neural machinery to process these two categories. It’s not just a matter of degree; it’s a fundamental split in processing.
This discovery lends weight to a fascinating hypothesis: the brain may operate much like the advanced physics engines used to create realistic graphics in video games and movies. These artificial engines don’t use a one-size-fits-all approach. They represent solid objects as a collection of connected points, or a mesh, which allows them to simulate bouncing and colliding. Fluids, on the other hand, are often represented as a collection of individual particles that can flow and rearrange themselves. The brain, it seems, may have stumbled upon a similar computational solution.
“The interesting hypothesis that we can draw from this is that maybe the brain, similar to artificial game engines, has separate computations for representing and simulating ‘stuff’ and ‘things,’” Paulun suggests. This internal simulation would be a critical step in preparing us to interact with the world.
And that is perhaps the most important implication of this research. This neural sorting system isn’t just for passive observation; it’s for action. As Nancy Kanwisher puts it, “When you’re looking at some fluid or gooey stuff, you engage with it in a different way than you do with a rigid object. With a rigid object, you might pick it up or grasp it, whereas with fluid or gooey stuff, you probably are going to have to use a tool to deal with it.” By categorizing a material as “stuff” or a “thing” at a very early stage of visual processing, the brain gets a head start on planning the appropriate motor response.
The journey into the brain’s physics engine is far from over. The researchers are now planning to explore whether the brain regions identified for “things” are directly connected to the motor circuits involved in planning grasping motions. They also hope to delve deeper into the “stuff” network to see if it processes more specific features, like the viscosity of a liquid or the texture of a granular substance. By revealing this fundamental division in our perceptual world, this study opens up a rich new territory for understanding how our minds build models of reality, one bounce, splash, and tumble at a time.