The brain sees a qualitative difference between “small” objects—ones we usually pick up, such as paperclips or strawberries—and “large” objects—ones we use our bodies to interact with, such as chairs or cars. While researchers have previously identified brain regions that recognize specific objects like faces and letters, this discovery, published yesterday (June 21) in Neuron, is one of the first documented “rules” about how people interpret the world around them.
“This paper stands out in that it found a very large-scale organization that covers just about all the parts of the visual cortex that are responsive to shape of any kind,” said visual neuroscientist Ed Connor, director of the Zanvyl Krieger Mind/Brain Institute at Johns Hopkins University, who was not involved in the study. “Instead of a finding of a small area of specialization, they are describing an overall organizing principle.”
Talia Konkle and Aude Oliva at the Massachusetts Institute of Technology’s Department of Brain and Cognitive Sciences used fMRI to image participants’ brains as they were shown pictures of objects categorized as either big or small. All of the objects were presented at the same size on screen, so that the amount of space an image took up on the retina was not a factor; only the brain’s perception of the object’s real-world size mattered.
They found that a broad region of the brain lights up as people view the objects, but depending on an object’s size, some parts were more active than others. The parts of the brain that responded to large objects overlapped with regions known to be active when identifying spaces, such as streets or elevators, while small-object areas were close to regions that process information about tools. Oliva said this agrees with the original premise they wanted to test: Does the brain organize shapes based on whether they are an object or a space? In other words, is it something we use with our hands, or something we interact with by moving within it?
“It suggests you don’t have a dichotomy between an object like a chair and a space like an elevator,” said Oliva. “In both cases you put your body in it, you go towards it, you interact with it.”
To test the robustness of the division, Konkle and Oliva showed participants images of tiny cars and giant apples, and found that the large-object areas of the brain still responded to the cars and the small-object areas to the apples, suggesting that the size of the real-world object still held sway over the brain’s activity. The researchers also tapped participants’ imaginations, asking them to visualize normal and odd-sized objects, such as a “tiny piano” or “giant peach.” While they did see some slight shifts in the regions activated in this exercise, the brain’s overall organization still categorized objects based on their real-world sizes, showing a preference for reality over abstract concepts.
“It’s not just any organizing principle,” said Connor. “It’s one that immediately resonates as something that could be very important for understanding our interaction with objects.”
That understanding could inform future robotics, Oliva noted. Knowing how the brain defines objects could improve artificial systems like robotic arms that rely on signaling from the brain of a paralyzed person, for example. But to further put the concept to the test, the team is working on what the brain does with medium-sized objects, and Oliva suspects the answer lies in how we use the object.
“If you have a suitcase, are you manipulating it with your hands or is it acting as a landmark because you are going towards it?” said Oliva. “Those are questions we are going to start looking at.”
T. Konkle and A. Oliva, “A real-world size organization of object responses in occipitotemporal cortex,” Neuron, 74:1114-1124, 2012.