Researchers used fMRI to create semantic maps of the brain while people listened to “The Moth Radio Hour.” © Alexander Huth/The Regents of the University of California

To better understand how the brain processes language, researchers from the University of California (UC), Berkeley, and their colleagues used functional magnetic resonance imaging (fMRI) to map the brains of people listening to a storytelling podcast. Using the resulting maps, the team could accurately predict the study participants’ neural responses to hearing new stories. And these responses were surprisingly consistent across individuals, according to the team’s study, published today (April 27) in Nature.

“This paper nicely illustrates both the potential power and limitations of purely data-driven methods for evaluating functional brain-imaging data,” Alex Martin, chief of cognitive neuropsychology at the National Institute of Mental Health, who was not involved in the work, wrote in an email to The Scientist. “What...

Previous neuroimaging studies of how the brain interprets speech have revealed a group of brain areas called the semantic system that appears to represent the meaning of language. Traditionally, these studies have focused on a single, narrow question or hypothesis about how the brain represents word or sentence meanings.

To map the brain’s semantic representation more broadly, study coauthor Jack Gallant of UC Berkeley and colleagues scanned the brains of seven graduate student volunteers while the study participants listened to more than two hours of stories from “The Moth Radio Hour.”

“We wanted to do the mapping when the brain was in as natural a state as possible,” Gallant told The Scientist.

The team quantified the response of small chunks, or voxels, of brain tissue to different concepts in the stories by measuring blood flow. First, the researchers computed how often each word in the stories occurred alongside each of a set of 985 common English words (for example, “month” and “week” often appear together). They then fit a regression model to estimate how strongly these co-occurrence features drove the response of each voxel in every volunteer.
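In code, this step might look something like the minimal sketch below. The word list, story text, and fMRI data are placeholders (the study used 985 basis words and real BOLD recordings), and the ridge-style regularized regression is an assumption of the kind commonly used for fMRI encoding models, not necessarily the authors’ exact model:

```python
import numpy as np
from sklearn.linear_model import Ridge

basis_words = ["month", "week", "red", "above"]  # stand-in for the 985 common words

def cooccurrence_features(story_words, window=10):
    """Count, for each position in the story, how often each basis word appears nearby."""
    features = np.zeros((len(story_words), len(basis_words)))
    for i in range(len(story_words)):
        context = story_words[max(0, i - window): i + window + 1]
        for j, word in enumerate(basis_words):
            features[i, j] = context.count(word)
    return features

story_words = "last month and again last week she told the story".split()
X = cooccurrence_features(story_words)             # (time points x semantic features)
rng = np.random.default_rng(0)
Y = rng.standard_normal((len(story_words), 1000))  # placeholder BOLD data (time x voxels)

# One regularized linear fit across all voxels at once; each column of
# model.coef_.T maps the semantic features onto a single voxel's response.
model = Ridge(alpha=1.0).fit(X, Y)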

The researchers then used this model to predict fMRI responses as the volunteers listened to a story they had not heard before. The model accurately predicted activity in a variety of brain regions, including the temporal cortex, the parietal cortex, and parts of the prefrontal cortex.
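The article does not specify how the authors scored these predictions, but a common approach for held-out evaluations of encoding models, sketched here with placeholder data, is to correlate each voxel’s predicted time course with its measured one:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 985))   # training features (time x features)
Y_train = rng.standard_normal((200, 1000))  # training BOLD data (time x voxels)
model = Ridge(alpha=1.0).fit(X_train, Y_train)

X_test = rng.standard_normal((40, 985))     # features for the unheard story
Y_test = rng.standard_normal((40, 1000))    # measured responses to that story
Y_pred = model.predict(X_test)

def per_voxel_correlation(pred, true):
    """Pearson correlation between predicted and measured time courses, one score per voxel."""
    pred_c = pred - pred.mean(axis=0)
    true_c = true - true.mean(axis=0)
    return (pred_c * true_c).sum(axis=0) / np.sqrt(
        (pred_c ** 2).sum(axis=0) * (true_c ** 2).sum(axis=0)
    )

scores = per_voxel_correlation(Y_pred, Y_test)  # high score = well-predicted voxel
```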

Next, the researchers set out to determine what type of semantic information each part of the cortex represented. Because the data contained too many dimensions to model feasibly, the researchers used principal component analysis to home in on the three dimensions that preserved most of the information. They used these dimensions to tile each participant’s brain with a color-coded semantic map, in which different cortical regions corresponded to concepts such as people, places, or visual properties.
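The reduction might look like the following sketch, in which each voxel’s learned semantic weight vector is projected onto three principal components; mapping those three coordinates to RGB channels (an assumption here, not a detail from the article) is one simple way to produce a color-coded cortical map:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Placeholder for the per-voxel regression weights (voxels x semantic features).
voxel_weights = rng.standard_normal((1000, 985))

pca = PCA(n_components=3)
coords = pca.fit_transform(voxel_weights)   # (voxels x 3): the three retained dimensions

# Rescale each component to [0, 1] so the three coordinates can serve as
# red, green, and blue channels when painting the cortical surface.
rgb = (coords - coords.min(axis=0)) / (coords.max(axis=0) - coords.min(axis=0))
```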

Finally, Gallant’s team developed a computational method to combine the maps of the different individuals to create a general semantic atlas. Despite some variation, the maps were surprisingly similar across individuals. This, the authors noted, may in part have been an effect of the small, somewhat homogeneous sample (graduate students at UC Berkeley).
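The article does not describe the combination method itself, so the following is only a naive stand-in (the published approach is more sophisticated): average each voxel’s map coordinates across subjects once the individual maps have been resampled into a shared cortical space, and check per-voxel agreement:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder: seven subjects' three-dimensional semantic maps, already
# aligned to a common cortical space (subjects x voxels x map dimensions).
subject_maps = rng.standard_normal((7, 1000, 3))

atlas = subject_maps.mean(axis=0)  # simple per-voxel average across subjects

# Per-voxel spread across subjects: low values mark regions where the
# individual maps agree, i.e. where the atlas is most trustworthy.
consistency = subject_maps.std(axis=0).mean(axis=1)
```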

One of the more surprising findings was the functional symmetry between the two brain hemispheres of the people studied, which appears to contradict decades of research on brain-injury patients suggesting a left-hemisphere bias in language processing. But most of those studies focused on speech production, whereas the present study examined speech comprehension, Gallant told The Scientist.

The work adds fuel to a growing debate in the cognitive neuroscience community about the value of data-driven studies versus more-conventional, hypothesis-driven experiments.

“In cognitive neuroscience in general, we’re in a transition period between hypothesis- or theory-driven investigations and data-driven investigations,” Anjan Chatterjee of the University of Pennsylvania Perelman School of Medicine, who was not involved in the study, told The Scientist. The fundamental issue with data-driven approaches, he said, is that they “can ferret out patterns, but that tells you nothing at all about the meaning of those patterns.”

“I have great admiration for the technical savvy displayed here,” David Poeppel of New York University wrote in an email. “But based on results such as these, it’s pretty unlikely that we would change our conceptualizations of semantics or the neural basis of language processing.”

Uri Hasson of Princeton University, who also studies language representation in response to real-world stimuli but was not involved in the present work, was in favor of using data-driven approaches in combination with hypothesis-driven ones. “There is no one recipe to do science,” he said.

A. Huth et al., “Natural speech reveals the semantic maps that tile human cerebral cortex,” Nature, doi:10.1038/nature17637, 2016.
