
Locating Language within the Brain

Researchers map the mental semantic systems of podcast listeners.

Apr 27, 2016
Tanya Lewis

Researchers used fMRI to create semantic maps of the brain while people listened to “The Moth Radio Hour.” ©ALEXANDER HUTH/THE REGENTS OF THE UNIVERSITY OF CALIFORNIA

To better understand how the brain processes language, researchers from the University of California (UC), Berkeley, and their colleagues used functional magnetic resonance imaging (fMRI) to map the brains of people listening to a storytelling podcast. Using the resulting maps, the team could accurately predict the study participants’ neural responses to hearing new stories. And these responses were surprisingly consistent across individuals, according to the team’s study, published today (April 27) in Nature.

“This paper nicely illustrates both the potential power and limitations of purely data-driven methods for evaluating functional brain-imaging data,” Alex Martin, chief of cognitive neuropsychology at the National Institute of Mental Health, who was not involved in the work, wrote in an email to The Scientist. “What is unclear,” he continued, “is whether any new organizational principles emerge from these data, [and] how do we validate these findings?”

Previous neuroimaging studies of how the brain interprets speech have revealed a group of brain areas called the semantic system that appears to represent the meaning of language. Traditionally, these studies have focused on a single, narrow question or hypothesis about how the brain represents word or sentence meanings.

To map the brain’s semantic representation more broadly, study coauthor Jack Gallant of UC Berkeley and colleagues scanned the brains of seven graduate student volunteers while the study participants listened to more than two hours of stories from “The Moth Radio Hour.”

“We wanted to do the mapping when the brain was in as natural a state as possible,” Gallant told The Scientist.

The team quantified the response of small chunks, or voxels, of brain tissue to different concepts in the stories by measuring blood flow. First, the researchers computed how often certain words in the stories occurred alongside a set of 985 common English words (for example, “month” and “week” are often found together). They then used a regression model to estimate how these common words produced responses in each voxel for every volunteer.
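The modeling pipeline described above can be sketched in a few lines: build co-occurrence features for the stimuli, then fit a regularized linear regression per voxel. This is a minimal illustrative toy, not the authors' code; the array sizes, noise levels, and regularization strength are all assumptions chosen for demonstration (the study used 985 basis words and real fMRI data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each stimulus word is described by how often it
# co-occurs with a small vocabulary of common words (985 in the study;
# shrunk here for illustration), and each voxel's response is modeled as
# a linear function of those features.
n_samples, n_basis, n_voxels = 200, 50, 10

X = rng.poisson(1.0, size=(n_samples, n_basis)).astype(float)  # co-occurrence features
true_w = rng.normal(size=(n_basis, n_voxels))                  # simulated "ground truth"
Y = X @ true_w + rng.normal(scale=0.5, size=(n_samples, n_voxels))  # noisy voxel responses

# Ridge regression: one weight vector per voxel, fit jointly in closed form.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_basis), X.T @ Y)

# Predict voxel responses to "new" stimuli the model has never seen.
X_new = rng.poisson(1.0, size=(50, n_basis)).astype(float)
Y_pred = X_new @ W
```

In practice, prediction quality would be scored per voxel, for example by correlating predicted and measured time courses on the held-out story, which is how voxels with reliable semantic tuning are identified.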

The researchers then used this model to predict fMRI responses when the volunteers listened to a story they had not heard before. The model accurately predicted brain activity in a variety of areas, including the temporal cortex, parietal cortex, and parts of the prefrontal cortex.

Next, the researchers set out to determine what type of semantic information each part of the cortex represented. Because their data contained too many dimensions to feasibly model, the researchers used principal component analysis to home in on the three dimensions that preserve most of the information. They used these dimensions to tile the brains of each participant with color-coded semantic maps, in which different cortical regions corresponded to concepts such as people, places, or visual properties.
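The dimensionality-reduction step can be sketched as a standard principal component analysis via singular value decomposition. The matrix below is a hypothetical stand-in for the fitted semantic weights (one row per voxel, one column per semantic feature); the sizes are illustrative assumptions, not the study's actual dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for fitted semantic weights: 500 voxels x 40 semantic features
# (the study had 985 features; reduced here for illustration).
weights = rng.normal(size=(500, 40))

# PCA via SVD: center the columns, decompose, keep the top 3 dimensions.
centered = weights - weights.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
top3 = centered @ Vt[:3].T  # each voxel projected onto 3 semantic dimensions

# Fraction of variance captured by each component.
var_explained = (S ** 2) / (S ** 2).sum()
```

In the study, the three projected coordinates per voxel were mapped to colors, which is what produces the color-coded semantic tiling of the cortical surface.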

Finally, Gallant’s team developed a computational method to combine the maps of the different individuals to create a general semantic atlas. Despite some variation, the maps were surprisingly similar across individuals. This, the authors noted, may in part have been an effect of the small, somewhat homogeneous sample (graduate students at UC Berkeley).

One of the more surprising findings was the functional symmetry between both brain hemispheres of the people studied, which appears to contradict decades of research on brain-injury patients suggesting a left-hemisphere bias in language processing. But most of these studies were focused on speech production, whereas the present study examined speech comprehension, Gallant told The Scientist.

The work adds fuel to a growing debate in the cognitive neuroscience community about the value of data-driven studies versus more-conventional, hypothesis-driven experiments.

“In cognitive neuroscience in general, we’re in a transition period between hypothesis- or theory-driven investigations and data-driven investigations,” Anjan Chatterjee at the University of Pennsylvania Perelman School of Medicine who was not involved in the study told The Scientist. The fundamental issue with data-driven approaches, he said, is they “can ferret out patterns, but that tells you nothing at all about the meaning of those patterns.”

“I have great admiration for the technical savvy displayed here,” David Poeppel of New York University wrote in an email. “But based on results such as these, it's pretty unlikely that we would change our conceptualizations of semantics or the neural basis of language processing.”

Uri Hasson of Princeton University, who also studies language representation in response to real-world stimuli but was not involved in the present work, was in favor of using data-driven approaches in combination with hypothesis-driven ones. "There is no one recipe to do science," he said. 

A. Huth et al., “Natural speech reveals the semantic maps that tile human cerebral cortex,” Nature, doi:10.1038/nature17637, 2016.
