For more than a decade, Alexander Huth from the University of Texas at Austin had been striving to build a language decoder—a tool that could extract a person’s thoughts noninvasively from brain imaging data. Earlier this year, he succeeded.1
To build a language decoder, Huth first needed functional MRI (fMRI) data to feed into the model. He and his team recorded brain activity from participants as they listened to 16 hours of narrative podcasts such as The Moth Radio Hour and The Modern Love Podcast. The team then used these data to teach an artificial intelligence-based decoder which patterns of brain activity corresponded to which language features. Finally, they had the participants memorize a new story, one the decoder had never encountered, and narrate it silently in their heads. The model generated guesses about what each participant was thinking and ranked those guesses by how well they matched the participant’s brain activity.
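To make that guess-and-rank loop concrete, here is a minimal, self-contained Python sketch. It is illustrative only: propose_continuations and predict_fmri are toy stand-ins for the two components the study actually combined, a GPT-style language model that proposes likely next words and a per-subject encoding model that predicts the fMRI activity a candidate sentence should evoke.

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_continuations(prefix, n=5):
    """Toy stand-in for a language model (GPT-1 in the study) that
    proposes likely next words given the text decoded so far."""
    vocab = ["drive", "license", "she", "started", "yet"]
    return [prefix + [w] for w in rng.choice(vocab, size=n)]

def predict_fmri(candidate, n_voxels=100):
    """Toy stand-in for a fitted encoding model that maps a word
    sequence to the brain activity it should evoke."""
    seed = abs(hash(" ".join(candidate))) % (2**32)
    return np.random.default_rng(seed).normal(size=n_voxels)

def decode_step(beam, observed, beam_width=3):
    """Score every candidate continuation against the observed brain
    activity and keep the best few (a beam search over word sequences)."""
    candidates = [c for seq in beam for c in propose_continuations(seq)]
    scores = [np.corrcoef(predict_fmri(c), observed)[0, 1] for c in candidates]
    ranked = sorted(zip(scores, candidates), key=lambda pair: -pair[0])
    return [seq for _, seq in ranked[:beam_width]]

observed_activity = rng.normal(size=100)   # one fMRI volume (toy data)
beam = [["she"]]
for _ in range(4):                         # extend the guess word by word
    beam = decode_step(beam, observed_activity)
print("best guess:", " ".join(beam[0]))
```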
The decoder wasn’t perfect. It struggled to preserve pronouns, for example, and it mixed up first and third person. But it successfully extracted the meaning of what participants thought. For example, when the scientists directed a person to think, “I don’t have my license yet,” the decoder generated the sentence, “She has not even started to learn to drive yet.” It captured the gist of what the participant was thinking, explained Huth. The decoder even captured the flavor of videos that participants watched, which Huth found shocking.
For the first time in history, there is a system other than the human that’s able to do something that looks like language.
—Alexander Huth, University of Texas at Austin
Huth hopes that the technique will help people who are unable to speak to communicate again. But these types of experiments may also tell scientists something fundamental about how the brain understands and organizes meaning. As artificial intelligence more accurately mimics human speech, cognitive neuroscientists hope that it can reveal how humans distinguish “apple” from “orange.”
In Huth’s study, he used a large language model called GPT-1, an early version of the engine that runs ChatGPT, to decode brain activity. “For the first time in history, there is a system other than the human that’s able to do something that looks like language,” said Huth, but whether it is anything like a human brain remains an open question. “And these language models are just wildly useful.”
Why is it so hard to tell what people think?
The algorithm’s ability to decode imagined speech was pretty remarkable, said Huth. Other scientists agree. “It’s amazing that something like fMRI, which has such a slow temporal resolution, is even capable of doing this,” said Laura Gwilliams, a neuroscientist at Stanford University. fMRI is agonizingly slow compared to the speed of human thoughts or even human speech. The technique measures changes in blood flow within the brain as a proxy for neural activity, and those changes unfold over several seconds.2
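That sluggishness comes from the hemodynamic response: a single spoken word triggers a blood-flow change that rises and falls over many seconds. A minimal Python sketch makes the lag concrete, modeling the response with a gamma function, a common rough approximation; the parameters here are purely illustrative.

```python
import numpy as np
from scipy.stats import gamma

tr = 0.5                                  # sampling interval in seconds
t = np.arange(0, 20, tr)
hrf = gamma.pdf(t, a=6)                   # crude gamma-shaped hemodynamic response

stimulus = np.zeros(80)                   # 40 seconds of silence...
stimulus[10] = 1.0                        # ...with one word spoken at t = 5 s

# The measured BOLD signal is roughly the stimulus smeared out by the
# hemodynamic response, so the peak arrives seconds after the word itself.
bold = np.convolve(stimulus, hrf)[:len(stimulus)]
print("word at t = 5.0 s; signal peaks at t = %.1f s" % (bold.argmax() * tr))
```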
Scientists have few other good options for studying language in the brain. Humans share many senses and cognitive processes with other animals, but not language. Scientists can rarely use invasive electrical recordings, which have far higher temporal resolution, in humans; the main opportunity arises when patients are already undergoing treatment for neurological diseases such as epilepsy.
The question of how the brain distinguishes meaning is also extremely complicated, Gwilliams explained. The average English speaker knows 15,000 words and countless phrases, and understanding whether one phrase is similar to another is a formidable task. Probing that experimentally would require an enormous number of measurements, said Alona Fyshe, a computational neuroscientist at the University of Alberta.
Most of what we know about language is based on linguistic theory rather than experimental data, said Gwilliams. While we generally know which parts of the brain process language, we don’t know which parts handle syntactic information (word order) or semantic information (the meaning of words).
Even if fMRI is a noisy, slow approximation of the brain’s electrical activity, Huth wasn’t particularly surprised by his findings. Researchers have predicted brain activity from words for years and have even managed to decode music and dreams from it.2 Since the early days of neural networks trained to recreate language, scientists have theorized that brains and neural networks share similar properties.
What can AI reveal about how we decipher meaning?
While neural networks take inspiration from the brain, they are not specifically designed to mimic it, explained Fyshe. Computational neural networks are built in layers. Each layer is composed of multiple building blocks called neurons, which are connected in various ways. When a neural network performs a task, an input goes into the network, and the artificial neurons in each layer extract information from that input by performing a series of computations, with subsequent, deeper layers gathering and combining information from previous layers, much like how information is hierarchically organized in the brain. But unlike a human brain, neural networks can, in principle, be taken apart and analyzed piece by piece, making them potentially useful tools for studying the brain.
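That layered structure is simple to sketch in code. The following minimal Python example uses random, untrained weights, so it computes nothing meaningful; it only shows how each layer of artificial “neurons” recombines the previous layer’s outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer of artificial neurons: a linear combination of the
    previous layer's outputs followed by a simple nonlinearity (ReLU).
    In a trained network these weights are learned, not random."""
    w = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(0.0, x @ w)

x = rng.normal(size=16)        # the input (say, features of a word)
for width in (32, 32, 8):      # each deeper layer recombines the last
    x = layer(x, width)
print("final representation:", x.round(2))
```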
There are two ways in which artificial neural networks help scientists study the brain. In the first approach, scientists use artificial neural networks or other computational models to predict, or encode, neural responses to a stimulus, such as an auditory word input. This gives researchers an abstract model of the brain. When training an encoding model, researchers feed an auditory stimulus into a neural network, where the stimulus goes through multiple layers of computation. Each layer spits out a series of numbers, which can be mapped onto the neural response.5,6
In the second approach, researchers decode a stimulus from actual neural responses, such as fMRI measurements, using artificial neural networks. These data provide insights into which brain areas are active when a stimulus is present.
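Both approaches can be sketched with ordinary ridge regression on simulated data: the encoding model maps a network’s representation of the stimulus to brain responses, and the decoding model maps brain responses back to the representation. Everything below is toy data; real studies use representations from actual language models and measured fMRI, but the regression machinery is much the same.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy data: 200 time points, a 10-number network representation of the
# stimulus at each point, and 50 voxels of simulated brain response.
features = rng.normal(size=(200, 10))     # what a network layer "spits out"
true_map = rng.normal(size=(10, 50))
brain = features @ true_map + 0.1 * rng.normal(size=(200, 50))

# Encoding: predict brain responses from the network's representation.
encoder = Ridge(alpha=1.0).fit(features[:150], brain[:150])
print("encoding R^2:", round(encoder.score(features[150:], brain[150:]), 3))

# Decoding: go the other way, recovering the representation from the brain.
decoder = Ridge(alpha=1.0).fit(brain[:150], features[:150])
print("decoding R^2:", round(decoder.score(brain[150:], features[150:]), 3))
```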
Back in 2016, Fyshe tested the first approach; her team used data from multiple modalities of noninvasive brain imaging to study how the brain responded to single words. Their goal was to determine whether the brain encodes information the way language-producing neural networks do, specifically those trained to predict the surrounding words in a sentence. An algorithm called Skip-gram, first introduced three years prior to their study, served as the point of comparison.3
They found that the brain and neural networks group words together in similar ways. For example, both might group words like “apple” and “banana” together, recognizing them as more closely related than “banana” and “car.” Fyshe’s team discovered this by examining how similar brain responses were for word pairs like banana and apple versus banana and car, and how likely neural networks were to predict banana, apple, or car as the next word in a sentence like, “she ate a…”
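Comparisons like these rest on representing each word as a vector and measuring the angle between vectors. The Python example below uses invented four-dimensional vectors; real Skip-gram embeddings have hundreds of dimensions and are learned from text, but the geometry works the same way.

```python
import numpy as np

# Invented word vectors: apple and banana point in nearly the same
# direction, while car points somewhere else entirely.
vectors = {
    "apple":  np.array([0.9, 0.8, 0.1, 0.0]),
    "banana": np.array([0.8, 0.9, 0.0, 0.1]),
    "car":    np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print("apple vs banana:", round(cosine(vectors["apple"], vectors["banana"]), 2))  # ~0.99
print("banana vs car:  ", round(cosine(vectors["banana"], vectors["car"]), 2))    # ~0.11
```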
Neural networks like Skip-gram couldn’t reproduce natural language very well because they represented each word with a single vector and therefore couldn’t capture words with multiple meanings. That changed in 2017 with the arrival of transformer neural networks, the architecture underlying models like GPT, which could finally mimic something like human speech.4
Transformer neural networks spark connections
In 2021, cognitive neuroscientist Evelina Fedorenko from the Massachusetts Institute of Technology and her team published a paper evaluating how well several state-of-the-art artificial neural network models predicted neural responses.7 Many of these models were transformer language models, which were taking the field by storm with their ability to produce human-like language, said Fedorenko. Unlike earlier neural networks, which mainly break an input down into similar components (for example, a word-embedding model groups similar words together), transformers predict an outcome based on everything that came before. Fedorenko tested how well transformer-based models that guess the next word in a sentence predicted the neural activity evoked by words or phrases participants heard. “They capture neural responses very well,” she said. She concluded that to process language, the human brain makes predictions about what word comes next based on what it has already heard, much as an artificial neural network does.5
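The next-word objective at the heart of these comparisons is easy to sketch. The probabilities below are invented for illustration; a real transformer assigns a probability to every word in its vocabulary given the full preceding context, and a word’s surprisal (its negative log probability) is one of the quantities researchers compare against brain responses.

```python
import numpy as np

# Invented next-word probabilities for one context. A real transformer
# computes a distribution like this over tens of thousands of words.
next_word_probs = {
    ("she", "ate", "a"): {"banana": 0.45, "apple": 0.40, "car": 0.001},
}

context = ("she", "ate", "a")
for word, p in next_word_probs[context].items():
    # Surprisal (-log p): words the model finds unlikely tend to evoke
    # larger neural responses in predictive-processing accounts.
    print(f"{word:>7}: p = {p:.3f}, surprisal = {-np.log(p):.2f}")
```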
Next, Fedorenko focused on finding the key components models need to map on to neural activity. “We have this amazing set of new toolkits for probing brains,” she said. “It’s really a revolution.”
The key to many recent advances, including those in his own paper, Huth explained, was the rise of transformer neural networks that predict subsequent words. For his study, Huth used the transformer neural network GPT-1 as the basis for the decoder.
We have this amazing set of new toolkits for probing brains. It’s really a revolution.
—Evelina Fedorenko, Massachusetts Institute of Technology
One key finding from Huth’s study is that the decoder could use any part of the brain to accurately predict what participants were thinking, although only the prefrontal cortex was active the entire time. There didn’t seem to be a specific part of the brain specialized to extract meaning from the sentences. The decoder also didn’t pick up on syntactic information well, but it reliably returned the flavor of what the participant was thinking. According to Huth, this means that the brain cares more about meaning than syntax and implies that all parts of the brain keep track of meaning-related information, although they might do different things with it. “I’m a big proponent that it’s all meaning,” he said.
Still, connecting artificial intelligence algorithms back to biology has been a sore spot for computational neuroscientists, especially as generative artificial intelligences such as GPT proliferate. The “neurons” in neural networks are unlike neurons in the brain, and it’s hard to relate what they do back to biology.
“In some ways, I know exactly what a neural network is doing,” Fyshe said. “In other ways, I have no idea what a neural network is doing.”
According to Fyshe, the inputs and outputs of neural networks are recognizable, but the stuff in between is hard to interpret. Each layer of the neural network generates numbers using known computational functions, but it’s very hard to relate those numbers to anything meaningful in real life.
“The way that transformers are built is very nonbiologically plausible,” Fedorenko said. “At least what people have been thinking about human neural circuits, it’s really pretty different.”
The next step, said Fedorenko, is to relate computations that the language models perform to parts of the brain. “But we’re not quite there yet,” she said.
Even if they don’t behave like brains, neural networks are getting pretty good at predicting what humans are thinking. But does that amount to mind reading? Huth doesn’t think so. For one, his decoder was not generalizable across subjects: a model trained on one person doesn’t work on another. People can also resist the decoder by thinking about something else, and it can’t read memories.
That doesn’t eliminate privacy concerns in the future, however, as neuroscientists build ever more accurate decoders. That’s the way the field is trending, said George McConnell, a neuroscientist at Stevens Institute of Technology. No one knows how good this technology will get.
Gwilliams agreed, adding that the field should be prepared to mull over privacy questions as encoding and decoding models improve and as advances in imaging allow for more precise and less invasive measurements of the human brain. These advances are happening quickly, she said. “It’s important that we’re having these conversations now so that we’re not caught unaware,” said Gwilliams.
- Tang J, et al. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci. 2023;26(5):858-866.
- Naselaris T, et al. Encoding and decoding in fMRI. NeuroImage. 2011;56(2):400-410.
- Lazaridou A, et al. Combining Language and Vision with a Multimodal Skip-gram Model. Published online 2015.
- Kriegeskorte N. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing. Annu Rev Vis Sci. 2015;1(1):417-446.
- Federer C, et al. Improved object recognition using neural networks trained to mimic the brain’s statistical properties. Neural Networks. 2020;131:103-114.
- Kriegeskorte N. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing. Annu Rev Vis Sci. 2015;1(1):417-446.
- Schrimpf M, et al. The neural architecture of language: Integrative modeling converges on predictive processing. Proc Natl Acad Sci USA. 2021;118(45):e2105646118.