AI Decodes Speech and Hearing Based on Brain Activity

The proof-of-concept study could be a step toward better assisted communication devices for paralyzed people.

Jul 30, 2019
Shawna Williams


When people listened to questions from a predetermined set and spoke a response from a group of answer options, a computer program could correctly predict the question based on their brain activity most of the time, researchers report today (July 30) in Nature Communications.

The study, conducted on three people who had arrays of electrodes temporarily implanted in their brains to monitor their brain activity in preparation for surgery for epilepsy, was funded by Facebook and carried out at the University of California, San Francisco (UCSF).

“This is the first time this approach has been used to identify spoken words and phrases,” coauthor David Moses tells The Guardian. “It’s important to keep in mind that we achieved this using a very limited vocabulary, but in future studies we hope to increase the flexibility as well as the accuracy of what we can translate.” 

See “Computer Program Converts Brain Signals to a Synthetic Voice”

After training on a limited set of questions and answers, the computer model correctly decoded the question a participant heard 76 percent of the time, and the answer the participant gave 61 percent of the time, based on their brain activity, Moses and his colleagues report. Listening and speaking produced activity in different brain regions. While monitoring that activity required an invasive device, Facebook would ultimately like to build a noninvasive gadget that could convert a person’s imagined speech directly to text, with no typing required. “We expect that to take upwards of 10 years,” Mark Chevillet, a research director at Facebook Reality Labs, tells CNN Business. “This is a long-term research program.”

Meanwhile, study coauthor Edward Chang has begun another Facebook-sponsored effort, this one working with a single patient who cannot speak, to see whether tracking his brain activity with an electrode array can help him to communicate. “We’ve got a tall order ahead of us to figure out how to make that work,” Chang says in an interview with CNN Business.

Asked by The Guardian whether a “speech neuroprosthesis” could one day reveal people’s most private thoughts, Chang said that extracting a person’s inner thoughts is technically near-impossible. “I have no interest in developing a technology to find out what people are thinking, even if it were possible,” he tells the newspaper. “But if someone wants to communicate and can’t, I think we have a responsibility as scientists and clinicians to restore that most fundamental human ability.”

Shawna Williams is an associate editor at The Scientist. Email her at swilliams@the-scientist.com.