For the first time, scientists report they have devised a method that uses functional magnetic resonance imaging (fMRI) brain recordings to reconstruct continuous language. The findings are the next step in the quest for better brain-computer interfaces, which are being developed as an assistive technology for people who can't speak or type.
In a preprint posted September 29 on bioRxiv, a team at the University of Texas at Austin details a "decoder," or algorithm, that can "read" the words a person is hearing or thinking during an fMRI brain scan. While other teams had previously reported some success in reconstructing language or images based on signals from implants in the brain, the new decoder is the first to accomplish this with a noninvasive method.
“If you had asked any cognitive neuroscientist in the world twenty years ago if this was doable, they would have laughed ...