Researchers have reverse-engineered the sounds of human speech using only the patterns of neuron firing recorded from subjects listening to those sounds.
They measured nerve impulses in one of the auditory regions of patients’ brains while playing them recordings of words and sentences. After feeding the electrical impulses through an algorithm that interpreted certain characteristics of the sounds, such as volume changes between syllables in a word, the computer could recreate the words or sentences. The research, published in PLoS Biology on Monday (January 31), could help improve treatment for people with aphasia or locked-in syndrome.
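The reconstruction idea can be illustrated with a toy sketch. This is not the authors' actual pipeline; it is a minimal, hypothetical example in which simulated "neural" signals are a noisy linear mixture of a sound's spectrogram, and a simple ridge-regression decoder maps the neural signals back to the spectrogram. All array sizes and the noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a "true" spectrogram (time x frequency bins) and
# simulated electrode recordings that are a noisy linear mixture of it.
T, F, N = 200, 16, 64                      # time points, freq bins, electrodes
spectrogram = rng.random((T, F))
mixing = rng.standard_normal((F, N))       # how frequencies drive electrodes
neural = spectrogram @ mixing + 0.1 * rng.standard_normal((T, N))

# Fit a linear decoder (ridge regression) that maps neural activity back
# to the spectrogram -- the "stimulus reconstruction" idea in miniature.
lam = 1e-2                                  # ridge penalty (assumed)
W = np.linalg.solve(neural.T @ neural + lam * np.eye(N),
                    neural.T @ spectrogram)
reconstructed = neural @ W

# Correlation between true and reconstructed spectrograms as a quality score.
r = np.corrcoef(spectrogram.ravel(), reconstructed.ravel())[0, 1]
print(r)
```

In this idealized linear setting the reconstruction is nearly perfect; real neural responses are far noisier and nonlinear, which is why the reconstructed speech in the study sounds crude compared with the originals.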
“A major goal is to figure out how the human brain allows us to understand speech despite all the variability, such as a male or female voice, or fast or slow talkers,” first author Brian Pasley told Nature. While the sounds made on the computer are still somewhat crude in comparison to actual speech, they are recognizable when heard immediately after their natural counterparts. (Listen to a recording of the computer-generated words.)
True speech recognition from neuronal recordings may still be a ways off. But “this approach may enable [the authors] to start determining the kinds of transformations and representations underlying normal speech perception,” University College London neuroscientist Sophie Scott, who was not involved in the research, told Nature.