Researchers create a program that can use fMRI data to identify which musical pieces participants are hearing.
February 5, 2018
Researchers at the D’Or Institute for Research and Education in Brazil have created an algorithm that can use functional magnetic resonance imaging (fMRI) data to identify which musical pieces participants are listening to. The study, published last Friday (February 2) in Scientific Reports, involved six participants listening to 40 pieces of music from various genres, including classical, rock, pop, and jazz.
“Our approach was capable of identifying musical pieces with improving accuracy across time and spatial coverage,” the researchers write in the paper. “It is worth noting that these results were obtained for a heterogeneous stimulus set . . . including distinct emotional categories of joy and tenderness.”
The researchers first played different musical pieces for the participants and used fMRI to measure the neural signatures of each song. With that data, they taught a computer to identify brain activity that corresponded with the musical dimensions of each piece, including tonality, rhythm, and timbre, as well as a set of lower-level acoustic features. Then, the researchers played the pieces for the participants again while the computer tried to identify the music each person was listening to, based on fMRI responses.
The computer was successful in decoding the fMRI information and identifying the musical pieces around 77 percent of the time when it had two options to choose from. When the researchers presented 10 possibilities, the computer was correct 74 percent of the time.
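The paper itself does not publish its decoding code, but the identification scheme described above can be sketched in a few lines: an encoding model predicts the brain-activity pattern each candidate piece should evoke, and the decoder picks the candidate whose predicted pattern best matches the observed fMRI response. Everything below is a hypothetical toy version, with a random linear map standing in for the trained encoding model and simulated data standing in for real fMRI recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: each piece is an 8-dimensional feature vector
# (tonality, rhythm, timbre, low-level acoustics), and a linear map
# plays the role of the encoding model fit on real fMRI data.
n_features, n_voxels = 8, 50
encoding_weights = rng.normal(size=(n_features, n_voxels))

def predict_activity(piece_features):
    """Predicted brain-activity pattern for a piece under the toy encoding model."""
    return piece_features @ encoding_weights

def identify_piece(observed_activity, candidate_features):
    """Decode: return the index of the candidate piece whose predicted
    activity pattern correlates best with the observed pattern."""
    scores = [np.corrcoef(observed_activity, predict_activity(f))[0, 1]
              for f in candidate_features]
    return int(np.argmax(scores))

# Simulated trial: the true piece evokes its predicted pattern plus noise.
candidates = rng.normal(size=(10, n_features))  # a 10-piece candidate set
true_idx = 3
observed = (predict_activity(candidates[true_idx])
            + rng.normal(scale=0.3, size=n_voxels))

decoded = identify_piece(observed, candidates)
```

Enlarging the candidate set from 2 to 10 pieces makes chance performance drop from 50 to 10 percent, which is why the 74 percent figure on 10 options is the more striking result.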
“The combination of encoding and decoding models in the musical domain has the potential to extend our comprehension of how complex musical information is structured in the human auditory cortex,” the researchers write. “This will foster the development of models that can ultimately decode music from brain activity, and may open the possibility of reconstructing the contents of auditory imagination, inner speech, and auditory hallucinations.”