“[I was] somewhere, in a place like a studio to make a TV program or something,” a groggy study participant recounted (in Japanese). “A male person ran with short steps from the left side to the right side. Then, he tumbled.” The participant had recently been awoken by Masako Tamaki, a postdoc in the lab of neuroscientist Yukiyasu Kamitani of the ATR Computational Neuroscience Laboratories in Kyoto, Japan. He was lying in a functional magnetic resonance imaging (fMRI) scanner, doing his best to recall what he had been dreaming about. “He stumbled over something, and stood up while laughing, and said something,” the participant continued. “He said something to persons on the left side.”
At first blush, the story doesn’t seem particularly informative. But the study subject saw a man, not a woman. And he was inside some sort of workplace. That fragmented information is enough for Kamitani...
Knowing what is represented during sleep would help to understand the function of dreaming.—Yukiyasu Kamitani, ATR Computational Neuroscience Laboratories
Analyzing more than 200 dream reports—some 30–45 hours of interviews with each of three participants—Kamitani and his colleagues built a “dream-trained decoder” based on fMRI images of the V1, V2, and V3 areas of the visual cortex. “We find some rule, or mapping, or pattern between what the person is seeing and what activity is happening in the brain,” Kamitani explains. And it worked: the decoder predicted whether or not each of 20 object categories had appeared in a dream with 75–80 percent accuracy, Kamitani reported at the Society for Neuroscience meeting in New Orleans in October 2012.
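The basic logic of such a decoder—learn a mapping from brain-activity patterns to object categories, then apply it to new activity—can be illustrated with a toy sketch. The code below is not Kamitani’s actual pipeline; it uses synthetic “voxel” vectors and a simple nearest-centroid rule as a stand-in for the machine-learning decoder, and every name and number in it is hypothetical.

```python
import random

random.seed(0)

N_VOXELS = 50  # stand-in for fMRI voxels in a visual-cortex region

def make_pattern(signature, noise=0.5):
    # Synthetic "brain activity": a category signature plus Gaussian noise.
    return [s + random.gauss(0.0, noise) for s in signature]

# Two hypothetical activity signatures, e.g. "person present" vs. absent.
present_sig = [1.0 if i % 2 == 0 else 0.0 for i in range(N_VOXELS)]
absent_sig = [0.0 if i % 2 == 0 else 1.0 for i in range(N_VOXELS)]

# "Training" data: labeled patterns, analogous to activity recorded
# while participants viewed (or reported dreaming of) each category.
train = [(make_pattern(present_sig), 1) for _ in range(40)] + \
        [(make_pattern(absent_sig), 0) for _ in range(40)]

def centroid(patterns):
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(N_VOXELS)]

pos_c = centroid([p for p, y in train if y == 1])
neg_c = centroid([p for p, y in train if y == 0])

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def decode(pattern):
    # Nearest-centroid rule: report the category whose learned
    # average pattern is closest to the new activity pattern.
    return 1 if sq_dist(pattern, pos_c) < sq_dist(pattern, neg_c) else 0

# Held-out patterns play the role of activity recorded during sleep.
test_set = [(make_pattern(present_sig), 1) for _ in range(20)] + \
           [(make_pattern(absent_sig), 0) for _ in range(20)]
accuracy = sum(decode(p) == y for p, y in test_set) / len(test_set)
```

In the real study, one such binary decision per object category—does this category appear in the dream or not?—is what yields a per-category accuracy figure like the 75–80 percent reported.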
But while Kamitani’s dream-decoding study is interesting, says neurobiologist David Kahn of Harvard Medical School, the algorithms used are quite primitive, providing only a handful of clues about a dream’s content. “We still have a long way to go before we can actually re-create the story that is the dream,” he says. “This is almost science fiction, because we’re way, way far from it . . . [but] this is an added tool.”
“Decoding is very primitive,” Kamitani agrees, “but I think there are a lot of potentials.” One way to get a more complete picture of the dream is to increase the complexity of the decoder, he notes. In this first study, for example, the researchers focused on nouns representing visual objects, but going forward, Kamitani says he hopes to include other concepts, like verbs. “By analyzing that aspect we may be able to add some action aspects in the dream.”
Furthermore, researchers might not have to fully interpret the dream themselves to benefit from the new decoder. Instead, the clues gleaned from the fMRI images could simply be used to jog participants’ memories. “We know that dreams—even the most vivid dreams we remember, [like] nightmares or lucid dreams—are really fragile memories,” says Antonio Zadra, an experimental psychologist at the University of Montreal. “Unless you wrote it down or told it to someone in the morning, usually even before lunch, that memory will start fading. And by night, you might just have the essence.”
Unfortunately, that fading memory has been the only resource available to researchers studying dreams. Now, with a little bit of supplemental information, they may be able to help participants recall dreams more precisely. “The subjective reports are never complete,” Kamitani says. “By giving the subject what we reconstructed, they may remember something more.”
At an even more basic level, the decoder could help scientists understand what’s happening in the brain during dreaming. “To create this whole virtual world out of nothing—with no visual input or auditory input—is quite fascinating and undoubtedly very complex,” Zadra says. “This research will certainly help us better understand what brain areas are doing what, to even allow for this to happen.”
In Kamitani’s study, for example, the researchers found that areas of higher-level visual processing, which respond to more abstract features, were more useful for interpreting dream content than lower-level processing areas. This makes sense, given that those lower areas of the visual cortex are more closely connected to the direct input from the retina. But, Kamitani notes, this could simply have to do with the way the study was designed. “We didn’t train the decoder with low-level visual features,” such as shape or contrast, he says. “We just used the semantic category information.”
Indeed, given the richness of the dreaming experience, such visual qualities may well be encoded during sleep. “Your brain creates a whole virtual world for you when you are dreaming, complete with characters, settings, interactions, dialogues,” says Zadra. “But you’re actually in your bed asleep; there is no visual input. So your brain is literally creating this virtual world from A to Z.”
Correction (January 3, 2013): This story has been updated to correctly reflect the source of the pull quote as neuroscientist Yukiyasu Kamitani of the ATR Computational Neuroscience Laboratories in Kyoto, Japan, not his postdoc Masako Tamaki. The Scientist regrets the error.