
Seeing with Sound

Converting sights to sounds reveals that the brains of congenitally blind people respond to various objects much as the brains of sighted people do.

Jef Akst

Jef Akst is managing editor of The Scientist, where she started as an intern in 2009 after receiving a master’s degree from Indiana University in April 2009, where she studied the mating behavior of seahorses.




WIKIMEDIA, AHMET ANIR

Researchers have long recognized that the brains of people blind from a young age compensate for the lack of visual input by putting more emphasis on the other senses. According to a study published last week (March 6) in Current Biology, in which blind participants used an augmented reality system that converts images into sounds, the brains of blind and sighted people sometimes react to similar objects in much the same way, despite vastly different sensory inputs.

The system, which was invented in the early 1990s but is not commonly used today, uses the pitch, timing, and duration of sounds to convey an object’s location and dimensions. Scanning the brains of blind and sighted people presented with different objects via this system, researchers at the Hebrew University of Jerusalem found that in response to the outline or silhouette of a human body, the cerebral cortex—and the extrastriate...

“The brain, it turns out, is a task machine, not a sensory machine,” coauthor Ella Striem-Amit told ScienceNow. “You get areas that process body shapes with whatever input you give them—the visual cortex doesn’t just process visual information.”

Overall, the augmented reality system allowed blind people to accurately classify 78 percent of the objects presented to them into one of three groups: people, everyday objects, or textured patterns. Moreover, the soundscapes emitted by the system also conveyed information about a person’s posture, among other details. “During training, the participants were asked to report the body posture of the people in the images they ‘saw,’ and could verbally describe it quite well, and also mimic it themselves,” Striem-Amit told Wired.

Striem-Amit and her colleague Amir Amedi hope that the system could one day be used more regularly to help blind people interpret their environments. Others, including Amedi’s lab, are also taking advantage of sound to provide information, specifically by using music to portray color.
