
Seeing with Sound

Converting sights to sounds reveals that the brains of congenitally blind people respond to certain objects much as the brains of sighted people do.

By | March 10, 2014

Researchers have long recognized that the brains of people blind from a young age compensate for the lack of visual input by putting more emphasis on the other senses. According to a study published last week (March 6) in Current Biology, in which blind participants used an augmented reality system that converts images into sounds, the brains of blind and sighted people sometimes react to similar objects in much the same way, despite vastly different sensory inputs.

The system, which was invented in the early 1990s but is not commonly used today, uses the pitch, timing, and duration of sounds to convey information about an object’s location and dimensions. Scanning the brains of blind and sighted people as they were presented with objects through the system, researchers at the Hebrew University of Jerusalem found that the outline or silhouette of a human body activated the cerebral cortex—and the extrastriate body area in particular—in both groups. In both blind and sighted subjects, human body shapes also elicited activation in the temporal-parietal junction, a brain region that may be involved in determining the intentions of others.
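The article does not publish the system’s exact encoding, but the general idea of this kind of image-to-sound substitution can be sketched in a few lines: scan a grayscale image column by column over time, assign each row a pitch (top rows high, bottom rows low), and let pixel brightness set each tone’s loudness. The function below is a hypothetical, minimal illustration of that scheme; the frequency range, scan duration, and brightness-to-loudness mapping are assumptions, not the published method.

```python
import numpy as np

def image_to_soundscape(image, duration_s=1.0, sample_rate=8000,
                        f_min=200.0, f_max=4000.0):
    """Render a 2-D grayscale image (rows x cols, values in [0, 1])
    as a mono audio signal.

    Hypothetical sketch of a sensory-substitution encoding: the image
    is scanned left to right over `duration_s` seconds, each row is
    assigned a sine-wave frequency (top rows high-pitched), and pixel
    brightness sets that sine's amplitude.
    """
    rows, cols = image.shape
    samples_per_col = int(duration_s * sample_rate / cols)
    # Log-spaced frequencies; the top row gets the highest pitch.
    freqs = np.geomspace(f_max, f_min, rows)
    t = np.arange(samples_per_col) / sample_rate
    # One sine per row, evaluated over one column's time slice.
    tones = np.sin(2 * np.pi * freqs[:, None] * t)  # (rows, samples)
    signal = np.empty(cols * samples_per_col)
    for c in range(cols):
        # Mix the row tones, weighted by this column's pixel brightness.
        signal[c * samples_per_col:(c + 1) * samples_per_col] = image[:, c] @ tones
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal
```

A listener hears a vertical bright bar as a brief burst of all pitches at once, while a horizontal bar sustains a single pitch for the whole scan—which is how spatial layout becomes recoverable from sound alone.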

“The brain, it turns out, is a task machine, not a sensory machine,” coauthor Ella Striem-Amit told ScienceNow. “You get areas that process body shapes with whatever input you give them—the visual cortex doesn’t just process visual information.”

Overall, the augmented reality system allowed blind people to accurately classify 78 percent of the objects presented to them into one of three categories: people, everyday objects, or textured patterns. Moreover, the soundscapes emitted by the system conveyed finer details, such as a person’s posture. “During training, the participants were asked to report the body posture of the people in the images they ‘saw,’ and could verbally describe it quite well, and also mimic it themselves,” Striem-Amit told Wired.

Striem-Amit and her colleague Amir Amedi hope that the system could one day be used more widely to help blind people interpret their environments. Other groups, including Amedi’s lab, are also harnessing sound to convey information, specifically by using music to portray color.
