If “seeing is believing,” then “hearing is learning” might be the new maxim born from two reports showing that Bluetooth-style listening devices can treat dyslexia. The research also identifies a biological explanation for this language disorder that could lead to its earlier diagnosis, even in toddlers who have not begun to speak or read. The studies were published this week (February 19) in the Journal of Neuroscience and in Proceedings of the National Academy of Sciences last autumn (October 9, 2012).
“These papers are very important because they provide a neural explanation for a large body of research on auditory processing in children with language learning problems, including dyslexia,” said Paula Tallal, a leader in the field for 30 years and co-director of the Center for Molecular and Behavioral Neuroscience at Rutgers University.
Dyslexia is the most prevalent learning disability among children. Not merely an affliction of the visual system that causes children’s eyes to rearrange written words, dyslexia also stems from problems with auditory processing, such as accurately interpreting speech. While their hearing ability is typically normal, dyslexics often struggle with assigning the right sounds to the right letters, a skill known as phonological awareness. For example, they might confuse the words “bad” and “dad” because they misinterpret the “b” sound as a “d,” or vice versa. Further complicating matters, many children with the disorder are easily distracted by ambient noise, which can make it hard to pay attention to a teacher’s lecture.
“If a child is not making sound-to-meaning connections in language, then the response to sound won’t be tuned in the way that it should,” said Nina Kraus, senior author of both studies and director of the Auditory Neuroscience Laboratory at Northwestern University.
To see if removing background noise could enhance language comprehension, Kraus and her colleagues provided 38 dyslexic children, aged 8 to 14 years, with Bluetooth headphones directly linked to a teacher’s microphone and asked the children to wear them to class every day. After an entire school year, parent surveys revealed a positive trend in the children’s attention levels, which paralleled improvements in literacy and phonological awareness.
“We fixed them,” said Kraus. “They went from poor readers to readers who were within a normal range.”
The researchers gained further insights into the biological mechanisms behind these improvements by measuring brain activity in the auditory center of the brainstem, a region networked with a variety of brain areas that help fine-tune the interpretation of spoken language. Measuring the brain activity of 100 dyslexic and non-dyslexic students, the team found that when a “good reader” hears a word, the auditory brainstem consistently produces the same brain signature, which corresponds to a precise pattern of neuronal firing, while in poor readers the signature was less consistent. Looking at the same brain region in the students who received Bluetooth headphones, the researchers saw that the signature became crisper after the year of using the devices.
“These are the first training studies to show that improvements in signal-to-noise directly account for changes in auditory processing in the brain,” said Michael Merzenich, director of the Brain Plasticity Institute and CSO of Posit Science, who was not involved in the research.
Interestingly, the signature was more inconsistent in poor readers when they heard a consonant versus a vowel. “Tricky consonants,” as they are sometimes called, are tougher for people with dyslexia, as the brain has less time to process them—as little as 40 milliseconds—relative to longer vowel sounds. Kraus’s team found that dyslexic students who used the Bluetooth devices had clearer neural representations of consonant sounds; in other words, a more refined auditory brain signature.
The practical applications for these findings range from the bench to the bedside, said R. Holly Fitch, a behavioral neuroscientist at the University of Connecticut, who also did not participate in the studies. “This gives us a venue for creating animal models and an incredible ability to look at dyslexia-risk genes,” she said. Furthermore, focusing on sound comprehension, rather than language, could prove useful in diagnostics as well. “By moving away from higher order language, we can also look much earlier in [human] development,” said Fitch.
Indeed, Kraus’s team is now embarking on a longitudinal study—affectionately called BioTots—which will examine the links between interpreting acoustics and language acquisition in 3-year-olds. And other researchers are planning to look even earlier: April Benasich, director of the Infancy Studies Laboratory at Rutgers University, will use an approach similar to Kraus’s to measure auditory brain responses in babies as young as 4 months.
While measuring auditory brain response currently requires sophisticated and delicate skin electrodes, Kraus feels it will one day be simplified into “an iPod and a headband” for broader use in pediatrics.
“These discoveries have social, educational, and medical implications,” said Kraus, “and I’m motivated by translating the science to fit these purposes.”
J. Hornickel et al., “Assistive listening devices drive neuroplasticity in children with dyslexia,” Proceedings of the National Academy of Sciences, 32:14156–64, 2012.
J. Hornickel, N. Kraus, “Unstable representation of sound: a biological marker of dyslexia,” Journal of Neuroscience, 33:3500–04, 2013.