Like humans, dogs use the left sides of their brains to process words and the right sides to process intonation. In a recent study, praise only activated dogs’ reward center in the brain when both the words and the intonation were positive. The results, published this week (August 30) in Science, suggest that the neural mechanisms for processing language are not unique to humans and evolved earlier than previously believed.
“The human brain not only separately analyzes what we say and how we say it, but also integrates the two types of information, to arrive at a unified meaning,” study coauthor Attila Andics of Eötvös Loránd University in Budapest said in a press release. “Our findings suggest that dogs can also do all that, and they use very similar brain mechanisms.”
Andics and colleagues trained 13 dogs to lie still in an fMRI scanner as a trainer spoke to them. The trainer would praise them with positive intonation (e.g., “well done!” in Hungarian), praise them with neutral intonation, or speak words that were meaningless to the dogs (e.g., “as if”) with either positive or neutral intonation. The scans showed that the dogs used their left hemispheres to process meaningful words and their right hemispheres to distinguish positive from neutral tones. This finding is consistent with previous work, which found that when dogs hear emotional speech-like sounds, they tend to turn to the left—suggesting they’re using the right sides of their brains—and when they hear verbal commands from a robot, they turn to the right.
Andics and colleagues also found that the dogs’ reward center responded to praise, but only to genuine praise delivered with a positive intonation. “It shows that for dogs, a nice praise can very well work as a reward, but it works best if both words and intonation match,” Andics said in the release. “So dogs not only tell apart what we say and how we say it, but they can also combine the two, for a correct interpretation of what those words really meant.”
“It’s an important study that shows that basic aspects of speech perception can be shared with quite distant relatives,” Tecumseh Fitch, a cognitive biologist at the University of Vienna, who was not involved in the work, told Science.