Isabelle Peretz made me an espresso in her office. This, I would soon realize, was the equivalent of giving a cigarette to a man about to face the firing squad. Peretz is a cognitive neuropsychologist and a professor at Université de Montréal, where she is a founding co-director of the influential International Laboratory for Brain, Music and Sound Research, better known by the acronym BRAMS. It was March 2011, and I’d travelled to her lab on the north side of Mont Royal, the big hill that sits in the middle of Montreal, to determine once and for all whether I was really, scientifically tone deaf.
I wasn’t particularly nervous as I walked from the Metro station up to the former convent that’s now home to BRAMS on what I, as a McGill grad, always considered the other side of the mountain. True, I hadn’t done all that well on the lab’s online test. I’d had to listen to pairs of short melodies and then indicate whether they were the same or different. (I later learned that in half of the pairs, a single note in the second melody was different. If the tests had, say, played two sentences — “This, I would soon realize, was the equivalent of giving a cigarette to a man about to face the firing squad” and “This, I would soon realize, was the equivalent of giving a cigarette to the man about to face the firing squad” — I’d have had no problem noticing that “a man” had become “the man.”) But when I was comparing melodies, I was often flummoxed. By the time the second melody started, I couldn’t remember the first one well enough. I blamed it on my inability to focus. Since there are only two choices each time, a random result is 50 percent; in other words, a completely deaf person would, on average, score 50 on the test. In the block I found the hardest, I managed to pull off a 54. That result brought my average score on the three tests down to 69 percent. I knew that wasn’t technically a pass, but I hoped it was close enough that Peretz might consider me just woefully untrained. A gentleman’s C kinda thing.
The lab is in a beautiful building with hardwood floors and none of the institutional look and feel I expected from a modern university research centre. When I arrived on Monday morning, I met Mihaela Felezeu. Originally from Romania, she has a serene demeanour, which I appreciated under the circumstances. She took me into a large room, sat me down in front of one of the three computers, and gave me a set of headphones. The results were sometimes different in the lab, she explained, because there were fewer distractions than when people took the tests online. This was my chance to redeem myself. But I didn’t, and, despite a complete lack of evidence, I again blamed my wandering mind.
One of the tests that was new to me measured my ability to detect meter, the rhythmic structure of a composition. The task was to decide whether a short segment of music was a waltz or a march, two forms with different time signatures. A waltz is in 3/4 time: a strong beat followed by two lighter ones, oom-pah-pah, oom-pah-pah. A march, on the other hand, is a four-beat pattern in 4/4 time: a drill sergeant calling out “hup-two-three-four, hup-two-three-four.”
I thought this would be easy, but the snippets all sounded like waltzes to me. For another test, I had to listen to five tones and indicate whether the fourth was the same as the others. This seemed a lot easier — at least when the fourth tone was quite different from the others. But I realized I probably wasn’t doing so well when the fourth tone was only slightly different.
As if the listening session weren’t embarrassing enough, Felezeu wanted me to sing. First, she recorded me butchering “Happy Birthday.” Then, piling on the mortification, she wanted another rendition of the song, but with la la la instead of the lyrics. Finally, I had to sing aaah from low to high to low again. I sure was glad when it was over.
Fortunately, the next set of tests was a better fit with my skills. One was a spatial exam, which started off easy and then became tougher and tougher. Called the Matrix Test, it assesses visual perception and problem-solving ability by presenting unfinished geometric patterns and asking the subject to complete them with one of five possible solutions. Felezeu then recited a string of numbers, which I had to repeat backwards. The strings became longer and longer as we went, but I could tell by the look on her face that I was acing this one. At least, I think that’s what her face was indicating.
After two hours of testing, Peretz invited me into her office and introduced me to researcher Nathalie Gosselin. The three of us chatted about singing and music as Peretz worked her magic with her espresso machine. Then she said, “So, the verdict . . .”
By this point, my early morning confidence was beginning to melt. “Oh, dear.”
“No, I’m kidding. The coffee first . . .” And then she giggled, though not unkindly. She has a charming, gentle laugh.
“Like in Hitchcock,” Gosselin said, “we create tension with music.”
Of course, I didn’t know Peretz shared my love of a good espresso when I first contacted her in 2011. I just knew she was a pioneer — the pioneer, really — in tone deafness research, though tone deaf is not a term she uses, preferring the one she coined: congenital amusia.
Born in 1956, Peretz grew up in Brussels, where she studied classical guitar for eleven years. Eventually she figured she’d be better at science than music, though she kept her guitar. A few years ago, she began taking singing lessons with the goal of joining a choir. She was motivated by the prevailing theory that singing with other people releases endorphins. She thought she needed lessons first because she wasn’t confident in her abilities; her experience was limited to singing to her children when they were small.
When I saw her at the 2013 Society of Music Perception and Cognition conference, a biennial gathering of the world’s leading music researchers, which just happened to be in Toronto, I invited her to join me for an espresso at a nearby café. She said the lessons were really making a difference to her mood. Several months later, though, she told me that after a year of lessons, she had decided to stick to the guitar. But because she believes that music shouldn’t be played or listened to in isolation, that it’s really something to do with other people, she joined a guitar orchestra. “I’m totally immersed and back to what I was doing when I was young,” she said. “That’s a lot of fun.”
In 1985, armed with a Ph.D. in experimental psychology from Université libre de Bruxelles, she landed at Université de Montréal. In 2005, she created BRAMS with Robert Zatorre of McGill. Originally from Buenos Aires, Zatorre did a Ph.D. in experimental psychology at Brown University and then started a post-doc at McGill in 1981. He’s been there ever since. Montreal already had a strong reputation for music research before BRAMS, and the new lab only increased the city’s standing as a leading cluster for the field.
Canada is at the forefront of the rapidly expanding field of music cognition with people such as Glenn Schellenberg, who runs the Music & Cognition Lab at the University of Toronto Mississauga; Sandra Trehub, who studies the development of listening skills in infants and young children at the same school; Laurel Trainor, who researches musical development in children and infants at McMaster University in Hamilton, Ontario; and Caroline Palmer, Canada Research Chair in the cognitive neuroscience of performance at McGill. Outside of Canada, leading researchers include Aniruddh Patel at Tufts University in Medford, Massachusetts, Carol Lynne Krumhansl at Cornell University in New York State, Stefan Koelsch at the University of Bergen in Norway, and Bill Thompson, who was at the University of Toronto until 2007, when he moved to Macquarie University in Sydney, Australia. Other top experts who specialize in amusia research include Gottfried Schlaug at Harvard and Lauren Stewart at Goldsmiths, University of London.
For a long time, many people thought tone deafness, as a neurological condition, was just a myth. But while studying patients who’d lost their ability to appreciate music after stroke or trauma had damaged their brains, Peretz began to wonder if it was possible that some people were born this way. Finding a case proved harder than she had imagined. Using newspaper ads to find subjects (since the Internet was not yet ubiquitous), Peretz tested many people over many years and began to doubt she’d ever discover one.
But then a French-speaking woman in her early forties, a former nurse working on a master’s degree in mental health, answered an ad. She described herself as “musically impaired.” Under social pressure, she’d been in a church choir as a child and, later, in a high school band. But after she married a college music teacher, she became acutely aware that she didn’t like listening to music because it sounded like noise and stressed her out. Testing showed that the woman couldn’t detect variations in pitch smaller than two semitones. Peretz had studied thirty-seven people closely, and here, finally, was the textbook example she’d been looking for: the first documented case of what she would call congenital amusia. In a 2002 paper on the woman, known as Monica in the study, Peretz and her colleagues stated, “From the data presented, we conclude that congenital amusia, or tone-deafness, is not a myth, but a genuine and specific learning disability for music.”
At the time, the prevailing theory, especially in music education circles, was that everyone was musical and could learn to play an instrument. This jibed well with another popular idea: that anyone, with enough practice, could be a high-performing, high-achieving musician. Peretz’s discovery threw a little scientific stink bomb into this egalitarian view. Though it was a shocking finding at the time, even for Peretz, it wasn’t long before researchers all over the world were studying the disorder, or, as they sometimes call it, the deficit.
Regular folks like me sometimes think scientists and academics get their jollies coming up with fancy terminology that only they can understand, but Peretz has good reason to avoid the colloquial name, and not just because “tone deaf” is a dis in other fields, including politics, corporate public relations, and personal relationships. People call bad singers tone deaf even if their problem has nothing to do with an inability to hear pitch. Peretz talks to so many people who mistakenly think they’re tone deaf that her guess is that educated overachievers — her fellow university professors, for example — are so accustomed to succeeding at everything that they think there must be something wrong with them if they are not naturally gifted singers.
Lots of people also tell me they are tone deaf. Though I’ve sent many the link to the BRAMS online test, only one of them failed. Or perhaps more accurately, only one admitted to me that she’d failed. (The news didn’t seem to bother her; she told me in a Facebook message: “It’s okay if I am tone deaf. I gave up on a music career a long time ago.”) Usually, I can squelch a self-misdiagnosis just by asking if the problem is simply that they can’t sing well or if they also have trouble telling pitches apart. For many years, researchers believed that 4 percent of the population was amusic, but recently Peretz’s lab did a broader study and found that, according to the criteria she uses, it’s even rarer: just 2.5 percent.
Although both “tone deaf” and “tin ear” — another common term for pitch impairment, as well as for mocking politicians and others — suggest the trouble is aural, it’s actually in the brain. Peretz also liked the term congenital amusia because it was broad enough to cover pitch perception difficulties, rhythm perception troubles, and other music processing problems. Some people are beat deaf and have trouble with rhythm, though that condition is much rarer than pitch deafness. (But just because someone is a terrible dancer doesn’t mean he’s beat deaf, just as not all bad singers are tone deaf.)
Peretz’s term builds on a much older one. In 1888, German doctor and anatomist August Knoblauch came up with the first cognitive model of music. It wasn’t entirely accurate: he thought music was a left-hemisphere function, for instance, and we now know that the networks that support it are distributed across both hemispheres. But some of his ideas were prescient. He split music disorders into sensory impairments (perception) and motor ones (production), and used amusia to refer to the latter when caused by lesions to the motor centre. He proposed the existence of nine forms of the condition, including “note blindness” (problems reading musical notation) and “note deafness” (trouble comprehending music). Knoblauch’s paper spurred a lot more research into music and the brain in the following years, and amusia came to mean an impairment in music abilities due to brain damage. Today, acquired amusia is the kind you get after a stroke or brain injury; congenital amusia is the kind you’re born with. It’s similar to other learning disabilities, such as dyslexia.
While this area of research seems a strange one for someone who might have become a musician, Peretz doesn’t see it that way. “It allows us to better understand the mechanics behind pitch perception and music processing at the behavioural level, at the cognitive or theoretical level, at the computational level, at the brain level, and at the genetic level,” she said. “What else can you hope for when you are in psychology?”