A Face to Remember

Once dominated by correlational studies, face-perception research is moving into the realm of experimentation—and gaining tremendous insight.

November 1, 2014


Ron Blackwell reclined in a hospital bed at Stanford University, bandages from his recent brain surgery wrapped snugly around his head. Doctors had just removed a piece of his cranium, implanted electrodes on the surface of his brain, and closed him back up. He waited for a seizure.

Blackwell, 49, had his first seizure when he was 11 and had experienced similar incidents periodically thereafter. But after he turned 40, the seizures became more frequent. He wanted to feel secure when caring for his two young children instead of worrying that he might have a seizure while bringing them to the park. So in 2012 he gave doctors the OK to implant the electrodes, which were designed to pinpoint the epicenter of his seizures as they came and help determine whether he’d be a good candidate for surgery to remove the culprit tissue.

Once such electrodes are in place, the wait for a seizure can take days, so to pass the time Blackwell participated in some cognitive and perceptual tests. On occasion, researchers visited him and presented him with tasks to perform on a computer, such as clicking a button when a particular object appeared on a screen. After days with no sign of a seizure, Josef Parvizi, Blackwell’s neurologist, asked for permission to stimulate the electrodes on Blackwell’s brain as part of an experiment. Blackwell agreed.

Parvizi instructed Blackwell to look at objects in the room: a Mylar balloon, a TV screen. The doctor clicked a button, but nothing happened. “Then he said, ‘Look at my face,’ and he hit the button, and it was the most bizarre thing,” Blackwell recalls. In that brief moment when Parvizi zapped the electrodes, Blackwell saw the doctor’s face “metamorphose.”

“His face just sagged. His eyes drooped; his nose drooped and just shifted. It was very cartoonish,” he says. The face looked somewhat familiar, but it was no longer Parvizi’s—until the doctor stopped stimulating the electrodes on Blackwell’s brain. As soon as the stimulus was over, the familiar face of the neurologist returned.

Wanting to learn more, Parvizi asked Blackwell some questions: Could he still tell that Parvizi was a male? “Oh, yeah,” Blackwell replied. “How did you know?” asked Parvizi. “Because you’re still wearing a suit and tie. Only your face changed. Everything else was the same.” And with that, Blackwell gave to science the best experimental evidence yet that humans have what researchers call a face-selective area, or “face patch” for short—a region of the brain specialized for the perception of faces.1

Previous human studies had relied on imaging techniques to link face perception to face patches by association; none of them showed that disrupting the face patch could alter face perception. But more recent research provides evidence that face patches, generally recognized as three chunks of the temporal lobe, are critical to the everyday observation of faces. Blackwell is now just one of about a dozen patients in whom Parvizi and Stanford colleague Kalanit Grill-Spector have demonstrated the electrical disruption of face perception.

“It’s just so striking how specific the perceptual distortion is to the face,” says Grill-Spector. “This is why it’s a very important discovery, because it shows the specificity of the cortical region to processing faces.”

Neuroscientists are now capitalizing on this specificity to unpack the fundamental computational processes that go into identifying a face, a feat most of us perform without thought or effort. The face patches—and even individual neurons in them—appear to do different jobs, such as analyzing the features of the face, responding to how the head is tilted, and, ultimately, determining someone’s identity.

“We meet thousands of individuals . . . and we can differentiate them, we can recognize them in different conditions: when there are shadows on them, when the face is turned at different angles, if they get a haircut,” says Marlene Behrmann, a cognitive neuroscientist at Carnegie Mellon University. “It’s an incredibly robust human ability.” 

Natural experiments

The neural stimulation Parvizi gave to Blackwell caused momentary prosopagnosia, or face blindness—an inability to discern the identity of a person by his face. There are those who have had prosopagnosia all their lives. Author and neurologist Oliver Sacks famously documented his own condition in his writings. In a 2010 New Yorker article, Sacks wrote that he often cannot recognize people he knows well; he has even bumped into his own reflection, thinking it was another bearded man walking toward him.2

Given face perception’s importance and ubiquity in social interactions, prosopagnosia is a stressful deficiency to live with. “I avoid conferences, parties, and large gatherings as much as I can, knowing that they will lead to anxiety and embarrassing situations, since I not only fail to recognize people that I know well but tend also to greet strangers as old friends,” Sacks wrote. “I am much better at recognizing my neighbors’ dogs (they have characteristic shapes and colors) than my neighbors themselves.”

To investigate the roots of such face blindness, Behrmann is studying the brains of prosopagnosia patients. In one study, she asked such patients to view faces while they sat in a functional magnetic resonance imaging (fMRI) scanner. Like people without the perceptual deficiency, the patients showed normal activity in what Behrmann refers to as the “core” patches involved in face perception: the fusiform face area (FFA), the occipital face area (OFA), and the superior temporal sulcus (STS). “We were perplexed,” says Behrmann. “We knew [the patients’ brains] were impaired, but we couldn’t find where.”

So Behrmann’s team, using an MRI approach called diffusion tensor imaging, embarked on a high-resolution sleuthing trip through the hills and valleys of the brain’s morphology to find the differences. The researchers found a reduction in two white-matter tracts—the myelin-wrapped axons connecting neurons of different brain regions—between these core face patches in the back of the brain and “extended” face-processing areas toward the front of the brain.3 (See illustration below.) The frayed cabling “suggests it’s a failure to propagate signals from the core to the extended regions,” says Behrmann. (For more information on the technique of diffusion tensor imaging, see “White’s the Matter,” The Scientist, November 2014.)

It appears that the prosopagnosics’ face patches operate well enough to determine that a face is a face, but the poor connection to the extended regions prevents them from establishing that person’s identity.

“Intriguingly, the magnitude of the compromise [in white matter] was correlated with the magnitude of the prosopagnosia,” says Behrmann. “It was a key piece of the puzzle.”

More recently, the group has found, through fMRI comparisons between people with and without prosopagnosia, that activity in the extended regions contacted by these white matter tracts is indeed reduced during face perception.4

Also using fMRI, Ida Gobbini and James Haxby of Dartmouth College have shown that the extended regions are much more active when viewing familiar faces than unfamiliar ones.5 “My interpretation is, when you see someone familiar, you retrieve the ‘person knowledge’ of that person,” says Gobbini. By recruiting extended brain regions such as the precuneus or amygdala, people may be able to pull together the memories and emotions that go along with seeing someone’s face.

“Something [cognitive neuroscience] is working really hard on is trying to understand the relative contribution of these different areas,” Behrmann says.

Manipulating humans

PATCHWORK: In the 1990s, researchers published data identifying a face “patch,” dubbed the fusiform face area (FFA), which lit up more in response to faces than to other objects. Soon, other researchers identified two additional face patches, the occipital face area (OFA) and the superior temporal sulcus (STS). Together, the FFA, OFA, and STS comprise what some call the core regions of face perception.
In September, Stanford’s Parvizi and Grill-Spector published data from electrode-stimulation experiments on 10 participants, all of them with medical situations like Blackwell’s who were hoping to find a surgical solution to their maladies and willing to lend their brains for study. Half the patients had electrodes implanted on the fusiform gyrus in their right cerebral hemispheres; the other half, on the same area on the left side of their brains. Although both areas were active during face perception, only when Parvizi “tickled” the right hemisphere face patch with electrical stimulation did the patients report seeing an altered face.6 Intriguingly, some patients also saw faces that weren’t really there. One reported: “The black spot on the top of the TV shows some kind of face expression. It looked like a human face, then disappeared.” Another patient had an experience similar to Blackwell’s, in which the doctor’s face morphed: “It was almost like you were a cat.”

The results support the idea that face perception is lateralized, which scientists had suspected since the first documented cases of prosopagnosia in patients with damage to the right-hemisphere fusiform gyrus. “Only the right side is important for changing conscious perception of faces,” says Parvizi. “We think the left side might be important for retrieving names or anything language-related, but it’s probably not doing the same thing as the right hemisphere.”

Parvizi says additional studies of this sort could help to determine how the patches are connected and what jobs they perform, as well as the precise brain regions where the visual decoding involved in face perception takes place. Intracranial recordings could also help resolve questions about the specificity of the face patches and the role of neighboring cells. Of course, patients who require brain electrode implantation and are willing to participate in such neuroscience studies are few and far between, making it difficult to amass data. To achieve bigger sample sizes, Brad Duchaine of Dartmouth College and David Pitcher of the National Institute of Mental Health have used transcranial magnetic stimulation (TMS) to noninvasively excite brain regions of healthy volunteers. (See “Brain Massage,” The Scientist, November 2014.) A magnetic coil delivers short bursts of electrical stimulation, which interrupts normal brain activity for about 20 minutes. By placing the coil close to a face patch, “we can temporarily make you bad at face perception,” says Pitcher.

Two patches—the OFA and the posterior STS (pSTS)—lie at the surface of the brain. With fMRI, the researchers can determine the precise location of the patches in an individual, then use TMS at a particularly high frequency (called theta burst TMS, or TBS) to scramble the neurons’ normal function. Volunteers then reenter the brain scanner to look at various images as the researchers observe the activity in the patches.

A prevailing model has held that visual information about faces first comes into the OFA from the early visual cortex, then branches out to the other face patches. According to this view, the OFA acts “kind of like a gatekeeper,” says Pitcher. But his latest work shows that, although disrupting the OFA reduces brain activity in the pSTS when the participants view photographs of faces, pSTS activity remains normal when they view videos.7 These results suggest that facial information reaches more than just the OFA initially and does not travel linearly through the patches. “What people have been arguing more and more so, and my paper is one of the first to show it experimentally, is that all the patches are connected to each other,” says Pitcher. The face patches “all get information round about the same time and share that information.” (See illustration above.)

For all that’s been gleaned from the human experiments, there is still much left to be discovered. Pitcher points out that the neuronal connections between patches are still largely unmapped in humans, for instance. Additionally, “we don’t have a good feel for how the division of labor is being set up” among the various face patches, says Duchaine. Such work would require invasiveness so far impossible in humans, he notes, but research on face perception in other animals is beginning to yield clues.

Monkey business

Much of what scientists can only dream of doing in human brains, Doris Tsao and Winrich Freiwald have accomplished in macaques. As a graduate student in Margaret Livingstone’s lab at Harvard, Tsao read about the discovery of the FFA, finding it “astonishing” that there would be a region specialized for faces when there are so many other objects humans have to identify. But the fMRI data that revealed the existence of the FFA couldn’t explain what the cells themselves were doing. To get at function on the cellular level, Tsao says, “it seemed easy to test in a monkey.”

So, in the early 2000s, she teamed up with Freiwald, then a postdoc in Nancy Kanwisher’s lab at MIT. The duo would insert electrodes into the brain region that responded to faces as the monkeys viewed a slide show of a variety of objects and human faces. Freiwald, now at Rockefeller University, vividly recalls those initial trials as the researchers guided electrodes slowly through the brain toward what they call the “middle face patch” while pictures flashed before the monkey. A crackling sound heard over speakers connected to the electrophysiology rig would alert the researchers to a neuron firing. On occasion, a little rumbling would sound and then fade away. Then, just as a face popped up on the screen, they heard a loud “kkkrrrr,” and made a note of it as a cell that “likes faces.”

Then they got another. “Kkkrrr.” And another. “And then we realized, ‘Wow, every time we stuck our electrode into that patch we got face cell after face cell after face cell,’” says Tsao.8 “It was tremendously exciting. It meant we could now have hope to understand the brain’s vocabulary for how the brain codes objects.”

The study also provided strong support for the specificity of face patches. The neurons’ response to the other objects the macaques viewed was far weaker than the activity triggered by faces; at times the cells fell silent altogether. (See “Just for Faces?” at bottom.)

Pretty much everybody you speak to has the total intuition that face recognition is seamless, rapid, effortless.
And yet it’s probably the most difficult problem the visual system has to solve.—Marlene Behrmann,
Carnegie Mellon University

Freiwald and Tsao spent hours Photoshopping faces for new experiments. To see whether individual neurons respond to particular features in the face, they created a simple cartoon in which the parts could change independently of one another—the eyebrows could be removed, the eyes spaced far apart, and so on. They found that within the same face patch, individual neurons were tuned to respond more readily to particular features, such as the eyes or hair, and that many of the cells were especially keen on facial geometry, such as the roundness of the face, the size of the irises, or the space between the eyes.9 “Here you can have neighboring cells, cells separated by microns, doing totally separate things,” says Tsao, who pursued this work as a postdoc with Livingstone and, later, Roger Tootell at Massachusetts General Hospital.
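The logic of that experiment can be sketched in a few lines of code. The toy model below is purely illustrative: the feature names, firing rates, and linear "ramp" tuning are assumptions standing in for the published stimulus set, but they capture how a single simulated cell can track one cartoon parameter while ignoring the others.

```python
import numpy as np

# Hypothetical cartoon-face parameterization: each face is a vector of
# feature values (e.g., eye spacing, iris size, face roundness), each
# normalized to the range [-1, 1].
FEATURES = ["inter_eye_distance", "iris_size", "face_roundness"]

def ramp_cell(preferred_feature, slope):
    """Return a toy 'face cell' whose firing rate ramps linearly with one
    feature value: a simplified stand-in for the ramp-shaped tuning many
    middle-face-patch neurons showed in the macaque experiments."""
    idx = FEATURES.index(preferred_feature)
    def response(face):
        baseline = 5.0  # spikes/s when the preferred feature is at 0
        return max(0.0, baseline + slope * face[idx])
    return response

# A cell keyed to eye spacing, indifferent to the other features
cell = ramp_cell("inter_eye_distance", slope=10.0)

narrow_eyes = np.array([-1.0, 0.0, 0.0])
wide_eyes = np.array([+1.0, 0.0, 0.0])
round_face = np.array([0.0, 0.0, +1.0])

print(cell(narrow_eyes))  # low rate: eyes close together
print(cell(wide_eyes))    # high rate: eyes far apart
print(cell(round_face))   # baseline: roundness doesn't drive this cell
```

Probing such a cell with cartoons that vary along each parameter in turn is, in miniature, how tuning curves like those in the macaque study are mapped.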

Jim DiCarlo and his colleagues at MIT have used optogenetics to actually turn off some of those neurons and see what happens. In a recent study demonstrating the technique, they asked what would happen if they silenced the neurons in and around a particular face patch in macaques that had been trained to discriminate the gender of human faces. “The field’s hypothesis would be that silencing the face patch should produce deficits [in discrimination], and silencing other tissue should affect [perception of nonface] objects,” says DiCarlo. Sure enough, with suppressed face-patch activity, the monkeys were less able to distinguish men from women. Inhibition of neighboring regions, on the other hand, had no such effect.10

DiCarlo says this study, presented by lead author Arash Afraz at the Vision Sciences Society meeting earlier this year, is just the beginning of what optogenetics can bring to the study of object and face perception. DiCarlo’s group is now conducting extensive face and object discrimination tests, with plans to silence bits of neural tissue in one or more brain regions.

Tsao and Freiwald’s studies have also supported the idea that faces are processed across a network of patches. After working with the middle face patch in macaques, the researchers identified a number of other regions active during face perception. They found that these regions are tightly coupled anatomically; one patch communicates with multiple other patches, each of which appears to perform a distinct task. In one region, for instance, the cells responded only to faces in particular orientations—say, profile or straight-on. A different patch might respond to just one individual face, but will do so regardless of the face’s orientation.
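The distinction between those two kinds of patches can be caricatured with two simulated cells (a toy sketch; the rates and labels here are invented for illustration, not recorded data):

```python
# Two idealized face cells probed with the same faces at several head
# orientations: one cares about the view, the other about the individual.
VIEWS = ["left_profile", "straight_on", "right_profile"]
IDENTITIES = ["face_A", "face_B"]

def view_tuned(view, identity):
    """Fires for any face, but only at its preferred head orientation."""
    return 20.0 if view == "straight_on" else 1.0

def identity_tuned(view, identity):
    """Fires for one individual no matter how the head is turned."""
    return 20.0 if identity == "face_A" else 1.0

for view in VIEWS:
    for identity in IDENTITIES:
        print(f"{view:>13} {identity}: "
              f"view-tuned={view_tuned(view, identity):4.1f}  "
              f"identity-tuned={identity_tuned(view, identity):4.1f}")
```

Reading the two columns of this little table side by side makes the division of labor concrete: one response profile changes with orientation, the other with identity.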

Many consider Tsao and Freiwald’s work the best evidence to date that face perception operates like an orchestra, with units cooperating, communicating, and building upon one another to provide a harmonious picture of facial identity. “I’ve learned more from one of their papers than from 10 to 20 human papers because you can get in there and record from single neurons,” says Duchaine. “They get so much interesting evidence out of their recordings, it blows me away.”

Points of view

Comprehending how the brain computes faces has obvious implications for understanding prosopagnosia. Working with Behrmann, Adrian Nestor of the University of Toronto Scarborough aims to build a face from the fMRI data of a viewer. The researchers have built an algorithm that attempts to link the objective pictorial qualities of a face with the corresponding neural code. Then, starting with the neural code, the software can work backward to construct the face. By analyzing the neural codes of healthy participants, researchers could uncover new insights into the computation of face perception, and the scans of prosopagnosia patients may prove even more interesting.
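The idea behind such reconstruction can be illustrated with a minimal linear-decoding sketch. It assumes, as such approaches often do, that voxel patterns relate roughly linearly to a handful of facial appearance coefficients; every dimension and data point below is simulated, not drawn from the actual study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: each face is summarized by a few appearance
# coefficients (e.g., scores on "eigenface"-style components), and each
# viewing trial yields one pattern across fMRI voxels.
n_train, n_coeffs, n_voxels = 200, 5, 50

# Simulated ground truth: voxel patterns are a noisy linear image of the
# face coefficients. (The real brain-to-face mapping is unknown; a linear
# mapping is the modeling assumption behind this style of reconstruction.)
W_true = rng.normal(size=(n_coeffs, n_voxels))
faces = rng.normal(size=(n_train, n_coeffs))  # training faces
brain = faces @ W_true + 0.1 * rng.normal(size=(n_train, n_voxels))

# Step 1: learn the brain -> face mapping from training pairs (least squares)
W_decode, *_ = np.linalg.lstsq(brain, faces, rcond=None)

# Step 2: given only a new brain pattern, work backward to the face
new_face = rng.normal(size=(1, n_coeffs))
new_brain = new_face @ W_true + 0.1 * rng.normal(size=(1, n_voxels))
reconstructed = new_brain @ W_decode

print(np.round(new_face, 2))       # the coefficients of the viewed face
print(np.round(reconstructed, 2))  # should approximate those coefficients
```

In a real experiment the recovered coefficients would then be rendered back into an image; comparing reconstructions from healthy and prosopagnosic scans is what makes the approach so revealing.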

“If we can approximate the face [seen by] prosopagnosics, we can finally not only theorize and hypothesize as to why they cannot recognize faces at the individual level, we can see what they see,” says Nestor. “That’s a very powerful demonstration of what’s wrong—what’s not functioning at the neural level.”

The early results show that the reconstructed faces formed by the data from congenital prosopagnosics are remarkably alike. “The neural patterns are so similar,” Nestor says. “We’re still trying to figure out why.” In a person without face blindness, on the other hand, the neural responses to viewing two different faces would be measurably different.

Faces are such potent stimuli to get us into the emotional and social brain in ways that escape cognitive control in the first pass.—Winrich Freiwald,
Rockefeller University

For Freiwald and many others, face perception also offers a window into the mechanism underlying our nature as social beings. “Over the last [few] years, I realized that faces are such potent stimuli to get us into the emotional and social brain in ways that escape cognitive control in the first pass,” he says. Take, for instance, the way humans “ooh” and “aah” and feel warmth for the face of a baby animal. “We’re just showing pixels of colors,” says Freiwald, “but by the way they’re arranged we’re getting into the emotional brain.”

Could this social nature be at the heart of how we so easily identify faces—three-dimensional objects that follow the same basic pattern, yet carry so much significance individually? “The face is what one goes by, generally,” Alice tells Humpty Dumpty in Lewis Carroll’s Through the Looking Glass. Yet, as Humpty Dumpty rightly responds, “Your face is the same as everybody has—the two eyes, so . . . nose in the middle, mouth under.”

“That’s exactly why this is such an interesting domain to be in,” says Behrmann. “Pretty much everybody you speak to has the total intuition that face recognition is seamless, rapid, effortless. And yet . . . it’s probably the most difficult problem the visual system has to solve. There’s a real disconnect between our introspection and the nature of the computation.”


Just for Faces?

While plenty of imaging data support the involvement of several brain regions in perceiving faces, there remains considerable debate about each area’s precise function, especially regarding whether the neurons are exclusively devoted to facial recognition. Isabel Gauthier of Vanderbilt University makes the argument that the patches may not be face-specific at all, but might simply reflect our expertise in facial recognition.

She and her colleagues have asked experts in recognizing particular objects, say, car enthusiasts or bird watchers, to view faces, cars, birds, and other items while in an fMRI scanner. “And over and over again, we find the same relationship: the response to nonface objects in the fusiform face area [FFA] predicts your performance in recognizing objects,” she says. In other words, experts use the FFA to discriminate the objects of their expertise, just as they do when perceiving faces.1 “It’s not surprising faces would activate this region, because we’re experts at [recognizing faces],” Gauthier says.

Other researchers have countered Gauthier’s argument with their own data. Earlier this year, Brad Duchaine of Dartmouth College and his colleagues showed that prosopagnosics can still become experts in identifying particular objects when they’re trained to do so.2 “They were able to learn [the task] even though they don’t have [a fully functioning] FFA,” says David Pitcher, a research fellow at the National Institute of Mental Health who was a coauthor of the study. If the FFA is an expertise brain region, the prosopagnosics should have failed to learn, just as they cannot be trained to recognize faces.

But others have also raised doubts about the idea that face patches are solely responsible for processing faces. It’s also possible that neurons neighboring the face patches contribute to detecting someone’s identity. To get to the bottom of face perception, Jim DiCarlo and Arash Afraz of MIT and other researchers are moving the field from one of observation and correlation to one of experimentation and causation. “If we can do causality studies, I think we will find [the so-called “face”] neurons are doing other things not face-related,” he says. So far though, their data seem to support the idea that face patches are specialized for recognizing faces, DiCarlo notes. “That hypothesis is too simple, but we don’t have evidence to refute it. So it might be true.”

  1. I. Gauthier et al., “Expertise for cars and birds recruits brain areas involved in face recognition,” Nat Neurosci, 3:191-97, 2000.
  2. C. Rezlescu et al., “Normal acquisition of expertise with greebles in two cases of acquired prosopagnosia,” PNAS, 111:5123-28, 2014.


  1. J. Parvizi et al., “Electrical stimulation of human fusiform face-selective regions distorts face perception,” J Neurosci, 32:14915-20, 2012.
  2. O. Sacks, “Face-blind,” The New Yorker, August 30, 2010.
  3. C. Thomas et al., “Reduced structural connectivity in ventral visual cortex in congenital prosopagnosia,” Nat Neurosci, 12:29-31, 2008.
  4. G. Avidan et al., “Selective dissociation between core and extended regions of the face processing network in congenital prosopagnosia,” Cereb Cortex, doi:10.1093/cercor/bht007, 2013.
  5. M.I. Gobbini, J.V. Haxby, “Neural systems for recognition of familiar faces,” Neuropsychologia, 45:32-41, 2007.
  6. V. Rangarajan et al., “Electrical stimulation of the left and right human fusiform gyrus causes different effects in conscious face perception,” J Neurosci, 34:12828-36, 2014.
  7. D. Pitcher et al., “Combined TMS and fMRI reveal dissociable cortical pathways for dynamic and static face perception,” Curr Biol, 22:2066-70, 2014.
  8. D.Y. Tsao et al., “A cortical region consisting entirely of face-selective cells,” Science, 311:670-74, 2006.
  9. W.A. Freiwald et al., “A face feature space in the macaque temporal lobe,” Nat Neurosci, 12:1187-96, 2009.
  10. A. Afraz et al., “Optogenetic and pharmacological suppression of face-selective neurons reveal their causal role in face discrimination behavior,” J Vision, 14:600, 2014.

