In the mid-1980s, György Buzsáki was trying to get inside rats’ heads. Working at the University of California, San Diego, he would anesthetize each animal with ether and hypothermia, cut through its scalp, and drill holes in its skull. Carefully, he’d screw 16 gold-plated stainless steel electrodes into the rat’s brain. When he was done with the surgery, these tiny pieces of metal—just 0.5 mm in diameter—allowed him to measure voltage changes from individual neurons deep in the brain’s folds, all while the rodent was awake and moving around. He could listen to the cells fire action potentials as the animal explored its environment, learning and remembering what it encountered (J Neurosci, 8:4007-26, 1988).
In those days, recording from two cells simultaneously was the norm. The 16-site recording in Buzsáki’s 1988 study “was the largest ever in a rat,” he says. Nowadays, scientists can measure voltage changes from 1,000 neurons at the same time with silicon multielectrode arrays. But the basic techniques of recording electrical activity with electrodes inside the brain (electrophysiology) or from the scalp (electroencephalography, or EEG) are still workhorses of neural imaging labs. “The new tools don’t replace the old ones,” says Jessica Cardin, a neuroscientist at the Yale School of Medicine. “They add new layers of information.”
Another decades-old neuroscientific technique that remains popular today is patch clamping. Developed in the late 1970s and early 1980s, it can detect changes in the electric potential of individual cells, or even single ion channels. With a tiny glass pipette suctioned against the cell’s membrane, researchers can make a small tear, sealed by the pipette tip, and detect voltage changes inside the cell. With some improvements, the patch clamp, like electrophysiology and EEG, has remained a regular part of the neuroscientist’s tool kit. Recently, researchers had a robot carry out the process (Nat Methods, 9:585-87, 2012).
However, patch clamping is invasive, records only from single cells, and can’t measure over long periods because the process can interfere with the cell’s normal functioning. Instead, some scientists use a less invasive, more durable, and higher-throughput approach first described half a century ago: calcium imaging (J Cell Comp Physiol, 59:223-39, 1962). When an action potential reaches an axon terminal, calcium ions rush inward. To detect when this happens, scientists use molecules that fluoresce when they bind calcium.
Initial efforts were clunky, however, and the technique’s popularity surged only after calcium sensors and microscopy techniques improved in the 1990s and early 2000s. Now, one of the most popular sensors is the genetically encoded calcium indicator (GECI) GCaMP6, a fusion of green fluorescent protein and the calcium-binding protein calmodulin (Nature, 499:295–300, 2013). The proteins glow like lightning bugs when a neuron fires. By thinning an area of an animal’s skull to create a “cranial window,” scientists can watch when and where action potentials occur in the outer layers of the cortex. “You can take a mouse, alive and functional, with the head fixed in one spot. But the animal still can run on a treadmill,” says David Prince, a neurologist at the Stanford University School of Medicine. “You can detect fluorescent signals in his brain . . . and try to relate those changes to changes in his behavior that are simultaneously occurring.”
But calcium imaging has its own drawbacks. It can resolve changes in electrical activity only on 50- to 100-millisecond time scales, while an action potential lasts just 1 millisecond. Patch clamping and electrode recordings, on the other hand, can resolve individual action potentials.
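The blurring is easy to see in a toy simulation. The sketch below is an illustration of the principle, not a model from any study cited here; the indicator rise and decay times are assumed, loosely GCaMP-like values. Convolving a spike train with such a slow kernel merges closely spaced spikes into one long transient.

```python
import numpy as np

# Toy illustration: why calcium imaging smooths over individual spikes.
# The fluorescence trace is modeled as a spike train convolved with an
# indicator kernel whose rise (~50 ms) and decay (~100 ms) times -- both
# assumed values -- are far slower than the ~1 ms action potential.

dt = 0.001                        # 1 ms resolution
t = np.arange(0.0, 1.0, dt)       # one second of simulated recording

spikes = np.zeros_like(t)
spikes[[200, 205, 210]] = 1.0     # three action potentials, 5 ms apart

tau_rise, tau_decay = 0.05, 0.10  # assumed indicator time constants (s)
k = np.arange(0.0, 0.5, dt)
kernel = (1 - np.exp(-k / tau_rise)) * np.exp(-k / tau_decay)

fluor = np.convolve(spikes, kernel)[: len(t)]

# Width of the transient at half its maximum: three 1-ms spikes blur
# into a single fluorescence event lasting well over 100 ms.
fwhm = np.sum(fluor > fluor.max() / 2) * dt
print(f"transient width: {fwhm * 1000:.0f} ms")
```

At a typical imaging frame rate, those three spikes would be indistinguishable from a single stronger burst, which is exactly the resolution limit the electrode methods avoid.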
To complement ongoing advances in eavesdropping on the brain’s neural chatter, scientists have developed new techniques for visualizing the brain’s structure. For example, last year, MIT synthetic neurobiologist Ed Boyden and his colleagues introduced expansion microscopy, which allows a brain tissue sample to be expanded to as much as 100 times its original volume while preserving the arrangement of the molecules. With the addition of vibrant color probes, the team imaged nanoscale features of cells and synapses in the mouse hippocampus (Science, 347:543-48, 2015). Understanding the anatomy of the brain better will help reveal how it works, Boyden says. “Anatomy’s the kind of thing where if you have enough information about something, it actually gives you function.”
Manipulating the brain can also shed light on how it works. In the late 1960s, Jose Delgado of Yale University placed electrodes in the brain of a chimpanzee named Paddy to alter the animal’s emotional behavior. The transmitter produced an unpleasant sensation in response to a specific pattern of activity in Paddy’s amygdala. After six days of repeated stimulation, she grew depressed, and the activity pattern decreased by 99 percent. Paddy recovered after two weeks, but when Delgado repeated the experiment, she became melancholic again. “What he did was incredible,” says Buzsáki, now at New York University, because Delgado could listen to and precisely manipulate brain waves in an era when vacuum tubes were considered high tech.
Over the past 50 years, with ever more compact and precise electrodes, researchers have continued to use electromagnetic stimulation to manipulate neural activity with the goal of understanding brain function. Transcranial magnetic stimulation, which applies a magnetic field noninvasively from outside the head, has been used to reveal the physiological underpinnings of neurological diseases and identify potential treatments in animal models, for example. And following in Delgado’s footsteps, Buzsáki and his colleagues have obliterated ripples of neural activity responsible for consolidating the day’s memories in sleeping rats (Nat Neurosci, 12:1222-23, 2009). “The animal remembers nothing the next day, even though it had perfect sleep,” he says.
In 2005, a powerful alternative to electrodes and magnetic fields emerged: controlling neurons with light. Boyden, along with Stanford’s Karl Deisseroth and their colleagues, inserted algal ion channels that respond to light into mammalian neural cells. With photons, they could depolarize a cultured neuron’s membrane at will (Nat Neurosci, 8:1263-68, 2005), and researchers soon used the tool in vivo. Named science’s Method of the Year by Nature Methods in 2010, optogenetics allows researchers to target specific cell populations. The technique has been used in mice to alter (Science, 341:387-91, 2013) or trigger a memory (Nature, 484:381-85, 2012), shut down epileptic seizures (Nat Commun, 4:1376, 2013), and inhibit aggressive behavior (Nature, 470:221-26, 2011).
Another technique relies on chemistry, rather than light. Scientists deliver G protein-coupled receptors, created through directed evolution, to cells in an animal’s brain. Then, by injecting a synthetic ligand into the body, scientists can trigger neuron firing or silencing (PNAS, 104:5163-68, 2007). Last year, a team at the University of Maryland used this approach—called designer receptors exclusively activated by designer drugs (DREADDs)—to disrupt a mouse’s ability to learn to avoid an aggressive male (J Neurosci, 35:10773-85, 2015).
This expanding and diverse tool set is crucial for what neuroscientists seek to accomplish. “My hope is that we can actually solve the brain,” Boyden says. “And to do that, we have to have a full map, we have to be able to watch it in action, and we have to be able to control it.”
Even though researchers now have tools to record from hundreds of neurons simultaneously and to manipulate small populations of cells in rodent brains, the human brain remains largely a mystery. With some 86 billion neurons linked by 100 trillion synapses, the three-pound organ remains a scientific puzzle of epic proportions.
Magnetic resonance imaging (MRI), used for the first time on a human in 1977, enabled researchers to noninvasively image the structure of a person’s brain. For taking pictures of the brain, MRI was easier to perform than its predecessor, positron emission tomography (PET), which required intravenously injecting radionuclide-labeled metabolites to assess brain activity. Marcus Raichle, a neurologist at Washington University in St. Louis who helped develop PET in the 1970s, recalls his reaction: “My God, if you had an MR[I] scanner, you were in the brain-mapping business.”
But what catapulted MRI into a mainstay of neural imaging was the discovery of functional MRI (fMRI) scanning. In 1990, Seiji Ogawa, then at AT&T Bell Laboratories, and colleagues showed that deoxygenated hemoglobin perturbs the local magnetic field, and that it could therefore serve as an intrinsic contrast agent to illuminate changes in the brain (PNAS, 87:9868-72, 1990). As a subject in an fMRI scanner performs a simple task, researchers can use the increased flow of oxygenated blood in different regions of the brain as a proxy for neural activity.
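This proxy relationship is routinely made explicit in fMRI analysis. The sketch below is a standard modeling convention rather than anything from Ogawa’s paper: neural activity is convolved with a hemodynamic response function (here the widely used double-gamma shape, with common default parameters) so that a one-second burst of firing appears as a blood-flow change peaking several seconds later.

```python
import numpy as np
from math import gamma

def hrf(t, a1=6.0, a2=16.0, ratio=1 / 6.0):
    """Double-gamma hemodynamic response (time in seconds);
    a1, a2, and ratio are commonly used default parameters."""
    g = lambda t, a: t ** (a - 1) * np.exp(-t) / gamma(a)
    return g(t, a1) - ratio * g(t, a2)

dt = 0.1
t = np.arange(0.0, 30.0, dt)

activity = np.zeros_like(t)
activity[10:20] = 1.0            # a 1-second burst of firing starting at t = 1 s

# The BOLD signal is modeled as activity convolved with the slow HRF.
bold = np.convolve(activity, hrf(t))[: len(t)] * dt

peak_time = t[np.argmax(bold)]
print(f"BOLD peak at ~{peak_time:.1f} s")
```

The simulated hemodynamic peak arrives roughly five seconds after the burst, which is why fMRI, for all its spatial reach, trades away the millisecond timing that electrodes provide.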
Another variant of MRI has also made waves: diffusion MRI. This technique traces the axons of neurons via the Brownian motion of water molecules, which diffuse more readily along an axon’s fibrous structure than perpendicular to it. The method traces the trajectories of fibers between gray matter regions, dense with neuronal cell bodies. The National Institutes of Health–funded Human Connectome Project is using this tool and those above to map all the connections in the brain—a figure greater than the number of stars in the sky.
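The directional bias that diffusion MRI exploits is commonly summarized as fractional anisotropy (FA), a standard quantity computed from the eigenvalues of the local diffusion tensor. The snippet below uses made-up eigenvalues purely for illustration: equal values (free, isotropic diffusion) give an FA near 0, while diffusion channeled along a fiber bundle pushes FA toward 1.

```python
import numpy as np

def fractional_anisotropy(evals):
    """Standard FA formula from the three diffusion-tensor eigenvalues."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()                                  # mean diffusivity
    return np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))

# Hypothetical eigenvalues, chosen only to illustrate the two extremes:
print(fractional_anisotropy([1.0, 1.0, 1.0]))     # isotropic voxel: 0.0
print(fractional_anisotropy([1.7, 0.2, 0.2]))     # fiber-aligned voxel: ~0.87
```

Tractography algorithms follow the high-FA direction from voxel to voxel to reconstruct the fiber trajectories the Connectome Project is cataloging.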