Dean Buonomano was among the first neuroscientists to begin to ask how the human brain encodes time. It’s not an easy concept to grasp, Buonomano says, and for that reason many researchers overlook it. “The first field of modern science was probably geometry, which was formalized by Euclid around 300 B.C.,” says the researcher, who is currently working on a popular-science book about time and the brain. “What’s amazing about geometry is that there is absolutely no time involved; it’s the study of things that never change. And there’s a reason why it is one of the first science fields. Science is much easier if you can ignore time.”
Buonomano was in grad school when he became enamored of the question of how we navigate through time. As a graduate student at UT Houston, he began collaborating with Michael Mauk, a fellow researcher there who was studying how the cerebellum learns the timing of its responses.
Mauk and Buonomano modeled the way the cerebellum’s circuits could respond to stimuli and showed that this type of neuronal network can differentiate between time intervals that differ by just tens of milliseconds. Such networks also have the ability to tune the timing of their responses, the two found. “My collaboration with him was absolutely formative for me,” says Buonomano. “Mauk had this very influential notion that time is encoded in the changing patterns of neuronal activity.”
Today, Buonomano’s laboratory at the University of California, Los Angeles, uses computational modeling, in vitro electrophysiology, and human psychophysics experiments to explore how neurons and the brain as a whole perceive and respond to time. Here, Buonomano describes how he performed his first experiments on his little sister, bathed mice with antidandruff shampoo, and hypothesized that timing is so integral to brain function that all of our brain’s circuits keep tabs on the clock.
BUONOMANO BEGINS
Young experimenter. Buonomano was born in Providence, Rhode Island, lived in Hamilton, Ontario, in Canada, and then, when he was seven, moved with his parents to São Paulo, Brazil. His father, a physicist and mathematician, had accepted a faculty position at the State University of Campinas. His younger sister was born two years later. “One of my initial interests in neurobiology was a result of my big-brother experience, of witnessing a young brain develop. I saw my sister go from a baby that’s vulnerable and helpless to a child making sense of the buzzing, sometimes confusing sensory world we live in.”
Buonomano’s sibling became his first experimental subject: “I did little experiments on my sister, which made me appreciate how amazing the brain really is.” He lovingly called her “dummy.” “One day I was out front playing with my friends, and someone called someone else a dummy, and she came running out and asked who called her. I realized then that I should stop calling her that, but also that she didn’t know any better, and that it’s individual patterns and environment that help us make sense of who we are.” After reading a Scientific American article about rapid eye movement (REM) sleep, Buonomano says, he would wait for his sister to fall asleep and then open her eyelids to see if her eyes were indeed moving back and forth.
Good old days. By high school, Buonomano knew he wanted to study neuroscience. “Not that I knew much about the brain, but I liked the idea that our personalities and our ability to learn and feel can be attributable to what’s happening in the brain.” He also developed an interest in computer programming. In 1982, Buonomano’s father brought home one of the first available personal computers, the TRS-80. “It was a golden age of computers. These machines could do cool things, but didn’t until you learned how to program and manipulate them,” says Buonomano. His interests in time and timing were already evident: he programmed his TRS-80 to run reaction-time tests, measuring how long it took his subjects (family and friends) to distinguish a real word from a nonsense word.
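For the curious, here is a minimal modern sketch of that kind of lexical-decision reaction-time test. It is illustrative only: the word list, prompts, and console interaction are assumptions, not a reconstruction of the original TRS-80 program.

```python
# Minimal lexical-decision reaction-time test (illustrative; not the original program).
import random
import time

# Hypothetical stimuli: (item, is_real_word)
TRIALS = [("planet", True), ("blorft", False), ("garden", True), ("mivvle", False)]

def run_test():
    random.shuffle(TRIALS)
    for item, is_real in TRIALS:
        input("Press Enter when ready for the next item...")
        start = time.perf_counter()
        answer = input(f"Is '{item}' a real word? (y/n): ").strip().lower()
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        correct = (answer == "y") == is_real
        print(f"  {'correct' if correct else 'wrong'} in {elapsed_ms:.0f} ms")

if __name__ == "__main__":
    run_test()
```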
Botany, schmotany. Buonomano studied biology at the State University of Campinas. The classroom curriculum emphasized botany, which held little interest for him. “I did the absolute minimum work I had to.” Instead, Buonomano sought out opportunities to do research. His first laboratory experience, as a freshman, was washing mice with antidandruff shampoo. “The principal investigator was testing whether metals in the shampoo were affecting the myelination process of peripheral nerves in mice.” Then, in another lab, he worked with a mouse model of depression and learned pharmacology. He also found time to study computer programming, creating computational models based on the neuroscience papers he was reading.
BUONOMANO BLOOMS
A seminal year. During his senior year of university, Buonomano read what he thought were the latest papers on synaptic plasticity—studies of the neural circuits of Aplysia californica, a marine mollusk known as the sea slug. The largest neurons in the Aplysia nervous system are roughly 100 times larger than those of mice, making it relatively easy to isolate them and to measure their activity. “I thought this was the most cutting-edge research at the time—understanding how synapses change with experience,” says Buonomano. He didn’t realize that four different laboratories studying rodents had reported synaptic long-term potentiation (LTP), the strengthening of the communication signal between two neurons after stimulation—a concept important in learning and memory. The “cutting-edge” papers he was reading in Brazil were actually about a year behind. “Nineteen-eighty-six was a transformative year in neuroscience,” he says. Had he known about the LTP studies in mammals, he likely would have gone to a mouse neuroscience lab, says Buonomano.
Cellular learning. Wanting to do computational work in addition to experiments, Buonomano joined John Byrne’s laboratory at UT Houston as a graduate student. Byrne was combining synaptic plasticity studies and electrophysiology with “very sophisticated computational modeling.” There, Buonomano began to study neuronal plasticity in a part of the central nervous system of Aplysia called the pleural-pedal ganglia. Byrne’s lab already had some evidence that a synapse between a sensory and a motor neuron becomes stronger if two pathways are activated at the same time: when an action potential in the sensory neuron is accompanied by activation of a facilitator pathway that feeds into the synapse, the result is a stronger postsynaptic potential in the motor neuron.
In work published in Science in 1990, Buonomano demonstrated that the synaptic plasticity of the pleural ganglion can be long-lived, lasting up to 24 hours. When he paired activation of the presynaptic, sensory neuron with a facilitator pathway—in this case, neurons that release serotonin—the sensory-to-motor-neuron synapse was still stronger 24 hours later. In other words, the Aplysia neurons were able to form long-term memories.
The element of time. Inspired by his work with Mauk, Buonomano hypothesized that short-term synaptic plasticity was a way for circuits to keep track of time. “All synapses basically exhibit short-term synaptic plasticity, but no one knew why,” he says. Unlike long-term synaptic plasticity, which is thought to be the basis for memory formation, short-term synaptic plasticity “means every time the presynaptic neuron fires, that synapse might get a bit weaker or stronger but just for the next few hundred milliseconds,” explains Buonomano. “It provides a little memory of what happened in the prior 100 or 200 milliseconds.”
After earning his PhD, Buonomano headed to the University of California, San Francisco, to work under Michael Merzenich, who had recently shown that experience can remodel the mammalian cortex, providing evidence for brain plasticity in adults. When Buonomano added short-term synaptic plasticity to a computer model of the cortex that he had created, the neural circuitry was able to discriminate between time intervals. Different sets of neurons responded to shorter or longer intervals of 50, 100, or 200 milliseconds, for example. The work was published in Science in 1995. “The synaptic plasticity component allowed cortical circuits, in a sense, to tell time, which we need for speech or music or many other tasks.”
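A toy calculation makes the intuition concrete. The sketch below is not Buonomano's published model (the time constant and facilitation values are assumptions), but it shows the core idea: a synapse whose facilitation decays over a couple hundred milliseconds responds to a second input with an amplitude that depends on how long ago the first input arrived, giving the circuit a hidden state from which 50-, 100-, and 200-millisecond intervals can be read out.

```python
# Toy paired-pulse short-term plasticity (illustrative parameters, not a published model).
import math

TAU_STP_MS = 120.0       # assumed decay time constant of the plasticity trace
BASE_WEIGHT = 1.0        # baseline synaptic strength (arbitrary units)
FACILITATION_JUMP = 0.6  # assumed facilitation left behind by the first spike

def second_response(interval_ms):
    """Synaptic response to the second of two spikes separated by interval_ms."""
    residual = FACILITATION_JUMP * math.exp(-interval_ms / TAU_STP_MS)
    return BASE_WEIGHT * (1.0 + residual)

for interval in (50, 100, 200):
    print(f"{interval:3d} ms interval -> second response {second_response(interval):.2f}")
```

Because each interval leaves the synapse in a measurably different state, a downstream neuron with a response threshold can, in effect, become an interval detector.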
Learning to hear. As a postdoc in Merzenich’s lab, Buonomano also worked with human subjects. In 1996, he and another postdoc, Beverly Wright, found that people could, with training, improve their ability to recognize specific time intervals between tones, ranging from 50 to 500 milliseconds. “The interesting finding is that when someone improved their ability to recognize a 100-millisecond interval, they were no better at recognizing the other intervals,” says Buonomano. “If we had a clock in our brain that told time, we should get better at recognizing all time intervals. What this suggested is that interval perceptual learning was interval specific.”
Decoding time. In 1998, Buonomano joined the faculty at the University of California, Los Angeles, with a joint appointment in the departments of neurobiology and psychology. He continued to dissect how short-term synaptic plasticity relates to the brain’s ability to process time. In 2000, he built a computational model that simulated the way neurons respond to time intervals. “This paper was the first to show, in a simple way, how you can make cells respond selectively to one interval or another using simple circuits and short-term synaptic plasticity,” he says.
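A back-of-the-envelope sketch of that idea follows; it is not the published 2000 model, and every parameter is assumed for illustration. Each model unit's response to the second of two tones combines residual facilitation (which decays slowly) with residual inhibition (which decays quickly), so units with different time constants end up preferring different intervals.

```python
# Illustrative interval-tuned units (assumed parameters; not the published model).
import math

F_AMP, I_AMP = 1.0, 1.5  # assumed amplitudes of the facilitatory and inhibitory traces

# (label, facilitation tau in ms, inhibition tau in ms)
UNITS = [("unit tuned near 50 ms", 80.0, 20.0),
         ("unit tuned near 100 ms", 160.0, 40.0),
         ("unit tuned near 200 ms", 320.0, 80.0)]

def drive(interval_ms, tau_f, tau_i):
    """Net input to a unit at the moment the second tone arrives."""
    facilitation = F_AMP * math.exp(-interval_ms / tau_f)
    inhibition = I_AMP * math.exp(-interval_ms / tau_i)
    return 1.0 + facilitation - inhibition

for interval in (50, 100, 200):
    responses = {label: drive(interval, tau_f, tau_i) for label, tau_f, tau_i in UNITS}
    winner = max(responses, key=responses.get)
    print(f"{interval:3d} ms interval -> strongest response from {winner}")
```

The point is only that nothing resembling a central clock is required: interval selectivity falls out of ordinary synaptic and cellular time constants.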
It’s all relative. Buonomano argues that when the brain senses brief, split-second intervals, its estimate of each one is constantly colored by the intervals that came just before it, analogous to new ripples in a pond overlaying those already spreading. Time, he says, is encoded within the behavior of neural circuits. In 2007, he and his then graduate student Uma Karmarkar showed that it is relatively easy to confuse people about how long a split second lasts: subjects’ perception of an interval depended on the placement of a third, distracting tone. The human study and the accompanying modeling suggest that the brain doesn’t rely on a specialized internal clock to register short intervals, but rather uses the natural dynamics of synapses and neurons to tell time. “It is very difficult for us to time intervals independent of each other,” Buonomano says. “Everything is encoded in the context of what just happened.”
Keep it simple. Buonomano has been a big proponent of using simple in vitro systems to study the full complexity of the brain’s billions of neurons and thousands of circuits. “I argue that if you look at circuits in your brain as little computational devices that can learn and pick up patterns, then one of the most valuable tools is cultures of mammalian cortical circuits that are set to learn and pick up patterns,” he says. Buonomano has used such in vitro models—slices of mouse and rat brain—to ask whether neural circuits can learn to tell time when repeatedly exposed to light stimulation delivered at intervals ranging from 50 to 500 milliseconds. In a 2010 study and in a paper in Neuron published this June, the lab demonstrated that the answer is yes. The circuits sense and adapt to the sensory world around them, learning to anticipate the timing of the light exposure and revealing an intrinsic ability to tell time. “These studies provide strong support for the notion that timing is a general computation that neurons and neural circuits evolved to do,” says Buonomano.
BUONOMANO BUILDS
Two views of time. “In the field of timing, there are two general views,” says Buonomano. “One is the ‘dedicated model’ that proposes a central clock in the brain, in the same way that a computer or smartphone has a timer chip whose job it is to tell time and govern the timing of other functions like the stopwatch or alarm clock. The second, the ‘intrinsic model,’ says that timing is such an important task that it doesn’t make sense to have a dedicated part of the brain that tells time but that all of the circuits in the brain can tell time. . . . Over the past 10 years, the field has definitely embraced the intrinsic timing model more and more, and that’s been a really rewarding process to participate in.”
Keeping it small. “My philosophy in managing a lab has always been to keep it small. I have not had more than three or four people in the lab and, right or wrong, that’s been a conscious decision from the start. That has helped me initially to continue to do experiments and, more recently, to continue doing computational and theoretical work. Staying active, I think, is a better way to do science, as opposed to too much grant writing. The advice many of us get is to have as big a lab as possible because you don’t know which projects will work out. But, in a way, it’s easier to run a smaller lab and to have fewer grants and to continue doing science yourself.”
Greatest Hits
- Demonstrated long-term associative plasticity at synapses between Aplysia sensory and motor neurons
- Using neuronal circuit modeling, demonstrated that neurons discriminate between time intervals and durations, providing evidence that individual neurons can tell time
- With colleagues, developed the first neural network models of timing, as well as computational models now referred to as reservoir computing
- Along with colleagues, established that people can learn to discriminate between short time intervals through perceptual training, but that the learning does not generalize to novel intervals
- Showed that short-term synaptic plasticity allows neural network models to distinguish temporal intervals and patterns
- Demonstrated that even cortical circuits in a dish can—in a sense—learn to keep time
- Established that it is possible to tame chaos in recurrent neural network models by tuning the recurrent synaptic weights, which allows for the robust encoding of time and memories