
On a coffee break from the Methods in Computational Neuroscience class he codirects at the Marine Biological Laboratory (MBL) in Woods Hole, Mass., Bard Ermentrout is chatting with a student. It's unusually difficult to follow the conversation, because Ermentrout, a professor in the department of mathematics at the University of Pittsburgh, is talking entirely in equations – in near parody of most biologists' worst fears of a field populated largely by physicists and mathematicians. But despite the alien nature of the conversation, the questions that computational neuroscientists ask are becoming the questions that all neuroscientists ask. Indeed, "computational neuroscientist" eventually may become a redundant term.

"Anybody who concerns him or herself with how the brain computes is a computational neuroscientist independent of the technique they use," says Christiane Linster, assistant professor at Cornell University and current president of the Organization for Computational Neurosciences. "I would hope that eventually computational...

OF SQUIDS AND SPIKES

POPULATION BEHAVIOR:

© 2003 Cell Press

(A) Time slice points calculated from 110 projection neuron (PN) responses to four concentrations (0.01, 0.05, 0.1, 1) of three odors, projected onto three dimensions using locally linear embedding (LLE). Shown are 60 time slice points per trajectory (6 sec total, beginning 1 sec before stimulus onset, 100 msec bins, averaged over three trials). (B) Time slice points in (A) were connected in sequence to visualize trajectories (lines at vertices indicate SD). (C) Trajectories corresponding to responses to five concentrations of hexanol, projected onto three dimensions using LLE. Arrows, direction of motion; blue lines, time of stimulus offset, shown for two concentrations. Five trajectories (each an average of three trials, 15 trials per odor-concentration pair) are shown for each concentration; trajectories for the same concentration overlap one another but remain separate from those of other concentrations. (Reprinted with permission from M. Stopfer, Neuron, 39:991–1004, 2003.)
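As a rough illustration of the dimensionality-reduction step described in the caption, here is a minimal sketch using scikit-learn's LocallyLinearEmbedding; the random spike counts are a made-up stand-in for the real 110-neuron recordings, which used actual trial-averaged firing rates.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Hypothetical stand-in for the data described above: 60 time slices
# (100 msec bins) of trial-averaged firing rates from 110 PNs.
rng = np.random.default_rng(0)
time_slices = rng.poisson(2.0, size=(60, 110)).astype(float)

# Project each 110-dimensional time slice down to 3 dimensions with LLE;
# connecting the 60 resulting points in order traces out one trajectory.
lle = LocallyLinearEmbedding(n_components=3, n_neighbors=12)
trajectory = lle.fit_transform(time_slices)  # shape (60, 3)
```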

The most famous model in all of neural science, the field's answer to Watson and Crick's double helix, is the set of equations Nobel laureates Alan Hodgkin and Andrew Huxley wrote to describe the electrical circuit properties of the squid giant axon.1 "[It's] a beautiful example of how experiments and modeling go hand in hand," says Michael Hausser of the Wolfson Institute for Biomedical Research at University College London. "The original purpose of the model was simply to provide a compact description of the data," he says, "but in the end it went far beyond the data in making experimentally testable predictions about the underlying mechanisms and providing an intellectual framework for new discoveries in the field."

Half a century later, generalizations of the original equations are still used to describe neural dynamics in such diverse animals as flies, frogs, and humans. "The model still stands, and its intellectual spawn are all over," says Koch.
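In the standard notation, the model describes the membrane potential $V$ as a capacitor in parallel with voltage-dependent sodium and potassium conductances and a passive leak:

$$C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}}) - \bar{g}_L\,(V - E_L) + I_{\mathrm{ext}},$$

where each gating variable $x \in \{m, h, n\}$ relaxes according to

$$\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x,$$

with voltage-dependent rate functions $\alpha_x$ and $\beta_x$ that Hodgkin and Huxley fit to voltage-clamp measurements from the squid axon.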

Another significant influence is Claude Shannon's classic work, "A Mathematical Theory of Communication," which underlies modern digital communications.2 A Bell Labs mathematician, Shannon was concerned with getting more information down increasingly overloaded telephone wires. He mathematically described how to detect and encode only the essential information in a signal, a process that involves translation and compression.
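Two quantities from that paper convey its flavor. The entropy of a source with symbol probabilities $p_i$ sets the limit on lossless compression, and the capacity of a noisy channel with bandwidth $B$ and signal-to-noise ratio $S/N$ bounds how fast information can be sent reliably:

$$H = -\sum_i p_i \log_2 p_i \ \text{bits per symbol}, \qquad C = B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits per second}.$$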

The appeal of such ideas to neuroscience is obvious, as scientists search for the answer to the fundamental question of how the brain processes sensory information. "The whole issue of coding has occupied people tremendously," says Abbott. "All of our sensory stimuli get turned into spikes." [A spike occurs when a stimulated neuron fires an action potential, or electrical pulse, down its axon.] "There's been a huge effort to understand: What's that translation, how does it work?" he says.

Thus, computational neuroscientists consider such questions as how many times per second a neuron spikes (called a rate code or independent-spike code), what patterns it makes with those spikes (a temporal or correlation code), and which of the two actually carries the information. A rate code considers only how many times a neuron spikes in a given period, say 10 spikes per second. In a finer-grained temporal code, exactly when each individual spike occurs matters as well. So neurons A and B may both fire 10 times in the same second, but if A fires seven times, pauses, then fires three more, while B fires five times, pauses, then fires five again, they are considered to be conveying different messages.
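The distinction is easy to demonstrate with a toy calculation (the spike times below are invented for illustration): both trains carry the same rate-code message but different temporal-code messages.

```python
import numpy as np

# Invented spike times (in seconds) for the neurons A and B described above.
# A: seven spikes, a pause, then three.  B: five, a pause, then five.
neuron_a = np.array([0.02, 0.07, 0.12, 0.17, 0.22, 0.27, 0.32, 0.80, 0.85, 0.90])
neuron_b = np.array([0.02, 0.07, 0.12, 0.17, 0.22, 0.70, 0.75, 0.80, 0.85, 0.90])

# Rate code: spike count over the one-second window -- identical.
print("rate A:", len(neuron_a), "Hz   rate B:", len(neuron_b), "Hz")

# Temporal code: count spikes in 100 msec bins and the patterns diverge.
bins = np.arange(0.0, 1.1, 0.1)
pattern_a, _ = np.histogram(neuron_a, bins)
pattern_b, _ = np.histogram(neuron_b, bins)
print("pattern A:", pattern_a)
print("pattern B:", pattern_b)
```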

But that is just the beginning of the code; what really matters is how populations of cells fire. "Information theory provides the natural mathematical language to ask questions about the neural code," says William Bialek of Princeton University. "Using information theory we can measure how much each spike tells us about the outside world, and we can start to dissect the language of the brain. If two different cells spike at the same time [i.e., synchronously], does this convey two independent pieces of information, or does it form a new symbol in the code, in the same way that we can put letters together to form new symbols in English, like 'th'?"
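As a concrete, deliberately simplified illustration of the information-theoretic approach, the sketch below computes a plug-in estimate of the mutual information between a made-up binary stimulus and a noisy spike count. Real analyses use far more careful, bias-corrected estimators and much more data; this only shows the shape of the calculation.

```python
import numpy as np

def mutual_information(stim, resp):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples."""
    s_vals, s_idx = np.unique(stim, return_inverse=True)
    r_vals, r_idx = np.unique(resp, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    np.add.at(joint, (s_idx, r_idx), 1)       # count co-occurrences
    joint /= joint.sum()                      # joint probability p(s, r)
    ps = joint.sum(axis=1, keepdims=True)     # marginal p(s)
    pr = joint.sum(axis=0, keepdims=True)     # marginal p(r)
    nz = joint > 0                            # skip zero cells to avoid log(0)
    return np.sum(joint[nz] * np.log2(joint[nz] / (ps * pr)[nz]))

# Hypothetical experiment: a binary stimulus drives extra spikes.
rng = np.random.default_rng(1)
stim = rng.integers(0, 2, 5000)
resp = stim * 3 + rng.poisson(1.0, 5000)      # spike counts per trial
print(f"I(S;R) = {mutual_information(stim, resp):.2f} bits")
```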

Information theory may also provide general principles about how the neural code performs, says Bialek. "Especially exciting are experiments showing how the brain can change its code as it adapts to different environments, and at least in one case it was possible to show that this adaptation serves to optimize how much information is being transmitted."

AND CABLES, TOO

Second to Hodgkin and Huxley in influence may be the work of Wilfred Rall.3 In the classical model of neural electrical transmission, dendrites are the input devices while the axon provides the neuron's output. Rall's insight was to apply cable theory to dendrites, viewing them as subject to the same laws of attenuation (signal degradation over distance) as electrical cables. (Axons, by contrast, do not obey linear cable theory; they propagate signals over long distances without attenuation.) Rall's influence led a new generation of researchers to take dendritic morphology seriously, and to consider how inputs to a highly branched dendritic arbor combine across time and space.
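Rall's starting point is the passive cable equation. With membrane time constant $\tau_m$ and length constant $\lambda$ (set by the ratio of membrane to axial resistance), the voltage $V(x, t)$ along a dendrite obeys

$$\lambda^2 \frac{\partial^2 V}{\partial x^2} = \tau_m \frac{\partial V}{\partial t} + V,$$

and in the steady state a potential applied at one point of an infinite cable decays as $V(x) = V(0)\,e^{-|x|/\lambda}$ – the attenuation that makes a synapse's position on the tree matter.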

Bartlett Mel, associate professor at the University of Southern California, is one of several researchers asking questions about the basic functional compartments in neurons. Are they single dendritic spines or clusters of spines, dendritic branchlets, or large swaths of the dendritic tree? Showing off drawings of neurons as different in shape and size as elm trees are from tumbleweeds, Mel says, "These cells clearly have very different cable structures, which probably translates into very different integrative behaviors." Discussing his own work, he adds, "In experiments both in computer models and in brain slices – work in collaboration with Jackie Schiller at Technion Medical School in Haifa [Israel] – we've found that if you're stimulating a neuron with two synapses, not just one, then where they are located relative to each other can make a huge difference in terms of the neuron's response."

"This harks back to Christof's [Koch] work in the early 1980s," Mel says. "His PhD thesis showed, using computer simulations, that a neuron is in principle capable of supporting a good deal of electrical compartmentalization. That is, things that are going on in this branch are likely to be isolated from things that are going on over here. So you might have a compartmentalized, highly nonlinear device."

N-DIMENSIONAL SPACE

To anyone considering a career in computational neuroscience, Abbott offers this advice: "You need a little math." Of course, as a physicist, his idea of a little math means at the very least calculus and college-level linear algebra. In computational neuroscience, as in bioinformatics, many of the breakthrough techniques take the form of applied mathematics (featured in journals such as Neural Computation and the Journal of Computational Neuroscience) needed to analyze high-dimensional data sets, including neuronal population codes and data generated by functional magnetic resonance imaging.

NEURAL NET:

Courtesy of Michael Hausser

Synaptic connection made by an interneuron with the dendritic tree of a Purkinje cell in the cerebellar cortex. The Purkinje cell was filled with Texas Red, the interneuron with Lucifer Yellow.

"There are orders of magnitude more interconnections in these systems," says Abbott. "In many physical systems researchers have concentrated on nearest-neighbor couplings; each unit might couple to its neighbor on the left or the right or above and below it. But in a system like a vertebrate brain, there are thousands of interconnecting synapses per neuron that must be considered."

In addition, says Abbott, events occur over a wide variety of time scales in biological systems. "In most physical systems that mathematicians like to analyze, things change over a single time scale, say a second. But in a biological system changes range from milliseconds to days, months, and even years. So you have this incredible dynamic range you have to take into account. That one is an enormous challenge."

If, for example, you're doing heroic experiments like those of MIT's Matt Wilson or Caltech's Thanos Siapas, in which arrays of up to 200 electrodes record from the brains of living mice, you have data that is complex to the point of incomprehensibility, requiring not just more raw computing power but entirely new mathematical methods. "We're actually developing a kind of new mathematics," says Sejnowski of his own lab's work on independent component analysis, a relative of principal component analysis that exploits higher-order statistics and so can handle more complex datasets. "It's not just useful for brains," says Sejnowski. "It's used for speech recognition, for work in genomics. There's a lot of common ground; we use some of the same algorithms."
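The sketch below does not use Sejnowski's own algorithm (Infomax ICA, developed with Tony Bell); instead it applies scikit-learn's FastICA, a different but widely used ICA method, to invented data, just to show the blind-source-separation problem ICA solves: recovering statistically independent sources from linear mixtures.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Invented "sources": a smooth oscillation and a square wave.
t = np.linspace(0, 8, 2000)
sources = np.column_stack([np.sin(2 * t), np.sign(np.sin(3 * t))])

# Each "electrode" records a different linear mixture of the sources.
mixing = np.array([[1.0, 0.5],
                   [0.5, 1.0]])
recordings = sources @ mixing.T

# ICA unmixes the recordings into independent components,
# recovered up to reordering and rescaling.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(recordings)  # shape (2000, 2)
```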

Abbott sees the situation differently. "We try to have clever new ideas, but I don't know if there's a new math coming out of this ... it's conventional math being applied to more complex systems. A lot of what drives that is the ability to do simulations on a computer that the classical mathematicians were not able to do. You can analyze systems that are more nonlinear. The classical methods of mathematics were really best developed for linear systems."

So, is this really anything more than a bunch of physicists trying to make sloppy, complex biology more elegant than it actually is? Hausser says, "I think there's always the temptation to do that, but nature often does come up with beautiful solutions to problems, and more power to the people who find those solutions."

Karen Heyman kheyman@the-scientist.com

From Palm Computing to Redwood Neurons

In 1986 Jeff Hawkins, then a graduate student in biophysics at the University of California, Berkeley, wanted to study theoretical neuroscience, but he was advised that the field didn't exist. Hawkins left academia and founded Palm Computing (maker of the PalmPilot), and then Handspring, a Palm competitor. In 2002 Hawkins founded (and completely self-funded) the Redwood Neuroscience Institute (RNI) in Menlo Park, Calif., an independent research institute for theoretical neuroscience.

RNI has brought together a small group of principal investigators, postdocs, and graduate students to research questions in theoretical neuroscience; there are no wet labs. Although RNI does not grant degrees, it has a formal affiliation with UC-Berkeley and a growing connection to Stanford University. "It's a very interesting group of people ... They're trying to push for the breakthrough, the big idea that's going to drive both the theory and the experiments to a whole new level," says Michael Hausser of the Wolfson Institute for Biomedical Research, University College London. "I hope it happens."

At RNI, the research is independent, says Hawkins, but common themes prevail. One is that the neocortex performs inference and prediction. "In my model, it's a massive memory system, structured in a very unusual way, that models the world," he says. "The purpose of this system is to feed back predictions about what's going to happen next."

Hawkins has put many of his own ideas into a recently published book, On Intelligence (cowritten with The New York Times science writer Sandra Blakeslee). "I've got testable hypotheses in the book, which go down to fairly low-level predictions," says Hawkins.
