Schwartz had clearly knocked it out of the park, and his University of Pittsburgh lab was inundated with interview requests. It was a gratifying moment for the researcher, but not a comfortable one for a guy who’s more interested in the science than in the demo. “I hated it. I could never express what I wanted to express. All I could say is ‘self-feeding. Yeah, they can grasp pieces of food and bring it to their mouth,’” he said. “You end up telling the same damn thing over and over.”
Still, Schwartz was undeniably proud of the work. He’d shown proof of principle: not only could a monkey gain elegant and continuous control over a robot arm, but it could also use it as a worthy surrogate of its biological counterpart to perform an essential task.
The whiz-bang factor of brain-computer interfaces kept the public interested—and, importantly, the cash flowing—but it was the underlying science that most excited Schwartz. Brain-computer interfaces were pointing to some foundational principles of brain function. They were telling him things about how the brain learns, its relationship to objects, even thought itself. “I always laugh about psychologists and cognitive neuroscientists who say they’re going to study cognition or thinking. I say, ‘Can you define that for me? If I were going to poke one of my electrodes in the brain and find a thought, how would I know if I found it?’ ” he said. “They can’t define it! They can’t even define the necessary parameters of thought, so how am I supposed to find it?”
What Schwartz had developed, by contrast, was a closed input-output system he could use to test the accuracy of his model. “We can prove how well it works because we can look at the movement, or the performance. You can’t do that if you say, ‘Oh, thought takes some electricity and some chemicals.’ Where’s your model?” he said. “But I can say, well, based on my model—my hypothesis—my subject can do this.”
The self-feeding task didn’t give Schwartz a way to explore the more fundamental questions of how the brain generates neural code or why a motor neuron changes its activity pattern. What it did give him, however, was a way to observe the brain as it shifted those patterns of activity.
Simply stated, an individual motor neuron will fire more rapidly when initiating movement in a “preferred” direction. The farther the intended movement is away from an individual neuron’s preferred direction, the more slowly that neuron will fire. It was by combining the firing patterns of a population of individual neurons—a population vector—that the Georgopoulos lab first accurately anticipated movement in the 1980s.
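The Georgopoulos model is usually described as cosine tuning: a neuron fires fastest when movement heads in its preferred direction, and its rate falls off with the cosine of the angle away from that direction. A minimal sketch of the idea, with a simulated population whose size, baseline rate, and modulation depth are illustrative numbers rather than values from any lab's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population: 100 neurons, each with a random
# preferred direction in the 2-D movement plane.
n_neurons = 100
pref_angles = rng.uniform(0, 2 * np.pi, n_neurons)
pref_dirs = np.column_stack([np.cos(pref_angles), np.sin(pref_angles)])

BASELINE = 10.0    # spikes/s at rest (illustrative)
MODULATION = 8.0   # extra spikes/s at the preferred direction (illustrative)

def firing_rates(movement_dir):
    """Cosine tuning: each neuron's rate peaks when the intended
    movement aligns with its preferred direction."""
    movement_dir = movement_dir / np.linalg.norm(movement_dir)
    return BASELINE + MODULATION * (pref_dirs @ movement_dir)

def population_vector(rates):
    """Each neuron 'votes' for its preferred direction, weighted by
    how far its rate sits above or below baseline; the vector sum
    estimates the intended movement direction."""
    weights = rates - BASELINE
    vec = weights @ pref_dirs
    return vec / np.linalg.norm(vec)

# Intended movement: up and to the right.
true_dir = np.array([1.0, 1.0]) / np.sqrt(2)
decoded = population_vector(firing_rates(true_dir))
angle_err = np.degrees(np.arccos(np.clip(decoded @ true_dir, -1.0, 1.0)))
```

Even with only a hundred randomly tuned neurons and no averaging over trials, the decoded vector lands within a few degrees of the true direction, which is why the population vector made continuous control of a robot arm plausible.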
The field has been working with that model ever since. But one of the features Schwartz and others have highlighted in the intervening years is the tendency of neurons to shift their preferred firing direction to better accommodate external modalities such as robot arms.
A key tenet of neuroplasticity is that the brain reorganizes itself by creating fresh synaptic connections between neurons. Each of the brain’s estimated 100 billion neurons is synaptically connected to an estimated 10,000 other neurons. At any given moment, an individual neuron may be receiving inputs (in the form of neurotransmitters) from thousands of neighboring cells, each coaxing it to produce or withhold an action potential. Once the receiving neuron reaches an informational threshold, it will produce an action potential of its own, releasing still more electrochemical signals to nearby neurons (each of which is receiving inputs from thousands of other cells).
No one knows for certain what the tipping point is for a neuron to fire. Is it an accretion of inputs from thousands of nearby neurons? Are there certain neurons whose activities are so intimately bound that when one fires, the other does as well? Do some neurons have more influence than others? Are they all equal? No one really knows.
Nevertheless, new synaptic connections are critical: they are the physical alteration of the brain’s physiology that produces new behaviors. Said differently, they are the physical process of learning. We learn new behaviors or skills by altering our brain’s activity and physical landscape, and these changing synaptic connections are the fundamental building blocks of that process.
And yet no one understands the underlying mechanism of this process. “Everybody talks about synapses changing their efficacy—that they’re plastic and that their synaptic efficacy changes, but that’s not a model. That’s not showing you that this happens, and then this happens, and I get this result,” said Schwartz. “But with BCI, we can do that. We can actually make a subject learn.”
Schwartz can induce changes in how the brain behaves. “I can explicitly force you to change the way your neurons fire,” he said. By altering the output algorithm that controls the robotic arm, Schwartz can make a neuron whose activity is normally associated with, say, moving the arm up and to the right, initiate a movement in another direction.
Faced with such a contradictory output, neurons in the motor cortex will actually change their directional tuning to accommodate the new paradigm. “That is much closer to the way learning really takes place in the brain than trying to understand how some neurotransmitter changes a little bit or how a protein changes,” he said. “I can’t tell you how a neuron changes its tuning function, but I can tell you certain ways that it changes its tuning function.”
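The perturbation Schwartz describes can be pictured as a change to the decoder rather than to the brain: the output algorithm simply reinterprets a neuron's activity as pointing somewhere new. A sketch of that idea, using a rotation of the decoder's mapping (the 90-degree angle and the single-neuron framing are illustrative choices, not the lab's actual protocol):

```python
import numpy as np

def rotation(theta):
    """2-D rotation matrix through angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Suppose the decoder credits this neuron with driving the arm
# up and to the right.
neuron_pref = np.array([1.0, 1.0]) / np.sqrt(2)

# Rotating the decoder's mapping by 90 degrees means the same
# firing pattern now pushes the arm up and to the left -- the
# contradictory output that pressures the neuron to re-tune.
perturbed_pref = rotation(np.pi / 2) @ neuron_pref
```

Because the animal still wants the arm to go where it intends, the mismatch between intended and decoded movement is what drives the motor cortex to shift its directional tuning, and the experimenter can watch that shift happen in the recorded activity.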
Excerpted from The Brain Electric: The Dramatic High-Tech Race to Merge Minds and Machines by Malcolm Gay. Published in 2015 by Farrar, Straus and Giroux, an imprint of Macmillan. © Malcolm Gay, 2015.