ABOVE: A microscopy image of DishBrain in which fluorescent markers show different cell types: green marks neurons and axons, purple marks neurons, red marks dendrites, and blue marks all cells. Where multiple markers are present, colors are merged and typically appear as yellow or pink. CORTICAL LABS

A longstanding goal of artificial intelligence research is to develop artificial general intelligence—that is, the sort of conscious or humanlike AI seen in science fiction. Engineers frequently pursue this “AGI” by trying to develop algorithms or AI architectures based on the human brain’s ability to learn and integrate information, and to piece together context in such a way that it can truly understand something. But in a new approach to recreating learning and intelligence, a system called the DishBrain instead merges living brain tissue with technology.

DishBrain, a product of the Australian biotech company Cortical Labs, is a platform that can teach living neurons to perform tasks by stimulating them with electrophysiological signals, then reading the resulting activity in the cells. In new work published today (October 12) in Neuron, researchers showed that cultures of mouse or human neurons were capable of learning to play the classic 1972 Atari video game Pong after about five minutes.

“I would say it is accurate to call this a form of learning because it is goal-directed activity adaptation that spans minutes,” Harvard Medical School neuroscientist Yasmín Escobedo Lozoya, who didn’t work on the study, tells The Scientist over email.

According to Cortical Labs Chief Scientific Officer Brett Kagan, this is ample evidence that cultured networks of neurons provided with stimulation and feedback are capable of learning—and that they’re “sentient,” he tells The Scientist. With further refinements, he says, DishBrain could be used to probe the mechanisms of intelligence, study the effects of pharmaceuticals on neurons, or develop better AI based on a synthesis of biological and engineered components.

“Can this be a new way of thinking about what neurons are?” asks Kagan. “Are they just part of human and animal biology? Or can they be a new biomaterial for intelligence? . . . Why try and mimic what you can harness?”


See “Building a Silicon Brain”

The DishBrain system is an in vitro setup for electrophysiological stimulation and recording. Cultures of cortical neurons are grown on a grid of electrodes capable of delivering to specific neurons a jolt that resembles the typical electrochemical communication a cell would receive from its neighbors in the brain. The array can also record a cell’s electrophysiological response and feed that response into whatever digital task is at hand, creating a closed-loop system of real-time communication and feedback between the system’s software and living brain cells. The array is physically divided into different sections: a sensory region, through which feedback and stimulation are provided to the neurons, and multiple motor regions.

In Pong, a player must use their paddle (little more than a line of pixels) to keep a ball from entering their goal while trying to hit it into their opponent’s, like a virtual game of air hockey. The simple game has become a go-to proof-of-concept challenge for brain-computer interface systems. Spikes in activity from the neurons situated in one motor region are interpreted as cues to move the game paddle up. Neuronal activity in the other moves the paddle down.
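The motor-decoding scheme described above can be sketched in pseudocode-style Python. This is a hypothetical illustration based only on the article’s description; the function names, spike counts, and tie-breaking rule are all invented, not taken from Cortical Labs’ actual software.

```python
# Hypothetical sketch of DishBrain's closed-loop motor decoding, based on
# the article's description: spiking in one motor region moves the paddle
# up, spiking in the other moves it down. All names and values are invented.

def decode_paddle_move(up_region_spikes, down_region_spikes):
    """Interpret spike counts from two motor regions as a paddle command.

    Whichever region fires more in the current window wins; a tie
    leaves the paddle where it is.
    """
    if up_region_spikes > down_region_spikes:
        return +1   # move paddle up
    if down_region_spikes > up_region_spikes:
        return -1   # move paddle down
    return 0        # no movement

# Example closed-loop step: activity read from the electrode array drives
# the game, and the new ball position would then be encoded back to the
# sensory region as stimulation.
paddle_y = 0
paddle_y += decode_paddle_move(up_region_spikes=12, down_region_spikes=5)
```

In the real system this decision would be made continuously as activity streams off the electrode array, closing the loop between cells and game many times per second.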

“When we give them information, we try to make it as close [as possible] to what one might receive biologically,” Kagan says.

While Kagan says that the neurons did indeed play Pong in real time, his team had to make a few modifications for the sake of simplicity. “That paddle’s bigger, the ball moves a bit slower,” he says. Also, the neurons’ goal is to chase a high score rather than to win. Therefore, it “doesn’t really have an opponent to play against; it can’t win,” Kagan adds. “It would be over-complicated to try and create a win condition. All it had [in terms of outcomes] was ‘hit the ball,’ ‘keep playing,’ or a ‘lose’ condition where they received a different sort of feedback.”

Scanning electron microscope image of a highly interconnected neural culture on the DishBrain system

The stimuli provided to the neuron cultures differed in terms of intensity and predictability. The neurons in the sensory region were regularly provided with weak jolts that encoded the position of the ball in the game, according to the paper. When neurons in the motor regions behaved in such a way that they successfully lined up the paddle with the ball to bounce it back across the screen, sensory region neurons were given a predictable stimulus that fostered neuronal connectivity—the neuronal equivalent of a reward. When they failed to do so, they were given an unpredictable stimulus that was longer and more intense, resulting in disruption to the culture. The neurons soon learned to avoid the disruptive stimuli and seek out the “hit” condition.
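The feedback rule described in this paragraph—predictable stimulation as a “reward” for a hit, longer and unpredictable stimulation after a miss—can be sketched as follows. This is a minimal illustration of the idea, not the team’s actual protocol; the electrode counts, timings, and pattern shapes are assumptions.

```python
import random

# Hypothetical sketch of the feedback protocol described in the article:
# a "hit" earns a short, predictable sensory stimulus, while a "miss"
# earns a longer, unpredictable one that disrupts the culture.
# All parameters here are invented for illustration.

def feedback_stimulus(hit, rng=random.Random(0)):
    """Return a list of (electrode, delay_ms) stimulation events."""
    if hit:
        # Predictable: the identical short pattern every time.
        return [(e, 10 * e) for e in range(4)]
    # Unpredictable: random electrodes at random times, over a longer window.
    return [(rng.randrange(8), rng.uniform(0.0, 150.0)) for _ in range(12)]
```

The asymmetry is the point: the cultures reorganize their activity to make their sensory input more predictable, which in practice means lining the paddle up with the ball.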

Both human neurons cultured from induced pluripotent stem cells and embryonic mouse neurons were able to learn to play the game, and cultures of both cell types improved over time. However, the human cells racked up significantly longer rally times, hitting the ball more times in a row, according to the study. The improvements in performance over time, Kagan says, are evidence that a small network of neurons is capable of learning a task, provided it’s given ample and specific feedback and cues.

Eric Chang, a neuroscientist at the Institute of Bioelectronic Medicine at the Feinstein Institutes for Medical Research in New York who didn’t work on the study, tells The Scientist over email that he finds the setup interesting, adding that the “utility of this project is unclear at this moment, but that does not mean it is not worthwhile. As we know, computers and artificial intelligence can outperform the human brain in many specialized tasks so I'm not sure what application would benefit from a handful of neurons interacting with this type of electronics.”

Escobedo Lozoya, however, mentions that she could see the system being used to screen potential drugs to see whether they might affect brain function.

Perhaps even more up in the air is whether the neurons cultured in the DishBrain are sentient. Kagan acknowledges that his claim that this represents sentience is likely to cause controversy, especially given what he says is an overabundance of hype and boosterism in the AI development field.

“That was a term we gave a lot of consideration and internal debate to,” Kagan says, noting that he and colleagues published a commentary in AJOB Neuroscience on their choice of terminology earlier this year. “I must stress we do not mean consciousness,” he adds, although the two are often conflated. “Consciousness is this experience of what it feels like to be humans. Sentience, formally, and historically, is being able to sense the environment . . . and to respond to it.”

That definition applies here, he adds. “We think this is the first time neurons have been put in an environment they can interact with,” rather than having an indirect relationship with the environment via the intermediary of the body.

Not everyone is convinced. Chang says that “this is a proof-of-principle paper that demonstrates how living neurons in a dish interact with a computer chip in a limited way, but it’s not something that I would call ‘sentience’ or ‘biological intelligence.’” For her part, Escobedo Lozoya says, “I’ll leave deciding whether this constitutes ‘sentience’ to the philosophers.”

Kagan says he welcomes that kind of critique as his team continues to develop the system. “We tried to reduce hype as much as possible,” he says. “We think the implications are exciting as they are; there’s no need for hype around it.”