Animal robots have become a unique tool for studying the behavior of their flesh-and-blood counterparts.
October 1, 2013
As a PhD student at the University of Toulouse in France, Simon Garnier was fascinated by the chemical signposts used by Argentine ants—an invasive species found everywhere from the Mediterranean to California—to navigate their environment. As the insects traverse complex terrain, they leave traces of pheromones that other ants will then follow, reinforcing the trailblazers’ path. “In nature, they will create these big networks of pheromone trails, sort of like the road system for us,” Garnier explains. And despite their wide-ranging and convoluted habitats, the ants always seem to construct highways that carve the shortest route back to the nest from a food source. Such navigational efficiency might suggest an advanced intelligence in these tiny-brained insects. The ants, which tend to take the path with the smallest angle of deviation at each fork in a complex maze, could be computing the angles at each bifurcation. But Garnier knew there might be a simpler answer: by just trying to head straight, the ants would have a greater chance of taking the less deviant path—no complex angle measurements required.
Like any hypothesis, his idea needed to be tested. But measuring brain activity in a moving ant—the most direct way to determine cognitive processing during animal decision making—was not possible. So Garnier didn’t study ants; he studied robots. Using a small fleet of dice-size machines, rolling on wheels powered by wristwatch motors, he and his colleagues tested the robots’ ability to navigate artificial networks, using whatever computational capability the researchers programmed. A camera detected the location of the robo-ants as they moved through an arena and relayed the information to a video projector, which shone a bit of blue light just behind a trail-laying bot. As more robots moved about, the more-frequented areas glowed brighter. The robots then navigated the environment by sensing light intensity through two sensors on their “heads.”
In making an artificial organism, you discover new constraints on what it is to be an animal and move around in the environment. I really think there is a lot to be discovered by doing the engineering side along with the science.—Jeffrey Schank, University of California, Davis
As Garnier had suspected, no higher intelligence was necessary for the robo-ants to take the less wayward turn, and when released en masse, the bots built and followed light roads that mimicked the real ants’ pheromone highways, albeit in a simpler environment.1 The only rule the robots had to follow was “Go straight,” and their behavior matched that of real ants “almost perfectly,” says Garnier. “To explain the behavior of the ants, we don’t need to have any form of complex cognitive processing; for this particular decision, it [can be] decided by the shape of the environment.”
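The trail-reinforcement dynamic at work here can be illustrated with a classic toy model (not Garnier’s actual code): two branches of different lengths connect nest to food, ants pick a branch in proportion to its pheromone level, and the shorter branch accumulates pheromone faster simply because trips along it finish sooner. A minimal deterministic sketch in Python, with hypothetical parameter values:

```python
def simulate_trails(len_short, len_long, rho=0.1, steps=200):
    """Mean-field model of pheromone trail selection between two branches.

    rho is the evaporation rate; each branch gains pheromone in
    proportion to its share of the traffic and inversely to its length
    (a shorter branch means faster round trips, so more deposits per
    time step). Returns the final fraction of traffic on branch one.
    """
    tau_short = tau_long = 1.0  # initial pheromone on each branch
    for _ in range(steps):
        p_short = tau_short / (tau_short + tau_long)  # traffic share
        tau_short = (1 - rho) * tau_short + p_short / len_short
        tau_long = (1 - rho) * tau_long + (1 - p_short) / len_long
    return tau_short / (tau_short + tau_long)

# With one branch half as long as the other, almost all traffic ends up
# on the shorter branch; with equal branches, the split stays even.
print(simulate_trails(1.0, 2.0))  # close to 1.0
print(simulate_trails(1.0, 1.0))  # exactly 0.5
```

No individual in this model measures path lengths or computes angles; the shortest-route “highway” emerges from deposition and evaporation alone, which is the broader point of such decentralized models.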
Indeed, several groups have used autonomous robots that sense and react to their environments to “debunk the idea that you need higher cognitive processing to do what look like cognitive things,” says Andrew Philippides of the University of Sussex in the U.K. As early as the mid-1990s, biorobotics pioneer Barbara Webb of the University of Edinburgh was developing robots to mimic the phonotaxis of female crickets—which are adept at localizing and moving toward calling mates. In so doing, Webb showed that the cricket’s auditory system alone, absent any complex cognitive processing, was sufficient for a robot to identify and approach a male’s call.2
CNRS PHOTOTHEQUE/CRCA TOULOUSE/SIMON GARNIER
Today, a growing number of scientists are using autonomous robots to interrogate animal behavior and cognition. Researchers have designed robots to behave like ants, cockroaches, rodents, chickens, and more, then deployed their bots in the lab or in the environment to see how similarly they behave to their flesh-and-blood counterparts. In some cases, experimenters have thrown robots in with animals to see how they interact, and have even programmed robots to influence group decisions.
Among their many benefits, robots give behavioral biologists the freedom to explore the mind of an animal in ways that would not be possible with living subjects, says University of Sheffield researcher James Marshall, who in March helped launch a 3-year collaborative project to build a flying robot controlled by a computer-run simulation of the entire honeybee brain. “Running experiments, especially neuroscience experiments with animals, is a very costly, time-consuming process,” he says. “There’s much less scope for curiosity-driven research there.”
Furthermore, designing and programming robots to recapitulate specific behaviors forces scientists to think about animals in an entirely different way. “[In making] an artificial organism . . . you discover new constraints on what it is to be an [animal] and move around in the environment,” says Jeffrey Schank, who has built robots to study the behavior of rat pups at the University of California, Davis. “I really think there is a lot to be discovered by doing the engineering side along with the science.”
As Garnier’s efficiently navigating robo-ants demonstrated, complex behaviors are not always what they seem. Schank came to the same realization completely by accident, while investigating how young rats huddle together with their nest mates during the first week of life. He and his colleagues built self-propelled robot rats, about four times larger than real rat pups, but constructed to have the same general body shape. Each robot was equipped with a ring of pressure sensors that allowed it to respond to contact. Schank coded the rules of aggregation he’d developed from a computer simulation into the robots, and when he set them loose in an arena four times the size of the space he’d given to real rat pups, he thought it was a smashing success. Not only did the bots move around the space like the rat pups did, they aggregated in remarkably similar ways to the real animals.3 Then Schank realized that there was a bug in his program. The robots weren’t following his predetermined rules; they were moving randomly.
Robots often debunk the idea that you need higher cognitive processing to do what look like cognitive things.—Andrew Philippides, University of Sussex
“It turned out that a lot of what the pups were doing in this context could be explained by the shape of their bodies and how they interact with the arena,” says Schank. And the experience taught him a valuable lesson. “You can’t just investigate what’s going on in the brain of an organism,” he adds. “Cognition and behavior are a function of the environment, the body, and the brain.”
Of course, that doesn’t mean the animals don’t have higher processing skills. Predictions derived from robotics-based inquiries will always have to be tested in animals, emphasizes Tony Prescott, a cognitive neuroscientist at the University of Sheffield in the U.K., who develops rodent-mimicking whiskered robots. “Animal experiments are still needed to advance neuroscience.” But, he adds, robots may prove to be an indispensable new ethological tool for focusing the scope of research. “If you can have good physical models,” Prescott says, “then you can reduce the number of experiments and only do the ones that answer really important questions.”
One commonly cited benefit of robotics-based inquiries is that they are a step closer to reality than a straight computational approach. The robots, though still simulations themselves, interact in a real physical space, rather than in an environment simulated by a computer program—a necessarily incomplete depiction of the real world. For example, in Philippides’s work on visual navigation in ants and bees, “the thing that’s just so hard to simulate with any sort of realism is what the world looks like and how the world changes when the sun goes behind a cloud,” he says. “A simulation just doesn’t cut it.”
Furthermore, there are still open questions about what parts of the environment are important to animals, and virtual simulations are inherently biased by the researchers’ knowledge. “People say simulation is doomed to succeed,” says Webb. “If you build the simulated world and the simulated animal to live in that world, then what you put into the simulated world is all the things that you think are important . . . so there’s a certain circularity” in the logic, she says. As a result, it should not be surprising that the simulated animal “works” in the simulated world, she explains, but “in the real world, you nearly always get caught out with something that you didn’t expect.”
Building animal-mimicking robots is not easy, however, particularly when knowledge of the system’s biology is lacking. For example, when Prescott and his collaborators went to program whisker movements in their motorized robo-rats, which they use to test theories about cognition and motor control, they realized they didn’t know how the whiskers should react when they touch objects. “No one had asked that question,” Prescott says. (For more on Prescott’s research, see “Robo Rat,” The Scientist, April 2012.)
LOBSTER COURTESY OF FRANK GRASSO
But by asking such engineering questions, researchers often get biological answers. When Frank Grasso, director of the Biomimetic and Cognitive Robotics Lab at the City University of New York in Brooklyn, began designing robots to investigate lobster navigation, he soon learned that having the robots recognize and follow scents wasn’t sufficient. Grasso first programmed his robo-lobsters—which consisted of a cylindrical body on two large wheels and fiber-optic antennae that detected chemicals in the water—to head toward high concentrations of an odor, the general principle believed to be used by real lobsters to locate their prey. (See photograph at right.) But this rule failed to recapitulate natural lobster behavior. When the researchers also gave the robots a sense of flow, however, and programmed them to assume that odors come from upstream, the bots much more closely mimicked real lobster behavior. “That was a demonstration that the animals’ brains were multimodal—that they were using chemical information and flow information,” says Grasso, who has since worked on robotic models of octopus arms and crayfish.
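Grasso’s finding—that pure concentration-following fails where odor-gated upstream movement succeeds—can be caricatured in a one-dimensional toy model. Everything below (the plume profile, the agent rules, the function names) is illustrative, not a reconstruction of Grasso’s controller:

```python
def gradient_agent(conc, start, steps=20):
    """Move to whichever neighboring cell has the highest odor
    concentration; stop when no neighbor improves on the current cell."""
    x = start
    for _ in range(steps):
        neighbors = [i for i in (x - 1, x, x + 1) if 0 <= i < len(conc)]
        best = max(neighbors, key=lambda i: conc[i])
        if best == x:
            break  # stuck on a local concentration peak
        x = best
    return x

def rheotactic_agent(conc, start, threshold=0.3, steps=20):
    """Odor-gated rheotaxis: whenever odor is detectable, surge upstream
    (toward index 0, where the source sits); otherwise stop."""
    x = start
    for _ in range(steps):
        if x == 0 or conc[x] < threshold:
            break
        x -= 1
    return x

# A patchy plume: the source is at index 0, but turbulence has left a
# secondary concentration peak at index 5.
plume = [10, 4, 2, 1, 5, 6, 3, 1, 0.5, 0.2]
print(gradient_agent(plume, start=7))    # 5 -- trapped on the false peak
print(rheotactic_agent(plume, start=7))  # 0 -- reaches the source
```

The gradient climber halts on the false peak because real odor plumes are patchy rather than smoothly graded; combining the chemical cue with the flow cue is what gets the agent home, which is the multimodal point Grasso draws.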
QUADCOPTER ROBOT COURTESY OF CHELSEA SABO
University of Sheffield evolutionary biologist Marshall is tackling a similar problem as he tries to simulate how honeybees process visual input. As part of a multi-institution collaboration to model the entire honeybee brain on a supercomputer and use the simulation to control two flying robots, Marshall is developing the algorithms that will dictate how the robots see through their camera eyes. While the neural circuitry underlying the olfactory system has been well studied in bees, the vision system has “not been described to anywhere near the same extent,” says Marshall. “So there we’re working much more in the dark,” gathering clues from what’s known about vision in other insect species, including Drosophila and bumblebees, while making logical assumptions from an engineering perspective as well.
Of course, Marshall emphasizes, the critical test will come when the researchers implement the cognitive model they’ve developed in the flying robots—currently being adapted from premade 50-cm X-shaped “quadcopters”—which will communicate wirelessly with the supercomputer running the honeybee brain simulation. (See photograph at right.) “The [model’s] embodiment in the robot is a really important part of the project,” Marshall says. “You get richer sensory information from the real world . . . than you could ever hope to achieve in a simulated world.”
In some sense, the use of robotics in animal-behavior research is not that new. Since the inception of the field of ethology, researchers have been using simple physical models of animals—“dummies”—to examine the social behavior of real animals, and biologists began animating their dummies as soon as technology would allow. “The fundamental problem when you’re studying an interaction between two individuals is that it’s a two-way interaction—you’ve got two players whose behaviors are both variable,” says Gail Patricelli, a behavioral ecologist at the University of California, Davis, who has animated taxidermied bowerbirds and sage grouse to study their courtship behavior. Dummies allow biologists to control one side of the interaction, and robotics is equipping the dummies with ever more-advanced behaviors. (See “Sophisticated Dummies.”)
COCKROACHES COURTESY OF THE LEURRE PROJECT
With the advent of autonomous, sensing-and-reacting robots, however, the introduction of robots into animal societies has taken on a whole different meaning. “The idea here is to build mixed groups of animals and robots that interact [and] show social interaction in the long term,” says Université Paris Diderot researcher José Halloy, who has tested the ability of robotic cockroaches to interact with the real insects.
But building a robot that animals will accept as one of their own is complicated, to say the least. Robots employed to explore theoretical concepts of behavior and cognition don’t necessarily have to look exactly like the animals they’re mimicking. But to develop social relationships with real animals, robots have to look, smell, and act the part at least well enough to fool the research subjects. “It’s a very challenging task . . . to build a device that’s capable of being part of the group,” says Halloy.
So he started simple. While at Université Libre de Bruxelles in Belgium, Halloy and colleagues developed robots that could successfully integrate with a group of cockroaches. Cockroach cognition is relatively straightforward to program, and it takes just a drop of cockroach pheromone to convince real cockroaches that the robot is a member of their species. “What is important in animal-robot interactions is to send the correct signals,” says Halloy. “And in the case of the cockroaches, it doesn’t matter that you look like a cockroach, but it does matter that you smell like a cockroach.”
After programming the robots to select between lighter and darker shelters just as cockroaches do, the researchers allowed them to interact with the real insects. Sometimes they kept the robots programmed with the natural cockroach preference for darker shelters, and watched as the robots seamlessly integrated into the group. Other times, the team programmed the robots to prefer the more well-lit shelters, and even when there were only four robots in a group with 12 cockroaches, the researchers saw a dramatic shift in the group’s behavior: many of the insects would follow the robots into the lighter shelters as a result of live cockroaches’ tendency to aggregate.4 (See photograph above.)
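The tipping dynamic the team observed can be sketched with a simple mean-field model. The parameters here are hypothetical, chosen only for illustration—not the calibrated behavioral model the researchers actually used: each shelter’s attractiveness is its intrinsic appeal multiplied by a crowding bonus, and a fixed contingent of light-preferring robots shifts which equilibrium the group settles into.

```python
def cockroaches_in_light(n_roach=12, n_robot=0, q_light=0.4, q_dark=0.6,
                         k=1.0, steps=100):
    """Deterministic mean-field iteration for a two-shelter choice.

    Attractiveness = intrinsic preference * (1 + k * occupants); the
    robots sit permanently in the light shelter. Returns the expected
    number of cockroaches in the light shelter at equilibrium.
    """
    light = 0.0  # expected cockroaches currently in the light shelter
    for _ in range(steps):
        a_light = q_light * (1 + k * (light + n_robot))
        a_dark = q_dark * (1 + k * (n_roach - light))
        light = n_roach * a_light / (a_light + a_dark)
    return light

# Alone, the cockroaches settle mostly in the dark shelter; add four
# robots that park in the light one, and the majority flips.
print(cockroaches_in_light(n_robot=0))  # a small minority in the light
print(cockroaches_in_light(n_robot=4))  # a majority in the light
```

Because aggregation feeds back on itself, a small planted minority is enough to move the whole group between equilibria—the essence of the “switching the group decision” result.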
The robot is translating a message, Halloy says. “We want to say to the cockroach, ‘Okay, guys, you prefer the dark shelter, but we’d like you to be in the lighter one.’ By programming the robot, [we] were capable of switching the whole group decision.”
A handful of other researchers have also successfully integrated robots with live animals—including fish, ducks, and chickens. There are several notable benefits to intermixing robots and animals; first and foremost, control. “One of the problems when studying behavior is that, of course, it’s very difficult to have control of animals, and so it’s hard for us to interpret fully how they interact with each other,” says Iain Couzin of Princeton University, who has used autonomous fish robots, controlled by a magnet carried by a wheeled robot below the tank, to test responses of sticklebacks and golden shiners to both robotic peers and faux predators.5 “One advantage of employing a robot is you can have control over one or even a number of different individuals within groups, so you can set up scenarios—repeatable scenarios—[to reliably] test the responses of individuals.” (See “Crowd Control,” The Scientist, July 2013.)
If you have machines and animals interacting, then the question is, what kind of collective intelligence can they display?—José Halloy, Université Paris Diderot
Of course, with more discerning species, the task becomes more difficult, says Couzin, who has found that while sticklebacks seem to display fairly natural responses to the faux fish, golden shiners are more sensitive to the acoustic vibrations caused by the robot’s motor. “To convince them that your model fish is really a fish can be quite tricky,” he says.
From an engineering perspective, this challenge raises some fundamental questions. “What do you need to be capable of doing if you want to be a social animal?” Halloy says. “We don’t really understand that fully on the animal side, and we certainly don’t understand that on the artificial side.”
So for his second venture into mixed animal-robot societies, Halloy turned to another well-studied system: imprinting in birds. Inspired by the classic imprinting experiment in which a group of goslings hatched to see Austrian biologist Konrad Lorenz hovering over them, and then proceeded to follow the father of ethology instead of their real mother, Halloy taught hundreds of 8- to 12-hour-old chickens to imprint on a robot that he controlled.
By and large, the experiment seemed to work, but it did prove a more difficult task than getting a group of cockroaches to follow a pheromone-spiked robot into a lit shelter. Most of the baby chickens imprinted strongly; however, for a minority the imprinting failed. In some cases, the chicks were even afraid of the robot—something Halloy hadn’t expected. Nevertheless, the work once again showed that robots can alter the behavior of the group: when robot-imprinted chicks were mixed with chicks that had failed to imprint, the group displayed some interesting dynamics, with the animals getting closer to, then farther from, the robot as they explored the space. Groups of six strongly imprinted chicks had, of course, no problem faithfully following their robot mother.6
Now in Paris, Halloy continues to probe animal group dynamics using robots. In February, he began a project to design robots to interact with schools of fish. The robots will be situated outside of the tank, where they can carry some sort of lure, Halloy says—“a fake fish or anything else that can send signals to the group of fish.” In a parallel project, researchers at the University of Graz in Austria are aiming to do something similar with juvenile honeybees, using a static array of devices that can send a moving signal to the bees.
In addition to the basic science—namely, understanding why animals behave the way they do—another aspect of the research motivates Halloy: “What kind of artificial collective intelligence can the animals produce when they’re interacting with machines?” he wonders. “Are the animals capable of using machines?” It feels a bit like science fiction, Halloy admits, but animals can do plenty of things that robots cannot—most simply, they can effectively navigate complex landscapes. At the same time, robots, equipped with powerful computers and wireless communication technologies, can do many things animals can’t. So, says Halloy, the question is, “Can they do more together than they can do separately?”
CHICKS COURTESY OF JOSE HALLOY, FRANCESCO MONDADA EPFL GROUP
Taylor’s team built their robo-frogs using a rubber model, then hooked the robots up to a machine that pumps air into faux vocal sacs, causing them to inflate like those of a real frog. The researchers can then program the device to coordinate the audio signal from the speakers with inflation of the vocal sac, or to dissociate those stimuli. Sure enough, the timing of the vocal sac inflation, relative to the two notes of the male’s call, “seems to be really important for the females,” says Taylor.
“By creating something like a doppelgänger, a kinematic model, you can try to elicit behaviors in a standardized fashion,” says collaborator Barrett Klein, now at the University of Wisconsin–La Crosse. “And since the research direction has to do with looking at multiple modalities—vision and acoustics at this point, and possibly tactile sensations in the near future—this allows, you could say, an unprecedented level of control into the realm of the impossible.”
“It was moderately successful,” says Patricelli—though “some bees attacked it.” Of course, the honeybee waggle dance is “one of the most complicated forms of communication outside of humans, so it was a pretty hard place to start.”
After setting a robot loose in the lek, Patricelli’s team observed the males’ reactions. As the grouse-bot approached, males that generally were more successful at securing mates increased the frequency of their courtship display—which consists of inflating two air sacs on their necks to produce a loud whooping call—without sacrificing any volume or intensity.9