© 2005 AAAS

The MIT-built learning biped moves based on the principles behind passive "ramp-walking" machines. (From S. Collins et al., Science, 307:1082–5, 2005).

Rodney Brooks has what seem like modest career goals: to achieve the manual dexterity of a 6-year-old and the object-recognition skills of a toddler. But for the moment, his field of robotics remains in its infancy. "Over the past 20 years we've gotten pretty good at navigation," says Brooks, who heads the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory. "But none of the robots manipulates the environment. None of them really understands the objects they encounter outside of the lab."

A toddler, by contrast, can recognize a cup as a cup even if its shape or size changes, while a 6-year-old can tie her own shoelaces. For robotics, reaching those goals will take a tremendous advance in artificial intelligence.


At the moment, few groups have the expertise to create even simple anthropomorphic robots, and up to now, they haven't been communicating. Brooks compares the situation to the early days of computer science: "Everyone had to start from scratch and build his or her own, until they had the idea of sharing languages." He says that robotics may now have reached the same point of enlightenment, with the launch of a European Union-funded project to build a robot from state-of-the-art technology that will be shared openly within the community.

Over the next five years, Giorgio Sandini of the University of Genoa, Italy, will coordinate 16 labs across Europe, the United States, and Japan in the construction of RobotCub, a robot with arms, legs, and a head. Although RobotCub will only crawl to begin with, Sandini hopes it will learn and grow, and perhaps even walk. Researchers will be able to monitor its maturation, gaining insights into human cognition and human-machine interactions.

Most researchers, however, still grapple with basic problems of locomotion. Rolf Pfeifer's group at the Artificial Intelligence Laboratory at the University of Zurich, Switzerland, has constructed a dog-like robot, called RunningDog or Puppy, which runs on a treadmill. They are just now adding pressure sensors to Puppy, so that it can adapt its motion in response to sensory feedback. But Pfeifer says available sensors are inadequate, a major limitation on creating intelligent robots. "If we had the sensors, that would be a quantum leap," he says.

Others have moved to two legs. Researchers announced last month that they had succeeded in building robots that walk using passive dynamics, the principle by which simple machines can descend an incline, often in human-like ways, without actuators or motors.1 Built independently by groups at MIT, Cornell University, and Delft University of Technology in the Netherlands, the three robots are propelled along flat grades with a fraction of the energy required by such technically advanced bipeds as Asimo, Honda's famous demonstration robot. The walker from MIT incorporates a learning program that optimizes movement over various terrains by making random adjustments and measuring efficiency. Such a learning rule, the authors write, might also describe biological motor learning.
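The paper's actual learning rule isn't reproduced here, but the strategy described, random adjustments kept only when measured efficiency improves, is a form of stochastic hill climbing. A minimal sketch, with a made-up quadratic cost standing in for a real measurement of walking efficiency:

```python
import random

def walking_cost(params):
    """Stand-in for a real efficiency measurement (e.g. energy used
    per metre walked); here a toy quadratic with its optimum at 0.5."""
    return sum((p - 0.5) ** 2 for p in params)

def learn_gait(params, steps=200, noise=0.05, seed=0):
    """Stochastic hill climbing: randomly perturb the gait parameters
    and keep a change only if the measured cost improves."""
    rng = random.Random(seed)
    best_cost = walking_cost(params)
    for _ in range(steps):
        trial = [p + rng.gauss(0, noise) for p in params]
        cost = walking_cost(trial)
        if cost < best_cost:  # keep only improvements
            params, best_cost = trial, cost
    return params, best_cost

params, cost = learn_gait([0.0, 1.0, 0.2])
```

On a physical walker, each cost evaluation would be an actual trial walk, which is why such simple, gradient-free rules are attractive: they need no model of the robot's dynamics.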



Top: Courtesy JSK lab, Dept. of Mechano-informatics, The Univ. of Tokyo; Bottom: Courtesy Fumiya Iida, Dept. of Information Technology, Univ. Zurich

Kenta (top), a whole-body tendon-driven humanoid robot with a flexible spine, tracks a moving object based on visual information, using its eyeballs, neck, and spine cooperatively. Puppy I (bottom) is built to recapitulate rapid quadruped motion by exploiting morphological properties.

But even these robots are powered by computer-controlled servomotors, which often produce jerky movements. Pfeifer says researchers are realizing that a better way to power and direct motion is again to follow the human model. "The brain doesn't control the trajectory of the joints; rather, it initiates that trajectory and controls the material properties of the muscles," he says. "By looking at the materials, you reduce the need for computation by orders of magnitude."

The flip side is that recreating human movement might be a key to understanding its neural correlates, an idea reinforced a decade or more ago by the discovery of the mirror neuron system. Mirror neurons fire not only when an individual performs an action, but also when he or she watches someone else perform the same action. Luciano Fadiga of the University of Ferrara, Italy, and colleagues demonstrated the existence of this system in humans.2

This mirror system, Fadiga and others have suggested, is the basis for understanding others' actions and attributing meaning to them. He argues that it makes shared understanding and ultimately communication possible. Mirror neurons have been found in various brain regions, including the Broca region, which is associated with speech.

Fadiga wants to equip RobotCub with a mirror system so that it can learn the same way as a human does. "I think it's important that the two systems proceed with the same speed during development, both the control of our own actions and the understanding of others' actions," he says. The robot will, in turn, allow him to study an analog of the mirror system, removed from the complexity of the human brain.


Minoru Asada of Osaka University, Japan, is a pioneer of the application of developmental psychology to robotics. "In conventional robotics, designers have implemented robot intelligence as a consequence of their own understanding of physics in the external world, so robot intelligence seems superficial. Robots just play back the motor commands as instructed and may not know the meaning of their behaviors," says Asada.

His approach, by contrast, involves designing self-developing structures connected to or inside the robot's brain, which consists of artificial neural networks. The robot is then placed in an environment where it can adapt to increasingly complex tasks.

An example of such a structure would be a plastic tube fitted with five motors that deform it, distorting sounds that pass through. At first, the robot makes random cooing sounds, but if an experimenter parrots back only those coos that match vowels, the robot learns which sounds provoke a real response, that is, which sounds have meaning. "As a result it obtains the category of vowels at the same time as the skills necessary to articulate them," explains Asada.
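Asada's actual architecture uses artificial neural networks, but the feedback loop he describes, random babbling shaped by a caregiver who echoes only the sounds that count, can be sketched in a few lines. Everything here is illustrative: the one-dimensional "articulator" space and the five target values standing in for vowels are assumptions, not details from his system.

```python
import random

# Hypothetical 1-D articulator space; these five positions stand in
# for vowel sounds the caregiver is willing to echo back.
VOWEL_TARGETS = [0.1, 0.3, 0.5, 0.7, 0.9]

def caregiver_echo(sound, tolerance=0.05):
    """Parrot back only the coos that come close to a real vowel."""
    return any(abs(sound - v) < tolerance for v in VOWEL_TARGETS)

def babble_and_learn(trials=5000, seed=1):
    """Random babbling shaped by social feedback: sounds that provoke
    an echo are remembered, so vowel categories emerge together with
    the motor skill needed to produce them."""
    rng = random.Random(seed)
    learned = []
    for _ in range(trials):
        sound = rng.random()       # a random coo
        if caregiver_echo(sound):  # the experimenter responds
            learned.append(sound)
    return learned

sounds = babble_and_learn()
```

The point of the sketch is the selection mechanism: the robot never receives the vowel categories explicitly; they fall out of which of its own random productions provoke a response.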

What these robotics researchers are learning about intelligence could have implications for people. Autistic children, for instance, have delayed language development, but sometimes show a phenomenon called echolalia, where they repeat words spoken to them without apparently understanding them. Giacomo Rizzolatti of the University of Parma, Italy, who discovered mirror neurons in monkeys,3 says that the mirror system may not be properly developed in autistic children.

At the University of Hertfordshire, UK, Kerstin Dautenhahn works with severely autistic children, observing their interactions with robotic dolls that respond and answer back to them. The idea behind her project, called Aurora, is to encourage these children, who barely speak, to acquire basic social skills such as joint attention and imitation, considered essential precursors of language.

It's too early to say whether the method has any therapeutic benefit. But, says Dautenhahn, "What we have shown is that in interaction with the robot, the children show a variety of skills that are unexpected." One little boy, for instance, would dance in front of his toy, and when he left the playroom, he would make a sign to it that his teacher later identified as his personal goodbye sign.

Jürgen Konczak of the University of Minnesota, Minneapolis, studies neural control of movement, and is particularly interested in why diseases of the cerebellum cause ataxia. Although those affected still have the necessary force to perform movements, their movements lack coordination.

Konczak and his colleagues have tested healthy people, as well as patients with a degenerative disease called hereditary cerebellar ataxia, on a task that involved moving a virtual object with a joystick. The joystick was designed so that, every so often, their hand actions were opposed by an unexpected counter force, which they then had to overcome to reach the target. While healthy people learned to adapt their arm movements to this counter force over time, the patients with ataxia could not.4

That suggested to Konczak that the cerebellum might harbor an internal model of limb movements, a model that could be honed through experience to respond to new situations. "The idea of an internal representation of limb dynamics is pure robotics language, but now we use it," he says. In daily life, people constantly have to adapt to novel forces. Robots maneuvering in a new environment must do the same. "In essence, the roboticists are trying to solve the same problem as us," says Konczak. In that way, he says, neuroscience and robotics can guide and inform one another.
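The adaptation the healthy subjects showed can be captured by the simplest possible internal-model update: each trial's movement error nudges an estimate of the unexpected force, so later trials compensate better. This is a toy sketch of trial-by-trial error-driven learning, not Konczak's model; the scalar force and the learning rate are invented for illustration.

```python
def adapt_to_force(field, trials=50, rate=0.5):
    """Build up an internal estimate of an unexpected counter force:
    each trial's residual error updates the estimate, the way a
    cerebellar internal model might be honed by experience."""
    estimate = 0.0
    errors = []
    for _ in range(trials):
        error = field - estimate   # movement misses by this much
        estimate += rate * error   # error-driven model update
        errors.append(abs(error))
    return estimate, errors

estimate, errors = adapt_to_force(field=2.0)
```

With the update switched off (rate = 0), the error never shrinks, which is a crude analog of the ataxia patients, who could not adapt to the counter force.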
