Neural Net Scientists Take Long View

By T.A. Heppenheimer | June 15, 1987

PASADENA, CALIF.—A new approach to pattern recognition and similarly difficult problems, called neural-net computing, is stirring increasing interest among computer scientists. Despite recent reports in the media, however, the approach is far from ready for large-scale applications.

"There's a lot of hype in the field," declared Yaser AbuMostafa, a researcher at California Institute of Technology. "The problem is how to achieve generalizable learning, to extend a computer's experience to new fields."

Technology Venture Investors, of Menlo Park, Calif., is backing a new firm, Synaptics Inc., which features the work of Carver Mead of Caltech and Federico Faggin. But venture capitalist Jim Bochnowski admits that "we might be too early. The field might develop more slowly than we expect."

Neural-net computing is based on a design patterned after the human brain, in which each fundamental element is linked to a large number of other elements. But the scale is far smaller: each neuron in the brain is connected to about 10,000 others, while neural-net computers built so far link only about 100 such elements.
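
The numbers above make the design easy to picture in code. The following is a minimal sketch, not from the article, of a net at the scale described: about 100 simple on/off elements, each weighted to every other, with each element updating itself from the weighted sum of all the others' outputs. Every parameter here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                                   # elements, the scale cited above
W = rng.normal(0.0, 0.1, size=(N, N))     # a weight for every pair of elements
np.fill_diagonal(W, 0.0)                  # no element feeds back on itself

state = rng.choice([-1.0, 1.0], size=N)   # each element is simply on or off

# One update step: every element takes the weighted sum of ALL the other
# elements' outputs -- the dense linkage the article describes.
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1.0               # break exact ties toward "on"

print("final state of first ten elements:", state[:10])
```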

The design contrasts with the better-known parallel processing, which has already reached the marketplace. Although a parallel processor consists of a large number of computing units working together, each of its logic gates is typically connected to only a few close neighbors. The result is a faster version of the conventional computer, one that can tackle more tasks.

The concepts behind neural-net computing have linked computer scientists with researchers from various disciplines, in particular the neurological sciences. The use of optical as well as electronic systems has further broadened the field. The result is significant activity at such places as the California Institute of Technology, MIT, Stanford University and AT&T's Bell Laboratories.

First-Ever Conference

Recognizing the growing interest in the field, the Institute of Electrical and Electronics Engineers (IEEE) is sponsoring the first International Conference on Neural Networks, to be held June 21-24 in San Diego. The organizers see neural-net computing on the brink of reaching the size necessary for its emergence as a true discipline.

A text-to-speech conversion system called NETtalk, devised by Terrence Sejnowski of Johns Hopkins University, illustrates both the promise and the problems of neural-net computing. The system "learns" by being shown a standard printed text along with a letter-by-letter phonetic transcription: it sorts each letter into one of 55 bins representing the standard phonemes, or sounds, of spoken English, then compares its choice with the proper one from the transcription. After several hours of such computer practice, reasonably clear synthesized speech emerges.
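
To make that compare-and-adjust procedure concrete, here is a rough sketch in modern terms. It is emphatically not NETtalk's actual design: the window width, the one-layer softmax classifier (NETtalk used a more elaborate network), the text encoding and the toy labels are all assumptions. Only the 55 phoneme bins and the loop of guessing, comparing with the transcription, and adjusting come from the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHABET = "abcdefghijklmnopqrstuvwxyz "      # assumed input encoding
WINDOW, N_BINS = 7, 55                        # 55 phoneme bins, per the article
D = WINDOW * len(ALPHABET)

W = rng.normal(0.0, 0.01, size=(N_BINS, D))   # connection weights to be learned

def encode(text, i):
    """One-hot encode a WINDOW of letters centered on position i."""
    x = np.zeros(D)
    for k in range(WINDOW):
        j = i + k - WINDOW // 2
        ch = text[j] if 0 <= j < len(text) else " "
        x[k * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return x

def train_step(text, bins, i, lr=0.1):
    """Guess a bin for letter i, compare with the transcription, adjust."""
    x = encode(text, i)
    scores = W @ x
    p = np.exp(scores - scores.max())
    p /= p.sum()                              # the net's current guess
    target = np.zeros(N_BINS)
    target[bins[i]] = 1.0                     # the proper choice
    W[...] += lr * np.outer(target - p, x)    # nudge weights toward it

# A toy "page" with placeholder letter-to-bin labels standing in for a
# real phonetic transcription.
text = "the cat sat on the mat"
bins = [ord(c) % N_BINS for c in text]
for _ in range(200):                          # repeated practice on one page
    for i in range(len(text)):
        train_step(text, bins, i)
```

After this "practice" the classifier handles the page it trained on well, but nothing in the loop guarantees it will do as well on a new page, which is exactly the limitation noted next.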

But the system works well only after repeated practice on the same page; it handles a new page of text much more poorly than other text-to-speech converters, based on older technology, that have been on the market for years.

What is needed, Caltech's Abu-Mostafa noted, is a system that can learn without being programmed, one free of fixed algorithms. "In complex cases such as finding a tree in a picture," he said, "we don't know what the pattern is. The system could proceed only by learning without instructions."

Despite these limitations, a few pioneers have launched neural-net companies. Synaptics Inc., of San Jose, Calif., features the involvement of Mead, co-inventor of very large-scale integration, and Faggin, founder of Zilog and designer of the first microprocessor. Swimming against the flow, Synaptics has attracted $1 million in venture capital. Nestor Co., of Providence, R.I., is offering a system for recognizing handwritten text, including Japanese characters. Its founders include Nobel Prize-winning physicist Leon Cooper of Brown University.

The Defense Department is also supporting work on neural nets through the Strategic Computing program of its Defense Advanced Research Projects Agency (DARPA). The money is funding modest efforts at Texas Instruments, AT&T, IBM, Bendix, TRW and General Electric.

Another milestone may be achieved later this summer. NETtalk's Sejnowski has devised a net that can play high-quality backgammon. He intends, in the weeks following the conference, to match his system against the current world champion, a computer program written by Hans Berliner of Carnegie-Mellon University.

Heppenheimer is a science writer with a Ph.D. in aerospace engineering.
