PASADENA, CALIF.—A new approach to pattern recognition and similarly difficult problems, called neural-net computing, is stirring increasing interest among computer scientists. Despite recent reports in the media, however, the approach is far from ready for large-scale applications.

"There's a lot of hype in the field," declared Yaser AbuMostafa, a researcher at California Institute of Technology. "The problem is how to achieve generalizable learning, to extend a computer's experience to new fields."

Technology Venture Investors, of Menlo Park, Calif., is backing a new firm, Synaptics Inc., that features the work of Carver Mead and Federico Faggin of Caltech. But venture capitalist Jim Bochnowski admits that "we might be too early. The field might develop more slowly than we expect."

Neural-net computing is based on a design patterned after the human brain, in which each fundamental element is linked to a large number of other elements. But the scale is quite a...
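The connectivity the article describes can be sketched in a few lines of code. The sketch below is illustrative only, not drawn from any system named in the article: it builds one fully connected layer in which every element is linked to every input, each link carrying its own adjustable weight (the names `make_layer` and `forward` are this sketch's own).

```python
import math
import random

random.seed(0)

def make_layer(n_inputs, n_units):
    """One fully connected layer: n_units elements, each linked to all n_inputs."""
    return [[random.uniform(-1, 1) for _ in range(n_inputs)]
            for _ in range(n_units)]

def forward(layer, inputs):
    """Each element sums its weighted inputs and squashes the result (sigmoid)."""
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(weights, inputs))))
            for weights in layer]

layer = make_layer(n_inputs=4, n_units=3)   # 3 elements, 4 incoming links each
outputs = forward(layer, [0.5, -0.2, 0.1, 0.9])
print(len(layer) * len(layer[0]))           # total links grow as n_inputs * n_units
```

Even this toy layer shows why scale matters: the number of links grows with the product of the layer sizes, which is the combinatorial burden the researchers quoted here are wrestling with.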
