
Entrepreneur Briefs

The Scientist Staff
The advent of neural networks promised the computing world a machine that could actually think for itself: learn as people do, by extrapolating general rules from a set of examples, rather than being bound step by step by a program. From the start, however, computer scientists recognized a major problem: explaining how neural networks reach their conclusions. Now the Hecht-Nielsen Neurocomputer Co. has announced that it has developed software to open up and shed light on the "black box" that is the typical network's decision-making process. The three-year-old San Diego firm's product, so far unnamed and unpriced but likely to be on the market by the end of the year, is the first capable of developing an audit trail of a neural network's thinking processes, according to David Shlager, vice president for sales and marketing. This would enable users to determine the computer's reasoning, as well as learn the rules by which the computer...
