In Chapter 2 author J.D. Trout highlights the dividing line between truth and scientific “fact.”
June 1, 2016
Psychological Fluency: Natural Kinds, Prototypicality, and Beauty in the Mean
Oxford University Press, May 2016
Science is about special kinds of classes in nature. It is not interested in phony hybrid objects like a salamander 54 miles southeast of the Liberty Bell or elements with atomic numbers that are the sum of pets you have. Instead, science is interested in “carving nature at its joints,” to use the apt Platonic phrase further popularized by Quine. Once a theory carves those joints, it has identified Natural Kinds, objects in nature that play a taxonomic role in a mature, working science. What makes these kinds natural is that it is nature itself, not human practices of categorization, that fixes whether an object belongs in one class or another. Species and crystals, copper and maple trees, and visual transduction and lexical priming are natural kinds. Natural kinds are the very stuff of science, and yet many of them are not the compositionally inert, timeless objects science has promoted as its image. These natural kinds are no more timeless than an evolutionary lineage, no more frozen in composition than isotopic variation and neurally plastic processes allow.
As a psychological description, we recognize natural kinds by looking for their central tendencies, the most representative member of a category or the average of all the members belonging to a category. Psychologists of concepts call this the prototype. The preference for prototypes is robust: people display it for living categories like human faces, as well as fish, dogs, and birds. But the drive is so strong to find the center that humans do it even with categories of nonliving objects, such as color patches, and even artifactual objects, such as furniture, wristwatches, and automobiles. So robust is this drive, in fact, that objects that clearly do not admit of gradations receive this treatment from humans. Whole numbers are one such example: certain odd numbers are reported by people to be “more odd” than others. Who would have guessed that 7 and 13 are the oddest of odd numbers, and 15 and 23 the least odd of them? Or that 8 and 22 are the evenest even numbers, and 30 and 18 the least even?
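The central-tendency idea behind prototypes can be made concrete. A common way to model it treats a category's prototype as the average of its members' feature vectors, with typicality falling off with distance from that average. The sketch below illustrates this; the bird names follow the text's example, but the feature values are hypothetical placeholders, not data from any study.

```python
# Sketch of prototype theory's central-tendency idea (illustrative only):
# a prototype is the component-wise average of category members, and a
# member's typicality is its closeness to that average.

def mean_vector(vectors):
    """Component-wise average of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical bird exemplars: [body size, wingspan, song frequency].
# These numbers are made up for illustration.
birds = {
    "robin":   [0.2, 0.3, 0.8],
    "sparrow": [0.1, 0.2, 0.9],
    "hawk":    [0.6, 0.9, 0.2],
    "ostrich": [1.0, 0.4, 0.0],
}

prototype = mean_vector(list(birds.values()))

# Rank members by typicality: closer to the prototype = more typical.
ranked = sorted(birds, key=lambda name: distance(birds[name], prototype))
print(ranked)  # the ostrich lands farthest from the center here
```

On these toy values the robin sits closest to the prototype and the flightless ostrich farthest, mirroring the graded typicality the text describes.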
This drive leads to all sorts of different prototypes: in the United States, robins and sparrows are prototypical birds, followed by birds of prey, then poultry, and finally, the clumsy flightless ones. There are prototypical molecules (taken as normal compared to their isotopes), as well as prototypes of a biological species, of a disease, or of a mountain range. Years of training make these prototypes present-to-mind, what psychologists call “chronically accessible”—that is, what becomes “prototypical” is that which is easiest for the brain to process. And in science, just as in everyday life, explanatory prototypes that free up processing space in the brain are deemed more attractive, and more accurate, whether or not they actually are. Fluency feels good, and disfluency feels bad.
How Can Fluency’s Sense of Understanding Cause Poor Explanations?
Brain imaging evidence is now a potent part of explanatory prototypes in cognitive neuroscience. A novel series of experiments by Deena Skolnick Weisberg and her colleagues shows that nonexpert consumers of behavioral explanations assign greater standing to explanations that contain neuroscientific details, even if these details provide no additional explanatory value. This “placebic” information produces a potentially misleading sense of intellectual fluency and, consequently, an unreliable sense of understanding. This extraneous information, especially neuroscientific information, gives people a mistaken feeling that they have received a good explanation. But why would placebic information (rather than, say, false or shocking information) create a sense of fluency? This question goes beyond the scope of Weisberg et al., but the conceptual connections are easy to trace. Placebic information has characteristics that promote the feeling of intellectual fluency. The technical vocabulary and causal taxonomy of placebic neuroscientific information might activate conceptual representations contained in “true” psychological and neuroscientific explanations. Thus, irrelevant information can still convey the good feeling of fluency experienced when we assemble and process an explanation.
In their second study, Weisberg et al. created bad explanations: ones that were circular in nature, thus violating a fundamental formal and logical constraint of explanation. Because the bad explanations were just circular restatements of the interesting effects, they were not explanatory. For example, one circular “explanation” for an error in perspective taking states that the error “happens because subjects make more mistakes when they have to judge the knowledge of others. People are much better at judging what they themselves know.” This “explanation” is just a restatement of the initial clause, and it assumes the truth of the very thing it purports to explain—that people are relatively worse at judging what others know. But subjects found the circular explanations more “satisfying” when they contained neuroscientific vocabulary, or “neurobabble.”
These two experiments cross quality of explanation (good vs. bad) with presence of neuroscientific information (with vs. without). Overall, nonexperts found good explanations significantly more satisfying than bad explanations, and explanations with neuroscientific information more satisfying than those without. In addition, bad explanations with neuroscientific detail enjoyed a neurophilic premium; neuroscientific information produced a special boost in perceived accuracy. Neuroscientific vocabulary delivers additional fluency over behavioral explanations, whether actually useful or not.
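The 2×2 logic of the design can be sketched numerically: the “neurophilic premium” is the interaction term, the extra boost neuroscientific detail gives to bad explanations over and above what it gives to good ones. The cell means below are hypothetical placeholders chosen only to illustrate the pattern the text describes, not Weisberg et al.'s actual data.

```python
# Hypothetical mean satisfaction ratings for a 2x2 (quality x neuro) design.
# Placeholder values for illustration; not the study's reported numbers.
ratings = {
    ("good", "without_neuro"): 4.0,
    ("good", "with_neuro"):    4.3,
    ("bad",  "without_neuro"): 2.0,
    ("bad",  "with_neuro"):    3.1,
}

# Simple effect of adding neuroscientific detail, within each quality level.
boost_good = ratings[("good", "with_neuro")] - ratings[("good", "without_neuro")]
boost_bad  = ratings[("bad",  "with_neuro")] - ratings[("bad",  "without_neuro")]

# Interaction (the "neurophilic premium"): neurobabble helps bad
# explanations more than it helps good ones.
premium = boost_bad - boost_good
print(round(premium, 2))  # 0.8 with these placeholder numbers
```

A positive `premium` is what a crossed design reveals that two separate comparisons would not: the neuroscientific boost is not uniform, but concentrated on the bad explanations.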
Most people would like to think that we are not so gullible about bad (circular) explanations dressed up in pretty neuroscientific words—that a little training in cognitive neuroscience would make us critical enough to defeat such seduction. But training in that field does not help much: like novices, students enrolled in an introductory neuroscience class appraised explanations with neuroscientific information as more plausible than those without. Furthermore, bad explanations benefited more from placebic neuroscientific information than did good explanations. Experts were less sensitive overall: they didn’t recognize how bad the non-neuroscientific explanations were, and they were less impressed with the good neuroscientific explanations. Similar moderating effects appear among experts in other domains. Experts might be less hoodwinked by neuroscientific explanations for the same reason that improving a person’s skills reduces their overconfidence about them. Except for truly expert subjects, the conceptual vehicles primed by placebic neuroscientific explanations deliver feelings of intellectual satisfaction. And placebic information is not the sole supplier of fluency; this sense of understanding is conveyed by several well-documented psychological effects, such as the feeling of knowing, the illusion of explanatory depth, and tip-of-the-tongue experiences.
Novices accepted circular explanations because the reductive neuroscientific details sounded so credible. And scientists routinely accepted bad explanations when their arcane vocabulary was used. The lesson of this research, then, is general: explanations, bad and good, are accepted for non-truth-related reasons. This is all the more reason to be cautious about our gut reliance on our sense of understanding.
Excerpted from Wondrous Truths: The Improbable Triumph of Modern Science. Copyright © 2016 by J.D. Trout. Published by Oxford University Press.