Curiosity and the Scientific Method

Apr 3, 2000
Steve Bunk

Graphic: Cathleen Heard


The amazing strides forward in biomedical research over the past two decades, led by an American triumvirate of academia, industry, and government, are not without accompanying concerns. One such worry is that curiosity could become an endangered justification for the conduct of life science. Basking in the sun of its results, biomedical research in particular may risk becoming too results-oriented. Increasingly, universities and teaching hospitals are turning to private funding from companies eager to capitalize quickly on research. Could it be that an overemphasis on applied science--especially through the development of patentable technologies and other profit-making arrangements between academia and industry--might affect the very nature of the scientific method?

Surely, curiosity itself is not endangered. Then why should anyone worry that an apparently subtle shift in the motivation of research might change how science is done? And even if the scientific method were altered, what would be at risk? Before seeking answers, it's important to establish that, for the purposes of this essay, "scientific method" refers to the system by which hypotheses and theories are subjected to experimentation. It does not include the innumerable ways in which concepts come to mind. Those processes belong to the nonrational component of scientific discovery and, as such, are not methodological. But inspiration finds its fulfillment through procedure. Accordingly, a look back to the foundations of the scientific method can show how the motivations of curiosity and practicality interacted with the rise of European universities and specialized learning.

From classical antiquity to medieval times, the learning process was the province of the master-disciple relationship. The great masters of medicine, law, or theology were primarily practitioners. They taught as amateurs, and amateur teaching was therefore more highly esteemed than professional instruction.1 Theory, like scholarship itself, was marginalized, because learning was a means to a practical end. Although the ancient Greeks regarded philosophy as important, it had both practical and moral aims for them. Those emphases were lost under the religious influences of medieval Europe, when philosophy, natural science, and mathematics were not central subjects. Specialization had not yet developed significantly.

Medieval students came from afar to sit at the feet of the masters, but these foreign pupils were not under the dominion of the king, and rowdiness was a problem. In response, corporations for learning were established, sanctioned by leaders of both church and state. Masters and their disciples now constituted a collective body, and by the 13th century, universities had arisen. The professional teacher ascended in wealth and status, and specialization began to blossom. An interest in learning for its own sake awakened, establishing philosophy as the basis of the university student's intellectual culture.

Science emerged as a field derived from philosophy yet independent of it, albeit neither particularly practical nor highly regarded for several more centuries. Philosophy and the classics were considered prerequisites to professional studies, but from the 15th century to the end of the 18th century, science flourished largely outside universities. Mutually beneficial collaborations began between scholarly specialists and artisan-technicians such as Leonardo da Vinci, whose training was through apprenticeships. Thus germinated the scientific partnership of academia and industry.

Induction and Deduction

The method by which science would explore nature likewise evolved. Until the time of Francis Bacon, who died in 1626, the practical problems of life were rarely given as reasons for scientific efforts.2 Bacon championed inductive logic, which basically holds that the weight of observations or experiences can indicate universal truths. In the 18th century, philosopher David Hume clarified the "problem of induction," the question of whether any such inferred truths are justified, or under what conditions. In the first half of the 19th century, French philosopher Auguste Comte founded positivism, later developed by the Vienna Circle of scientists, philosophers, and mathematicians. Its primary tenet is that only what can, in principle, be verified by observation is meaningful.

In the late 19th century, Thomas H. Huxley pointed out that such "verification" doesn't always make a thing true. He noted that many scientific hypotheses are useful for long periods before eventually being proven wrong.3 Six decades later, Karl R. Popper took another swipe at absolute truth. Arguing against the induction that he felt was still widely in use, he described deductive logic, which subjects a hypothesis to empirical testing, as the way to corroborate theory. However, he stressed that such corroboration doesn't make theories true. He thought the best scientists, rather than being concerned with truth, seek statements or systems that can in principle be disproved--the criterion of falsifiability.4

Popper regarded the Vienna Circle's refined form of positivism as part of instrumentalism, a theory suggesting that ideas have value according to their function in human experience or progress. This doctrine allowed church leaders to accept scientific theories as tools for analyzing nature, without conceding they represented real entities that could clash with biblical views.5 Instrumentalism betrays science, Popper declared, by asserting that science cannot offer explanations, and therefore cannot discover the hidden essences of things.

Theory and Experiment

These ideas, and many others aimed at explaining how science proceeds, imply the difficulty of following a purely logical path into the unknown. Scientists take this journey in two ways, by theorizing and by experimentation, but the former is perhaps less tightly circumscribed by the scientific method than the latter. Primarily a cognitive approach, a theory is a relatively rare attempt to go to the heart of a problem. The more common experiment deals with details, and is mostly perceptual. The relationship between the two is dialectical; research is the vehicle for theory development.6

Grounded in facts and logic, theorists take creative leaps of thinking, leaps that nevertheless require courage in the face of the skepticism that characterizes science. They then test their theories by experiment. Experimenters, on the other hand, work within the confines of a theory, exercising their innovation through the choice of a problem and a method of exploring it. For both theorists and experimenters, focusing their attention on something appropriate or potentially useful can unleash the processes of creativity, intuition, or even serendipity that are, in a sense, nonrational.7

In 1962, these two ways of making discoveries, by theory and experiment, received an influential analysis from Thomas S. Kuhn.8 Central among his thoughts was that scientific "paradigms" derive from observation, logical conjecture, and experiment. The subsequent work of "normal science" solves puzzles by adding details to the paradigm, but it also eventually produces anomalies, which can culminate in a crisis that leads to the replacement of the old paradigm by a new one. The new paradigm reverses the perception of its predecessor. For example, heliocentrism was a paradigm shift in astronomy away from Earth-centered theory. In chemistry, the theory that a substance called phlogiston was released during combustion was supplanted by the discovery that oxygen is added during burning. In physics, Einstein's theory of mass-energy equivalence was a shift from the Newtonian conception of mass as a conserved, immutable quantity.

In biology, evolution might have qualified as a Kuhnian paradigm shift, if any scientific theory had preceded it. The Book of Genesis does not lend itself to the process of paradigm, experimentation, anomalies, crisis, and paradigm shift. Even Gregor Mendel's work with peas, which prepared the way for modern genetics, was perhaps not so much a new paradigm as a development of Darwin's ideas about "blended inheritance." Kuhn noted that a more complex developmental pattern applies to biology than to the physical sciences, precisely because of biology's close ties to medieval crafts and institutions.

Nonrational Factors

Whether or not a given theory qualifies as a paradigm shift, it can lead science in new directions, on paths replete with mystery. Subsequent experimentation can only solve puzzles and expose anomalies within that theory. In other words, the concoction of theories and the establishment of their absolute truth are both beyond the grasp of the scientific method. Forceful arguments have even been made that such historical and cultural factors as ideology, ambition, and charismatic leadership significantly infuse the logic of science with the nonrational.9 In this context, government's role in enabling inquiry can suffer from tunnel vision as much as that of industry. The ability of experimenters to recognize and follow promising anomalies likewise can be compromised.

All the more reason to ask: At a time when funding opportunities in biomedical research appear to be increasingly influenced by a priority on practical applications, how certain is it that the current paradigms are the best? For example, is work on pharmaceuticals or genomics being overemphasized at the expense of other research? If the Kuhnian view is accepted, experimentation might eventually produce so many anomalies (perhaps involving environmental, mental, and behavioral factors) that paradigm shifts will occur, leading in unanticipated directions. If so, will enough curiosity-driven science have been funded to prepare researchers for following those new leads?

Ultimately, the applications-oriented trend might not cause harm. But with no oracle to proclaim the future, history must suffice, and it warns against scientists being too much diverted from their most natural motivation, the pursuit of knowledge for its own sake. "Pure" research should be protected and nurtured. It is the inspirational engine, the prime mover of great science.

Steve Bunk (sbunk@uswest.net) is a contributing editor for The Scientist.

References

1. J. Ben-David, The Scientist's Role in Society: A Comparative Study, Englewood Cliffs, N.J., Prentice-Hall, 1971.

2. R. Dubos, The Dreams of Reason: Science and Utopias, New York and London, Columbia University Press, 1961.

3. T.H. Huxley, Method and Results, New York, D. Appleton and Co., 1898.

4. K.R. Popper, The Logic of Scientific Discovery, 2nd ed., New York, Basic Books, 1961.

5. M. Radford, "Psychoanalysis and the science of problem-solving man: an appreciation of Popper's philosophy and a response to Will (1980)," British Journal of Medical Psychology, 56:9-26, 1983.

6. J. Fawcett and F.S. Downs, The Relationship of Theory and Research, 2nd ed., Philadelphia, F.A. Davis Co., 1992.

7. B.T. Eiduson, Scientists: Their Psychological World, New York, Basic Books, 1962.

8. T.S. Kuhn, The Structure of Scientific Revolutions, 2nd ed., Chicago, University of Chicago Press, 1970.

9. W. Broad and N. Wade, Betrayers of the Truth, New York, Simon and Schuster, 1982.