**THE HISTORY OF STATISTICS**

The Measurement of Uncertainty Before 1900.

Stephen M. Stigler.

Harvard University Press, Cambridge, MA, 1986. 427 pp., illus.

$25.

Stephen Stigler is a first-rank research worker in statistics. He is also an indefatigable and insightful historian of statistics, delving without fear into the enormous, and often enormously confused, literature of early statistical thinking. Out of the mist, Stigler extracts a truly satisfying intellectual history.

The origin of modern science in Renaissance Europe lies in a willingness to observe nature directly rather than relying on theological or Platonic notions of how the world should be. In the merciless language of the statistician, sample size n = 1 is superior to n = 0.

By the mid-18th century, samples comprising 20 or 30 independent observations of the same astronomical phenomena were available, and a method of combining observations became necessary. This is the beginning of modern statistical theory. (More exactly it is the beginning of statistical estimation theory; uncombining observations to reveal interesting differences, what statisticians now call hypothesis testing, is a different story. Stigler deals with both histories, but with a definite emphasis on estimation.)

Stigler begins with Legendre's 1805 publication of the method of least squares. Observations are to be combined by choosing values for the unknown coefficients-the objects of interest-that minimize the total squared discrepancy of theory from data. This can be done by solving an easy set of linear relationships (now called the "normal equations"). When there is only one unknown coefficient, the method reduces to taking the average of the observations. How simple. How elegant. And how difficult to arrive at, as Stigler shows by examining the blunders and timid half-steps of the previous century.
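The mechanics Legendre published can be sketched in a few lines. This is a minimal illustration (the data values are invented for the example, not drawn from the book): the normal equations are solved for the coefficients that minimize the total squared discrepancy, and with a single unknown coefficient the answer collapses to the plain average.

```python
import numpy as np

def least_squares(X, y):
    """Solve the normal equations (X^T X) b = X^T y for the coefficients b."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Five repeated observations of one unknown quantity: the design matrix
# is a single column of ones, so the method reduces to the average.
y = np.array([9.8, 10.1, 10.0, 9.9, 10.2])
X = np.ones((len(y), 1))
b = least_squares(X, y)
print(b[0])  # agrees with y.mean()
```

The same `least_squares` call handles any number of unknown coefficients; only the one-coefficient case degenerates to the familiar average.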

The incredibly productive Euler ("If publish or perish were literally true, Euler would still be alive") cannot stomach combining observations at all. Might not the errors in the individual observations add up instead of canceling? In one of the book's most effective passages, Stigler contrasts Euler's mathematical pessimism, "intellectual cold feet" as he calls it, with the more sanguine intuitions of the astronomer-physicists. Errors, at least of the common observational type, do tend to cancel out, and least-squares estimates do tend to be better than the individual data from which they are computed. It took minds of the caliber of Legendre, Gauss and Laplace to reach this conclusion and to found modern statistical theory.

Legendre discovered, or perhaps co-discovered with Gauss, the method of least squares. Gauss showed that the method fits in perfectly with the assumption of normal ("bell-shaped") error curves. But it was Laplace who put the whole story together and who is the early hero of Stigler's narrative. The key step is his central limit theorem, which gives us good reason to believe that the normal error curve applies to real data. Genius does not always display itself in a blinding flash. Laplace's ultimately successful attempts to understand statistical estimation span 50 years of determined effort.
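The force of Laplace's theorem is easy to check numerically: sums of many independent errors come out close to the normal curve whatever the shape of the individual errors. The sketch below is only a simulation with arbitrary choices (uniform errors, 30 per observation), not anything from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100,000 trials, each the sum of 30 independent uniform errors.
errors = rng.uniform(-1, 1, size=(100_000, 30))
sums = errors.sum(axis=1)

# Standardize and compare to the normal curve: about 68.3% of a
# normal distribution falls within one standard deviation.
standardized = (sums - sums.mean()) / sums.std()
frac = np.mean(np.abs(standardized) < 1)
print(frac)  # close to 0.683
```

Thirty is already plenty: the uniform distribution looks nothing like a bell, yet the sums do, which is exactly the reassurance Laplace needed before trusting the normal error curve on real data.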

One of the joys of this book is Stigler's presentation of original data sets. Laplace's 24 four-variate observations on the latitude of Saturn, distilled from 200 years of astronomical experience, give us a concrete appreciation for the situation Laplace faced and for how difficult the problem of reconciling inconsistent linear equations must have seemed in 1788. Galton's amazing kinship diagram of eminent families (famous men's sons are four times more likely to themselves be eminent than famous men's grandsons) is a vivid snapshot of a brilliantly simple idea.

Probability and statistics are natural mathematical tools for dealing with the inherently variable world of the social sciences. This was recognized early on, by Bernoulli and Laplace among many others, but there was very little genuine success to show for a lot of effort until the late 19th century. Galton, the English gentleman scientist, is the hero of this latter story, which makes up the second half of Stigler's book. Quetelet, often portrayed as the founder of modern social statistics, receives a good deal less credit.

Quetelet's main accomplishment, aside from convincing the world of statistics' power to describe social phenomena, was to show that the normal curve, originally derived for astronomical purposes, fit social data (for example men's heights) quite well. In fact, as Stigler points out, it fit everything quite well, and so became useless as a tool for dissecting cause and effect in social phenomena.

Galton's great achievement was to reconcile the ubiquity of the normal curve with an analytical theory that allowed for interesting differences between different populations. For example, tall parents have children who are taller than average (but not as unusually tall as the parents); these children follow a normal curve of heights; and when the children from parents of all heights are aggregated, the ensemble follows another, wider normal curve of heights. All of this fits together in a mathematically satisfying way, which quantitatively assesses the influence of heredity on height.
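Galton's scheme can be seen in a toy simulation. The numbers here (a 69-inch mean, 2.5-inch spread, and a regression coefficient of 0.5) are illustrative assumptions, not Galton's estimates: each child's expected height lies partway between the parent's height and the population mean, with fresh normal variation added.

```python
import numpy as np

rng = np.random.default_rng(0)
mean, sd, r = 69.0, 2.5, 0.5  # assumed population mean, spread, regression coefficient

parents = rng.normal(mean, sd, 100_000)
# Children regress partway toward the mean, plus their own normal variation.
children = mean + r * (parents - mean) + rng.normal(0, sd, parents.size)

tall = parents > mean + sd
print(parents[tall].mean() - mean)   # tall parents are well above average...
print(children[tall].mean() - mean)  # ...their children above average, but less so
print(parents.std(), children.std()) # the aggregate is again normal, a bit wider
```

The simulation shows all three pieces of the reviewer's summary at once: tall parents' children are taller than average yet less extreme than their parents, each parental group's children are normally distributed, and the pooled ensemble of children is again a normal curve.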

Galton's discovery of regression and correlation (originally "co-relation") set the stage for the explosion of modern statistical theory beginning in England in 1900. This is the year Stigler's history ends, so perhaps we can expect, or at least hope for, a sequel.

This book reaches beyond the audience of statisticians, and beyond historians of science, to those who are interested in the idea of information as a quantifiable concept. Computer scientists in particular should be interested in the early struggles of statistical theory, for similar efforts are now being waged in their own arena.

*Bradley Efron is professor of statistics, Stanford University, Stanford, CA 94305*