No to Negative Data

Why I believe findings that disprove a hypothesis are largely not worth publishing.

By Steven Wiley | April 1, 2008

A frequent criticism in biology is that we don't publish our negative data. As a result, the literature has become biased towards papers that favor specific hypotheses (Nature, 422:554-5, 2003). Some scientists have become so concerned about this trend that they have created journals dedicated to publishing negative results (e.g., Journal of Negative Results in Biomedicine). Personally, I don't think they should bother.

I say this because I believe negative results are not worth publishing. Rest assured that I do not include drug studies that show a lack of effectiveness against a specific disease or condition. This type of finding is significant in a societal context, not a scientific one, and we all have a vested interest in seeing this type of result published. I am talking about sets of experimental results that fail to support a particular hypothesis. The problem with these types of negative results is that they don't actually advance science.

Science is a set of ideas that can be supported by observations. A negative result does not support any specific idea; it only tells you what isn't right. Only a small number of potential hypotheses are correct, but an essentially infinite number of ideas are not. I don't want to waste my time reading a paper about what doesn't happen; I'd rather read only about those things that do happen. I can remember a positive result because I can associate it with a specific concept. What do I do with a negative one? It is hard enough to follow the current literature. A flood of negative results would make that task all but impossible.

Although publishing a negative result could potentially save other scientists from repeating an unproductive line of investigation, the likelihood is exceedingly small. The number of laboratories working on the exact same problem is relatively small, and thus the overlap between scientific pursuits at the experimental level is likely to be minuscule. It is a favorite conceit of some young scientists that they are doing the next great experiment, and that if it doesn't work, then the world needs to know. Experience suggests otherwise.

Twenty-five years ago, I tried to publish a paper showing that thrombin did not stimulate cells by binding to its receptor. Using a combination of computer models and experiments, I showed that the receptor hypothesis was clearly wrong. The paper detailing this negative result was emphatically rejected by all journals. I was convinced that the status quo was threatened by my contrary finding. However, what I failed to do was replace a hypothesis that was wrong with one that was correct.

Negative results can also be biased and misleading in their own way, and are often the product of experimental errors rather than true findings. I have fielded questions from investigators who could not reproduce my results because they lacked a critical reagent or culture condition. Similarly, I have occasionally been unable to reproduce the results of other scientists, but I don't automatically assume they are wrong. Experimental biology can be tricky, and consistently obtaining results that support a hypothesis can be challenging. It's much easier to get a negative result and mistake a technical error for a true finding.

Although I believe negative findings do not merit publication, they are the foundation of experimental biology. Positive findings are always built from a vastly greater number of negative results that were discarded along the way to publication. And certainly, if scientists feel pressure to publish positive data, it stands to reason that some of those positive data are wrong. The solution to that bias is to treat published results more skeptically. For example, we should consider all published reports the same way we consider microarray data. They are useful in the aggregate, but you should not pay much attention to an individual result.

Even if literature bias exists regarding a particular hypothesis, positive results that are wrong eventually suffer the fate of all scientific errors: They are forgotten because they are dead ends. Unless new ideas can lead to a continuous series of productive studies, they are abandoned. The erroneous thrombin receptor hypothesis that I tried so hard to disprove was rapidly abandoned several years later when the correct model was introduced (thrombin activates its receptor by clipping a specific protein).

Steven Wiley is a Pacific Northwest National Laboratory Fellow and director of PNNL's Biomolecular Systems Initiative.

Comments

April 2, 2008

While I do agree with the author that a flood of negative results would be a bit overwhelming, it is hard to deny that a rather large sum of money could be saved by publishing negative results. The author contends that only a few labs are working on the same thing (so there isn't a need for negative results); I disagree. Though we do tie a concept to remembering how something works, the biological sciences are so intricate that it's possible to theorize a plethora of possibilities that all seem conceptually plausible. Knowing how something doesn't work is sometimes as important as knowing how it does. Thus, publishing well-thought-out experiments that yield "negative" results still leads to the answer. It can save money on experiments, and potentially lead a research group to the "positive" result more quickly. And this is beneficial to everyone.
anonymous poster

April 2, 2008

I do not agree with this article at all. While I agree that negative results do not contribute to the overall hypothesis a paper may be trying to prove, they do give readers an idea of what not to do. In this way, publication of negative results helps us avoid redundancy in scientific research. It would also save vast amounts of money by reducing research on approaches that have already been shown not to work. By not publishing negatives, we forfeit the chance to avoid redundant research.
Jean-Luc Lebrun

April 3, 2008

A dead end is only a dead end when it has been peer-reviewed as such. Is it worth peer-reviewing dead ends?

I agree that it is more helpful to point in the right direction than to placard an entry with a dead-end sign, but I regret that the brevity of scientific papers rarely allows writers to share their failures alongside their successes. Both are instructive, in particular the path leading from failure to success.
bjoern brembs

April 3, 2008

Ouch!

There is no such thing as "correct" or "right" in science. I suggest the author take Epistemology 101 and start over.

Science only advances when we disprove a hypothesis. Hypotheses can never be proven.

I suggest the author read:
http://en.wikipedia.org/wiki/Karl_Popper
http://www.amazon.com/Objective-Knowledge-Evolutionary-Karl-Popper/dp/0198750242

and delete this embarrassing article.
Stephen Hopkins

April 3, 2008

Sometimes I despair, and this article does it again for me. Firstly, the author should know that it is not possible to prove a hypothesis. The most common qualification in science is doctor of philosophy, and I can only think we need to put more philosophy in our science training. Sadly, lots of scientists, perhaps in the biosciences in particular, seem to spend a lot of time designing experiments to support their pet hypotheses, rather than identifying weaknesses and bringing us closer to the truth. I've certainly spent (wasted) my fair share of time chasing dodgy hypotheses and positive data. Selective use of data to support hypotheses, and turning a blind eye to experiments that "failed", may not quite count as fraud, but in my mind comes a pretty close second, and I believe this is encouraged by discounting negative data. The tendency to favour positive data, even when not well supported, has occasionally led some of the most prestigious journals to get egg on their faces. We know that in biology the use of an arbitrary p<0.05 means that a good number of positive experiments are bound to be wrong. In some less statistically rigorous journals and areas of biology, where multiple and post-hoc testing of the data seems allowed, this will occur even more often. As has been documented in this journal, in genetics (and perhaps in systems biology), or other frequently hypothesis-free areas, "positive" data seem more often disproved than not in subsequent studies. In these cases the negative data can be more valuable than the initial time-wasting positive data. One point I almost agree on, however: if the author really did disprove a hypothesis, there certainly was a responsibility on him, but it was to replace a hypothesis that was wrong with one that may be correct, not one that "was" correct: you could never prove it.
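
The commenter's point about p<0.05 is easy to make concrete with a quick simulation. This is a minimal sketch, not taken from the comment: the 10% base rate of true hypotheses and the 50% statistical power are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_studies = 100_000
base_rate = 0.10  # assumed fraction of tested hypotheses that are actually true
alpha = 0.05      # conventional significance threshold
power = 0.50      # assumed chance a study detects a true effect

is_true = rng.random(n_studies) < base_rate
detect = rng.random(n_studies)
# A study is "positive" if a real effect is detected (probability = power)
# or a null effect crosses the threshold by chance (probability = alpha).
positive = np.where(is_true, detect < power, detect < alpha)

share_wrong = (positive & ~is_true).sum() / positive.sum()
print(f"Share of positive results that are false: {share_wrong:.0%}")
# With these illustrative numbers, roughly 45-50% of positives are false,
# even though every study used the conventional p<0.05 threshold.
```

Under these assumed numbers, nearly half of the published "positives" would be wrong, which is the arithmetic behind the commenter's worry.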
anonymous poster

April 3, 2008

"...they [negative findings] are the foundation of experimental biology. Positive findings are always built from a vastly greater number of negative results that were discarded along the way to publication." Why should something that in the authors own words is so important to the scientific process be hidden?\n\nIn my opinion there are a couple of reasons, why making 'negative' data publicly available - not necessarily publish - is important:\n\n* if they are part of the process to come to a scientific conclusion, they are just as important as any other data. Just as the X-ray crystallographers deposit the zero intensities along with the positive ones.\n\n* building hypotheses and falsifying them is central to process of generating scientific knowledge. Hiding the falsified ones from the community slows the process. And I believe there is considerable overlap on what people do.\n\n* what is now considered 'negative' data, maybe negative, because we didn't ask the right question. From the authors point of view "drug studies that show a lack of effectiveness" have only a societal context. Actually a nice example, why I think, it is scientifically important to keep a depository of negative results as well. ADME/Tox is the keyword in this case. A bit more than a decade ago researchers in pharma had learned to use data-mining tools to predict activity, but when they tried to use the same strategy towards ADME/T they found out there isn't any data to work with. Data worth a couple of decades of scientific work had gone to the waste bin without an undelete button, because somebody considered it negative data at the time.\n\n
Kevin Healey

April 3, 2008

It's not uncommon to see papers of the "Food ingredient is harmful!" ilk, and yet the daily intake used bears no relationship to conventional usage patterns. It is valid to test high doses for toxicological purposes, and yet the results become distorted in the popular media, which fail to realise that just about anything is toxic in high doses. The scientists who publish the studies know there is little point in submitting an article which shows no harm. Regards, Kevin.
John Horsfall

April 4, 2008

The author seems to be rather innocent about the way science really works. All results are about whittling down among a set of alternatives. Both the so-called "positives" and the so-called "negatives" therefore achieve the same thing; very few results are utterly decisive, and we need to see the weight of evidence unfolding before we change our models. The unarguable traditional bias towards "positives" exists because paper-based publishing forced economic decisions to be made. As more democratic and cheaper avenues become available, working scientists will evolve the tools to summarize the pluses and minuses, and editors (if they still exist) will be more willing to publish the latter. Publication of his negative result in the thrombin study may well have saved other labs a great deal of effort, as well as sped up the approach to the "truth"!

April 6, 2008

I'm not sure how I feel about publishing negative data, but one thing I am certain about is that if I want to prove that all ravens are black, then I must record the presence of black ravens. Let's say I do that and I report my findings in the peer-reviewed literature. Several colleagues in Australia pick up on the theme and record more black ravens. Colleagues in America do the same. All looks good for the black-ravens hypothesis. Colleagues in Korea record black ravens as well and come up with a good mechanistic account of why ravens must be black; it is what they are. Then someone finds a white raven and it appears on the front page of Science. "White raven disproves black raven hypothesis" screams the cover. Except it doesn't. Maybe that white raven is a dove. Or maybe it is a black raven that fell into a pot of white paint. Or something else.

The point is that science accumulates positive findings and gradually acquires authority. We don't blithely dismiss what is understood because of one negative finding. If that were our approach to science, then clinical trials showing the efficacy of homeopathy would lead to an abandonment of everything we know about chemistry, physics and biology.

Science moves forward through the accumulation of knowledge that is accepted as true because further testing bears it out - the earth is round and exerts a gravitational pull that supports the orbit of satellites; DNA is a double helix that supports self-replication; electrons form a standing wave around a dense nucleus that supports matter. Okay, we are still working that last one out, but you get the idea.

Popper hated all that and preferred to pour scorn upon authority and knowledge by relativising truth. His falsification doctrine is the result of his profound disdain for humanity and science.
SUI HUANG

April 8, 2008

This discussion is moot without defining what the hypothesis is in an individual case. The relation to truth of the attributes POSITIVE or NEGATIVE depends on the definition of the hypothesis.

A NEGATIVE result does not mean a WRONG fact, nor does a POSITIVE result establish the TRUTH. It depends on how one defines the (hypo)thesis: one can prove the hypothesis, or, for Popperians, better, one can disprove a hypothesis. But one can also DISPROVE the ANTITHESIS in order to PROVE the THESIS.

Hence, I can try to prove that the earth is a sphere, and a negative result could support the flat-earth hypothesis. But I can also try to prove that the earth is flat, and my negative result means that I DISPROVED the flat-earth idea, and thus I can believe that the earth is perhaps spherical. By turning proving a thesis into disproving the antithesis, I superficially satisfy the (overrated) Popperian requirement to disprove a hypothesis. In this case the NEGATIVE result is the one that matters more.
Cecily Bishop

April 8, 2008

I agree with the comments posted for this ignorant article. Data are considered negative when they disprove the hypothesis, but a hypothesis is a guess (an educated guess, but a guess nonetheless). All guesses are wrong at least 50% of the time. At one time science was sure that sperm contained tiny humans that were inserted into the uterus. When a paper reports failure to support a particular hypothesis, or that a particular model is not useful in investigating a hypothesis, it does further science by helping other investigators not to waste time, money, and effort pursuing similar avenues.
anonymous poster

April 8, 2008

It is interesting how often the reporting of negative data is discouraged - even by dissertation advisors! In fact, just recently, I analyzed data and all of the findings based on the data collected suggested that our hypothesis was completely wrong.

While initially one of my researcher colleagues wanted to trash the data, I insisted on publishing them. Luckily the third coauthor agreed, and since then many researchers we have talked to have recommended publication; we are now in the process of doing just that.

I believe that a "no finding" is VERY important to publish. If we only publish support for our hypotheses, we

1) stand the chance of formulating our hypotheses based on what we find - and that is not scientific research, or

2) dig until we find what we were looking for - and that is shameful science.

Thus, to be professional, we really should publish our negative findings as well as the positive.

What we need to change is the peer review process!
James coyne

April 8, 2008

If this was meant as an April Fools' joke, it falls flat. But why else would the author display such ignorance?
Ellen Hunt

April 8, 2008

First, in the modern scientific world, investigators do not expect to read all the literature on their subject. As this author notes, it is such an avalanche that doing so is impossible. But investigators DO use search facilities, from PubMed and Medline to Google Scholar, to find relevant articles on their subject of investigation.

Second, in some cases negative results are relevant to saving lives. When a lot of money is going into something that has commercial backing and something is found wrong with an assay, instrument or idea, negative publishing is the only way to handle it. Once published in peer review, it can be quoted without fear of lawsuits from corporate attorneys - just as a for instance.

Last, the writer apparently does not realize that in the brave new world we live in, scientists can go to an article (such as in the Journal of Negative Results) and post a comment. If it holds water, the editor will let it be published as a comment/criticism of the paper.
STEVEN WILEY

April 8, 2008

No, this is not an April Fools' joke, and most of my friends consider me relatively intelligent! ;) I am amused by the flaming responses that seem to object to the title of my article rather than its content (which is actually relatively limited and mild). It is clear that the issue of negative data triggers a gut reaction in most biologists. I think that is because most of our results are negative, and it frustrates us to think that no one cares about that. The reality, however, is that we don't. The journals that were formed to publish negative data have struggled because no one will submit articles. Clearly, biologists as a whole do not feel it is worth our time to write up such articles, but we object when someone points out that it is OK to feel that way!

The point I was trying to make is that negative data need to be integrated into biological knowledge at the level of the individual investigator. It is OUR responsibility to filter all of our experimental data and to present a coherent story regarding biological processes. If we fudge results and misrepresent data, then shame on us! Flooding the literature with negative observations, however, is not helpful. By their nature, hypotheses supported by experimental data (e.g. positive results) are far more useful than observations inconsistent with any hypothesis (e.g. negative results). I agree that my work would have greatly benefited from knowing about potential blind alleys beforehand, but I am skeptical that 1) other scientists went down the same blind alleys and 2) they would (or should) have taken the time to write them up into coherent scientific articles.
anonymous poster

April 8, 2008

Hard to believe the author is a scientist.

Science is defined by the pursuit of unbiased knowledge, regardless of whether it fits our own man-made hypotheses or not.

YES to Negative Data, regardless of what that entails!
anonymous poster

April 8, 2008

As a member of the patent community, I must comment that published negative data is invaluable in arguing the nonobviousness of an invention. Since the Supreme Court's decision last year in KSR v Teleflex, the bar to patentability has been raised. We need all the ammunition we can get. Publish those negative data!
Rhodri Harfoot

April 8, 2008

I think what the author is trying to say is that, instead of just saying "we had a negative result", you should come up with an alternate hypothesis that your data and published data may also fit. Alternatives, not negatives.
Bart Janssen

April 8, 2008

I can sympathise with the author - too many papers to read, review and synthesise - but he is making a blanket statement in a field where all data should be properly reviewed.

By saying no to negative data, he is prejudging the data, and that is the mistake he has made.

The power of quality science is that hypotheses can be and are tested. Positive data are usually easier to interpret, as the author pointed out, but inherently they are of less value in generating new and more advanced hypotheses.

It is harder to properly assess data that challenge the accepted dogma. It takes more talent and effort on the part of reviewers, editors and readers. But the value of quality negative data is that they allow new hypotheses to be considered and developed.

The history of science is marked more indelibly by data that challenged the accepted hypotheses and resulted in a change to the dogma than by "positive data" that merely supported them.

Don't say "no" to either positive or negative data; instead apply good review and editorial standards, assess the data for what they are, then make your judgement.

cheers
Bart
Ruth Rosin

April 8, 2008

Contrary to the author, negative results must be published - of course, after carefully checking for errors or equipment malfunction!

If the author recommends otherwise, he should study the history, as well as the current status, of the 1973 Nobel-winning "discovery" of the "instinctive", genetically predetermined, "amazing" honeybee "dance language" (DL), which concerns the very foundations of the whole field of psychobiology, i.e. the problem of the existence of "instincts", and what to incorporate instead if you kick "instinct" out.

The "discovery" of the honeybee DL was first published by K. v. Frisch in a scientific journal in 1946, and all too quickly became a revered ruling paradigm. Except that, in spite of an almost endless number of futile attempts by very many scientists all over the world (at an incredible waste of time, talent, and financial resources), no one, including v. Frisch himself, has ever been able to achieve the required experimental confirmation for the existence of such a DL.

This is not surprising, because v. Frisch's DL hypothesis (which claims that honeybee recruits obtain and use spatial information contained in foragers' dances about the location of the source visited by the foragers, to help them find the source on their own) was stillborn. He had correctly concluded, following his first study on honeybee recruitment (published in a very extensive summary in 1923), that recruits use only odor, and NO information about the location of any source. He held on to that conclusion until 1943, in the midst of WWII, when he erroneously began to conclude that it was an error; which it never was. Moreover, the results of his first study on honeybee recruitment already grossly contradicted his later DL hypothesis long before its inception (in terms of the expectations from round dances).

Shortly after WWII he published the "discovery" of the "amazing" honeybee DL. He (as well as very many others) innocently, but quite erroneously, believed that such a DL simply had to exist to explain the adaptive value of the presumably "instinctive" honeybee dances, and to avoid a severe crisis in the Theory of Evolution, the most important ruling paradigm over the whole field of biology. Consequently, even though he repeatedly mentioned that originally he had believed that recruits use only odor, he did not mention the results of his first study on honeybee recruitment, which fully justified that initial conclusion, but suppressed them. In his definitive 1967 book on the honeybee DL (translated from the German 1965 edition), he published instead the results of new tests (actually done in 1962), using round dances and a drastically different experimental design than that used in his first study. This time the results fit the expectations from the DL hypothesis.

This was none other than outright scientific fraud, albeit committed with the noble but misguided intention of saving the Theory of Evolution from an imaginary crisis it never faced.

Thus, when Wenner and his team discovered, and published in 1967, the finding that honeybee recruits use only odor, they actually unknowingly rediscovered what v. Frisch himself had already discovered and published in the early '20s of the last century. Wenner and his team were "rewarded" by being quickly turned into pariahs!

Six years later v. Frisch was awarded the Nobel Prize for the "discovery" of the "instinctive" honeybee DL, jointly with Lorenz and Tinbergen, the co-founders of a general approach to psychobiology that is based on the belief in the existence of "instincts".

DL opponents could not have known of v. Frisch's suppressed results, because he repeatedly claimed to have experimentally confirmed his sensational DL hypothesis, which naturally led DL opponents to examine only his evidence for those claims, but not to examine any of his pre-DL publications, where they knew such evidence could not be found. Eventually I accidentally stumbled on v. Frisch's first study on honeybee recruitment, and his suppressed results, in a little-known article of his included in the Annual Report of the Smithsonian Institution in the U.S., published in 1939. That article by v. Frisch turned out to be a reprint of his British 1937 publication in Science Progress, based on a guest lecture he had delivered at University College London, summarizing his whole honeybee research up to that time. I published the "find" in J. theoret. Biol. in 1980. But I was completely ignored. Another reprint was published, with an introduction by Wenner, in Bee World in 1993. But this was also completely ignored.

Thus, as late as 2005, we still get a study by Riley et al., published in Nature, in which the authors express the hope that their results (obtained by studying only bees that never found their foragers' food source) would be accepted as a vindication of v. Frisch's DL hypothesis. It is IN PRINCIPLE impossible to determine how honeybee recruits find their foragers' food source by studying only bees that never found it. But you would not even know that none of the 36 bees for which radar-tracked flights are provided ever found their foragers' feeder, because two of those bees are reported to have "found the feeder". Only after I contacted the authors with requests for additional information was I informed by one of the authors (Greggers) that those two bees actually found and alighted on the feeder's stand (a chair), but never found the feeder itself.

This is what can happen due to suppression of negative results - which had, in this specific case, originally been repeatedly published!!!
HANS BERGMANS

April 8, 2008

Treat yourself to a course in philosophy of science, or get your head examined.
Michael Morris

April 8, 2008

The author is correct in one sense: unfettered publication of negative data will serve little purpose. However, there are many exceptions of great importance.

(i) The author nominates one himself regarding negative results obtained in drug studies. The author's comment that "This type of finding is significant in a societal context, not a scientific one" does not ring true.

(ii) Retractions. These are usually admissions that positive results are in fact negative results. We have all seen enough of these in recent years to realise their significance and importance.

(iii) Some of the greatest scientific experiments ever performed have been based on negative results. The Michelson-Morley experiment disproving the presence of the aether is one of many classic examples. The hypothesis concerning prions led to a host of papers (very boring but necessary papers) showing that DNA was not involved.

(iv) Finally, most papers contain negative results, because frequently experiments are performed to disprove alternatives to the hypothesis being touted. Is it possible that this is so much part of the fabric of scientific publishing that the author has failed to see it?

In summary, publication of negative results is part of everyday science. Papers dealing exclusively with negative results aren't common, nor should they be, but they are nevertheless, in selected cases, crucial to the advancement of science.

April 9, 2008

I regret to disagree, but I think that negative data must be published, for more than one reason:

1) They prevent researchers from repeating the same experiment. If I know that you already did it, I will not waste my time repeating the experiment.

2) They give clues about which hypotheses may be correct, and which may not. If I am trying to build a hypothesis about a specific problem, I want to know what data are already out there that may disprove it.

3) The author correctly gives an example (failure to demonstrate drug effectiveness against a specific disease) where negative data are important; there are more such examples, and who is going to write an all-inclusive list of all cases where negative data are useful?

4) Last but not least, if we do not publish negative data, a publication bias clearly occurs (on a specific problem, we know only those experiments that confirm the hypothesis, so we may wrongly support it).

Negative data may not advance science, but they surely help it. I think that they should definitely be published.
THOMAS DECOURSEY

April 9, 2008

Steven Wiley's column espousing the view that negative data are not worth publishing is wrong. Or perhaps it is simply ill-defined. What he describes is testing a hypothesis in new ways, showing it to be unsupported, and publishing this result. Defined this narrowly, his complaint may have some merit. However, I do not think this is what most people think of as negative data. And it is obvious that he has never worked in an area in which there was significant controversy, although I cannot imagine that an area without controversy exists! Negative data MUST be published if science is to progress.

In general, science advances by a process of proposing a hypothesis, testing it, and then supporting, modifying, or disproving it. If we refuse to publish data that disprove a hypothesis (negative data), then science is stuck with the erroneous hypothesis. Wiley somehow fails to see the illogic in his outlandish remark, "Although I believe negative findings do not merit publication, they are the foundation of experimental biology." He says "positive results that are wrong eventually suffer the fate of all scientific errors: They are forgotten because they are dead ends." How can they be forgotten if we are not allowed to publish evidence of their wrongness? Wiley's solution is "to treat published results more skeptically... We should consider all published reports the same way we consider microarray data. They are useful in the aggregate, but you should not pay much attention to an individual result." This sets a very low standard indeed! Perhaps we should do away with peer review, and simply let anyone publish anything. Who cares if it is true or not? Or maybe we should let anyone with an opinion vote on what the "right" answer is, and forget about experimental research altogether. Of course, in the early days, Peter Mitchell had very few supporters, and the voting method would have kept us looking for that pesky high-energy intermediate to this day! I believe that most scientists actually care whether what they publish is true. Most try very hard to build a reputation that their results can be reproduced and can be trusted. Probably the nastiest outcome of the Wiley approach is that unethical or incompetent researchers will get their erroneous (but novel and exciting!) results published in high-profile journals, whereas those of us poor sods who are constrained by reality and actual data will be unable to publish anything. Wiley will not let our negative results be published, and even our "positive" data will not be accepted, because reviewers will say that they contradict the erroneous hypothesis based on incorrect data that was just published in a high-profile journal! How can science progress in this scenario?

Wiley says, "Although publishing a negative result could potentially save other scientists from repeating an unproductive line of investigation, the likelihood is exceedingly small." This is simply preposterous. I have published "negative data" that contradicted a prevailing hypothesis, and received unsolicited thank-you letters from scientists who had planned to expend major effort pursuing the hypothesis I had disproved. One was a Ph.D. student who may well have squandered years, and possibly left science altogether in disgust, had he continued his plan to base his dissertation project on the erroneous hypothesis. There is typically little positive feedback for those who publish negative data, because they may make enemies, and even if they are found to be correct, their papers are rapidly forgotten because the field is enabled to move forward into other, more promising areas. But first it is necessary to correct the error!

Another, more important factor that Wiley ignores is that when a wrong hypothesis is published, especially in a high-profile journal (and you know the ones of which I speak!), it spawns numerous other "me-too" papers whose logical framework is based on that theory. Many studies will be designed on the basis of a fictional view of reality, even though these studies are not specifically designed to test the hypothesis. I once reviewed a manuscript whose authors interpreted their data in terms of a model that was proposed in a recently published Nature paper. Despite the fact that their own data contradicted this high-profile hypothesis, they twisted their data into the wrong model, with appalling consequences. The propagation of errors in science is bad enough already, but if Wiley has his way, errors will compound and spread like cancer, and empirical science will grind to a pathetic halt.

Wiley continues, "The number of laboratories working on the exact same problem is relatively small, and thus the overlap between scientific pursuits at the experimental level is likely to be minuscule." Also preposterous! In fact, most researchers are chronically worried that someone else will do the same experiment and publish the result first. In most fields, there are many labs pursuing similar or identical problems using similar or identical methodology. The only motivation for not publishing a negative result in such a situation might be to sabotage one's competitors by allowing them to waste time and money pursuing a pathway that you know to be wrong!

I fail to see how Wiley imagines that errors can ever be corrected. Are we supposed to depend on the grapevine to learn which results cannot be repeated? So we submit a grant proposal and it is rejected because the Study Section has a member who knows the secret that hypothesis X is baseless and irreproducible, although no one could publish this result? Do we simply live for decades with competing hypotheses, each of which has a steadily increasing number of studies in support, because none has ever been contradicted in print? I see Wiley's proposal as a recipe for disaster.
James Ketchum

April 9, 2008

Some years ago I wrote a letter advocating the publishing of properly designed, reputable studies that produce negative or non-significant results. It was published with the editor's concurrence and support.

Many products are sold with the backing of one or two selected publications, even though many other studies done with equal skill showed no benefit or no superior benefit from the product. I often check Google and find that a convincing article is not valid in the opinion of several other good scientists.

Head-to-head studies of comparable drugs, for example, often conclude that there is no demonstrable difference in effectiveness. Haldol was recently shown to be as effective as (and cheaper than) the newer, expensive antipsychotics, although its side-effect profile might be not quite as good. The same goes for antidepressants, which are constantly being advertised with graphs showing them to be better than a competitor's product. I'm sure the pharmaceutical companies are delighted to know that the average clinician will often try another drug when the first one fails, once he reads the claims of drug number two.

Good studies with plausible hypotheses that yield negative findings are as deserving of being cited in one's CV as those that show positive findings.

James S. Ketchum, MD
null null

April 9, 2008

As one reviewer said, the author could do with a good course in philosophy. He could also do with a course in humility, and another in statistics.

While I haven't read ALL the comments, there seems to be a strong bias in both the original article and the comments towards medical science. Medical science is not ALL of science.

More importantly, however, there seems to be no recognition of the difference between type 1 and type 2 errors as triggers for the need to publish on a hypothesis. Clearly, generating data that fail to support a hypothesis does not disprove it. And publishing such data can still be of value in saving others the time to redo the experiment; added to other work in a meta-analysis, it may lead to more statistically robust (and arguably more objective) results. On the other hand, generating data or observations that can disprove a hypothesis is clearly of interest and will generally involve the presentation of an alternative hypothesis - thus progressing science.
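
The meta-analysis point is easy to quantify. A minimal sketch, with an assumed small effect (0.2 SD) and ten studies of 50 subjects per group (numbers chosen purely for illustration), compares the power of a single study against a pooled analysis:

```python
import numpy as np
from scipy import stats

d = 0.2  # assumed true effect size (in SD units)
n = 50   # subjects per group in each individual study
k = 10   # number of similar studies available for pooling
z_crit = stats.norm.ppf(0.975)  # two-sided test at alpha = 0.05

def power(n_per_group: int) -> float:
    # Power of a two-sample z-test: P(detect) given the effect is real.
    return 1 - stats.norm.cdf(z_crit - d * np.sqrt(n_per_group / 2))

print(f"Power of a single study:       {power(n):.0%}")      # ~17%
print(f"Power pooling all {k} studies: {power(k * n):.0%}")  # ~89%
# Each study alone will usually come up "negative", yet the pooled data
# detect the effect reliably - but only if the negative studies are
# available to pool in the first place.
```

Under these assumptions, most individual studies would report a negative result even though the effect is real, which is exactly why unpublished negatives cripple meta-analysis.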
anonymous poster

April 10, 2008

"Negative results can also be biased and misleading in their own way, and are often the result of experimental errors, rather than true findings."\nAnd positive results are never the result of experimental error, or fiddling or keeping on doing it until you get the answer you want, however erroneous? Apparently only experiments that don't give the "right" answer are subject to error. Go look at the list of fraudulent and wonderfully positive results that have been published in high profile journals over the last five years then come back and tell us that negative results should not be published.
Bill Todd

April 14, 2008

In his article, Wiley says, "However, what I failed to do was replace a hypothesis that was wrong with one that was correct."

What's wrong with saying, "Gee, we just don't know" every now and then? To me this is one of the essential differences between Science and Faith: that Science is willing to admit that there are things we don't know.

Even after Wiley had his data pulled together, it was only "several years later" that the thrombin receptor idea was abandoned - several years that could easily have been spent on other areas of research if good scientists hadn't still been walking down that dead end.

April 21, 2008

Just a few comments. I wrote most of this in response to the print article (I rarely feel compelled to do so), before being duly warned that there was an ongoing online discussion. At any rate, like many others here, I have profound disagreements with the column, not only because of its "positivistic" tone, but also, more indirectly, about what it seems to define as science worthy of attention.

Wiley's position is, in essence, that science will sort itself out in the end, and thus publishing negative data is a waste of time, given that negative results usually arise from technical errors. The exception might be "drug studies that show lack of effectiveness towards a specific disease", but this is a negative result "significant in a societal context, not a scientific one". First of all, this "exception" sounds a bit odd: surely, presuming the studies were carried out because a benefit was suggested by a previous analysis (be it a computer model, a culture dish, model animals, or a selected human population), the lack of a result is also a scientific problem, at least inasmuch as it questions the studies' premises and the predictive limitations of different models. The label "societal" seems clearly to be derogatory here, and I find that unsettling, and not very humble. Secondly, it can be just about as easy to get a (false) positive result from a technical error as it is to get a (false) negative one; most techniques cut both ways. Thirdly, given the general positivistic bias of scientists and journals (and science students, and the media, and even the population for that matter), there can be great value in publishing negative data, especially in fast-paced, charged areas, since it speaks to the issue of reproducibility. For example, issues such as the possibility of the presence of renewing germ cells in the adult mammalian ovary, or whether hematopoietic cells can repair other tissues, would not have been discussed so richly from a scientific standpoint if only the initial (positive) results had been published. Another, less savory, aspect is that in the fields I am familiar with, there are positive data that "everyone" in the know (i.e., your drinking buddies at a Gordon Conference, for example) is aware are not valid, but the negative results were never accepted for publication even as editors acknowledged their "truth". Sure, in the end it might not matter, but tell that to the poor sap wasting her/his breath. Of course there are negative data published merely to pad CVs (some positive, as well), but I don't think this issue is as clear-cut as, well, a binary positive/negative response.
Vincent Ferrera

April 22, 2008

It might be useful to distinguish different classes of negative result:

1. Failure to support any particular hypothesis. You went on a fishing expedition and came up empty. In this case, I agree, not publication worthy.

2. Failure to support a particular hypothesis. Suppose someone has a Really Bad Idea, does a flawed experiment to support that idea, and manages to get it published. Then 100 people do experiments that fail to support the idea. Seems like you might want to publish at least one of the failures. You know, just to avoid "media bias."

3. Falsification of a hypothesis. A clear falsification should be publishable regardless of whether the results do or do not show an effect.

In general, I think the "prediction error" model is useful. In one case, you might predict an effect and fail to find it. In another, the null hypothesis is no effect, and an effect is found. Both cases are potentially equally informative.

In basketball, slam dunks and blocked shots are equally noteworthy.
Steve Chervitz

April 28, 2008

A take-home for me from this discussion is that both positive and negative data have value in science and in the publication record.

Negative data are most useful when there are alternative, competing hypotheses and the data provide a rationale for supporting one hypothesis over the alternatives.

Negative data can also provide a valuable check on hypotheses that carry substantial weight due to their popularity (for example, the amyloid hypothesis in Alzheimer's disease).

If we have obtained results through careful experimentation that refute some established hypothesis, as Dr. Wiley apparently did in his thrombin receptor studies, then it behooves us to provide an alternative, testable hypothesis that is consistent with our findings (and perhaps some preliminary positive evidence supporting it). Shame on us for not doing so! In this way, negative results can positively contribute to the advancement of science.
Jay Whelan

December 26, 2008

The last portion of this statement has no scientific basis: "Although publishing a negative result could potentially save other scientists from repeating an unproductive line of investigation, the likelihood is exceedingly small." I would suspect the likelihood is greater than you would expect.

We spend an enormous amount of time, money and energy investigating what is not verifiable. An example is the search for the mammalian delta-4 desaturase, a putative enzyme once believed to be part of the metabolic chain for polyunsaturated fats. What a waste of time and money on that folly. Following the discovery of the alternative pathway, it still took a decade before it was generally accepted. Sometimes it is just as important to know where not to go.

Papers should be published based on scientific integrity, whatever the results.
Gary An

December 26, 2008

I respectfully disagree with Dr. Wiley with respect to his general statement on the lack of a need to publish negative data, and I agree with the prior post by Vincent Ferrera regarding the need to classify "negative results." In particular, I would point back to Popper's emphasis, in his writings on the philosophy of science, on the importance of falsification in the scientific process. For this concept to be appropriately utilized (and used to refute the statements in Dr. Wiley's article), it is necessary to distinguish between "undirected" negative results and those negative results specifically related to an existing hypothesis (as in Dr. Wiley's own thrombin example). The requirement that a competing explanation necessarily accompany a nullifying result is to fall back into the limitations of Logical Positivism. While it is possible, even likely, that the inability to reproduce results may be due to technical issues, without these results seeing the light of day (and the cold eye of scrutiny) the actual case may not be discovered. Reproducibility leads to the scientific consensus necessary for a hypothesis to survive (at least until it is next challenged); failure to note contradictory evidence is to stifle the basis for informed consensus. To say that only positive tests of a hypothesis advance science is to miss the essential property of the scientific process: skepticism. To quote Mark Twain:

"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so."
Sarah Baran

December 26, 2008

One of the biggest frustrations in grad school for me was hearing a prominent scientist in my field say that they had never found results when applying a certain procedure. In fact, they had tried several times to no avail. This was in response to me voicing some frustration at negative data I had found using the same procedure. Later studies using a slightly modified procedure proved successful, but they would have been published years earlier if the null results had been made public by the prominent researcher. We spend so much time and money on our research that not putting out good negative results pushes the science back years, if not longer. One simple solution comes to mind: if you don't want to waste your time with negative data, then don't read more than the abstract.
LEWIS Sheffield

December 26, 2008

I appreciate the desire to avoid flooding the literature with studies that do not advance science. Most of us have explored ideas that turned out to be a waste of time and should not be published. But when a study is well designed and fails to support an emerging (or even established) idea, it needs to be published. Otherwise we have the experience that sometimes occurs: "everyone who knows this area knows that _ is wrong," where _ is a previously published finding that has never been replicated, the studies failing to replicate it have never been published, and there is no way to determine from the literature that the result is likely incorrect.

Instead, we need some way to publish these failures so that they will appear in a literature search. The papers often need not be long, but they should be detailed enough to show the work was well done.

The final judgement should be whether it advances science. Sometimes, knowing that something isn't true is an advancement.

December 27, 2008

Instead of taking sides in an argument that clearly has two sides to it (like everything else), I'd like to relate an experience that I've had myself. Then I shall rest my case, without stating which side I'm on.

While I was still doing my Ph.D. in the nineteen-nineties, it was fashionable to place two fluorescent labels on a molecule (or a molecular complex) and measure intramolecular distances using fluorescence resonance energy transfer (FRET). FRET can be estimated either by measuring changes in the intensity of fluorescence of a donor fluor under conditions of steady-state illumination in a standard spectrofluorimeter, or by making measurements of changes in the donor's lifetime of fluorescence. Since the instrumentation for lifetime measurements used to cost an arm and a leg at that time, tens of papers were appearing in the literature every year - in all the best journals in the field of protein biochemistry - using steady-state measurements.

A technical assistant in the lab (B. Raman) and I were sure that the steady-state measurements were all wrong, because it seemed to us intuitively that they could (and surely would) be contaminated by the emission and reabsorption of photons (something that we later learnt was called trivial energy transfer) - something that everyone had forgotten about! We discussed it with the boss, who said, 'Put your money where your mouth is, and find a way to prove what you are saying'!

Well, to cut a long story short, I thought of a way, and Raman and I did the experiment together, with great excitement. And we proved that you can measure a bogus intramolecular distance using a system in which the donor and acceptor fluors are separated by millimeters (rather than by tens of Angstroms). The boss was even more excited than us. Being an ultra-ethical boss and Ph.D. supervisor (and one to whom I owe more than I can ever write down), he refused to co-author the paper with us, saying 'Look guys, I didn't think of it'. However, from that point on he helped us to refine our thinking, helped us to edit a manuscript that we put together, and encouraged Raman and me to publish it ourselves.

It was a negative result; one that seemed to make nonsense of the results of hundreds of papers from the best labs, reporting intramolecular distances in all sorts of proteins that had not yet been subjected to X-ray crystallographic structure determination. Journal after journal turned our paper down!

One journal (I shall not name it, but I can probably still find the original correspondence) even published two papers measuring intramolecular distances by steady-state methods during the same month in which they rejected our paper, saying 'this is not biochemistry'. Another journal of even greater repute rejected it because one of the referees alleged that we had 'rediscovered' the 'inner filter' effect. Anyone who understands fluorescence will know that only someone who knows spectroscopic jargon (and not spectroscopy per se) would make such a comment.

Anyway, we got a terrific review from the Journal of Physical Chemistry, although that journal also didn't accept the paper. A referee pointed out that he and many people of his age had suspected that what we were saying could be true, ever since Stryer and Haugland had published their paper on using FRET as a spectroscopic ruler of intramolecular distances, but that no one had ever found a way of demonstrating it. He lauded our experimental approach and offered his best wishes for publishing a negative result. He even gave us a couple of useful references to cite!

We finally turned the paper inside out - turning it from a 'negative' paper into a 'positive' paper - and published it in Analytical Biochemistry as a 'methods' paper in a section called 'notes and tips'. Having pointed out first how trivial energy transfer could contaminate FRET measurements, and secondly how such contamination could not possibly be measured or subtracted, we proceeded to point out how our approach could be used as a control, to check - in each case, before beginning a FRET experiment using steady-state fluorescence - whether the method was safe to use!

So, it was really simple. Instead of saying 'this is a terrible method that is subject to unsubtractable contributions from an effect you've neglected; don't use it', we simply said 'this contaminating effect can affect your data and measurements; here's how you can check whether it's safe to proceed'. Of course, we also strongly advocated the use of lifetime measurements by pointing out exactly why such measurements would not be subject to the same contaminations. The paper went through like a shot! Of course, it helped that by this time I was on a short postdoc in Cambridge, UK, before returning to take up an independent job in India, and the Cambridge address (present address) must have helped.

The reference for our paper in Anal. Biochem. is: Raman and Guptasarma (1995). Use of tandem cuvettes to determine whether radiative (trivial) energy transfer can contaminate steady-state measurements of fluorescence resonance energy transfer. Analytical Biochemistry 230, 187-191.

Our paper has never been cited since it appeared. But I've checked the literature, off and on. There has been a precipitous drop in the number of papers using steady-state fluorescence, or reporting intramolecular distances, since 1995!

Of course, with lifetime measurements having become much cheaper now, one hopes that the approach hasn't been completely killed as a technique, and that there will be a revival. However, before FRET can become useful as a spectroscopic ruler, there are many unknowables in the equations that need to be figured out (e.g., the orientation parameters, and the refractive index of the medium separating the donor and acceptor - for all of which people have been using some sort of fudge-factor approximations), and so only experts can ever use FRET reliably to measure intramolecular distances. I must confess that I've stayed away from the temptation of becoming such an expert :-).

So, is there any merit in publishing negative results? I don't know. You go figure...!
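
For readers outside fluorescence work: the "spectroscopic ruler" rests on the Förster relation, in which transfer efficiency falls off with the sixth power of the donor-acceptor distance. A minimal sketch (the 50 Å Förster radius and the example efficiencies are typical, assumed values, not figures from the comment) shows how a small contamination of the measured efficiency shifts the inferred distance:

```python
# Förster relation behind the "spectroscopic ruler": transfer efficiency E
# falls off with the sixth power of donor-acceptor distance r:
#   E = 1 / (1 + (r / R0)**6)
R0 = 50.0  # Förster radius in angstroms (typical value, assumed for illustration)

def efficiency(r: float) -> float:
    """Transfer efficiency at donor-acceptor separation r (angstroms)."""
    return 1.0 / (1.0 + (r / R0) ** 6)

def apparent_distance(E: float) -> float:
    """Distance implied by a measured efficiency E (inverting the relation)."""
    return R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)

# If trivial (radiative) transfer inflates an apparent efficiency of 0.10
# to 0.15, the inferred distance shrinks by several angstroms:
print(f"{apparent_distance(0.10):.1f} A")  # ~72 A
print(f"{apparent_distance(0.15):.1f} A")  # ~67 A
```

Because the relation is so steep, even modest contamination of a steady-state efficiency measurement, of the kind the comment describes, translates directly into a wrong distance, which is why the control the authors proposed mattered.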
Peter HIbbard

December 27, 2008

While I agree that negative opinions should not be published, I cannot support the same conclusion about negative findings, for several reasons. All properly conducted research, if it passes peer review, contributes to the general fund of knowledge. I disagree with several writers who say that a hypothesis is "just a guess." Hopefully, it is a well thought out potential solution, one that has evaluated and rejected unproductive lines of research, and thus has passed through a filter of negative results. I agree that a hypothesis cannot be proven, but with enough positive results it may be generally accepted as true, in which case it becomes a theory. A single negative result may or may not be enough to block this process, but it should be explored to advance science.

With the advantages of computer searches, the task of filtering useful articles is not as tedious as it once was. Perhaps the standard should be different, and negative results should be shorter. The fact remains that "what is not" tells us much about "what is."

There is a secondary reason to publish negative results. High school students and undergraduates are told that there is no such thing as a failure in science. Even negative results teach us something about what to investigate next. Should we reject negative results as inconsequential, the next generation of scientists may take a totally different approach to research, with negatives being suppressed as unproductive. With such an approach already adopted by some companies that are driven by the profit motive, I would hate to see this become the standard in unbiased research. If students are to have role models to emulate, then published papers should give all results that are supported by good research: the good, the bad and the ugly.

Bob Roehr

Posts: 3

December 27, 2008

Wiley laments that nobody writes up or reads negative data. Perhaps what is needed is a new, streamlined format for reporting negative data that is less of a burden on author and reader alike.

I'm also troubled by the proprietary meme that is behind much of this thread. Most research is funded at least in part with public money, so it is not solely the "property" of the researcher. The public partner [including other researchers, who also are taxpayers] needs to be informed of the negative findings it is paying for -- so publish them. In fact, I'd make it a requirement of all federal grants, in much the same way that journal publications must now be open access after an appropriate time.

Ruth Rosin

Posts: 117

December 27, 2008

Wiley's credo is just a counter-scientific bad joke that should never have been posted on The Scientist in the first place!

null null

Posts: 16

December 28, 2008

By falsifying existing theories, negative results are the primary producers of scientific revolutions. The best-known example is the Michelson-Morley experiment, showing that the speed of light is invariant no matter how fast you go. Eventually, this led to relativity and much of modern physics.

anonymous poster

Posts: 2

December 28, 2008

"Wiley's credo is just a counter-scientific bad joke that should never have been posted on The Scientist in the first place!"

All sound data are important, and publishing negative information helps others not to waste resources and energy going down a proven blind alley.

anonymous poster

Posts: 1

December 28, 2008

His views and arguments are always narrow, biased, and unprofessional. It is time for him to go.

anonymous poster

Posts: 125

December 28, 2008

I have read several articles by Wiley and must conclude that it is precisely because of narrow-minded scientists like him that progress in biological science is so much slower than it should be. Basically, his scientific philosophy seems to be based on purely hypothesis-driven research that must generate data supporting a particular hypothesis or dogma in order to be publishable, even if the hypothesis itself is shaky or false. This is very unhealthy for the central goal of discovering scientific facts with an open mind. He seems to forget that some very crucial scientific discoveries were made from data that did not support the prevailing hypotheses of the day, just as others were made from data that did. I point to Ernest Rutherford and his discovery of the atomic nucleus as a famous example of a negative result that turned out to be one of the most fundamental scientific facts uncovered about our Universe. On the flip side, we have many examples of so-called positive data that ultimately supported a hypothesis unconvincingly or, indeed, not at all. For that, I only need to point to the results of drug studies based on positive data supporting certain hypotheses for treating malignant tumors that proved to be ineffective - after many years and great costs in money and lives.

It is quite disturbing to read such a biased and irresponsible article that goes against the fundamental philosophy of science from a scientist of Steven Wiley's stature, indeed.

anoop kumar

Posts: 1

December 29, 2008

Findings in research will not always be valuable. Does that mean scientists are not doing well, or that there is some problem with their performance? I do not agree with the author's comments on negative-mindedness. He should be open enough to think like a person who can collect useful things from garbage - meaning, from negative results. If I publish a negative result, is it not my finding? It is. If someone has a problem with that, let them prove the positive; and if they agree with me, they can stop wasting time on the same things I have already published. It is not only going to help me but also others who are thinking of working on the problem. It will save money and time.

anonymous poster

Posts: 2

December 29, 2008

Shooting down a bad scientific paradigm is the only way to clear a path towards a new one. The author erroneously overstates his case that negative results do not advance scientific knowledge. On the contrary, it is a process of discovery and a renewal of purpose to use new tools and technologies to modify, attenuate, or destroy hypotheses built on incomplete data (as most are). We should welcome studies that upend our current thinking as opportunities to move in new directions.

anonymous poster

Posts: 28

December 29, 2008

The essay reminds me of my experiences with R01 applications. Currently, some R01 reviewers essentially share the author's bias. When they criticize a research approach, they always say that the experiments will not support the hypothesis if the results turn out to be negative. The problem is that they ignore the fact that negative results are important for testing a hypothesis, too, and can amend an existing concept or lead research in a new direction.

I searched the NIH-supported R01s and found that it is safe for a lab to get one more grant by just repeating a hypothesis in different disease models. For example, the function of NF-kB in immune cells can be investigated in arthritis, SLE, diabetes, etc. What is gained when one lab does the same experiments just to win more grants? Does this advance science even if one gets positive results? It would be better for such repeats to be done by other labs. Program directors should take responsibility for limiting such funding, regardless of how high the score is.

Another problem is that some prestigious laboratories persistently generate unconvincing or false-positive data because of technical problems. For example, flow cytometry analysis of cell surface markers always generates unreproducible data if the cells were fixed with formalin. Moreover, such labs usually do not state in their publications which fixative they used.

Thus, publishing negative data is necessary to counter false-positive data.

Uma Shaanker

Posts: 3

December 29, 2008

I should admit that the article makes compelling reading, but it is not necessarily right.

I am reminded of a Sherlock Holmes incident. Dr. Watson is eager to leave the scene of a crime, having obtained no tangible clue from the occupant of the house. Just when Sherlock Holmes and Watson are beginning to return, Holmes runs back to the house and asks the owner, "Did your dog bark in the night?" The owner replies, "No."

In his inimitable manner, Sherlock Holmes beckons Watson to come over and exclaims, "I have the answer. The dog did not bark. And so the crime must have been committed by someone within the house."

Watson is baffled that Sherlock Holmes could solve a crime when there were no signals (the dog's barking).

This story has a message.

Signals can be everywhere, in negative (no barking) or positive (barking) results, data, or observations. One cannot overlook them. I think that would be the fairer way to educate researchers, rather than taking the extreme step the article takes.

Barry Williams

Posts: 2

December 30, 2008

Because he uses both 'biological' and 'science' in his article, I am not sure whether Dr. Wiley is proposing that his rule against publishing negative findings applies only to the biological sciences or to science as a whole. If the latter, I would point him to perhaps the most famous negative result in science: the 1887 Michelson-Morley experiment, which demonstrated that the speed of light is a constant independent of the relative speed of the observer, thus disproving the existence of an 'ether' through which light was until then believed to propagate.

Although it would be a number of years before this experiment led to a new theory - relativity - and to experiments with 'positive' results that would validate the theory more broadly, consider how much longer the scientific community might have labored in vain without the publication of the earlier 'negative' result.

Within the biological sciences, I just read last weekend of the undertaking of yet another major research project to study the possible link between cell phones and cancer. Because any connection is probably extremely weak, prior research has produced contradictory findings, and the link has been neither proven nor disproven. In such a case, a study both long enough and large enough to settle the question will be significant whether it proves or disproves the connection between cell phones and cancer.

null null

Posts: 16

December 30, 2008

Thomas Kuhn's seminal work "The Structure of Scientific Revolutions" notes that textbooks get rewritten because older theories become incapable of explaining new data. Traditionally, that data is negative data.

It is true that negative data play little role in "normal science." However, this is not how science generally progresses.

http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions

Vladimir Matveev

Posts: 2

December 31, 2008

The battle of theories in cell physiology:
http://ru.youtube.com/watch?v=cuvoGbD5h3g
