Non-confirmatory or “negative” results are not worthless.
January 15, 2013
Hypothesis-driven research is at the heart of scientific endeavor, and it is often the positive, confirmatory data that get the most attention and guide further research. But many studies produce non-confirmatory data—observations that refute current ideas and carefully constructed hypotheses. And it can be argued that these “negative data,” far from having little value in science, are actually an integral part of scientific progress that deserve more attention.
At first glance, this may seem a little nonsensical; after all, how can non-confirmatory results help science to progress when they fail to substantiate anything? But in fact, in a philosophical sense, only negative data resulting in rejection of a hypothesis represent real progress; positive data in support of the hypothesis cannot exclude the possibility that it may be rejected by future experiments. As philosopher of science Karl Popper stated in 1963: “Every refutation should be regarded as a great success; not merely a success of the scientist who refuted the theory, but also of the scientist who created the refuted theory and who thus in the first instance suggested, if only indirectly, the refuting experiment.”
On a more practical level, Journal of Negative Results in Biomedicine (JNRBM) was launched in 2002 on the premise that “failure” is as important in science as in other aspects of life, and that scientific progress depends not only on the accomplishments of individuals but requires collaboration, teamwork, and open communication of all results—positive and negative. After all, the scientific community can only learn from negative results if the data are published. With JNRBM, we want to provide a forum for the discussion of these non-confirmatory findings, and to provide scientists with balanced information to advance the improvement of experimental design and clinical decision-making.
Negative data have always been harder to disseminate, yet ostensibly insignificant results can sometimes lead to a paradigm shift. One noteworthy example is that of Albert Michelson and Edward Morley, two 19th-century physicists, who performed a series of experiments to detect the relative motion of matter through the “luminiferous aether”—a theoretical medium thought to carry light waves. Despite the fact that their negative results clearly contradicted the theory of stationary aether, the scientific community initially overlooked them. It was only when they eventually published their findings in the American Journal of Science in 1881 that the prevailing theory was questioned, thereby opening up a line of research that ultimately led to Einstein’s special theory of relativity.
This does not mean that every negative result will turn out to be of groundbreaking significance. In the same way that positive findings can be refuted by additional research, not all negative data will be confirmed by subsequent work. However, it is imperative to be aware of the more balanced perspective that can result from the publication of non-confirmatory findings.
The first and most obvious benefits of publishing negative results are a reduction in the duplication of effort between researchers, leading to the acceleration of scientific progress, and greater transparency and openness. Furthermore, the publication of well-documented failures allows for negative results to be discussed, confirmed, or refuted by others, and in some cases might also reveal fundamental flaws in commonly used methods, drugs, or reagents.
More broadly, publication of negative data might also contribute to a more realistic appreciation of the “messy” nature of science. Scientific endeavors rarely result in perfect discoveries of elements of “truth” about the world. This is largely because they are frequently based on methods with real limitations, imperfect experimental models, and hypotheses based on uncertain premises.
It is perhaps this “messy” aspect of science that contributes to a hesitation within the scientific community to publish negative data. In an ever more competitive environment, it may be that scientific journals prefer to publish studies with clear and specific conclusions. Indeed, Daniele Fanelli of the University of Edinburgh in the United Kingdom suggests that results may be distorted by a "publish or perish" culture in which the progress of scientific careers depends on the frequency and quality of citations. This leads to a situation in which data that support a hypothesis may be perceived in a more positive light and receive more citations than data that only generate more questions and uncertainty.
Despite the effects of this competitive environment, however, a willingness to publish negative or unexpected data is emerging among researchers, suggesting a growing need for initiatives such as JNRBM. Publications that emphasize positive findings and minimize discordant observations are of course useful, but a more balanced presentation of all the data, including negative or failed experiments, would also make a significant contribution to scientific progress.
Gabriella Anderson is the journal development editor for Journal of Negative Results in Biomedicine. Haiko Sprott is head of the Pain Clinic Basel and professor of rheumatology at the University of Zurich, Switzerland, and an editorial board member for Journal of Negative Results in Biomedicine. Bjorn R. Olsen is the Editor-in-Chief of Journal of Negative Results in Biomedicine. He is also a Hersey Professor of Cell Biology at Harvard Medical School, and a professor of developmental biology and Dean for Research at Harvard School of Dental Medicine.
January 17, 2013
If Pharma was REQUIRED to do this, the world would be a better place.
January 17, 2013
Some 30 years ago my friend Bob Safierstein and I were working at Mount Sinai, New York: me in the Department of Physiology and Biophysics; Bob in the Department of Medicine, Division of Nephrology. One day we were discussing the possible molecular mechanisms that trigger the well-known Compensatory Renal Growth (CRG): ablate one kidney and the other will grow to "compensate" for the absence of the first. We considered the possibility that the Antidiuretic Hormone, or Vasopressin (ADH or VP), acts as a growth factor and serves as a signal in this process. We acquired Brattleboro rats, in which ADH is congenitally absent, extirpated one kidney, and quantified DNA, RNA, and proteins. A few days later we sacrificed the rats and proceeded the same way with the remaining kidneys. They demonstrated full CRG. We repeated the same procedure in Wistar (normal) rats and saw CRG identical to that seen in the Brattleboros. We concluded that ADH (VP) was not a growth factor involved in CRG.
Though it was a negative result, we thought it was important to communicate it in the literature, because ADH was emerging at the time as an important growth factor and it was apparently not involved in the growth of the remnant kidney. We thought that many researchers might arrive at the same idea concerning the hormone and attempt what we had already done; publishing our result would save their time and money and point them toward other mechanisms triggering CRG. It did not matter to which journals we sent the paper, Physiology or Medicine, including Surgery: they all rejected it, not on merit or methodology, but on the grounds that the journals "do not publish negative results."
A few years later I left basic research to practice Medicine and lost track of the field of CRG. I still have our manuscript, which I come across when pruning my office of extra papers.
I look into it, marvel at all the thinking and work we put into it and the pretty hand-drawn graphs, think "what a waste" and throw it into the "discard" pile and... rescue it at the last minute to nostalgically put it back in my files. Hopefully, after so many years, somebody repeated the experiments.
January 18, 2013
I agree with this sentiment to have negative results published. There must be thousands of "good idea" experiments performed giving negative results that are repeated because the previous negative results do not get into print.
Let us remember that the best experimental hypotheses are ones that have a 50% chance of NOT being proven. As I remind my neuroscience students, excitation and inhibition in the brain both carry information. Zeros and ones in binary code have equal value. Both positive and negative results in science carry information!
January 25, 2013
I work at a company that hosts drug discovery screening data (both positive and negative SAR data, in Private Vaults and Public spaces; the default is private, but it is trivial to share data publicly). When details change or an experiment goes awry, you can adjust the data on the fly for each run without being forced to redo repetitive tasks each time. You can capture the results and the changes, even the nuanced details, like a 2/3-full final plate that is different from all the others in the run.
When there are outliers or even entire bad plates, they are easily identified. Outliers can be removed to refit curves, but the values are still shown so each researcher can decide whether it is reasonable to remove that outlier (important for IC50 best curve fitting). Experimental and data irregularities are gracefully handled, so colleagues can illuminate the real trends and separate the wheat from the chaff. Since web-based software can be collaborative, even social, all the benefits (security, QC, time saved, trends found) for your own data extend to data from others who wish to share with you. This includes existing direct collaborators and over 100 public datasets, which, even if not always of interest to you, may be the cat’s meow for someone else. You can learn more about the collaborative platform for drug discovery here: https://www.collaborativedrug.com/pages/product_info
And there is a 12-part thought-piece called "Collaboration as the Key to Turning Around the Drug Discovery Business" on these and related topics that folks interested in reading more may find of high interest (or at least the parts of the 12-part series applicable to them):
January 26, 2013
I could not agree more with this article.
Both my husband and I published what I consider important papers in our respective fields that provided very sound "negative" data -- i.e., data that roundly disproved a couple of hypotheses that had until then been fairly well entrenched in our fields. Mine was published in the early 1970's and his about a decade earlier. Publishing these results spared a lot of needless energy and resources in many labs. There is no point pursuing evidence for a hypothesis that someone else has logically and carefully and reproducibly shown to be false.
As pointed out in the article, a hypothesis can, logically, never be "proven" -- data can be consistent with a hypothesis, even compellingly consistent with it, but the only certainty you can have is if someone actually DISPROVES it. Thus, negative data -- sound negative data that disprove a popular working hypothesis in the field -- are EXTREMELY IMPORTANT.
I sometimes think that my "negative" paper was the last such "negative report" ever published. I wish ALL reputable journals would recognize the importance of negative data.
January 31, 2013
It looks as if we finally got it published, Marcos! So many hypotheses in Science persist because of the difficulty in getting negative results in press. In the clinic, this has devastating effects on health care delivery, as Carrie Elsass points out.
March 12, 2013
Mention should also be made of the untold savings in grant funding if we were more privy to the negative results of others.
I recall going to a scientific meeting and finding out, only through personal communication at a poster session, that the authors had tried what I was about to do in the lab and found "negative" results. We modified the next series of experiments based upon my conversation with those colleagues at that meeting and made a nice little discovery thereafter.
Research funding could benefit from the dissemination of such "negative" information: not just through the potential for novel discovery, but simply by not wasting time and money on ground where others have already trodden.