A recent roundtable discussion identifies the challenges the scientific community faces in addressing the lack of reproducible results in the literature.
December 18, 2012
WIKIMEDIA, DOROTHEUM
The gold standard for science is reproducibility. Ideally, research results are worthy of attention, publication, and citation only if independent researchers can replicate them using a particular study's methods and materials. But for much of the scientific literature, results aren't reproducible at all. The causes of and remedies for this state of affairs were the topic of a recent panel discussion titled "Sense and Reproducibility," held at the annual meeting of the American Society for Cell Biology in San Francisco, California.
Glenn Begley, former head of research at Amgen and roundtable panel member, spoke of his March revelation that the biotech company's scientists were unable to replicate the results of 47 out of 53 papers that were seminal to launching drug-discovery programs. "This is a systemic problem built on current incentives," he said, according to Nature.
The panel offered suggestions, such as raising journals’ publication standards, establishing the use of electronic lab notebooks at research facilities, and helping laboratory supervisors provide improved supervision by reducing the size of labs. See a discussion of other initiatives launched this year to help solve the problem of irreproducible science—as well as examples of outright fraud in the scientific community—in yesterday’s bad behavior recap of 2012.
December 18, 2012
One of the best ethical quality controls is transparency.
If an individual has no internalized ethical controls (such as -- shudder -- any moral compunctions, nature forbid) then the next best thing is exposure.
A psychiatrist friend of mine in the 1960s did research on an idea then called "environmental therapy." Paroled convicts who volunteered were provided a "controlled environment," to determine whether they could, or would, internalize certain values and self-control that would both benefit the society in which they would live and benefit themselves by avoiding the inevitable unhappy consequences of certain kinds of behavior (chronic lying, stealing, physical violence, and such).
Although I moved to another city, and never saw the final data on the study, I do recall a number of private conversations with my friend, who confided his frustrations to me. Most of the subjects did very well in an artificially controlled environment, where there were immediate unpleasant consequences for any rule breaking, where positive rewards were immediate for "desirable" acts and words, and where massive doses of positive thinking and uplifting philosophical discussions took place in pleasant surroundings.
Dr. (name withheld) confided to me, in (then) strictest confidence, that he was highly frustrated and disappointed to find that most of the subjects of the study (on whom much follow-up was done) quickly reverted to former styles of thinking and behavior after release from the program.
The greatest improvement in behavior occurred when there was an abundance of attention and awareness of their speech and actions. But when a high level of close observation was relaxed, true attitudes, thinking habits and behavior were, in most cases, resumed.
What lesson might be taken from this? Perhaps Hervey Cleckley said it best in reference to his still poignant studies of psychopathic personalities. It's not what a person KNOWS about ethics that influences his behavior but, rather, how he or she FEELS about it.
The greatest deterrent to cheating by people who WANT to cheat is very close scrutiny. For those who do not WANT to cheat, little scrutiny is required. In scientific research, as in any other field, there are some individuals who will do or say anything, just as long as they feel sure that no one is watching, or that no consequences will follow if they get "caught."
December 18, 2012
I have a feeling that much irreproducibility has to do with the misuse of statistical methods. Oftentimes, researchers "cheat" by running statistical analyses between groups after the addition of every animal, stopping as soon as significance occurs at P<0.05 (the alpha statistic). This often yields very small numbers per group, and everyone knows that if an additional animal were added that went in the "wrong" direction, several more would then have to be added to "correct" the situation in favor of one's hypothesis.
With the high cost of animal studies and the inordinate push by IACUCs toward the 3Rs, one can definitely see the incentive for this "cheat." Unfortunately, this modern system produces a lot of bad science. There needs to be much more emphasis on statistical power (the beta statistic), something many biological scientists know little about and use even less. There needs to be much more institutionalized strength in our statistical testing. Either do that, or it's time to throw out P<0.05 in favor of 0.025 or 0.01.
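The "test after every animal, stop at significance" practice the comment describes is known as optional stopping, and its effect on false-positive rates is easy to demonstrate. Below is a minimal Monte Carlo sketch (not from the comment; the group sizes, trial count, and known-variance z-test are illustrative assumptions): both groups are drawn from the same distribution, so every "significant" result is a false positive, yet repeated peeking pushes the rate well above the nominal 5%.

```python
# Monte Carlo sketch of optional stopping: test after each added animal and
# stop as soon as p < 0.05. Both groups come from the SAME distribution, so
# any "significance" is a false positive. Parameters are illustrative.
import math
import random

def z_test_p(a, b):
    """Two-sided z-test for equal means, assuming known sigma = 1."""
    n1, n2 = len(a), len(b)
    z = (sum(a) / n1 - sum(b) / n2) / math.sqrt(1 / n1 + 1 / n2)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

def false_positive_rate(max_n=30, trials=2000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b = [], []
        for _ in range(max_n):
            a.append(rng.gauss(0, 1))   # group 1: null distribution
            b.append(rng.gauss(0, 1))   # group 2: same null distribution
            if len(a) >= 3 and z_test_p(a, b) < 0.05:
                hits += 1               # "significant" under the null
                break
    return hits / trials

rate = false_positive_rate()
print(rate)  # substantially above the nominal 0.05
```

A single pre-planned test at a fixed n would reject about 5% of the time; peeking at every n strictly inflates that, which is exactly the incentive structure the comment warns about.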
December 18, 2012
As Paul suggests, the lack of statistical rigor is certainly a contributing factor.
As a reviewer, I am consistently appalled by widespread attempts to present 'triplicate measurements of the same sample' rather than independent replications of the study. Of course, I reject those manuscripts. But the practice is so common that I can't help but believe that a significant portion of other reviewers allow bogus publications with an 'n of one'. That alone would cause a reproducibility problem.
It is somewhat ironic that this is being discussed at ASCB, since cell biology remains anchored in the 'representative image' method of investigation and data presentation. Many cell biologists tend to cherry-pick images that fit the hypothesis, which of course leads to false conclusions and poor reproducibility. Seriously, in any plate of cells, I can find some cells that behave in a way that fits any hypothesis I might have. Cell biologists need to think in terms of the population of responses and get over this cherry-picking if they are to reduce their contributions to the irreproducibility problem.
Within the past decade, the ability to quantify many features within an image of a cell has become available, but it remains poorly adopted by the cell biology community. I will admit that some (few) cellular features remain poorly quantifiable, and that the human brain has ways of deciphering complex patterns that cannot be replicated by existing image analysis software. However, most cell biology today is amenable to existing image analysis. Thus, as long as one collects images randomly, image analysis should provide the rigor cell biologists need to reduce their contributions to irreproducible results. I applaud ASCB for at least discussing this at their meeting. It is in the best interests of all to try to nudge the ASCB membership (and indeed the membership of most societies in most fields) toward the long-overdue goal of presenting rigorously assessed data.
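The cherry-picking problem described above can also be illustrated numerically. In this sketch (the "fluorescence" feature, cell count, and distribution parameters are hypothetical assumptions, not from the comment), every cell on a simulated plate is drawn from the same null distribution, yet selecting the few "best-looking" cells manufactures a striking apparent effect that the population-level summary does not show:

```python
# Sketch of why cherry-picking "representative" cells misleads. All cells'
# feature values (hypothetical fluorescence intensities) come from the same
# null distribution; selecting the extremes still yields a seeming "effect".
import random
import statistics

rng = random.Random(42)
plate = [rng.gauss(100, 15) for _ in range(500)]  # 500 cells, no real effect

population_mean = statistics.mean(plate)          # honest population answer
cherry_picked = sorted(plate, reverse=True)[:5]   # the 5 "best-looking" cells
picked_mean = statistics.mean(cherry_picked)      # selection-inflated answer

print(round(population_mean, 1))  # near 100: the true, boring value
print(round(picked_mean, 1))      # far above 100: an effect of selection only
```

Reporting the population distribution (or at least randomly sampled fields), as the comment urges, removes exactly this selection bias.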