Overestimation of effect sizes in meta-analyses is linked with early-career status, small collaborations, or misconduct records, according to a study.
March 21, 2017
Looking for patterns of potential bias in scientific studies, a Stanford University–based research team identified a number of risk factors. Among more than 3,000 meta-analyses, small, highly cited studies were more likely to contain bias, as were studies authored by scientists with a history of misconduct or by small but geographically dispersed research teams. On the other hand, the study, published in PNAS yesterday (March 20), found no association between bias and whether authors worked in countries that offer individual researchers incentives for publication performance, challenging the idea that a “publish or perish” environment drives bias.
“To the best of my knowledge, all the evidence that we have about pressures to publish comes from surveys, i.e. what scientists say. Now, there is no question that these pressures exist, we all feel them, and it is reasonable to suspect that they might have negative effects on the literature,” coauthor Daniele Fanelli told Retraction Watch. “However, as scientists we should verify empirically whether these concerns are justified or not. And, to the best of my current understanding, as explained above, evidence suggests that they are partially misguided.”
Fanelli and colleagues collected data from more than 3,000 meta-analyses and looked for correlations between studies’ effect sizes and various characteristics, such as authors’ retraction history, citations, career level, and gender. Effect sizes were more likely to be overestimated in studies by scientists who had at least one retraction, in small studies, and in those published in the “gray literature,” such as PhD theses and conference proceedings.
“Our results should reassure scientists that the scientific enterprise is not in jeopardy, that our understanding of bias in science is improving, and that efforts to improve scientific reliability are addressing the right priorities,” the authors wrote in their study.
March 21, 2017
The writer of this article does not seem to know anything about serious studies on retractions. He only comments on the work of others without giving any proof that the meta-analysis carried out by the SU team is not flawed and that the data were not manipulated. How do you know the SU team’s meta-analysis is reliable enough to come up with such an absurd and ridiculous conclusion? Did you verify and analyze their data? It is really worrisome and scary to see unverified opinions, and the continuing misuse and abuse of mathematics and statistics, all too frequently done intentionally to obtain the favorable results the researcher wants to see, spreading false findings.

To say that papers published in the “gray literature,” like proceedings, come from small studies by scientists who had at least one retraction is absurd, biased, and a serious insult to all the contributors of outstanding research papers published in proceedings. In fact, there are serious studies showing that prestigious high-impact journals (and not the “gray literature”) are the ones with the highest paper retraction rates due to plagiarism, data falsification, figure duplication, data manipulation, data fabrication, and the like, and such journals include Nature, Lancet, Cell, Science, EMBO J, NEJM, IAI, J. Immunol., J. Exp. Med., and more.

The writer and the SU-based research team are asked to read, for example, the Proceedings of the Society for Imaging Science and Technology, which every year publishes very high-quality peer-reviewed papers covering about 20 different topics from its annual international conferences. The SU team should retract such an absurd study.