Antibodies are among the most commonly used reagents in life-science laboratories, employed in everyday experiments, diagnostics, and clinical tests. Despite their widespread use, however, no standard guidelines define how these invaluable biological tools should be validated prior to use. Poorly characterized antibodies may yield nonspecific results that are difficult to replicate even when the only difference is the antibody’s production lot.
In 2012, a group of Amgen researchers attempted to reproduce the results of 53 “landmark” papers; only 6 had scientific findings that could be replicated. “Even knowing the limitations of preclinical research, this was a shocking result,” the Amgen group, led by C. Glenn Begley, wrote in its commentary (Nature, 483:531-33, 2012). In several instances, the analysis found that failure to reproduce experimental data could be attributed to antibodies that were nonspecific or poorly validated. This lack of reproducibility can also spill over into...
However, recent initiatives aim to rectify this situation at several levels. Begley, David Rimm, and Anita Bandrowski, speakers at The Scientist’s webinar “An Urgent Need for Validating and Characterizing Antibodies,” define what constitutes a good antibody and offer validation methods that individual researchers can apply to their experiments. These thought leaders also offer their perspectives on the need for more-stringent guidelines in publications and for community efforts to standardize validation and reporting of antibody use.
The following contains highlights from the talks and discussion.
Validating Antibodies: An Urgent Need
C. Glenn Begley
Oncology researcher Glenn Begley, who led the 2012 reproducibility study while at Amgen, began the webinar by stressing why data reproducibility is critical to preclinical research—and that researchers and journals must work toward greater accountability.
While the publication of novel discoveries in top-tier journals can drive promotions, grants, and the stature and respect scientists receive within the community, the pressure to publish can also lead to the inclusion of flawed data, according to Begley. Publication bias is “often unspoken and unacknowledged,” he says. “[It] is one of the unappreciated challenges to drug discovery. We get what we incentivize.”
Examining the results of studies that proved difficult to reproduce, Begley described six “red flags for suspect work” in a 2013 Nature comment (Nature, doi:10.1038/497433a). These included factors such as studies not being blinded, cherry-picked results that were not representative of all experiments, lack of controls, and reagents not being validated. Most such studies “typically fail at multiple levels,” says Begley. “The papers that we were unable to reproduce had a number of things in common.”
“The hope is that there might be, in the future, some sort of standardization agency that can give a certain level of validation.” —David Rimm
In their attempts to replicate some of the 53 studies, the Amgen researchers were required to sign confidentiality agreements that prevented them from revealing specific citations or authors’ identities. Citing one such example, Begley says the authors used a specific antibody purchased from Santa Cruz Biotechnology at a 1:100 dilution. However, “the [manufacturer’s] datasheet itself says this antibody is not appropriate for immunohistochemistry, even though the investigators elected to use it for this purpose,” says Begley.
“We have a systemic problem; our system tolerates and even encourages these behaviors,” says Begley. Although the primary responsibility to ensure that published data are accurate and reproducible rests with investigators and their institutions, Begley adds that resolving the problem “requires a multipronged approach” that also includes funding agencies such as the National Institutes of Health (NIH) and initiatives led by the Global Biological Standards Institute and the Reproducibility Initiative.
“Patients expect—and certainly deserve—more,” he says, adding that this webinar “is a step towards trying to address some of these concerns as we all begin to discuss how antibodies should be validated and appropriately used.”
David L. Rimm
Studies that rely on antibody-based techniques would be significantly more reproducible if the antibodies themselves were well validated, according to webinar panelist David Rimm. “Mishaps of antibody validation or omission of validation [can lead to] work that is scientifically incorrect or nonreproducible,” says Rimm.
Antibodies used in companion diagnostic tests—to predict a patient’s response to a particular drug—can also suffer from these errors. In 2001, the College of American Pathologists (CAP) began a survey to measure the success of an antibody-based diagnostic test for the EGFR protein, whose presence was expected to predict a patient’s response to the cancer drug Erbitux (cetuximab). In 2004, across 70 participating labs, only 4 of 10 cases yielded results that were at least 90 percent concordant. In one case from 2005, “half the labs doing the test said it was positive and the other half said it was negative,” says Rimm.
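Concordance in this kind of survey is simply the fraction of labs whose call agrees with the majority call for a given case. A minimal sketch in Python (the function name and lab counts are illustrative, not taken from the CAP survey):

```python
from collections import Counter

def percent_concordance(calls):
    """Fraction of labs agreeing with the majority call for one case."""
    majority_count = Counter(calls).most_common(1)[0][1]
    return majority_count / len(calls)

# Illustrative split like the 2005 case Rimm describes:
# half the labs call the case positive, half negative.
calls = ["positive"] * 35 + ["negative"] * 35
print(percent_concordance(calls))  # 0.5
```

At a 90 percent concordance threshold, an even split like this fails badly; the metric makes the scale of inter-lab disagreement concrete.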
When Rimm and his colleagues analyzed five antibodies used across labs, almost none produced comparable results on tissue microarrays or when used to quantify levels of EGFR expression. The differences could result from antibodies binding to different domains of an antigen, the conditions under which antigens were retrieved, or even small differences in protocols, according to Rimm.
But testing antibodies for four aspects—sensitivity, specificity, reproducibility, and function in formalin-fixed, paraffin-embedded tissues—could go a long way toward resolving these problems. Knockdown experiments and Western blots with cell lines could help validate sensitivity and specificity; cell lines transfected with the antigen could be used to test antibodies against the full span of expression seen in human tissues. “If all of these tests are met then you can have some degree of confidence that the epitope you’re detecting in an immunohistochemistry or immunofluorescence assay is likely to be representative of the actual scientific facts,” he says.
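Rimm’s four-part test reads naturally as a lab checklist. A hypothetical record-keeping sketch (the class and field names are mine, not a published schema):

```python
from dataclasses import dataclass

@dataclass
class AntibodyValidation:
    """Tracks the four validation criteria Rimm describes, per antibody lot."""
    sensitivity: bool = False      # e.g., signal lost after knockdown
    specificity: bool = False      # e.g., single band of expected size on Western blot
    reproducibility: bool = False  # consistent results across runs and lots
    ffpe_function: bool = False    # works in formalin-fixed, paraffin-embedded tissue

    def validated(self) -> bool:
        """All four tests must pass before the antibody is trusted."""
        return all((self.sensitivity, self.specificity,
                    self.reproducibility, self.ffpe_function))

ab = AntibodyValidation(sensitivity=True, specificity=True)
print(ab.validated())  # False -- reproducibility and FFPE function still untested
```

The point of the structure is that a partially validated antibody never reads as validated: every criterion must be explicitly checked off.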
Although commercial suppliers provide some details, antibodies are frequently sold without information on what dilution is appropriate for a particular technique. A different lot of the same antibody from the same vendor can show marked differences in results, according to Rimm. The variations highlight the need for repeated validation, he adds. “Validation may be lab-specific and experiment-specific with respect to controls,” says Rimm. “However, wouldn’t it be nice if we could buy reagents that we knew were validated? The hope is that there might be, in the future, some sort of standardization agency that can give a certain level of validation. You’d still need to do it in your own lab, but you’d have a high likelihood of at least having good starting material.”
STANDARDIZING REAGENT PROVENANCE
A few years ago, the NIH initiated a project to study how easy (or difficult) it would be to track down antibodies mentioned in published studies using an automated system.
As part of the Neuroscience Information Framework, Anita Bandrowski and her colleagues attempted to build a robot search engine to accomplish the task. However, when they manually surveyed a single volume of the Journal of Neuroscience, which contained only eight studies that reported using antibody-based methods, the researchers found more than a hundred unique antibodies cited in methods sections—and 52 of the citations didn’t contain enough information to determine a catalog number for the antibody used. Few were identified with either a clone number or a catalog number (in any case, Bandrowski stresses, clone numbers are neither unique nor consistent). And although a majority of the antibodies were listed with a supplier’s name and location, none had lot numbers associated with them.
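The difficulty is easy to reproduce in miniature: catalog and lot formats vary so much between vendors that even a generous regular-expression scan misses many citations. A hedged sketch (the patterns and example text are illustrative, not drawn from the Journal of Neuroscience survey):

```python
import re

# Loose, illustrative patterns; real vendor catalog formats vary far
# more than this, which is why automated retrieval kept failing.
CATALOG_RE = re.compile(r"\bcat(?:alog)?\.?\s*(?:no\.?|#)?\s*([A-Za-z0-9][A-Za-z0-9-]*)", re.I)
LOT_RE = re.compile(r"\blot\s*(?:no\.?|#)?\s*([A-Za-z0-9][A-Za-z0-9-]*)", re.I)

def extract_identifiers(methods_text):
    """Pull candidate catalog and lot numbers from a methods section."""
    return {"catalog": CATALOG_RE.findall(methods_text),
            "lot": LOT_RE.findall(methods_text)}

text = "anti-EGFR (Santa Cruz Biotechnology, cat. no. sc-03) diluted 1:100"
print(extract_identifiers(text))  # {'catalog': ['sc-03'], 'lot': []}
```

A citation that gives only a vendor name and dilution, as many in the survey did, yields nothing for either pattern to match.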
The tiny sampling proved that retracing the source of antibodies in the published literature was not just difficult, it was impossible in some cases. “So we actually didn’t build the [automated system],” says Bandrowski. “If we can’t do it, neither can a robot.”
Bandrowski likens the problem to a rock in a river: published literature remains stationary even as companies and information technologies move and protocols evolve. She and her colleagues identified several classes of problems with tracking antibodies used in published studies, including vendor transition or the possibility that a cited antibody is no longer manufactured. For example, says Bandrowski, “If Millipore goes out of business tomorrow, will anyone be able to reproduce the findings in the paper [that uses their antibodies]?”
Identification problems are not unique to antibodies. Bandrowski and her colleagues found that antibodies were only identifiable in 45–50 percent of studies; cell line labeling was about as poor. Knockdown reagents and transgenic organisms were among the most readily identified, as they were clearly described in 75–80 percent of studies. “We are doing a poor job, looking at a very basic level, just of reporting the catalog numbers of the methods and reagents that we’re using,” says Bandrowski.
A pilot effort initiated by several researchers, funding institutions, and journals now aims to catalog all antibodies from all vendors, assigning unique identifiers to every single product. Dubbed The Antibody Registry (theantibodyregistry.org), the project consolidates disparate identifiers and requests authors—and publications—to cite these unique IDs in papers.
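Registry identifiers follow a machine-readable pattern (antibody IDs are cited in papers as `RRID:AB_` followed by digits), which is what makes automated tracking feasible where free-text vendor citations failed. A minimal sketch, with an illustrative methods-section citation:

```python
import re

# Antibody Registry IDs are cited in papers as RRID:AB_<digits>.
RRID_RE = re.compile(r"RRID:\s*(AB_\d+)")

def find_antibody_rrids(text):
    """Return every Antibody Registry ID cited in a block of text."""
    return RRID_RE.findall(text)

methods = "Sections were stained with anti-NeuN (Millipore, RRID:AB_2298772) at 1:500."
print(find_antibody_rrids(methods))  # ['AB_2298772']
```

One pattern covers every vendor and survives company mergers or discontinued product lines, since the ID belongs to the registry rather than the manufacturer.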
“Many labs do keep meticulous track of reagents, but when we take off the lab coat and put on the author hat, we don’t think about reagent provenance,” says Bandrowski. “Resources should be identifiable in such a way that they are machine readable, available outside of the paywall, and uniform across publishers and journals. We really need to move forward as a discipline and make this better.”