Highlights from a webinar held by The Scientist to celebrate 30 years of PCR: the technique's invention, quantitative real-time PCR, and digital PCR
December 1, 2013
In the 30 years since Kary Mullis imagined PCR while cruising a California highway, the technology has impacted just about every area of life science research. No longer are researchers required to laboriously clone, identify, and isolate pieces of DNA before studying them—they can simply amplify them instead, using paired oligonucleotides and a hardy polymerase to pluck the needle from the metaphorical haystack. Shattered, too, are researchers’ assumptions regarding how much DNA is enough for analysis. When concentration doubles every PCR cycle, even one copy of DNA is sufficient.
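That doubling arithmetic is easy to make concrete. The sketch below is purely illustrative (the function name and the `efficiency` parameter are my own, not part of any published protocol); it shows how a single template copy grows past a billion in 30 ideal cycles:

```python
# Exponential amplification: each PCR cycle ideally doubles the number
# of target copies, so even one starting molecule quickly becomes
# an easily detectable quantity of DNA.

def copies_after(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Copies after a given number of PCR cycles.

    `efficiency` is the fraction of templates duplicated per cycle
    (1.0 = perfect doubling).
    """
    return initial_copies * (1.0 + efficiency) ** cycles

# One starting copy, 30 cycles of perfect doubling:
print(f"{copies_after(1, 30):.0f}")  # 1073741824, i.e. 2**30
```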
PCR has transformed disciplines from forensics to food safety, clinical diagnostics to genomics. It has, in fact, become ubiquitous. “Polymerase chain reaction is now a word in Merriam-Webster’s Collegiate Dictionary,” Mullis says. But the technology, too, has transformed, and innovation after innovation carries the methodology closer to use in rapid, point-of-care clinical diagnostics. In this webinar, conducted by The Scientist to celebrate three decades of PCR, Mullis, Stephen Bustin, and Reginald Beer reflect on the past, present, and future of the technique and its derivatives: quantitative real-time PCR and digital PCR.
INSPIRATION ON THE OPEN ROAD
Mullis kicked off the webinar by recounting his story (also related on his website, www.karymullis.com/pcr.shtml) of the invention of PCR, for which he won the 1993 Nobel Prize in Chemistry.
Mullis’s inspiration, he says, came from a desire to save jobs and sell oligonucleotides. He was a DNA chemist at Cetus Corporation, and his team of seven used to fashion oligos by hand. But new automated synthesizers could produce in hours what took his team weeks to make. So Mullis was looking for a way to help Cetus sell more oligos.
In the spring of 1983, Mullis relates, he was driving his silver Honda “up a long and winding road” towards his weekend cabin in Mendocino County, his girlfriend asleep in the passenger seat. “The California buckeyes poked heavy blossoms out onto Highway 128,” he recalls poetically. “It seemed to be the night of the buckeyes, but something else was stirring,” he says.
As he drove, Mullis mulled over a clinical diagnostic assay he was developing, which would be based on Fred Sanger’s dideoxynucleotide-based sequencing strategy. The goal was to use DNA polymerase and paired oligonucleotide primers to read the single nucleotide in human DNA lying between the two primers—effectively, an early test for genetic variants.
Mullis planned to run this assay on human samples. The problem, he imagined, was that those samples might contain their own deoxynucleotide triphosphates, which would ruin the assay by extending the primers more than one base before the desired dideoxy terminator could be added. So, he was thinking of ways to get rid of those potential confounders.
The strategy he was considering was to mix the two primers with denatured genomic DNA, allow them to anneal, and use the Klenow fragment of DNA polymerase to eat up any contaminating nucleotides. Then, he would simply reheat the same sample to remove the used primers from the template, cool it so that new primers could anneal, add fresh enzyme, and go.
“I had it. PCR. But I didn’t see it yet,” he says.
As he considered the consequences of the reaction, Mullis asked himself what would happen if the oligos in the first step were extended a long way. “Eureka!” he thought, bringing his car to a standstill on the shoulder.
“Their extension products would be primed by the other oligos, and these would also now be extended. I would have doubled the signal. And I could do that over and over.” PCR was born.
By the following morning, Mullis says, “there were diagrams of PCR reactions on every surface that would take pencil or crayon in my cabin. I woke up in a new world.”
Interestingly, both Science and Nature passed on the seminal PCR manuscript; it was finally published in Methods in Enzymology (155:335-50, 1987).
TOWARDS EXPERIMENTAL TRANSPARENCY AND REPRODUCIBILITY
This year also marks the 21st anniversary of real-time quantitative PCR, says Stephen Bustin, who edited and coauthored the 2004 “bible,” A–Z of Quantitative PCR.
Quantitative PCR (qPCR) uses fluorescence to make an otherwise qualitative technique quantitative. It can be used to quantify DNA and even proteins, Bustin says, but its “most prevalent” application is RNA quantitation—also, he warns, its “least reliable.”
“There are six parameters that define a reverse-transcription PCR quantification, and I would like to begin by demonstrating how these are being disregarded,” Bustin says.
One of these is the cDNA synthesis strategy. For instance, Bustin shows, the choice of reverse transcriptase used to convert RNA into cDNA can have a tremendous impact on results, with a survey of 13 enzymes producing values that differed by more than 40-fold.
Another key parameter is data analysis, where, for instance, the reference gene used to normalize expression values is a crucial consideration. “Unfortunately, even today most papers continue to use single, unvalidated reference genes for data normalization,” he says. The other parameters to which attention must be paid are quality assessment, primer specificity, PCR efficiency, and reporting.
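Two of those parameters, PCR efficiency and reference-gene normalization, come together in efficiency-corrected relative quantification. As an illustration, here is a minimal sketch of one widely used approach, the Pfaffl ratio model; it is not something the webinar itself prescribes, and the function and variable names are my own:

```python
# Efficiency-corrected relative quantification (Pfaffl model):
#   ratio = E_target**dCq_target / E_ref**dCq_ref
# where dCq = Cq(control) - Cq(treated) and E is the amplification
# factor per cycle (2.0 = perfect doubling).

def relative_expression(e_target: float, cq_target_ctrl: float, cq_target_trt: float,
                        e_ref: float, cq_ref_ctrl: float, cq_ref_trt: float) -> float:
    target = e_target ** (cq_target_ctrl - cq_target_trt)  # fold change of target gene
    ref = e_ref ** (cq_ref_ctrl - cq_ref_trt)              # fold change of reference gene
    return target / ref

# Target crosses threshold 3 cycles earlier in treated samples;
# reference gene is unchanged between conditions:
print(round(relative_expression(2.0, 25.0, 22.0, 2.0, 20.0, 20.0), 1))  # 8.0
```

The point Bustin makes applies directly here: if the true efficiency is below 2.0 or the reference gene is not actually stable, the computed ratio can be badly wrong, which is why validating both is part of good assay design.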
The significance of making qPCR experiments transparent and reproducible goes beyond good science, Bustin says. “qPCR data can affect people’s lives.” Bustin served as an expert witness during the “autism omnibus trial,” in which plaintiffs asserted that the MMR vaccine for measles, mumps, and rubella led to autism. They relied, in part, on qPCR data. “I was able to show that the authors’ conclusions were based on extremely poorly executed qPCR experiments,” Bustin says.
His involvement with that case “was the final inspiration for attempting to establish a set of guidelines to help researchers publish more reliable qPCR results.” The results of that effort, the MIQE guidelines (Minimum Information for Publication of Quantitative Real-Time PCR Experiments), were published in 2009 (Clin Chem, 55:611-22, 2009). Bustin describes the guidelines as “a set of parameters that would help with transparency of reporting and could serve as a blueprint for good assay design.” They include detailed checklists describing key parameters, including instrument choice. But the goal, he says, is not to dictate experimental design. Instead, the MIQE guidelines are “commonsense suggestions and much more akin to a friendly chat, trying to explain why it is important to do certain things and that people understand for themselves why it is in all of our interests that people report exactly what they do and do things in a consistent and reproducible way.”
Looking to the future, Bustin says, “I think we’re heading towards faster [experiments] and smaller volumes,” a development with significant implications for clinical diagnostics. He relates that his lab has been testing an instrument that can run reaction cycles in just eight seconds. “We can do our PCRs in less than five minutes now,” he says.
ON DIGITAL PCR
Digital PCR (dPCR) is a highly quantitative approach that is fundamentally different from qPCR. Instead of relying on fluorescence thresholds and standard curves to convert signal into quantity, dPCR partitions a sample into millions of tiny reaction wells or droplets, each containing, on average, zero or one target molecule. By literally counting positive reactions and applying Poisson statistics, researchers can obtain highly accurate measures of target concentration while negating the impact of background sequences that might otherwise confound results.
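That counting step can be sketched numerically. The numbers below are purely illustrative: the 0.85 nL droplet volume is typical of droplet-based systems but is an assumption of mine, not a figure from the webinar.

```python
import math

# Digital PCR quantification: with targets randomly distributed across
# partitions, the fraction of negative partitions follows Poisson
# statistics, p_neg = exp(-lam), so the mean copies per partition is
#   lam = -ln(1 - fraction_positive)

def copies_per_microliter(n_positive: int, n_total: int, partition_volume_ul: float) -> float:
    frac_pos = n_positive / n_total
    lam = -math.log(1.0 - frac_pos)   # mean target copies per partition
    return lam / partition_volume_ul  # concentration in copies/uL

# 4,000 of 20,000 droplets positive, 0.85 nL (8.5e-4 uL) per droplet:
conc = copies_per_microliter(4000, 20000, 8.5e-4)
print(f"{conc:.0f} copies/uL")  # 263 copies/uL
```

Note that the Poisson correction matters: naively dividing 4,000 positives by the total sampled volume would undercount, because some positive droplets contain more than one target molecule.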
“Digital PCR is a method to provide tighter accuracy in addition to or in excess of what standard qPCR provides,” Beer says.
The method has its roots in limiting-dilution PCR, and relies on the ability to generate millions of independent reactions, often using microfluidics. It’s theoretically possible, Beer says, to find a single copy of HIV in 100 μL of a patient’s blood sample.
Beer cites four recent studies that highlight the power of dPCR, including identification of cell-free DNA from dying transplanted organ cells (PCR Insider, Aug. 22, 2013); mitochondrial-DNA detection in cerebrospinal fluid as a potential marker of Alzheimer’s disease (Ann Neurol, 2013, doi:10.1002/ana.23955); latent HIV monitoring (PLOS One, 2013, 8:e55943); and circulating-DNA monitoring for cancer screening (Clin Chem, 2013, clinchem.2013.206359).
While dPCR is a powerful technique, like anything else, it must be done correctly, hence the publication in 2013 of digital MIQE guidelines (Clin Chem, 59:892-902, 2013).
“This is still PCR,” Beer says. “We still need standard reference materials, and how you do extraction, sample preparation, how you operate your instrument, [and] what instrument you use—it still affects your results.”