
University of Oklahoma graduate student Richard Wilson spent the early 1980s reading DNA. First he’d set up four Sanger sequencing reactions, each spiked with a radioactively labeled chain-terminating nucleotide corresponding to one of the four natural bases. He’d then load the four reactions into separate wells of a polyacrylamide gel and use electrophoresis to separate the strands into a pattern that reflected their length and, consequently, where the terminating bases had been incorporated.
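That manual workflow has a simple computational core: a fragment’s length tells you where its terminating base sits. Here is a toy Python sketch of the logic (the template sequence and function names are invented for illustration, not drawn from any real pipeline):

# Toy model of four-lane Sanger sequencing: each reaction yields fragments
# that terminate at one particular base, and fragment length reveals where
# that base sits in the template.
template = "GATTACA"  # illustrative sequence

def reaction(template, base):
    """Return the lengths of fragments terminating at `base`."""
    return [i + 1 for i, b in enumerate(template) if b == base]

# Four separate reactions, one per terminating base: the 1980s manual workflow.
lanes = {base: reaction(template, base) for base in "ACGT"}

# "Reading the gel": order every band by fragment length, noting its lane.
bands = sorted((length, base) for base, lengths in lanes.items() for length in lengths)
print("".join(base for _, base in bands))  # prints GATTACA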

“It was all very manual,” recalls Wilson, now director of the McDonnell Genome Institute at Washington University in St. Louis. “We used to get the sequencing gels running, go have dinner and probably a few beers. Then we’d come back to the lab around two in the morning, take the gels...

Ramping up

In 1986, Applied Biosystems (ABI) announced the first automated DNA sequencer. Although based on the Sanger technique, the new machine used fluorescent, not radioactive, labels. With one color for each nucleotide, sequencing a stretch of DNA required just one gel lane instead of four (Nature, 321:674-79, 1986). After electrophoresis, the base sequence could be read from the gel by a computer equipped with a lens and photomultiplier. Later versions of the technology incorporated automatic lane loading, too.
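In the same toy terms, the four-dye scheme reduces base calling to pairing each band’s length with its color, which is exactly the kind of bookkeeping a computer does well (the color palette below is invented for illustration; real dye assignments varied by chemistry):

# With one fluorescent dye per base, a single lane suffices: the detector
# reports (fragment length, dye color) pairs, and sorting by length gives the read.
DYE_TO_BASE = {"yellow": "G", "green": "A", "red": "T", "blue": "C"}  # illustrative

detected = [(3, "red"), (1, "yellow"), (6, "blue"), (2, "green"),
            (5, "green"), (4, "red"), (7, "green")]
read = "".join(DYE_TO_BASE[color] for _, color in sorted(detected))
print(read)  # prints GATTACA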

It wasn’t cheap, costing between $2 and $5 per sequenced base, but reading DNA suddenly became more practical. In 1990, Wilson, along with a team of researchers at Washington University in St. Louis and the Laboratory of Molecular Biology in Cambridge, set out into uncharted territory—sequencing the whole nuclear genome of a multicellular animal, the nematode Caenorhabditis elegans—using ABI machines. After eight years, the 97-megabase project was deemed complete. Meanwhile, also starting in 1990, a much larger international team was tackling an even bigger project: the sequencing of the 3.3 billion nucleotides making up the human genome.

“We thought it would be transformative,” says Kim Worley, a geneticist at Baylor College of Medicine who was involved in the Human Genome Project. “Every lab around the world was spending lots of time analyzing one part of one gene. Giving people all the genes, all at once, so they could just do the biology would be a tremendous benefit.” Ten years and $3 billion later, the Project’s members completed a draft of the human genome.

Working in parallel

As researchers sifted through the data pouring out of these projects, a wave of technologies that would become known as next-generation (next-gen) sequencing was already gathering steam. These technologies used massive parallelization, with the ability to produce “millions of different sequences simultaneously, but with very short reads,” says Leroy Hood, whose Caltech laboratory developed the fluorescence-based chemistry behind ABI’s automated sequencers.

In the first commercially successful next-gen sequencers, released by 454 Life Sciences in 2005, parallelization was achieved by amplifying huge numbers of small, bead-bound DNA fragments simultaneously using an emulsion-based polymerase chain reaction (PCR), then reading the nucleotides with a technique called pyrosequencing (Nature, 437:376-80, 2005). The system could sequence 25 million bases with 99 percent accuracy in a single 4-hour run—a 100-fold increase in throughput—at less than one-sixth the cost of conventional methods.
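The readout behind pyrosequencing has a neat algorithmic core: nucleotides are flowed over the beads in a fixed cycle, and each flow releases a flash of light proportional to the number of bases incorporated, so a homopolymer run shines brighter. A minimal Python sketch of that idea, with the flow order and template invented for illustration:

# Pyrosequencing flowgram logic: signal strength in each flow equals the
# number of matching bases incorporated at the front of the template.
FLOW_ORDER = "TACG"  # one possible flow cycle

def flowgram(template):
    """Simulate per-flow light intensities for a template of A/C/G/T."""
    signals, pos = [], 0
    while pos < len(template):
        for nt in FLOW_ORDER:
            run = 0
            while pos < len(template) and template[pos] == nt:
                run += 1
                pos += 1
            signals.append((nt, run))  # light emitted ~ run length
            if pos == len(template):
                break
    return signals

def decode(signals):
    """Invert a noiseless flowgram back into the original sequence."""
    return "".join(nt * run for nt, run in signals)

sig = flowgram("TTACCG")
print(sig)           # [('T', 2), ('A', 1), ('C', 2), ('G', 1)]
print(decode(sig))   # prints TTACCG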

The following year, Solexa (acquired by biotech giant Illumina in 2007) presented its take on next-gen sequencing, introducing the technology that is most widely used today. Instead of bead-based amplification, Illumina machines employ a technique called bridge amplification to clone fragments of single-stranded DNA immobilized in a flow cell (Nature, 456:53-59, 2008). The sequences themselves are read using fluorescently labeled chain-terminating nucleotides reminiscent of Sanger chemistry, except that the termination is reversed after each imaging cycle so synthesis can continue. Along with their offshoots, these technologies have come to dominate research and clinical labs as the cheap and effective sequencing platforms of choice; the release of Illumina’s HiSeq X Ten system in 2014 brought the cost of sequencing a human genome below the $1,000 mark.
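That cycle-by-cycle chemistry is what makes the parallelism pay off: in each cycle every cluster incorporates exactly one labeled base, so a single image of the flow cell yields the next base of every read at once. A toy Python sketch of the readout, with the channel mapping and data invented for illustration (three clusters stand in for the millions on a real flow cell):

# Each imaging cycle produces one color call per cluster; stacking the
# cycles column-wise reconstructs all of the short reads in parallel.
CHANNEL_TO_BASE = {0: "A", 1: "C", 2: "G", 3: "T"}  # illustrative channel map

cycles = [  # detected channel for each of three clusters, one row per cycle
    [0, 3, 2],
    [1, 3, 2],
    [2, 0, 1],
]
reads = ["".join(CHANNEL_TO_BASE[cycle[i]] for cycle in cycles)
         for i in range(len(cycles[0]))]
print(reads)  # prints ['ACG', 'TTA', 'GGC']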

“Now, my students, some of whom don’t know any sequencing, think nothing of it,” says Harvard University’s George Church, who pioneered one of the first next-gen bead-based methods back in 2005 (Science, 309:1728-32). “If they change one base pair in the human genome, they’ll send it out for sequencing and check they changed that base pair and nothing else. That’s kind of a ridiculous assay by 1980s standards, but it actually makes sense today.”

New frontiers


The sequencing field shows no signs of slowing down. Today, emerging technologies such as single-molecule real-time (SMRT) and nanopore sequencing are beginning to eliminate the need for amplification, with advantages that go beyond just increasing speed: in addition to reducing PCR-derived bias and permitting longer reads, these single-molecule techniques retain DNA-bound molecules so researchers “could read out methylation and footprinting information,” Church notes, presenting the possibility of obtaining genetic and epigenetic reads simultaneously. (See “Sons of Next Gen,” The Scientist, June 2012.)

Such “third-generation sequencing” is already making its debut in biomedical research. Earlier this year, for example, scientists used Oxford Nanopore’s portable MinION device to classify Ebola strains in West Africa with a genome sequencing time of under 60 minutes (Nature, 530:228-32, 2016). The same device is currently being used in Brazil to map the spread of Zika virus across the country in real time, and was used this summer to sequence DNA aboard the International Space Station.

Of course, these nascent technologies are not without problems, says Wilson. “I would say there’s not much that’s really shown itself to be incredibly robust,” he notes. “If you’re going to use those technologies, either in the research or clinical setting, you’ve got to be able to get consistent results from experiment to experiment. I’m not sure we’re quite there yet.”

But according to Hood, now 77 years old and president of the Institute for Systems Biology in Seattle, that transition is just on the horizon, and will reinforce the remarkably swift scientific progress that has characterized the last 30 years of DNA sequencing. “Living through it, you were very impatient, and always wondered when we’d be able to move to the next stage,” he reflects. “But in looking back, all of the things that have happened since ’85, they’re really pretty astounding.”

Read about how technological advances over the last three decades have enabled research in other life science fields in The Scientist’s special 30th anniversary feature.
